Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge 5.16-rc8 into char-misc-next

We need the fixes in here as well for testing.

Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

+4270 -1921
+8 -2
Documentation/admin-guide/kernel-parameters.txt
@@ -1689 +1689 @@
 			architectures force reset to be always executed
 	i8042.unlock	[HW] Unlock (ignore) the keylock
 	i8042.kbdreset	[HW] Reset device connected to KBD port
+	i8042.probe_defer
+			[HW] Allow deferred probing upon i8042 probe errors
 
 	i810=		[HW,DRM]
 
@@ -2415 +2413 @@
 			Default is 1 (enabled)
 
 	kvm-intel.emulate_invalid_guest_state=
-			[KVM,Intel] Enable emulation of invalid guest states
-			Default is 0 (disabled)
+			[KVM,Intel] Disable emulation of invalid guest state.
+			Ignored if kvm-intel.enable_unrestricted_guest=1, as
+			guest state is never invalid for unrestricted guests.
+			This param doesn't apply to nested guests (L2), as KVM
+			never emulates invalid L2 guest state.
+			Default is 1 (enabled)
 
 	kvm-intel.flexpriority=
 			[KVM,Intel] Disable FlexPriority feature (TPR shadow).
+4 -4
Documentation/devicetree/bindings/i2c/apple,i2c.yaml
@@ -20 +20 @@
 
 properties:
   compatible:
-    enum:
-      - apple,t8103-i2c
-      - apple,i2c
+    items:
+      - const: apple,t8103-i2c
+      - const: apple,i2c
 
   reg:
     maxItems: 1
@@ -51 +51 @@
 examples:
   - |
     i2c@35010000 {
-      compatible = "apple,t8103-i2c";
+      compatible = "apple,t8103-i2c", "apple,i2c";
      reg = <0x35010000 0x4000>;
      interrupt-parent = <&aic>;
      interrupts = <0 627 4>;
+25
Documentation/devicetree/bindings/regulator/samsung,s5m8767.yaml
@@ -51 +51 @@
     description:
       Properties for single BUCK regulator.
 
+    properties:
+      op_mode:
+        $ref: /schemas/types.yaml#/definitions/uint32
+        enum: [0, 1, 2, 3]
+        default: 1
+        description: |
+          Describes the different operating modes of the regulator with power
+          mode change in SOC. The different possible values are:
+            0 - always off mode
+            1 - on in normal mode
+            2 - low power mode
+            3 - suspend mode
+
     required:
       - regulator-name
 
@@ -63 +76 @@
       Properties for single BUCK regulator.
 
     properties:
+      op_mode:
+        $ref: /schemas/types.yaml#/definitions/uint32
+        enum: [0, 1, 2, 3]
+        default: 1
+        description: |
+          Describes the different operating modes of the regulator with power
+          mode change in SOC. The different possible values are:
+            0 - always off mode
+            1 - on in normal mode
+            2 - low power mode
+            3 - suspend mode
+
       s5m8767,pmic-ext-control-gpios:
         maxItems: 1
         description: |
+5 -3
Documentation/i2c/summary.rst
@@ -11 +11 @@
 and so are not advertised as being I2C but come under different names,
 e.g. TWI (Two Wire Interface), IIC.
 
-The official I2C specification is the `"I2C-bus specification and user
-manual" (UM10204) <https://www.nxp.com/docs/en/user-guide/UM10204.pdf>`_
-published by NXP Semiconductors.
+The latest official I2C specification is the `"I2C-bus specification and user
+manual" (UM10204) <https://www.nxp.com/webapp/Download?colCode=UM10204>`_
+published by NXP Semiconductors. However, you need to log-in to the site to
+access the PDF. An older version of the specification (revision 6) is archived
+`here <https://web.archive.org/web/20210813122132/https://www.nxp.com/docs/en/user-guide/UM10204.pdf>`_.
 
 SMBus (System Management Bus) is based on the I2C protocol, and is mostly
 a subset of I2C protocols and signaling. Many I2C devices will work on an
+6 -5
Documentation/networking/bonding.rst
@@ -196 +196 @@
 ad_actor_system
 
 	In an AD system, this specifies the mac-address for the actor in
-	protocol packet exchanges (LACPDUs). The value cannot be NULL or
-	multicast. It is preferred to have the local-admin bit set for this
-	mac but driver does not enforce it. If the value is not given then
-	system defaults to using the masters' mac address as actors' system
-	address.
+	protocol packet exchanges (LACPDUs). The value cannot be a multicast
+	address. If the all-zeroes MAC is specified, bonding will internally
+	use the MAC of the bond itself. It is preferred to have the
+	local-admin bit set for this mac but driver does not enforce it. If
+	the value is not given then system defaults to using the masters'
+	mac address as actors' system address.
 
 	This parameter has effect only in 802.3ad mode and is available through
 	SysFs interface.
+1
Documentation/networking/device_drivers/ethernet/freescale/dpaa2/overview.rst
@@ -183 +183 @@
   IRQ config, enable, reset
 
 DPNI (Datapath Network Interface)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 Contains TX/RX queues, network interface configuration, and RX buffer pool
 configuration mechanisms. The TX/RX queues are in memory and are identified
 by queue number.
+16
Documentation/networking/device_drivers/ethernet/intel/ixgbe.rst
@@ -440 +440 @@
 a virtual function (VF), jumbo frames must first be enabled in the physical
 function (PF). The VF MTU setting cannot be larger than the PF MTU.
 
+NBASE-T Support
+---------------
+The ixgbe driver supports NBASE-T on some devices. However, the advertisement
+of NBASE-T speeds is suppressed by default, to accommodate broken network
+switches which cannot cope with advertised NBASE-T speeds. Use the ethtool
+command to enable advertising NBASE-T speeds on devices which support it::
+
+  ethtool -s eth? advertise 0x1800000001028
+
+On Linux systems with INTERFACES(5), this can be specified as a pre-up command
+in /etc/network/interfaces so that the interface is always brought up with
+NBASE-T support, e.g.::
+
+  iface eth? inet dhcp
+       pre-up ethtool -s eth? advertise 0x1800000001028 || true
+
 Generic Receive Offload, aka GRO
 --------------------------------
 The driver supports the in-kernel software implementation of GRO. GRO has
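For reference, the advertise value in the hunk above is a bitmask of ethtool link-mode bits. A small sketch that lists which bit positions the mask sets; the bit-to-speed names shown are an assumption based on the kernel's ethtool link-mode numbering and should be checked against include/uapi/linux/ethtool.h:

```python
# Decode an ethtool "advertise" bitmask into its set bit positions.
# The name mapping below is an assumption (illustrative only); verify
# against the kernel's ethtool link-mode bit definitions.
LINK_MODE_BITS = {
    3: "100baseT/Full",
    5: "1000baseT/Full",
    12: "10000baseT/Full",
    47: "2500baseT/Full",
    48: "5000baseT/Full",
}

def decode_advertise(mask: int) -> list[int]:
    """Return the list of bit positions set in the advertise mask."""
    return [bit for bit in range(mask.bit_length()) if mask & (1 << bit)]

bits = decode_advertise(0x1800000001028)
print(bits)
print([LINK_MODE_BITS.get(b, "unknown") for b in bits])
```

The low bits cover the legacy 100M/1G/10G modes while the two high bits carry the 2.5G/5G NBASE-T modes, which is why the mask looks sparse.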
+4 -2
Documentation/networking/ip-sysctl.rst
@@ -25 +25 @@
 ip_no_pmtu_disc - INTEGER
 	Disable Path MTU Discovery. If enabled in mode 1 and a
 	fragmentation-required ICMP is received, the PMTU to this
-	destination will be set to min_pmtu (see below). You will need
+	destination will be set to the smallest of the old MTU to
+	this destination and min_pmtu (see below). You will need
 	to raise min_pmtu to the smallest interface MTU on your system
 	manually if you want to avoid locally generated fragments.
 
@@ -49 +50 @@
 	Default: FALSE
 
 min_pmtu - INTEGER
-	default 552 - minimum discovered Path MTU
+	default 552 - minimum Path MTU. Unless this is changed manually,
+	each cached pmtu will never be lower than this setting.
 
 ip_forward_use_pmtu - BOOLEAN
 	By default we don't trust protocol path MTUs while forwarding
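The two sysctl semantics described above can be modeled in a few lines (a sketch of the documented behavior, not kernel code; function names are illustrative):

```python
# Model of the documented PMTU semantics from ip-sysctl.rst.
def pmtu_on_frag_needed(old_mtu: int, min_pmtu: int) -> int:
    """With ip_no_pmtu_disc=1, a fragmentation-required ICMP sets the
    cached PMTU to the smallest of the old MTU and min_pmtu."""
    return min(old_mtu, min_pmtu)

def clamp_cached_pmtu(learned: int, min_pmtu: int) -> int:
    """Cached PMTU values are never allowed below min_pmtu."""
    return max(learned, min_pmtu)

print(pmtu_on_frag_needed(1500, 552))  # -> 552
print(clamp_cached_pmtu(300, 552))     # -> 552
```

This is why the text advises raising min_pmtu to the smallest interface MTU if locally generated fragments are to be avoided.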
+2 -2
Documentation/networking/timestamping.rst
@@ -582 +582 @@
   and hardware timestamping is not possible (SKBTX_IN_PROGRESS not set).
 - As soon as the driver has sent the packet and/or obtained a
   hardware time stamp for it, it passes the time stamp back by
-  calling skb_hwtstamp_tx() with the original skb, the raw
-  hardware time stamp. skb_hwtstamp_tx() clones the original skb and
+  calling skb_tstamp_tx() with the original skb, the raw
+  hardware time stamp. skb_tstamp_tx() clones the original skb and
   adds the timestamps, therefore the original skb has to be freed now.
   If obtaining the hardware time stamp somehow fails, then the driver
   should not fall back to software time stamping. The rationale is that
+2
Documentation/sound/hd-audio/models.rst
@@ -326 +326 @@
     Headset support on USI machines
 dual-codecs
     Lenovo laptops with dual codecs
+alc285-hp-amp-init
+    HP laptops which require speaker amplifier initialization (ALC285)
 
 ALC680
 ======
+9 -9
MAINTAINERS
@@ -3076 +3076 @@
 F:	drivers/phy/qualcomm/phy-ath79-usb.c
 
 ATHEROS ATH GENERIC UTILITIES
-M:	Kalle Valo <kvalo@codeaurora.org>
+M:	Kalle Valo <kvalo@kernel.org>
 L:	linux-wireless@vger.kernel.org
 S:	Supported
 F:	drivers/net/wireless/ath/*
@@ -3091 +3091 @@
 F:	drivers/net/wireless/ath/ath5k/
 
 ATHEROS ATH6KL WIRELESS DRIVER
-M:	Kalle Valo <kvalo@codeaurora.org>
+M:	Kalle Valo <kvalo@kernel.org>
 L:	linux-wireless@vger.kernel.org
 S:	Supported
 W:	https://wireless.wiki.kernel.org/en/users/Drivers/ath6kl
@@ -13267 +13267 @@
 F:	include/uapi/linux/netdevice.h
 
 NETWORKING DRIVERS (WIRELESS)
-M:	Kalle Valo <kvalo@codeaurora.org>
+M:	Kalle Valo <kvalo@kernel.org>
 L:	linux-wireless@vger.kernel.org
 S:	Maintained
 Q:	http://patchwork.kernel.org/project/linux-wireless/list/
@@ -14876 +14876 @@
 M:	Ryder Lee <ryder.lee@mediatek.com>
 M:	Jianjun Wang <jianjun.wang@mediatek.com>
 L:	linux-pci@vger.kernel.org
-L:	linux-mediatek@lists.infradead.org
+L:	linux-mediatek@lists.infradead.org (moderated for non-subscribers)
 S:	Supported
 F:	Documentation/devicetree/bindings/pci/mediatek*
 F:	drivers/pci/controller/*mediatek*
@@ -15735 +15735 @@
 F:	drivers/media/tuners/qt1010*
 
 QUALCOMM ATHEROS ATH10K WIRELESS DRIVER
-M:	Kalle Valo <kvalo@codeaurora.org>
+M:	Kalle Valo <kvalo@kernel.org>
 L:	ath10k@lists.infradead.org
 S:	Supported
 W:	https://wireless.wiki.kernel.org/en/users/Drivers/ath10k
@@ -15743 +15743 @@
 F:	drivers/net/wireless/ath/ath10k/
 
 QUALCOMM ATHEROS ATH11K WIRELESS DRIVER
-M:	Kalle Valo <kvalo@codeaurora.org>
+M:	Kalle Valo <kvalo@kernel.org>
 L:	ath11k@lists.infradead.org
 S:	Supported
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/kvalo/ath.git
@@ -15916 +15916 @@
 F:	drivers/media/platform/qcom/venus/
 
 QUALCOMM WCN36XX WIRELESS DRIVER
-M:	Kalle Valo <kvalo@codeaurora.org>
+M:	Kalle Valo <kvalo@kernel.org>
 L:	wcn36xx@lists.infradead.org
 S:	Supported
 W:	https://wireless.wiki.kernel.org/en/users/Drivers/wcn36xx
@@ -17454 +17454 @@
 SILVACO I3C DUAL-ROLE MASTER
 M:	Miquel Raynal <miquel.raynal@bootlin.com>
 M:	Conor Culhane <conor.culhane@silvaco.com>
-L:	linux-i3c@lists.infradead.org
+L:	linux-i3c@lists.infradead.org (moderated for non-subscribers)
 S:	Maintained
 F:	Documentation/devicetree/bindings/i3c/silvaco,i3c-master.yaml
 F:	drivers/i3c/master/svc-i3c-master.c
@@ -21103 +21103 @@
 F:	arch/x86/kernel/cpu/zhaoxin.c
 
 ZONEFS FILESYSTEM
-M:	Damien Le Moal <damien.lemoal@wdc.com>
+M:	Damien Le Moal <damien.lemoal@opensource.wdc.com>
 M:	Naohiro Aota <naohiro.aota@wdc.com>
 R:	Johannes Thumshirn <jth@kernel.org>
 L:	linux-fsdevel@vger.kernel.org
+1 -1
Makefile
@@ -2 +2 @@
 VERSION = 5
 PATCHLEVEL = 16
 SUBLEVEL = 0
-EXTRAVERSION = -rc5
+EXTRAVERSION = -rc8
 NAME = Gobble Gobble
 
 # *DOCUMENTATION*
+1
arch/arm/boot/dts/imx6qdl-wandboard.dtsi
@@ -309 +309 @@
 
 		ethphy: ethernet-phy@1 {
 			reg = <1>;
+			qca,clk-out-frequency = <125000000>;
 		};
 	};
 };
+2
arch/arm/boot/dts/imx6qp-prtwd3.dts
@@ -178 +178 @@
 		label = "cpu";
 		ethernet = <&fec>;
 		phy-mode = "rgmii-id";
+		rx-internal-delay-ps = <2000>;
+		tx-internal-delay-ps = <2000>;
 
 		fixed-link {
 			speed = <100>;
+1 -1
arch/arm/boot/dts/imx6ull-pinfunc.h
@@ -82 +82 @@
 #define MX6ULL_PAD_CSI_DATA04__ESAI_TX_FS	0x01F4 0x0480 0x0000 0x9 0x0
 #define MX6ULL_PAD_CSI_DATA05__ESAI_TX_CLK	0x01F8 0x0484 0x0000 0x9 0x0
 #define MX6ULL_PAD_CSI_DATA06__ESAI_TX5_RX0	0x01FC 0x0488 0x0000 0x9 0x0
-#define MX6ULL_PAD_CSI_DATA07__ESAI_T0	0x0200 0x048C 0x0000 0x9 0x0
+#define MX6ULL_PAD_CSI_DATA07__ESAI_TX0	0x0200 0x048C 0x0000 0x9 0x0
 
 #endif /* __DTS_IMX6ULL_PINFUNC_H */
+2
arch/arm/boot/dts/ls1021a-tsn.dts
@@ -91 +91 @@
 		/* Internal port connected to eth2 */
 		ethernet = <&enet2>;
 		phy-mode = "rgmii";
+		rx-internal-delay-ps = <0>;
+		tx-internal-delay-ps = <0>;
 		reg = <4>;
 
 		fixed-link {
+1 -1
arch/arm/boot/dts/socfpga_arria10_socdk_qspi.dts
@@ -12 +12 @@
 	flash0: n25q00@0 {
 		#address-cells = <1>;
 		#size-cells = <1>;
-		compatible = "n25q00aa";
+		compatible = "micron,mt25qu02g", "jedec,spi-nor";
 		reg = <0>;
 		spi-max-frequency = <100000000>;
 
+1 -1
arch/arm/boot/dts/socfpga_arria5_socdk.dts
@@ -119 +119 @@
 	flash: flash@0 {
 		#address-cells = <1>;
 		#size-cells = <1>;
-		compatible = "n25q256a";
+		compatible = "micron,n25q256a", "jedec,spi-nor";
 		reg = <0>;
 		spi-max-frequency = <100000000>;
 
+1 -1
arch/arm/boot/dts/socfpga_cyclone5_socdk.dts
@@ -124 +124 @@
 	flash0: n25q00@0 {
 		#address-cells = <1>;
 		#size-cells = <1>;
-		compatible = "n25q00";
+		compatible = "micron,mt25qu02g", "jedec,spi-nor";
 		reg = <0>;	/* chip select */
 		spi-max-frequency = <100000000>;
 
+1 -1
arch/arm/boot/dts/socfpga_cyclone5_sockit.dts
@@ -169 +169 @@
 	flash: flash@0 {
 		#address-cells = <1>;
 		#size-cells = <1>;
-		compatible = "n25q00";
+		compatible = "micron,mt25qu02g", "jedec,spi-nor";
 		reg = <0>;
 		spi-max-frequency = <100000000>;
 
+1 -1
arch/arm/boot/dts/socfpga_cyclone5_socrates.dts
@@ -80 +80 @@
 	flash: flash@0 {
 		#address-cells = <1>;
 		#size-cells = <1>;
-		compatible = "n25q256a";
+		compatible = "micron,n25q256a", "jedec,spi-nor";
 		reg = <0>;
 		spi-max-frequency = <100000000>;
 		m25p,fast-read;
+1 -1
arch/arm/boot/dts/socfpga_cyclone5_sodia.dts
@@ -116 +116 @@
 	flash0: n25q512a@0 {
 		#address-cells = <1>;
 		#size-cells = <1>;
-		compatible = "n25q512a";
+		compatible = "micron,n25q512a", "jedec,spi-nor";
 		reg = <0>;
 		spi-max-frequency = <100000000>;
 
+2 -2
arch/arm/boot/dts/socfpga_cyclone5_vining_fpga.dts
@@ -224 +224 @@
 	n25q128@0 {
 		#address-cells = <1>;
 		#size-cells = <1>;
-		compatible = "n25q128";
+		compatible = "micron,n25q128", "jedec,spi-nor";
 		reg = <0>;	/* chip select */
 		spi-max-frequency = <100000000>;
 		m25p,fast-read;
@@ -241 +241 @@
 	n25q00@1 {
 		#address-cells = <1>;
 		#size-cells = <1>;
-		compatible = "n25q00";
+		compatible = "micron,mt25qu02g", "jedec,spi-nor";
 		reg = <1>;	/* chip select */
 		spi-max-frequency = <100000000>;
 		m25p,fast-read;
-1
arch/arm/include/asm/efi.h
@@ -17 +17 @@
 
 #ifdef CONFIG_EFI
 void efi_init(void);
-extern void efifb_setup_from_dmi(struct screen_info *si, const char *opt);
 
 int efi_create_mapping(struct mm_struct *mm, efi_memory_desc_t *md);
 int efi_set_mapping_permissions(struct mm_struct *mm, efi_memory_desc_t *md);
+3 -5
arch/arm/kernel/entry-armv.S
@@ -596 +596 @@
 	tstne	r0, #0x04000000			@ bit 26 set on both ARM and Thumb-2
 	reteq	lr
 	and	r8, r0, #0x00000f00		@ mask out CP number
- THUMB(	lsr	r8, r8, #8		)
 	mov	r7, #1
-	add	r6, r10, #TI_USED_CP
- ARM(	strb	r7, [r6, r8, lsr #8]	)	@ set appropriate used_cp[]
- THUMB(	strb	r7, [r6, r8]		)	@ set appropriate used_cp[]
+	add	r6, r10, r8, lsr #8		@ add used_cp[] array offset first
+	strb	r7, [r6, #TI_USED_CP]		@ set appropriate used_cp[]
 #ifdef CONFIG_IWMMXT
 	@ Test if we need to give access to iWMMXt coprocessors
 	ldr	r5, [r10, #TI_FLAGS]
@@ -607 +609 @@
 	bcs	iwmmxt_task_enable
 #endif
 ARM(	add	pc, pc, r8, lsr #6	)
-THUMB(	lsl	r8, r8, #2		)
+THUMB(	lsr	r8, r8, #6	)
 THUMB(	add	pc, r8			)
 	nop
+1
arch/arm/kernel/head-nommu.S
@@ -114 +114 @@
 	add	r12, r12, r10
 	ret	r12
 1:	bl	__after_proc_init
+	ldr	r7, __secondary_data		@ reload r7
 	ldr	sp, [r7, #12]			@ set up the stack pointer
 	ldr	r0, [r7, #16]			@ set up task pointer
 	mov	fp, #0
+1 -1
arch/arm/mach-rockchip/platsmp.c
@@ -189 +189 @@
 	rockchip_boot_fn = __pa_symbol(secondary_startup);
 
 	/* copy the trampoline to sram, that runs during startup of the core */
-	memcpy(sram_base_addr, &rockchip_secondary_trampoline, trampoline_sz);
+	memcpy_toio(sram_base_addr, &rockchip_secondary_trampoline, trampoline_sz);
 	flush_cache_all();
 	outer_clean_range(0, trampoline_sz);
-1
arch/arm64/Kconfig.platforms
@@ -161 +161 @@
 
 config ARCH_MESON
 	bool "Amlogic Platforms"
-	select COMMON_CLK
 	help
 	  This enables support for the arm64 based Amlogic SoCs
 	  such as the s905, S905X/D, S912, A113X/D or S905X/D2
+1 -1
arch/arm64/boot/dts/allwinner/sun50i-h5-orangepi-zero-plus.dts
@@ -69 +69 @@
 	pinctrl-0 = <&emac_rgmii_pins>;
 	phy-supply = <&reg_gmac_3v3>;
 	phy-handle = <&ext_rgmii_phy>;
-	phy-mode = "rgmii";
+	phy-mode = "rgmii-id";
 	status = "okay";
 };
 
+15 -15
arch/arm64/boot/dts/amlogic/meson-axg-jethome-jethub-j100.dts
@@ -134 +134 @@
 				type = "critical";
 			};
 		};
-	};
 
-	cpu_cooling_maps: cooling-maps {
-		map0 {
-			trip = <&cpu_passive>;
-			cooling-device = <&cpu0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
-					 <&cpu1 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
-					 <&cpu2 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
-					 <&cpu3 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
-		};
+		cpu_cooling_maps: cooling-maps {
+			map0 {
+				trip = <&cpu_passive>;
+				cooling-device = <&cpu0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+						 <&cpu1 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+						 <&cpu2 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+						 <&cpu3 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+			};
 
-		map1 {
-			trip = <&cpu_hot>;
-			cooling-device = <&cpu0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
-					 <&cpu1 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
-					 <&cpu2 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
-					 <&cpu3 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+			map1 {
+				trip = <&cpu_hot>;
+				cooling-device = <&cpu0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+						 <&cpu1 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+						 <&cpu2 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+						 <&cpu3 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+			};
 		};
 	};
 };
+1 -1
arch/arm64/boot/dts/apple/t8103-j274.dts
@@ -60 +60 @@
 
 &port02 {
 	bus-range = <3 3>;
-	ethernet0: pci@0,0 {
+	ethernet0: ethernet@0,0 {
 		reg = <0x30000 0x0 0x0 0x0 0x0>;
 		/* To be filled by the loader */
 		local-mac-address = [00 10 18 00 00 00];
+4
arch/arm64/boot/dts/apple/t8103.dtsi
@@ -144 +144 @@
 			apple,npins = <212>;
 
 			interrupt-controller;
+			#interrupt-cells = <2>;
 			interrupt-parent = <&aic>;
 			interrupts = <AIC_IRQ 190 IRQ_TYPE_LEVEL_HIGH>,
 				     <AIC_IRQ 191 IRQ_TYPE_LEVEL_HIGH>,
@@ -170 +171 @@
 			apple,npins = <42>;
 
 			interrupt-controller;
+			#interrupt-cells = <2>;
 			interrupt-parent = <&aic>;
 			interrupts = <AIC_IRQ 268 IRQ_TYPE_LEVEL_HIGH>,
 				     <AIC_IRQ 269 IRQ_TYPE_LEVEL_HIGH>,
@@ -190 +192 @@
 			apple,npins = <23>;
 
 			interrupt-controller;
+			#interrupt-cells = <2>;
 			interrupt-parent = <&aic>;
 			interrupts = <AIC_IRQ 330 IRQ_TYPE_LEVEL_HIGH>,
 				     <AIC_IRQ 331 IRQ_TYPE_LEVEL_HIGH>,
@@ -210 +213 @@
 			apple,npins = <16>;
 
 			interrupt-controller;
+			#interrupt-cells = <2>;
 			interrupt-parent = <&aic>;
 			interrupts = <AIC_IRQ 391 IRQ_TYPE_LEVEL_HIGH>,
 				     <AIC_IRQ 392 IRQ_TYPE_LEVEL_HIGH>,
-2
arch/arm64/boot/dts/freescale/fsl-ls1088a-ten64.dts
@@ -38 +38 @@
 		powerdn {
 			label = "External Power Down";
 			gpios = <&gpio1 17 GPIO_ACTIVE_LOW>;
-			interrupts = <&gpio1 17 IRQ_TYPE_EDGE_FALLING>;
 			linux,code = <KEY_POWER>;
 		};
 
@@ -46 +45 @@
 		admin {
 			label = "ADMIN button";
 			gpios = <&gpio3 8 GPIO_ACTIVE_HIGH>;
-			interrupts = <&gpio3 8 IRQ_TYPE_EDGE_RISING>;
 			linux,code = <KEY_WPS_BUTTON>;
 		};
 	};
+4
arch/arm64/boot/dts/freescale/fsl-lx2160a-bluebox3.dts
@@ -386 +386 @@
 	reg = <2>;
 	ethernet = <&dpmac17>;
 	phy-mode = "rgmii-id";
+	rx-internal-delay-ps = <2000>;
+	tx-internal-delay-ps = <2000>;
 
 	fixed-link {
 		speed = <1000>;
@@ -529 +531 @@
 	reg = <2>;
 	ethernet = <&dpmac18>;
 	phy-mode = "rgmii-id";
+	rx-internal-delay-ps = <2000>;
+	tx-internal-delay-ps = <2000>;
 
 	fixed-link {
 		speed = <1000>;
+2 -2
arch/arm64/boot/dts/freescale/fsl-lx2160a.dtsi
@@ -719 +719 @@
 			clock-names = "i2c";
 			clocks = <&clockgen QORIQ_CLK_PLATFORM_PLL
 					    QORIQ_CLK_PLL_DIV(16)>;
-			scl-gpio = <&gpio2 15 GPIO_ACTIVE_HIGH>;
+			scl-gpios = <&gpio2 15 GPIO_ACTIVE_HIGH>;
 			status = "disabled";
 		};
 
@@ -768 +768 @@
 			clock-names = "i2c";
 			clocks = <&clockgen QORIQ_CLK_PLATFORM_PLL
 					    QORIQ_CLK_PLL_DIV(16)>;
-			scl-gpio = <&gpio2 16 GPIO_ACTIVE_HIGH>;
+			scl-gpios = <&gpio2 16 GPIO_ACTIVE_HIGH>;
 			status = "disabled";
 		};
 
-2
arch/arm64/boot/dts/freescale/imx8mq.dtsi
@@ -524 +524 @@
 				 <&clk IMX8MQ_VIDEO_PLL1>,
 				 <&clk IMX8MQ_VIDEO_PLL1_OUT>;
 			assigned-clock-rates = <0>, <0>, <0>, <594000000>;
-			interconnects = <&noc IMX8MQ_ICM_LCDIF &noc IMX8MQ_ICS_DRAM>;
-			interconnect-names = "dram";
 			status = "disabled";
 
 			port@0 {
+1 -1
arch/arm64/boot/dts/rockchip/rk3308-roc-cc.dts
@@ -97 +97 @@
 		regulator-max-microvolt = <3300000>;
 		regulator-always-on;
 		regulator-boot-on;
-		vim-supply = <&vcc_io>;
+		vin-supply = <&vcc_io>;
 	};
 
 	vdd_core: vdd-core {
-1
arch/arm64/boot/dts/rockchip/rk3399-khadas-edge.dtsi
@@ -705 +705 @@
 &sdhci {
 	bus-width = <8>;
 	mmc-hs400-1_8v;
-	mmc-hs400-enhanced-strobe;
 	non-removable;
 	status = "okay";
 };
+1
arch/arm64/boot/dts/rockchip/rk3399-kobol-helios64.dts
@@ -276 +276 @@
 	clock-output-names = "xin32k", "rk808-clkout2";
 	pinctrl-names = "default";
 	pinctrl-0 = <&pmic_int_l>;
+	rockchip,system-power-controller;
 	vcc1-supply = <&vcc5v0_sys>;
 	vcc2-supply = <&vcc5v0_sys>;
 	vcc3-supply = <&vcc5v0_sys>;
+1 -1
arch/arm64/boot/dts/rockchip/rk3399-leez-p710.dts
@@ -55 +55 @@
 		regulator-boot-on;
 		regulator-min-microvolt = <3300000>;
 		regulator-max-microvolt = <3300000>;
-		vim-supply = <&vcc3v3_sys>;
+		vin-supply = <&vcc3v3_sys>;
 	};
 
 	vcc3v3_sys: vcc3v3-sys {
+1 -1
arch/arm64/boot/dts/rockchip/rk3399-rock-pi-4.dtsi
@@ -502 +502 @@
 	status = "okay";
 
 	bt656-supply = <&vcc_3v0>;
-	audio-supply = <&vcc_3v0>;
+	audio-supply = <&vcc1v8_codec>;
 	sdmmc-supply = <&vcc_sdio>;
 	gpio1830-supply = <&vcc_3v0>;
 };
-1
arch/arm64/include/asm/efi.h
@@ -14 +14 @@
 
 #ifdef CONFIG_EFI
 extern void efi_init(void);
-extern void efifb_setup_from_dmi(struct screen_info *si, const char *opt);
 #else
 #define efi_init()
 #endif
+1
arch/arm64/kernel/machine_kexec_file.c
@@ -149 +149 @@
 				initrd_len, cmdline, 0);
 	if (!dtb) {
 		pr_err("Preparing for new dtb failed\n");
+		ret = -EINVAL;
 		goto out_err;
 	}
 
+2
arch/mips/include/asm/mach-ralink/spaces.h
@@ -6 +6 @@
 #define PCI_IOSIZE	SZ_64K
 #define IO_SPACE_LIMIT	(PCI_IOSIZE - 1)
 
+#define pci_remap_iospace pci_remap_iospace
+
 #include <asm/mach-generic/spaces.h>
 #endif
-4
arch/mips/include/asm/pci.h
@@ -20 +20 @@
 #include <linux/list.h>
 #include <linux/of.h>
 
-#ifdef CONFIG_PCI_DRIVERS_GENERIC
-#define pci_remap_iospace pci_remap_iospace
-#endif
-
 #ifdef CONFIG_PCI_DRIVERS_LEGACY
 
 /*
+2
arch/mips/pci/pci-generic.c
@@ -47 +47 @@
 	pci_read_bridge_bases(bus);
 }
 
+#ifdef pci_remap_iospace
 int pci_remap_iospace(const struct resource *res, phys_addr_t phys_addr)
 {
 	unsigned long vaddr;
@@ -60 +61 @@
 	set_io_port_base(vaddr);
 	return 0;
 }
+#endif
-5
arch/parisc/Kconfig
@@ -85 +85 @@
 config STACK_GROWSUP
 	def_bool y
 
-config ARCH_DEFCONFIG
-	string
-	default "arch/parisc/configs/generic-32bit_defconfig" if !64BIT
-	default "arch/parisc/configs/generic-64bit_defconfig" if 64BIT
-
 config GENERIC_LOCKBREAK
 	bool
 	default y
+2 -2
arch/parisc/include/asm/futex.h
@@ -14 +14 @@
 _futex_spin_lock(u32 __user *uaddr)
 {
 	extern u32 lws_lock_start[];
-	long index = ((long)uaddr & 0x3f8) >> 1;
+	long index = ((long)uaddr & 0x7f8) >> 1;
 	arch_spinlock_t *s = (arch_spinlock_t *)&lws_lock_start[index];
 	preempt_disable();
 	arch_spin_lock(s);
@@ -24 +24 @@
 _futex_spin_unlock(u32 __user *uaddr)
 {
 	extern u32 lws_lock_start[];
-	long index = ((long)uaddr & 0x3f8) >> 1;
+	long index = ((long)uaddr & 0x7f8) >> 1;
 	arch_spinlock_t *s = (arch_spinlock_t *)&lws_lock_start[index];
 	arch_spin_unlock(s);
 	preempt_enable();
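The effect of widening the hash mask above can be checked with a quick sketch: the index is derived from address bits selected by the mask, so going from 0x3f8 to 0x7f8 doubles the number of distinct lock-array slots that user futex addresses can hash to, halving lock aliasing. A small model (not kernel code) counting the distinct indices for each mask over word-aligned addresses:

```python
# Count the distinct lock-array indices produced by the parisc futex
# hash, index = (uaddr & mask) >> 1, for u32-aligned user addresses.
def distinct_indices(mask: int) -> int:
    return len({(addr & mask) >> 1 for addr in range(0, 1 << 12, 4)})

old = distinct_indices(0x3f8)  # mask before this change
new = distinct_indices(0x7f8)  # widened mask from this change
print(old, new)
```

The old mask keeps 7 address bits (128 slots); the new one keeps 8 (256 slots).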
+1 -1
arch/parisc/kernel/syscall.S
@@ -472 +472 @@
 	extrd,u	%r1,PSW_W_BIT,1,%r1
 	/* sp must be aligned on 4, so deposit the W bit setting into
 	 * the bottom of sp temporarily */
-	or,ev	%r1,%r30,%r30
+	or,od	%r1,%r30,%r30
 
 	/* Clip LWS number to a 32-bit value for 32-bit processes */
 	depdi	0, 31, 32, %r20
+2
arch/parisc/kernel/traps.c
@@ -730 +730 @@
 		}
 		mmap_read_unlock(current->mm);
 	}
+	/* CPU could not fetch instruction, so clear stale IIR value. */
+	regs->iir = 0xbaadf00d;
 	fallthrough;
 	case 27:
 		/* Data memory protection ID trap */
+34 -8
arch/powerpc/kernel/module_64.c
@@ -422 +422 @@
 			  const char *name)
 {
 	long reladdr;
+	func_desc_t desc;
+	int i;
 
 	if (is_mprofile_ftrace_call(name))
 		return create_ftrace_stub(entry, addr, me);
 
-	memcpy(entry->jump, ppc64_stub_insns, sizeof(ppc64_stub_insns));
+	for (i = 0; i < sizeof(ppc64_stub_insns) / sizeof(u32); i++) {
+		if (patch_instruction(&entry->jump[i],
+				      ppc_inst(ppc64_stub_insns[i])))
+			return 0;
+	}
 
 	/* Stub uses address relative to r2. */
 	reladdr = (unsigned long)entry - my_r2(sechdrs, me);
@@ -437 +443 @@
 	}
 	pr_debug("Stub %p get data from reladdr %li\n", entry, reladdr);
 
-	entry->jump[0] |= PPC_HA(reladdr);
-	entry->jump[1] |= PPC_LO(reladdr);
-	entry->funcdata = func_desc(addr);
-	entry->magic = STUB_MAGIC;
+	if (patch_instruction(&entry->jump[0],
+			      ppc_inst(entry->jump[0] | PPC_HA(reladdr))))
+		return 0;
+
+	if (patch_instruction(&entry->jump[1],
+			      ppc_inst(entry->jump[1] | PPC_LO(reladdr))))
+		return 0;
+
+	// func_desc_t is 8 bytes if ABIv2, else 16 bytes
+	desc = func_desc(addr);
+	for (i = 0; i < sizeof(func_desc_t) / sizeof(u32); i++) {
+		if (patch_instruction(((u32 *)&entry->funcdata) + i,
+				      ppc_inst(((u32 *)(&desc))[i])))
+			return 0;
+	}
+
+	if (patch_instruction(&entry->magic, ppc_inst(STUB_MAGIC)))
+		return 0;
 
 	return 1;
 }
@@ -495 +515 @@
 			me->name, *instruction, instruction);
 		return 0;
 	}
+
 	/* ld r2,R2_STACK_OFFSET(r1) */
-	*instruction = PPC_INST_LD_TOC;
+	if (patch_instruction(instruction, ppc_inst(PPC_INST_LD_TOC)))
+		return 0;
+
 	return 1;
 }
@@ -636 +659 @@
 		}
 
 		/* Only replace bits 2 through 26 */
-		*(uint32_t *)location
-			= (*(uint32_t *)location & ~0x03fffffc)
+		value = (*(uint32_t *)location & ~0x03fffffc)
 			| (value & 0x03fffffc);
+
+		if (patch_instruction((u32 *)location, ppc_inst(value)))
+			return -EFAULT;
+
 		break;
 
 	case R_PPC64_REL64:
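The last hunk above still splices only bits 2 through 26 of the computed value into the instruction word; what changed is that the write now goes through patch_instruction(). The bit-splicing itself can be sketched numerically (a toy model with illustrative values, not kernel code):

```python
# Toy model of the "replace bits 2 through 26" splice used for
# R_PPC64_REL24 relocations: keep the opcode/link bits outside the
# field, insert the branch-displacement field from the new value.
FIELD_MASK = 0x03fffffc  # bits 2..25 of the 32-bit instruction word

def splice_rel24(insn: int, value: int) -> int:
    """Return insn with its displacement field replaced by value's."""
    return (insn & ~FIELD_MASK & 0xffffffff) | (value & FIELD_MASK)

insn = 0x48000001        # a branch-and-link encoding; field bits clear
patched = splice_rel24(insn, 0x1234)
print(hex(patched))
```

Only the displacement field changes; the bits outside the mask (opcode and link bit) pass through untouched, which is exactly what the C expression computes before handing the word to patch_instruction().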
+1 -1
arch/powerpc/mm/ptdump/ptdump.c
@@ -183 +183 @@
 {
 	pte_t pte = __pte(st->current_flags);
 
-	if (!IS_ENABLED(CONFIG_PPC_DEBUG_WX) || !st->check_wx)
+	if (!IS_ENABLED(CONFIG_DEBUG_WX) || !st->check_wx)
 		return;
 
 	if (!pte_write(pte) || !pte_exec(pte))
+2 -2
arch/powerpc/platforms/85xx/smp.c
@@ -220 +220 @@
 	local_irq_save(flags);
 	hard_irq_disable();
 
-	if (qoriq_pm_ops)
+	if (qoriq_pm_ops && qoriq_pm_ops->cpu_up_prepare)
 		qoriq_pm_ops->cpu_up_prepare(cpu);
 
 	/* if cpu is not spinning, reset it */
@@ -292 +292 @@
 	booting_thread_hwid = cpu_thread_in_core(nr);
 	primary = cpu_first_thread_sibling(nr);
 
-	if (qoriq_pm_ops)
+	if (qoriq_pm_ops && qoriq_pm_ops->cpu_up_prepare)
 		qoriq_pm_ops->cpu_up_prepare(nr);
 
 	/*
+1
arch/riscv/boot/dts/sifive/hifive-unleashed-a00.dts
@@ -76 +76 @@
 		spi-max-frequency = <20000000>;
 		voltage-ranges = <3300 3300>;
 		disable-wp;
+		gpios = <&gpio 11 GPIO_ACTIVE_LOW>;
 	};
 };
 
+53 -60
arch/riscv/boot/dts/sifive/hifive-unmatched-a00.dts
··· 2 2 /* Copyright (c) 2020 SiFive, Inc */ 3 3 4 4 #include "fu740-c000.dtsi" 5 + #include <dt-bindings/gpio/gpio.h> 5 6 #include <dt-bindings/interrupt-controller/irq.h> 6 7 7 8 /* Clock frequency (in Hz) of the PCB crystal for rtcclk */ ··· 55 54 temperature-sensor@4c { 56 55 compatible = "ti,tmp451"; 57 56 reg = <0x4c>; 57 + vcc-supply = <&vdd_bpro>; 58 58 interrupt-parent = <&gpio>; 59 59 interrupts = <6 IRQ_TYPE_LEVEL_LOW>; 60 + }; 61 + 62 + eeprom@54 { 63 + compatible = "microchip,24c02", "atmel,24c02"; 64 + reg = <0x54>; 65 + vcc-supply = <&vdd_bpro>; 66 + label = "board-id"; 67 + pagesize = <16>; 68 + read-only; 69 + size = <256>; 60 70 }; 61 71 62 72 pmic@58 { ··· 77 65 interrupts = <1 IRQ_TYPE_LEVEL_LOW>; 78 66 interrupt-controller; 79 67 80 - regulators { 81 - vdd_bcore1: bcore1 { 82 - regulator-min-microvolt = <900000>; 83 - regulator-max-microvolt = <900000>; 84 - regulator-min-microamp = <5000000>; 85 - regulator-max-microamp = <5000000>; 86 - regulator-always-on; 87 - }; 68 + onkey { 69 + compatible = "dlg,da9063-onkey"; 70 + }; 88 71 89 - vdd_bcore2: bcore2 { 90 - regulator-min-microvolt = <900000>; 91 - regulator-max-microvolt = <900000>; 92 - regulator-min-microamp = <5000000>; 93 - regulator-max-microamp = <5000000>; 72 + rtc { 73 + compatible = "dlg,da9063-rtc"; 74 + }; 75 + 76 + wdt { 77 + compatible = "dlg,da9063-watchdog"; 78 + }; 79 + 80 + regulators { 81 + vdd_bcore: bcores-merged { 82 + regulator-min-microvolt = <1050000>; 83 + regulator-max-microvolt = <1050000>; 84 + regulator-min-microamp = <4800000>; 85 + regulator-max-microamp = <4800000>; 94 86 regulator-always-on; 95 87 }; 96 88 97 89 vdd_bpro: bpro { 98 90 regulator-min-microvolt = <1800000>; 99 91 regulator-max-microvolt = <1800000>; 100 - regulator-min-microamp = <2500000>; 101 - regulator-max-microamp = <2500000>; 92 + regulator-min-microamp = <2400000>; 93 + regulator-max-microamp = <2400000>; 102 94 regulator-always-on; 103 95 }; 104 96 105 97 vdd_bperi: bperi { 106 - 
regulator-min-microvolt = <1050000>; 107 - regulator-max-microvolt = <1050000>; 98 + regulator-min-microvolt = <1060000>; 99 + regulator-max-microvolt = <1060000>; 108 100 regulator-min-microamp = <1500000>; 109 101 regulator-max-microamp = <1500000>; 110 102 regulator-always-on; 111 103 }; 112 104 113 - vdd_bmem: bmem { 114 - regulator-min-microvolt = <1200000>; 115 - regulator-max-microvolt = <1200000>; 116 - regulator-min-microamp = <3000000>; 117 - regulator-max-microamp = <3000000>; 118 - regulator-always-on; 119 - }; 120 - 121 - vdd_bio: bio { 105 + vdd_bmem_bio: bmem-bio-merged { 122 106 regulator-min-microvolt = <1200000>; 123 107 regulator-max-microvolt = <1200000>; 124 108 regulator-min-microamp = <3000000>; ··· 125 117 vdd_ldo1: ldo1 { 126 118 regulator-min-microvolt = <1800000>; 127 119 regulator-max-microvolt = <1800000>; 128 - regulator-min-microamp = <100000>; 129 - regulator-max-microamp = <100000>; 130 120 regulator-always-on; 131 121 }; 132 122 133 123 vdd_ldo2: ldo2 { 134 124 regulator-min-microvolt = <1800000>; 135 125 regulator-max-microvolt = <1800000>; 136 - regulator-min-microamp = <200000>; 137 - regulator-max-microamp = <200000>; 138 126 regulator-always-on; 139 127 }; 140 128 141 129 vdd_ldo3: ldo3 { 142 - regulator-min-microvolt = <1800000>; 143 - regulator-max-microvolt = <1800000>; 144 - regulator-min-microamp = <200000>; 145 - regulator-max-microamp = <200000>; 130 + regulator-min-microvolt = <3300000>; 131 + regulator-max-microvolt = <3300000>; 146 132 regulator-always-on; 147 133 }; 148 134 149 135 vdd_ldo4: ldo4 { 150 - regulator-min-microvolt = <1800000>; 151 - regulator-max-microvolt = <1800000>; 152 - regulator-min-microamp = <200000>; 153 - regulator-max-microamp = <200000>; 136 + regulator-min-microvolt = <2500000>; 137 + regulator-max-microvolt = <2500000>; 154 138 regulator-always-on; 155 139 }; 156 140 157 141 vdd_ldo5: ldo5 { 158 - regulator-min-microvolt = <1800000>; 159 - regulator-max-microvolt = <1800000>; 160 - 
regulator-min-microamp = <100000>; 161 - regulator-max-microamp = <100000>; 142 + regulator-min-microvolt = <3300000>; 143 + regulator-max-microvolt = <3300000>; 162 144 regulator-always-on; 163 145 }; 164 146 165 147 vdd_ldo6: ldo6 { 166 - regulator-min-microvolt = <3300000>; 167 - regulator-max-microvolt = <3300000>; 168 - regulator-min-microamp = <200000>; 169 - regulator-max-microamp = <200000>; 148 + regulator-min-microvolt = <1800000>; 149 + regulator-max-microvolt = <1800000>; 170 150 regulator-always-on; 171 151 }; 172 152 173 153 vdd_ldo7: ldo7 { 174 - regulator-min-microvolt = <1800000>; 175 - regulator-max-microvolt = <1800000>; 176 - regulator-min-microamp = <200000>; 177 - regulator-max-microamp = <200000>; 154 + regulator-min-microvolt = <3300000>; 155 + regulator-max-microvolt = <3300000>; 178 156 regulator-always-on; 179 157 }; 180 158 181 159 vdd_ldo8: ldo8 { 182 - regulator-min-microvolt = <1800000>; 183 - regulator-max-microvolt = <1800000>; 184 - regulator-min-microamp = <200000>; 185 - regulator-max-microamp = <200000>; 160 + regulator-min-microvolt = <3300000>; 161 + regulator-max-microvolt = <3300000>; 186 162 regulator-always-on; 187 163 }; 188 164 189 165 vdd_ld09: ldo9 { 190 166 regulator-min-microvolt = <1050000>; 191 167 regulator-max-microvolt = <1050000>; 192 - regulator-min-microamp = <200000>; 193 - regulator-max-microamp = <200000>; 168 + regulator-always-on; 194 169 }; 195 170 196 171 vdd_ldo10: ldo10 { 197 172 regulator-min-microvolt = <1000000>; 198 173 regulator-max-microvolt = <1000000>; 199 - regulator-min-microamp = <300000>; 200 - regulator-max-microamp = <300000>; 174 + regulator-always-on; 201 175 }; 202 176 203 177 vdd_ldo11: ldo11 { 204 178 regulator-min-microvolt = <2500000>; 205 179 regulator-max-microvolt = <2500000>; 206 - regulator-min-microamp = <300000>; 207 - regulator-max-microamp = <300000>; 208 180 regulator-always-on; 209 181 }; 210 182 }; ··· 211 223 spi-max-frequency = <20000000>; 212 224 voltage-ranges = 
<3300 3300>; 213 225 disable-wp; 226 + gpios = <&gpio 15 GPIO_ACTIVE_LOW>; 214 227 }; 215 228 }; 216 229 ··· 234 245 235 246 &gpio { 236 247 status = "okay"; 248 + gpio-line-names = "J29.1", "PMICNTB", "PMICSHDN", "J8.1", "J8.3", 249 + "PCIe_PWREN", "THERM", "UBRDG_RSTN", "PCIe_PERSTN", 250 + "ULPI_RSTN", "J8.2", "UHUB_RSTN", "GEMGXL_RST", "J8.4", 251 + "EN_VDD_SD", "SD_CD"; 237 252 };
-1
arch/riscv/include/asm/efi.h
··· 13 13 14 14 #ifdef CONFIG_EFI 15 15 extern void efi_init(void); 16 - extern void efifb_setup_from_dmi(struct screen_info *si, const char *opt); 17 16 #else 18 17 #define efi_init() 19 18 #endif
+2
arch/s390/configs/debug_defconfig
··· 117 117 CONFIG_UNIX_DIAG=m 118 118 CONFIG_XFRM_USER=m 119 119 CONFIG_NET_KEY=m 120 + CONFIG_NET_SWITCHDEV=y 120 121 CONFIG_SMC=m 121 122 CONFIG_SMC_DIAG=m 122 123 CONFIG_INET=y ··· 512 511 CONFIG_MLX4_EN=m 513 512 CONFIG_MLX5_CORE=m 514 513 CONFIG_MLX5_CORE_EN=y 514 + CONFIG_MLX5_ESWITCH=y 515 515 # CONFIG_NET_VENDOR_MICREL is not set 516 516 # CONFIG_NET_VENDOR_MICROCHIP is not set 517 517 # CONFIG_NET_VENDOR_MICROSEMI is not set
+2
arch/s390/configs/defconfig
··· 109 109 CONFIG_UNIX_DIAG=m 110 110 CONFIG_XFRM_USER=m 111 111 CONFIG_NET_KEY=m 112 + CONFIG_NET_SWITCHDEV=y 112 113 CONFIG_SMC=m 113 114 CONFIG_SMC_DIAG=m 114 115 CONFIG_INET=y ··· 503 502 CONFIG_MLX4_EN=m 504 503 CONFIG_MLX5_CORE=m 505 504 CONFIG_MLX5_CORE_EN=y 505 + CONFIG_MLX5_ESWITCH=y 506 506 # CONFIG_NET_VENDOR_MICREL is not set 507 507 # CONFIG_NET_VENDOR_MICROCHIP is not set 508 508 # CONFIG_NET_VENDOR_MICROSEMI is not set
-2
arch/s390/kernel/ftrace.c
··· 290 290 return; 291 291 292 292 regs = ftrace_get_regs(fregs); 293 - preempt_disable_notrace(); 294 293 p = get_kprobe((kprobe_opcode_t *)ip); 295 294 if (unlikely(!p) || kprobe_disabled(p)) 296 295 goto out; ··· 317 318 } 318 319 __this_cpu_write(current_kprobe, NULL); 319 320 out: 320 - preempt_enable_notrace(); 321 321 ftrace_test_recursion_unlock(bit); 322 322 } 323 323 NOKPROBE_SYMBOL(kprobe_ftrace_handler);
+5 -4
arch/s390/kernel/irq.c
··· 138 138 struct pt_regs *old_regs = set_irq_regs(regs); 139 139 int from_idle; 140 140 141 - irq_enter(); 141 + irq_enter_rcu(); 142 142 143 143 if (user_mode(regs)) { 144 144 update_timer_sys(); ··· 158 158 do_irq_async(regs, IO_INTERRUPT); 159 159 } while (MACHINE_IS_LPAR && irq_pending(regs)); 160 160 161 - irq_exit(); 161 + irq_exit_rcu(); 162 + 162 163 set_irq_regs(old_regs); 163 164 irqentry_exit(regs, state); 164 165 ··· 173 172 struct pt_regs *old_regs = set_irq_regs(regs); 174 173 int from_idle; 175 174 176 - irq_enter(); 175 + irq_enter_rcu(); 177 176 178 177 if (user_mode(regs)) { 179 178 update_timer_sys(); ··· 191 190 192 191 do_irq_async(regs, EXT_INTERRUPT); 193 192 194 - irq_exit(); 193 + irq_exit_rcu(); 195 194 set_irq_regs(old_regs); 196 195 irqentry_exit(regs, state); 197 196
+35 -5
arch/s390/kernel/machine_kexec_file.c
··· 7 7 * Author(s): Philipp Rudo <prudo@linux.vnet.ibm.com> 8 8 */ 9 9 10 + #define pr_fmt(fmt) "kexec: " fmt 11 + 10 12 #include <linux/elf.h> 11 13 #include <linux/errno.h> 12 14 #include <linux/kexec.h> ··· 292 290 const Elf_Shdr *relsec, 293 291 const Elf_Shdr *symtab) 294 292 { 293 + const char *strtab, *name, *shstrtab; 294 + const Elf_Shdr *sechdrs; 295 295 Elf_Rela *relas; 296 296 int i, r_type; 297 + int ret; 298 + 299 + /* String & section header string table */ 300 + sechdrs = (void *)pi->ehdr + pi->ehdr->e_shoff; 301 + strtab = (char *)pi->ehdr + sechdrs[symtab->sh_link].sh_offset; 302 + shstrtab = (char *)pi->ehdr + sechdrs[pi->ehdr->e_shstrndx].sh_offset; 297 303 298 304 relas = (void *)pi->ehdr + relsec->sh_offset; 299 305 ··· 314 304 sym = (void *)pi->ehdr + symtab->sh_offset; 315 305 sym += ELF64_R_SYM(relas[i].r_info); 316 306 317 - if (sym->st_shndx == SHN_UNDEF) 318 - return -ENOEXEC; 307 + if (sym->st_name) 308 + name = strtab + sym->st_name; 309 + else 310 + name = shstrtab + sechdrs[sym->st_shndx].sh_name; 319 311 320 - if (sym->st_shndx == SHN_COMMON) 312 + if (sym->st_shndx == SHN_UNDEF) { 313 + pr_err("Undefined symbol: %s\n", name); 321 314 return -ENOEXEC; 315 + } 316 + 317 + if (sym->st_shndx == SHN_COMMON) { 318 + pr_err("symbol '%s' in common section\n", name); 319 + return -ENOEXEC; 320 + } 322 321 323 322 if (sym->st_shndx >= pi->ehdr->e_shnum && 324 - sym->st_shndx != SHN_ABS) 323 + sym->st_shndx != SHN_ABS) { 324 + pr_err("Invalid section %d for symbol %s\n", 325 + sym->st_shndx, name); 325 326 return -ENOEXEC; 327 + } 326 328 327 329 loc = pi->purgatory_buf; 328 330 loc += section->sh_offset; ··· 348 326 addr = section->sh_addr + relas[i].r_offset; 349 327 350 328 r_type = ELF64_R_TYPE(relas[i].r_info); 351 - arch_kexec_do_relocs(r_type, loc, val, addr); 329 + 330 + if (r_type == R_390_PLT32DBL) 331 + r_type = R_390_PC32DBL; 332 + 333 + ret = arch_kexec_do_relocs(r_type, loc, val, addr); 334 + if (ret) { 335 + pr_err("Unknown 
rela relocation: %d\n", r_type); 336 + return -ENOEXEC; 337 + } 352 338 } 353 339 return 0; 354 340 }
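Editor's note: the error reporting added above resolves a human-readable name for each symbol before printing, falling back to the section's name when the symbol is anonymous. A minimal sketch of that lookup (function and parameter names here are illustrative, not the kernel's):

```c
#include <assert.h>
#include <string.h>

/* Sketch of the name lookup used by the new pr_err() messages: a named
 * symbol resolves through the string table (strtab + st_name); an
 * anonymous section symbol (st_name == 0) falls back to the name of
 * the section it refers to. */
static const char *sym_name(const char *strtab, const char *secname,
			    unsigned int st_name)
{
	return st_name ? strtab + st_name : secname;
}
```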
-2
arch/x86/include/asm/efi.h
··· 197 197 198 198 extern void parse_efi_setup(u64 phys_addr, u32 data_len); 199 199 200 - extern void efifb_setup_from_dmi(struct screen_info *si, const char *opt); 201 - 202 200 extern void efi_thunk_runtime_setup(void); 203 201 efi_status_t efi_set_virtual_address_map(unsigned long memory_map_size, 204 202 unsigned long descriptor_size,
+1
arch/x86/include/asm/kvm-x86-ops.h
··· 47 47 KVM_X86_OP(cache_reg) 48 48 KVM_X86_OP(get_rflags) 49 49 KVM_X86_OP(set_rflags) 50 + KVM_X86_OP(get_if_flag) 50 51 KVM_X86_OP(tlb_flush_all) 51 52 KVM_X86_OP(tlb_flush_current) 52 53 KVM_X86_OP_NULL(tlb_remote_flush)
+1
arch/x86/include/asm/kvm_host.h
··· 1349 1349 void (*cache_reg)(struct kvm_vcpu *vcpu, enum kvm_reg reg); 1350 1350 unsigned long (*get_rflags)(struct kvm_vcpu *vcpu); 1351 1351 void (*set_rflags)(struct kvm_vcpu *vcpu, unsigned long rflags); 1352 + bool (*get_if_flag)(struct kvm_vcpu *vcpu); 1352 1353 1353 1354 void (*tlb_flush_all)(struct kvm_vcpu *vcpu); 1354 1355 void (*tlb_flush_current)(struct kvm_vcpu *vcpu);
+2 -2
arch/x86/include/asm/pkru.h
··· 4 4 5 5 #include <asm/cpufeature.h> 6 6 7 - #define PKRU_AD_BIT 0x1 8 - #define PKRU_WD_BIT 0x2 7 + #define PKRU_AD_BIT 0x1u 8 + #define PKRU_WD_BIT 0x2u 9 9 #define PKRU_BITS_PER_PKEY 2 10 10 11 11 #ifdef CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS
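Editor's note: the `u` suffix added above matters once these masks are shifted toward bit 31. PKRU holds 2 bits per pkey, so for pkey 15 the write-disable bit lands exactly on bit 31, where shifting a plain signed `0x2` is undefined behaviour. A minimal sketch (the helper name is hypothetical, not the kernel's):

```c
#include <assert.h>
#include <stdint.h>

#define PKRU_AD_BIT 0x1u
#define PKRU_WD_BIT 0x2u
#define PKRU_BITS_PER_PKEY 2

/* Hypothetical helper: mask of the write-disable bit for one pkey.
 * For pkey 15 this shifts by 30, landing the 0x2 bit on bit 31;
 * well-defined for unsigned, undefined for a signed int constant. */
static inline uint32_t pkru_wd_mask(int pkey)
{
	return PKRU_WD_BIT << (pkey * PKRU_BITS_PER_PKEY);
}
```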
+30 -42
arch/x86/kernel/setup.c
··· 713 713 714 714 early_reserve_initrd(); 715 715 716 - if (efi_enabled(EFI_BOOT)) 717 - efi_memblock_x86_reserve_range(); 718 - 719 716 memblock_x86_reserve_range_setup_data(); 720 717 721 718 reserve_ibft_region(); ··· 737 740 } 738 741 739 742 return 0; 740 - } 741 - 742 - static char * __init prepare_command_line(void) 743 - { 744 - #ifdef CONFIG_CMDLINE_BOOL 745 - #ifdef CONFIG_CMDLINE_OVERRIDE 746 - strlcpy(boot_command_line, builtin_cmdline, COMMAND_LINE_SIZE); 747 - #else 748 - if (builtin_cmdline[0]) { 749 - /* append boot loader cmdline to builtin */ 750 - strlcat(builtin_cmdline, " ", COMMAND_LINE_SIZE); 751 - strlcat(builtin_cmdline, boot_command_line, COMMAND_LINE_SIZE); 752 - strlcpy(boot_command_line, builtin_cmdline, COMMAND_LINE_SIZE); 753 - } 754 - #endif 755 - #endif 756 - 757 - strlcpy(command_line, boot_command_line, COMMAND_LINE_SIZE); 758 - 759 - parse_early_param(); 760 - 761 - return command_line; 762 743 } 763 744 764 745 /* ··· 828 853 x86_init.oem.arch_setup(); 829 854 830 855 /* 831 - * x86_configure_nx() is called before parse_early_param() (called by 832 - * prepare_command_line()) to detect whether hardware doesn't support 833 - * NX (so that the early EHCI debug console setup can safely call 834 - * set_fixmap()). It may then be called again from within noexec_setup() 835 - * during parsing early parameters to honor the respective command line 836 - * option. 837 - */ 838 - x86_configure_nx(); 839 - 840 - /* 841 - * This parses early params and it needs to run before 842 - * early_reserve_memory() because latter relies on such settings 843 - * supplied as early params. 844 - */ 845 - *cmdline_p = prepare_command_line(); 846 - 847 - /* 848 856 * Do some memory reservations *before* memory is added to memblock, so 849 857 * memblock allocations won't overwrite it. 
850 858 * ··· 859 901 data_resource.end = __pa_symbol(_edata)-1; 860 902 bss_resource.start = __pa_symbol(__bss_start); 861 903 bss_resource.end = __pa_symbol(__bss_stop)-1; 904 + 905 + #ifdef CONFIG_CMDLINE_BOOL 906 + #ifdef CONFIG_CMDLINE_OVERRIDE 907 + strlcpy(boot_command_line, builtin_cmdline, COMMAND_LINE_SIZE); 908 + #else 909 + if (builtin_cmdline[0]) { 910 + /* append boot loader cmdline to builtin */ 911 + strlcat(builtin_cmdline, " ", COMMAND_LINE_SIZE); 912 + strlcat(builtin_cmdline, boot_command_line, COMMAND_LINE_SIZE); 913 + strlcpy(boot_command_line, builtin_cmdline, COMMAND_LINE_SIZE); 914 + } 915 + #endif 916 + #endif 917 + 918 + strlcpy(command_line, boot_command_line, COMMAND_LINE_SIZE); 919 + *cmdline_p = command_line; 920 + 921 + /* 922 + * x86_configure_nx() is called before parse_early_param() to detect 923 + * whether hardware doesn't support NX (so that the early EHCI debug 924 + * console setup can safely call set_fixmap()). It may then be called 925 + * again from within noexec_setup() during parsing early parameters 926 + * to honor the respective command line option. 927 + */ 928 + x86_configure_nx(); 929 + 930 + parse_early_param(); 931 + 932 + if (efi_enabled(EFI_BOOT)) 933 + efi_memblock_x86_reserve_range(); 862 934 863 935 #ifdef CONFIG_MEMORY_HOTPLUG 864 936 /*
+15 -1
arch/x86/kvm/mmu/mmu.c
··· 3987 3987 static bool is_page_fault_stale(struct kvm_vcpu *vcpu, 3988 3988 struct kvm_page_fault *fault, int mmu_seq) 3989 3989 { 3990 - if (is_obsolete_sp(vcpu->kvm, to_shadow_page(vcpu->arch.mmu->root_hpa))) 3990 + struct kvm_mmu_page *sp = to_shadow_page(vcpu->arch.mmu->root_hpa); 3991 + 3992 + /* Special roots, e.g. pae_root, are not backed by shadow pages. */ 3993 + if (sp && is_obsolete_sp(vcpu->kvm, sp)) 3994 + return true; 3995 + 3996 + /* 3997 + * Roots without an associated shadow page are considered invalid if 3998 + * there is a pending request to free obsolete roots. The request is 3999 + * only a hint that the current root _may_ be obsolete and needs to be 4000 + * reloaded, e.g. if the guest frees a PGD that KVM is tracking as a 4001 + * previous root, then __kvm_mmu_prepare_zap_page() signals all vCPUs 4002 + * to reload even if no vCPU is actively using the root. 4003 + */ 4004 + if (!sp && kvm_test_request(KVM_REQ_MMU_RELOAD, vcpu)) 3991 4005 return true; 3992 4006 3993 4007 return fault->slot &&
+6
arch/x86/kvm/mmu/tdp_iter.c
··· 26 26 */ 27 27 void tdp_iter_restart(struct tdp_iter *iter) 28 28 { 29 + iter->yielded = false; 29 30 iter->yielded_gfn = iter->next_last_level_gfn; 30 31 iter->level = iter->root_level; 31 32 ··· 161 160 */ 162 161 void tdp_iter_next(struct tdp_iter *iter) 163 162 { 163 + if (iter->yielded) { 164 + tdp_iter_restart(iter); 165 + return; 166 + } 167 + 164 168 if (try_step_down(iter)) 165 169 return; 166 170
+6
arch/x86/kvm/mmu/tdp_iter.h
··· 45 45 * iterator walks off the end of the paging structure. 46 46 */ 47 47 bool valid; 48 + /* 49 + * True if KVM dropped mmu_lock and yielded in the middle of a walk, in 50 + * which case tdp_iter_next() needs to restart the walk at the root 51 + * level instead of advancing to the next entry. 52 + */ 53 + bool yielded; 48 54 }; 49 55 50 56 /*
+16 -13
arch/x86/kvm/mmu/tdp_mmu.c
··· 502 502 struct tdp_iter *iter, 503 503 u64 new_spte) 504 504 { 505 + WARN_ON_ONCE(iter->yielded); 506 + 505 507 lockdep_assert_held_read(&kvm->mmu_lock); 506 508 507 509 /* ··· 577 575 u64 new_spte, bool record_acc_track, 578 576 bool record_dirty_log) 579 577 { 578 + WARN_ON_ONCE(iter->yielded); 579 + 580 580 lockdep_assert_held_write(&kvm->mmu_lock); 581 581 582 582 /* ··· 644 640 * If this function should yield and flush is set, it will perform a remote 645 641 * TLB flush before yielding. 646 642 * 647 - * If this function yields, it will also reset the tdp_iter's walk over the 648 - * paging structure and the calling function should skip to the next 649 - * iteration to allow the iterator to continue its traversal from the 650 - * paging structure root. 643 + * If this function yields, iter->yielded is set and the caller must skip to 644 + * the next iteration, where tdp_iter_next() will reset the tdp_iter's walk 645 + * over the paging structures to allow the iterator to continue its traversal 646 + * from the paging structure root. 651 647 * 652 - * Return true if this function yielded and the iterator's traversal was reset. 653 - * Return false if a yield was not needed. 648 + * Returns true if this function yielded. 654 649 */ 655 - static inline bool tdp_mmu_iter_cond_resched(struct kvm *kvm, 656 - struct tdp_iter *iter, bool flush, 657 - bool shared) 650 + static inline bool __must_check tdp_mmu_iter_cond_resched(struct kvm *kvm, 651 + struct tdp_iter *iter, 652 + bool flush, bool shared) 658 653 { 654 + WARN_ON(iter->yielded); 655 + 659 656 /* Ensure forward progress has been made before yielding. */ 660 657 if (iter->next_last_level_gfn == iter->yielded_gfn) 661 658 return false; ··· 676 671 677 672 WARN_ON(iter->gfn > iter->next_last_level_gfn); 678 673 679 - tdp_iter_restart(iter); 680 - 681 - return true; 674 + iter->yielded = true; 682 675 } 683 676 684 - return false; 677 + return iter->yielded; 685 678 } 686 679 687 680 /*
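Editor's note: the restructuring above moves the restart out of `tdp_mmu_iter_cond_resched()` and into `tdp_iter_next()`, keyed off the new `iter->yielded` flag. The shape of that pattern, reduced to a toy iterator (all names here are illustrative, not KVM's):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy iterator demonstrating the deferred-restart pattern: yielding
 * only records the fact; the restart happens on the next advance. */
struct toy_iter {
	int pos;
	bool yielded;
};

static void toy_iter_restart(struct toy_iter *it)
{
	it->yielded = false;
	it->pos = 0;
}

static void toy_iter_next(struct toy_iter *it)
{
	if (it->yielded) {
		/* resume from the root instead of stepping forward */
		toy_iter_restart(it);
		return;
	}
	it->pos++;
}

static bool toy_iter_cond_resched(struct toy_iter *it)
{
	/* stand-in for dropping mmu_lock and rescheduling */
	it->yielded = true;
	return it->yielded;
}
```

Callers that see `true` from the resched helper `continue` their loop; the subsequent `toy_iter_next()` then restarts the walk rather than skipping an entry.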
+12 -9
arch/x86/kvm/svm/svm.c
··· 1585 1585 to_svm(vcpu)->vmcb->save.rflags = rflags; 1586 1586 } 1587 1587 1588 + static bool svm_get_if_flag(struct kvm_vcpu *vcpu) 1589 + { 1590 + struct vmcb *vmcb = to_svm(vcpu)->vmcb; 1591 + 1592 + return sev_es_guest(vcpu->kvm) 1593 + ? vmcb->control.int_state & SVM_GUEST_INTERRUPT_MASK 1594 + : kvm_get_rflags(vcpu) & X86_EFLAGS_IF; 1595 + } 1596 + 1588 1597 static void svm_cache_reg(struct kvm_vcpu *vcpu, enum kvm_reg reg) 1589 1598 { 1590 1599 switch (reg) { ··· 3577 3568 if (!gif_set(svm)) 3578 3569 return true; 3579 3570 3580 - if (sev_es_guest(vcpu->kvm)) { 3581 - /* 3582 - * SEV-ES guests to not expose RFLAGS. Use the VMCB interrupt mask 3583 - * bit to determine the state of the IF flag. 3584 - */ 3585 - if (!(vmcb->control.int_state & SVM_GUEST_INTERRUPT_MASK)) 3586 - return true; 3587 - } else if (is_guest_mode(vcpu)) { 3571 + if (is_guest_mode(vcpu)) { 3588 3572 /* As long as interrupts are being delivered... */ 3589 3573 if ((svm->nested.ctl.int_ctl & V_INTR_MASKING_MASK) 3590 3574 ? !(svm->vmcb01.ptr->save.rflags & X86_EFLAGS_IF) ··· 3588 3586 if (nested_exit_on_intr(svm)) 3589 3587 return false; 3590 3588 } else { 3591 - if (!(kvm_get_rflags(vcpu) & X86_EFLAGS_IF)) 3589 + if (!svm_get_if_flag(vcpu)) 3592 3590 return true; 3593 3591 } 3594 3592 ··· 4623 4621 .cache_reg = svm_cache_reg, 4624 4622 .get_rflags = svm_get_rflags, 4625 4623 .set_rflags = svm_set_rflags, 4624 + .get_if_flag = svm_get_if_flag, 4626 4625 4627 4626 .tlb_flush_all = svm_flush_tlb, 4628 4627 .tlb_flush_current = svm_flush_tlb,
+32 -13
arch/x86/kvm/vmx/vmx.c
··· 1363 1363 vmx->emulation_required = vmx_emulation_required(vcpu); 1364 1364 } 1365 1365 1366 + static bool vmx_get_if_flag(struct kvm_vcpu *vcpu) 1367 + { 1368 + return vmx_get_rflags(vcpu) & X86_EFLAGS_IF; 1369 + } 1370 + 1366 1371 u32 vmx_get_interrupt_shadow(struct kvm_vcpu *vcpu) 1367 1372 { 1368 1373 u32 interruptibility = vmcs_read32(GUEST_INTERRUPTIBILITY_INFO); ··· 3964 3959 if (pi_test_and_set_on(&vmx->pi_desc)) 3965 3960 return 0; 3966 3961 3967 - if (vcpu != kvm_get_running_vcpu() && 3968 - !kvm_vcpu_trigger_posted_interrupt(vcpu, false)) 3962 + if (!kvm_vcpu_trigger_posted_interrupt(vcpu, false)) 3969 3963 kvm_vcpu_kick(vcpu); 3970 3964 3971 3965 return 0; ··· 5881 5877 vmx_flush_pml_buffer(vcpu); 5882 5878 5883 5879 /* 5884 - * We should never reach this point with a pending nested VM-Enter, and 5885 - * more specifically emulation of L2 due to invalid guest state (see 5886 - * below) should never happen as that means we incorrectly allowed a 5887 - * nested VM-Enter with an invalid vmcs12. 5880 + * KVM should never reach this point with a pending nested VM-Enter. 5881 + * More specifically, short-circuiting VM-Entry to emulate L2 due to 5882 + * invalid guest state should never happen as that means KVM knowingly 5883 + * allowed a nested VM-Enter with an invalid vmcs12. More below. 5888 5884 */ 5889 5885 if (KVM_BUG_ON(vmx->nested.nested_run_pending, vcpu->kvm)) 5890 5886 return -EIO; 5891 - 5892 - /* If guest state is invalid, start emulating */ 5893 - if (vmx->emulation_required) 5894 - return handle_invalid_guest_state(vcpu); 5895 5887 5896 5888 if (is_guest_mode(vcpu)) { 5897 5889 /* ··· 5910 5910 */ 5911 5911 nested_mark_vmcs12_pages_dirty(vcpu); 5912 5912 5913 + /* 5914 + * Synthesize a triple fault if L2 state is invalid. In normal 5915 + * operation, nested VM-Enter rejects any attempt to enter L2 5916 + * with invalid state. However, those checks are skipped if 5917 + * state is being stuffed via RSM or KVM_SET_NESTED_STATE. 
If 5918 + * L2 state is invalid, it means either L1 modified SMRAM state 5919 + * or userspace provided bad state. Synthesize TRIPLE_FAULT as 5920 + * doing so is architecturally allowed in the RSM case, and is 5921 + * the least awful solution for the userspace case without 5922 + * risking false positives. 5923 + */ 5924 + if (vmx->emulation_required) { 5925 + nested_vmx_vmexit(vcpu, EXIT_REASON_TRIPLE_FAULT, 0, 0); 5926 + return 1; 5927 + } 5928 + 5913 5929 if (nested_vmx_reflect_vmexit(vcpu)) 5914 5930 return 1; 5915 5931 } 5932 + 5933 + /* If guest state is invalid, start emulating. L2 is handled above. */ 5934 + if (vmx->emulation_required) 5935 + return handle_invalid_guest_state(vcpu); 5916 5936 5917 5937 if (exit_reason.failed_vmentry) { 5918 5938 dump_vmcs(vcpu); ··· 6628 6608 * consistency check VM-Exit due to invalid guest state and bail. 6629 6609 */ 6630 6610 if (unlikely(vmx->emulation_required)) { 6631 - 6632 - /* We don't emulate invalid state of a nested guest */ 6633 - vmx->fail = is_guest_mode(vcpu); 6611 + vmx->fail = 0; 6634 6612 6635 6613 vmx->exit_reason.full = EXIT_REASON_INVALID_STATE; 6636 6614 vmx->exit_reason.failed_vmentry = 1; ··· 7597 7579 .cache_reg = vmx_cache_reg, 7598 7580 .get_rflags = vmx_get_rflags, 7599 7581 .set_rflags = vmx_set_rflags, 7582 + .get_if_flag = vmx_get_if_flag, 7600 7583 7601 7584 .tlb_flush_all = vmx_flush_tlb_all, 7602 7585 .tlb_flush_current = vmx_flush_tlb_current,
+3 -10
arch/x86/kvm/x86.c
··· 1331 1331 MSR_IA32_UMWAIT_CONTROL, 1332 1332 1333 1333 MSR_ARCH_PERFMON_FIXED_CTR0, MSR_ARCH_PERFMON_FIXED_CTR1, 1334 - MSR_ARCH_PERFMON_FIXED_CTR0 + 2, MSR_ARCH_PERFMON_FIXED_CTR0 + 3, 1334 + MSR_ARCH_PERFMON_FIXED_CTR0 + 2, 1335 1335 MSR_CORE_PERF_FIXED_CTR_CTRL, MSR_CORE_PERF_GLOBAL_STATUS, 1336 1336 MSR_CORE_PERF_GLOBAL_CTRL, MSR_CORE_PERF_GLOBAL_OVF_CTRL, 1337 1337 MSR_ARCH_PERFMON_PERFCTR0, MSR_ARCH_PERFMON_PERFCTR1, ··· 3413 3413 3414 3414 if (!msr_info->host_initiated) 3415 3415 return 1; 3416 - if (guest_cpuid_has(vcpu, X86_FEATURE_PDCM) && kvm_get_msr_feature(&msr_ent)) 3416 + if (kvm_get_msr_feature(&msr_ent)) 3417 3417 return 1; 3418 3418 if (data & ~msr_ent.data) 3419 3419 return 1; ··· 9001 9001 { 9002 9002 struct kvm_run *kvm_run = vcpu->run; 9003 9003 9004 - /* 9005 - * if_flag is obsolete and useless, so do not bother 9006 - * setting it for SEV-ES guests. Userspace can just 9007 - * use kvm_run->ready_for_interrupt_injection. 9008 - */ 9009 - kvm_run->if_flag = !vcpu->arch.guest_state_protected 9010 - && (kvm_get_rflags(vcpu) & X86_EFLAGS_IF) != 0; 9011 - 9004 + kvm_run->if_flag = static_call(kvm_x86_get_if_flag)(vcpu); 9012 9005 kvm_run->cr8 = kvm_get_cr8(vcpu); 9013 9006 kvm_run->apic_base = kvm_get_apic_base(vcpu); 9014 9007
+43 -8
arch/x86/net/bpf_jit_comp.c
··· 1252 1252 case BPF_LDX | BPF_MEM | BPF_DW: 1253 1253 case BPF_LDX | BPF_PROBE_MEM | BPF_DW: 1254 1254 if (BPF_MODE(insn->code) == BPF_PROBE_MEM) { 1255 - /* test src_reg, src_reg */ 1256 - maybe_emit_mod(&prog, src_reg, src_reg, true); /* always 1 byte */ 1257 - EMIT2(0x85, add_2reg(0xC0, src_reg, src_reg)); 1258 - /* jne start_of_ldx */ 1259 - EMIT2(X86_JNE, 0); 1255 + /* Though the verifier prevents negative insn->off in BPF_PROBE_MEM 1256 + * add abs(insn->off) to the limit to make sure that negative 1257 + * offset won't be an issue. 1258 + * insn->off is s16, so it won't affect valid pointers. 1259 + */ 1260 + u64 limit = TASK_SIZE_MAX + PAGE_SIZE + abs(insn->off); 1261 + u8 *end_of_jmp1, *end_of_jmp2; 1262 + 1263 + /* Conservatively check that src_reg + insn->off is a kernel address: 1264 + * 1. src_reg + insn->off >= limit 1265 + * 2. src_reg + insn->off doesn't become small positive. 1266 + * Cannot do src_reg + insn->off >= limit in one branch, 1267 + * since it needs two spare registers, but JIT has only one. 1268 + */ 1269 + 1270 + /* movabsq r11, limit */ 1271 + EMIT2(add_1mod(0x48, AUX_REG), add_1reg(0xB8, AUX_REG)); 1272 + EMIT((u32)limit, 4); 1273 + EMIT(limit >> 32, 4); 1274 + /* cmp src_reg, r11 */ 1275 + maybe_emit_mod(&prog, src_reg, AUX_REG, true); 1276 + EMIT2(0x39, add_2reg(0xC0, src_reg, AUX_REG)); 1277 + /* if unsigned '<' goto end_of_jmp2 */ 1278 + EMIT2(X86_JB, 0); 1279 + end_of_jmp1 = prog; 1280 + 1281 + /* mov r11, src_reg */ 1282 + emit_mov_reg(&prog, true, AUX_REG, src_reg); 1283 + /* add r11, insn->off */ 1284 + maybe_emit_1mod(&prog, AUX_REG, true); 1285 + EMIT2_off32(0x81, add_1reg(0xC0, AUX_REG), insn->off); 1286 + /* jmp if not carry to start_of_ldx 1287 + * Otherwise ERR_PTR(-EINVAL) + 128 will be the user addr 1288 + * that has to be rejected. 
1289 + */ 1290 + EMIT2(0x73 /* JNC */, 0); 1291 + end_of_jmp2 = prog; 1292 + 1260 1293 /* xor dst_reg, dst_reg */ 1261 1294 emit_mov_imm32(&prog, false, dst_reg, 0); 1262 1295 /* jmp byte_after_ldx */ 1263 1296 EMIT2(0xEB, 0); 1264 1297 1265 - /* populate jmp_offset for JNE above */ 1266 - temp[4] = prog - temp - 5 /* sizeof(test + jne) */; 1298 + /* populate jmp_offset for JB above to jump to xor dst_reg */ 1299 + end_of_jmp1[-1] = end_of_jmp2 - end_of_jmp1; 1300 + /* populate jmp_offset for JNC above to jump to start_of_ldx */ 1267 1301 start_of_ldx = prog; 1302 + end_of_jmp2[-1] = start_of_ldx - end_of_jmp2; 1268 1303 } 1269 1304 emit_ldx(&prog, BPF_SIZE(insn->code), dst_reg, src_reg, insn->off); 1270 1305 if (BPF_MODE(insn->code) == BPF_PROBE_MEM) { ··· 1340 1305 * End result: x86 insn "mov rbx, qword ptr [rax+0x14]" 1341 1306 * of 4 bytes will be ignored and rbx will be zero inited. 1342 1307 */ 1343 - ex->fixup = (prog - temp) | (reg2pt_regs[dst_reg] << 8); 1308 + ex->fixup = (prog - start_of_ldx) | (reg2pt_regs[dst_reg] << 8); 1344 1309 } 1345 1310 break; 1346 1311
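Editor's note: in C terms, the two emitted branches implement roughly the following check. The constant below is a stand-in; the real `limit` is `TASK_SIZE_MAX + PAGE_SIZE + abs(insn->off)`, and the verifier already prevents negative offsets:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Stand-in for TASK_SIZE_MAX + PAGE_SIZE + abs(insn->off). */
#define FAKE_LIMIT 0xffff800000000000ull

/* Sketch of the JIT's conservative kernel-address check: reject a
 * pointer below the limit (the JB branch), and reject one whose
 * offset addition wraps past zero into a small positive value
 * (the carry check behind JNC). */
static bool probe_mem_ok(uint64_t src, uint32_t off)
{
	if (src < FAKE_LIMIT)		/* JB taken: zero the destination */
		return false;
	return src + off >= src;	/* no carry: safe to attempt the load */
}
```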
+1 -1
arch/x86/tools/relocs.c
··· 68 68 "(__parainstructions|__alt_instructions)(_end)?|" 69 69 "(__iommu_table|__apicdrivers|__smp_locks)(_end)?|" 70 70 "__(start|end)_pci_.*|" 71 - #if CONFIG_FW_LOADER_BUILTIN 71 + #if CONFIG_FW_LOADER 72 72 "__(start|end)_builtin_fw|" 73 73 #endif 74 74 "__(start|stop)___ksymtab(_gpl)?|"
+8 -1
block/blk-iocost.c
··· 2311 2311 hwm = current_hweight_max(iocg); 2312 2312 new_hwi = hweight_after_donation(iocg, old_hwi, hwm, 2313 2313 usage, &now); 2314 - if (new_hwi < hwm) { 2314 + /* 2315 + * Donation calculation assumes hweight_after_donation 2316 + * to be positive, a condition that a donor w/ hwa < 2 2317 + * can't meet. Don't bother with donation if hwa is 2318 + * below 2. It's not gonna make a meaningful difference 2319 + * anyway. 2320 + */ 2321 + if (new_hwi < hwm && hwa >= 2) { 2315 2322 iocg->hweight_donating = hwa; 2316 2323 iocg->hweight_after_donation = new_hwi; 2317 2324 list_add(&iocg->surplus_list, &surpluses);
+1 -2
drivers/Makefile
··· 41 41 # SOC specific infrastructure drivers. 42 42 obj-y += soc/ 43 43 44 - obj-$(CONFIG_VIRTIO) += virtio/ 45 - obj-$(CONFIG_VIRTIO_PCI_LIB) += virtio/ 44 + obj-y += virtio/ 46 45 obj-$(CONFIG_VDPA) += vdpa/ 47 46 obj-$(CONFIG_XEN) += xen/ 48 47
+1 -1
drivers/android/binder_alloc.c
··· 671 671 BUG_ON(buffer->user_data > alloc->buffer + alloc->buffer_size); 672 672 673 673 if (buffer->async_transaction) { 674 - alloc->free_async_space += size + sizeof(struct binder_buffer); 674 + alloc->free_async_space += buffer_size + sizeof(struct binder_buffer); 675 675 676 676 binder_alloc_debug(BINDER_DEBUG_BUFFER_ALLOC_ASYNC, 677 677 "%d: binder_free_buf size %zd async free %zd\n",
+13 -2
drivers/ata/libata-scsi.c
··· 2859 2859 goto invalid_fld; 2860 2860 } 2861 2861 2862 - if (ata_is_ncq(tf->protocol) && (cdb[2 + cdb_offset] & 0x3) == 0) 2863 - tf->protocol = ATA_PROT_NCQ_NODATA; 2862 + if ((cdb[2 + cdb_offset] & 0x3) == 0) { 2863 + /* 2864 + * When T_LENGTH is zero (No data is transferred), dir should 2865 + * be DMA_NONE. 2866 + */ 2867 + if (scmd->sc_data_direction != DMA_NONE) { 2868 + fp = 2 + cdb_offset; 2869 + goto invalid_fld; 2870 + } 2871 + 2872 + if (ata_is_ncq(tf->protocol)) 2873 + tf->protocol = ATA_PROT_NCQ_NODATA; 2874 + } 2864 2875 2865 2876 /* enable LBA */ 2866 2877 tf->flags |= ATA_TFLAG_LBA;
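Editor's note: the added check enforces that a T_LENGTH of zero (no data transferred) is only valid when the SCSI command also carries no data direction. A self-contained sketch of that consistency rule (enum and names are local stand-ins):

```c
#include <assert.h>
#include <stdbool.h>

enum dma_dir { DMA_NONE, DMA_TO_DEVICE, DMA_FROM_DEVICE };

/* Sketch of the added consistency check: T_LENGTH == 0 promises no
 * data transfer, so any data direction from the SCSI layer indicates
 * a malformed CDB and must be rejected as an invalid field. */
static bool tlength_dir_valid(unsigned int t_length, enum dma_dir dir)
{
	if ((t_length & 0x3) == 0)
		return dir == DMA_NONE;
	return true;
}
```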
+4 -1
drivers/auxdisplay/charlcd.c
··· 37 37 bool must_clear; 38 38 39 39 /* contains the LCD config state */ 40 - unsigned long int flags; 40 + unsigned long flags; 41 41 42 42 /* Current escape sequence and it's length or -1 if outside */ 43 43 struct { ··· 578 578 * Since charlcd_init_display() needs to write data, we have to 579 579 * enable mark the LCD initialized just before. 580 580 */ 581 + if (WARN_ON(!lcd->ops->init_display)) 582 + return -EINVAL; 583 + 581 584 ret = lcd->ops->init_display(lcd); 582 585 if (ret) 583 586 return ret;
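Editor's note: the added `WARN_ON` guards a mandatory callback before its first dereference. The same defensive shape, reduced to a self-contained sketch (types are local and the `-EINVAL` value is spelled out by hand):

```c
#include <assert.h>
#include <stddef.h>

struct ops { int (*init_display)(void *priv); };

static int ok_init(void *priv) { (void)priv; return 0; }

/* Refuse to proceed when the required hook is missing, instead of
 * jumping through a NULL function pointer at init time. */
static int do_init(const struct ops *ops, void *priv)
{
	if (!ops->init_display)
		return -22;	/* -EINVAL */
	return ops->init_display(priv);
}
```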
+1 -1
drivers/base/power/main.c
··· 1902 1902 device_block_probing(); 1903 1903 1904 1904 mutex_lock(&dpm_list_mtx); 1905 - while (!list_empty(&dpm_list)) { 1905 + while (!list_empty(&dpm_list) && !error) { 1906 1906 struct device *dev = to_device(dpm_list.next); 1907 1907 1908 1908 get_device(dev);
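Editor's note: the added `!error` condition stops walking `dpm_list` at the first failing device rather than continuing to power down the rest. As a toy loop over canned per-device results (names are illustrative):

```c
#include <assert.h>

/* Sketch: abort the suspend walk on the first device error, counting
 * how many devices were actually visited. */
static int suspend_all(const int *results, int n, int *n_done)
{
	int error = 0, i;

	for (i = 0; i < n && !error; i++) {
		error = results[i];	/* stand-in for suspending dev i */
		(*n_done)++;
	}
	return error;
}
```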
+12 -3
drivers/block/xen-blkfront.c
··· 1512 1512 unsigned long flags; 1513 1513 struct blkfront_ring_info *rinfo = (struct blkfront_ring_info *)dev_id; 1514 1514 struct blkfront_info *info = rinfo->dev_info; 1515 + unsigned int eoiflag = XEN_EOI_FLAG_SPURIOUS; 1515 1516 1516 - if (unlikely(info->connected != BLKIF_STATE_CONNECTED)) 1517 + if (unlikely(info->connected != BLKIF_STATE_CONNECTED)) { 1518 + xen_irq_lateeoi(irq, XEN_EOI_FLAG_SPURIOUS); 1517 1519 return IRQ_HANDLED; 1520 + } 1518 1521 1519 1522 spin_lock_irqsave(&rinfo->ring_lock, flags); 1520 1523 again: ··· 1532 1529 for (i = rinfo->ring.rsp_cons; i != rp; i++) { 1533 1530 unsigned long id; 1534 1531 unsigned int op; 1532 + 1533 + eoiflag = 0; 1535 1534 1536 1535 RING_COPY_RESPONSE(&rinfo->ring, i, &bret); 1537 1536 id = bret.id; ··· 1651 1646 1652 1647 spin_unlock_irqrestore(&rinfo->ring_lock, flags); 1653 1648 1649 + xen_irq_lateeoi(irq, eoiflag); 1650 + 1654 1651 return IRQ_HANDLED; 1655 1652 1656 1653 err: 1657 1654 info->connected = BLKIF_STATE_ERROR; 1658 1655 1659 1656 spin_unlock_irqrestore(&rinfo->ring_lock, flags); 1657 + 1658 + /* No EOI in order to avoid further interrupts. */ 1660 1659 1661 1660 pr_alert("%s disabled for further use\n", info->gd->disk_name); 1662 1661 return IRQ_HANDLED; ··· 1701 1692 if (err) 1702 1693 goto fail; 1703 1694 1704 - err = bind_evtchn_to_irqhandler(rinfo->evtchn, blkif_interrupt, 0, 1705 - "blkif", rinfo); 1695 + err = bind_evtchn_to_irqhandler_lateeoi(rinfo->evtchn, blkif_interrupt, 1696 + 0, "blkif", rinfo); 1706 1697 if (err <= 0) { 1707 1698 xenbus_dev_fatal(dev, err, 1708 1699 "bind_evtchn_to_irqhandler failed");
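Editor's note: the conversion to the lateeoi handler follows a fixed shape: start with the spurious flag set, clear it the moment real work is found, and issue the EOI with whatever is left on every non-error exit. A minimal model of that flag handling (the flag value below is a local stand-in, not Xen's header constant):

```c
#include <assert.h>

#define EOI_FLAG_SPURIOUS 1u	/* stand-in for XEN_EOI_FLAG_SPURIOUS */

/* Returns the flag the handler would pass to the lateeoi call:
 * spurious only when the ring held no responses at all. */
static unsigned int ring_eoi_flag(int nr_responses)
{
	unsigned int eoiflag = EOI_FLAG_SPURIOUS;
	int i;

	for (i = 0; i < nr_responses; i++)
		eoiflag = 0;	/* processed a real response */
	return eoiflag;
}
```

On the error path the handler above returns without any EOI at all, which deliberately prevents further interrupts from the misbehaving backend.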
+4 -4
drivers/bus/sunxi-rsb.c
··· 687 687 688 688 static void sunxi_rsb_hw_exit(struct sunxi_rsb *rsb) 689 689 { 690 - /* Keep the clock and PM reference counts consistent. */ 691 - if (pm_runtime_status_suspended(rsb->dev)) 692 - pm_runtime_resume(rsb->dev); 693 690 reset_control_assert(rsb->rstc); 694 - clk_disable_unprepare(rsb->clk); 691 + 692 + /* Keep the clock and PM reference counts consistent. */ 693 + if (!pm_runtime_status_suspended(rsb->dev)) 694 + clk_disable_unprepare(rsb->clk); 695 695 } 696 696 697 697 static int __maybe_unused sunxi_rsb_runtime_suspend(struct device *dev)
+14 -9
drivers/char/ipmi/ipmi_msghandler.c
··· 3031 3031 * with removing the device attributes while reading a device 3032 3032 * attribute. 3033 3033 */ 3034 - schedule_work(&bmc->remove_work); 3034 + queue_work(remove_work_wq, &bmc->remove_work); 3035 3035 } 3036 3036 3037 3037 /* ··· 5392 5392 if (initialized) 5393 5393 goto out; 5394 5394 5395 - init_srcu_struct(&ipmi_interfaces_srcu); 5395 + rv = init_srcu_struct(&ipmi_interfaces_srcu); 5396 + if (rv) 5397 + goto out; 5398 + 5399 + remove_work_wq = create_singlethread_workqueue("ipmi-msghandler-remove-wq"); 5400 + if (!remove_work_wq) { 5401 + pr_err("unable to create ipmi-msghandler-remove-wq workqueue"); 5402 + rv = -ENOMEM; 5403 + goto out_wq; 5404 + } 5396 5405 5397 5406 timer_setup(&ipmi_timer, ipmi_timeout, 0); 5398 5407 mod_timer(&ipmi_timer, jiffies + IPMI_TIMEOUT_JIFFIES); 5399 5408 5400 5409 atomic_notifier_chain_register(&panic_notifier_list, &panic_block); 5401 5410 5402 - remove_work_wq = create_singlethread_workqueue("ipmi-msghandler-remove-wq"); 5403 - if (!remove_work_wq) { 5404 - pr_err("unable to create ipmi-msghandler-remove-wq workqueue"); 5405 - rv = -ENOMEM; 5406 - goto out; 5407 - } 5408 - 5409 5411 initialized = true; 5410 5412 5413 + out_wq: 5414 + if (rv) 5415 + cleanup_srcu_struct(&ipmi_interfaces_srcu); 5411 5416 out: 5412 5417 mutex_unlock(&ipmi_interfaces_mutex); 5413 5418 return rv;
+4 -3
drivers/char/ipmi/ipmi_ssif.c
··· 1659 1659 } 1660 1660 } 1661 1661 1662 + ssif_info->client = client; 1663 + i2c_set_clientdata(client, ssif_info); 1664 + 1662 1665 rv = ssif_check_and_remove(client, ssif_info); 1663 1666 /* If rv is 0 and addr source is not SI_ACPI, continue probing */ 1664 1667 if (!rv && ssif_info->addr_source == SI_ACPI) { ··· 1681 1678 "Trying %s-specified SSIF interface at i2c address 0x%x, adapter %s, slave address 0x%x\n", 1682 1679 ipmi_addr_src_to_str(ssif_info->addr_source), 1683 1680 client->addr, client->adapter->name, slave_addr); 1684 - 1685 - ssif_info->client = client; 1686 - i2c_set_clientdata(client, ssif_info); 1687 1681 1688 1682 /* Now check for system interface capabilities */ 1689 1683 msg[0] = IPMI_NETFN_APP_REQUEST << 2; ··· 1881 1881 1882 1882 dev_err(&ssif_info->client->dev, 1883 1883 "Unable to start IPMI SSIF: %d\n", rv); 1884 + i2c_set_clientdata(client, NULL); 1884 1885 kfree(ssif_info); 1885 1886 } 1886 1887 kfree(resp);
+12 -3
drivers/clk/clk.c
··· 3418 3418 3419 3419 clk_prepare_lock(); 3420 3420 3421 + /* 3422 + * Set hw->core after grabbing the prepare_lock to synchronize with 3423 + * callers of clk_core_fill_parent_index() where we treat hw->core 3424 + * being NULL as the clk not being registered yet. This is crucial so 3425 + * that clks aren't parented until their parent is fully registered. 3426 + */ 3427 + core->hw->core = core; 3428 + 3421 3429 ret = clk_pm_runtime_get(core); 3422 3430 if (ret) 3423 3431 goto unlock; ··· 3590 3582 out: 3591 3583 clk_pm_runtime_put(core); 3592 3584 unlock: 3593 - if (ret) 3585 + if (ret) { 3594 3586 hlist_del_init(&core->child_node); 3587 + core->hw->core = NULL; 3588 + } 3595 3589 3596 3590 clk_prepare_unlock(); 3597 3591 ··· 3857 3847 core->num_parents = init->num_parents; 3858 3848 core->min_rate = 0; 3859 3849 core->max_rate = ULONG_MAX; 3860 - hw->core = core; 3861 3850 3862 3851 ret = clk_core_populate_parent_map(core, init); 3863 3852 if (ret) ··· 3874 3865 goto fail_create_clk; 3875 3866 } 3876 3867 3877 - clk_core_link_consumer(hw->core, hw->clk); 3868 + clk_core_link_consumer(core, hw->clk); 3878 3869 3879 3870 ret = __clk_core_init(core); 3880 3871 if (!ret)
+7
drivers/crypto/qat/qat_4xxx/adf_4xxx_hw_data.c
··· 211 211 return adf_4xxx_fw_config[obj_num].ae_mask; 212 212 } 213 213 214 + static u32 get_vf2pf_sources(void __iomem *pmisc_addr) 215 + { 216 + /* For the moment do not report vf2pf sources */ 217 + return 0; 218 + } 219 + 214 220 void adf_init_hw_data_4xxx(struct adf_hw_device_data *hw_data) 215 221 { 216 222 hw_data->dev_class = &adf_4xxx_class; ··· 260 254 hw_data->set_msix_rttable = set_msix_default_rttable; 261 255 hw_data->set_ssm_wdtimer = adf_gen4_set_ssm_wdtimer; 262 256 hw_data->enable_pfvf_comms = pfvf_comms_disabled; 257 + hw_data->get_vf2pf_sources = get_vf2pf_sources; 263 258 hw_data->disable_iov = adf_disable_sriov; 264 259 hw_data->min_iov_compat_ver = ADF_PFVF_COMPAT_THIS_VERSION; 265 260
+2 -2
drivers/dma/dw-axi-dmac/dw-axi-dmac-platform.c
··· 373 373 struct axi_dma_desc *first) 374 374 { 375 375 u32 priority = chan->chip->dw->hdata->priority[chan->id]; 376 - struct axi_dma_chan_config config; 376 + struct axi_dma_chan_config config = {}; 377 377 u32 irq_mask; 378 378 u8 lms = 0; /* Select AXI0 master for LLI fetching */ 379 379 ··· 391 391 config.tt_fc = DWAXIDMAC_TT_FC_MEM_TO_MEM_DMAC; 392 392 config.prior = priority; 393 393 config.hs_sel_dst = DWAXIDMAC_HS_SEL_HW; 394 - config.hs_sel_dst = DWAXIDMAC_HS_SEL_HW; 394 + config.hs_sel_src = DWAXIDMAC_HS_SEL_HW; 395 395 switch (chan->direction) { 396 396 case DMA_MEM_TO_DEV: 397 397 dw_axi_dma_set_byte_halfword(chan, true);
+1 -9
drivers/dma/dw-edma/dw-edma-pcie.c
··· 187 187 188 188 /* DMA configuration */ 189 189 err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)); 190 - if (!err) { 190 + if (err) { 191 191 pci_err(pdev, "DMA mask 64 set failed\n"); 192 192 return err; 193 - } else { 194 - pci_err(pdev, "DMA mask 64 set failed\n"); 195 - 196 - err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32)); 197 - if (err) { 198 - pci_err(pdev, "DMA mask 32 set failed\n"); 199 - return err; 200 - } 201 193 } 202 194 203 195 /* Data structure allocation */
+1 -1
drivers/dma/idxd/irq.c
··· 137 137 INIT_WORK(&idxd->work, idxd_device_reinit); 138 138 queue_work(idxd->wq, &idxd->work); 139 139 } else { 140 - spin_lock(&idxd->dev_lock); 141 140 idxd->state = IDXD_DEV_HALTED; 142 141 idxd_wqs_quiesce(idxd); 143 142 idxd_wqs_unmap_portal(idxd); 143 + spin_lock(&idxd->dev_lock); 144 144 idxd_device_clear_state(idxd); 145 145 dev_err(&idxd->pdev->dev, 146 146 "idxd halted, need %s.\n",
+17 -1
drivers/dma/idxd/submit.c
··· 106 106 { 107 107 struct idxd_desc *d, *t, *found = NULL; 108 108 struct llist_node *head; 109 + LIST_HEAD(flist); 109 110 110 111 desc->completion->status = IDXD_COMP_DESC_ABORT; 111 112 /* ··· 121 120 found = desc; 122 121 continue; 123 122 } 124 - list_add_tail(&desc->list, &ie->work_list); 123 + 124 + if (d->completion->status) 125 + list_add_tail(&d->list, &flist); 126 + else 127 + list_add_tail(&d->list, &ie->work_list); 125 128 } 126 129 } 127 130 ··· 135 130 136 131 if (found) 137 132 complete_desc(found, IDXD_COMPLETE_ABORT); 133 + 134 + /* 135 + * complete_desc() will return desc to allocator and the desc can be 136 + * acquired by a different process and the desc->list can be modified. 137 + * Delete desc from list so the list traversal does not get corrupted 138 + * by the other process. 139 + */ 140 + list_for_each_entry_safe(d, t, &flist, list) { 141 + list_del_init(&d->list); 142 + complete_desc(d, IDXD_COMPLETE_NORMAL); 143 + } 138 144 } 139 145 140 146 int idxd_submit_desc(struct idxd_wq *wq, struct idxd_desc *desc)
+1 -1
drivers/dma/st_fdma.c
··· 874 874 MODULE_DESCRIPTION("STMicroelectronics FDMA engine driver"); 875 875 MODULE_AUTHOR("Ludovic.barre <Ludovic.barre@st.com>"); 876 876 MODULE_AUTHOR("Peter Griffin <peter.griffin@linaro.org>"); 877 - MODULE_ALIAS("platform: " DRIVER_NAME); 877 + MODULE_ALIAS("platform:" DRIVER_NAME);
+105 -48
drivers/dma/ti/k3-udma.c
··· 4534 4534 rm_res = tisci_rm->rm_ranges[RM_RANGE_TCHAN]; 4535 4535 if (IS_ERR(rm_res)) { 4536 4536 bitmap_zero(ud->tchan_map, ud->tchan_cnt); 4537 + irq_res.sets = 1; 4537 4538 } else { 4538 4539 bitmap_fill(ud->tchan_map, ud->tchan_cnt); 4539 4540 for (i = 0; i < rm_res->sets; i++) 4540 4541 udma_mark_resource_ranges(ud, ud->tchan_map, 4541 4542 &rm_res->desc[i], "tchan"); 4543 + irq_res.sets = rm_res->sets; 4542 4544 } 4543 - irq_res.sets = rm_res->sets; 4544 4545 4545 4546 /* rchan and matching default flow ranges */ 4546 4547 rm_res = tisci_rm->rm_ranges[RM_RANGE_RCHAN]; 4547 4548 if (IS_ERR(rm_res)) { 4548 4549 bitmap_zero(ud->rchan_map, ud->rchan_cnt); 4550 + irq_res.sets++; 4549 4551 } else { 4550 4552 bitmap_fill(ud->rchan_map, ud->rchan_cnt); 4551 4553 for (i = 0; i < rm_res->sets; i++) 4552 4554 udma_mark_resource_ranges(ud, ud->rchan_map, 4553 4555 &rm_res->desc[i], "rchan"); 4556 + irq_res.sets += rm_res->sets; 4554 4557 } 4555 4558 4556 - irq_res.sets += rm_res->sets; 4557 4559 irq_res.desc = kcalloc(irq_res.sets, sizeof(*irq_res.desc), GFP_KERNEL); 4560 + if (!irq_res.desc) 4561 + return -ENOMEM; 4558 4562 rm_res = tisci_rm->rm_ranges[RM_RANGE_TCHAN]; 4559 - for (i = 0; i < rm_res->sets; i++) { 4560 - irq_res.desc[i].start = rm_res->desc[i].start; 4561 - irq_res.desc[i].num = rm_res->desc[i].num; 4562 - irq_res.desc[i].start_sec = rm_res->desc[i].start_sec; 4563 - irq_res.desc[i].num_sec = rm_res->desc[i].num_sec; 4563 + if (IS_ERR(rm_res)) { 4564 + irq_res.desc[0].start = 0; 4565 + irq_res.desc[0].num = ud->tchan_cnt; 4566 + i = 1; 4567 + } else { 4568 + for (i = 0; i < rm_res->sets; i++) { 4569 + irq_res.desc[i].start = rm_res->desc[i].start; 4570 + irq_res.desc[i].num = rm_res->desc[i].num; 4571 + irq_res.desc[i].start_sec = rm_res->desc[i].start_sec; 4572 + irq_res.desc[i].num_sec = rm_res->desc[i].num_sec; 4573 + } 4564 4574 } 4565 4575 rm_res = tisci_rm->rm_ranges[RM_RANGE_RCHAN]; 4566 - for (j = 0; j < rm_res->sets; j++, i++) { 4567 - if 
(rm_res->desc[j].num) { 4568 - irq_res.desc[i].start = rm_res->desc[j].start + 4569 - ud->soc_data->oes.udma_rchan; 4570 - irq_res.desc[i].num = rm_res->desc[j].num; 4571 - } 4572 - if (rm_res->desc[j].num_sec) { 4573 - irq_res.desc[i].start_sec = rm_res->desc[j].start_sec + 4574 - ud->soc_data->oes.udma_rchan; 4575 - irq_res.desc[i].num_sec = rm_res->desc[j].num_sec; 4576 + if (IS_ERR(rm_res)) { 4577 + irq_res.desc[i].start = 0; 4578 + irq_res.desc[i].num = ud->rchan_cnt; 4579 + } else { 4580 + for (j = 0; j < rm_res->sets; j++, i++) { 4581 + if (rm_res->desc[j].num) { 4582 + irq_res.desc[i].start = rm_res->desc[j].start + 4583 + ud->soc_data->oes.udma_rchan; 4584 + irq_res.desc[i].num = rm_res->desc[j].num; 4585 + } 4586 + if (rm_res->desc[j].num_sec) { 4587 + irq_res.desc[i].start_sec = rm_res->desc[j].start_sec + 4588 + ud->soc_data->oes.udma_rchan; 4589 + irq_res.desc[i].num_sec = rm_res->desc[j].num_sec; 4590 + } 4576 4591 } 4577 4592 } 4578 4593 ret = ti_sci_inta_msi_domain_alloc_irqs(ud->dev, &irq_res); ··· 4705 4690 rm_res = tisci_rm->rm_ranges[RM_RANGE_BCHAN]; 4706 4691 if (IS_ERR(rm_res)) { 4707 4692 bitmap_zero(ud->bchan_map, ud->bchan_cnt); 4693 + irq_res.sets++; 4708 4694 } else { 4709 4695 bitmap_fill(ud->bchan_map, ud->bchan_cnt); 4710 4696 for (i = 0; i < rm_res->sets; i++) 4711 4697 udma_mark_resource_ranges(ud, ud->bchan_map, 4712 4698 &rm_res->desc[i], 4713 4699 "bchan"); 4700 + irq_res.sets += rm_res->sets; 4714 4701 } 4715 - irq_res.sets += rm_res->sets; 4716 4702 } 4717 4703 4718 4704 /* tchan ranges */ ··· 4721 4705 rm_res = tisci_rm->rm_ranges[RM_RANGE_TCHAN]; 4722 4706 if (IS_ERR(rm_res)) { 4723 4707 bitmap_zero(ud->tchan_map, ud->tchan_cnt); 4708 + irq_res.sets += 2; 4724 4709 } else { 4725 4710 bitmap_fill(ud->tchan_map, ud->tchan_cnt); 4726 4711 for (i = 0; i < rm_res->sets; i++) 4727 4712 udma_mark_resource_ranges(ud, ud->tchan_map, 4728 4713 &rm_res->desc[i], 4729 4714 "tchan"); 4715 + irq_res.sets += rm_res->sets * 2; 4730 4716 } 
4731 - irq_res.sets += rm_res->sets * 2; 4732 4717 } 4733 4718 4734 4719 /* rchan ranges */ ··· 4737 4720 rm_res = tisci_rm->rm_ranges[RM_RANGE_RCHAN]; 4738 4721 if (IS_ERR(rm_res)) { 4739 4722 bitmap_zero(ud->rchan_map, ud->rchan_cnt); 4723 + irq_res.sets += 2; 4740 4724 } else { 4741 4725 bitmap_fill(ud->rchan_map, ud->rchan_cnt); 4742 4726 for (i = 0; i < rm_res->sets; i++) 4743 4727 udma_mark_resource_ranges(ud, ud->rchan_map, 4744 4728 &rm_res->desc[i], 4745 4729 "rchan"); 4730 + irq_res.sets += rm_res->sets * 2; 4746 4731 } 4747 - irq_res.sets += rm_res->sets * 2; 4748 4732 } 4749 4733 4750 4734 irq_res.desc = kcalloc(irq_res.sets, sizeof(*irq_res.desc), GFP_KERNEL); 4735 + if (!irq_res.desc) 4736 + return -ENOMEM; 4751 4737 if (ud->bchan_cnt) { 4752 4738 rm_res = tisci_rm->rm_ranges[RM_RANGE_BCHAN]; 4753 - for (i = 0; i < rm_res->sets; i++) { 4754 - irq_res.desc[i].start = rm_res->desc[i].start + 4755 - oes->bcdma_bchan_ring; 4756 - irq_res.desc[i].num = rm_res->desc[i].num; 4739 + if (IS_ERR(rm_res)) { 4740 + irq_res.desc[0].start = oes->bcdma_bchan_ring; 4741 + irq_res.desc[0].num = ud->bchan_cnt; 4742 + i = 1; 4743 + } else { 4744 + for (i = 0; i < rm_res->sets; i++) { 4745 + irq_res.desc[i].start = rm_res->desc[i].start + 4746 + oes->bcdma_bchan_ring; 4747 + irq_res.desc[i].num = rm_res->desc[i].num; 4748 + } 4757 4749 } 4758 4750 } 4759 4751 if (ud->tchan_cnt) { 4760 4752 rm_res = tisci_rm->rm_ranges[RM_RANGE_TCHAN]; 4761 - for (j = 0; j < rm_res->sets; j++, i += 2) { 4762 - irq_res.desc[i].start = rm_res->desc[j].start + 4763 - oes->bcdma_tchan_data; 4764 - irq_res.desc[i].num = rm_res->desc[j].num; 4753 + if (IS_ERR(rm_res)) { 4754 + irq_res.desc[i].start = oes->bcdma_tchan_data; 4755 + irq_res.desc[i].num = ud->tchan_cnt; 4756 + irq_res.desc[i + 1].start = oes->bcdma_tchan_ring; 4757 + irq_res.desc[i + 1].num = ud->tchan_cnt; 4758 + i += 2; 4759 + } else { 4760 + for (j = 0; j < rm_res->sets; j++, i += 2) { 4761 + irq_res.desc[i].start = 
rm_res->desc[j].start + 4762 + oes->bcdma_tchan_data; 4763 + irq_res.desc[i].num = rm_res->desc[j].num; 4765 4764 4766 - irq_res.desc[i + 1].start = rm_res->desc[j].start + 4767 - oes->bcdma_tchan_ring; 4768 - irq_res.desc[i + 1].num = rm_res->desc[j].num; 4765 + irq_res.desc[i + 1].start = rm_res->desc[j].start + 4766 + oes->bcdma_tchan_ring; 4767 + irq_res.desc[i + 1].num = rm_res->desc[j].num; 4768 + } 4769 4769 } 4770 4770 } 4771 4771 if (ud->rchan_cnt) { 4772 4772 rm_res = tisci_rm->rm_ranges[RM_RANGE_RCHAN]; 4773 - for (j = 0; j < rm_res->sets; j++, i += 2) { 4774 - irq_res.desc[i].start = rm_res->desc[j].start + 4775 - oes->bcdma_rchan_data; 4776 - irq_res.desc[i].num = rm_res->desc[j].num; 4773 + if (IS_ERR(rm_res)) { 4774 + irq_res.desc[i].start = oes->bcdma_rchan_data; 4775 + irq_res.desc[i].num = ud->rchan_cnt; 4776 + irq_res.desc[i + 1].start = oes->bcdma_rchan_ring; 4777 + irq_res.desc[i + 1].num = ud->rchan_cnt; 4778 + i += 2; 4779 + } else { 4780 + for (j = 0; j < rm_res->sets; j++, i += 2) { 4781 + irq_res.desc[i].start = rm_res->desc[j].start + 4782 + oes->bcdma_rchan_data; 4783 + irq_res.desc[i].num = rm_res->desc[j].num; 4777 4784 4778 - irq_res.desc[i + 1].start = rm_res->desc[j].start + 4779 - oes->bcdma_rchan_ring; 4780 - irq_res.desc[i + 1].num = rm_res->desc[j].num; 4785 + irq_res.desc[i + 1].start = rm_res->desc[j].start + 4786 + oes->bcdma_rchan_ring; 4787 + irq_res.desc[i + 1].num = rm_res->desc[j].num; 4788 + } 4781 4789 } 4782 4790 } 4783 4791 ··· 4900 4858 if (IS_ERR(rm_res)) { 4901 4859 /* all rflows are assigned exclusively to Linux */ 4902 4860 bitmap_zero(ud->rflow_in_use, ud->rflow_cnt); 4861 + irq_res.sets = 1; 4903 4862 } else { 4904 4863 bitmap_fill(ud->rflow_in_use, ud->rflow_cnt); 4905 4864 for (i = 0; i < rm_res->sets; i++) 4906 4865 udma_mark_resource_ranges(ud, ud->rflow_in_use, 4907 4866 &rm_res->desc[i], "rflow"); 4867 + irq_res.sets = rm_res->sets; 4908 4868 } 4909 - irq_res.sets = rm_res->sets; 4910 4869 4911 4870 /* 
tflow ranges */ 4912 4871 rm_res = tisci_rm->rm_ranges[RM_RANGE_TFLOW]; 4913 4872 if (IS_ERR(rm_res)) { 4914 4873 /* all tflows are assigned exclusively to Linux */ 4915 4874 bitmap_zero(ud->tflow_map, ud->tflow_cnt); 4875 + irq_res.sets++; 4916 4876 } else { 4917 4877 bitmap_fill(ud->tflow_map, ud->tflow_cnt); 4918 4878 for (i = 0; i < rm_res->sets; i++) 4919 4879 udma_mark_resource_ranges(ud, ud->tflow_map, 4920 4880 &rm_res->desc[i], "tflow"); 4881 + irq_res.sets += rm_res->sets; 4921 4882 } 4922 - irq_res.sets += rm_res->sets; 4923 4883 4924 4884 irq_res.desc = kcalloc(irq_res.sets, sizeof(*irq_res.desc), GFP_KERNEL); 4885 + if (!irq_res.desc) 4886 + return -ENOMEM; 4925 4887 rm_res = tisci_rm->rm_ranges[RM_RANGE_TFLOW]; 4926 - for (i = 0; i < rm_res->sets; i++) { 4927 - irq_res.desc[i].start = rm_res->desc[i].start + 4928 - oes->pktdma_tchan_flow; 4929 - irq_res.desc[i].num = rm_res->desc[i].num; 4888 + if (IS_ERR(rm_res)) { 4889 + irq_res.desc[0].start = oes->pktdma_tchan_flow; 4890 + irq_res.desc[0].num = ud->tflow_cnt; 4891 + i = 1; 4892 + } else { 4893 + for (i = 0; i < rm_res->sets; i++) { 4894 + irq_res.desc[i].start = rm_res->desc[i].start + 4895 + oes->pktdma_tchan_flow; 4896 + irq_res.desc[i].num = rm_res->desc[i].num; 4897 + } 4930 4898 } 4931 4899 rm_res = tisci_rm->rm_ranges[RM_RANGE_RFLOW]; 4932 - for (j = 0; j < rm_res->sets; j++, i++) { 4933 - irq_res.desc[i].start = rm_res->desc[j].start + 4934 - oes->pktdma_rchan_flow; 4935 - irq_res.desc[i].num = rm_res->desc[j].num; 4900 + if (IS_ERR(rm_res)) { 4901 + irq_res.desc[i].start = oes->pktdma_rchan_flow; 4902 + irq_res.desc[i].num = ud->rflow_cnt; 4903 + } else { 4904 + for (j = 0; j < rm_res->sets; j++, i++) { 4905 + irq_res.desc[i].start = rm_res->desc[j].start + 4906 + oes->pktdma_rchan_flow; 4907 + irq_res.desc[i].num = rm_res->desc[j].num; 4908 + } 4936 4909 } 4937 4910 ret = ti_sci_inta_msi_domain_alloc_irqs(ud->dev, &irq_res); 4938 4911 kfree(irq_res.desc);
+7 -3
drivers/firmware/scpi_pm_domain.c
··· 16 16 struct generic_pm_domain genpd; 17 17 struct scpi_ops *ops; 18 18 u32 domain; 19 - char name[30]; 20 19 }; 21 20 22 21 /* ··· 109 110 110 111 scpi_pd->domain = i; 111 112 scpi_pd->ops = scpi_ops; 112 - sprintf(scpi_pd->name, "%pOFn.%d", np, i); 113 - scpi_pd->genpd.name = scpi_pd->name; 113 + scpi_pd->genpd.name = devm_kasprintf(dev, GFP_KERNEL, 114 + "%pOFn.%d", np, i); 115 + if (!scpi_pd->genpd.name) { 116 + dev_err(dev, "Failed to allocate genpd name:%pOFn.%d\n", 117 + np, i); 118 + continue; 119 + } 114 120 scpi_pd->genpd.power_off = scpi_pd_power_off; 115 121 scpi_pd->genpd.power_on = scpi_pd_power_on; 116 122
+3 -2
drivers/firmware/tegra/bpmp-debugfs.c
··· 77 77 const char *root_path, *filename = NULL; 78 78 char *root_path_buf; 79 79 size_t root_len; 80 + size_t root_path_buf_len = 512; 80 81 81 - root_path_buf = kzalloc(512, GFP_KERNEL); 82 + root_path_buf = kzalloc(root_path_buf_len, GFP_KERNEL); 82 83 if (!root_path_buf) 83 84 goto out; 84 85 85 86 root_path = dentry_path(bpmp->debugfs_mirror, root_path_buf, 86 - sizeof(root_path_buf)); 87 + root_path_buf_len); 87 88 if (IS_ERR(root_path)) 88 89 goto out; 89 90
+9 -10
drivers/gpio/gpio-dln2.c
··· 46 46 struct dln2_gpio { 47 47 struct platform_device *pdev; 48 48 struct gpio_chip gpio; 49 + struct irq_chip irqchip; 49 50 50 51 /* 51 52 * Cache pin direction to save us one transfer, since the hardware has ··· 384 383 mutex_unlock(&dln2->irq_lock); 385 384 } 386 385 387 - static struct irq_chip dln2_gpio_irqchip = { 388 - .name = "dln2-irq", 389 - .irq_mask = dln2_irq_mask, 390 - .irq_unmask = dln2_irq_unmask, 391 - .irq_set_type = dln2_irq_set_type, 392 - .irq_bus_lock = dln2_irq_bus_lock, 393 - .irq_bus_sync_unlock = dln2_irq_bus_unlock, 394 - }; 395 - 396 386 static void dln2_gpio_event(struct platform_device *pdev, u16 echo, 397 387 const void *data, int len) 398 388 { ··· 465 473 dln2->gpio.direction_output = dln2_gpio_direction_output; 466 474 dln2->gpio.set_config = dln2_gpio_set_config; 467 475 476 + dln2->irqchip.name = "dln2-irq", 477 + dln2->irqchip.irq_mask = dln2_irq_mask, 478 + dln2->irqchip.irq_unmask = dln2_irq_unmask, 479 + dln2->irqchip.irq_set_type = dln2_irq_set_type, 480 + dln2->irqchip.irq_bus_lock = dln2_irq_bus_lock, 481 + dln2->irqchip.irq_bus_sync_unlock = dln2_irq_bus_unlock, 482 + 468 483 girq = &dln2->gpio.irq; 469 - girq->chip = &dln2_gpio_irqchip; 484 + girq->chip = &dln2->irqchip; 470 485 /* The event comes from the outside so no parent handler */ 471 486 girq->parent_handler = NULL; 472 487 girq->num_parents = 0;
+1 -5
drivers/gpio/gpio-virtio.c
··· 100 100 virtqueue_kick(vgpio->request_vq); 101 101 mutex_unlock(&vgpio->lock); 102 102 103 - if (!wait_for_completion_timeout(&line->completion, HZ)) { 104 - dev_err(dev, "GPIO operation timed out\n"); 105 - ret = -ETIMEDOUT; 106 - goto out; 107 - } 103 + wait_for_completion(&line->completion); 108 104 109 105 if (unlikely(res->status != VIRTIO_GPIO_STATUS_OK)) { 110 106 dev_err(dev, "GPIO request failed: %d\n", gpio);
+8 -9
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
··· 3166 3166 bool amdgpu_device_asic_has_dc_support(enum amd_asic_type asic_type) 3167 3167 { 3168 3168 switch (asic_type) { 3169 + #ifdef CONFIG_DRM_AMDGPU_SI 3170 + case CHIP_HAINAN: 3171 + #endif 3172 + case CHIP_TOPAZ: 3173 + /* chips with no display hardware */ 3174 + return false; 3169 3175 #if defined(CONFIG_DRM_AMD_DC) 3170 3176 case CHIP_TAHITI: 3171 3177 case CHIP_PITCAIRN: ··· 4467 4461 int amdgpu_device_pre_asic_reset(struct amdgpu_device *adev, 4468 4462 struct amdgpu_reset_context *reset_context) 4469 4463 { 4470 - int i, j, r = 0; 4464 + int i, r = 0; 4471 4465 struct amdgpu_job *job = NULL; 4472 4466 bool need_full_reset = 4473 4467 test_bit(AMDGPU_NEED_FULL_RESET, &reset_context->flags); ··· 4489 4483 4490 4484 /*clear job fence from fence drv to avoid force_completion 4491 4485 *leave NULL and vm flush fence in fence drv */ 4492 - for (j = 0; j <= ring->fence_drv.num_fences_mask; j++) { 4493 - struct dma_fence *old, **ptr; 4486 + amdgpu_fence_driver_clear_job_fences(ring); 4494 4487 4495 - ptr = &ring->fence_drv.fences[j]; 4496 - old = rcu_dereference_protected(*ptr, 1); 4497 - if (old && test_bit(AMDGPU_FENCE_FLAG_EMBED_IN_JOB_BIT, &old->flags)) { 4498 - RCU_INIT_POINTER(*ptr, NULL); 4499 - } 4500 - } 4501 4488 /* after all hw jobs are reset, hw fence is meaningless, so force_completion */ 4502 4489 amdgpu_fence_driver_force_completion(ring); 4503 4490 }
+54 -22
drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
··· 526 526 } 527 527 } 528 528 529 + union gc_info { 530 + struct gc_info_v1_0 v1; 531 + struct gc_info_v2_0 v2; 532 + }; 533 + 529 534 int amdgpu_discovery_get_gfx_info(struct amdgpu_device *adev) 530 535 { 531 536 struct binary_header *bhdr; 532 - struct gc_info_v1_0 *gc_info; 537 + union gc_info *gc_info; 533 538 534 539 if (!adev->mman.discovery_bin) { 535 540 DRM_ERROR("ip discovery uninitialized\n"); ··· 542 537 } 543 538 544 539 bhdr = (struct binary_header *)adev->mman.discovery_bin; 545 - gc_info = (struct gc_info_v1_0 *)(adev->mman.discovery_bin + 540 + gc_info = (union gc_info *)(adev->mman.discovery_bin + 546 541 le16_to_cpu(bhdr->table_list[GC].offset)); 547 - 548 - adev->gfx.config.max_shader_engines = le32_to_cpu(gc_info->gc_num_se); 549 - adev->gfx.config.max_cu_per_sh = 2 * (le32_to_cpu(gc_info->gc_num_wgp0_per_sa) + 550 - le32_to_cpu(gc_info->gc_num_wgp1_per_sa)); 551 - adev->gfx.config.max_sh_per_se = le32_to_cpu(gc_info->gc_num_sa_per_se); 552 - adev->gfx.config.max_backends_per_se = le32_to_cpu(gc_info->gc_num_rb_per_se); 553 - adev->gfx.config.max_texture_channel_caches = le32_to_cpu(gc_info->gc_num_gl2c); 554 - adev->gfx.config.max_gprs = le32_to_cpu(gc_info->gc_num_gprs); 555 - adev->gfx.config.max_gs_threads = le32_to_cpu(gc_info->gc_num_max_gs_thds); 556 - adev->gfx.config.gs_vgt_table_depth = le32_to_cpu(gc_info->gc_gs_table_depth); 557 - adev->gfx.config.gs_prim_buffer_depth = le32_to_cpu(gc_info->gc_gsprim_buff_depth); 558 - adev->gfx.config.double_offchip_lds_buf = le32_to_cpu(gc_info->gc_double_offchip_lds_buffer); 559 - adev->gfx.cu_info.wave_front_size = le32_to_cpu(gc_info->gc_wave_size); 560 - adev->gfx.cu_info.max_waves_per_simd = le32_to_cpu(gc_info->gc_max_waves_per_simd); 561 - adev->gfx.cu_info.max_scratch_slots_per_cu = le32_to_cpu(gc_info->gc_max_scratch_slots_per_cu); 562 - adev->gfx.cu_info.lds_size = le32_to_cpu(gc_info->gc_lds_size); 563 - adev->gfx.config.num_sc_per_sh = le32_to_cpu(gc_info->gc_num_sc_per_se) / 564 - 
le32_to_cpu(gc_info->gc_num_sa_per_se); 565 - adev->gfx.config.num_packer_per_sc = le32_to_cpu(gc_info->gc_num_packer_per_sc); 566 - 542 + switch (gc_info->v1.header.version_major) { 543 + case 1: 544 + adev->gfx.config.max_shader_engines = le32_to_cpu(gc_info->v1.gc_num_se); 545 + adev->gfx.config.max_cu_per_sh = 2 * (le32_to_cpu(gc_info->v1.gc_num_wgp0_per_sa) + 546 + le32_to_cpu(gc_info->v1.gc_num_wgp1_per_sa)); 547 + adev->gfx.config.max_sh_per_se = le32_to_cpu(gc_info->v1.gc_num_sa_per_se); 548 + adev->gfx.config.max_backends_per_se = le32_to_cpu(gc_info->v1.gc_num_rb_per_se); 549 + adev->gfx.config.max_texture_channel_caches = le32_to_cpu(gc_info->v1.gc_num_gl2c); 550 + adev->gfx.config.max_gprs = le32_to_cpu(gc_info->v1.gc_num_gprs); 551 + adev->gfx.config.max_gs_threads = le32_to_cpu(gc_info->v1.gc_num_max_gs_thds); 552 + adev->gfx.config.gs_vgt_table_depth = le32_to_cpu(gc_info->v1.gc_gs_table_depth); 553 + adev->gfx.config.gs_prim_buffer_depth = le32_to_cpu(gc_info->v1.gc_gsprim_buff_depth); 554 + adev->gfx.config.double_offchip_lds_buf = le32_to_cpu(gc_info->v1.gc_double_offchip_lds_buffer); 555 + adev->gfx.cu_info.wave_front_size = le32_to_cpu(gc_info->v1.gc_wave_size); 556 + adev->gfx.cu_info.max_waves_per_simd = le32_to_cpu(gc_info->v1.gc_max_waves_per_simd); 557 + adev->gfx.cu_info.max_scratch_slots_per_cu = le32_to_cpu(gc_info->v1.gc_max_scratch_slots_per_cu); 558 + adev->gfx.cu_info.lds_size = le32_to_cpu(gc_info->v1.gc_lds_size); 559 + adev->gfx.config.num_sc_per_sh = le32_to_cpu(gc_info->v1.gc_num_sc_per_se) / 560 + le32_to_cpu(gc_info->v1.gc_num_sa_per_se); 561 + adev->gfx.config.num_packer_per_sc = le32_to_cpu(gc_info->v1.gc_num_packer_per_sc); 562 + break; 563 + case 2: 564 + adev->gfx.config.max_shader_engines = le32_to_cpu(gc_info->v2.gc_num_se); 565 + adev->gfx.config.max_cu_per_sh = le32_to_cpu(gc_info->v2.gc_num_cu_per_sh); 566 + adev->gfx.config.max_sh_per_se = le32_to_cpu(gc_info->v2.gc_num_sh_per_se); 567 + 
adev->gfx.config.max_backends_per_se = le32_to_cpu(gc_info->v2.gc_num_rb_per_se); 568 + adev->gfx.config.max_texture_channel_caches = le32_to_cpu(gc_info->v2.gc_num_tccs); 569 + adev->gfx.config.max_gprs = le32_to_cpu(gc_info->v2.gc_num_gprs); 570 + adev->gfx.config.max_gs_threads = le32_to_cpu(gc_info->v2.gc_num_max_gs_thds); 571 + adev->gfx.config.gs_vgt_table_depth = le32_to_cpu(gc_info->v2.gc_gs_table_depth); 572 + adev->gfx.config.gs_prim_buffer_depth = le32_to_cpu(gc_info->v2.gc_gsprim_buff_depth); 573 + adev->gfx.config.double_offchip_lds_buf = le32_to_cpu(gc_info->v2.gc_double_offchip_lds_buffer); 574 + adev->gfx.cu_info.wave_front_size = le32_to_cpu(gc_info->v2.gc_wave_size); 575 + adev->gfx.cu_info.max_waves_per_simd = le32_to_cpu(gc_info->v2.gc_max_waves_per_simd); 576 + adev->gfx.cu_info.max_scratch_slots_per_cu = le32_to_cpu(gc_info->v2.gc_max_scratch_slots_per_cu); 577 + adev->gfx.cu_info.lds_size = le32_to_cpu(gc_info->v2.gc_lds_size); 578 + adev->gfx.config.num_sc_per_sh = le32_to_cpu(gc_info->v2.gc_num_sc_per_se) / 579 + le32_to_cpu(gc_info->v2.gc_num_sh_per_se); 580 + adev->gfx.config.num_packer_per_sc = le32_to_cpu(gc_info->v2.gc_num_packer_per_sc); 581 + break; 582 + default: 583 + dev_err(adev->dev, 584 + "Unhandled GC info table %d.%d\n", 585 + gc_info->v1.header.version_major, 586 + gc_info->v1.header.version_minor); 587 + return -EINVAL; 588 + } 567 589 return 0; 568 590 } 569 591
+1 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
··· 384 384 struct amdgpu_vm_bo_base *bo_base; 385 385 int r; 386 386 387 - if (bo->tbo.resource->mem_type == TTM_PL_SYSTEM) 387 + if (!bo->tbo.resource || bo->tbo.resource->mem_type == TTM_PL_SYSTEM) 388 388 return; 389 389 390 390 r = ttm_bo_validate(&bo->tbo, &placement, &ctx);
+23 -4
drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
··· 328 328 329 329 /** 330 330 * DOC: runpm (int) 331 - * Override for runtime power management control for dGPUs in PX/HG laptops. The amdgpu driver can dynamically power down 332 - * the dGPU on PX/HG laptops when it is idle. The default is -1 (auto enable). Setting the value to 0 disables this functionality. 331 + * Override for runtime power management control for dGPUs. The amdgpu driver can dynamically power down 332 + * the dGPUs when they are idle if supported. The default is -1 (auto enable). 333 + * Setting the value to 0 disables this functionality. 333 334 */ 334 - MODULE_PARM_DESC(runpm, "PX runtime pm (2 = force enable with BAMACO, 1 = force enable with BACO, 0 = disable, -1 = PX only default)"); 335 + MODULE_PARM_DESC(runpm, "PX runtime pm (2 = force enable with BAMACO, 1 = force enable with BACO, 0 = disable, -1 = auto)"); 335 336 module_param_named(runpm, amdgpu_runtime_pm, int, 0444); 336 337 337 338 /** ··· 2154 2153 adev->in_s3 = true; 2155 2154 r = amdgpu_device_suspend(drm_dev, true); 2156 2155 adev->in_s3 = false; 2157 - 2156 + if (r) 2157 + return r; 2158 + if (!adev->in_s0ix) 2159 + r = amdgpu_asic_reset(adev); 2158 2160 return r; 2159 2161 } 2160 2162 ··· 2238 2234 if (amdgpu_device_supports_px(drm_dev)) 2239 2235 drm_dev->switch_power_state = DRM_SWITCH_POWER_CHANGING; 2240 2236 2237 + /* 2238 + * By setting mp1_state as PP_MP1_STATE_UNLOAD, MP1 will do some 2239 + * proper cleanups and put itself into a state ready for PNP. That 2240 + * can address some random resuming failure observed on BOCO capable 2241 + * platforms. 2242 + * TODO: this may be also needed for PX capable platform. 
2243 + */ 2244 + if (amdgpu_device_supports_boco(drm_dev)) 2245 + adev->mp1_state = PP_MP1_STATE_UNLOAD; 2246 + 2241 2247 ret = amdgpu_device_suspend(drm_dev, false); 2242 2248 if (ret) { 2243 2249 adev->in_runpm = false; 2250 + if (amdgpu_device_supports_boco(drm_dev)) 2251 + adev->mp1_state = PP_MP1_STATE_NONE; 2244 2252 return ret; 2245 2253 } 2254 + 2255 + if (amdgpu_device_supports_boco(drm_dev)) 2256 + adev->mp1_state = PP_MP1_STATE_NONE; 2246 2257 2247 2258 if (amdgpu_device_supports_px(drm_dev)) { 2248 2259 /* Only need to handle PCI state in the driver for ATPX
+87 -39
drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
··· 77 77 * Cast helper 78 78 */ 79 79 static const struct dma_fence_ops amdgpu_fence_ops; 80 + static const struct dma_fence_ops amdgpu_job_fence_ops; 80 81 static inline struct amdgpu_fence *to_amdgpu_fence(struct dma_fence *f) 81 82 { 82 83 struct amdgpu_fence *__f = container_of(f, struct amdgpu_fence, base); 83 84 84 - if (__f->base.ops == &amdgpu_fence_ops) 85 + if (__f->base.ops == &amdgpu_fence_ops || 86 + __f->base.ops == &amdgpu_job_fence_ops) 85 87 return __f; 86 88 87 89 return NULL; ··· 160 158 } 161 159 162 160 seq = ++ring->fence_drv.sync_seq; 163 - if (job != NULL && job->job_run_counter) { 161 + if (job && job->job_run_counter) { 164 162 /* reinit seq for resubmitted jobs */ 165 163 fence->seqno = seq; 166 164 } else { 167 - dma_fence_init(fence, &amdgpu_fence_ops, 168 - &ring->fence_drv.lock, 169 - adev->fence_context + ring->idx, 170 - seq); 171 - } 172 - 173 - if (job != NULL) { 174 - /* mark this fence has a parent job */ 175 - set_bit(AMDGPU_FENCE_FLAG_EMBED_IN_JOB_BIT, &fence->flags); 165 + if (job) 166 + dma_fence_init(fence, &amdgpu_job_fence_ops, 167 + &ring->fence_drv.lock, 168 + adev->fence_context + ring->idx, seq); 169 + else 170 + dma_fence_init(fence, &amdgpu_fence_ops, 171 + &ring->fence_drv.lock, 172 + adev->fence_context + ring->idx, seq); 176 173 } 177 174 178 175 amdgpu_ring_emit_fence(ring, ring->fence_drv.gpu_addr, ··· 622 621 } 623 622 624 623 /** 624 + * amdgpu_fence_driver_clear_job_fences - clear job embedded fences of ring 625 + * 626 + * @ring: fence of the ring to be cleared 627 + * 628 + */ 629 + void amdgpu_fence_driver_clear_job_fences(struct amdgpu_ring *ring) 630 + { 631 + int i; 632 + struct dma_fence *old, **ptr; 633 + 634 + for (i = 0; i <= ring->fence_drv.num_fences_mask; i++) { 635 + ptr = &ring->fence_drv.fences[i]; 636 + old = rcu_dereference_protected(*ptr, 1); 637 + if (old && old->ops == &amdgpu_job_fence_ops) 638 + RCU_INIT_POINTER(*ptr, NULL); 639 + } 640 + } 641 + 642 + /** 625 643 * 
amdgpu_fence_driver_force_completion - force signal latest fence of ring 626 645 * 627 646 * @ring: fence of the ring to signal ··· 663 643 664 644 static const char *amdgpu_fence_get_timeline_name(struct dma_fence *f) 665 645 { 666 - struct amdgpu_ring *ring; 646 + return (const char *)to_amdgpu_fence(f)->ring->name; 647 + } 667 648 668 - if (test_bit(AMDGPU_FENCE_FLAG_EMBED_IN_JOB_BIT, &f->flags)) { 669 - struct amdgpu_job *job = container_of(f, struct amdgpu_job, hw_fence); 649 + static const char *amdgpu_job_fence_get_timeline_name(struct dma_fence *f) 650 + { 651 + struct amdgpu_job *job = container_of(f, struct amdgpu_job, hw_fence); 670 652 671 - ring = to_amdgpu_ring(job->base.sched); 672 - } else { 673 - ring = to_amdgpu_fence(f)->ring; 674 - } 675 - return (const char *)ring->name; 653 + return (const char *)to_amdgpu_ring(job->base.sched)->name; 676 654 } 677 655 678 656 /** ··· 683 665 */ 684 666 static bool amdgpu_fence_enable_signaling(struct dma_fence *f) 685 667 { 686 - struct amdgpu_ring *ring; 668 + if (!timer_pending(&to_amdgpu_fence(f)->ring->fence_drv.fallback_timer)) 669 + amdgpu_fence_schedule_fallback(to_amdgpu_fence(f)->ring); 687 670 688 - if (test_bit(AMDGPU_FENCE_FLAG_EMBED_IN_JOB_BIT, &f->flags)) { 689 - struct amdgpu_job *job = container_of(f, struct amdgpu_job, hw_fence); 671 + return true; 672 + } 690 673 691 - ring = to_amdgpu_ring(job->base.sched); 692 - } else { 693 - ring = to_amdgpu_fence(f)->ring; 694 - } 674 + /** 675 + * amdgpu_job_fence_enable_signaling - enable signaling on job fence 676 + * @f: fence 677 + * 678 + * This is similar to amdgpu_fence_enable_signaling above; it 679 + * only handles the job embedded fence. 
680 + */ 681 + static bool amdgpu_job_fence_enable_signaling(struct dma_fence *f) 682 + { 683 + struct amdgpu_job *job = container_of(f, struct amdgpu_job, hw_fence); 695 684 696 - if (!timer_pending(&ring->fence_drv.fallback_timer)) 697 - amdgpu_fence_schedule_fallback(ring); 685 + if (!timer_pending(&to_amdgpu_ring(job->base.sched)->fence_drv.fallback_timer)) 686 + amdgpu_fence_schedule_fallback(to_amdgpu_ring(job->base.sched)); 698 687 699 688 return true; 700 689 } ··· 717 692 { 718 693 struct dma_fence *f = container_of(rcu, struct dma_fence, rcu); 719 694 720 - if (test_bit(AMDGPU_FENCE_FLAG_EMBED_IN_JOB_BIT, &f->flags)) { 721 - /* free job if fence has a parent job */ 722 - struct amdgpu_job *job; 723 - 724 - job = container_of(f, struct amdgpu_job, hw_fence); 725 - kfree(job); 726 - } else { 727 695 /* free fence_slab if it's separated fence*/ 728 - struct amdgpu_fence *fence; 696 + kmem_cache_free(amdgpu_fence_slab, to_amdgpu_fence(f)); 697 + } 729 698 730 - fence = to_amdgpu_fence(f); 731 - kmem_cache_free(amdgpu_fence_slab, fence); 732 - } 699 + /** 700 + * amdgpu_job_fence_free - free up the job with embedded fence 701 + * 702 + * @rcu: RCU callback head 703 + * 704 + * Free up the job with embedded fence after the RCU grace period. 705 + */ 706 + static void amdgpu_job_fence_free(struct rcu_head *rcu) 707 + { 708 + struct dma_fence *f = container_of(rcu, struct dma_fence, rcu); 709 + 710 + /* free job if fence has a parent job */ 711 + kfree(container_of(f, struct amdgpu_job, hw_fence)); 733 712 } 734 713 735 714 /** ··· 749 720 call_rcu(&f->rcu, amdgpu_fence_free); 750 721 } 751 722 723 + /** 724 + * amdgpu_job_fence_release - callback that job embedded fence can be freed 725 + * 726 + * @f: fence 727 + * 728 + * This is similar to amdgpu_fence_release above; it 729 + * only handles the job embedded fence. 
730 + */ 731 + static void amdgpu_job_fence_release(struct dma_fence *f) 732 + { 733 + call_rcu(&f->rcu, amdgpu_job_fence_free); 734 + } 735 + 752 736 static const struct dma_fence_ops amdgpu_fence_ops = { 753 737 .get_driver_name = amdgpu_fence_get_driver_name, 754 738 .get_timeline_name = amdgpu_fence_get_timeline_name, ··· 769 727 .release = amdgpu_fence_release, 770 728 }; 771 729 730 + static const struct dma_fence_ops amdgpu_job_fence_ops = { 731 + .get_driver_name = amdgpu_fence_get_driver_name, 732 + .get_timeline_name = amdgpu_job_fence_get_timeline_name, 733 + .enable_signaling = amdgpu_job_fence_enable_signaling, 734 + .release = amdgpu_job_fence_release, 735 + }; 772 736 773 737 /* 774 738 * Fence debugfs
+1 -3
drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
··· 53 53 #define AMDGPU_FENCE_FLAG_INT (1 << 1) 54 54 #define AMDGPU_FENCE_FLAG_TC_WB_ONLY (1 << 2) 55 55 56 - /* fence flag bit to indicate the face is embedded in job*/ 57 - #define AMDGPU_FENCE_FLAG_EMBED_IN_JOB_BIT (DMA_FENCE_FLAG_USER_BITS + 1) 58 - 59 56 #define to_amdgpu_ring(s) container_of((s), struct amdgpu_ring, sched) 60 57 61 58 #define AMDGPU_IB_POOL_SIZE (1024 * 1024) ··· 111 114 struct dma_fence **fences; 112 115 }; 113 116 117 + void amdgpu_fence_driver_clear_job_fences(struct amdgpu_ring *ring); 114 118 void amdgpu_fence_driver_force_completion(struct amdgpu_ring *ring); 115 119 116 120 int amdgpu_fence_driver_init_ring(struct amdgpu_ring *ring,
+2 -2
drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
··· 3070 3070 AMD_PG_SUPPORT_CP | 3071 3071 AMD_PG_SUPPORT_GDS | 3072 3072 AMD_PG_SUPPORT_RLC_SMU_HS)) { 3073 - WREG32(mmRLC_JUMP_TABLE_RESTORE, 3074 - adev->gfx.rlc.cp_table_gpu_addr >> 8); 3073 + WREG32_SOC15(GC, 0, mmRLC_JUMP_TABLE_RESTORE, 3074 + adev->gfx.rlc.cp_table_gpu_addr >> 8); 3075 3075 gfx_v9_0_init_gfx_power_gating(adev); 3076 3076 } 3077 3077 }
-1
drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c
··· 162 162 ENABLE_ADVANCED_DRIVER_MODEL, 1); 163 163 tmp = REG_SET_FIELD(tmp, MC_VM_MX_L1_TLB_CNTL, 164 164 SYSTEM_APERTURE_UNMAPPED_ACCESS, 0); 165 - tmp = REG_SET_FIELD(tmp, MC_VM_MX_L1_TLB_CNTL, ECO_BITS, 0); 166 165 tmp = REG_SET_FIELD(tmp, MC_VM_MX_L1_TLB_CNTL, 167 166 MTYPE, MTYPE_UC);/* XXX for emulation. */ 168 167 tmp = REG_SET_FIELD(tmp, MC_VM_MX_L1_TLB_CNTL, ATC_EN, 1);
-1
drivers/gpu/drm/amd/amdgpu/gfxhub_v2_0.c
··· 196 196 ENABLE_ADVANCED_DRIVER_MODEL, 1); 197 197 tmp = REG_SET_FIELD(tmp, GCMC_VM_MX_L1_TLB_CNTL, 198 198 SYSTEM_APERTURE_UNMAPPED_ACCESS, 0); 199 - tmp = REG_SET_FIELD(tmp, GCMC_VM_MX_L1_TLB_CNTL, ECO_BITS, 0); 200 199 tmp = REG_SET_FIELD(tmp, GCMC_VM_MX_L1_TLB_CNTL, 201 200 MTYPE, MTYPE_UC); /* UC, uncached */ 202 201
-1
drivers/gpu/drm/amd/amdgpu/gfxhub_v2_1.c
··· 197 197 ENABLE_ADVANCED_DRIVER_MODEL, 1); 198 198 tmp = REG_SET_FIELD(tmp, GCMC_VM_MX_L1_TLB_CNTL, 199 199 SYSTEM_APERTURE_UNMAPPED_ACCESS, 0); 200 - tmp = REG_SET_FIELD(tmp, GCMC_VM_MX_L1_TLB_CNTL, ECO_BITS, 0); 201 200 tmp = REG_SET_FIELD(tmp, GCMC_VM_MX_L1_TLB_CNTL, 202 201 MTYPE, MTYPE_UC); /* UC, uncached */ 203 202
+8
drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
··· 1808 1808 return 0; 1809 1809 } 1810 1810 1811 + /* 1812 + * Pair with the operations done in gmc_v9_0_hw_init to keep a 1813 + * correct cached state for GMC. Otherwise, gating again on S3 1814 + * resume will fail due to the wrong cached state. 1815 + */ 1816 + if (adev->mmhub.funcs->update_power_gating) 1817 + adev->mmhub.funcs->update_power_gating(adev, false); 1818 + 1811 1819 amdgpu_irq_put(adev, &adev->gmc.ecc_irq, 0); 1812 1820 amdgpu_irq_put(adev, &adev->gmc.vm_fault, 0); 1813 1821
+4 -5
drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c
··· 145 145 ENABLE_ADVANCED_DRIVER_MODEL, 1); 146 146 tmp = REG_SET_FIELD(tmp, MC_VM_MX_L1_TLB_CNTL, 147 147 SYSTEM_APERTURE_UNMAPPED_ACCESS, 0); 148 - tmp = REG_SET_FIELD(tmp, MC_VM_MX_L1_TLB_CNTL, ECO_BITS, 0); 149 148 tmp = REG_SET_FIELD(tmp, MC_VM_MX_L1_TLB_CNTL, 150 149 MTYPE, MTYPE_UC);/* XXX for emulation. */ 151 150 tmp = REG_SET_FIELD(tmp, MC_VM_MX_L1_TLB_CNTL, ATC_EN, 1); ··· 301 302 if (amdgpu_sriov_vf(adev)) 302 303 return; 303 304 304 - if (enable && adev->pg_flags & AMD_PG_SUPPORT_MMHUB) { 305 - amdgpu_dpm_set_powergating_by_smu(adev, AMD_IP_BLOCK_TYPE_GMC, true); 306 - 307 - } 305 + if (adev->pg_flags & AMD_PG_SUPPORT_MMHUB) 306 + amdgpu_dpm_set_powergating_by_smu(adev, 307 + AMD_IP_BLOCK_TYPE_GMC, 308 + enable); 308 309 } 309 310 310 311 static int mmhub_v1_0_gart_enable(struct amdgpu_device *adev)
-1
drivers/gpu/drm/amd/amdgpu/mmhub_v1_7.c
··· 165 165 ENABLE_ADVANCED_DRIVER_MODEL, 1); 166 166 tmp = REG_SET_FIELD(tmp, MC_VM_MX_L1_TLB_CNTL, 167 167 SYSTEM_APERTURE_UNMAPPED_ACCESS, 0); 168 - tmp = REG_SET_FIELD(tmp, MC_VM_MX_L1_TLB_CNTL, ECO_BITS, 0); 169 168 tmp = REG_SET_FIELD(tmp, MC_VM_MX_L1_TLB_CNTL, 170 169 MTYPE, MTYPE_UC);/* XXX for emulation. */ 171 170 tmp = REG_SET_FIELD(tmp, MC_VM_MX_L1_TLB_CNTL, ATC_EN, 1);
-1
drivers/gpu/drm/amd/amdgpu/mmhub_v2_0.c
··· 267 267 ENABLE_ADVANCED_DRIVER_MODEL, 1); 268 268 tmp = REG_SET_FIELD(tmp, MMMC_VM_MX_L1_TLB_CNTL, 269 269 SYSTEM_APERTURE_UNMAPPED_ACCESS, 0); 270 - tmp = REG_SET_FIELD(tmp, MMMC_VM_MX_L1_TLB_CNTL, ECO_BITS, 0); 271 270 tmp = REG_SET_FIELD(tmp, MMMC_VM_MX_L1_TLB_CNTL, 272 271 MTYPE, MTYPE_UC); /* UC, uncached */ 273 272
-1
drivers/gpu/drm/amd/amdgpu/mmhub_v2_3.c
··· 194 194 ENABLE_ADVANCED_DRIVER_MODEL, 1); 195 195 tmp = REG_SET_FIELD(tmp, MMMC_VM_MX_L1_TLB_CNTL, 196 196 SYSTEM_APERTURE_UNMAPPED_ACCESS, 0); 197 - tmp = REG_SET_FIELD(tmp, MMMC_VM_MX_L1_TLB_CNTL, ECO_BITS, 0); 198 197 tmp = REG_SET_FIELD(tmp, MMMC_VM_MX_L1_TLB_CNTL, 199 198 MTYPE, MTYPE_UC); /* UC, uncached */ 200 199
-2
drivers/gpu/drm/amd/amdgpu/mmhub_v9_4.c
··· 190 190 tmp = REG_SET_FIELD(tmp, VMSHAREDVC0_MC_VM_MX_L1_TLB_CNTL, 191 191 SYSTEM_APERTURE_UNMAPPED_ACCESS, 0); 192 192 tmp = REG_SET_FIELD(tmp, VMSHAREDVC0_MC_VM_MX_L1_TLB_CNTL, 193 - ECO_BITS, 0); 194 - tmp = REG_SET_FIELD(tmp, VMSHAREDVC0_MC_VM_MX_L1_TLB_CNTL, 195 193 MTYPE, MTYPE_UC);/* XXX for emulation. */ 196 194 tmp = REG_SET_FIELD(tmp, VMSHAREDVC0_MC_VM_MX_L1_TLB_CNTL, 197 195 ATC_EN, 1);
+7
drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
··· 246 246 { 247 247 int r; 248 248 struct amdgpu_device *adev = (struct amdgpu_device *)handle; 249 + bool idle_work_unexecuted; 250 + 251 + idle_work_unexecuted = cancel_delayed_work_sync(&adev->vcn.idle_work); 252 + if (idle_work_unexecuted) { 253 + if (adev->pm.dpm_enabled) 254 + amdgpu_dpm_enable_uvd(adev, false); 255 + } 249 256 250 257 r = vcn_v1_0_hw_fini(adev); 251 258 if (r)
+5
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
··· 1051 1051 return 0; 1052 1052 } 1053 1053 1054 + /* Reset DMCUB if it was previously running - before we overwrite its memory. */ 1055 + status = dmub_srv_hw_reset(dmub_srv); 1056 + if (status != DMUB_STATUS_OK) 1057 + DRM_WARN("Error resetting DMUB HW: %d\n", status); 1058 + 1054 1059 hdr = (const struct dmcub_firmware_header_v1_0 *)dmub_fw->data; 1055 1060 1056 1061 fw_inst_const = dmub_fw->data +
+1
drivers/gpu/drm/amd/display/dc/clk_mgr/dcn31/dcn31_clk_mgr.c
··· 158 158 union display_idle_optimization_u idle_info = { 0 }; 159 159 idle_info.idle_info.df_request_disabled = 1; 160 160 idle_info.idle_info.phy_ref_clk_off = 1; 161 + idle_info.idle_info.s0i2_rdy = 1; 161 162 dcn31_smu_set_display_idle_optimization(clk_mgr, idle_info.data); 162 163 /* update power state */ 163 164 clk_mgr_base->clks.pwr_state = DCN_PWR_STATE_LOW_POWER;
+1 -4
drivers/gpu/drm/amd/display/dc/core/dc_link.c
··· 3945 3945 config.dig_be = pipe_ctx->stream->link->link_enc_hw_inst; 3946 3946 #if defined(CONFIG_DRM_AMD_DC_DCN) 3947 3947 config.stream_enc_idx = pipe_ctx->stream_res.stream_enc->id - ENGINE_ID_DIGA; 3948 - 3948 + 3949 3949 if (pipe_ctx->stream->link->ep_type == DISPLAY_ENDPOINT_PHY || 3950 3950 pipe_ctx->stream->link->ep_type == DISPLAY_ENDPOINT_USB4_DPIA) { 3951 - link_enc = pipe_ctx->stream->link->link_enc; 3952 - config.dio_output_type = pipe_ctx->stream->link->ep_type; 3953 - config.dio_output_idx = link_enc->transmitter - TRANSMITTER_UNIPHY_A; 3954 3951 if (pipe_ctx->stream->link->ep_type == DISPLAY_ENDPOINT_PHY) 3955 3952 link_enc = pipe_ctx->stream->link->link_enc; 3956 3953 else if (pipe_ctx->stream->link->ep_type == DISPLAY_ENDPOINT_USB4_DPIA)
+1
drivers/gpu/drm/amd/display/dc/dcn10/dcn10_init.c
··· 78 78 .get_clock = dcn10_get_clock, 79 79 .get_vupdate_offset_from_vsync = dcn10_get_vupdate_offset_from_vsync, 80 80 .calc_vupdate_position = dcn10_calc_vupdate_position, 81 + .power_down = dce110_power_down, 81 82 .set_backlight_level = dce110_set_backlight_level, 82 83 .set_abm_immediate_disable = dce110_set_abm_immediate_disable, 83 84 .set_pipe = dce110_set_pipe,
+1 -1
drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
··· 1069 1069 .timing_trace = false, 1070 1070 .clock_trace = true, 1071 1071 .disable_pplib_clock_request = true, 1072 - .pipe_split_policy = MPC_SPLIT_AVOID_MULT_DISP, 1072 + .pipe_split_policy = MPC_SPLIT_DYNAMIC, 1073 1073 .force_single_disp_pipe_split = false, 1074 1074 .disable_dcc = DCC_ENABLE, 1075 1075 .vsr_support = true,
+1 -1
drivers/gpu/drm/amd/display/dc/dcn201/dcn201_resource.c
··· 603 603 .timing_trace = false, 604 604 .clock_trace = true, 605 605 .disable_pplib_clock_request = true, 606 - .pipe_split_policy = MPC_SPLIT_AVOID, 606 + .pipe_split_policy = MPC_SPLIT_DYNAMIC, 607 607 .force_single_disp_pipe_split = false, 608 608 .disable_dcc = DCC_ENABLE, 609 609 .vsr_support = true,
+1 -1
drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c
··· 874 874 .clock_trace = true, 875 875 .disable_pplib_clock_request = true, 876 876 .min_disp_clk_khz = 100000, 877 - .pipe_split_policy = MPC_SPLIT_AVOID_MULT_DISP, 877 + .pipe_split_policy = MPC_SPLIT_DYNAMIC, 878 878 .force_single_disp_pipe_split = false, 879 879 .disable_dcc = DCC_ENABLE, 880 880 .vsr_support = true,
+1 -1
drivers/gpu/drm/amd/display/dc/dcn30/dcn30_resource.c
··· 840 840 .timing_trace = false, 841 841 .clock_trace = true, 842 842 .disable_pplib_clock_request = true, 843 - .pipe_split_policy = MPC_SPLIT_AVOID_MULT_DISP, 843 + .pipe_split_policy = MPC_SPLIT_DYNAMIC, 844 844 .force_single_disp_pipe_split = false, 845 845 .disable_dcc = DCC_ENABLE, 846 846 .vsr_support = true,
+1 -1
drivers/gpu/drm/amd/display/dc/dcn301/dcn301_resource.c
··· 686 686 .disable_clock_gate = true, 687 687 .disable_pplib_clock_request = true, 688 688 .disable_pplib_wm_range = true, 689 - .pipe_split_policy = MPC_SPLIT_AVOID_MULT_DISP, 689 + .pipe_split_policy = MPC_SPLIT_DYNAMIC, 690 690 .force_single_disp_pipe_split = false, 691 691 .disable_dcc = DCC_ENABLE, 692 692 .vsr_support = true,
+1 -1
drivers/gpu/drm/amd/display/dc/dcn302/dcn302_resource.c
··· 211 211 .timing_trace = false, 212 212 .clock_trace = true, 213 213 .disable_pplib_clock_request = true, 214 - .pipe_split_policy = MPC_SPLIT_AVOID_MULT_DISP, 214 + .pipe_split_policy = MPC_SPLIT_DYNAMIC, 215 215 .force_single_disp_pipe_split = false, 216 216 .disable_dcc = DCC_ENABLE, 217 217 .vsr_support = true,
+1 -1
drivers/gpu/drm/amd/display/dc/dcn303/dcn303_resource.c
··· 193 193 .timing_trace = false, 194 194 .clock_trace = true, 195 195 .disable_pplib_clock_request = true, 196 - .pipe_split_policy = MPC_SPLIT_AVOID_MULT_DISP, 196 + .pipe_split_policy = MPC_SPLIT_DYNAMIC, 197 197 .force_single_disp_pipe_split = false, 198 198 .disable_dcc = DCC_ENABLE, 199 199 .vsr_support = true,
+2
drivers/gpu/drm/amd/display/dc/dcn31/dcn31_init.c
··· 101 101 .z10_restore = dcn31_z10_restore, 102 102 .z10_save_init = dcn31_z10_save_init, 103 103 .set_disp_pattern_generator = dcn30_set_disp_pattern_generator, 104 + .optimize_pwr_state = dcn21_optimize_pwr_state, 105 + .exit_optimized_pwr_state = dcn21_exit_optimized_pwr_state, 104 106 .update_visual_confirm_color = dcn20_update_visual_confirm_color, 105 107 }; 106 108
+24 -3
drivers/gpu/drm/amd/display/dc/dcn31/dcn31_resource.c
··· 355 355 clk_src_regs(3, D), 356 356 clk_src_regs(4, E) 357 357 }; 358 + /* pll_id is remapped in dmub; in the driver it is the logical instance */ 359 + static const struct dce110_clk_src_regs clk_src_regs_b0[] = { 360 + clk_src_regs(0, A), 361 + clk_src_regs(1, B), 362 + clk_src_regs(2, F), 363 + clk_src_regs(3, G), 364 + clk_src_regs(4, E) 365 + }; 358 366 359 367 static const struct dce110_clk_src_shift cs_shift = { 360 368 CS_COMMON_MASK_SH_LIST_DCN2_0(__SHIFT) ··· 1002 994 .timing_trace = false, 1003 995 .clock_trace = true, 1004 996 .disable_pplib_clock_request = false, 1005 - .pipe_split_policy = MPC_SPLIT_AVOID, 997 + .pipe_split_policy = MPC_SPLIT_DYNAMIC, 1006 998 .force_single_disp_pipe_split = false, 1007 999 .disable_dcc = DCC_ENABLE, 1008 1000 .vsr_support = true, ··· 2284 2276 dcn30_clock_source_create(ctx, ctx->dc_bios, 2285 2277 CLOCK_SOURCE_COMBO_PHY_PLL1, 2286 2278 &clk_src_regs[1], false); 2287 - pool->base.clock_sources[DCN31_CLK_SRC_PLL2] = 2279 + /* move phypllx_pixclk_resync to dmub next */ 2280 + if (dc->ctx->asic_id.hw_internal_rev == YELLOW_CARP_B0) { 2281 + pool->base.clock_sources[DCN31_CLK_SRC_PLL2] = 2282 + dcn30_clock_source_create(ctx, ctx->dc_bios, 2283 + CLOCK_SOURCE_COMBO_PHY_PLL2, 2284 + &clk_src_regs_b0[2], false); 2285 + pool->base.clock_sources[DCN31_CLK_SRC_PLL3] = 2286 + dcn30_clock_source_create(ctx, ctx->dc_bios, 2287 + CLOCK_SOURCE_COMBO_PHY_PLL3, 2288 + &clk_src_regs_b0[3], false); 2289 + } else { 2290 + pool->base.clock_sources[DCN31_CLK_SRC_PLL2] = 2288 2291 dcn30_clock_source_create(ctx, ctx->dc_bios, 2289 2292 CLOCK_SOURCE_COMBO_PHY_PLL2, 2290 2293 &clk_src_regs[2], false); 2291 - pool->base.clock_sources[DCN31_CLK_SRC_PLL3] = 2294 + pool->base.clock_sources[DCN31_CLK_SRC_PLL3] = 2292 2295 dcn30_clock_source_create(ctx, ctx->dc_bios, 2293 2296 CLOCK_SOURCE_COMBO_PHY_PLL3, 2294 2297 &clk_src_regs[3], false); 2298 + } 2299 + 2295 2300 pool->base.clock_sources[DCN31_CLK_SRC_PLL4] = 2296 2301 dcn30_clock_source_create(ctx, 
ctx->dc_bios, 2297 2302 CLOCK_SOURCE_COMBO_PHY_PLL4,
+31
drivers/gpu/drm/amd/display/dc/dcn31/dcn31_resource.h
··· 49 49 const struct dc_init_data *init_data, 50 50 struct dc *dc); 51 51 52 + /*temp: B0 specific before switch to dcn313 headers*/ 53 + #ifndef regPHYPLLF_PIXCLK_RESYNC_CNTL 54 + #define regPHYPLLF_PIXCLK_RESYNC_CNTL 0x007e 55 + #define regPHYPLLF_PIXCLK_RESYNC_CNTL_BASE_IDX 1 56 + #define regPHYPLLG_PIXCLK_RESYNC_CNTL 0x005f 57 + #define regPHYPLLG_PIXCLK_RESYNC_CNTL_BASE_IDX 1 58 + 59 + //PHYPLLF_PIXCLK_RESYNC_CNTL 60 + #define PHYPLLF_PIXCLK_RESYNC_CNTL__PHYPLLF_PIXCLK_RESYNC_ENABLE__SHIFT 0x0 61 + #define PHYPLLF_PIXCLK_RESYNC_CNTL__PHYPLLF_DEEP_COLOR_DTO_ENABLE_STATUS__SHIFT 0x1 62 + #define PHYPLLF_PIXCLK_RESYNC_CNTL__PHYPLLF_DCCG_DEEP_COLOR_CNTL__SHIFT 0x4 63 + #define PHYPLLF_PIXCLK_RESYNC_CNTL__PHYPLLF_PIXCLK_ENABLE__SHIFT 0x8 64 + #define PHYPLLF_PIXCLK_RESYNC_CNTL__PHYPLLF_PIXCLK_DOUBLE_RATE_ENABLE__SHIFT 0x9 65 + #define PHYPLLF_PIXCLK_RESYNC_CNTL__PHYPLLF_PIXCLK_RESYNC_ENABLE_MASK 0x00000001L 66 + #define PHYPLLF_PIXCLK_RESYNC_CNTL__PHYPLLF_DEEP_COLOR_DTO_ENABLE_STATUS_MASK 0x00000002L 67 + #define PHYPLLF_PIXCLK_RESYNC_CNTL__PHYPLLF_DCCG_DEEP_COLOR_CNTL_MASK 0x00000030L 68 + #define PHYPLLF_PIXCLK_RESYNC_CNTL__PHYPLLF_PIXCLK_ENABLE_MASK 0x00000100L 69 + #define PHYPLLF_PIXCLK_RESYNC_CNTL__PHYPLLF_PIXCLK_DOUBLE_RATE_ENABLE_MASK 0x00000200L 70 + 71 + //PHYPLLG_PIXCLK_RESYNC_CNTL 72 + #define PHYPLLG_PIXCLK_RESYNC_CNTL__PHYPLLG_PIXCLK_RESYNC_ENABLE__SHIFT 0x0 73 + #define PHYPLLG_PIXCLK_RESYNC_CNTL__PHYPLLG_DEEP_COLOR_DTO_ENABLE_STATUS__SHIFT 0x1 74 + #define PHYPLLG_PIXCLK_RESYNC_CNTL__PHYPLLG_DCCG_DEEP_COLOR_CNTL__SHIFT 0x4 75 + #define PHYPLLG_PIXCLK_RESYNC_CNTL__PHYPLLG_PIXCLK_ENABLE__SHIFT 0x8 76 + #define PHYPLLG_PIXCLK_RESYNC_CNTL__PHYPLLG_PIXCLK_DOUBLE_RATE_ENABLE__SHIFT 0x9 77 + #define PHYPLLG_PIXCLK_RESYNC_CNTL__PHYPLLG_PIXCLK_RESYNC_ENABLE_MASK 0x00000001L 78 + #define PHYPLLG_PIXCLK_RESYNC_CNTL__PHYPLLG_DEEP_COLOR_DTO_ENABLE_STATUS_MASK 0x00000002L 79 + #define PHYPLLG_PIXCLK_RESYNC_CNTL__PHYPLLG_DCCG_DEEP_COLOR_CNTL_MASK 0x00000030L 80 
+ #define PHYPLLG_PIXCLK_RESYNC_CNTL__PHYPLLG_PIXCLK_ENABLE_MASK 0x00000100L 81 + #define PHYPLLG_PIXCLK_RESYNC_CNTL__PHYPLLG_PIXCLK_DOUBLE_RATE_ENABLE_MASK 0x00000200L 82 + #endif 52 83 #endif /* _DCN31_RESOURCE_H_ */
+49
drivers/gpu/drm/amd/include/discovery.h
··· 143 143 uint32_t gc_num_gl2a; 144 144 }; 145 145 146 + struct gc_info_v1_1 { 147 + struct gpu_info_header header; 148 + 149 + uint32_t gc_num_se; 150 + uint32_t gc_num_wgp0_per_sa; 151 + uint32_t gc_num_wgp1_per_sa; 152 + uint32_t gc_num_rb_per_se; 153 + uint32_t gc_num_gl2c; 154 + uint32_t gc_num_gprs; 155 + uint32_t gc_num_max_gs_thds; 156 + uint32_t gc_gs_table_depth; 157 + uint32_t gc_gsprim_buff_depth; 158 + uint32_t gc_parameter_cache_depth; 159 + uint32_t gc_double_offchip_lds_buffer; 160 + uint32_t gc_wave_size; 161 + uint32_t gc_max_waves_per_simd; 162 + uint32_t gc_max_scratch_slots_per_cu; 163 + uint32_t gc_lds_size; 164 + uint32_t gc_num_sc_per_se; 165 + uint32_t gc_num_sa_per_se; 166 + uint32_t gc_num_packer_per_sc; 167 + uint32_t gc_num_gl2a; 168 + uint32_t gc_num_tcp_per_sa; 169 + uint32_t gc_num_sdp_interface; 170 + uint32_t gc_num_tcps; 171 + }; 172 + 173 + struct gc_info_v2_0 { 174 + struct gpu_info_header header; 175 + 176 + uint32_t gc_num_se; 177 + uint32_t gc_num_cu_per_sh; 178 + uint32_t gc_num_sh_per_se; 179 + uint32_t gc_num_rb_per_se; 180 + uint32_t gc_num_tccs; 181 + uint32_t gc_num_gprs; 182 + uint32_t gc_num_max_gs_thds; 183 + uint32_t gc_gs_table_depth; 184 + uint32_t gc_gsprim_buff_depth; 185 + uint32_t gc_parameter_cache_depth; 186 + uint32_t gc_double_offchip_lds_buffer; 187 + uint32_t gc_wave_size; 188 + uint32_t gc_max_waves_per_simd; 189 + uint32_t gc_max_scratch_slots_per_cu; 190 + uint32_t gc_lds_size; 191 + uint32_t gc_num_sc_per_se; 192 + uint32_t gc_num_packer_per_sc; 193 + }; 194 + 146 195 typedef struct harvest_info_header { 147 196 uint32_t signature; /* Table Signature */ 148 197 uint32_t version; /* Table Version */
+6 -1
drivers/gpu/drm/amd/pm/powerplay/amd_powerplay.c
··· 1328 1328 pp_dpm_powergate_vce(handle, gate); 1329 1329 break; 1330 1330 case AMD_IP_BLOCK_TYPE_GMC: 1331 - pp_dpm_powergate_mmhub(handle); 1331 + /* 1332 + * For now, this is only used on PICASSO, 1333 + * and only the "gate" operation is supported. 1334 + */ 1335 + if (gate) 1336 + pp_dpm_powergate_mmhub(handle); 1332 1337 break; 1333 1338 case AMD_IP_BLOCK_TYPE_GFX: 1334 1339 ret = pp_dpm_powergate_gfx(handle, gate);
+2 -5
drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
··· 1568 1568 1569 1569 smu->watermarks_bitmap &= ~(WATERMARKS_LOADED); 1570 1570 1571 - /* skip CGPG when in S0ix */ 1572 - if (smu->is_apu && !adev->in_s0ix) 1573 - smu_set_gfx_cgpg(&adev->smu, false); 1571 + smu_set_gfx_cgpg(&adev->smu, false); 1574 1572 1575 1573 return 0; 1576 1574 } ··· 1599 1601 return ret; 1600 1602 } 1601 1603 1602 - if (smu->is_apu) 1603 - smu_set_gfx_cgpg(&adev->smu, true); 1604 + smu_set_gfx_cgpg(&adev->smu, true); 1604 1605 1605 1606 smu->disable_uclk_switch = 0; 1606 1607
+5 -1
drivers/gpu/drm/amd/pm/swsmu/smu12/smu_v12_0.c
··· 120 120 121 121 int smu_v12_0_set_gfx_cgpg(struct smu_context *smu, bool enable) 122 122 { 123 - if (!(smu->adev->pg_flags & AMD_PG_SUPPORT_GFX_PG)) 123 + /* So far, SMU12 is only implemented for the Renoir series, so no APU check is needed here. */ 124 + if (!(smu->adev->pg_flags & AMD_PG_SUPPORT_GFX_PG) || smu->adev->in_s0ix) 124 125 return 0; 125 126 126 127 return smu_cmn_send_smc_msg_with_param(smu, ··· 191 190 192 191 kfree(smu_table->watermarks_table); 193 192 smu_table->watermarks_table = NULL; 193 + 194 + kfree(smu_table->gpu_metrics_table); 195 + smu_table->gpu_metrics_table = NULL; 194 196 195 197 return 0; 196 198 }
+1 -1
drivers/gpu/drm/amd/pm/swsmu/smu13/aldebaran_ppt.c
··· 1621 1621 { 1622 1622 return smu_cmn_send_smc_msg_with_param(smu, 1623 1623 SMU_MSG_GmiPwrDnControl, 1624 - en ? 1 : 0, 1624 + en ? 0 : 1, 1625 1625 NULL); 1626 1626 } 1627 1627
+3
drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
··· 198 198 199 199 int smu_v13_0_check_fw_version(struct smu_context *smu) 200 200 { 201 + struct amdgpu_device *adev = smu->adev; 201 202 uint32_t if_version = 0xff, smu_version = 0xff; 202 203 uint16_t smu_major; 203 204 uint8_t smu_minor, smu_debug; ··· 211 210 smu_major = (smu_version >> 16) & 0xffff; 212 211 smu_minor = (smu_version >> 8) & 0xff; 213 212 smu_debug = (smu_version >> 0) & 0xff; 213 + if (smu->is_apu) 214 + adev->pm.fw_version = smu_version; 214 215 215 216 switch (smu->adev->ip_versions[MP1_HWIP][0]) { 216 217 case IP_VERSION(13, 0, 2):
+4 -1
drivers/gpu/drm/ast/ast_mode.c
··· 1121 1121 if (crtc->state) 1122 1122 crtc->funcs->atomic_destroy_state(crtc, crtc->state); 1123 1123 1124 - __drm_atomic_helper_crtc_reset(crtc, &ast_state->base); 1124 + if (ast_state) 1125 + __drm_atomic_helper_crtc_reset(crtc, &ast_state->base); 1126 + else 1127 + __drm_atomic_helper_crtc_reset(crtc, NULL); 1125 1128 } 1126 1129 1127 1130 static struct drm_crtc_state *
+7 -1
drivers/gpu/drm/drm_fb_helper.c
··· 1743 1743 sizes->fb_width, sizes->fb_height); 1744 1744 1745 1745 info->par = fb_helper; 1746 - snprintf(info->fix.id, sizeof(info->fix.id), "%s", 1746 + /* 1747 + * A DRM driver's fbdev emulation device name can be confusing if the 1748 + * driver name also ends in "drm", leading to names such as 1749 + * "simpledrmdrmfb" in /proc/fb. Unfortunately, it's an uAPI and can't 1750 + * be changed because user-space tools (e.g. pm-utils) match against it. 1751 + */ 1752 + snprintf(info->fix.id, sizeof(info->fix.id), "%sdrmfb", 1747 1753 fb_helper->dev->driver->name); 1748 1754 1749 1755 }
+1 -1
drivers/gpu/drm/i915/display/intel_dmc.c
··· 596 596 continue; 597 597 598 598 offset = readcount + dmc->dmc_info[id].dmc_offset * 4; 599 - if (fw->size - offset < 0) { 599 + if (offset > fw->size) { 600 600 drm_err(&dev_priv->drm, "Reading beyond the fw_size\n"); 601 601 continue; 602 602 }
+1 -1
drivers/gpu/drm/i915/gem/i915_gem_context.c
··· 564 564 container_of_user(base, typeof(*ext), base); 565 565 const struct set_proto_ctx_engines *set = data; 566 566 struct drm_i915_private *i915 = set->i915; 567 + struct i915_engine_class_instance prev_engine; 567 568 u64 flags; 568 569 int err = 0, n, i, j; 569 570 u16 slot, width, num_siblings; ··· 630 629 /* Create contexts / engines */ 631 630 for (i = 0; i < width; ++i) { 632 631 intel_engine_mask_t current_mask = 0; 633 - struct i915_engine_class_instance prev_engine; 634 632 635 633 for (j = 0; j < num_siblings; ++j) { 636 634 struct i915_engine_class_instance ci;
+1 -1
drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
··· 3017 3017 fence_array = dma_fence_array_create(eb->num_batches, 3018 3018 fences, 3019 3019 eb->context->parallel.fence_context, 3020 - eb->context->parallel.seqno, 3020 + eb->context->parallel.seqno++, 3021 3021 false); 3022 3022 if (!fence_array) { 3023 3023 kfree(fences);
+3 -3
drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
··· 1662 1662 GEM_BUG_ON(intel_context_is_parent(cn)); 1663 1663 1664 1664 list_del_init(&cn->guc_id.link); 1665 - ce->guc_id = cn->guc_id; 1665 + ce->guc_id.id = cn->guc_id.id; 1666 1666 1667 - spin_lock(&ce->guc_state.lock); 1667 + spin_lock(&cn->guc_state.lock); 1668 1668 clr_context_registered(cn); 1669 - spin_unlock(&ce->guc_state.lock); 1669 + spin_unlock(&cn->guc_state.lock); 1670 1670 1671 1671 set_context_guc_id_invalid(cn); 1672 1672
+7 -5
drivers/gpu/drm/mediatek/mtk_hdmi.c
··· 1224 1224 return MODE_BAD; 1225 1225 } 1226 1226 1227 - if (hdmi->conf->cea_modes_only && !drm_match_cea_mode(mode)) 1228 - return MODE_BAD; 1227 + if (hdmi->conf) { 1228 + if (hdmi->conf->cea_modes_only && !drm_match_cea_mode(mode)) 1229 + return MODE_BAD; 1229 1230 1230 - if (hdmi->conf->max_mode_clock && 1231 - mode->clock > hdmi->conf->max_mode_clock) 1232 - return MODE_CLOCK_HIGH; 1231 + if (hdmi->conf->max_mode_clock && 1232 + mode->clock > hdmi->conf->max_mode_clock) 1233 + return MODE_CLOCK_HIGH; 1234 + } 1233 1235 1234 1236 if (mode->clock < 27000) 1235 1237 return MODE_CLOCK_LOW;
+28 -26
drivers/gpu/drm/nouveau/nouveau_fence.c
··· 353 353 354 354 if (ret) 355 355 return ret; 356 + 357 + fobj = NULL; 358 + } else { 359 + fobj = dma_resv_shared_list(resv); 356 360 } 357 361 358 - fobj = dma_resv_shared_list(resv); 359 - fence = dma_resv_excl_fence(resv); 360 - 361 - if (fence) { 362 - struct nouveau_channel *prev = NULL; 363 - bool must_wait = true; 364 - 365 - f = nouveau_local_fence(fence, chan->drm); 366 - if (f) { 367 - rcu_read_lock(); 368 - prev = rcu_dereference(f->channel); 369 - if (prev && (prev == chan || fctx->sync(f, prev, chan) == 0)) 370 - must_wait = false; 371 - rcu_read_unlock(); 372 - } 373 - 374 - if (must_wait) 375 - ret = dma_fence_wait(fence, intr); 376 - 377 - return ret; 378 - } 379 - 380 - if (!exclusive || !fobj) 381 - return ret; 382 - 383 - for (i = 0; i < fobj->shared_count && !ret; ++i) { 362 + /* Waiting for the exclusive fence first causes performance regressions 363 + * under some circumstances. So manually wait for the shared ones first. 364 + */ 365 + for (i = 0; i < (fobj ? fobj->shared_count : 0) && !ret; ++i) { 384 366 struct nouveau_channel *prev = NULL; 385 367 bool must_wait = true; 386 368 ··· 380 398 381 399 if (must_wait) 382 400 ret = dma_fence_wait(fence, intr); 401 + } 402 + 403 + fence = dma_resv_excl_fence(resv); 404 + if (fence) { 405 + struct nouveau_channel *prev = NULL; 406 + bool must_wait = true; 407 + 408 + f = nouveau_local_fence(fence, chan->drm); 409 + if (f) { 410 + rcu_read_lock(); 411 + prev = rcu_dereference(f->channel); 412 + if (prev && (prev == chan || fctx->sync(f, prev, chan) == 0)) 413 + must_wait = false; 414 + rcu_read_unlock(); 415 + } 416 + 417 + if (must_wait) 418 + ret = dma_fence_wait(fence, intr); 419 + 420 + return ret; 383 421 } 384 422 385 423 return ret;
+1 -1
drivers/gpu/drm/tiny/simpledrm.c
··· 458 458 { 459 459 struct drm_display_mode mode = { SIMPLEDRM_MODE(width, height) }; 460 460 461 - mode.clock = 60 /* Hz */ * mode.hdisplay * mode.vdisplay; 461 + mode.clock = mode.hdisplay * mode.vdisplay * 60 / 1000 /* kHz */; 462 462 drm_mode_set_name(&mode); 463 463 464 464 return mode;
+15
drivers/hid/hid-holtek-mouse.c
··· 65 65 static int holtek_mouse_probe(struct hid_device *hdev, 66 66 const struct hid_device_id *id) 67 67 { 68 + int ret; 69 + 68 70 if (!hid_is_usb(hdev)) 69 71 return -EINVAL; 72 + 73 + ret = hid_parse(hdev); 74 + if (ret) { 75 + hid_err(hdev, "hid parse failed: %d\n", ret); 76 + return ret; 77 + } 78 + 79 + ret = hid_hw_start(hdev, HID_CONNECT_DEFAULT); 80 + if (ret) { 81 + hid_err(hdev, "hw start failed: %d\n", ret); 82 + return ret; 83 + } 84 + 70 85 return 0; 71 86 } 72 87
+3
drivers/hid/hid-vivaldi.c
··· 57 57 int ret; 58 58 59 59 drvdata = devm_kzalloc(&hdev->dev, sizeof(*drvdata), GFP_KERNEL); 60 + if (!drvdata) 61 + return -ENOMEM; 62 + 60 63 hid_set_drvdata(hdev, drvdata); 61 64 62 65 ret = hid_parse(hdev);
+1
drivers/hv/Kconfig
··· 19 19 config HYPERV_UTILS 20 20 tristate "Microsoft Hyper-V Utilities driver" 21 21 depends on HYPERV && CONNECTOR && NLS 22 + depends on PTP_1588_CLOCK_OPTIONAL 22 23 help 23 24 Select this option to enable the Hyper-V Utilities. 24 25
+62 -44
drivers/hwmon/lm90.c
··· 35 35 * explicitly as max6659, or if its address is not 0x4c. 36 36 * These chips lack the remote temperature offset feature. 37 37 * 38 - * This driver also supports the MAX6654 chip made by Maxim. This chip can 39 - * be at 9 different addresses, similar to MAX6680/MAX6681. The MAX6654 is 40 - * otherwise similar to MAX6657/MAX6658/MAX6659. Extended range is available 41 - * by setting the configuration register accordingly, and is done during 42 - * initialization. Extended precision is only available at conversion rates 43 - * of 1 Hz and slower. Note that extended precision is not enabled by 44 - * default, as this driver initializes all chips to 2 Hz by design. 38 + * This driver also supports the MAX6654 chip made by Maxim. This chip can be 39 + * at 9 different addresses, similar to MAX6680/MAX6681. The MAX6654 is similar 40 + * to MAX6657/MAX6658/MAX6659, but does not support critical temperature 41 + * limits. Extended range is available by setting the configuration register 42 + * accordingly, and is done during initialization. Extended precision is only 43 + * available at conversion rates of 1 Hz and slower. Note that extended 44 + * precision is not enabled by default, as this driver initializes all chips 45 + * to 2 Hz by design. 45 46 * 46 47 * This driver also supports the MAX6646, MAX6647, MAX6648, MAX6649 and 47 48 * MAX6692 chips made by Maxim. 
These are again similar to the LM86, ··· 189 188 #define LM90_HAVE_BROKEN_ALERT (1 << 7) /* Broken alert */ 190 189 #define LM90_HAVE_EXTENDED_TEMP (1 << 8) /* extended temperature support*/ 191 190 #define LM90_PAUSE_FOR_CONFIG (1 << 9) /* Pause conversion for config */ 191 + #define LM90_HAVE_CRIT (1 << 10)/* Chip supports CRIT/OVERT register */ 192 + #define LM90_HAVE_CRIT_ALRM_SWP (1 << 11)/* critical alarm bits swapped */ 192 193 193 194 /* LM90 status */ 194 195 #define LM90_STATUS_LTHRM (1 << 0) /* local THERM limit tripped */ ··· 200 197 #define LM90_STATUS_RHIGH (1 << 4) /* remote high temp limit tripped */ 201 198 #define LM90_STATUS_LLOW (1 << 5) /* local low temp limit tripped */ 202 199 #define LM90_STATUS_LHIGH (1 << 6) /* local high temp limit tripped */ 200 + #define LM90_STATUS_BUSY (1 << 7) /* conversion is ongoing */ 203 201 204 202 #define MAX6696_STATUS2_R2THRM (1 << 1) /* remote2 THERM limit tripped */ 205 203 #define MAX6696_STATUS2_R2OPEN (1 << 2) /* remote2 is an open circuit */ ··· 358 354 static const struct lm90_params lm90_params[] = { 359 355 [adm1032] = { 360 356 .flags = LM90_HAVE_OFFSET | LM90_HAVE_REM_LIMIT_EXT 361 - | LM90_HAVE_BROKEN_ALERT, 357 + | LM90_HAVE_BROKEN_ALERT | LM90_HAVE_CRIT, 362 358 .alert_alarms = 0x7c, 363 359 .max_convrate = 10, 364 360 }, 365 361 [adt7461] = { 366 362 .flags = LM90_HAVE_OFFSET | LM90_HAVE_REM_LIMIT_EXT 367 - | LM90_HAVE_BROKEN_ALERT | LM90_HAVE_EXTENDED_TEMP, 363 + | LM90_HAVE_BROKEN_ALERT | LM90_HAVE_EXTENDED_TEMP 364 + | LM90_HAVE_CRIT, 368 365 .alert_alarms = 0x7c, 369 366 .max_convrate = 10, 370 367 }, 371 368 [g781] = { 372 369 .flags = LM90_HAVE_OFFSET | LM90_HAVE_REM_LIMIT_EXT 373 - | LM90_HAVE_BROKEN_ALERT, 370 + | LM90_HAVE_BROKEN_ALERT | LM90_HAVE_CRIT, 374 371 .alert_alarms = 0x7c, 375 372 .max_convrate = 8, 376 373 }, 377 374 [lm86] = { 378 - .flags = LM90_HAVE_OFFSET | LM90_HAVE_REM_LIMIT_EXT, 375 + .flags = LM90_HAVE_OFFSET | LM90_HAVE_REM_LIMIT_EXT 376 + | LM90_HAVE_CRIT, 379 377 
.alert_alarms = 0x7b, 380 378 .max_convrate = 9, 381 379 }, 382 380 [lm90] = { 383 - .flags = LM90_HAVE_OFFSET | LM90_HAVE_REM_LIMIT_EXT, 381 + .flags = LM90_HAVE_OFFSET | LM90_HAVE_REM_LIMIT_EXT 382 + | LM90_HAVE_CRIT, 384 383 .alert_alarms = 0x7b, 385 384 .max_convrate = 9, 386 385 }, 387 386 [lm99] = { 388 - .flags = LM90_HAVE_OFFSET | LM90_HAVE_REM_LIMIT_EXT, 387 + .flags = LM90_HAVE_OFFSET | LM90_HAVE_REM_LIMIT_EXT 388 + | LM90_HAVE_CRIT, 389 389 .alert_alarms = 0x7b, 390 390 .max_convrate = 9, 391 391 }, 392 392 [max6646] = { 393 + .flags = LM90_HAVE_CRIT, 393 394 .alert_alarms = 0x7c, 394 395 .max_convrate = 6, 395 396 .reg_local_ext = MAX6657_REG_R_LOCAL_TEMPL, ··· 405 396 .reg_local_ext = MAX6657_REG_R_LOCAL_TEMPL, 406 397 }, 407 398 [max6657] = { 408 - .flags = LM90_PAUSE_FOR_CONFIG, 399 + .flags = LM90_PAUSE_FOR_CONFIG | LM90_HAVE_CRIT, 409 400 .alert_alarms = 0x7c, 410 401 .max_convrate = 8, 411 402 .reg_local_ext = MAX6657_REG_R_LOCAL_TEMPL, 412 403 }, 413 404 [max6659] = { 414 - .flags = LM90_HAVE_EMERGENCY, 405 + .flags = LM90_HAVE_EMERGENCY | LM90_HAVE_CRIT, 415 406 .alert_alarms = 0x7c, 416 407 .max_convrate = 8, 417 408 .reg_local_ext = MAX6657_REG_R_LOCAL_TEMPL, 418 409 }, 419 410 [max6680] = { 420 - .flags = LM90_HAVE_OFFSET, 411 + .flags = LM90_HAVE_OFFSET | LM90_HAVE_CRIT 412 + | LM90_HAVE_CRIT_ALRM_SWP, 421 413 .alert_alarms = 0x7c, 422 414 .max_convrate = 7, 423 415 }, 424 416 [max6696] = { 425 417 .flags = LM90_HAVE_EMERGENCY 426 - | LM90_HAVE_EMERGENCY_ALARM | LM90_HAVE_TEMP3, 418 + | LM90_HAVE_EMERGENCY_ALARM | LM90_HAVE_TEMP3 | LM90_HAVE_CRIT, 427 419 .alert_alarms = 0x1c7c, 428 420 .max_convrate = 6, 429 421 .reg_local_ext = MAX6657_REG_R_LOCAL_TEMPL, 430 422 }, 431 423 [w83l771] = { 432 - .flags = LM90_HAVE_OFFSET | LM90_HAVE_REM_LIMIT_EXT, 424 + .flags = LM90_HAVE_OFFSET | LM90_HAVE_REM_LIMIT_EXT | LM90_HAVE_CRIT, 433 425 .alert_alarms = 0x7c, 434 426 .max_convrate = 8, 435 427 }, 436 428 [sa56004] = { 437 - .flags = LM90_HAVE_OFFSET 
| LM90_HAVE_REM_LIMIT_EXT, 429 + .flags = LM90_HAVE_OFFSET | LM90_HAVE_REM_LIMIT_EXT | LM90_HAVE_CRIT, 438 430 .alert_alarms = 0x7b, 439 431 .max_convrate = 9, 440 432 .reg_local_ext = SA56004_REG_R_LOCAL_TEMPL, 441 433 }, 442 434 [tmp451] = { 443 435 .flags = LM90_HAVE_OFFSET | LM90_HAVE_REM_LIMIT_EXT 444 - | LM90_HAVE_BROKEN_ALERT | LM90_HAVE_EXTENDED_TEMP, 436 + | LM90_HAVE_BROKEN_ALERT | LM90_HAVE_EXTENDED_TEMP | LM90_HAVE_CRIT, 445 437 .alert_alarms = 0x7c, 446 438 .max_convrate = 9, 447 439 .reg_local_ext = TMP451_REG_R_LOCAL_TEMPL, 448 440 }, 449 441 [tmp461] = { 450 442 .flags = LM90_HAVE_OFFSET | LM90_HAVE_REM_LIMIT_EXT 451 - | LM90_HAVE_BROKEN_ALERT | LM90_HAVE_EXTENDED_TEMP, 443 + | LM90_HAVE_BROKEN_ALERT | LM90_HAVE_EXTENDED_TEMP | LM90_HAVE_CRIT, 452 444 .alert_alarms = 0x7c, 453 445 .max_convrate = 9, 454 446 .reg_local_ext = TMP451_REG_R_LOCAL_TEMPL, ··· 678 668 struct i2c_client *client = data->client; 679 669 int val; 680 670 681 - val = lm90_read_reg(client, LM90_REG_R_LOCAL_CRIT); 682 - if (val < 0) 683 - return val; 684 - data->temp8[LOCAL_CRIT] = val; 671 + if (data->flags & LM90_HAVE_CRIT) { 672 + val = lm90_read_reg(client, LM90_REG_R_LOCAL_CRIT); 673 + if (val < 0) 674 + return val; 675 + data->temp8[LOCAL_CRIT] = val; 685 676 686 - val = lm90_read_reg(client, LM90_REG_R_REMOTE_CRIT); 687 - if (val < 0) 688 - return val; 689 - data->temp8[REMOTE_CRIT] = val; 677 + val = lm90_read_reg(client, LM90_REG_R_REMOTE_CRIT); 678 + if (val < 0) 679 + return val; 680 + data->temp8[REMOTE_CRIT] = val; 690 681 691 - val = lm90_read_reg(client, LM90_REG_R_TCRIT_HYST); 692 - if (val < 0) 693 - return val; 694 - data->temp_hyst = val; 682 + val = lm90_read_reg(client, LM90_REG_R_TCRIT_HYST); 683 + if (val < 0) 684 + return val; 685 + data->temp_hyst = val; 686 + } 695 687 696 688 val = lm90_read_reg(client, LM90_REG_R_REMOTE_LOWH); 697 689 if (val < 0) ··· 821 809 val = lm90_read_reg(client, LM90_REG_R_STATUS); 822 810 if (val < 0) 823 811 return val; 824 - 
data->alarms = val; /* lower 8 bit of alarms */ 812 + data->alarms = val & ~LM90_STATUS_BUSY; 825 813 826 814 if (data->kind == max6696) { 827 815 val = lm90_select_remote_channel(data, 1); ··· 1172 1160 else 1173 1161 temp = temp_from_s8(data->temp8[LOCAL_CRIT]); 1174 1162 1175 - /* prevent integer underflow */ 1176 - val = max(val, -128000l); 1163 + /* prevent integer overflow/underflow */ 1164 + val = clamp_val(val, -128000l, 255000l); 1177 1165 1178 1166 data->temp_hyst = hyst_to_reg(temp - val); 1179 1167 err = i2c_smbus_write_byte_data(client, LM90_REG_W_TCRIT_HYST, ··· 1204 1192 static const u8 lm90_min_alarm_bits[3] = { 5, 3, 11 }; 1205 1193 static const u8 lm90_max_alarm_bits[3] = { 6, 4, 12 }; 1206 1194 static const u8 lm90_crit_alarm_bits[3] = { 0, 1, 9 }; 1195 + static const u8 lm90_crit_alarm_bits_swapped[3] = { 1, 0, 9 }; 1207 1196 static const u8 lm90_emergency_alarm_bits[3] = { 15, 13, 14 }; 1208 1197 static const u8 lm90_fault_bits[3] = { 0, 2, 10 }; 1209 1198 ··· 1230 1217 *val = (data->alarms >> lm90_max_alarm_bits[channel]) & 1; 1231 1218 break; 1232 1219 case hwmon_temp_crit_alarm: 1233 - *val = (data->alarms >> lm90_crit_alarm_bits[channel]) & 1; 1220 + if (data->flags & LM90_HAVE_CRIT_ALRM_SWP) 1221 + *val = (data->alarms >> lm90_crit_alarm_bits_swapped[channel]) & 1; 1222 + else 1223 + *val = (data->alarms >> lm90_crit_alarm_bits[channel]) & 1; 1234 1224 break; 1235 1225 case hwmon_temp_emergency_alarm: 1236 1226 *val = (data->alarms >> lm90_emergency_alarm_bits[channel]) & 1; ··· 1481 1465 if (man_id < 0 || chip_id < 0 || config1 < 0 || convrate < 0) 1482 1466 return -ENODEV; 1483 1467 1484 - if (man_id == 0x01 || man_id == 0x5C || man_id == 0x41) { 1468 + if (man_id == 0x01 || man_id == 0x5C || man_id == 0xA1) { 1485 1469 config2 = i2c_smbus_read_byte_data(client, LM90_REG_R_CONFIG2); 1486 1470 if (config2 < 0) 1487 1471 return -ENODEV; 1488 - } else 1489 - config2 = 0; /* Make compiler happy */ 1472 + } 1490 1473 1491 1474 if ((address == 
0x4C || address == 0x4D) 1492 1475 && man_id == 0x01) { /* National Semiconductor */ ··· 1918 1903 info->config = data->channel_config; 1919 1904 1920 1905 data->channel_config[0] = HWMON_T_INPUT | HWMON_T_MIN | HWMON_T_MAX | 1921 - HWMON_T_CRIT | HWMON_T_CRIT_HYST | HWMON_T_MIN_ALARM | 1922 - HWMON_T_MAX_ALARM | HWMON_T_CRIT_ALARM; 1906 + HWMON_T_MIN_ALARM | HWMON_T_MAX_ALARM; 1923 1907 data->channel_config[1] = HWMON_T_INPUT | HWMON_T_MIN | HWMON_T_MAX | 1924 - HWMON_T_CRIT | HWMON_T_CRIT_HYST | HWMON_T_MIN_ALARM | 1925 - HWMON_T_MAX_ALARM | HWMON_T_CRIT_ALARM | HWMON_T_FAULT; 1908 + HWMON_T_MIN_ALARM | HWMON_T_MAX_ALARM | HWMON_T_FAULT; 1909 + 1910 + if (data->flags & LM90_HAVE_CRIT) { 1911 + data->channel_config[0] |= HWMON_T_CRIT | HWMON_T_CRIT_ALARM | HWMON_T_CRIT_HYST; 1912 + data->channel_config[1] |= HWMON_T_CRIT | HWMON_T_CRIT_ALARM | HWMON_T_CRIT_HYST; 1913 + } 1926 1914 1927 1915 if (data->flags & LM90_HAVE_OFFSET) 1928 1916 data->channel_config[1] |= HWMON_T_OFFSET;
+3
drivers/i2c/i2c-dev.c
··· 535 535 sizeof(rdwr_arg))) 536 536 return -EFAULT; 537 537 538 + if (!rdwr_arg.msgs || rdwr_arg.nmsgs == 0) 539 + return -EINVAL; 540 + 538 541 if (rdwr_arg.nmsgs > I2C_RDWR_IOCTL_MAX_MSGS) 539 542 return -EINVAL; 540 543
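The i2c-dev hunk above hardens the I2C_RDWR ioctl by rejecting a NULL message array and a zero message count before anything else happens, in addition to the pre-existing cap on the number of messages. A stand-alone sketch of that validation order (struct and function names here are illustrative, not the kernel's):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Same per-call limit the kernel applies to one I2C_RDWR ioctl. */
#define I2C_RDWR_IOCTL_MAX_MSGS 42

/* Hypothetical mirror of struct i2c_rdwr_ioctl_data, for illustration. */
struct rdwr_args {
	void *msgs;	/* array of messages from user space */
	uint32_t nmsgs;	/* number of entries in msgs */
};

/* Return 0 when the request is well-formed, -1 (think -EINVAL) otherwise. */
static int validate_rdwr(const struct rdwr_args *arg)
{
	/* New guard from the patch: no array, or nothing to do */
	if (arg->msgs == NULL || arg->nmsgs == 0)
		return -1;

	/* Pre-existing cap on the per-call message count */
	if (arg->nmsgs > I2C_RDWR_IOCTL_MAX_MSGS)
		return -1;

	return 0;
}
```

The point of ordering the NULL/zero check first is that later code sizes a copy by `nmsgs` and dereferences `msgs`; both degenerate inputs must be rejected before that.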
+57 -7
drivers/infiniband/hw/hns/hns_roce_hw_v2.c
··· 1594 1594 { 1595 1595 struct hns_roce_cmq_desc desc; 1596 1596 struct hns_roce_cmq_req *req = (struct hns_roce_cmq_req *)desc.data; 1597 + u32 clock_cycles_of_1us; 1597 1598 1598 1599 hns_roce_cmq_setup_basic_desc(&desc, HNS_ROCE_OPC_CFG_GLOBAL_PARAM, 1599 1600 false); 1600 1601 1601 - hr_reg_write(req, CFG_GLOBAL_PARAM_1US_CYCLES, 0x3e8); 1602 + if (hr_dev->pci_dev->revision == PCI_REVISION_ID_HIP08) 1603 + clock_cycles_of_1us = HNS_ROCE_1NS_CFG; 1604 + else 1605 + clock_cycles_of_1us = HNS_ROCE_1US_CFG; 1606 + 1607 + hr_reg_write(req, CFG_GLOBAL_PARAM_1US_CYCLES, clock_cycles_of_1us); 1602 1608 hr_reg_write(req, CFG_GLOBAL_PARAM_UDP_PORT, ROCE_V2_UDP_DPORT); 1603 1609 1604 1610 return hns_roce_cmq_send(hr_dev, &desc, 1); ··· 4808 4802 return ret; 4809 4803 } 4810 4804 4805 + static bool check_qp_timeout_cfg_range(struct hns_roce_dev *hr_dev, u8 *timeout) 4806 + { 4807 + #define QP_ACK_TIMEOUT_MAX_HIP08 20 4808 + #define QP_ACK_TIMEOUT_OFFSET 10 4809 + #define QP_ACK_TIMEOUT_MAX 31 4810 + 4811 + if (hr_dev->pci_dev->revision == PCI_REVISION_ID_HIP08) { 4812 + if (*timeout > QP_ACK_TIMEOUT_MAX_HIP08) { 4813 + ibdev_warn(&hr_dev->ib_dev, 4814 + "Local ACK timeout shall be 0 to 20.\n"); 4815 + return false; 4816 + } 4817 + *timeout += QP_ACK_TIMEOUT_OFFSET; 4818 + } else if (hr_dev->pci_dev->revision > PCI_REVISION_ID_HIP08) { 4819 + if (*timeout > QP_ACK_TIMEOUT_MAX) { 4820 + ibdev_warn(&hr_dev->ib_dev, 4821 + "Local ACK timeout shall be 0 to 31.\n"); 4822 + return false; 4823 + } 4824 + } 4825 + 4826 + return true; 4827 + } 4828 + 4811 4829 static int hns_roce_v2_set_opt_fields(struct ib_qp *ibqp, 4812 4830 const struct ib_qp_attr *attr, 4813 4831 int attr_mask, ··· 4841 4811 struct hns_roce_dev *hr_dev = to_hr_dev(ibqp->device); 4842 4812 struct hns_roce_qp *hr_qp = to_hr_qp(ibqp); 4843 4813 int ret = 0; 4814 + u8 timeout; 4844 4815 4845 4816 if (attr_mask & IB_QP_AV) { 4846 4817 ret = hns_roce_v2_set_path(ibqp, attr, attr_mask, context, ··· 4851 4820 } 4852 
4821 4853 4822 if (attr_mask & IB_QP_TIMEOUT) { 4854 - if (attr->timeout < 31) { 4855 - hr_reg_write(context, QPC_AT, attr->timeout); 4823 + timeout = attr->timeout; 4824 + if (check_qp_timeout_cfg_range(hr_dev, &timeout)) { 4825 + hr_reg_write(context, QPC_AT, timeout); 4856 4826 hr_reg_clear(qpc_mask, QPC_AT); 4857 - } else { 4858 - ibdev_warn(&hr_dev->ib_dev, 4859 - "Local ACK timeout shall be 0 to 30.\n"); 4860 4827 } 4861 4828 } 4862 4829 ··· 4911 4882 set_access_flags(hr_qp, context, qpc_mask, attr, attr_mask); 4912 4883 4913 4884 if (attr_mask & IB_QP_MIN_RNR_TIMER) { 4914 - hr_reg_write(context, QPC_MIN_RNR_TIME, attr->min_rnr_timer); 4885 + hr_reg_write(context, QPC_MIN_RNR_TIME, 4886 + hr_dev->pci_dev->revision == PCI_REVISION_ID_HIP08 ? 4887 + HNS_ROCE_RNR_TIMER_10NS : attr->min_rnr_timer); 4915 4888 hr_reg_clear(qpc_mask, QPC_MIN_RNR_TIME); 4916 4889 } 4917 4890 ··· 5530 5499 5531 5500 hr_reg_write(cq_context, CQC_CQ_MAX_CNT, cq_count); 5532 5501 hr_reg_clear(cqc_mask, CQC_CQ_MAX_CNT); 5502 + 5503 + if (hr_dev->pci_dev->revision == PCI_REVISION_ID_HIP08) { 5504 + if (cq_period * HNS_ROCE_CLOCK_ADJUST > USHRT_MAX) { 5505 + dev_info(hr_dev->dev, 5506 + "cq_period(%u) reached the upper limit, adjusted to 65.\n", 5507 + cq_period); 5508 + cq_period = HNS_ROCE_MAX_CQ_PERIOD; 5509 + } 5510 + cq_period *= HNS_ROCE_CLOCK_ADJUST; 5511 + } 5533 5512 hr_reg_write(cq_context, CQC_CQ_PERIOD, cq_period); 5534 5513 hr_reg_clear(cqc_mask, CQC_CQ_PERIOD); 5535 5514 ··· 5934 5893 to_hr_hw_page_shift(eq->mtr.hem_cfg.buf_pg_shift)); 5935 5894 hr_reg_write(eqc, EQC_EQ_PROD_INDX, HNS_ROCE_EQ_INIT_PROD_IDX); 5936 5895 hr_reg_write(eqc, EQC_EQ_MAX_CNT, eq->eq_max_cnt); 5896 + 5897 + if (hr_dev->pci_dev->revision == PCI_REVISION_ID_HIP08) { 5898 + if (eq->eq_period * HNS_ROCE_CLOCK_ADJUST > USHRT_MAX) { 5899 + dev_info(hr_dev->dev, "eq_period(%u) reached the upper limit, adjusted to 65.\n", 5900 + eq->eq_period); 5901 + eq->eq_period = HNS_ROCE_MAX_EQ_PERIOD; 5902 + } 5903 + 
eq->eq_period *= HNS_ROCE_CLOCK_ADJUST; 5904 + } 5937 5905 5938 5906 hr_reg_write(eqc, EQC_EQ_PERIOD, eq->eq_period); 5939 5907 hr_reg_write(eqc, EQC_EQE_REPORT_TIMER, HNS_ROCE_EQ_INIT_REPORT_TIMER);
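On HIP08 the series above scales CQ/EQ coalescing periods by HNS_ROCE_CLOCK_ADJUST (1000) and first caps the input at 65 so the scaled value still fits the 16-bit register field (65 * 1000 <= USHRT_MAX). A minimal sketch of that clamp, reusing the constants added in hns_roce_hw_v2.h (the helper name is made up; the driver does this inline):

```c
#include <assert.h>
#include <limits.h>

/* Constants as added in hns_roce_hw_v2.h by this series */
#define HNS_ROCE_CLOCK_ADJUST	1000
#define HNS_ROCE_MAX_CQ_PERIOD	65

/*
 * Convert a CQ moderation period into HIP08 clock ticks, capping the
 * input so the scaled result still fits a 16-bit register field.
 * The driver logs an informational message when it has to cap.
 */
static unsigned int scale_cq_period(unsigned int cq_period)
{
	if ((unsigned long long)cq_period * HNS_ROCE_CLOCK_ADJUST > USHRT_MAX)
		cq_period = HNS_ROCE_MAX_CQ_PERIOD;
	return cq_period * HNS_ROCE_CLOCK_ADJUST;
}
```

The widened multiply in the comparison is a precaution in this sketch so the overflow check itself cannot overflow; the cap value is chosen as the largest period whose scaled value still fits the field.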
+8
drivers/infiniband/hw/hns/hns_roce_hw_v2.h
··· 1444 1444 struct list_head node; /* all dips are on a list */ 1445 1445 }; 1446 1446 1447 + /* only for RNR timeout issue of HIP08 */ 1448 + #define HNS_ROCE_CLOCK_ADJUST 1000 1449 + #define HNS_ROCE_MAX_CQ_PERIOD 65 1450 + #define HNS_ROCE_MAX_EQ_PERIOD 65 1451 + #define HNS_ROCE_RNR_TIMER_10NS 1 1452 + #define HNS_ROCE_1US_CFG 999 1453 + #define HNS_ROCE_1NS_CFG 0 1454 + 1447 1455 #define HNS_ROCE_AEQ_DEFAULT_BURST_NUM 0x0 1448 1456 #define HNS_ROCE_AEQ_DEFAULT_INTERVAL 0x0 1449 1457 #define HNS_ROCE_CEQ_DEFAULT_BURST_NUM 0x0
+1 -1
drivers/infiniband/hw/hns/hns_roce_srq.c
··· 259 259 260 260 static void free_srq_wrid(struct hns_roce_srq *srq) 261 261 { 262 - kfree(srq->wrid); 262 + kvfree(srq->wrid); 263 263 srq->wrid = NULL; 264 264 } 265 265
+1 -1
drivers/infiniband/hw/qib/qib_user_sdma.c
··· 941 941 &addrlimit) || 942 942 addrlimit > type_max(typeof(pkt->addrlimit))) { 943 943 ret = -EINVAL; 944 - goto free_pbc; 944 + goto free_pkt; 945 945 } 946 946 pkt->addrlimit = addrlimit; 947 947
+9 -2
drivers/input/joystick/spaceball.c
··· 19 19 #include <linux/module.h> 20 20 #include <linux/input.h> 21 21 #include <linux/serio.h> 22 + #include <asm/unaligned.h> 22 23 23 24 #define DRIVER_DESC "SpaceTec SpaceBall 2003/3003/4000 FLX driver" 24 25 ··· 76 75 77 76 case 'D': /* Ball data */ 78 77 if (spaceball->idx != 15) return; 79 - for (i = 0; i < 6; i++) 78 + /* 79 + * Skip first three bytes; read six axes worth of data. 80 + * Axis values are signed 16-bit big-endian. 81 + */ 82 + data += 3; 83 + for (i = 0; i < ARRAY_SIZE(spaceball_axes); i++) { 80 84 input_report_abs(dev, spaceball_axes[i], 81 - (__s16)((data[2 * i + 3] << 8) | data[2 * i + 2])); 85 + (__s16)get_unaligned_be16(&data[i * 2])); 86 + } 82 87 break; 83 88 84 89 case 'K': /* Button data */
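The spaceball fix above replaces open-coded shifts with get_unaligned_be16(), reading each axis as a signed 16-bit big-endian value at an arbitrary byte offset. The decode it relies on can be sketched in portable C:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Portable equivalent of (__s16)get_unaligned_be16(p): assemble the
 * big-endian 16-bit value byte by byte (safe at any alignment), then
 * reinterpret it as signed.
 */
static int16_t read_s16_be(const uint8_t *p)
{
	return (int16_t)(((uint16_t)p[0] << 8) | p[1]);
}
```

Assembling from bytes avoids both the alignment trap of casting the buffer to `int16_t *` and any dependence on host endianness.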
+12 -9
drivers/input/misc/iqs626a.c
··· 456 456 unsigned int suspend_mode; 457 457 }; 458 458 459 - static int iqs626_parse_events(struct iqs626_private *iqs626, 460 - const struct fwnode_handle *ch_node, 461 - enum iqs626_ch_id ch_id) 459 + static noinline_for_stack int 460 + iqs626_parse_events(struct iqs626_private *iqs626, 461 + const struct fwnode_handle *ch_node, 462 + enum iqs626_ch_id ch_id) 462 463 { 463 464 struct iqs626_sys_reg *sys_reg = &iqs626->sys_reg; 464 465 struct i2c_client *client = iqs626->client; ··· 605 604 return 0; 606 605 } 607 606 608 - static int iqs626_parse_ati_target(struct iqs626_private *iqs626, 609 - const struct fwnode_handle *ch_node, 610 - enum iqs626_ch_id ch_id) 607 + static noinline_for_stack int 608 + iqs626_parse_ati_target(struct iqs626_private *iqs626, 609 + const struct fwnode_handle *ch_node, 610 + enum iqs626_ch_id ch_id) 611 611 { 612 612 struct iqs626_sys_reg *sys_reg = &iqs626->sys_reg; 613 613 struct i2c_client *client = iqs626->client; ··· 887 885 return 0; 888 886 } 889 887 890 - static int iqs626_parse_channel(struct iqs626_private *iqs626, 891 - const struct fwnode_handle *ch_node, 892 - enum iqs626_ch_id ch_id) 888 + static noinline_for_stack int 889 + iqs626_parse_channel(struct iqs626_private *iqs626, 890 + const struct fwnode_handle *ch_node, 891 + enum iqs626_ch_id ch_id) 893 892 { 894 893 struct iqs626_sys_reg *sys_reg = &iqs626->sys_reg; 895 894 struct i2c_client *client = iqs626->client;
+2 -2
drivers/input/mouse/appletouch.c
··· 916 916 set_bit(BTN_TOOL_TRIPLETAP, input_dev->keybit); 917 917 set_bit(BTN_LEFT, input_dev->keybit); 918 918 919 + INIT_WORK(&dev->work, atp_reinit); 920 + 919 921 error = input_register_device(dev->input); 920 922 if (error) 921 923 goto err_free_buffer; 922 924 923 925 /* save our data pointer in this interface device */ 924 926 usb_set_intfdata(iface, dev); 925 - 926 - INIT_WORK(&dev->work, atp_reinit); 927 927 928 928 return 0; 929 929
+7 -1
drivers/input/mouse/elantech.c
··· 1588 1588 */ 1589 1589 static int elantech_change_report_id(struct psmouse *psmouse) 1590 1590 { 1591 - unsigned char param[2] = { 0x10, 0x03 }; 1591 + /* 1592 + * NOTE: the code is expecting to receive param[] as an array of 3 1593 + * items (see __ps2_command()), even if in this case only 2 are 1594 + * actually needed. Make sure the array size is 3 to avoid potential 1595 + * stack out-of-bound accesses. 1596 + */ 1597 + unsigned char param[3] = { 0x10, 0x03 }; 1592 1598 1593 1599 if (elantech_write_reg_params(psmouse, 0x7, param) || 1594 1600 elantech_read_reg_params(psmouse, 0x7, param) ||
+21
drivers/input/serio/i8042-x86ia64io.h
··· 995 995 { } 996 996 }; 997 997 998 + static const struct dmi_system_id i8042_dmi_probe_defer_table[] __initconst = { 999 + { 1000 + /* ASUS ZenBook UX425UA */ 1001 + .matches = { 1002 + DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."), 1003 + DMI_MATCH(DMI_PRODUCT_NAME, "ZenBook UX425UA"), 1004 + }, 1005 + }, 1006 + { 1007 + /* ASUS ZenBook UM325UA */ 1008 + .matches = { 1009 + DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."), 1010 + DMI_MATCH(DMI_PRODUCT_NAME, "ZenBook UX325UA_UM325UA"), 1011 + }, 1012 + }, 1013 + { } 1014 + }; 1015 + 998 1016 #endif /* CONFIG_X86 */ 999 1017 1000 1018 #ifdef CONFIG_PNP ··· 1332 1314 1333 1315 if (dmi_check_system(i8042_dmi_kbdreset_table)) 1334 1316 i8042_kbdreset = true; 1317 + 1318 + if (dmi_check_system(i8042_dmi_probe_defer_table)) 1319 + i8042_probe_defer = true; 1335 1320 1336 1321 /* 1337 1322 * A20 was already enabled during early kernel init. But some buggy
+35 -19
drivers/input/serio/i8042.c
··· 45 45 module_param_named(unlock, i8042_unlock, bool, 0); 46 46 MODULE_PARM_DESC(unlock, "Ignore keyboard lock."); 47 47 48 + static bool i8042_probe_defer; 49 + module_param_named(probe_defer, i8042_probe_defer, bool, 0); 50 + MODULE_PARM_DESC(probe_defer, "Allow deferred probing."); 51 + 48 52 enum i8042_controller_reset_mode { 49 53 I8042_RESET_NEVER, 50 54 I8042_RESET_ALWAYS, ··· 715 711 * LCS/Telegraphics. 716 712 */ 717 713 718 - static int __init i8042_check_mux(void) 714 + static int i8042_check_mux(void) 719 715 { 720 716 unsigned char mux_version; 721 717 ··· 744 740 /* 745 741 * The following is used to test AUX IRQ delivery. 746 742 */ 747 - static struct completion i8042_aux_irq_delivered __initdata; 748 - static bool i8042_irq_being_tested __initdata; 743 + static struct completion i8042_aux_irq_delivered; 744 + static bool i8042_irq_being_tested; 749 745 750 - static irqreturn_t __init i8042_aux_test_irq(int irq, void *dev_id) 746 + static irqreturn_t i8042_aux_test_irq(int irq, void *dev_id) 751 747 { 752 748 unsigned long flags; 753 749 unsigned char str, data; ··· 774 770 * verifies success by reading CTR. Used when testing for presence of AUX 775 771 * port. 776 772 */ 777 - static int __init i8042_toggle_aux(bool on) 773 + static int i8042_toggle_aux(bool on) 778 774 { 779 775 unsigned char param; 780 776 int i; ··· 802 798 * the presence of an AUX interface. 803 799 */ 804 800 805 - static int __init i8042_check_aux(void) 801 + static int i8042_check_aux(void) 806 802 { 807 803 int retval = -1; 808 804 bool irq_registered = false; ··· 1009 1005 1010 1006 if (i8042_command(&ctr[n++ % 2], I8042_CMD_CTL_RCTR)) { 1011 1007 pr_err("Can't read CTR while initializing i8042\n"); 1012 - return -EIO; 1008 + return i8042_probe_defer ? 
-EPROBE_DEFER : -EIO; 1013 1009 } 1014 1010 1015 1011 } while (n < 2 || ctr[0] != ctr[1]); ··· 1324 1320 i8042_controller_reset(false); 1325 1321 } 1326 1322 1327 - static int __init i8042_create_kbd_port(void) 1323 + static int i8042_create_kbd_port(void) 1328 1324 { 1329 1325 struct serio *serio; 1330 1326 struct i8042_port *port = &i8042_ports[I8042_KBD_PORT_NO]; ··· 1353 1349 return 0; 1354 1350 } 1355 1351 1356 - static int __init i8042_create_aux_port(int idx) 1352 + static int i8042_create_aux_port(int idx) 1357 1353 { 1358 1354 struct serio *serio; 1359 1355 int port_no = idx < 0 ? I8042_AUX_PORT_NO : I8042_MUX_PORT_NO + idx; ··· 1390 1386 return 0; 1391 1387 } 1392 1388 1393 - static void __init i8042_free_kbd_port(void) 1389 + static void i8042_free_kbd_port(void) 1394 1390 { 1395 1391 kfree(i8042_ports[I8042_KBD_PORT_NO].serio); 1396 1392 i8042_ports[I8042_KBD_PORT_NO].serio = NULL; 1397 1393 } 1398 1394 1399 - static void __init i8042_free_aux_ports(void) 1395 + static void i8042_free_aux_ports(void) 1400 1396 { 1401 1397 int i; 1402 1398 ··· 1406 1402 } 1407 1403 } 1408 1404 1409 - static void __init i8042_register_ports(void) 1405 + static void i8042_register_ports(void) 1410 1406 { 1411 1407 int i; 1412 1408 ··· 1447 1443 i8042_aux_irq_registered = i8042_kbd_irq_registered = false; 1448 1444 } 1449 1445 1450 - static int __init i8042_setup_aux(void) 1446 + static int i8042_setup_aux(void) 1451 1447 { 1452 1448 int (*aux_enable)(void); 1453 1449 int error; ··· 1489 1485 return error; 1490 1486 } 1491 1487 1492 - static int __init i8042_setup_kbd(void) 1488 + static int i8042_setup_kbd(void) 1493 1489 { 1494 1490 int error; 1495 1491 ··· 1539 1535 return 0; 1540 1536 } 1541 1537 1542 - static int __init i8042_probe(struct platform_device *dev) 1538 + static int i8042_probe(struct platform_device *dev) 1543 1539 { 1544 1540 int error; 1545 1541 ··· 1604 1600 .pm = &i8042_pm_ops, 1605 1601 #endif 1606 1602 }, 1603 + .probe = i8042_probe, 1607 1604 
.remove = i8042_remove, 1608 1605 .shutdown = i8042_shutdown, 1609 1606 }; ··· 1615 1610 1616 1611 static int __init i8042_init(void) 1617 1612 { 1618 - struct platform_device *pdev; 1619 1613 int err; 1620 1614 1621 1615 dbg_init(); ··· 1630 1626 /* Set this before creating the dev to allow i8042_command to work right away */ 1631 1627 i8042_present = true; 1632 1628 1633 - pdev = platform_create_bundle(&i8042_driver, i8042_probe, NULL, 0, NULL, 0); 1634 - if (IS_ERR(pdev)) { 1635 - err = PTR_ERR(pdev); 1629 + err = platform_driver_register(&i8042_driver); 1630 + if (err) 1636 1631 goto err_platform_exit; 1632 + 1633 + i8042_platform_device = platform_device_alloc("i8042", -1); 1634 + if (!i8042_platform_device) { 1635 + err = -ENOMEM; 1636 + goto err_unregister_driver; 1637 1637 } 1638 + 1639 + err = platform_device_add(i8042_platform_device); 1640 + if (err) 1641 + goto err_free_device; 1638 1642 1639 1643 bus_register_notifier(&serio_bus, &i8042_kbd_bind_notifier_block); 1640 1644 panic_blink = i8042_panic_blink; 1641 1645 1642 1646 return 0; 1643 1647 1648 + err_free_device: 1649 + platform_device_put(i8042_platform_device); 1650 + err_unregister_driver: 1651 + platform_driver_unregister(&i8042_driver); 1644 1652 err_platform_exit: 1645 1653 i8042_platform_exit(); 1646 1654 return err;
+1 -1
drivers/input/touchscreen/atmel_mxt_ts.c
··· 1882 1882 if (error) { 1883 1883 dev_err(&client->dev, "Error %d parsing object table\n", error); 1884 1884 mxt_free_object_table(data); 1885 - goto err_free_mem; 1885 + return error; 1886 1886 } 1887 1887 1888 1888 data->object_table = (struct mxt_object *)(id_buf + MXT_OBJECT_START);
+45 -1
drivers/input/touchscreen/elants_i2c.c
··· 117 117 #define ELAN_POWERON_DELAY_USEC 500 118 118 #define ELAN_RESET_DELAY_MSEC 20 119 119 120 + /* FW boot code version */ 121 + #define BC_VER_H_BYTE_FOR_EKTH3900x1_I2C 0x72 122 + #define BC_VER_H_BYTE_FOR_EKTH3900x2_I2C 0x82 123 + #define BC_VER_H_BYTE_FOR_EKTH3900x3_I2C 0x92 124 + #define BC_VER_H_BYTE_FOR_EKTH5312x1_I2C 0x6D 125 + #define BC_VER_H_BYTE_FOR_EKTH5312x2_I2C 0x6E 126 + #define BC_VER_H_BYTE_FOR_EKTH5312cx1_I2C 0x77 127 + #define BC_VER_H_BYTE_FOR_EKTH5312cx2_I2C 0x78 128 + #define BC_VER_H_BYTE_FOR_EKTH5312x1_I2C_USB 0x67 129 + #define BC_VER_H_BYTE_FOR_EKTH5312x2_I2C_USB 0x68 130 + #define BC_VER_H_BYTE_FOR_EKTH5312cx1_I2C_USB 0x74 131 + #define BC_VER_H_BYTE_FOR_EKTH5312cx2_I2C_USB 0x75 132 + 120 133 enum elants_chip_id { 121 134 EKTH3500, 122 135 EKTF3624, ··· 749 736 return 0; 750 737 } 751 738 739 + static bool elants_i2c_should_check_remark_id(struct elants_data *ts) 740 + { 741 + struct i2c_client *client = ts->client; 742 + const u8 bootcode_version = ts->iap_version; 743 + bool check; 744 + 745 + /* I2C eKTH3900 and eKTH5312 do NOT support Remark ID */ 746 + if ((bootcode_version == BC_VER_H_BYTE_FOR_EKTH3900x1_I2C) || 747 + (bootcode_version == BC_VER_H_BYTE_FOR_EKTH3900x2_I2C) || 748 + (bootcode_version == BC_VER_H_BYTE_FOR_EKTH3900x3_I2C) || 749 + (bootcode_version == BC_VER_H_BYTE_FOR_EKTH5312x1_I2C) || 750 + (bootcode_version == BC_VER_H_BYTE_FOR_EKTH5312x2_I2C) || 751 + (bootcode_version == BC_VER_H_BYTE_FOR_EKTH5312cx1_I2C) || 752 + (bootcode_version == BC_VER_H_BYTE_FOR_EKTH5312cx2_I2C) || 753 + (bootcode_version == BC_VER_H_BYTE_FOR_EKTH5312x1_I2C_USB) || 754 + (bootcode_version == BC_VER_H_BYTE_FOR_EKTH5312x2_I2C_USB) || 755 + (bootcode_version == BC_VER_H_BYTE_FOR_EKTH5312cx1_I2C_USB) || 756 + (bootcode_version == BC_VER_H_BYTE_FOR_EKTH5312cx2_I2C_USB)) { 757 + dev_dbg(&client->dev, 758 + "eKTH3900/eKTH5312(0x%02x) do not support remark id\n", 759 + bootcode_version); 760 + check = false; 761 + } else if 
(bootcode_version >= 0x60) { 762 + check = true; 763 + } else { 764 + check = false; 765 + } 766 + 767 + return check; 768 + } 769 + 752 770 static int elants_i2c_do_update_firmware(struct i2c_client *client, 753 771 const struct firmware *fw, 754 772 bool force) ··· 793 749 u16 send_id; 794 750 int page, n_fw_pages; 795 751 int error; 796 - bool check_remark_id = ts->iap_version >= 0x60; 752 + bool check_remark_id = elants_i2c_should_check_remark_id(ts); 797 753 798 754 /* Recovery mode detection! */ 799 755 if (force) {
+26 -5
drivers/input/touchscreen/goodix.c
··· 102 102 { .id = "911", .data = &gt911_chip_data }, 103 103 { .id = "9271", .data = &gt911_chip_data }, 104 104 { .id = "9110", .data = &gt911_chip_data }, 105 + { .id = "9111", .data = &gt911_chip_data }, 105 106 { .id = "927", .data = &gt911_chip_data }, 106 107 { .id = "928", .data = &gt911_chip_data }, 107 108 ··· 651 650 652 651 usleep_range(6000, 10000); /* T4: > 5ms */ 653 652 654 - /* end select I2C slave addr */ 655 - error = gpiod_direction_input(ts->gpiod_rst); 656 - if (error) 657 - goto error; 653 + /* 654 + * Put the reset pin back in to input / high-impedance mode to save 655 + * power. Only do this in the non ACPI case since some ACPI boards 656 + * don't have a pull-up, so there the reset pin must stay active-high. 657 + */ 658 + if (ts->irq_pin_access_method == IRQ_PIN_ACCESS_GPIO) { 659 + error = gpiod_direction_input(ts->gpiod_rst); 660 + if (error) 661 + goto error; 662 + } 658 663 659 664 return 0; 660 665 ··· 794 787 return -EINVAL; 795 788 } 796 789 790 + /* 791 + * Normally we put the reset pin in input / high-impedance mode to save 792 + * power. But some x86/ACPI boards don't have a pull-up, so for the ACPI 793 + * case, leave the pin as is. This results in the pin not being touched 794 + * at all on x86/ACPI boards, except when needed for error recovery. 795 + */ 796 + ts->gpiod_rst_flags = GPIOD_ASIS; 797 + 797 798 return devm_acpi_dev_add_driver_gpios(dev, gpio_mapping); 798 799 } 799 800 #else ··· 826 811 if (!ts->client) 827 812 return -EINVAL; 828 813 dev = &ts->client->dev; 814 + 815 + /* 816 + * By default we request the reset pin as input, leaving it in 817 + * high-impedance when not resetting the controller to save power. 
818 + */ 819 + ts->gpiod_rst_flags = GPIOD_IN; 829 820 830 821 ts->avdd28 = devm_regulator_get(dev, "AVDD28"); 831 822 if (IS_ERR(ts->avdd28)) { ··· 870 849 ts->gpiod_int = gpiod; 871 850 872 851 /* Get the reset line GPIO pin number */ 873 - gpiod = devm_gpiod_get_optional(dev, GOODIX_GPIO_RST_NAME, GPIOD_IN); 852 + gpiod = devm_gpiod_get_optional(dev, GOODIX_GPIO_RST_NAME, ts->gpiod_rst_flags); 874 853 if (IS_ERR(gpiod)) { 875 854 error = PTR_ERR(gpiod); 876 855 if (error != -EPROBE_DEFER)
+1
drivers/input/touchscreen/goodix.h
··· 87 87 struct gpio_desc *gpiod_rst; 88 88 int gpio_count; 89 89 int gpio_int_idx; 90 + enum gpiod_flags gpiod_rst_flags; 90 91 char id[GOODIX_ID_MAX_LEN + 1]; 91 92 char cfg_name[64]; 92 93 u16 version;
+1 -1
drivers/input/touchscreen/goodix_fwupload.c
··· 207 207 208 208 error = goodix_reset_no_int_sync(ts); 209 209 if (error) 210 - return error; 210 + goto release; 211 211 212 212 error = goodix_enter_upload_mode(ts->client); 213 213 if (error)
+3 -3
drivers/isdn/mISDN/core.c
··· 381 381 err = mISDN_inittimer(&debug); 382 382 if (err) 383 383 goto error2; 384 - err = l1_init(&debug); 384 + err = Isdnl1_Init(&debug); 385 385 if (err) 386 386 goto error3; 387 387 err = Isdnl2_Init(&debug); ··· 395 395 error5: 396 396 Isdnl2_cleanup(); 397 397 error4: 398 - l1_cleanup(); 398 + Isdnl1_cleanup(); 399 399 error3: 400 400 mISDN_timer_cleanup(); 401 401 error2: ··· 408 408 { 409 409 misdn_sock_cleanup(); 410 410 Isdnl2_cleanup(); 411 - l1_cleanup(); 411 + Isdnl1_cleanup(); 412 412 mISDN_timer_cleanup(); 413 413 class_unregister(&mISDN_class); 414 414
+2 -2
drivers/isdn/mISDN/core.h
··· 60 60 extern int mISDN_inittimer(u_int *); 61 61 extern void mISDN_timer_cleanup(void); 62 62 63 - extern int l1_init(u_int *); 64 - extern void l1_cleanup(void); 63 + extern int Isdnl1_Init(u_int *); 64 + extern void Isdnl1_cleanup(void); 65 65 extern int Isdnl2_Init(u_int *); 66 66 extern void Isdnl2_cleanup(void); 67 67
+2 -2
drivers/isdn/mISDN/layer1.c
··· 398 398 EXPORT_SYMBOL(create_l1); 399 399 400 400 int 401 - l1_init(u_int *deb) 401 + Isdnl1_Init(u_int *deb) 402 402 { 403 403 debug = deb; 404 404 l1fsm_s.state_count = L1S_STATE_COUNT; ··· 409 409 } 410 410 411 411 void 412 - l1_cleanup(void) 412 + Isdnl1_cleanup(void) 413 413 { 414 414 mISDN_FsmFree(&l1fsm_s); 415 415 }
+2 -1
drivers/md/bcache/super.c
··· 1139 1139 static void cached_dev_detach_finish(struct work_struct *w) 1140 1140 { 1141 1141 struct cached_dev *dc = container_of(w, struct cached_dev, detach); 1142 + struct cache_set *c = dc->disk.c; 1142 1143 1143 1144 BUG_ON(!test_bit(BCACHE_DEV_DETACHING, &dc->disk.flags)); 1144 1145 BUG_ON(refcount_read(&dc->count)); ··· 1157 1156 1158 1157 bcache_device_detach(&dc->disk); 1159 1158 list_move(&dc->list, &uncached_devices); 1160 - calc_cached_dev_sectors(dc->disk.c); 1159 + calc_cached_dev_sectors(c); 1161 1160 1162 1161 clear_bit(BCACHE_DEV_DETACHING, &dc->disk.flags); 1163 1162 clear_bit(BCACHE_DEV_UNLINK_DONE, &dc->disk.flags);
+1 -1
drivers/md/dm-integrity.c
··· 1963 1963 n_sectors -= bv.bv_len >> SECTOR_SHIFT; 1964 1964 bio_advance_iter(bio, &bio->bi_iter, bv.bv_len); 1965 1965 retry_kmap: 1966 - mem = bvec_kmap_local(&bv); 1966 + mem = kmap_local_page(bv.bv_page); 1967 1967 if (likely(dio->op == REQ_OP_WRITE)) 1968 1968 flush_dcache_page(bv.bv_page); 1969 1969
+1 -1
drivers/md/persistent-data/dm-btree-remove.c
··· 423 423 424 424 memcpy(n, dm_block_data(child), 425 425 dm_bm_block_size(dm_tm_get_bm(info->tm))); 426 - dm_tm_unlock(info->tm, child); 427 426 428 427 dm_tm_dec(info->tm, dm_block_location(child)); 428 + dm_tm_unlock(info->tm, child); 429 429 return 0; 430 430 } 431 431
+6 -1
drivers/mmc/core/core.c
··· 2264 2264 _mmc_detect_change(host, 0, false); 2265 2265 } 2266 2266 2267 - void mmc_stop_host(struct mmc_host *host) 2267 + void __mmc_stop_host(struct mmc_host *host) 2268 2268 { 2269 2269 if (host->slot.cd_irq >= 0) { 2270 2270 mmc_gpio_set_cd_wake(host, false); ··· 2273 2273 2274 2274 host->rescan_disable = 1; 2275 2275 cancel_delayed_work_sync(&host->detect); 2276 + } 2277 + 2278 + void mmc_stop_host(struct mmc_host *host) 2279 + { 2280 + __mmc_stop_host(host); 2276 2281 2277 2282 /* clear pm flags now and let card drivers set them as needed */ 2278 2283 host->pm_flags = 0;
+1
drivers/mmc/core/core.h
··· 70 70 71 71 void mmc_rescan(struct work_struct *work); 72 72 void mmc_start_host(struct mmc_host *host); 73 + void __mmc_stop_host(struct mmc_host *host); 73 74 void mmc_stop_host(struct mmc_host *host); 74 75 75 76 void _mmc_detect_change(struct mmc_host *host, unsigned long delay,
+9
drivers/mmc/core/host.c
··· 80 80 kfree(host); 81 81 } 82 82 83 + static int mmc_host_classdev_shutdown(struct device *dev) 84 + { 85 + struct mmc_host *host = cls_dev_to_mmc_host(dev); 86 + 87 + __mmc_stop_host(host); 88 + return 0; 89 + } 90 + 83 91 static struct class mmc_host_class = { 84 92 .name = "mmc_host", 85 93 .dev_release = mmc_host_classdev_release, 94 + .shutdown_pre = mmc_host_classdev_shutdown, 86 95 .pm = MMC_HOST_CLASS_DEV_PM_OPS, 87 96 }; 88 97
+16
drivers/mmc/host/meson-mx-sdhc-mmc.c
··· 135 135 struct mmc_command *cmd) 136 136 { 137 137 struct meson_mx_sdhc_host *host = mmc_priv(mmc); 138 + bool manual_stop = false; 138 139 u32 ictl, send; 139 140 int pack_len; 140 141 ··· 173 172 else 174 173 /* software flush: */ 175 174 ictl |= MESON_SDHC_ICTL_DATA_XFER_OK; 175 + 176 + /* 177 + * Mimic the logic from the vendor driver where (only) 178 + * SD_IO_RW_EXTENDED commands with more than one block set the 179 + * MESON_SDHC_MISC_MANUAL_STOP bit. This fixes the firmware 180 + * download in the brcmfmac driver for a BCM43362/1 card. 181 + * Without this sdio_memcpy_toio() (with a size of 219557 182 + * bytes) times out if MESON_SDHC_MISC_MANUAL_STOP is not set. 183 + */ 184 + manual_stop = cmd->data->blocks > 1 && 185 + cmd->opcode == SD_IO_RW_EXTENDED; 176 186 } else { 177 187 pack_len = 0; 178 188 179 189 ictl |= MESON_SDHC_ICTL_RESP_OK; 180 190 } 191 + 192 + regmap_update_bits(host->regmap, MESON_SDHC_MISC, 193 + MESON_SDHC_MISC_MANUAL_STOP, 194 + manual_stop ? MESON_SDHC_MISC_MANUAL_STOP : 0); 181 195 182 196 if (cmd->opcode == MMC_STOP_TRANSMISSION) 183 197 send |= MESON_SDHC_SEND_DATA_STOP;
+2
drivers/mmc/host/mmci_stm32_sdmmc.c
··· 441 441 return -EINVAL; 442 442 } 443 443 444 + writel_relaxed(0, dlyb->base + DLYB_CR); 445 + 444 446 phase = end_of_len - max_len / 2; 445 447 sdmmc_dlyb_set_cfgr(dlyb, dlyb->unit, phase, false); 446 448
+26 -17
drivers/mmc/host/sdhci-tegra.c
··· 356 356 } 357 357 } 358 358 359 - static void tegra_sdhci_hs400_enhanced_strobe(struct mmc_host *mmc, 360 - struct mmc_ios *ios) 361 - { 362 - struct sdhci_host *host = mmc_priv(mmc); 363 - u32 val; 364 - 365 - val = sdhci_readl(host, SDHCI_TEGRA_VENDOR_SYS_SW_CTRL); 366 - 367 - if (ios->enhanced_strobe) 368 - val |= SDHCI_TEGRA_SYS_SW_CTRL_ENHANCED_STROBE; 369 - else 370 - val &= ~SDHCI_TEGRA_SYS_SW_CTRL_ENHANCED_STROBE; 371 - 372 - sdhci_writel(host, val, SDHCI_TEGRA_VENDOR_SYS_SW_CTRL); 373 - 374 - } 375 - 376 359 static void tegra_sdhci_reset(struct sdhci_host *host, u8 mask) 377 360 { 378 361 struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host); ··· 774 791 tegra_sdhci_pad_autocalib(host); 775 792 tegra_host->pad_calib_required = false; 776 793 } 794 + } 795 + 796 + static void tegra_sdhci_hs400_enhanced_strobe(struct mmc_host *mmc, 797 + struct mmc_ios *ios) 798 + { 799 + struct sdhci_host *host = mmc_priv(mmc); 800 + u32 val; 801 + 802 + val = sdhci_readl(host, SDHCI_TEGRA_VENDOR_SYS_SW_CTRL); 803 + 804 + if (ios->enhanced_strobe) { 805 + val |= SDHCI_TEGRA_SYS_SW_CTRL_ENHANCED_STROBE; 806 + /* 807 + * When CMD13 is sent from mmc_select_hs400es() after 808 + * switching to HS400ES mode, the bus is operating at 809 + * either MMC_HIGH_26_MAX_DTR or MMC_HIGH_52_MAX_DTR. 810 + * To meet Tegra SDHCI requirement at HS400ES mode, force SDHCI 811 + * interface clock to MMC_HS200_MAX_DTR (200 MHz) so that host 812 + * controller CAR clock and the interface clock are rate matched. 813 + */ 814 + tegra_sdhci_set_clock(host, MMC_HS200_MAX_DTR); 815 + } else { 816 + val &= ~SDHCI_TEGRA_SYS_SW_CTRL_ENHANCED_STROBE; 817 + } 818 + 819 + sdhci_writel(host, val, SDHCI_TEGRA_VENDOR_SYS_SW_CTRL); 777 820 } 778 821 779 822 static unsigned int tegra_sdhci_get_max_clock(struct sdhci_host *host)
+1 -1
drivers/net/bonding/bond_options.c
··· 1526 1526 mac = (u8 *)&newval->value; 1527 1527 } 1528 1528 1529 - if (!is_valid_ether_addr(mac)) 1529 + if (is_multicast_ether_addr(mac)) 1530 1530 goto err; 1531 1531 1532 1532 netdev_dbg(bond->dev, "Setting ad_actor_system to %pM\n", mac);
+4
drivers/net/dsa/mv88e6xxx/chip.c
··· 768 768 if ((!mv88e6xxx_port_ppu_updates(chip, port) || 769 769 mode == MLO_AN_FIXED) && ops->port_sync_link) 770 770 err = ops->port_sync_link(chip, port, mode, false); 771 + 772 + if (!err && ops->port_set_speed_duplex) 773 + err = ops->port_set_speed_duplex(chip, port, SPEED_UNFORCED, 774 + DUPLEX_UNFORCED); 771 775 mv88e6xxx_reg_unlock(chip); 772 776 773 777 if (err)
+2 -2
drivers/net/dsa/mv88e6xxx/port.c
··· 283 283 if (err) 284 284 return err; 285 285 286 - if (speed) 286 + if (speed != SPEED_UNFORCED) 287 287 dev_dbg(chip->dev, "p%d: Speed set to %d Mbps\n", port, speed); 288 288 else 289 289 dev_dbg(chip->dev, "p%d: Speed unforced\n", port); ··· 516 516 if (err) 517 517 return err; 518 518 519 - if (speed) 519 + if (speed != SPEED_UNFORCED) 520 520 dev_dbg(chip->dev, "p%d: Speed set to %d Mbps\n", port, speed); 521 521 else 522 522 dev_dbg(chip->dev, "p%d: Speed unforced\n", port);
+8
drivers/net/ethernet/aquantia/atlantic/aq_ring.c
··· 366 366 if (!buff->is_eop) { 367 367 buff_ = buff; 368 368 do { 369 + if (buff_->next >= self->size) { 370 + err = -EIO; 371 + goto err_exit; 372 + } 369 373 next_ = buff_->next, 370 374 buff_ = &self->buff_ring[next_]; 371 375 is_rsc_completed = ··· 393 389 (buff->is_lro && buff->is_cso_err)) { 394 390 buff_ = buff; 395 391 do { 392 + if (buff_->next >= self->size) { 393 + err = -EIO; 394 + goto err_exit; 395 + } 396 396 next_ = buff_->next, 397 397 buff_ = &self->buff_ring[next_]; 398 398
+8 -15
drivers/net/ethernet/atheros/ag71xx.c
··· 1913 1913 ag->mac_reset = devm_reset_control_get(&pdev->dev, "mac"); 1914 1914 if (IS_ERR(ag->mac_reset)) { 1915 1915 netif_err(ag, probe, ndev, "missing mac reset\n"); 1916 - err = PTR_ERR(ag->mac_reset); 1917 - goto err_free; 1916 + return PTR_ERR(ag->mac_reset); 1918 1917 } 1919 1918 1920 1919 ag->mac_base = devm_ioremap(&pdev->dev, res->start, resource_size(res)); 1921 - if (!ag->mac_base) { 1922 - err = -ENOMEM; 1923 - goto err_free; 1924 - } 1920 + if (!ag->mac_base) 1921 + return -ENOMEM; 1925 1922 1926 1923 ndev->irq = platform_get_irq(pdev, 0); 1927 1924 err = devm_request_irq(&pdev->dev, ndev->irq, ag71xx_interrupt, ··· 1926 1929 if (err) { 1927 1930 netif_err(ag, probe, ndev, "unable to request IRQ %d\n", 1928 1931 ndev->irq); 1929 - goto err_free; 1932 + return err; 1930 1933 } 1931 1934 1932 1935 ndev->netdev_ops = &ag71xx_netdev_ops; ··· 1954 1957 ag->stop_desc = dmam_alloc_coherent(&pdev->dev, 1955 1958 sizeof(struct ag71xx_desc), 1956 1959 &ag->stop_desc_dma, GFP_KERNEL); 1957 - if (!ag->stop_desc) { 1958 - err = -ENOMEM; 1959 - goto err_free; 1960 - } 1960 + if (!ag->stop_desc) 1961 + return -ENOMEM; 1961 1962 1962 1963 ag->stop_desc->data = 0; 1963 1964 ag->stop_desc->ctrl = 0; ··· 1970 1975 err = of_get_phy_mode(np, &ag->phy_if_mode); 1971 1976 if (err) { 1972 1977 netif_err(ag, probe, ndev, "missing phy-mode property in DT\n"); 1973 - goto err_free; 1978 + return err; 1974 1979 } 1975 1980 1976 1981 netif_napi_add(ndev, &ag->napi, ag71xx_poll, AG71XX_NAPI_WEIGHT); ··· 1978 1983 err = clk_prepare_enable(ag->clk_eth); 1979 1984 if (err) { 1980 1985 netif_err(ag, probe, ndev, "Failed to enable eth clk.\n"); 1981 - goto err_free; 1986 + return err; 1982 1987 } 1983 1988 1984 1989 ag71xx_wr(ag, AG71XX_REG_MAC_CFG1, 0); ··· 2014 2019 ag71xx_mdio_remove(ag); 2015 2020 err_put_clk: 2016 2021 clk_disable_unprepare(ag->clk_eth); 2017 - err_free: 2018 - free_netdev(ndev); 2019 2022 return err; 2020 2023 } 2021 2024
+4 -1
drivers/net/ethernet/broadcom/bcmsysport.c
··· 1309 1309 struct bcm_sysport_priv *priv = netdev_priv(dev); 1310 1310 struct device *kdev = &priv->pdev->dev; 1311 1311 struct bcm_sysport_tx_ring *ring; 1312 + unsigned long flags, desc_flags; 1312 1313 struct bcm_sysport_cb *cb; 1313 1314 struct netdev_queue *txq; 1314 1315 u32 len_status, addr_lo; 1315 1316 unsigned int skb_len; 1316 - unsigned long flags; 1317 1317 dma_addr_t mapping; 1318 1318 u16 queue; 1319 1319 int ret; ··· 1373 1373 ring->desc_count--; 1374 1374 1375 1375 /* Ports are latched, so write upper address first */ 1376 + spin_lock_irqsave(&priv->desc_lock, desc_flags); 1376 1377 tdma_writel(priv, len_status, TDMA_WRITE_PORT_HI(ring->index)); 1377 1378 tdma_writel(priv, addr_lo, TDMA_WRITE_PORT_LO(ring->index)); 1379 + spin_unlock_irqrestore(&priv->desc_lock, desc_flags); 1378 1380 1379 1381 /* Check ring space and update SW control flow */ 1380 1382 if (ring->desc_count == 0) ··· 2015 2013 } 2016 2014 2017 2015 /* Initialize both hardware and software ring */ 2016 + spin_lock_init(&priv->desc_lock); 2018 2017 for (i = 0; i < dev->num_tx_queues; i++) { 2019 2018 ret = bcm_sysport_init_tx_ring(priv, i); 2020 2019 if (ret) {
+1
drivers/net/ethernet/broadcom/bcmsysport.h
··· 711 711 int wol_irq; 712 712 713 713 /* Transmit rings */ 714 + spinlock_t desc_lock; 714 715 struct bcm_sysport_tx_ring *tx_rings; 715 716 716 717 /* Receive queue */
+2 -2
drivers/net/ethernet/broadcom/genet/bcmmii.c
··· 589 589 * Internal or external PHY with MDIO access 590 590 */ 591 591 phydev = phy_attach(priv->dev, phy_name, pd->phy_interface); 592 - if (!phydev) { 592 + if (IS_ERR(phydev)) { 593 593 dev_err(kdev, "failed to register PHY device\n"); 594 - return -ENODEV; 594 + return PTR_ERR(phydev); 595 595 } 596 596 } else { 597 597 /*
+2
drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.h
··· 388 388 __u64 bytes_per_cdan; 389 389 }; 390 390 391 + #define DPAA2_ETH_CH_STATS 7 392 + 391 393 /* Maximum number of queues associated with a DPNI */ 392 394 #define DPAA2_ETH_MAX_TCS 8 393 395 #define DPAA2_ETH_MAX_RX_QUEUES_PER_TC 16
+1 -1
drivers/net/ethernet/freescale/dpaa2/dpaa2-ethtool.c
··· 278 278 /* Per-channel stats */ 279 279 for (k = 0; k < priv->num_channels; k++) { 280 280 ch_stats = &priv->channel[k]->stats; 281 - for (j = 0; j < sizeof(*ch_stats) / sizeof(__u64) - 1; j++) 281 + for (j = 0; j < DPAA2_ETH_CH_STATS; j++) 282 282 *((__u64 *)data + i + j) += *((__u64 *)ch_stats + j); 283 283 } 284 284 i += j;
+7 -5
drivers/net/ethernet/freescale/fman/fman_port.c
··· 1805 1805 fman = dev_get_drvdata(&fm_pdev->dev); 1806 1806 if (!fman) { 1807 1807 err = -EINVAL; 1808 - goto return_err; 1808 + goto put_device; 1809 1809 } 1810 1810 1811 1811 err = of_property_read_u32(port_node, "cell-index", &val); ··· 1813 1813 dev_err(port->dev, "%s: reading cell-index for %pOF failed\n", 1814 1814 __func__, port_node); 1815 1815 err = -EINVAL; 1816 - goto return_err; 1816 + goto put_device; 1817 1817 } 1818 1818 port_id = (u8)val; 1819 1819 port->dts_params.id = port_id; ··· 1847 1847 } else { 1848 1848 dev_err(port->dev, "%s: Illegal port type\n", __func__); 1849 1849 err = -EINVAL; 1850 - goto return_err; 1850 + goto put_device; 1851 1851 } 1852 1852 1853 1853 port->dts_params.type = port_type; ··· 1861 1861 dev_err(port->dev, "%s: incorrect qman-channel-id\n", 1862 1862 __func__); 1863 1863 err = -EINVAL; 1864 - goto return_err; 1864 + goto put_device; 1865 1865 } 1866 1866 port->dts_params.qman_channel_id = qman_channel_id; 1867 1867 } ··· 1871 1871 dev_err(port->dev, "%s: of_address_to_resource() failed\n", 1872 1872 __func__); 1873 1873 err = -ENOMEM; 1874 - goto return_err; 1874 + goto put_device; 1875 1875 } 1876 1876 1877 1877 port->dts_params.fman = fman; ··· 1896 1896 1897 1897 return 0; 1898 1898 1899 + put_device: 1900 + put_device(&fm_pdev->dev); 1899 1901 return_err: 1900 1902 of_node_put(port_node); 1901 1903 free_port:
+4 -4
drivers/net/ethernet/google/gve/gve_adminq.c
··· 738 738 * is not set to GqiRda, choose the queue format in a priority order: 739 739 * DqoRda, GqiRda, GqiQpl. Use GqiQpl as default. 740 740 */ 741 - if (priv->queue_format == GVE_GQI_RDA_FORMAT) { 742 - dev_info(&priv->pdev->dev, 743 - "Driver is running with GQI RDA queue format.\n"); 744 - } else if (dev_op_dqo_rda) { 741 + if (dev_op_dqo_rda) { 745 742 priv->queue_format = GVE_DQO_RDA_FORMAT; 746 743 dev_info(&priv->pdev->dev, 747 744 "Driver is running with DQO RDA queue format.\n"); ··· 750 753 "Driver is running with GQI RDA queue format.\n"); 751 754 supported_features_mask = 752 755 be32_to_cpu(dev_op_gqi_rda->supported_features_mask); 756 + } else if (priv->queue_format == GVE_GQI_RDA_FORMAT) { 757 + dev_info(&priv->pdev->dev, 758 + "Driver is running with GQI RDA queue format.\n"); 753 759 } else { 754 760 priv->queue_format = GVE_GQI_QPL_FORMAT; 755 761 if (dev_op_gqi_qpl)
+2
drivers/net/ethernet/hisilicon/hns3/hnae3.h
··· 839 839 840 840 u8 netdev_flags; 841 841 struct dentry *hnae3_dbgfs; 842 + /* protects concurrent contention between debugfs commands */ 843 + struct mutex dbgfs_lock; 842 844 843 845 /* Network interface message level enabled bits */ 844 846 u32 msg_enable;
+14 -6
drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.c
··· 1226 1226 if (ret) 1227 1227 return ret; 1228 1228 1229 + mutex_lock(&handle->dbgfs_lock); 1229 1230 save_buf = &hns3_dbg_cmd[index].buf; 1230 1231 1231 1232 if (!test_bit(HNS3_NIC_STATE_INITED, &priv->state) || ··· 1239 1238 read_buf = *save_buf; 1240 1239 } else { 1241 1240 read_buf = kvzalloc(hns3_dbg_cmd[index].buf_len, GFP_KERNEL); 1242 - if (!read_buf) 1243 - return -ENOMEM; 1241 + if (!read_buf) { 1242 + ret = -ENOMEM; 1243 + goto out; 1244 + } 1244 1245 1245 1246 /* save the buffer addr until the last read operation */ 1246 1247 *save_buf = read_buf; 1247 - } 1248 1248 1249 - /* get data ready for the first time to read */ 1250 - if (!*ppos) { 1249 + /* get data ready for the first time to read */ 1251 1250 ret = hns3_dbg_read_cmd(dbg_data, hns3_dbg_cmd[index].cmd, 1252 1251 read_buf, hns3_dbg_cmd[index].buf_len); 1253 1252 if (ret) ··· 1256 1255 1257 1256 size = simple_read_from_buffer(buffer, count, ppos, read_buf, 1258 1257 strlen(read_buf)); 1259 - if (size > 0) 1258 + if (size > 0) { 1259 + mutex_unlock(&handle->dbgfs_lock); 1260 1260 return size; 1261 + } 1261 1262 1262 1263 out: 1263 1264 /* free the buffer for the last read operation */ ··· 1268 1265 *save_buf = NULL; 1269 1266 } 1270 1267 1268 + mutex_unlock(&handle->dbgfs_lock); 1271 1269 return ret; 1272 1270 } 1273 1271 ··· 1341 1337 debugfs_create_dir(hns3_dbg_dentry[i].name, 1342 1338 handle->hnae3_dbgfs); 1343 1339 1340 + mutex_init(&handle->dbgfs_lock); 1341 + 1344 1342 for (i = 0; i < ARRAY_SIZE(hns3_dbg_cmd); i++) { 1345 1343 if ((hns3_dbg_cmd[i].cmd == HNAE3_DBG_CMD_TM_NODES && 1346 1344 ae_dev->dev_version <= HNAE3_DEVICE_VERSION_V2) || ··· 1369 1363 return 0; 1370 1364 1371 1365 out: 1366 + mutex_destroy(&handle->dbgfs_lock); 1372 1367 debugfs_remove_recursive(handle->hnae3_dbgfs); 1373 1368 handle->hnae3_dbgfs = NULL; 1374 1369 return ret; ··· 1385 1378 hns3_dbg_cmd[i].buf = NULL; 1386 1379 } 1387 1380 1381 + mutex_destroy(&handle->dbgfs_lock); 1388 1382 
debugfs_remove_recursive(handle->hnae3_dbgfs); 1389 1383 handle->hnae3_dbgfs = NULL; 1390 1384 }
+2 -1
drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_mbx.c
··· 114 114 115 115 memcpy(&req->msg, send_msg, sizeof(struct hclge_vf_to_pf_msg)); 116 116 117 - trace_hclge_vf_mbx_send(hdev, req); 117 + if (test_bit(HCLGEVF_STATE_NIC_REGISTERED, &hdev->state)) 118 + trace_hclge_vf_mbx_send(hdev, req); 118 119 119 120 /* synchronous send */ 120 121 if (need_resp) {
+2 -3
drivers/net/ethernet/intel/iavf/iavf_main.c
··· 2046 2046 } 2047 2047 adapter->aq_required = 0; 2048 2048 adapter->current_op = VIRTCHNL_OP_UNKNOWN; 2049 + mutex_unlock(&adapter->crit_lock); 2049 2050 queue_delayed_work(iavf_wq, 2050 2051 &adapter->watchdog_task, 2051 2052 msecs_to_jiffies(10)); ··· 2077 2076 iavf_detect_recover_hung(&adapter->vsi); 2078 2077 break; 2079 2078 case __IAVF_REMOVE: 2080 - mutex_unlock(&adapter->crit_lock); 2081 - return; 2082 2079 default: 2080 + mutex_unlock(&adapter->crit_lock); 2083 2081 return; 2084 2082 } 2085 2083 2086 2084 /* check for hw reset */ 2087 2085 reg_val = rd32(hw, IAVF_VF_ARQLEN1) & IAVF_VF_ARQLEN1_ARQENABLE_MASK; 2088 2086 if (!reg_val) { 2089 - iavf_change_state(adapter, __IAVF_RESETTING); 2090 2087 adapter->flags |= IAVF_FLAG_RESET_PENDING; 2091 2088 adapter->aq_required = 0; 2092 2089 adapter->current_op = VIRTCHNL_OP_UNKNOWN;
+17
drivers/net/ethernet/intel/ice/ice_base.c
··· 6 6 #include "ice_lib.h" 7 7 #include "ice_dcb_lib.h" 8 8 9 + static bool ice_alloc_rx_buf_zc(struct ice_rx_ring *rx_ring) 10 + { 11 + rx_ring->xdp_buf = kcalloc(rx_ring->count, sizeof(*rx_ring->xdp_buf), GFP_KERNEL); 12 + return !!rx_ring->xdp_buf; 13 + } 14 + 15 + static bool ice_alloc_rx_buf(struct ice_rx_ring *rx_ring) 16 + { 17 + rx_ring->rx_buf = kcalloc(rx_ring->count, sizeof(*rx_ring->rx_buf), GFP_KERNEL); 18 + return !!rx_ring->rx_buf; 19 + } 20 + 9 21 /** 10 22 * __ice_vsi_get_qs_contig - Assign a contiguous chunk of queues to VSI 11 23 * @qs_cfg: gathered variables needed for PF->VSI queues assignment ··· 504 492 xdp_rxq_info_reg(&ring->xdp_rxq, ring->netdev, 505 493 ring->q_index, ring->q_vector->napi.napi_id); 506 494 495 + kfree(ring->rx_buf); 507 496 ring->xsk_pool = ice_xsk_pool(ring); 508 497 if (ring->xsk_pool) { 498 + if (!ice_alloc_rx_buf_zc(ring)) 499 + return -ENOMEM; 509 500 xdp_rxq_info_unreg_mem_model(&ring->xdp_rxq); 510 501 511 502 ring->rx_buf_len = ··· 523 508 dev_info(dev, "Registered XDP mem model MEM_TYPE_XSK_BUFF_POOL on Rx ring %d\n", 524 509 ring->q_index); 525 510 } else { 511 + if (!ice_alloc_rx_buf(ring)) 512 + return -ENOMEM; 526 513 if (!xdp_rxq_info_is_reg(&ring->xdp_rxq)) 527 514 /* coverity[check_return] */ 528 515 xdp_rxq_info_reg(&ring->xdp_rxq,
+5 -8
drivers/net/ethernet/intel/ice/ice_ptp.c
··· 705 705 scaled_ppm = -scaled_ppm; 706 706 } 707 707 708 - while ((u64)scaled_ppm > div_u64(U64_MAX, incval)) { 708 + while ((u64)scaled_ppm > div64_u64(U64_MAX, incval)) { 709 709 /* handle overflow by scaling down the scaled_ppm and 710 710 * the divisor, losing some precision 711 711 */ ··· 1540 1540 if (err) 1541 1541 continue; 1542 1542 1543 - /* Check if the timestamp is valid */ 1544 - if (!(raw_tstamp & ICE_PTP_TS_VALID)) 1543 + /* Check if the timestamp is invalid or stale */ 1544 + if (!(raw_tstamp & ICE_PTP_TS_VALID) || 1545 + raw_tstamp == tx->tstamps[idx].cached_tstamp) 1545 1546 continue; 1546 - 1547 - /* clear the timestamp register, so that it won't show valid 1548 - * again when re-used. 1549 - */ 1550 - ice_clear_phy_tstamp(hw, tx->quad, phy_idx); 1551 1547 1552 1548 /* The timestamp is valid, so we'll go ahead and clear this 1553 1549 * index and then send the timestamp up to the stack. 1554 1550 */ 1555 1551 spin_lock(&tx->lock); 1552 + tx->tstamps[idx].cached_tstamp = raw_tstamp; 1556 1553 clear_bit(idx, tx->in_use); 1557 1554 skb = tx->tstamps[idx].skb; 1558 1555 tx->tstamps[idx].skb = NULL;
+6
drivers/net/ethernet/intel/ice/ice_ptp.h
··· 55 55 * struct ice_tx_tstamp - Tracking for a single Tx timestamp 56 56 * @skb: pointer to the SKB for this timestamp request 57 57 * @start: jiffies when the timestamp was first requested 58 + * @cached_tstamp: last read timestamp 58 59 * 59 60 * This structure tracks a single timestamp request. The SKB pointer is 60 61 * provided when initiating a request. The start time is used to ensure that 61 62 * we discard old requests that were not fulfilled within a 2 second time 62 63 * window. 64 + * Timestamp values in the PHY are read only and do not get cleared except at 65 + * hardware reset or when a new timestamp value is captured. The cached_tstamp 66 + * field is used to detect the case where a new timestamp has not yet been 67 + * captured, ensuring that we avoid sending stale timestamp data to the stack. 63 68 */ 64 69 struct ice_tx_tstamp { 65 70 struct sk_buff *skb; 66 71 unsigned long start; 72 + u64 cached_tstamp; 67 73 }; 68 74 69 75 /**
+13 -6
drivers/net/ethernet/intel/ice/ice_txrx.c
··· 419 419 } 420 420 421 421 rx_skip_free: 422 - memset(rx_ring->rx_buf, 0, sizeof(*rx_ring->rx_buf) * rx_ring->count); 422 + if (rx_ring->xsk_pool) 423 + memset(rx_ring->xdp_buf, 0, array_size(rx_ring->count, sizeof(*rx_ring->xdp_buf))); 424 + else 425 + memset(rx_ring->rx_buf, 0, array_size(rx_ring->count, sizeof(*rx_ring->rx_buf))); 423 426 424 427 /* Zero out the descriptor ring */ 425 428 size = ALIGN(rx_ring->count * sizeof(union ice_32byte_rx_desc), ··· 449 446 if (xdp_rxq_info_is_reg(&rx_ring->xdp_rxq)) 450 447 xdp_rxq_info_unreg(&rx_ring->xdp_rxq); 451 448 rx_ring->xdp_prog = NULL; 452 - devm_kfree(rx_ring->dev, rx_ring->rx_buf); 453 - rx_ring->rx_buf = NULL; 449 + if (rx_ring->xsk_pool) { 450 + kfree(rx_ring->xdp_buf); 451 + rx_ring->xdp_buf = NULL; 452 + } else { 453 + kfree(rx_ring->rx_buf); 454 + rx_ring->rx_buf = NULL; 455 + } 454 456 455 457 if (rx_ring->desc) { 456 458 size = ALIGN(rx_ring->count * sizeof(union ice_32byte_rx_desc), ··· 483 475 /* warn if we are about to overwrite the pointer */ 484 476 WARN_ON(rx_ring->rx_buf); 485 477 rx_ring->rx_buf = 486 - devm_kcalloc(dev, sizeof(*rx_ring->rx_buf), rx_ring->count, 487 - GFP_KERNEL); 478 + kcalloc(rx_ring->count, sizeof(*rx_ring->rx_buf), GFP_KERNEL); 488 479 if (!rx_ring->rx_buf) 489 480 return -ENOMEM; 490 481 ··· 512 505 return 0; 513 506 514 507 err: 515 - devm_kfree(dev, rx_ring->rx_buf); 508 + kfree(rx_ring->rx_buf); 516 509 rx_ring->rx_buf = NULL; 517 510 return -ENOMEM; 518 511 }
-1
drivers/net/ethernet/intel/ice/ice_txrx.h
··· 24 24 #define ICE_MAX_DATA_PER_TXD_ALIGNED \ 25 25 (~(ICE_MAX_READ_REQ_SIZE - 1) & ICE_MAX_DATA_PER_TXD) 26 26 27 - #define ICE_RX_BUF_WRITE 16 /* Must be power of 2 */ 28 27 #define ICE_MAX_TXQ_PER_TXQG 128 29 28 30 29 /* Attempt to maximize the headroom available for incoming frames. We use a 2K
+32 -34
drivers/net/ethernet/intel/ice/ice_xsk.c
··· 12 12 #include "ice_txrx_lib.h" 13 13 #include "ice_lib.h" 14 14 15 + static struct xdp_buff **ice_xdp_buf(struct ice_rx_ring *rx_ring, u32 idx) 16 + { 17 + return &rx_ring->xdp_buf[idx]; 18 + } 19 + 15 20 /** 16 21 * ice_qp_reset_stats - Resets all stats for rings of given index 17 22 * @vsi: VSI that contains rings of interest ··· 377 372 dma_addr_t dma; 378 373 379 374 rx_desc = ICE_RX_DESC(rx_ring, ntu); 380 - xdp = &rx_ring->xdp_buf[ntu]; 375 + xdp = ice_xdp_buf(rx_ring, ntu); 381 376 382 377 nb_buffs = min_t(u16, count, rx_ring->count - ntu); 383 378 nb_buffs = xsk_buff_alloc_batch(rx_ring->xsk_pool, xdp, nb_buffs); ··· 395 390 } 396 391 397 392 ntu += nb_buffs; 398 - if (ntu == rx_ring->count) { 399 - rx_desc = ICE_RX_DESC(rx_ring, 0); 400 - xdp = rx_ring->xdp_buf; 393 + if (ntu == rx_ring->count) 401 394 ntu = 0; 402 - } 403 395 404 - /* clear the status bits for the next_to_use descriptor */ 405 - rx_desc->wb.status_error0 = 0; 406 396 ice_release_rx_desc(rx_ring, ntu); 407 397 408 398 return count == nb_buffs; ··· 419 419 /** 420 420 * ice_construct_skb_zc - Create an sk_buff from zero-copy buffer 421 421 * @rx_ring: Rx ring 422 - * @xdp_arr: Pointer to the SW ring of xdp_buff pointers 422 + * @xdp: Pointer to XDP buffer 423 423 * 424 424 * This function allocates a new skb from a zero-copy Rx buffer. 425 425 * 426 426 * Returns the skb on success, NULL on failure. 
427 427 */ 428 428 static struct sk_buff * 429 - ice_construct_skb_zc(struct ice_rx_ring *rx_ring, struct xdp_buff **xdp_arr) 429 + ice_construct_skb_zc(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp) 430 430 { 431 - struct xdp_buff *xdp = *xdp_arr; 431 + unsigned int datasize_hard = xdp->data_end - xdp->data_hard_start; 432 432 unsigned int metasize = xdp->data - xdp->data_meta; 433 433 unsigned int datasize = xdp->data_end - xdp->data; 434 - unsigned int datasize_hard = xdp->data_end - xdp->data_hard_start; 435 434 struct sk_buff *skb; 436 435 437 436 skb = __napi_alloc_skb(&rx_ring->q_vector->napi, datasize_hard, ··· 444 445 skb_metadata_set(skb, metasize); 445 446 446 447 xsk_buff_free(xdp); 447 - *xdp_arr = NULL; 448 448 return skb; 449 449 } 450 450 ··· 505 507 int ice_clean_rx_irq_zc(struct ice_rx_ring *rx_ring, int budget) 506 508 { 507 509 unsigned int total_rx_bytes = 0, total_rx_packets = 0; 508 - u16 cleaned_count = ICE_DESC_UNUSED(rx_ring); 509 510 struct ice_tx_ring *xdp_ring; 510 511 unsigned int xdp_xmit = 0; 511 512 struct bpf_prog *xdp_prog; ··· 519 522 while (likely(total_rx_packets < (unsigned int)budget)) { 520 523 union ice_32b_rx_flex_desc *rx_desc; 521 524 unsigned int size, xdp_res = 0; 522 - struct xdp_buff **xdp; 525 + struct xdp_buff *xdp; 523 526 struct sk_buff *skb; 524 527 u16 stat_err_bits; 525 528 u16 vlan_tag = 0; ··· 537 540 */ 538 541 dma_rmb(); 539 542 543 + xdp = *ice_xdp_buf(rx_ring, rx_ring->next_to_clean); 544 + 540 545 size = le16_to_cpu(rx_desc->wb.pkt_len) & 541 546 ICE_RX_FLX_DESC_PKT_LEN_M; 542 - if (!size) 543 - break; 547 + if (!size) { 548 + xdp->data = NULL; 549 + xdp->data_end = NULL; 550 + xdp->data_hard_start = NULL; 551 + xdp->data_meta = NULL; 552 + goto construct_skb; 553 + } 544 554 545 - xdp = &rx_ring->xdp_buf[rx_ring->next_to_clean]; 546 - xsk_buff_set_size(*xdp, size); 547 - xsk_buff_dma_sync_for_cpu(*xdp, rx_ring->xsk_pool); 555 + xsk_buff_set_size(xdp, size); 556 + xsk_buff_dma_sync_for_cpu(xdp, 
rx_ring->xsk_pool); 548 557 549 - xdp_res = ice_run_xdp_zc(rx_ring, *xdp, xdp_prog, xdp_ring); 558 + xdp_res = ice_run_xdp_zc(rx_ring, xdp, xdp_prog, xdp_ring); 550 559 if (xdp_res) { 551 560 if (xdp_res & (ICE_XDP_TX | ICE_XDP_REDIR)) 552 561 xdp_xmit |= xdp_res; 553 562 else 554 - xsk_buff_free(*xdp); 563 + xsk_buff_free(xdp); 555 564 556 - *xdp = NULL; 557 565 total_rx_bytes += size; 558 566 total_rx_packets++; 559 - cleaned_count++; 560 567 561 568 ice_bump_ntc(rx_ring); 562 569 continue; 563 570 } 564 - 571 + construct_skb: 565 572 /* XDP_PASS path */ 566 573 skb = ice_construct_skb_zc(rx_ring, xdp); 567 574 if (!skb) { ··· 573 572 break; 574 573 } 575 574 576 - cleaned_count++; 577 575 ice_bump_ntc(rx_ring); 578 576 579 577 if (eth_skb_pad(skb)) { ··· 594 594 ice_receive_skb(rx_ring, skb, vlan_tag); 595 595 } 596 596 597 - if (cleaned_count >= ICE_RX_BUF_WRITE) 598 - failure = !ice_alloc_rx_bufs_zc(rx_ring, cleaned_count); 597 + failure = !ice_alloc_rx_bufs_zc(rx_ring, ICE_DESC_UNUSED(rx_ring)); 599 598 600 599 ice_finalize_xdp_rx(xdp_ring, xdp_xmit); 601 600 ice_update_rx_ring_stats(rx_ring, total_rx_packets, total_rx_bytes); ··· 810 811 */ 811 812 void ice_xsk_clean_rx_ring(struct ice_rx_ring *rx_ring) 812 813 { 813 - u16 i; 814 + u16 count_mask = rx_ring->count - 1; 815 + u16 ntc = rx_ring->next_to_clean; 816 + u16 ntu = rx_ring->next_to_use; 814 817 815 - for (i = 0; i < rx_ring->count; i++) { 816 - struct xdp_buff **xdp = &rx_ring->xdp_buf[i]; 818 + for ( ; ntc != ntu; ntc = (ntc + 1) & count_mask) { 819 + struct xdp_buff *xdp = *ice_xdp_buf(rx_ring, ntc); 817 820 818 - if (!xdp) 819 - continue; 820 - 821 - *xdp = NULL; 821 + xsk_buff_free(xdp); 822 822 } 823 823 } 824 824
+27 -20
drivers/net/ethernet/intel/igb/igb_main.c
··· 7648 7648 struct vf_mac_filter *entry = NULL; 7649 7649 int ret = 0; 7650 7650 7651 + if ((vf_data->flags & IGB_VF_FLAG_PF_SET_MAC) && 7652 + !vf_data->trusted) { 7653 + dev_warn(&pdev->dev, 7654 + "VF %d requested MAC filter but is administratively denied\n", 7655 + vf); 7656 + return -EINVAL; 7657 + } 7658 + if (!is_valid_ether_addr(addr)) { 7659 + dev_warn(&pdev->dev, 7660 + "VF %d attempted to set invalid MAC filter\n", 7661 + vf); 7662 + return -EINVAL; 7663 + } 7664 + 7651 7665 switch (info) { 7652 7666 case E1000_VF_MAC_FILTER_CLR: 7653 7667 /* remove all unicast MAC filters related to the current VF */ ··· 7675 7661 } 7676 7662 break; 7677 7663 case E1000_VF_MAC_FILTER_ADD: 7678 - if ((vf_data->flags & IGB_VF_FLAG_PF_SET_MAC) && 7679 - !vf_data->trusted) { 7680 - dev_warn(&pdev->dev, 7681 - "VF %d requested MAC filter but is administratively denied\n", 7682 - vf); 7683 - return -EINVAL; 7684 - } 7685 - if (!is_valid_ether_addr(addr)) { 7686 - dev_warn(&pdev->dev, 7687 - "VF %d attempted to set invalid MAC filter\n", 7688 - vf); 7689 - return -EINVAL; 7690 - } 7691 - 7692 7664 /* try to find empty slot in the list */ 7693 7665 list_for_each(pos, &adapter->vf_macs.l) { 7694 7666 entry = list_entry(pos, struct vf_mac_filter, l); ··· 9254 9254 return __igb_shutdown(to_pci_dev(dev), NULL, 0); 9255 9255 } 9256 9256 9257 - static int __maybe_unused igb_resume(struct device *dev) 9257 + static int __maybe_unused __igb_resume(struct device *dev, bool rpm) 9258 9258 { 9259 9259 struct pci_dev *pdev = to_pci_dev(dev); 9260 9260 struct net_device *netdev = pci_get_drvdata(pdev); ··· 9297 9297 9298 9298 wr32(E1000_WUS, ~0); 9299 9299 9300 - rtnl_lock(); 9300 + if (!rpm) 9301 + rtnl_lock(); 9301 9302 if (!err && netif_running(netdev)) 9302 9303 err = __igb_open(netdev, true); 9303 9304 9304 9305 if (!err) 9305 9306 netif_device_attach(netdev); 9306 - rtnl_unlock(); 9307 + if (!rpm) 9308 + rtnl_unlock(); 9307 9309 9308 9310 return err; 9311 + } 9312 + 9313 + static 
int __maybe_unused igb_resume(struct device *dev) 9314 + { 9315 + return __igb_resume(dev, false); 9309 9316 } 9310 9317 9311 9318 static int __maybe_unused igb_runtime_idle(struct device *dev) ··· 9333 9326 9334 9327 static int __maybe_unused igb_runtime_resume(struct device *dev) 9335 9328 { 9336 - return igb_resume(dev); 9329 + return __igb_resume(dev, true); 9337 9330 } 9338 9331 9339 9332 static void igb_shutdown(struct pci_dev *pdev) ··· 9449 9442 * @pdev: Pointer to PCI device 9450 9443 * 9451 9444 * Restart the card from scratch, as if from a cold-boot. Implementation 9452 - * resembles the first-half of the igb_resume routine. 9445 + * resembles the first-half of the __igb_resume routine. 9453 9446 **/ 9454 9447 static pci_ers_result_t igb_io_slot_reset(struct pci_dev *pdev) 9455 9448 { ··· 9489 9482 * 9490 9483 * This callback is called when the error recovery driver tells us that 9491 9484 * its OK to resume normal operation. Implementation resembles the 9492 - * second-half of the igb_resume routine. 9485 + * second-half of the __igb_resume routine. 9493 9486 */ 9494 9487 static void igb_io_resume(struct pci_dev *pdev) 9495 9488 {
+1
drivers/net/ethernet/intel/igbvf/netdev.c
··· 2859 2859 return 0; 2860 2860 2861 2861 err_hw_init: 2862 + netif_napi_del(&adapter->rx_ring->napi); 2862 2863 kfree(adapter->tx_ring); 2863 2864 kfree(adapter->rx_ring); 2864 2865 err_sw_init:
+1 -1
drivers/net/ethernet/intel/igc/igc_i225.c
··· 636 636 ltrv = rd32(IGC_LTRMAXV); 637 637 if (ltr_max != (ltrv & IGC_LTRMAXV_LTRV_MASK)) { 638 638 ltrv = IGC_LTRMAXV_LSNP_REQ | ltr_max | 639 - (scale_min << IGC_LTRMAXV_SCALE_SHIFT); 639 + (scale_max << IGC_LTRMAXV_SCALE_SHIFT); 640 640 wr32(IGC_LTRMAXV, ltrv); 641 641 } 642 642 }
+6
drivers/net/ethernet/intel/igc/igc_main.c
··· 5467 5467 mod_timer(&adapter->watchdog_timer, jiffies + 1); 5468 5468 } 5469 5469 5470 + if (icr & IGC_ICR_TS) 5471 + igc_tsync_interrupt(adapter); 5472 + 5470 5473 napi_schedule(&q_vector->napi); 5471 5474 5472 5475 return IRQ_HANDLED; ··· 5512 5509 if (!test_bit(__IGC_DOWN, &adapter->state)) 5513 5510 mod_timer(&adapter->watchdog_timer, jiffies + 1); 5514 5511 } 5512 + 5513 + if (icr & IGC_ICR_TS) 5514 + igc_tsync_interrupt(adapter); 5515 5515 5516 5516 napi_schedule(&q_vector->napi); 5517 5517
+14 -1
drivers/net/ethernet/intel/igc/igc_ptp.c
··· 768 768 */ 769 769 static bool igc_is_crosststamp_supported(struct igc_adapter *adapter) 770 770 { 771 - return IS_ENABLED(CONFIG_X86_TSC) ? pcie_ptm_enabled(adapter->pdev) : false; 771 + if (!IS_ENABLED(CONFIG_X86_TSC)) 772 + return false; 773 + 774 + /* FIXME: it was noticed that enabling support for PCIe PTM in 775 + * some i225-V models could cause lockups when bringing the 776 + * interface up/down. There should be no downsides to 777 + * disabling crosstimestamping support for i225-V, as it 778 + * doesn't have any PTP support. That way we gain some time 779 + * while root causing the issue. 780 + */ 781 + if (adapter->pdev->device == IGC_DEV_ID_I225_V) 782 + return false; 783 + 784 + return pcie_ptm_enabled(adapter->pdev); 772 785 } 773 786 774 787 static struct system_counterval_t igc_device_tstamp_to_system(u64 tstamp)
+4
drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
··· 5531 5531 if (!speed && hw->mac.ops.get_link_capabilities) { 5532 5532 ret = hw->mac.ops.get_link_capabilities(hw, &speed, 5533 5533 &autoneg); 5534 + /* remove NBASE-T speeds from default autonegotiation 5535 + * to accommodate broken network switches in the field 5536 + * which cannot cope with advertised NBASE-T speeds 5537 + */ 5534 5538 speed &= ~(IXGBE_LINK_SPEED_5GB_FULL | 5535 5539 IXGBE_LINK_SPEED_2_5GB_FULL); 5536 5540 }
+3
drivers/net/ethernet/intel/ixgbe/ixgbe_x550.c
··· 3405 3405 /* flush pending Tx transactions */ 3406 3406 ixgbe_clear_tx_pending(hw); 3407 3407 3408 + /* set MDIO speed before talking to the PHY in case it's the 1st time */ 3409 + ixgbe_set_mdio_speed(hw); 3410 + 3408 3411 /* PHY ops must be identified and initialized prior to reset */ 3409 3412 status = hw->phy.ops.init(hw); 3410 3413 if (status == IXGBE_ERR_SFP_NOT_SUPPORTED ||
+25 -11
drivers/net/ethernet/lantiq_xrx200.c
··· 71 71 struct xrx200_chan chan_tx; 72 72 struct xrx200_chan chan_rx; 73 73 74 + u16 rx_buf_size; 75 + 74 76 struct net_device *net_dev; 75 77 struct device *dev; 76 78 ··· 99 97 xrx200_pmac_w32(priv, val, offset); 100 98 } 101 99 100 + static int xrx200_max_frame_len(int mtu) 101 + { 102 + return VLAN_ETH_HLEN + mtu; 103 + } 104 + 105 + static int xrx200_buffer_size(int mtu) 106 + { 107 + return round_up(xrx200_max_frame_len(mtu), 4 * XRX200_DMA_BURST_LEN); 108 + } 109 + 102 110 /* drop all the packets from the DMA ring */ 103 111 static void xrx200_flush_dma(struct xrx200_chan *ch) 104 112 { ··· 121 109 break; 122 110 123 111 desc->ctl = LTQ_DMA_OWN | LTQ_DMA_RX_OFFSET(NET_IP_ALIGN) | 124 - (ch->priv->net_dev->mtu + VLAN_ETH_HLEN + 125 - ETH_FCS_LEN); 112 + ch->priv->rx_buf_size; 126 113 ch->dma.desc++; 127 114 ch->dma.desc %= LTQ_DESC_NUM; 128 115 } ··· 169 158 170 159 static int xrx200_alloc_skb(struct xrx200_chan *ch) 171 160 { 172 - int len = ch->priv->net_dev->mtu + VLAN_ETH_HLEN + ETH_FCS_LEN; 173 161 struct sk_buff *skb = ch->skb[ch->dma.desc]; 162 + struct xrx200_priv *priv = ch->priv; 174 163 dma_addr_t mapping; 175 164 int ret = 0; 176 165 177 - ch->skb[ch->dma.desc] = netdev_alloc_skb_ip_align(ch->priv->net_dev, 178 - len); 166 + ch->skb[ch->dma.desc] = netdev_alloc_skb_ip_align(priv->net_dev, 167 + priv->rx_buf_size); 179 168 if (!ch->skb[ch->dma.desc]) { 180 169 ret = -ENOMEM; 181 170 goto skip; 182 171 } 183 172 184 - mapping = dma_map_single(ch->priv->dev, ch->skb[ch->dma.desc]->data, 185 - len, DMA_FROM_DEVICE); 186 - if (unlikely(dma_mapping_error(ch->priv->dev, mapping))) { 173 + mapping = dma_map_single(priv->dev, ch->skb[ch->dma.desc]->data, 174 + priv->rx_buf_size, DMA_FROM_DEVICE); 175 + if (unlikely(dma_mapping_error(priv->dev, mapping))) { 187 176 dev_kfree_skb_any(ch->skb[ch->dma.desc]); 188 177 ch->skb[ch->dma.desc] = skb; 189 178 ret = -ENOMEM; ··· 195 184 wmb(); 196 185 skip: 197 186 ch->dma.desc_base[ch->dma.desc].ctl = 198 - 
LTQ_DMA_OWN | LTQ_DMA_RX_OFFSET(NET_IP_ALIGN) | len; 187 + LTQ_DMA_OWN | LTQ_DMA_RX_OFFSET(NET_IP_ALIGN) | priv->rx_buf_size; 199 188 200 189 return ret; 201 190 } ··· 224 213 skb->protocol = eth_type_trans(skb, net_dev); 225 214 netif_receive_skb(skb); 226 215 net_dev->stats.rx_packets++; 227 - net_dev->stats.rx_bytes += len - ETH_FCS_LEN; 216 + net_dev->stats.rx_bytes += len; 228 217 229 218 return 0; 230 219 } ··· 367 356 int ret = 0; 368 357 369 358 net_dev->mtu = new_mtu; 359 + priv->rx_buf_size = xrx200_buffer_size(new_mtu); 370 360 371 361 if (new_mtu <= old_mtu) 372 362 return ret; ··· 387 375 ret = xrx200_alloc_skb(ch_rx); 388 376 if (ret) { 389 377 net_dev->mtu = old_mtu; 378 + priv->rx_buf_size = xrx200_buffer_size(old_mtu); 390 379 break; 391 380 } 392 381 dev_kfree_skb_any(skb); ··· 518 505 net_dev->netdev_ops = &xrx200_netdev_ops; 519 506 SET_NETDEV_DEV(net_dev, dev); 520 507 net_dev->min_mtu = ETH_ZLEN; 521 - net_dev->max_mtu = XRX200_DMA_DATA_LEN - VLAN_ETH_HLEN - ETH_FCS_LEN; 508 + net_dev->max_mtu = XRX200_DMA_DATA_LEN - xrx200_max_frame_len(0); 509 + priv->rx_buf_size = xrx200_buffer_size(ETH_DATA_LEN); 522 510 523 511 /* load the memory ranges */ 524 512 priv->pmac_reg = devm_platform_get_and_ioremap_resource(pdev, 0, NULL);
+22 -13
drivers/net/ethernet/marvell/prestera/prestera_main.c
··· 54 54 struct prestera_port *prestera_port_find_by_hwid(struct prestera_switch *sw, 55 55 u32 dev_id, u32 hw_id) 56 56 { 57 - struct prestera_port *port = NULL; 57 + struct prestera_port *port = NULL, *tmp; 58 58 59 59 read_lock(&sw->port_list_lock); 60 - list_for_each_entry(port, &sw->port_list, list) { 61 - if (port->dev_id == dev_id && port->hw_id == hw_id) 60 + list_for_each_entry(tmp, &sw->port_list, list) { 61 + if (tmp->dev_id == dev_id && tmp->hw_id == hw_id) { 62 + port = tmp; 62 63 break; 64 + } 63 65 } 64 66 read_unlock(&sw->port_list_lock); 65 67 ··· 70 68 71 69 struct prestera_port *prestera_find_port(struct prestera_switch *sw, u32 id) 72 70 { 73 - struct prestera_port *port = NULL; 71 + struct prestera_port *port = NULL, *tmp; 74 72 75 73 read_lock(&sw->port_list_lock); 76 - list_for_each_entry(port, &sw->port_list, list) { 77 - if (port->id == id) 74 + list_for_each_entry(tmp, &sw->port_list, list) { 75 + if (tmp->id == id) { 76 + port = tmp; 78 77 break; 78 + } 79 79 } 80 80 read_unlock(&sw->port_list_lock); 81 81 ··· 768 764 struct net_device *dev, 769 765 unsigned long event, void *ptr) 770 766 { 771 - struct netdev_notifier_changeupper_info *info = ptr; 767 + struct netdev_notifier_info *info = ptr; 768 + struct netdev_notifier_changeupper_info *cu_info; 772 769 struct prestera_port *port = netdev_priv(dev); 773 770 struct netlink_ext_ack *extack; 774 771 struct net_device *upper; 775 772 776 - extack = netdev_notifier_info_to_extack(&info->info); 777 - upper = info->upper_dev; 773 + extack = netdev_notifier_info_to_extack(info); 774 + cu_info = container_of(info, 775 + struct netdev_notifier_changeupper_info, 776 + info); 778 777 779 778 switch (event) { 780 779 case NETDEV_PRECHANGEUPPER: 780 + upper = cu_info->upper_dev; 781 781 if (!netif_is_bridge_master(upper) && 782 782 !netif_is_lag_master(upper)) { 783 783 NL_SET_ERR_MSG_MOD(extack, "Unknown upper device type"); 784 784 return -EINVAL; 785 785 } 786 786 787 - if (!info->linking) 787 
+ if (!cu_info->linking) 788 788 break; 789 789 790 790 if (netdev_has_any_upper_dev(upper)) { ··· 797 789 } 798 790 799 791 if (netif_is_lag_master(upper) && 800 - !prestera_lag_master_check(upper, info->upper_info, extack)) 792 + !prestera_lag_master_check(upper, cu_info->upper_info, extack)) 801 793 return -EOPNOTSUPP; 802 794 if (netif_is_lag_master(upper) && vlan_uses_dev(dev)) { 803 795 NL_SET_ERR_MSG_MOD(extack, ··· 813 805 break; 814 806 815 807 case NETDEV_CHANGEUPPER: 808 + upper = cu_info->upper_dev; 816 809 if (netif_is_bridge_master(upper)) { 817 - if (info->linking) 810 + if (cu_info->linking) 818 811 return prestera_bridge_port_join(upper, port, 819 812 extack); 820 813 else 821 814 prestera_bridge_port_leave(upper, port); 822 815 } else if (netif_is_lag_master(upper)) { 823 - if (info->linking) 816 + if (cu_info->linking) 824 817 return prestera_lag_port_add(port, upper); 825 818 else 826 819 prestera_lag_port_del(port);
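The prestera lookup fix above exists because `list_for_each_entry()` leaves its cursor pointing at a bogus head-embedded entry when the loop runs to completion; iterating with a separate `tmp` keeps the result `NULL` on a miss. A minimal userspace sketch of the same pattern on a plain singly linked list (names are illustrative):

```c
#include <assert.h>
#include <stddef.h>

struct port {
	unsigned int id;
	struct port *next;
};

/* The iteration cursor lives in "tmp", mirroring the patch: if the loop
 * runs off the end, "port" stays NULL instead of being left pointing at
 * whatever the exhausted cursor held. */
static struct port *find_port(struct port *head, unsigned int id)
{
	struct port *port = NULL, *tmp;

	for (tmp = head; tmp; tmp = tmp->next) {
		if (tmp->id == id) {
			port = tmp;
			break;
		}
	}
	return port;
}
```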
+2 -3
drivers/net/ethernet/mellanox/mlx5/core/en.h
··· 783 783 DECLARE_BITMAP(state, MLX5E_CHANNEL_NUM_STATES); 784 784 int ix; 785 785 int cpu; 786 + /* Sync between icosq recovery and XSK enable/disable. */ 787 + struct mutex icosq_recovery_lock; 786 788 }; 787 789 788 790 struct mlx5e_ptp; ··· 1016 1014 void mlx5e_destroy_rq(struct mlx5e_rq *rq); 1017 1015 1018 1016 struct mlx5e_sq_param; 1019 - int mlx5e_open_icosq(struct mlx5e_channel *c, struct mlx5e_params *params, 1020 - struct mlx5e_sq_param *param, struct mlx5e_icosq *sq); 1021 - void mlx5e_close_icosq(struct mlx5e_icosq *sq); 1022 1017 int mlx5e_open_xdpsq(struct mlx5e_channel *c, struct mlx5e_params *params, 1023 1018 struct mlx5e_sq_param *param, struct xsk_buff_pool *xsk_pool, 1024 1019 struct mlx5e_xdpsq *sq, bool is_redirect);
+2
drivers/net/ethernet/mellanox/mlx5/core/en/health.h
··· 30 30 void mlx5e_reporter_icosq_cqe_err(struct mlx5e_icosq *icosq); 31 31 void mlx5e_reporter_rq_cqe_err(struct mlx5e_rq *rq); 32 32 void mlx5e_reporter_rx_timeout(struct mlx5e_rq *rq); 33 + void mlx5e_reporter_icosq_suspend_recovery(struct mlx5e_channel *c); 34 + void mlx5e_reporter_icosq_resume_recovery(struct mlx5e_channel *c); 33 35 34 36 #define MLX5E_REPORTER_PER_Q_MAX_LEN 256 35 37
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/en/rep/tc.h
··· 66 66 67 67 static inline void 68 68 mlx5e_rep_tc_receive(struct mlx5_cqe64 *cqe, struct mlx5e_rq *rq, 69 - struct sk_buff *skb) {} 69 + struct sk_buff *skb) { napi_gro_receive(rq->cq.napi, skb); } 70 70 71 71 #endif /* CONFIG_MLX5_CLS_ACT */ 72 72
+34 -1
drivers/net/ethernet/mellanox/mlx5/core/en/reporter_rx.c
··· 62 62 63 63 static int mlx5e_rx_reporter_err_icosq_cqe_recover(void *ctx) 64 64 { 65 + struct mlx5e_rq *xskrq = NULL; 65 66 struct mlx5_core_dev *mdev; 66 67 struct mlx5e_icosq *icosq; 67 68 struct net_device *dev; ··· 71 70 int err; 72 71 73 72 icosq = ctx; 73 + 74 + mutex_lock(&icosq->channel->icosq_recovery_lock); 75 + 76 + /* mlx5e_close_rq cancels this work before RQ and ICOSQ are killed. */ 74 77 rq = &icosq->channel->rq; 78 + if (test_bit(MLX5E_RQ_STATE_ENABLED, &icosq->channel->xskrq.state)) 79 + xskrq = &icosq->channel->xskrq; 75 80 mdev = icosq->channel->mdev; 76 81 dev = icosq->channel->netdev; 77 82 err = mlx5_core_query_sq_state(mdev, icosq->sqn, &state); ··· 91 84 goto out; 92 85 93 86 mlx5e_deactivate_rq(rq); 87 + if (xskrq) 88 + mlx5e_deactivate_rq(xskrq); 89 + 94 90 err = mlx5e_wait_for_icosq_flush(icosq); 95 91 if (err) 96 92 goto out; ··· 107 97 goto out; 108 98 109 99 mlx5e_reset_icosq_cc_pc(icosq); 100 + 110 101 mlx5e_free_rx_in_progress_descs(rq); 102 + if (xskrq) 103 + mlx5e_free_rx_in_progress_descs(xskrq); 104 + 111 105 clear_bit(MLX5E_SQ_STATE_RECOVERING, &icosq->state); 112 106 mlx5e_activate_icosq(icosq); 113 - mlx5e_activate_rq(rq); 114 107 108 + mlx5e_activate_rq(rq); 115 109 rq->stats->recover++; 110 + 111 + if (xskrq) { 112 + mlx5e_activate_rq(xskrq); 113 + xskrq->stats->recover++; 114 + } 115 + 116 + mutex_unlock(&icosq->channel->icosq_recovery_lock); 117 + 116 118 return 0; 117 119 out: 118 120 clear_bit(MLX5E_SQ_STATE_RECOVERING, &icosq->state); 121 + mutex_unlock(&icosq->channel->icosq_recovery_lock); 119 122 return err; 120 123 } 121 124 ··· 727 704 snprintf(err_str, sizeof(err_str), "ERR CQE on ICOSQ: 0x%x", icosq->sqn); 728 705 729 706 mlx5e_health_report(priv, priv->rx_reporter, err_str, &err_ctx); 707 + } 708 + 709 + void mlx5e_reporter_icosq_suspend_recovery(struct mlx5e_channel *c) 710 + { 711 + mutex_lock(&c->icosq_recovery_lock); 712 + } 713 + 714 + void mlx5e_reporter_icosq_resume_recovery(struct mlx5e_channel *c) 
715 + { 716 + mutex_unlock(&c->icosq_recovery_lock); 730 717 } 731 718 732 719 static const struct devlink_health_reporter_ops mlx5_rx_reporter_ops = {
+9 -1
drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c
··· 466 466 return mlx5e_health_fmsg_named_obj_nest_end(fmsg); 467 467 } 468 468 469 + static int mlx5e_tx_reporter_timeout_dump(struct mlx5e_priv *priv, struct devlink_fmsg *fmsg, 470 + void *ctx) 471 + { 472 + struct mlx5e_tx_timeout_ctx *to_ctx = ctx; 473 + 474 + return mlx5e_tx_reporter_dump_sq(priv, fmsg, to_ctx->sq); 475 + } 476 + 469 477 static int mlx5e_tx_reporter_dump_all_sqs(struct mlx5e_priv *priv, 470 478 struct devlink_fmsg *fmsg) 471 479 { ··· 569 561 to_ctx.sq = sq; 570 562 err_ctx.ctx = &to_ctx; 571 563 err_ctx.recover = mlx5e_tx_reporter_timeout_recover; 572 - err_ctx.dump = mlx5e_tx_reporter_dump_sq; 564 + err_ctx.dump = mlx5e_tx_reporter_timeout_dump; 573 565 snprintf(err_str, sizeof(err_str), 574 566 "TX timeout on queue: %d, SQ: 0x%x, CQ: 0x%x, SQ Cons: 0x%x SQ Prod: 0x%x, usecs since last trans: %u", 575 567 sq->ch_ix, sq->sqn, sq->cq.mcq.cqn, sq->cc, sq->pc,
+15 -1
drivers/net/ethernet/mellanox/mlx5/core/en/xsk/setup.c
··· 4 4 #include "setup.h" 5 5 #include "en/params.h" 6 6 #include "en/txrx.h" 7 + #include "en/health.h" 7 8 8 9 /* It matches XDP_UMEM_MIN_CHUNK_SIZE, but as this constant is private and may 9 10 * change unexpectedly, and mlx5e has a minimum valid stride size for striding ··· 171 170 172 171 void mlx5e_activate_xsk(struct mlx5e_channel *c) 173 172 { 173 + /* ICOSQ recovery deactivates RQs. Suspend the recovery to avoid 174 + * activating XSKRQ in the middle of recovery. 175 + */ 176 + mlx5e_reporter_icosq_suspend_recovery(c); 174 177 set_bit(MLX5E_RQ_STATE_ENABLED, &c->xskrq.state); 178 + mlx5e_reporter_icosq_resume_recovery(c); 179 + 175 180 /* TX queue is created active. */ 176 181 177 182 spin_lock_bh(&c->async_icosq_lock); ··· 187 180 188 181 void mlx5e_deactivate_xsk(struct mlx5e_channel *c) 189 182 { 190 - mlx5e_deactivate_rq(&c->xskrq); 183 + /* ICOSQ recovery may reactivate XSKRQ if clear_bit is called in the 184 + * middle of recovery. Suspend the recovery to avoid it. 185 + */ 186 + mlx5e_reporter_icosq_suspend_recovery(c); 187 + clear_bit(MLX5E_RQ_STATE_ENABLED, &c->xskrq.state); 188 + mlx5e_reporter_icosq_resume_recovery(c); 189 + synchronize_net(); /* Sync with NAPI to prevent mlx5e_post_rx_wqes. */ 190 + 191 191 /* TX queue is disabled on close. */ 192 192 }
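The XSK hunk above brackets the `ENABLED`-bit flips with suspend/resume calls so ICOSQ recovery can never observe a half-toggled XSKRQ. A single-threaded sketch of that contract, with a plain int standing in for the new `icosq_recovery_lock` mutex (struct and field names are stand-ins, not the driver's):

```c
#include <assert.h>

struct channel {
	int recovery_locked;	/* stand-in for the icosq_recovery_lock mutex */
	int xskrq_enabled;	/* stand-in for MLX5E_RQ_STATE_ENABLED */
};

static void icosq_suspend_recovery(struct channel *c)
{
	c->recovery_locked = 1;	/* mutex_lock() in the real driver */
}

static void icosq_resume_recovery(struct channel *c)
{
	c->recovery_locked = 0;	/* mutex_unlock() in the real driver */
}

/* State flips happen only inside the suspended window, so the recovery
 * worker (which holds the same lock for its whole run) cannot interleave. */
static void activate_xsk(struct channel *c)
{
	icosq_suspend_recovery(c);
	c->xskrq_enabled = 1;
	icosq_resume_recovery(c);
}

static void deactivate_xsk(struct channel *c)
{
	icosq_suspend_recovery(c);
	c->xskrq_enabled = 0;
	icosq_resume_recovery(c);
}
```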
+32 -16
drivers/net/ethernet/mellanox/mlx5/core/en_main.c
··· 1087 1087 void mlx5e_close_rq(struct mlx5e_rq *rq) 1088 1088 { 1089 1089 cancel_work_sync(&rq->dim.work); 1090 - if (rq->icosq) 1091 - cancel_work_sync(&rq->icosq->recover_work); 1092 1090 cancel_work_sync(&rq->recover_work); 1093 1091 mlx5e_destroy_rq(rq); 1094 1092 mlx5e_free_rx_descs(rq); ··· 1214 1216 mlx5e_reporter_icosq_cqe_err(sq); 1215 1217 } 1216 1218 1219 + static void mlx5e_async_icosq_err_cqe_work(struct work_struct *recover_work) 1220 + { 1221 + struct mlx5e_icosq *sq = container_of(recover_work, struct mlx5e_icosq, 1222 + recover_work); 1223 + 1224 + /* Not implemented yet. */ 1225 + 1226 + netdev_warn(sq->channel->netdev, "async_icosq recovery is not implemented\n"); 1227 + } 1228 + 1217 1229 static int mlx5e_alloc_icosq(struct mlx5e_channel *c, 1218 1230 struct mlx5e_sq_param *param, 1219 - struct mlx5e_icosq *sq) 1231 + struct mlx5e_icosq *sq, 1232 + work_func_t recover_work_func) 1220 1233 { 1221 1234 void *sqc_wq = MLX5_ADDR_OF(sqc, param->sqc, wq); 1222 1235 struct mlx5_core_dev *mdev = c->mdev; ··· 1248 1239 if (err) 1249 1240 goto err_sq_wq_destroy; 1250 1241 1251 - INIT_WORK(&sq->recover_work, mlx5e_icosq_err_cqe_work); 1242 + INIT_WORK(&sq->recover_work, recover_work_func); 1252 1243 1253 1244 return 0; 1254 1245 ··· 1584 1575 mlx5e_reporter_tx_err_cqe(sq); 1585 1576 } 1586 1577 1587 - int mlx5e_open_icosq(struct mlx5e_channel *c, struct mlx5e_params *params, 1588 - struct mlx5e_sq_param *param, struct mlx5e_icosq *sq) 1578 + static int mlx5e_open_icosq(struct mlx5e_channel *c, struct mlx5e_params *params, 1579 + struct mlx5e_sq_param *param, struct mlx5e_icosq *sq, 1580 + work_func_t recover_work_func) 1589 1581 { 1590 1582 struct mlx5e_create_sq_param csp = {}; 1591 1583 int err; 1592 1584 1593 - err = mlx5e_alloc_icosq(c, param, sq); 1585 + err = mlx5e_alloc_icosq(c, param, sq, recover_work_func); 1594 1586 if (err) 1595 1587 return err; 1596 1588 ··· 1630 1620 synchronize_net(); /* Sync with NAPI. 
*/ 1631 1621 } 1632 1622 1633 - void mlx5e_close_icosq(struct mlx5e_icosq *sq) 1623 + static void mlx5e_close_icosq(struct mlx5e_icosq *sq) 1634 1624 { 1635 1625 struct mlx5e_channel *c = sq->channel; 1636 1626 ··· 2094 2084 2095 2085 spin_lock_init(&c->async_icosq_lock); 2096 2086 2097 - err = mlx5e_open_icosq(c, params, &cparam->async_icosq, &c->async_icosq); 2087 + err = mlx5e_open_icosq(c, params, &cparam->async_icosq, &c->async_icosq, 2088 + mlx5e_async_icosq_err_cqe_work); 2098 2089 if (err) 2099 2090 goto err_close_xdpsq_cq; 2100 2091 2101 - err = mlx5e_open_icosq(c, params, &cparam->icosq, &c->icosq); 2092 + mutex_init(&c->icosq_recovery_lock); 2093 + 2094 + err = mlx5e_open_icosq(c, params, &cparam->icosq, &c->icosq, 2095 + mlx5e_icosq_err_cqe_work); 2102 2096 if (err) 2103 2097 goto err_close_async_icosq; 2104 2098 ··· 2170 2156 mlx5e_close_xdpsq(&c->xdpsq); 2171 2157 if (c->xdp) 2172 2158 mlx5e_close_xdpsq(&c->rq_xdpsq); 2159 + /* The same ICOSQ is used for UMRs for both RQ and XSKRQ. 
*/ 2160 + cancel_work_sync(&c->icosq.recover_work); 2173 2161 mlx5e_close_rq(&c->rq); 2174 2162 mlx5e_close_sqs(c); 2175 2163 mlx5e_close_icosq(&c->icosq); 2164 + mutex_destroy(&c->icosq_recovery_lock); 2176 2165 mlx5e_close_icosq(&c->async_icosq); 2177 2166 if (c->xdp) 2178 2167 mlx5e_close_cq(&c->rq_xdpsq.cq); ··· 3741 3724 3742 3725 static int mlx5e_handle_feature(struct net_device *netdev, 3743 3726 netdev_features_t *features, 3744 - netdev_features_t wanted_features, 3745 3727 netdev_features_t feature, 3746 3728 mlx5e_feature_handler feature_handler) 3747 3729 { 3748 - netdev_features_t changes = wanted_features ^ netdev->features; 3749 - bool enable = !!(wanted_features & feature); 3730 + netdev_features_t changes = *features ^ netdev->features; 3731 + bool enable = !!(*features & feature); 3750 3732 int err; 3751 3733 3752 3734 if (!(changes & feature)) ··· 3753 3737 3754 3738 err = feature_handler(netdev, enable); 3755 3739 if (err) { 3740 + MLX5E_SET_FEATURE(features, feature, !enable); 3756 3741 netdev_err(netdev, "%s feature %pNF failed, err %d\n", 3757 3742 enable ? "Enable" : "Disable", &feature, err); 3758 3743 return err; 3759 3744 } 3760 3745 3761 - MLX5E_SET_FEATURE(features, feature, enable); 3762 3746 return 0; 3763 3747 } 3764 3748 3765 3749 int mlx5e_set_features(struct net_device *netdev, netdev_features_t features) 3766 3750 { 3767 - netdev_features_t oper_features = netdev->features; 3751 + netdev_features_t oper_features = features; 3768 3752 int err = 0; 3769 3753 3770 3754 #define MLX5E_HANDLE_FEATURE(feature, handler) \ 3771 - mlx5e_handle_feature(netdev, &oper_features, features, feature, handler) 3755 + mlx5e_handle_feature(netdev, &oper_features, feature, handler) 3772 3756 3773 3757 err |= MLX5E_HANDLE_FEATURE(NETIF_F_LRO, set_feature_lro); 3774 3758 err |= MLX5E_HANDLE_FEATURE(NETIF_F_GRO_HW, set_feature_hw_gro);
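The `mlx5e_handle_feature()` rework above threads the wanted features through `*features` itself and, on handler failure, flips the bit back so later handlers and the caller see what the hardware actually accepted. A self-contained sketch of that revert-on-error flow (the `features_t` typedef and the -5 error code are illustrative):

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t features_t;

static void set_feature(features_t *features, features_t feature, int enable)
{
	if (enable)
		*features |= feature;
	else
		*features &= ~feature;
}

/* "active" is what the device currently has; "*features" is what the
 * stack wants. On handler failure the requested bit is reverted. */
static int handle_feature(features_t active, features_t *features,
			  features_t feature, int (*handler)(int enable))
{
	features_t changes = *features ^ active;
	int enable = !!(*features & feature);
	int err;

	if (!(changes & feature))
		return 0;	/* nothing to do for this feature */

	err = handler(enable);
	if (err) {
		set_feature(features, feature, !enable);
		return err;
	}
	return 0;
}

/* Handler that always fails, to exercise the revert path. */
static int failing_handler(int enable)
{
	(void)enable;
	return -5;
}
```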
+17 -16
drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
··· 1196 1196 if (attr->flags & MLX5_ESW_ATTR_FLAG_SLOW_PATH) 1197 1197 goto offload_rule_0; 1198 1198 1199 - if (flow_flag_test(flow, CT)) { 1200 - mlx5_tc_ct_delete_flow(get_ct_priv(flow->priv), flow, attr); 1201 - return; 1202 - } 1203 - 1204 - if (flow_flag_test(flow, SAMPLE)) { 1205 - mlx5e_tc_sample_unoffload(get_sample_priv(flow->priv), flow->rule[0], attr); 1206 - return; 1207 - } 1208 - 1209 1199 if (attr->esw_attr->split_count) 1210 1200 mlx5_eswitch_del_fwd_rule(esw, flow->rule[1], attr); 1211 1201 1202 + if (flow_flag_test(flow, CT)) 1203 + mlx5_tc_ct_delete_flow(get_ct_priv(flow->priv), flow, attr); 1204 + else if (flow_flag_test(flow, SAMPLE)) 1205 + mlx5e_tc_sample_unoffload(get_sample_priv(flow->priv), flow->rule[0], attr); 1206 + else 1212 1207 offload_rule_0: 1213 - mlx5_eswitch_del_offloaded_rule(esw, flow->rule[0], attr); 1208 + mlx5_eswitch_del_offloaded_rule(esw, flow->rule[0], attr); 1214 1209 } 1215 1210 1216 1211 struct mlx5_flow_handle * ··· 1440 1445 MLX5_FLOW_NAMESPACE_FDB, VPORT_TO_REG, 1441 1446 metadata); 1442 1447 if (err) 1443 - return err; 1448 + goto err_out; 1449 + 1450 + attr->action |= MLX5_FLOW_CONTEXT_ACTION_MOD_HDR; 1444 1451 } 1445 1452 } 1446 1453 ··· 1458 1461 if (attr->chain) { 1459 1462 NL_SET_ERR_MSG_MOD(extack, 1460 1463 "Internal port rule is only supported on chain 0"); 1461 - return -EOPNOTSUPP; 1464 + err = -EOPNOTSUPP; 1465 + goto err_out; 1462 1466 } 1463 1467 1464 1468 if (attr->dest_chain) { 1465 1469 NL_SET_ERR_MSG_MOD(extack, 1466 1470 "Internal port rule offload doesn't support goto action"); 1467 - return -EOPNOTSUPP; 1471 + err = -EOPNOTSUPP; 1472 + goto err_out; 1468 1473 } 1469 1474 1470 1475 int_port = mlx5e_tc_int_port_get(mlx5e_get_int_port_priv(priv), ··· 1474 1475 flow_flag_test(flow, EGRESS) ? 
1475 1476 MLX5E_TC_INT_PORT_EGRESS : 1476 1477 MLX5E_TC_INT_PORT_INGRESS); 1477 - if (IS_ERR(int_port)) 1478 - return PTR_ERR(int_port); 1478 + if (IS_ERR(int_port)) { 1479 + err = PTR_ERR(int_port); 1480 + goto err_out; 1481 + } 1479 1482 1480 1483 esw_attr->int_port = int_port; 1481 1484 }
+3
drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.c
··· 121 121 122 122 u32 mlx5_chains_get_prio_range(struct mlx5_fs_chains *chains) 123 123 { 124 + if (!mlx5_chains_prios_supported(chains)) 125 + return 1; 126 + 124 127 if (mlx5_chains_ignore_flow_level_supported(chains)) 125 128 return UINT_MAX; 126 129
+6 -5
drivers/net/ethernet/mellanox/mlx5/core/main.c
··· 1809 1809 1810 1810 int mlx5_recover_device(struct mlx5_core_dev *dev) 1811 1811 { 1812 - int ret = -EIO; 1812 + if (!mlx5_core_is_sf(dev)) { 1813 + mlx5_pci_disable_device(dev); 1814 + if (mlx5_pci_slot_reset(dev->pdev) != PCI_ERS_RESULT_RECOVERED) 1815 + return -EIO; 1816 + } 1813 1817 1814 - mlx5_pci_disable_device(dev); 1815 - if (mlx5_pci_slot_reset(dev->pdev) == PCI_ERS_RESULT_RECOVERED) 1816 - ret = mlx5_load_one(dev); 1817 - return ret; 1818 + return mlx5_load_one(dev); 1818 1819 } 1819 1820 1820 1821 static struct pci_driver mlx5_core_driver = {
+3 -3
drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c
··· 356 356 new_irq = irq_pool_create_irq(pool, affinity); 357 357 if (IS_ERR(new_irq)) { 358 358 if (!least_loaded_irq) { 359 - mlx5_core_err(pool->dev, "Didn't find IRQ for cpu = %u\n", 360 - cpumask_first(affinity)); 359 + mlx5_core_err(pool->dev, "Didn't find a matching IRQ. err = %ld\n", 360 + PTR_ERR(new_irq)); 361 361 mutex_unlock(&pool->lock); 362 362 return new_irq; 363 363 } ··· 398 398 cpumask_copy(irq->mask, affinity); 399 399 if (!irq_pool_is_sf_pool(pool) && !pool->xa_num_irqs.max && 400 400 cpumask_empty(irq->mask)) 401 - cpumask_set_cpu(0, irq->mask); 401 + cpumask_set_cpu(cpumask_first(cpu_online_mask), irq->mask); 402 402 irq_set_affinity_hint(irq->irqn, irq->mask); 403 403 unlock: 404 404 mutex_unlock(&pool->lock);
+4 -5
drivers/net/ethernet/mellanox/mlx5/core/steering/dr_domain.c
··· 2 2 /* Copyright (c) 2019 Mellanox Technologies. */ 3 3 4 4 #include <linux/mlx5/eswitch.h> 5 + #include <linux/err.h> 5 6 #include "dr_types.h" 6 7 7 8 #define DR_DOMAIN_SW_STEERING_SUPPORTED(dmn, dmn_type) \ ··· 73 72 } 74 73 75 74 dmn->uar = mlx5_get_uars_page(dmn->mdev); 76 - if (!dmn->uar) { 75 + if (IS_ERR(dmn->uar)) { 77 76 mlx5dr_err(dmn, "Couldn't allocate UAR\n"); 78 - ret = -ENOMEM; 77 + ret = PTR_ERR(dmn->uar); 79 78 goto clean_pd; 80 79 } 81 80 ··· 164 163 165 164 static int dr_domain_query_esw_mngr(struct mlx5dr_domain *dmn) 166 165 { 167 - return dr_domain_query_vport(dmn, 168 - dmn->info.caps.is_ecpf ? MLX5_VPORT_ECPF : 0, 169 - false, 166 + return dr_domain_query_vport(dmn, 0, false, 170 167 &dmn->info.caps.vports.esw_manager_caps); 171 168 } 172 169
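The dr_domain fix above matters because `mlx5_get_uars_page()` reports failure as an `ERR_PTR`-encoded errno, never as `NULL`, so the old `!dmn->uar` test could not fire. A userspace re-creation of the `include/linux/err.h` helpers, which encode small negative errnos in the last page of the address space:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define MAX_ERRNO 4095
#define EINVAL 22

static void *ERR_PTR(long error)
{
	return (void *)(intptr_t)error;
}

static long PTR_ERR(const void *ptr)
{
	return (long)(intptr_t)ptr;
}

/* True only for pointers in the top MAX_ERRNO bytes of the address
 * space -- note that NULL is NOT an error pointer by this test. */
static int IS_ERR(const void *ptr)
{
	return (uintptr_t)ptr >= (uintptr_t)-MAX_ERRNO;
}
```

The converse trap appears in the fixed_phy.c hunk later in this diff: `fixed_phy_get_gpiod()` reports failure as `NULL`, so checking it with `IS_ERR()` was equally wrong there.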
+2 -1
drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
··· 8494 8494 u8 mac_profile; 8495 8495 int err; 8496 8496 8497 - if (!mlxsw_sp_rif_mac_profile_is_shared(rif)) 8497 + if (!mlxsw_sp_rif_mac_profile_is_shared(rif) && 8498 + !mlxsw_sp_rif_mac_profile_find(mlxsw_sp, new_mac)) 8498 8499 return mlxsw_sp_rif_mac_profile_edit(rif, new_mac); 8499 8500 8500 8501 err = mlxsw_sp_rif_mac_profile_get(mlxsw_sp, new_mac,
+2
drivers/net/ethernet/micrel/ks8851_par.c
··· 321 321 return ret; 322 322 323 323 netdev->irq = platform_get_irq(pdev, 0); 324 + if (netdev->irq < 0) 325 + return netdev->irq; 324 326 325 327 return ks8851_probe_common(netdev, dev, msg_enable); 326 328 }
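Several hunks in this merge (ks8851_par, smc911x, fjes) add the same guard: `platform_get_irq()` returns a negative errno such as `-EPROBE_DEFER` on failure, and a probe routine must propagate it rather than store it as an IRQ number. A sketch of that pattern (function name is illustrative):

```c
#include <assert.h>

/* Store a platform IRQ if valid, otherwise hand the errno back up so the
 * probe can fail or defer. */
static int assign_irq(int *netdev_irq, int irq)
{
	if (irq < 0)
		return irq;	/* propagate e.g. -EPROBE_DEFER (-517) */
	*netdev_irq = irq;
	return 0;		/* continue probing */
}
```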
+1 -1
drivers/net/ethernet/pensando/ionic/ionic_lif.c
··· 3135 3135 return -EINVAL; 3136 3136 } 3137 3137 3138 - lif->dbid_inuse = bitmap_alloc(lif->dbid_count, GFP_KERNEL); 3138 + lif->dbid_inuse = bitmap_zalloc(lif->dbid_count, GFP_KERNEL); 3139 3139 if (!lif->dbid_inuse) { 3140 3140 dev_err(dev, "Failed alloc doorbell id bitmap, aborting\n"); 3141 3141 return -ENOMEM;
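The one-liner above swaps `bitmap_alloc()` for `bitmap_zalloc()` because the doorbell-id tracking assumes every bit starts clear; uninitialised bitmap memory could mark ids as already in use. A userspace sketch of the zeroed variant and a bit test (helper names are made up):

```c
#include <assert.h>
#include <limits.h>
#include <stdlib.h>

#define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)

/* Same layout as bitmap_alloc(), but backed by calloc() so every bit is
 * clear from the start, mirroring bitmap_zalloc(). */
static unsigned long *bitmap_zalloc_sketch(unsigned int nbits)
{
	size_t nlongs = (nbits + BITS_PER_LONG - 1) / BITS_PER_LONG;

	return calloc(nlongs, sizeof(unsigned long));
}

static int test_bit_sketch(unsigned int nr, const unsigned long *map)
{
	return (map[nr / BITS_PER_LONG] >> (nr % BITS_PER_LONG)) & 1;
}
```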
+1 -1
drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov.h
··· 201 201 struct qlcnic_info *, u16); 202 202 int qlcnic_sriov_cfg_vf_guest_vlan(struct qlcnic_adapter *, u16, u8); 203 203 void qlcnic_sriov_free_vlans(struct qlcnic_adapter *); 204 - void qlcnic_sriov_alloc_vlans(struct qlcnic_adapter *); 204 + int qlcnic_sriov_alloc_vlans(struct qlcnic_adapter *); 205 205 bool qlcnic_sriov_check_any_vlan(struct qlcnic_vf_info *); 206 206 void qlcnic_sriov_del_vlan_id(struct qlcnic_sriov *, 207 207 struct qlcnic_vf_info *, u16);
+9 -3
drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov_common.c
··· 432 432 struct qlcnic_cmd_args *cmd) 433 433 { 434 434 struct qlcnic_sriov *sriov = adapter->ahw->sriov; 435 - int i, num_vlans; 435 + int i, num_vlans, ret; 436 436 u16 *vlans; 437 437 438 438 if (sriov->allowed_vlans) ··· 443 443 dev_info(&adapter->pdev->dev, "Number of allowed Guest VLANs = %d\n", 444 444 sriov->num_allowed_vlans); 445 445 446 - qlcnic_sriov_alloc_vlans(adapter); 446 + ret = qlcnic_sriov_alloc_vlans(adapter); 447 + if (ret) 448 + return ret; 447 449 448 450 if (!sriov->any_vlan) 449 451 return 0; ··· 2156 2154 return err; 2157 2155 } 2158 2156 2159 - void qlcnic_sriov_alloc_vlans(struct qlcnic_adapter *adapter) 2157 + int qlcnic_sriov_alloc_vlans(struct qlcnic_adapter *adapter) 2160 2158 { 2161 2159 struct qlcnic_sriov *sriov = adapter->ahw->sriov; 2162 2160 struct qlcnic_vf_info *vf; ··· 2166 2164 vf = &sriov->vf_info[i]; 2167 2165 vf->sriov_vlans = kcalloc(sriov->num_allowed_vlans, 2168 2166 sizeof(*vf->sriov_vlans), GFP_KERNEL); 2167 + if (!vf->sriov_vlans) 2168 + return -ENOMEM; 2169 2169 } 2170 + 2171 + return 0; 2170 2172 } 2171 2173 2172 2174 void qlcnic_sriov_free_vlans(struct qlcnic_adapter *adapter)
+3 -1
drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov_pf.c
··· 597 597 if (err) 598 598 goto del_flr_queue; 599 599 600 - qlcnic_sriov_alloc_vlans(adapter); 600 + err = qlcnic_sriov_alloc_vlans(adapter); 601 + if (err) 602 + goto del_flr_queue; 601 603 602 604 return err; 603 605
+3
drivers/net/ethernet/sfc/ef100_nic.c
··· 609 609 ef100_common_stat_mask(mask); 610 610 ef100_ethtool_stat_mask(mask); 611 611 612 + if (!mc_stats) 613 + return 0; 614 + 612 615 efx_nic_copy_stats(efx, mc_stats); 613 616 efx_nic_update_stats(ef100_stat_desc, EF100_STAT_COUNT, mask, 614 617 stats, mc_stats, false);
+4 -1
drivers/net/ethernet/sfc/falcon/rx.c
··· 728 728 efx->rx_bufs_per_page); 729 729 rx_queue->page_ring = kcalloc(page_ring_size, 730 730 sizeof(*rx_queue->page_ring), GFP_KERNEL); 731 - rx_queue->page_ptr_mask = page_ring_size - 1; 731 + if (!rx_queue->page_ring) 732 + rx_queue->page_ptr_mask = 0; 733 + else 734 + rx_queue->page_ptr_mask = page_ring_size - 1; 732 735 } 733 736 734 737 void ef4_init_rx_queue(struct ef4_rx_queue *rx_queue)
+4 -1
drivers/net/ethernet/sfc/rx_common.c
··· 150 150 efx->rx_bufs_per_page); 151 151 rx_queue->page_ring = kcalloc(page_ring_size, 152 152 sizeof(*rx_queue->page_ring), GFP_KERNEL); 153 - rx_queue->page_ptr_mask = page_ring_size - 1; 153 + if (!rx_queue->page_ring) 154 + rx_queue->page_ptr_mask = 0; 155 + else 156 + rx_queue->page_ptr_mask = page_ring_size - 1; 154 157 } 155 158 156 159 static void efx_fini_rx_recycle_ring(struct efx_rx_queue *rx_queue)
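Both sfc hunks (falcon/rx.c and rx_common.c) apply the same fix: when the recycle-ring `kcalloc()` fails, the mask must be forced to zero instead of keeping `page_ring_size - 1`, which would later index a NULL `page_ring`. A sketch of just the mask selection; how downstream code treats the zero mask is driver-specific and not shown here:

```c
#include <assert.h>

/* ring_size is a power of two; a NULL ring gets a zero mask so no slot
 * computation ever produces a nonzero index into missing storage. */
static unsigned int page_ptr_mask(const void *page_ring,
				 unsigned int ring_size)
{
	if (!page_ring)
		return 0;
	return ring_size - 1;
}
```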
+5
drivers/net/ethernet/smsc/smc911x.c
··· 2072 2072 2073 2073 ndev->dma = (unsigned char)-1; 2074 2074 ndev->irq = platform_get_irq(pdev, 0); 2075 + if (ndev->irq < 0) { 2076 + ret = ndev->irq; 2077 + goto release_both; 2078 + } 2079 + 2075 2080 lp = netdev_priv(ndev); 2076 2081 lp->netdev = ndev; 2077 2082 #ifdef SMC_DYNAMIC_BUS_CONFIG
+3 -1
drivers/net/ethernet/stmicro/stmmac/dwmac-rk.c
··· 33 33 void (*set_rgmii_speed)(struct rk_priv_data *bsp_priv, int speed); 34 34 void (*set_rmii_speed)(struct rk_priv_data *bsp_priv, int speed); 35 35 void (*integrated_phy_powerup)(struct rk_priv_data *bsp_priv); 36 + bool regs_valid; 36 37 u32 regs[]; 37 38 }; 38 39 ··· 1093 1092 .set_to_rmii = rk3568_set_to_rmii, 1094 1093 .set_rgmii_speed = rk3568_set_gmac_speed, 1095 1094 .set_rmii_speed = rk3568_set_gmac_speed, 1095 + .regs_valid = true, 1096 1096 .regs = { 1097 1097 0xfe2a0000, /* gmac0 */ 1098 1098 0xfe010000, /* gmac1 */ ··· 1385 1383 * to be distinguished. 1386 1384 */ 1387 1385 res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 1388 - if (res) { 1386 + if (res && ops->regs_valid) { 1389 1387 int i = 0; 1390 1388 1391 1389 while (ops->regs[i]) {
+1 -1
drivers/net/ethernet/stmicro/stmmac/dwmac-visconti.c
··· 26 26 #define ETHER_CLK_SEL_FREQ_SEL_125M (BIT(9) | BIT(8)) 27 27 #define ETHER_CLK_SEL_FREQ_SEL_50M BIT(9) 28 28 #define ETHER_CLK_SEL_FREQ_SEL_25M BIT(8) 29 - #define ETHER_CLK_SEL_FREQ_SEL_2P5M BIT(0) 29 + #define ETHER_CLK_SEL_FREQ_SEL_2P5M 0 30 30 #define ETHER_CLK_SEL_TX_CLK_EXT_SEL_IN BIT(0) 31 31 #define ETHER_CLK_SEL_TX_CLK_EXT_SEL_TXC BIT(10) 32 32 #define ETHER_CLK_SEL_TX_CLK_EXT_SEL_DIV BIT(11)
+17
drivers/net/ethernet/stmicro/stmmac/stmmac.h
··· 172 172 int is_l4; 173 173 }; 174 174 175 + /* Rx Frame Steering */ 176 + enum stmmac_rfs_type { 177 + STMMAC_RFS_T_VLAN, 178 + STMMAC_RFS_T_MAX, 179 + }; 180 + 181 + struct stmmac_rfs_entry { 182 + unsigned long cookie; 183 + int in_use; 184 + int type; 185 + int tc; 186 + }; 187 + 175 188 struct stmmac_priv { 176 189 /* Frequently used values are kept adjacent for cache effect */ 177 190 u32 tx_coal_frames[MTL_MAX_TX_QUEUES]; ··· 302 289 struct stmmac_tc_entry *tc_entries; 303 290 unsigned int flow_entries_max; 304 291 struct stmmac_flow_entry *flow_entries; 292 + unsigned int rfs_entries_max[STMMAC_RFS_T_MAX]; 293 + unsigned int rfs_entries_cnt[STMMAC_RFS_T_MAX]; 294 + unsigned int rfs_entries_total; 295 + struct stmmac_rfs_entry *rfs_entries; 305 296 306 297 /* Pulse Per Second output */ 307 298 struct stmmac_pps_cfg pps[STMMAC_PPS_MAX];
+12 -4
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
··· 1461 1461 { 1462 1462 struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue]; 1463 1463 struct stmmac_rx_buffer *buf = &rx_q->buf_pool[i]; 1464 + gfp_t gfp = (GFP_ATOMIC | __GFP_NOWARN); 1465 + 1466 + if (priv->dma_cap.addr64 <= 32) 1467 + gfp |= GFP_DMA32; 1464 1468 1465 1469 if (!buf->page) { 1466 - buf->page = page_pool_dev_alloc_pages(rx_q->page_pool); 1470 + buf->page = page_pool_alloc_pages(rx_q->page_pool, gfp); 1467 1471 if (!buf->page) 1468 1472 return -ENOMEM; 1469 1473 buf->page_offset = stmmac_rx_offset(priv); 1470 1474 } 1471 1475 1472 1476 if (priv->sph && !buf->sec_page) { 1473 - buf->sec_page = page_pool_dev_alloc_pages(rx_q->page_pool); 1477 + buf->sec_page = page_pool_alloc_pages(rx_q->page_pool, gfp); 1474 1478 if (!buf->sec_page) 1475 1479 return -ENOMEM; 1476 1480 ··· 4486 4482 struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue]; 4487 4483 int dirty = stmmac_rx_dirty(priv, queue); 4488 4484 unsigned int entry = rx_q->dirty_rx; 4485 + gfp_t gfp = (GFP_ATOMIC | __GFP_NOWARN); 4486 + 4487 + if (priv->dma_cap.addr64 <= 32) 4488 + gfp |= GFP_DMA32; 4489 4489 4490 4490 while (dirty-- > 0) { 4491 4491 struct stmmac_rx_buffer *buf = &rx_q->buf_pool[entry]; ··· 4502 4494 p = rx_q->dma_rx + entry; 4503 4495 4504 4496 if (!buf->page) { 4505 - buf->page = page_pool_dev_alloc_pages(rx_q->page_pool); 4497 + buf->page = page_pool_alloc_pages(rx_q->page_pool, gfp); 4506 4498 if (!buf->page) 4507 4499 break; 4508 4500 } 4509 4501 4510 4502 if (priv->sph && !buf->sec_page) { 4511 - buf->sec_page = page_pool_dev_alloc_pages(rx_q->page_pool); 4503 + buf->sec_page = page_pool_alloc_pages(rx_q->page_pool, gfp); 4512 4504 if (!buf->sec_page) 4513 4505 break; 4514 4506
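The stmmac_main.c hunks above switch the RX buffer refill to `page_pool_alloc_pages()` with an explicit gfp mask, adding `GFP_DMA32` when the MAC can only address 32 bits of DMA. A sketch of the flag selection; the numeric flag values below are illustrative, not the kernel's real `gfp_t` constants:

```c
#include <assert.h>

#define GFP_ATOMIC   0x01u	/* illustrative values only */
#define __GFP_NOWARN 0x02u
#define GFP_DMA32    0x04u

/* A device whose DMA addressing tops out at 32 bits must get RX pages
 * from below 4 GiB; wider devices can take pages from any zone. */
static unsigned int rx_alloc_gfp(int dma_addr_bits)
{
	unsigned int gfp = GFP_ATOMIC | __GFP_NOWARN;

	if (dma_addr_bits <= 32)
		gfp |= GFP_DMA32;
	return gfp;
}
```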
+1 -1
drivers/net/ethernet/stmicro/stmmac/stmmac_ptp.c
··· 102 102 time.tv_nsec = priv->plat->est->btr_reserve[0]; 103 103 time.tv_sec = priv->plat->est->btr_reserve[1]; 104 104 basetime = timespec64_to_ktime(time); 105 - cycle_time = priv->plat->est->ctr[1] * NSEC_PER_SEC + 105 + cycle_time = (u64)priv->plat->est->ctr[1] * NSEC_PER_SEC + 106 106 priv->plat->est->ctr[0]; 107 107 time = stmmac_calc_tas_basetime(basetime, 108 108 current_time_ns,
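The stmmac_ptp.c cast above fixes a classic width bug: the cycle-time registers are 32-bit, and without widening one operand the product `ctr[1] * NSEC_PER_SEC` is computed in 32 bits, wrapping for cycle times of five seconds or more. A demonstration:

```c
#include <assert.h>
#include <stdint.h>

#define NSEC_PER_SEC 1000000000u

/* Widen before multiplying so the full 64-bit product survives. */
static uint64_t cycle_time_ns(uint32_t ctr_sec, uint32_t ctr_subsec_ns)
{
	return (uint64_t)ctr_sec * NSEC_PER_SEC + ctr_subsec_ns;
}
```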
+73 -13
drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c
··· 232 232 } 233 233 } 234 234 235 + static int tc_rfs_init(struct stmmac_priv *priv) 236 + { 237 + int i; 238 + 239 + priv->rfs_entries_max[STMMAC_RFS_T_VLAN] = 8; 240 + 241 + for (i = 0; i < STMMAC_RFS_T_MAX; i++) 242 + priv->rfs_entries_total += priv->rfs_entries_max[i]; 243 + 244 + priv->rfs_entries = devm_kcalloc(priv->device, 245 + priv->rfs_entries_total, 246 + sizeof(*priv->rfs_entries), 247 + GFP_KERNEL); 248 + if (!priv->rfs_entries) 249 + return -ENOMEM; 250 + 251 + dev_info(priv->device, "Enabled RFS Flow TC (entries=%d)\n", 252 + priv->rfs_entries_total); 253 + 254 + return 0; 255 + } 256 + 235 257 static int tc_init(struct stmmac_priv *priv) 236 258 { 237 259 struct dma_features *dma_cap = &priv->dma_cap; 238 260 unsigned int count; 239 - int i; 261 + int ret, i; 240 262 241 263 if (dma_cap->l3l4fnum) { 242 264 priv->flow_entries_max = dma_cap->l3l4fnum; ··· 272 250 for (i = 0; i < priv->flow_entries_max; i++) 273 251 priv->flow_entries[i].idx = i; 274 252 275 - dev_info(priv->device, "Enabled Flow TC (entries=%d)\n", 253 + dev_info(priv->device, "Enabled L3L4 Flow TC (entries=%d)\n", 276 254 priv->flow_entries_max); 277 255 } 256 + 257 + ret = tc_rfs_init(priv); 258 + if (ret) 259 + return -ENOMEM; 278 260 279 261 if (!priv->plat->fpe_cfg) { 280 262 priv->plat->fpe_cfg = devm_kzalloc(priv->device, ··· 633 607 return ret; 634 608 } 635 609 610 + static struct stmmac_rfs_entry *tc_find_rfs(struct stmmac_priv *priv, 611 + struct flow_cls_offload *cls, 612 + bool get_free) 613 + { 614 + int i; 615 + 616 + for (i = 0; i < priv->rfs_entries_total; i++) { 617 + struct stmmac_rfs_entry *entry = &priv->rfs_entries[i]; 618 + 619 + if (entry->cookie == cls->cookie) 620 + return entry; 621 + if (get_free && entry->in_use == false) 622 + return entry; 623 + } 624 + 625 + return NULL; 626 + } 627 + 636 628 #define VLAN_PRIO_FULL_MASK (0x07) 637 629 638 630 static int tc_add_vlan_flow(struct stmmac_priv *priv, 639 631 struct flow_cls_offload *cls) 640 632 { 633 + 
struct stmmac_rfs_entry *entry = tc_find_rfs(priv, cls, false); 641 634 struct flow_rule *rule = flow_cls_offload_flow_rule(cls); 642 635 struct flow_dissector *dissector = rule->match.dissector; 643 636 int tc = tc_classid_to_hwtc(priv->dev, cls->classid); 644 637 struct flow_match_vlan match; 638 + 639 + if (!entry) { 640 + entry = tc_find_rfs(priv, cls, true); 641 + if (!entry) 642 + return -ENOENT; 643 + } 644 + 645 + if (priv->rfs_entries_cnt[STMMAC_RFS_T_VLAN] >= 646 + priv->rfs_entries_max[STMMAC_RFS_T_VLAN]) 647 + return -ENOENT; 645 648 646 649 /* Nothing to do here */ 647 650 if (!dissector_uses_key(dissector, FLOW_DISSECTOR_KEY_VLAN)) ··· 693 638 694 639 prio = BIT(match.key->vlan_priority); 695 640 stmmac_rx_queue_prio(priv, priv->hw, prio, tc); 641 + 642 + entry->in_use = true; 643 + entry->cookie = cls->cookie; 644 + entry->tc = tc; 645 + entry->type = STMMAC_RFS_T_VLAN; 646 + priv->rfs_entries_cnt[STMMAC_RFS_T_VLAN]++; 696 647 } 697 648 698 649 return 0; ··· 707 646 static int tc_del_vlan_flow(struct stmmac_priv *priv, 708 647 struct flow_cls_offload *cls) 709 648 { 710 - struct flow_rule *rule = flow_cls_offload_flow_rule(cls); 711 - struct flow_dissector *dissector = rule->match.dissector; 712 - int tc = tc_classid_to_hwtc(priv->dev, cls->classid); 649 + struct stmmac_rfs_entry *entry = tc_find_rfs(priv, cls, false); 713 650 714 - /* Nothing to do here */ 715 - if (!dissector_uses_key(dissector, FLOW_DISSECTOR_KEY_VLAN)) 716 - return -EINVAL; 651 + if (!entry || !entry->in_use || entry->type != STMMAC_RFS_T_VLAN) 652 + return -ENOENT; 717 653 718 - if (tc < 0) { 719 - netdev_err(priv->dev, "Invalid traffic class\n"); 720 - return -EINVAL; 721 - } 654 + stmmac_rx_queue_prio(priv, priv->hw, 0, entry->tc); 722 655 723 - stmmac_rx_queue_prio(priv, priv->hw, 0, tc); 656 + entry->in_use = false; 657 + entry->cookie = 0; 658 + entry->tc = 0; 659 + entry->type = 0; 660 + 661 + priv->rfs_entries_cnt[STMMAC_RFS_T_VLAN]--; 724 662 725 663 return 0; 726 664 }
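The stmmac_tc.c rework above replaces re-parsing the flow rule on deletion with a small Rx Frame Steering table keyed by the classifier cookie. A userspace sketch of the `tc_find_rfs()` lookup over a flat array, mirroring its two modes (match by cookie, or hand back the first free slot for a new rule):

```c
#include <assert.h>
#include <stddef.h>

struct rfs_entry {
	unsigned long cookie;
	int in_use;
	int type;
	int tc;
};

/* As in the patch, a cookie match wins outright; otherwise, with
 * get_free set, the first unused slot is returned for a new rule.
 * (Real cookies are nonzero, so free slots never alias a lookup.) */
static struct rfs_entry *find_rfs(struct rfs_entry *tab, int n,
				  unsigned long cookie, int get_free)
{
	for (int i = 0; i < n; i++) {
		if (tab[i].cookie == cookie)
			return &tab[i];
		if (get_free && !tab[i].in_use)
			return &tab[i];
	}
	return NULL;
}
```

Deletion then needs only the cookie: the saved `tc` in the matched entry tells the driver which queue priority mapping to undo, which is why `tc_del_vlan_flow()` above no longer touches the flow dissector at all.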
+20 -9
drivers/net/ethernet/ti/am65-cpsw-nuss.c
··· 1844 1844 if (ret < 0) { 1845 1845 dev_err(dev, "%pOF error reading port_id %d\n", 1846 1846 port_np, ret); 1847 - return ret; 1847 + goto of_node_put; 1848 1848 } 1849 1849 1850 1850 if (!port_id || port_id > common->port_num) { 1851 1851 dev_err(dev, "%pOF has invalid port_id %u %s\n", 1852 1852 port_np, port_id, port_np->name); 1853 - return -EINVAL; 1853 + ret = -EINVAL; 1854 + goto of_node_put; 1854 1855 } 1855 1856 1856 1857 port = am65_common_get_port(common, port_id); ··· 1867 1866 (AM65_CPSW_NU_FRAM_PORT_OFFSET * (port_id - 1)); 1868 1867 1869 1868 port->slave.mac_sl = cpsw_sl_get("am65", dev, port->port_base); 1870 - if (IS_ERR(port->slave.mac_sl)) 1871 - return PTR_ERR(port->slave.mac_sl); 1869 + if (IS_ERR(port->slave.mac_sl)) { 1870 + ret = PTR_ERR(port->slave.mac_sl); 1871 + goto of_node_put; 1872 + } 1872 1873 1873 1874 port->disabled = !of_device_is_available(port_np); 1874 1875 if (port->disabled) { ··· 1883 1880 ret = PTR_ERR(port->slave.ifphy); 1884 1881 dev_err(dev, "%pOF error retrieving port phy: %d\n", 1885 1882 port_np, ret); 1886 - return ret; 1883 + goto of_node_put; 1887 1884 } 1888 1885 1889 1886 port->slave.mac_only = ··· 1892 1889 /* get phy/link info */ 1893 1890 if (of_phy_is_fixed_link(port_np)) { 1894 1891 ret = of_phy_register_fixed_link(port_np); 1895 - if (ret) 1896 - return dev_err_probe(dev, ret, 1892 + if (ret) { 1893 + ret = dev_err_probe(dev, ret, 1897 1894 "failed to register fixed-link phy %pOF\n", 1898 1895 port_np); 1896 + goto of_node_put; 1897 + } 1899 1898 port->slave.phy_node = of_node_get(port_np); 1900 1899 } else { 1901 1900 port->slave.phy_node = ··· 1907 1902 if (!port->slave.phy_node) { 1908 1903 dev_err(dev, 1909 1904 "slave[%d] no phy found\n", port_id); 1910 - return -ENODEV; 1905 + ret = -ENODEV; 1906 + goto of_node_put; 1911 1907 } 1912 1908 1913 1909 ret = of_get_phy_mode(port_np, &port->slave.phy_if); 1914 1910 if (ret) { 1915 1911 dev_err(dev, "%pOF read phy-mode err %d\n", 1916 1912 port_np, ret); 
1917 - return ret; 1913 + goto of_node_put; 1918 1914 } 1919 1915 1920 1916 ret = of_get_mac_address(port_np, port->slave.mac_addr); ··· 1938 1932 } 1939 1933 1940 1934 return 0; 1935 + 1936 + of_node_put: 1937 + of_node_put(port_np); 1938 + of_node_put(node); 1939 + return ret; 1941 1940 } 1942 1941 1943 1942 static void am65_cpsw_pcpu_stats_free(void *data)
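The am65-cpsw hunk above converts every early `return` inside the port-walking loop into a `goto of_node_put` so the device-tree node references taken during iteration are dropped on all error paths. A minimal userspace sketch of the same single-exit cleanup pattern (`node_ref`, `node_get()` and `node_put()` are hypothetical stand-ins for the OF node refcounting helpers):

```c
#include <errno.h>

/* hypothetical refcounted handle standing in for a device-tree node */
struct node_ref { int refs; };

static void node_get(struct node_ref *n) { n->refs++; }
static void node_put(struct node_ref *n) { n->refs--; }

/* Single-exit error handling: every failure path jumps to one label
 * that drops the reference, instead of returning early and leaking it. */
static int probe_port(struct node_ref *node, int port_id, int max_ports)
{
	int ret = 0;

	node_get(node);

	if (port_id <= 0 || port_id > max_ports) {
		ret = -EINVAL;
		goto out_put;
	}

	/* ... further setup steps would also jump to out_put on failure ... */

out_put:
	node_put(node);
	return ret;
}
```

Whatever path leaves the function, the reference count returns to its starting value, which is exactly the invariant the hunk restores for `port_np` and `node`.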
+5
drivers/net/fjes/fjes_main.c
··· 1262 1262 hw->hw_res.start = res->start; 1263 1263 hw->hw_res.size = resource_size(res); 1264 1264 hw->hw_res.irq = platform_get_irq(plat_dev, 0); 1265 + if (hw->hw_res.irq < 0) { 1266 + err = hw->hw_res.irq; 1267 + goto err_free_control_wq; 1268 + } 1269 + 1265 1270 err = fjes_hw_init(&adapter->hw); 1266 1271 if (err) 1267 1272 goto err_free_control_wq;
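The fjes fix above checks the `platform_get_irq()` result for a negative errno before treating it as a usable IRQ number. A small sketch of that convention (`fake_get_irq()` is a hypothetical stand-in; real `platform_get_irq()` likewise returns the IRQ on success or a negative errno on failure):

```c
#include <errno.h>

/* hypothetical stand-in for platform_get_irq(): IRQ number on success,
 * negative errno on failure */
static int fake_get_irq(int available)
{
	return available ? 42 : -ENXIO;
}

/* Propagate the error instead of storing a negative "IRQ number"
 * and handing it to request_irq()-style code later. */
static int setup_device(int available, int *irq_out)
{
	int irq = fake_get_irq(available);

	if (irq < 0)
		return irq;	/* bail out before using the bogus value */

	*irq_out = irq;
	return 0;
}
```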
+2 -2
drivers/net/hamradio/mkiss.c
··· 794 794 */ 795 795 netif_stop_queue(ax->dev); 796 796 797 - ax->tty = NULL; 798 - 799 797 unregister_netdev(ax->dev); 800 798 801 799 /* Free all AX25 frame buffers after unreg. */ 802 800 kfree(ax->rbuff); 803 801 kfree(ax->xbuff); 802 + 803 + ax->tty = NULL; 804 804 805 805 free_netdev(ax->dev); 806 806 }
+1
drivers/net/netdevsim/bpf.c
··· 514 514 goto err_free; 515 515 key = nmap->entry[i].key; 516 516 *key = i; 517 + memset(nmap->entry[i].value, 0, offmap->map.value_size); 517 518 } 518 519 } 519 520
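The netdevsim bpf hunk adds a `memset()` so each map entry's value buffer is zeroed rather than exposing uninitialized heap memory to readers. A sketch of the idea (function and names are illustrative, not the driver's API):

```c
#include <stdlib.h>
#include <string.h>

/* Allocate an array of fixed-size value buffers, zeroing each one so
 * no stale heap contents leak to whoever reads the values later. */
static unsigned char **alloc_zeroed_values(size_t n, size_t value_size)
{
	unsigned char **vals = malloc(n * sizeof(*vals));
	size_t i;

	if (!vals)
		return NULL;

	for (i = 0; i < n; i++) {
		vals[i] = malloc(value_size);
		if (!vals[i])
			return NULL;	/* sketch only: real code would unwind */
		memset(vals[i], 0, value_size);
	}
	return vals;
}
```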
+4 -1
drivers/net/netdevsim/ethtool.c
··· 77 77 { 78 78 struct netdevsim *ns = netdev_priv(dev); 79 79 80 - memcpy(&ns->ethtool.ring, ring, sizeof(ns->ethtool.ring)); 80 + ns->ethtool.ring.rx_pending = ring->rx_pending; 81 + ns->ethtool.ring.rx_jumbo_pending = ring->rx_jumbo_pending; 82 + ns->ethtool.ring.rx_mini_pending = ring->rx_mini_pending; 83 + ns->ethtool.ring.tx_pending = ring->tx_pending; 81 84 return 0; 82 85 } 83 86
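The ethtool hunk replaces a whole-struct `memcpy()` with per-field copies, so only the user-settable ring counts are taken from the request and driver-owned fields are left alone. A sketch with a hypothetical cut-down ring-parameter struct:

```c
/* illustrative subset of an ethtool-style ring parameter struct */
struct ringparam {
	unsigned int rx_pending;
	unsigned int tx_pending;
	unsigned int rx_max;	/* device capability: driver-owned */
};

/* Copy only the user-settable fields; a blanket memcpy() would also
 * clobber the driver-owned rx_max capability field. */
static void set_ringparam(struct ringparam *dst, const struct ringparam *src)
{
	dst->rx_pending = src->rx_pending;
	dst->tx_pending = src->tx_pending;
	/* dst->rx_max deliberately untouched */
}
```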
+2 -2
drivers/net/phy/fixed_phy.c
··· 239 239 /* Check if we have a GPIO associated with this fixed phy */ 240 240 if (!gpiod) { 241 241 gpiod = fixed_phy_get_gpiod(np); 242 - if (IS_ERR(gpiod)) 243 - return ERR_CAST(gpiod); 242 + if (!gpiod) 243 + return ERR_PTR(-EINVAL); 244 244 } 245 245 246 246 /* Get the next available PHY address, up to PHY_MAX_ADDR */
+3
drivers/net/phy/mdio_bus.c
··· 460 460 461 461 if (addr == mdiodev->addr) { 462 462 device_set_node(dev, of_fwnode_handle(child)); 463 + /* The refcount on "child" is passed to the mdio 464 + * device. Do _not_ use of_node_put(child) here. 465 + */ 463 466 return; 464 467 } 465 468 }
+59 -56
drivers/net/tun.c
··· 209 209 struct tun_prog __rcu *steering_prog; 210 210 struct tun_prog __rcu *filter_prog; 211 211 struct ethtool_link_ksettings link_ksettings; 212 + /* init args */ 213 + struct file *file; 214 + struct ifreq *ifr; 212 215 }; 213 216 214 217 struct veth { 215 218 __be16 h_vlan_proto; 216 219 __be16 h_vlan_TCI; 217 220 }; 221 + 222 + static void tun_flow_init(struct tun_struct *tun); 223 + static void tun_flow_uninit(struct tun_struct *tun); 218 224 219 225 static int tun_napi_receive(struct napi_struct *napi, int budget) 220 226 { ··· 959 953 960 954 static const struct ethtool_ops tun_ethtool_ops; 961 955 956 + static int tun_net_init(struct net_device *dev) 957 + { 958 + struct tun_struct *tun = netdev_priv(dev); 959 + struct ifreq *ifr = tun->ifr; 960 + int err; 961 + 962 + dev->tstats = netdev_alloc_pcpu_stats(struct pcpu_sw_netstats); 963 + if (!dev->tstats) 964 + return -ENOMEM; 965 + 966 + spin_lock_init(&tun->lock); 967 + 968 + err = security_tun_dev_alloc_security(&tun->security); 969 + if (err < 0) { 970 + free_percpu(dev->tstats); 971 + return err; 972 + } 973 + 974 + tun_flow_init(tun); 975 + 976 + dev->hw_features = NETIF_F_SG | NETIF_F_FRAGLIST | 977 + TUN_USER_FEATURES | NETIF_F_HW_VLAN_CTAG_TX | 978 + NETIF_F_HW_VLAN_STAG_TX; 979 + dev->features = dev->hw_features | NETIF_F_LLTX; 980 + dev->vlan_features = dev->features & 981 + ~(NETIF_F_HW_VLAN_CTAG_TX | 982 + NETIF_F_HW_VLAN_STAG_TX); 983 + 984 + tun->flags = (tun->flags & ~TUN_FEATURES) | 985 + (ifr->ifr_flags & TUN_FEATURES); 986 + 987 + INIT_LIST_HEAD(&tun->disabled); 988 + err = tun_attach(tun, tun->file, false, ifr->ifr_flags & IFF_NAPI, 989 + ifr->ifr_flags & IFF_NAPI_FRAGS, false); 990 + if (err < 0) { 991 + tun_flow_uninit(tun); 992 + security_tun_dev_free_security(tun->security); 993 + free_percpu(dev->tstats); 994 + return err; 995 + } 996 + return 0; 997 + } 998 + 962 999 /* Net device detach from fd. 
*/ 963 1000 static void tun_net_uninit(struct net_device *dev) 964 1001 { ··· 1218 1169 } 1219 1170 1220 1171 static const struct net_device_ops tun_netdev_ops = { 1172 + .ndo_init = tun_net_init, 1221 1173 .ndo_uninit = tun_net_uninit, 1222 1174 .ndo_open = tun_net_open, 1223 1175 .ndo_stop = tun_net_close, ··· 1302 1252 } 1303 1253 1304 1254 static const struct net_device_ops tap_netdev_ops = { 1255 + .ndo_init = tun_net_init, 1305 1256 .ndo_uninit = tun_net_uninit, 1306 1257 .ndo_open = tun_net_open, 1307 1258 .ndo_stop = tun_net_close, ··· 1343 1292 #define MAX_MTU 65535 1344 1293 1345 1294 /* Initialize net device. */ 1346 - static void tun_net_init(struct net_device *dev) 1295 + static void tun_net_initialize(struct net_device *dev) 1347 1296 { 1348 1297 struct tun_struct *tun = netdev_priv(dev); 1349 1298 ··· 2257 2206 BUG_ON(!(list_empty(&tun->disabled))); 2258 2207 2259 2208 free_percpu(dev->tstats); 2260 - /* We clear tstats so that tun_set_iff() can tell if 2261 - * tun_free_netdev() has been called from register_netdevice(). 
2262 - */ 2263 - dev->tstats = NULL; 2264 - 2265 2209 tun_flow_uninit(tun); 2266 2210 security_tun_dev_free_security(tun->security); 2267 2211 __tun_set_ebpf(tun, &tun->steering_prog, NULL); ··· 2762 2716 tun->rx_batched = 0; 2763 2717 RCU_INIT_POINTER(tun->steering_prog, NULL); 2764 2718 2765 - dev->tstats = netdev_alloc_pcpu_stats(struct pcpu_sw_netstats); 2766 - if (!dev->tstats) { 2767 - err = -ENOMEM; 2768 - goto err_free_dev; 2769 - } 2719 + tun->ifr = ifr; 2720 + tun->file = file; 2770 2721 2771 - spin_lock_init(&tun->lock); 2772 - 2773 - err = security_tun_dev_alloc_security(&tun->security); 2774 - if (err < 0) 2775 - goto err_free_stat; 2776 - 2777 - tun_net_init(dev); 2778 - tun_flow_init(tun); 2779 - 2780 - dev->hw_features = NETIF_F_SG | NETIF_F_FRAGLIST | 2781 - TUN_USER_FEATURES | NETIF_F_HW_VLAN_CTAG_TX | 2782 - NETIF_F_HW_VLAN_STAG_TX; 2783 - dev->features = dev->hw_features | NETIF_F_LLTX; 2784 - dev->vlan_features = dev->features & 2785 - ~(NETIF_F_HW_VLAN_CTAG_TX | 2786 - NETIF_F_HW_VLAN_STAG_TX); 2787 - 2788 - tun->flags = (tun->flags & ~TUN_FEATURES) | 2789 - (ifr->ifr_flags & TUN_FEATURES); 2790 - 2791 - INIT_LIST_HEAD(&tun->disabled); 2792 - err = tun_attach(tun, file, false, ifr->ifr_flags & IFF_NAPI, 2793 - ifr->ifr_flags & IFF_NAPI_FRAGS, false); 2794 - if (err < 0) 2795 - goto err_free_flow; 2722 + tun_net_initialize(dev); 2796 2723 2797 2724 err = register_netdevice(tun->dev); 2798 - if (err < 0) 2799 - goto err_detach; 2725 + if (err < 0) { 2726 + free_netdev(dev); 2727 + return err; 2728 + } 2800 2729 /* free_netdev() won't check refcnt, to avoid race 2801 2730 * with dev_put() we need publish tun after registration. 2802 2731 */ ··· 2788 2767 2789 2768 strcpy(ifr->ifr_name, tun->dev->name); 2790 2769 return 0; 2791 - 2792 - err_detach: 2793 - tun_detach_all(dev); 2794 - /* We are here because register_netdevice() has failed. 
2795 - * If register_netdevice() already called tun_free_netdev() 2796 - * while dealing with the error, dev->stats has been cleared. 2797 - */ 2798 - if (!dev->tstats) 2799 - goto err_free_dev; 2800 - 2801 - err_free_flow: 2802 - tun_flow_uninit(tun); 2803 - security_tun_dev_free_security(tun->security); 2804 - err_free_stat: 2805 - free_percpu(dev->tstats); 2806 - err_free_dev: 2807 - free_netdev(dev); 2808 - return err; 2809 2770 } 2810 2771 2811 2772 static void tun_get_iff(struct tun_struct *tun, struct ifreq *ifr)
+5 -3
drivers/net/usb/asix_common.c
··· 9 9 10 10 #include "asix.h" 11 11 12 + #define AX_HOST_EN_RETRIES 30 13 + 12 14 int asix_read_cmd(struct usbnet *dev, u8 cmd, u16 value, u16 index, 13 15 u16 size, void *data, int in_pm) 14 16 { ··· 70 68 int i, ret; 71 69 u8 smsr; 72 70 73 - for (i = 0; i < 30; ++i) { 71 + for (i = 0; i < AX_HOST_EN_RETRIES; ++i) { 74 72 ret = asix_set_sw_mii(dev, in_pm); 75 73 if (ret == -ENODEV || ret == -ETIMEDOUT) 76 74 break; ··· 79 77 0, 0, 1, &smsr, in_pm); 80 78 if (ret == -ENODEV) 81 79 break; 82 - else if (ret < 0) 80 + else if (ret < sizeof(smsr)) 83 81 continue; 84 82 else if (smsr & AX_HOST_EN) 85 83 break; 86 84 } 87 85 88 - return ret; 86 + return i >= AX_HOST_EN_RETRIES ? -ETIMEDOUT : ret; 89 87 } 90 88 91 89 static void reset_asix_rx_fixup_info(struct asix_rx_fixup_info *rx)
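The asix change names the retry bound and makes the poll loop report `-ETIMEDOUT` when every attempt is exhausted, instead of returning whatever the last partial read left in `ret`. A sketch of that loop shape (`poll_ready()` is a hypothetical stand-in for the SMSR read):

```c
#include <errno.h>

#define HOST_EN_RETRIES 30

/* hypothetical poll: becomes ready once ready_after attempts have passed */
static int poll_ready(int attempt, int ready_after)
{
	return attempt >= ready_after;
}

/* Distinguish "ran out of retries" (-ETIMEDOUT) from success, rather
 * than leaking the status of the final poll attempt to the caller. */
static int wait_host_enabled(int ready_after)
{
	int i;

	for (i = 0; i < HOST_EN_RETRIES; i++) {
		if (poll_ready(i, ready_after))
			break;
	}

	return i >= HOST_EN_RETRIES ? -ETIMEDOUT : 0;
}
```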
+6
drivers/net/usb/lan78xx.c
··· 76 76 #define LAN7801_USB_PRODUCT_ID (0x7801) 77 77 #define LAN78XX_EEPROM_MAGIC (0x78A5) 78 78 #define LAN78XX_OTP_MAGIC (0x78F3) 79 + #define AT29M2AF_USB_VENDOR_ID (0x07C9) 80 + #define AT29M2AF_USB_PRODUCT_ID (0x0012) 79 81 80 82 #define MII_READ 1 81 83 #define MII_WRITE 0 ··· 4735 4733 { 4736 4734 /* LAN7801 USB Gigabit Ethernet Device */ 4737 4735 USB_DEVICE(LAN78XX_USB_VENDOR_ID, LAN7801_USB_PRODUCT_ID), 4736 + }, 4737 + { 4738 + /* ATM2-AF USB Gigabit Ethernet Device */ 4739 + USB_DEVICE(AT29M2AF_USB_VENDOR_ID, AT29M2AF_USB_PRODUCT_ID), 4738 4740 }, 4739 4741 {}, 4740 4742 };
+2 -2
drivers/net/usb/pegasus.c
··· 493 493 goto goon; 494 494 495 495 rx_status = buf[count - 2]; 496 - if (rx_status & 0x1e) { 496 + if (rx_status & 0x1c) { 497 497 netif_dbg(pegasus, rx_err, net, 498 498 "RX packet error %x\n", rx_status); 499 499 net->stats.rx_errors++; 500 - if (rx_status & 0x06) /* long or runt */ 500 + if (rx_status & 0x04) /* runt */ 501 501 net->stats.rx_length_errors++; 502 502 if (rx_status & 0x08) 503 503 net->stats.rx_crc_errors++;
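The pegasus fix narrows the error mask from `0x1e` to `0x1c` and counts only bit 2 (runt) toward length errors. A sketch of the resulting status-byte classification (mask values taken from the hunk above; the stats struct is illustrative):

```c
struct rx_stats {
	unsigned long rx_errors;
	unsigned long rx_length_errors;
	unsigned long rx_crc_errors;
};

/* Classify a status byte the way the hunk does: bits in 0x1c signal an
 * error; bit 2 (0x04) is a runt frame, bit 3 (0x08) a CRC error.
 * Returns nonzero when the frame should be dropped. */
static int classify_rx_status(unsigned char status, struct rx_stats *s)
{
	if (!(status & 0x1c))
		return 0;

	s->rx_errors++;
	if (status & 0x04)	/* runt */
		s->rx_length_errors++;
	if (status & 0x08)	/* CRC */
		s->rx_crc_errors++;
	return 1;
}
```

Note bit 1 (`0x02`) no longer triggers the error path at all, which is the behavioral point of the two-line change.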
+1
drivers/net/usb/qmi_wwan.c
··· 1358 1358 {QMI_QUIRK_SET_DTR(0x1bc7, 0x1040, 2)}, /* Telit LE922A */ 1359 1359 {QMI_QUIRK_SET_DTR(0x1bc7, 0x1050, 2)}, /* Telit FN980 */ 1360 1360 {QMI_QUIRK_SET_DTR(0x1bc7, 0x1060, 2)}, /* Telit LN920 */ 1361 + {QMI_QUIRK_SET_DTR(0x1bc7, 0x1070, 2)}, /* Telit FN990 */ 1361 1362 {QMI_FIXED_INTF(0x1bc7, 0x1100, 3)}, /* Telit ME910 */ 1362 1363 {QMI_FIXED_INTF(0x1bc7, 0x1101, 3)}, /* Telit ME910 dual modem */ 1363 1364 {QMI_FIXED_INTF(0x1bc7, 0x1200, 5)}, /* Telit LE920 */
+39 -4
drivers/net/usb/r8152.c
··· 32 32 #define NETNEXT_VERSION "12" 33 33 34 34 /* Information for net */ 35 - #define NET_VERSION "11" 35 + #define NET_VERSION "12" 36 36 37 37 #define DRIVER_VERSION "v1." NETNEXT_VERSION "." NET_VERSION 38 38 #define DRIVER_AUTHOR "Realtek linux nic maintainers <nic_swsd@realtek.com>" ··· 4016 4016 ocp_write_word(tp, type, PLA_BP_BA, 0); 4017 4017 } 4018 4018 4019 + static inline void rtl_reset_ocp_base(struct r8152 *tp) 4020 + { 4021 + tp->ocp_base = -1; 4022 + } 4023 + 4019 4024 static int rtl_phy_patch_request(struct r8152 *tp, bool request, bool wait) 4020 4025 { 4021 4026 u16 data, check; ··· 4091 4086 rtl_patch_key_set(tp, key_addr, 0); 4092 4087 4093 4088 rtl_phy_patch_request(tp, false, wait); 4094 - 4095 - ocp_write_word(tp, MCU_TYPE_PLA, PLA_OCP_GPHY_BASE, tp->ocp_base); 4096 4089 4097 4090 return 0; 4098 4091 } ··· 4803 4800 u32 len; 4804 4801 u8 *data; 4805 4802 4803 + rtl_reset_ocp_base(tp); 4804 + 4806 4805 if (sram_read(tp, SRAM_GPHY_FW_VER) >= __le16_to_cpu(phy->version)) { 4807 4806 dev_dbg(&tp->intf->dev, "PHY firmware has been the newest\n"); 4808 4807 return; ··· 4850 4845 } 4851 4846 } 4852 4847 4853 - ocp_write_word(tp, MCU_TYPE_PLA, PLA_OCP_GPHY_BASE, tp->ocp_base); 4848 + rtl_reset_ocp_base(tp); 4849 + 4854 4850 rtl_phy_patch_request(tp, false, wait); 4855 4851 4856 4852 if (sram_read(tp, SRAM_GPHY_FW_VER) == __le16_to_cpu(phy->version)) ··· 4866 4860 4867 4861 ver_addr = __le16_to_cpu(phy_ver->ver.addr); 4868 4862 ver = __le16_to_cpu(phy_ver->ver.data); 4863 + 4864 + rtl_reset_ocp_base(tp); 4869 4865 4870 4866 if (sram_read(tp, ver_addr) >= ver) { 4871 4867 dev_dbg(&tp->intf->dev, "PHY firmware has been the newest\n"); ··· 4884 4876 static void rtl8152_fw_phy_fixup(struct r8152 *tp, struct fw_phy_fixup *fix) 4885 4877 { 4886 4878 u16 addr, data; 4879 + 4880 + rtl_reset_ocp_base(tp); 4887 4881 4888 4882 addr = __le16_to_cpu(fix->setting.addr); 4889 4883 data = ocp_reg_read(tp, addr); ··· 4918 4908 u32 length; 4919 4909 int i, num; 
4920 4910 4911 + rtl_reset_ocp_base(tp); 4912 + 4921 4913 num = phy->pre_num; 4922 4914 for (i = 0; i < num; i++) 4923 4915 sram_write(tp, __le16_to_cpu(phy->pre_set[i].addr), ··· 4949 4937 u16 mode_reg, bp_index; 4950 4938 u32 length, i, num; 4951 4939 __le16 *data; 4940 + 4941 + rtl_reset_ocp_base(tp); 4952 4942 4953 4943 mode_reg = __le16_to_cpu(phy->mode_reg); 4954 4944 sram_write(tp, mode_reg, __le16_to_cpu(phy->mode_pre)); ··· 5121 5107 if (rtl_fw->post_fw) 5122 5108 rtl_fw->post_fw(tp); 5123 5109 5110 + rtl_reset_ocp_base(tp); 5124 5111 strscpy(rtl_fw->version, fw_hdr->version, RTL_VER_SIZE); 5125 5112 dev_info(&tp->intf->dev, "load %s successfully\n", rtl_fw->version); 5126 5113 } ··· 6599 6584 return true; 6600 6585 } 6601 6586 6587 + static void r8156_mdio_force_mode(struct r8152 *tp) 6588 + { 6589 + u16 data; 6590 + 6591 + /* Select force mode through 0xa5b4 bit 15 6592 + * 0: MDIO force mode 6593 + * 1: MMD force mode 6594 + */ 6595 + data = ocp_reg_read(tp, 0xa5b4); 6596 + if (data & BIT(15)) { 6597 + data &= ~BIT(15); 6598 + ocp_reg_write(tp, 0xa5b4, data); 6599 + } 6600 + } 6601 + 6602 6602 static void set_carrier(struct r8152 *tp) 6603 6603 { 6604 6604 struct net_device *netdev = tp->netdev; ··· 8046 8016 ocp_data |= ACT_ODMA; 8047 8017 ocp_write_byte(tp, MCU_TYPE_USB, USB_BMU_CONFIG, ocp_data); 8048 8018 8019 + r8156_mdio_force_mode(tp); 8049 8020 rtl_tally_reset(tp); 8050 8021 8051 8022 tp->coalesce = 15000; /* 15 us */ ··· 8176 8145 ocp_data &= ~(RX_AGG_DISABLE | RX_ZERO_EN); 8177 8146 ocp_write_word(tp, MCU_TYPE_USB, USB_USB_CTRL, ocp_data); 8178 8147 8148 + r8156_mdio_force_mode(tp); 8179 8149 rtl_tally_reset(tp); 8180 8150 8181 8151 tp->coalesce = 15000; /* 15 us */ ··· 8499 8467 8500 8468 mutex_lock(&tp->control); 8501 8469 8470 + rtl_reset_ocp_base(tp); 8471 + 8502 8472 if (test_bit(SELECTIVE_SUSPEND, &tp->flags)) 8503 8473 ret = rtl8152_runtime_resume(tp); 8504 8474 else ··· 8516 8482 struct r8152 *tp = usb_get_intfdata(intf); 8517 8483 
8518 8484 clear_bit(SELECTIVE_SUSPEND, &tp->flags); 8485 + rtl_reset_ocp_base(tp); 8519 8486 tp->rtl_ops.init(tp); 8520 8487 queue_delayed_work(system_long_wq, &tp->hw_phy_work, 0); 8521 8488 set_ethernet_addr(tp, true);
+6 -2
drivers/net/veth.c
··· 879 879 880 880 stats->xdp_bytes += skb->len; 881 881 skb = veth_xdp_rcv_skb(rq, skb, bq, stats); 882 - if (skb) 883 - napi_gro_receive(&rq->xdp_napi, skb); 882 + if (skb) { 883 + if (skb_shared(skb) || skb_unclone(skb, GFP_ATOMIC)) 884 + netif_receive_skb(skb); 885 + else 886 + napi_gro_receive(&rq->xdp_napi, skb); 887 + } 884 888 } 885 889 done++; 886 890 }
+3 -6
drivers/net/virtio_net.c
··· 733 733 pr_debug("%s: rx error: len %u exceeds max size %d\n", 734 734 dev->name, len, GOOD_PACKET_LEN); 735 735 dev->stats.rx_length_errors++; 736 - goto err_len; 736 + goto err; 737 737 } 738 738 739 739 if (likely(!vi->xdp_enabled)) { ··· 825 825 826 826 skip_xdp: 827 827 skb = build_skb(buf, buflen); 828 - if (!skb) { 829 - put_page(page); 828 + if (!skb) 830 829 goto err; 831 - } 832 830 skb_reserve(skb, headroom - delta); 833 831 skb_put(skb, len); 834 832 if (!xdp_prog) { ··· 837 839 if (metasize) 838 840 skb_metadata_set(skb, metasize); 839 841 840 - err: 841 842 return skb; 842 843 843 844 err_xdp: 844 845 rcu_read_unlock(); 845 846 stats->xdp_drops++; 846 - err_len: 847 + err: 847 848 stats->drops++; 848 849 put_page(page); 849 850 xdp_xmit:
+9 -5
drivers/net/wireless/broadcom/brcm80211/Kconfig
··· 7 7 depends on MAC80211 8 8 depends on BCMA_POSSIBLE 9 9 select BCMA 10 - select NEW_LEDS if BCMA_DRIVER_GPIO 11 - select LEDS_CLASS if BCMA_DRIVER_GPIO 12 10 select BRCMUTIL 13 11 select FW_LOADER 14 12 select CORDIC 15 13 help 16 14 This module adds support for PCIe wireless adapters based on Broadcom 17 - IEEE802.11n SoftMAC chipsets. It also has WLAN led support, which will 18 - be available if you select BCMA_DRIVER_GPIO. If you choose to build a 19 - module, the driver will be called brcmsmac.ko. 15 + IEEE802.11n SoftMAC chipsets. If you choose to build a module, the 16 + driver will be called brcmsmac.ko. 17 + 18 + config BRCMSMAC_LEDS 19 + def_bool BRCMSMAC && BCMA_DRIVER_GPIO && MAC80211_LEDS 20 + help 21 + The brcmsmac LED support depends on the presence of the 22 + BCMA_DRIVER_GPIO driver, and it only works if LED support 23 + is enabled and reachable from the driver module. 20 24 21 25 source "drivers/net/wireless/broadcom/brcm80211/brcmfmac/Kconfig" 22 26
+1 -1
drivers/net/wireless/broadcom/brcm80211/brcmsmac/Makefile
··· 42 42 brcms_trace_events.o \ 43 43 debug.o 44 44 45 - brcmsmac-$(CONFIG_BCMA_DRIVER_GPIO) += led.o 45 + brcmsmac-$(CONFIG_BRCMSMAC_LEDS) += led.o 46 46 47 47 obj-$(CONFIG_BRCMSMAC) += brcmsmac.o
+1 -1
drivers/net/wireless/broadcom/brcm80211/brcmsmac/led.h
··· 24 24 struct gpio_desc *gpiod; 25 25 }; 26 26 27 - #ifdef CONFIG_BCMA_DRIVER_GPIO 27 + #ifdef CONFIG_BRCMSMAC_LEDS 28 28 void brcms_led_unregister(struct brcms_info *wl); 29 29 int brcms_led_register(struct brcms_info *wl); 30 30 #else
+2 -2
drivers/net/wireless/intel/iwlegacy/Kconfig
··· 2 2 config IWLEGACY 3 3 tristate 4 4 select FW_LOADER 5 - select NEW_LEDS 6 - select LEDS_CLASS 7 5 select LEDS_TRIGGERS 8 6 select MAC80211_LEDS 9 7 10 8 config IWL4965 11 9 tristate "Intel Wireless WiFi 4965AGN (iwl4965)" 12 10 depends on PCI && MAC80211 11 + depends on LEDS_CLASS=y || LEDS_CLASS=MAC80211 13 12 select IWLEGACY 14 13 help 15 14 This option enables support for ··· 37 38 config IWL3945 38 39 tristate "Intel PRO/Wireless 3945ABG/BG Network Connection (iwl3945)" 39 40 depends on PCI && MAC80211 41 + depends on LEDS_CLASS=y || LEDS_CLASS=MAC80211 40 42 select IWLEGACY 41 43 help 42 44 Select to build the driver supporting the:
+1 -1
drivers/net/wireless/intel/iwlwifi/Kconfig
··· 47 47 48 48 config IWLWIFI_LEDS 49 49 bool 50 - depends on LEDS_CLASS=y || LEDS_CLASS=IWLWIFI 50 + depends on LEDS_CLASS=y || LEDS_CLASS=MAC80211 51 51 depends on IWLMVM || IWLDVM 52 52 select LEDS_TRIGGERS 53 53 select MAC80211_LEDS
+3 -2
drivers/net/wireless/intel/iwlwifi/mvm/tx.c
··· 269 269 u8 rate_plcp; 270 270 u32 rate_flags = 0; 271 271 bool is_cck; 272 - struct iwl_mvm_sta *mvmsta = iwl_mvm_sta_from_mac80211(sta); 273 272 274 273 /* info->control is only relevant for non HW rate control */ 275 274 if (!ieee80211_hw_check(mvm->hw, HAS_RATE_CONTROL)) { 275 + struct iwl_mvm_sta *mvmsta = iwl_mvm_sta_from_mac80211(sta); 276 + 276 277 /* HT rate doesn't make sense for a non data frame */ 277 278 WARN_ONCE(info->control.rates[0].flags & IEEE80211_TX_RC_MCS && 278 279 !ieee80211_is_data(fc), 279 280 "Got a HT rate (flags:0x%x/mcs:%d/fc:0x%x/state:%d) for a non data frame\n", 280 281 info->control.rates[0].flags, 281 282 info->control.rates[0].idx, 282 - le16_to_cpu(fc), mvmsta->sta_state); 283 + le16_to_cpu(fc), sta ? mvmsta->sta_state : -1); 283 284 284 285 rate_idx = info->control.rates[0].idx; 285 286 }
+1 -1
drivers/net/wireless/mediatek/mt76/Makefile
··· 34 34 obj-$(CONFIG_MT7603E) += mt7603/ 35 35 obj-$(CONFIG_MT7615_COMMON) += mt7615/ 36 36 obj-$(CONFIG_MT7915E) += mt7915/ 37 - obj-$(CONFIG_MT7921E) += mt7921/ 37 + obj-$(CONFIG_MT7921_COMMON) += mt7921/
+1
drivers/net/xen-netback/common.h
··· 203 203 unsigned int rx_queue_max; 204 204 unsigned int rx_queue_len; 205 205 unsigned long last_rx_time; 206 + unsigned int rx_slots_needed; 206 207 bool stalled; 207 208 208 209 struct xenvif_copy_state rx_copy;
+49 -28
drivers/net/xen-netback/rx.c
··· 33 33 #include <xen/xen.h> 34 34 #include <xen/events.h> 35 35 36 + /* 37 + * Update the needed ring page slots for the first SKB queued. 38 + * Note that any call sequence outside the RX thread calling this function 39 + * needs to wake up the RX thread via a call of xenvif_kick_thread() 40 + * afterwards in order to avoid a race with putting the thread to sleep. 41 + */ 42 + static void xenvif_update_needed_slots(struct xenvif_queue *queue, 43 + const struct sk_buff *skb) 44 + { 45 + unsigned int needed = 0; 46 + 47 + if (skb) { 48 + needed = DIV_ROUND_UP(skb->len, XEN_PAGE_SIZE); 49 + if (skb_is_gso(skb)) 50 + needed++; 51 + if (skb->sw_hash) 52 + needed++; 53 + } 54 + 55 + WRITE_ONCE(queue->rx_slots_needed, needed); 56 + } 57 + 36 58 static bool xenvif_rx_ring_slots_available(struct xenvif_queue *queue) 37 59 { 38 60 RING_IDX prod, cons; 39 - struct sk_buff *skb; 40 - int needed; 41 - unsigned long flags; 61 + unsigned int needed; 42 62 43 - spin_lock_irqsave(&queue->rx_queue.lock, flags); 44 - 45 - skb = skb_peek(&queue->rx_queue); 46 - if (!skb) { 47 - spin_unlock_irqrestore(&queue->rx_queue.lock, flags); 63 + needed = READ_ONCE(queue->rx_slots_needed); 64 + if (!needed) 48 65 return false; 49 - } 50 - 51 - needed = DIV_ROUND_UP(skb->len, XEN_PAGE_SIZE); 52 - if (skb_is_gso(skb)) 53 - needed++; 54 - if (skb->sw_hash) 55 - needed++; 56 - 57 - spin_unlock_irqrestore(&queue->rx_queue.lock, flags); 58 66 59 67 do { 60 68 prod = queue->rx.sring->req_prod; ··· 88 80 89 81 spin_lock_irqsave(&queue->rx_queue.lock, flags); 90 82 91 - __skb_queue_tail(&queue->rx_queue, skb); 92 - 93 - queue->rx_queue_len += skb->len; 94 - if (queue->rx_queue_len > queue->rx_queue_max) { 83 + if (queue->rx_queue_len >= queue->rx_queue_max) { 95 84 struct net_device *dev = queue->vif->dev; 96 85 97 86 netif_tx_stop_queue(netdev_get_tx_queue(dev, queue->id)); 87 + kfree_skb(skb); 88 + queue->vif->dev->stats.rx_dropped++; 89 + } else { 90 + if (skb_queue_empty(&queue->rx_queue)) 91 + xenvif_update_needed_slots(queue, skb);
92 + 93 + __skb_queue_tail(&queue->rx_queue, skb); 94 + 95 + queue->rx_queue_len += skb->len; 98 96 } 99 97 100 98 spin_unlock_irqrestore(&queue->rx_queue.lock, flags); ··· 114 100 115 101 skb = __skb_dequeue(&queue->rx_queue); 116 102 if (skb) { 103 + xenvif_update_needed_slots(queue, skb_peek(&queue->rx_queue)); 104 + 117 105 queue->rx_queue_len -= skb->len; 118 106 if (queue->rx_queue_len < queue->rx_queue_max) { 119 107 struct netdev_queue *txq; ··· 150 134 break; 151 135 xenvif_rx_dequeue(queue); 152 136 kfree_skb(skb); 137 + queue->vif->dev->stats.rx_dropped++; 153 138 } 154 139 } ··· 504 487 xenvif_rx_copy_flush(queue); 505 488 } 506 489 507 - static bool xenvif_rx_queue_stalled(struct xenvif_queue *queue) 490 + static RING_IDX xenvif_rx_queue_slots(const struct xenvif_queue *queue) 508 491 { 509 492 RING_IDX prod, cons; 510 493 511 494 prod = queue->rx.sring->req_prod; 512 495 cons = queue->rx.req_cons; 513 496 497 + return prod - cons; 498 + } 499 + 500 + static bool xenvif_rx_queue_stalled(const struct xenvif_queue *queue) 501 + { 502 + unsigned int needed = READ_ONCE(queue->rx_slots_needed); 503 + 514 504 return !queue->stalled && 505 + xenvif_rx_queue_slots(queue) < needed && 516 506 time_after(jiffies, 517 507 queue->last_rx_time + queue->vif->stall_timeout); 518 508 } 519 509 520 510 static bool xenvif_rx_queue_ready(struct xenvif_queue *queue) 521 511 { 522 - RING_IDX prod, cons; 512 + unsigned int needed = READ_ONCE(queue->rx_slots_needed); 523 513 524 - prod = queue->rx.sring->req_prod; 525 - cons = queue->rx.req_cons; 526 - 527 - return queue->stalled && prod - cons >= 1; 514 + return queue->stalled && xenvif_rx_queue_slots(queue) >= needed; 528 515 529 516 bool xenvif_have_rx_work(struct xenvif_queue *queue, bool test_kthread)
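The xen-netback rework precomputes the ring slots the head-of-queue skb will need and publishes the count with `WRITE_ONCE()` so the availability check can read it locklessly via `READ_ONCE()`. A single-threaded sketch of just the slot arithmetic mirrored from `xenvif_update_needed_slots()` (the 4096-byte page size stands in for `XEN_PAGE_SIZE` and is an assumption of this sketch):

```c
#define PAGE_SZ 4096u
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* One ring slot per page of payload, plus one extra slot each for a
 * GSO prefix and a hash extra, matching the hunk above. */
static unsigned int needed_slots(unsigned int len, int is_gso, int has_hash)
{
	unsigned int needed = DIV_ROUND_UP(len, PAGE_SZ);

	if (is_gso)
		needed++;
	if (has_hash)
		needed++;
	return needed;
}
```

Computing this once at enqueue/dequeue time, instead of peeking the queue under its lock on every check, is what lets the hot path drop the `spin_lock_irqsave()` pair the old `xenvif_rx_ring_slots_available()` needed.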
+95 -32
drivers/net/xen-netfront.c
··· 148 148 grant_ref_t gref_rx_head; 149 149 grant_ref_t grant_rx_ref[NET_RX_RING_SIZE]; 150 150 151 + unsigned int rx_rsp_unconsumed; 152 + spinlock_t rx_cons_lock; 153 + 151 154 struct page_pool *page_pool; 152 155 struct xdp_rxq_info xdp_rxq; 153 156 }; ··· 379 376 return 0; 380 377 } 381 378 382 - static void xennet_tx_buf_gc(struct netfront_queue *queue) 379 + static bool xennet_tx_buf_gc(struct netfront_queue *queue) 383 380 { 384 381 RING_IDX cons, prod; 385 382 unsigned short id; 386 383 struct sk_buff *skb; 387 384 bool more_to_do; 385 + bool work_done = false; 388 386 const struct device *dev = &queue->info->netdev->dev; 389 387 390 388 BUG_ON(!netif_carrier_ok(queue->info->netdev)); ··· 401 397 402 398 for (cons = queue->tx.rsp_cons; cons != prod; cons++) { 403 399 struct xen_netif_tx_response txrsp; 400 + 401 + work_done = true; 404 402 405 403 RING_COPY_RESPONSE(&queue->tx, cons, &txrsp); 406 404 if (txrsp.status == XEN_NETIF_RSP_NULL) ··· 447 441 448 442 xennet_maybe_wake_tx(queue); 449 443 450 - return; 444 + return work_done; 451 445 452 446 err: 453 447 queue->info->broken = true; 454 448 dev_alert(dev, "Disabled for further use\n"); 449 + 450 + return work_done; 455 451 } 456 452 457 453 struct xennet_gnttab_make_txreq { ··· 842 834 return 0; 843 835 } 844 836 837 + static void xennet_set_rx_rsp_cons(struct netfront_queue *queue, RING_IDX val) 838 + { 839 + unsigned long flags; 840 + 841 + spin_lock_irqsave(&queue->rx_cons_lock, flags); 842 + queue->rx.rsp_cons = val; 843 + queue->rx_rsp_unconsumed = RING_HAS_UNCONSUMED_RESPONSES(&queue->rx); 844 + spin_unlock_irqrestore(&queue->rx_cons_lock, flags); 845 + } 846 + 845 847 static void xennet_move_rx_slot(struct netfront_queue *queue, struct sk_buff *skb, 846 848 grant_ref_t ref) 847 849 { ··· 903 885 xennet_move_rx_slot(queue, skb, ref); 904 886 } while (extra.flags & XEN_NETIF_EXTRA_FLAG_MORE); 905 887 906 - queue->rx.rsp_cons = cons; 888 + xennet_set_rx_rsp_cons(queue, cons); 907 889 return err; 
908 890 } ··· 1057 1039 } 1058 1040 1059 1041 if (unlikely(err)) 1060 - queue->rx.rsp_cons = cons + slots; 1042 + xennet_set_rx_rsp_cons(queue, cons + slots); 1061 1043 1062 1044 return err; 1063 1045 } ··· 1111 1093 __pskb_pull_tail(skb, pull_to - skb_headlen(skb)); 1112 1094 } 1113 1095 if (unlikely(skb_shinfo(skb)->nr_frags >= MAX_SKB_FRAGS)) { 1114 - queue->rx.rsp_cons = ++cons + skb_queue_len(list); 1096 + xennet_set_rx_rsp_cons(queue, 1097 + ++cons + skb_queue_len(list)); 1115 1098 kfree_skb(nskb); 1116 1099 return -ENOENT; 1117 1100 } ··· 1125 1106 kfree_skb(nskb); 1126 1107 } 1127 1108 1128 - queue->rx.rsp_cons = cons; 1109 + xennet_set_rx_rsp_cons(queue, cons); 1129 1110 1130 1111 return 0; 1131 1112 } ··· 1248 1229 1249 1230 if (unlikely(xennet_set_skb_gso(skb, gso))) { 1250 1231 __skb_queue_head(&tmpq, skb); 1251 - queue->rx.rsp_cons += skb_queue_len(&tmpq); 1232 + xennet_set_rx_rsp_cons(queue, 1233 + queue->rx.rsp_cons + 1234 + skb_queue_len(&tmpq)); 1252 1235 goto err; 1253 1236 } 1254 1237 } ··· 1274 1253 1275 1254 __skb_queue_tail(&rxq, skb); 1276 1255 1277 - i = ++queue->rx.rsp_cons; 1256 + i = queue->rx.rsp_cons + 1; 1257 + xennet_set_rx_rsp_cons(queue, i); 1278 1258 work_done++; 1279 1259 } 1280 1260 if (need_xdp_flush) ··· 1439 1417 return 0; 1440 1418 } 1441 1419 1442 - static irqreturn_t xennet_tx_interrupt(int irq, void *dev_id) 1420 + static bool xennet_handle_tx(struct netfront_queue *queue, unsigned int *eoi) 1443 1421 { 1444 - struct netfront_queue *queue = dev_id; 1445 1422 unsigned long flags; 1446 1423 1447 - if (queue->info->broken) 1448 - return IRQ_HANDLED; 1424 + if (unlikely(queue->info->broken)) 1425 + return false; 1449 1426 1450 1427 spin_lock_irqsave(&queue->tx_lock, flags); 1451 - xennet_tx_buf_gc(queue); 1428 + if (xennet_tx_buf_gc(queue)) 1429 + *eoi = 0; 1452 1430 spin_unlock_irqrestore(&queue->tx_lock, flags); 1431 + 1432 + return true; 1433 + } 1434 + 1435 + static irqreturn_t xennet_tx_interrupt(int irq, void *dev_id)
1436 + { 1437 + unsigned int eoiflag = XEN_EOI_FLAG_SPURIOUS; 1438 + 1439 + if (likely(xennet_handle_tx(dev_id, &eoiflag))) 1440 + xen_irq_lateeoi(irq, eoiflag); 1453 1441 1454 1442 return IRQ_HANDLED; 1455 1443 } 1456 1444 1445 + static bool xennet_handle_rx(struct netfront_queue *queue, unsigned int *eoi) 1446 + { 1447 + unsigned int work_queued; 1448 + unsigned long flags; 1449 + 1450 + if (unlikely(queue->info->broken)) 1451 + return false; 1452 + 1453 + spin_lock_irqsave(&queue->rx_cons_lock, flags); 1454 + work_queued = RING_HAS_UNCONSUMED_RESPONSES(&queue->rx); 1455 + if (work_queued > queue->rx_rsp_unconsumed) { 1456 + queue->rx_rsp_unconsumed = work_queued; 1457 + *eoi = 0; 1458 + } else if (unlikely(work_queued < queue->rx_rsp_unconsumed)) { 1459 + const struct device *dev = &queue->info->netdev->dev; 1460 + 1461 + spin_unlock_irqrestore(&queue->rx_cons_lock, flags); 1462 + dev_alert(dev, "RX producer index going backwards\n"); 1463 + dev_alert(dev, "Disabled for further use\n"); 1464 + queue->info->broken = true; 1465 + return false; 1466 + } 1467 + spin_unlock_irqrestore(&queue->rx_cons_lock, flags); 1468 + 1469 + if (likely(netif_carrier_ok(queue->info->netdev) && work_queued)) 1470 + napi_schedule(&queue->napi); 1471 + 1472 + return true; 1473 + } 1474 + 1457 1475 static irqreturn_t xennet_rx_interrupt(int irq, void *dev_id) 1458 1476 { 1459 - struct netfront_queue *queue = dev_id; 1460 - struct net_device *dev = queue->info->netdev; 1477 + unsigned int eoiflag = XEN_EOI_FLAG_SPURIOUS; 1461 1478 1462 - if (queue->info->broken) 1463 - return IRQ_HANDLED; 1464 - 1465 - if (likely(netif_carrier_ok(dev) && 1466 - RING_HAS_UNCONSUMED_RESPONSES(&queue->rx))) 1467 - napi_schedule(&queue->napi); 1479 + if (likely(xennet_handle_rx(dev_id, &eoiflag))) 1480 + xen_irq_lateeoi(irq, eoiflag); 1468 1481 1469 1482 return IRQ_HANDLED; 1470 1483 } 1471 1484 1472 1485 static irqreturn_t xennet_interrupt(int irq, void *dev_id) 1473 1486 { 1474 - xennet_tx_interrupt(irq, dev_id);
1475 - xennet_rx_interrupt(irq, dev_id); 1487 + unsigned int eoiflag = XEN_EOI_FLAG_SPURIOUS; 1488 + 1489 + if (xennet_handle_tx(dev_id, &eoiflag) && 1490 + xennet_handle_rx(dev_id, &eoiflag)) 1491 + xen_irq_lateeoi(irq, eoiflag); 1492 + 1476 1493 return IRQ_HANDLED; 1477 1494 } 1478 1495 ··· 1829 1768 if (err < 0) 1830 1769 goto fail; 1831 1770 1832 - err = bind_evtchn_to_irqhandler(queue->tx_evtchn, 1833 - xennet_interrupt, 1834 - 0, queue->info->netdev->name, queue); 1771 + err = bind_evtchn_to_irqhandler_lateeoi(queue->tx_evtchn, 1772 + xennet_interrupt, 0, 1773 + queue->info->netdev->name, 1774 + queue); 1835 1775 if (err < 0) 1836 1776 goto bind_fail; 1837 1777 queue->rx_evtchn = queue->tx_evtchn; ··· 1860 1798 1861 1799 snprintf(queue->tx_irq_name, sizeof(queue->tx_irq_name), 1862 1800 "%s-tx", queue->name); 1863 - err = bind_evtchn_to_irqhandler(queue->tx_evtchn, 1864 - xennet_tx_interrupt, 1865 - 0, queue->tx_irq_name, queue); 1801 + err = bind_evtchn_to_irqhandler_lateeoi(queue->tx_evtchn, 1802 + xennet_tx_interrupt, 0, 1803 + queue->tx_irq_name, queue); 1866 1804 if (err < 0) 1867 1805 goto bind_tx_fail; 1868 1806 queue->tx_irq = err; 1869 1807 1870 1808 snprintf(queue->rx_irq_name, sizeof(queue->rx_irq_name), 1871 1809 "%s-rx", queue->name); 1872 - err = bind_evtchn_to_irqhandler(queue->rx_evtchn, 1873 - xennet_rx_interrupt, 1874 - 0, queue->rx_irq_name, queue); 1810 + err = bind_evtchn_to_irqhandler_lateeoi(queue->rx_evtchn, 1811 + xennet_rx_interrupt, 0, 1812 + queue->rx_irq_name, queue); 1875 1813 if (err < 0) 1876 1814 goto bind_rx_fail; 1877 1815 queue->rx_irq = err; ··· 1973 1911 1974 1912 spin_lock_init(&queue->tx_lock); 1975 1913 spin_lock_init(&queue->rx_lock); 1914 + spin_lock_init(&queue->rx_cons_lock); 1976 1915 1977 1916 timer_setup(&queue->rx_refill_timer, rx_refill_timeout, 0); 1978 1917
+20 -9
drivers/nfc/st21nfca/i2c.c
··· 524 524 phy->gpiod_ena = devm_gpiod_get(dev, "enable", GPIOD_OUT_LOW); 525 525 if (IS_ERR(phy->gpiod_ena)) { 526 526 nfc_err(dev, "Unable to get ENABLE GPIO\n"); 527 - return PTR_ERR(phy->gpiod_ena); 527 + r = PTR_ERR(phy->gpiod_ena); 528 + goto out_free; 528 529 } 529 530 530 531 phy->se_status.is_ese_present = ··· 536 535 r = st21nfca_hci_platform_init(phy); 537 536 if (r < 0) { 538 537 nfc_err(&client->dev, "Unable to reboot st21nfca\n"); 539 - return r; 538 + goto out_free; 540 539 } 541 540 542 541 r = devm_request_threaded_irq(&client->dev, client->irq, NULL, ··· 545 544 ST21NFCA_HCI_DRIVER_NAME, phy); 546 545 if (r < 0) { 547 546 nfc_err(&client->dev, "Unable to register IRQ handler\n"); 548 - return r; 547 + goto out_free; 549 548 } 550 549 551 - return st21nfca_hci_probe(phy, &i2c_phy_ops, LLC_SHDLC_NAME, 552 - ST21NFCA_FRAME_HEADROOM, 553 - ST21NFCA_FRAME_TAILROOM, 554 - ST21NFCA_HCI_LLC_MAX_PAYLOAD, 555 - &phy->hdev, 556 - &phy->se_status); 550 + r = st21nfca_hci_probe(phy, &i2c_phy_ops, LLC_SHDLC_NAME, 551 + ST21NFCA_FRAME_HEADROOM, 552 + ST21NFCA_FRAME_TAILROOM, 553 + ST21NFCA_HCI_LLC_MAX_PAYLOAD, 554 + &phy->hdev, 555 + &phy->se_status); 556 + if (r) 557 + goto out_free; 558 + 559 + return 0; 560 + 561 + out_free: 562 + kfree_skb(phy->pending_skb); 563 + return r; 557 564 } 558 565 559 566 static int st21nfca_hci_i2c_remove(struct i2c_client *client) ··· 572 563 573 564 if (phy->powered) 574 565 st21nfca_hci_i2c_disable(phy); 566 + if (phy->pending_skb) 567 + kfree_skb(phy->pending_skb); 575 568 576 569 return 0; 577 570 }
+2 -2
drivers/pci/controller/Kconfig
··· 332 332 If unsure, say Y if you have an Apple Silicon system. 333 333 334 334 config PCIE_MT7621 335 - tristate "MediaTek MT7621 PCIe Controller" 336 - depends on (RALINK && SOC_MT7621) || (MIPS && COMPILE_TEST) 335 + bool "MediaTek MT7621 PCIe Controller" 336 + depends on SOC_MT7621 || (MIPS && COMPILE_TEST) 337 337 select PHY_MT7621_PCI 338 338 default SOC_MT7621 339 339 help
+11 -4
drivers/pci/msi.c
··· 722 722 goto out_disable; 723 723 } 724 724 725 - /* Ensure that all table entries are masked. */ 726 - msix_mask_all(base, tsize); 727 - 728 725 ret = msix_setup_entries(dev, base, entries, nvec, affd); 729 726 if (ret) 730 727 goto out_disable; ··· 748 751 /* Set MSI-X enabled bits and unmask the function */ 749 752 pci_intx_for_msi(dev, 0); 750 753 dev->msix_enabled = 1; 754 + 755 + /* 756 + * Ensure that all table entries are masked to prevent 757 + * stale entries from firing in a crash kernel. 758 + * 759 + * Done late to deal with a broken Marvell NVME device 760 + * which takes the MSI-X mask bits into account even 761 + * when MSI-X is disabled, which prevents MSI delivery. 762 + */ 763 + msix_mask_all(base, tsize); 751 764 pci_msix_clear_and_set_ctrl(dev, PCI_MSIX_FLAGS_MASKALL, 0); 752 765 753 766 pcibios_free_irq(dev); ··· 784 777 free_msi_irqs(dev); 785 778 786 779 out_disable: 787 - pci_msix_clear_and_set_ctrl(dev, PCI_MSIX_FLAGS_ENABLE, 0); 780 + pci_msix_clear_and_set_ctrl(dev, PCI_MSIX_FLAGS_MASKALL | PCI_MSIX_FLAGS_ENABLE, 0); 788 781 789 782 return ret; 790 783 }
+16 -13
drivers/pinctrl/bcm/pinctrl-bcm2835.c
··· 1244 1244 raw_spin_lock_init(&pc->irq_lock[i]); 1245 1245 } 1246 1246 1247 + pc->pctl_desc = *pdata->pctl_desc; 1248 + pc->pctl_dev = devm_pinctrl_register(dev, &pc->pctl_desc, pc); 1249 + if (IS_ERR(pc->pctl_dev)) { 1250 + gpiochip_remove(&pc->gpio_chip); 1251 + return PTR_ERR(pc->pctl_dev); 1252 + } 1253 + 1254 + pc->gpio_range = *pdata->gpio_range; 1255 + pc->gpio_range.base = pc->gpio_chip.base; 1256 + pc->gpio_range.gc = &pc->gpio_chip; 1257 + pinctrl_add_gpio_range(pc->pctl_dev, &pc->gpio_range); 1258 + 1247 1259 girq = &pc->gpio_chip.irq; 1248 1260 girq->chip = &bcm2835_gpio_irq_chip; 1249 1261 girq->parent_handler = bcm2835_gpio_irq_handler; ··· 1263 1251 girq->parents = devm_kcalloc(dev, BCM2835_NUM_IRQS, 1264 1252 sizeof(*girq->parents), 1265 1253 GFP_KERNEL); 1266 - if (!girq->parents) 1254 + if (!girq->parents) { 1255 + pinctrl_remove_gpio_range(pc->pctl_dev, &pc->gpio_range); 1267 1256 return -ENOMEM; 1257 + } 1268 1258 1269 1259 if (is_7211) { 1270 1260 pc->wake_irq = devm_kcalloc(dev, BCM2835_NUM_IRQS, ··· 1321 1307 err = gpiochip_add_data(&pc->gpio_chip, pc); 1322 1308 if (err) { 1323 1309 dev_err(dev, "could not add GPIO chip\n"); 1310 + pinctrl_remove_gpio_range(pc->pctl_dev, &pc->gpio_range); 1324 1311 return err; 1325 1312 } 1326 - 1327 - pc->pctl_desc = *pdata->pctl_desc; 1328 - pc->pctl_dev = devm_pinctrl_register(dev, &pc->pctl_desc, pc); 1329 - if (IS_ERR(pc->pctl_dev)) { 1330 - gpiochip_remove(&pc->gpio_chip); 1331 - return PTR_ERR(pc->pctl_dev); 1332 - } 1333 - 1334 - pc->gpio_range = *pdata->gpio_range; 1335 - pc->gpio_range.base = pc->gpio_chip.base; 1336 - pc->gpio_range.gc = &pc->gpio_chip; 1337 - pinctrl_add_gpio_range(pc->pctl_dev, &pc->gpio_range); 1338 1313 1339 1314 return 0; 1340 1315 }
+6 -2
drivers/pinctrl/mediatek/pinctrl-mtk-common-v2.c
··· 285 285 desc = (const struct mtk_pin_desc *)hw->soc->pins; 286 286 *gpio_chip = &hw->chip; 287 287 288 - /* Be greedy to guess first gpio_n is equal to eint_n */ 289 - if (desc[eint_n].eint.eint_n == eint_n) 288 + /* 289 + * Be greedy to guess first gpio_n is equal to eint_n. 290 + * Only eint virtual eint number is greater than gpio number. 291 + */ 292 + if (hw->soc->npins > eint_n && 293 + desc[eint_n].eint.eint_n == eint_n) 290 294 *gpio_n = eint_n; 291 295 else 292 296 *gpio_n = mtk_xt_find_eint_num(hw, eint_n);
+4 -4
drivers/pinctrl/stm32/pinctrl-stm32.c
··· 1251 1251 bank_nr = args.args[1] / STM32_GPIO_PINS_PER_BANK; 1252 1252 bank->gpio_chip.base = args.args[1]; 1253 1253 1254 - npins = args.args[2]; 1255 - while (!of_parse_phandle_with_fixed_args(np, "gpio-ranges", 3, 1256 - ++i, &args)) 1257 - npins += args.args[2]; 1254 + /* get the last defined gpio line (offset + nb of pins) */ 1255 + npins = args.args[0] + args.args[2]; 1256 + while (!of_parse_phandle_with_fixed_args(np, "gpio-ranges", 3, ++i, &args)) 1257 + npins = max(npins, (int)(args.args[0] + args.args[2])); 1258 1258 } else { 1259 1259 bank_nr = pctl->nbanks; 1260 1260 bank->gpio_chip.base = bank_nr * STM32_GPIO_PINS_PER_BANK;
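The replacement loop above sizes the bank from the highest line any gpio-ranges entry reaches (offset + count) instead of summing the counts, which undercounts when ranges leave holes. A stand-alone model of that computation; the struct and helper name are illustrative, not the driver's:

```c
#include <assert.h>
#include <stddef.h>

struct gpio_range {
	unsigned int offset;	/* first GPIO line covered by the range */
	unsigned int npins;	/* number of consecutive pins in the range */
};

/* Last defined GPIO line: max over all ranges of (offset + count). */
static unsigned int bank_npins(const struct gpio_range *r, size_t n)
{
	unsigned int last = 0;

	for (size_t i = 0; i < n; i++)
		if (r[i].offset + r[i].npins > last)
			last = r[i].offset + r[i].npins;
	return last;
}
```

With a hole between ranges, the old sum-of-counts would report 12 lines for { 0, 4 } plus { 8, 8 }; the max-of-ends form correctly reports 16.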
+2 -2
drivers/platform/mellanox/mlxbf-pmc.c
··· 1374 1374 pmc->block[i].counters = info[2]; 1375 1375 pmc->block[i].type = info[3]; 1376 1376 1377 - if (IS_ERR(pmc->block[i].mmio_base)) 1378 - return PTR_ERR(pmc->block[i].mmio_base); 1377 + if (!pmc->block[i].mmio_base) 1378 + return -ENOMEM; 1379 1379 1380 1380 ret = mlxbf_pmc_create_groups(dev, i); 1381 1381 if (ret)
+1 -1
drivers/platform/x86/Makefile
··· 68 68 obj-$(CONFIG_THINKPAD_LMI) += think-lmi.o 69 69 70 70 # Intel 71 - obj-$(CONFIG_X86_PLATFORM_DRIVERS_INTEL) += intel/ 71 + obj-y += intel/ 72 72 73 73 # MSI 74 74 obj-$(CONFIG_MSI_LAPTOP) += msi-laptop.o
+2 -1
drivers/platform/x86/amd-pmc.c
··· 508 508 } 509 509 510 510 static const struct dev_pm_ops amd_pmc_pm_ops = { 511 - SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(amd_pmc_suspend, amd_pmc_resume) 511 + .suspend_noirq = amd_pmc_suspend, 512 + .resume_noirq = amd_pmc_resume, 512 513 }; 513 514 514 515 static const struct pci_device_id pmc_pci_ids[] = {
+1 -1
drivers/platform/x86/apple-gmux.c
··· 625 625 } 626 626 627 627 gmux_data->iostart = res->start; 628 - gmux_data->iolen = res->end - res->start; 628 + gmux_data->iolen = resource_size(res); 629 629 630 630 if (gmux_data->iolen < GMUX_MIN_IO_LEN) { 631 631 pr_err("gmux I/O region too small (%lu < %u)\n",
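The one-line apple-gmux change matters because Linux `struct resource` ranges are inclusive of both endpoints, so the usable length is `end - start + 1`; the open-coded subtraction under-reported it by one byte and could trip the minimum-length check. A userspace sketch of resource_size() over a minimal resource struct:

```c
#include <assert.h>

struct resource {
	unsigned long start;
	unsigned long end;	/* inclusive last address */
};

/* Length of an inclusive [start, end] range. */
static unsigned long resource_size(const struct resource *res)
{
	return res->end - res->start + 1;
}
```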
-15
drivers/platform/x86/intel/Kconfig
··· 3 3 # Intel x86 Platform Specific Drivers 4 4 # 5 5 6 - menuconfig X86_PLATFORM_DRIVERS_INTEL 7 - bool "Intel x86 Platform Specific Device Drivers" 8 - default y 9 - help 10 - Say Y here to get to see options for device drivers for 11 - various Intel x86 platforms, including vendor-specific 12 - drivers. This option alone does not add any kernel code. 13 - 14 - If you say N, all options in this submenu will be skipped 15 - and disabled. 16 - 17 - if X86_PLATFORM_DRIVERS_INTEL 18 - 19 6 source "drivers/platform/x86/intel/atomisp2/Kconfig" 20 7 source "drivers/platform/x86/intel/int1092/Kconfig" 21 8 source "drivers/platform/x86/intel/int33fe/Kconfig" ··· 170 183 171 184 To compile this driver as a module, choose M here: the module 172 185 will be called intel-uncore-frequency. 173 - 174 - endif # X86_PLATFORM_DRIVERS_INTEL
+1 -1
drivers/platform/x86/intel/pmc/pltdrv.c
··· 65 65 66 66 retval = platform_device_register(pmc_core_device); 67 67 if (retval) 68 - kfree(pmc_core_device); 68 + platform_device_put(pmc_core_device); 69 69 70 70 return retval; 71 71 }
+30 -28
drivers/platform/x86/system76_acpi.c
··· 35 35 union acpi_object *nfan; 36 36 union acpi_object *ntmp; 37 37 struct input_dev *input; 38 + bool has_open_ec; 38 39 }; 39 40 40 41 static const struct acpi_device_id device_ids[] = { ··· 280 279 281 280 static void system76_battery_init(void) 282 281 { 283 - acpi_handle handle; 284 - 285 - handle = ec_get_handle(); 286 - if (handle && acpi_has_method(handle, "GBCT")) 287 - battery_hook_register(&system76_battery_hook); 282 + battery_hook_register(&system76_battery_hook); 288 283 } 289 284 290 285 static void system76_battery_exit(void) 291 286 { 292 - acpi_handle handle; 293 - 294 - handle = ec_get_handle(); 295 - if (handle && acpi_has_method(handle, "GBCT")) 296 - battery_hook_unregister(&system76_battery_hook); 287 + battery_hook_unregister(&system76_battery_hook); 297 288 } 298 289 299 290 // Get the airplane mode LED brightness ··· 666 673 acpi_dev->driver_data = data; 667 674 data->acpi_dev = acpi_dev; 668 675 676 + // Some models do not run open EC firmware. Check for an ACPI method 677 + // that only exists on open EC to guard functionality specific to it. 
678 + data->has_open_ec = acpi_has_method(acpi_device_handle(data->acpi_dev), "NFAN"); 679 + 669 680 err = system76_get(data, "INIT"); 670 681 if (err) 671 682 return err; ··· 715 718 if (err) 716 719 goto error; 717 720 718 - err = system76_get_object(data, "NFAN", &data->nfan); 719 - if (err) 720 - goto error; 721 + if (data->has_open_ec) { 722 + err = system76_get_object(data, "NFAN", &data->nfan); 723 + if (err) 724 + goto error; 721 725 722 - err = system76_get_object(data, "NTMP", &data->ntmp); 723 - if (err) 724 - goto error; 726 + err = system76_get_object(data, "NTMP", &data->ntmp); 727 + if (err) 728 + goto error; 725 729 726 - data->therm = devm_hwmon_device_register_with_info(&acpi_dev->dev, 727 - "system76_acpi", data, &thermal_chip_info, NULL); 728 - err = PTR_ERR_OR_ZERO(data->therm); 729 - if (err) 730 - goto error; 730 + data->therm = devm_hwmon_device_register_with_info(&acpi_dev->dev, 731 + "system76_acpi", data, &thermal_chip_info, NULL); 732 + err = PTR_ERR_OR_ZERO(data->therm); 733 + if (err) 734 + goto error; 731 735 732 - system76_battery_init(); 736 + system76_battery_init(); 737 + } 733 738 734 739 return 0; 735 740 736 741 error: 737 - kfree(data->ntmp); 738 - kfree(data->nfan); 742 + if (data->has_open_ec) { 743 + kfree(data->ntmp); 744 + kfree(data->nfan); 745 + } 739 746 return err; 740 747 } 741 748 ··· 750 749 751 750 data = acpi_driver_data(acpi_dev); 752 751 753 - system76_battery_exit(); 752 + if (data->has_open_ec) { 753 + system76_battery_exit(); 754 + kfree(data->nfan); 755 + kfree(data->ntmp); 756 + } 754 757 755 758 devm_led_classdev_unregister(&acpi_dev->dev, &data->ap_led); 756 759 devm_led_classdev_unregister(&acpi_dev->dev, &data->kb_led); 757 - 758 - kfree(data->nfan); 759 - kfree(data->ntmp); 760 760 761 761 system76_get(data, "FINI"); 762 762
+1 -8
drivers/reset/tegra/reset-bpmp.c
··· 20 20 struct tegra_bpmp *bpmp = to_tegra_bpmp(rstc); 21 21 struct mrq_reset_request request; 22 22 struct tegra_bpmp_message msg; 23 - int err; 24 23 25 24 memset(&request, 0, sizeof(request)); 26 25 request.cmd = command; ··· 30 31 msg.tx.data = &request; 31 32 msg.tx.size = sizeof(request); 32 33 33 - err = tegra_bpmp_transfer(bpmp, &msg); 34 - if (err) 35 - return err; 36 - if (msg.rx.ret) 37 - return -EINVAL; 38 - 39 - return 0; 34 + return tegra_bpmp_transfer(bpmp, &msg); 40 35 } 41 36 42 37 static int tegra_bpmp_reset_module(struct reset_controller_dev *rstc,
+4 -2
drivers/scsi/libiscsi.c
··· 3100 3100 { 3101 3101 struct iscsi_conn *conn = cls_conn->dd_data; 3102 3102 struct iscsi_session *session = conn->session; 3103 + char *tmp_persistent_address = conn->persistent_address; 3104 + char *tmp_local_ipaddr = conn->local_ipaddr; 3103 3105 3104 3106 del_timer_sync(&conn->transport_timer); 3105 3107 ··· 3123 3121 spin_lock_bh(&session->frwd_lock); 3124 3122 free_pages((unsigned long) conn->data, 3125 3123 get_order(ISCSI_DEF_MAX_RECV_SEG_LEN)); 3126 - kfree(conn->persistent_address); 3127 - kfree(conn->local_ipaddr); 3128 3124 /* regular RX path uses back_lock */ 3129 3125 spin_lock_bh(&session->back_lock); 3130 3126 kfifo_in(&session->cmdpool.queue, (void*)&conn->login_task, ··· 3134 3134 mutex_unlock(&session->eh_mutex); 3135 3135 3136 3136 iscsi_destroy_conn(cls_conn); 3137 + kfree(tmp_persistent_address); 3138 + kfree(tmp_local_ipaddr); 3137 3139 } 3138 3140 EXPORT_SYMBOL_GPL(iscsi_conn_teardown); 3139 3141
+2 -2
drivers/scsi/lpfc/lpfc_debugfs.c
··· 2954 2954 char mybuf[64]; 2955 2955 char *pbuf; 2956 2956 2957 - if (nbytes > 64) 2958 - nbytes = 64; 2957 + if (nbytes > 63) 2958 + nbytes = 63; 2959 2959 2960 2960 memset(mybuf, 0, sizeof(mybuf)); 2961 2961
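The lpfc clamp changes 64 to 63 because the 64-byte buffer must keep one byte for the NUL terminator; copying a full 64 bytes into a zeroed 64-byte array leaves the string unterminated. A small model of the bounded copy (the helper name is invented):

```c
#include <assert.h>
#include <string.h>

/* Copy at most dst_sz - 1 bytes so dst always stays NUL-terminated. */
static size_t copy_bounded(char *dst, size_t dst_sz, const char *src,
			   size_t nbytes)
{
	if (nbytes > dst_sz - 1)
		nbytes = dst_sz - 1;
	memset(dst, 0, dst_sz);
	memcpy(dst, src, nbytes);
	return nbytes;
}
```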
+21 -17
drivers/scsi/pm8001/pm80xx_hwi.c
··· 3053 3053 struct smp_completion_resp *psmpPayload; 3054 3054 struct task_status_struct *ts; 3055 3055 struct pm8001_device *pm8001_dev; 3056 - char *pdma_respaddr = NULL; 3057 3056 3058 3057 psmpPayload = (struct smp_completion_resp *)(piomb + 4); 3059 3058 status = le32_to_cpu(psmpPayload->status); ··· 3079 3080 if (pm8001_dev) 3080 3081 atomic_dec(&pm8001_dev->running_req); 3081 3082 if (pm8001_ha->smp_exp_mode == SMP_DIRECT) { 3083 + struct scatterlist *sg_resp = &t->smp_task.smp_resp; 3084 + u8 *payload; 3085 + void *to; 3086 + 3082 3087 pm8001_dbg(pm8001_ha, IO, 3083 3088 "DIRECT RESPONSE Length:%d\n", 3084 3089 param); 3085 - pdma_respaddr = (char *)(phys_to_virt(cpu_to_le64 3086 - ((u64)sg_dma_address 3087 - (&t->smp_task.smp_resp)))); 3090 + to = kmap_atomic(sg_page(sg_resp)); 3091 + payload = to + sg_resp->offset; 3088 3092 for (i = 0; i < param; i++) { 3089 - *(pdma_respaddr+i) = psmpPayload->_r_a[i]; 3093 + *(payload + i) = psmpPayload->_r_a[i]; 3090 3094 pm8001_dbg(pm8001_ha, IO, 3091 3095 "SMP Byte%d DMA data 0x%x psmp 0x%x\n", 3092 - i, *(pdma_respaddr + i), 3096 + i, *(payload + i), 3093 3097 psmpPayload->_r_a[i]); 3094 3098 } 3099 + kunmap_atomic(to); 3095 3100 } 3096 3101 break; 3097 3102 case IO_ABORTED: ··· 4239 4236 struct sas_task *task = ccb->task; 4240 4237 struct domain_device *dev = task->dev; 4241 4238 struct pm8001_device *pm8001_dev = dev->lldd_dev; 4242 - struct scatterlist *sg_req, *sg_resp; 4239 + struct scatterlist *sg_req, *sg_resp, *smp_req; 4243 4240 u32 req_len, resp_len; 4244 4241 struct smp_req smp_cmd; 4245 4242 u32 opc; 4246 4243 struct inbound_queue_table *circularQ; 4247 - char *preq_dma_addr = NULL; 4248 - __le64 tmp_addr; 4249 4244 u32 i, length; 4245 + u8 *payload; 4246 + u8 *to; 4250 4247 4251 4248 memset(&smp_cmd, 0, sizeof(smp_cmd)); 4252 4249 /* ··· 4283 4280 pm8001_ha->smp_exp_mode = SMP_INDIRECT; 4284 4281 4285 4282 4286 - tmp_addr = cpu_to_le64((u64)sg_dma_address(&task->smp_task.smp_req)); 4287 - 
preq_dma_addr = (char *)phys_to_virt(tmp_addr); 4283 + smp_req = &task->smp_task.smp_req; 4284 + to = kmap_atomic(sg_page(smp_req)); 4285 + payload = to + smp_req->offset; 4288 4286 4289 4287 /* INDIRECT MODE command settings. Use DMA */ 4290 4288 if (pm8001_ha->smp_exp_mode == SMP_INDIRECT) { ··· 4293 4289 /* for SPCv indirect mode. Place the top 4 bytes of 4294 4290 * SMP Request header here. */ 4295 4291 for (i = 0; i < 4; i++) 4296 - smp_cmd.smp_req16[i] = *(preq_dma_addr + i); 4292 + smp_cmd.smp_req16[i] = *(payload + i); 4297 4293 /* exclude top 4 bytes for SMP req header */ 4298 4294 smp_cmd.long_smp_req.long_req_addr = 4299 4295 cpu_to_le64((u64)sg_dma_address ··· 4324 4320 pm8001_dbg(pm8001_ha, IO, "SMP REQUEST DIRECT MODE\n"); 4325 4321 for (i = 0; i < length; i++) 4326 4322 if (i < 16) { 4327 - smp_cmd.smp_req16[i] = *(preq_dma_addr+i); 4323 + smp_cmd.smp_req16[i] = *(payload + i); 4328 4324 pm8001_dbg(pm8001_ha, IO, 4329 4325 "Byte[%d]:%x (DMA data:%x)\n", 4330 4326 i, smp_cmd.smp_req16[i], 4331 - *(preq_dma_addr)); 4327 + *(payload)); 4332 4328 } else { 4333 - smp_cmd.smp_req[i] = *(preq_dma_addr+i); 4329 + smp_cmd.smp_req[i] = *(payload + i); 4334 4330 pm8001_dbg(pm8001_ha, IO, 4335 4331 "Byte[%d]:%x (DMA data:%x)\n", 4336 4332 i, smp_cmd.smp_req[i], 4337 - *(preq_dma_addr)); 4333 + *(payload)); 4338 4334 } 4339 4335 } 4340 - 4336 + kunmap_atomic(to); 4341 4337 build_smp_cmd(pm8001_dev->device_id, smp_cmd.tag, 4342 4338 &smp_cmd, pm8001_ha->smp_exp_mode, length); 4343 4339 rc = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &smp_cmd,
+5 -2
drivers/scsi/vmw_pvscsi.c
··· 586 586 * Commands like INQUIRY may transfer less data than 587 587 * requested by the initiator via bufflen. Set residual 588 588 * count to make upper layer aware of the actual amount 589 - * of data returned. 589 + * of data returned. There are cases when controller 590 + * returns zero dataLen with non zero data - do not set 591 + * residual count in that case. 590 592 */ 591 - scsi_set_resid(cmd, scsi_bufflen(cmd) - e->dataLen); 593 + if (e->dataLen && (e->dataLen < scsi_bufflen(cmd))) 594 + scsi_set_resid(cmd, scsi_bufflen(cmd) - e->dataLen); 592 595 cmd->result = (DID_OK << 16); 593 596 break; 594 597
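The vmw_pvscsi guard only trusts dataLen when it is non-zero and smaller than the request size: some controller completions report dataLen of zero even though data was transferred, and the old unconditional subtraction turned that into a full-length residual. The check as a pure function:

```c
#include <assert.h>

static unsigned int pvscsi_residual(unsigned int bufflen,
				    unsigned int data_len)
{
	/* zero data_len may accompany real data: report no residual */
	if (data_len && data_len < bufflen)
		return bufflen - data_len;
	return 0;
}
```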
+19
drivers/soc/imx/imx8m-blk-ctrl.c
··· 17 17 18 18 #define BLK_SFT_RSTN 0x0 19 19 #define BLK_CLK_EN 0x4 20 + #define BLK_MIPI_RESET_DIV 0x8 /* Mini/Nano DISPLAY_BLK_CTRL only */ 20 21 21 22 struct imx8m_blk_ctrl_domain; 22 23 ··· 37 36 const char *gpc_name; 38 37 u32 rst_mask; 39 38 u32 clk_mask; 39 + 40 + /* 41 + * i.MX8M Mini and Nano have a third DISPLAY_BLK_CTRL register 42 + * which is used to control the reset for the MIPI Phy. 43 + * Since it's only present in certain circumstances, 44 + * an if-statement should be used before setting and clearing this 45 + * register. 46 + */ 47 + u32 mipi_phy_rst_mask; 40 48 }; 41 49 42 50 #define DOMAIN_MAX_CLKS 3 ··· 88 78 89 79 /* put devices into reset */ 90 80 regmap_clear_bits(bc->regmap, BLK_SFT_RSTN, data->rst_mask); 81 + if (data->mipi_phy_rst_mask) 82 + regmap_clear_bits(bc->regmap, BLK_MIPI_RESET_DIV, data->mipi_phy_rst_mask); 91 83 92 84 /* enable upstream and blk-ctrl clocks to allow reset to propagate */ 93 85 ret = clk_bulk_prepare_enable(data->num_clks, domain->clks); ··· 111 99 112 100 /* release reset */ 113 101 regmap_set_bits(bc->regmap, BLK_SFT_RSTN, data->rst_mask); 102 + if (data->mipi_phy_rst_mask) 103 + regmap_set_bits(bc->regmap, BLK_MIPI_RESET_DIV, data->mipi_phy_rst_mask); 114 104 115 105 /* disable upstream clocks */ 116 106 clk_bulk_disable_unprepare(data->num_clks, domain->clks); ··· 134 120 struct imx8m_blk_ctrl *bc = domain->bc; 135 121 136 122 /* put devices into reset and disable clocks */ 123 + if (data->mipi_phy_rst_mask) 124 + regmap_clear_bits(bc->regmap, BLK_MIPI_RESET_DIV, data->mipi_phy_rst_mask); 125 + 137 126 regmap_clear_bits(bc->regmap, BLK_SFT_RSTN, data->rst_mask); 138 127 regmap_clear_bits(bc->regmap, BLK_CLK_EN, data->clk_mask); 139 128 ··· 497 480 .gpc_name = "mipi-dsi", 498 481 .rst_mask = BIT(5), 499 482 .clk_mask = BIT(8) | BIT(9), 483 + .mipi_phy_rst_mask = BIT(17), 500 484 }, 501 485 [IMX8MM_DISPBLK_PD_MIPI_CSI] = { 502 486 .name = "dispblk-mipi-csi", ··· 506 488 .gpc_name = "mipi-csi", 507 489 
.rst_mask = BIT(3) | BIT(4), 508 490 .clk_mask = BIT(10) | BIT(11), 491 + .mipi_phy_rst_mask = BIT(16), 509 492 }, 510 493 }; 511 494
+4
drivers/soc/imx/soc-imx.c
··· 36 36 int ret; 37 37 int i; 38 38 39 + /* Return early if this is running on devices with different SoCs */ 40 + if (!__mxc_cpu_type) 41 + return 0; 42 + 39 43 if (of_machine_is_compatible("fsl,ls1021a")) 40 44 return 0; 41 45
+1 -1
drivers/soc/tegra/fuse/fuse-tegra.c
··· 320 320 }; 321 321 builtin_platform_driver(tegra_fuse_driver); 322 322 323 - bool __init tegra_fuse_read_spare(unsigned int spare) 323 + u32 __init tegra_fuse_read_spare(unsigned int spare) 324 324 { 325 325 unsigned int offset = fuse->soc->info->spare + spare * 4; 326 326
+1 -1
drivers/soc/tegra/fuse/fuse.h
··· 65 65 void tegra_init_revision(void); 66 66 void tegra_init_apbmisc(void); 67 67 68 - bool __init tegra_fuse_read_spare(unsigned int spare); 68 + u32 __init tegra_fuse_read_spare(unsigned int spare); 69 69 u32 __init tegra_fuse_read_early(unsigned int offset); 70 70 71 71 u8 tegra_get_major_rev(void);
+1 -1
drivers/spi/spi-armada-3700.c
··· 901 901 return 0; 902 902 903 903 error_clk: 904 - clk_disable_unprepare(spi->clk); 904 + clk_unprepare(spi->clk); 905 905 error: 906 906 spi_master_put(master); 907 907 out:
+2 -3
drivers/tee/amdtee/core.c
··· 203 203 204 204 *ta_size = roundup(fw->size, PAGE_SIZE); 205 205 *ta = (void *)__get_free_pages(GFP_KERNEL, get_order(*ta_size)); 206 - if (IS_ERR(*ta)) { 207 - pr_err("%s: get_free_pages failed 0x%llx\n", __func__, 208 - (u64)*ta); 206 + if (!*ta) { 207 + pr_err("%s: get_free_pages failed\n", __func__); 209 208 rc = -ENOMEM; 210 209 goto rel_fw; 211 210 }
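The amdtee fix swaps IS_ERR() for a NULL check because page allocators signal failure by returning 0, not an ERR_PTR; IS_ERR() only recognizes the top 4095 addresses as errors, so the old test could never fire. A userspace re-creation of the kernel macro makes that visible:

```c
#include <assert.h>

/* Kernel-style error-pointer test: only the top MAX_ERRNO addresses
 * encode errors, so a NULL returned by an allocator is never "an
 * error" as far as IS_ERR() is concerned. */
#define MAX_ERRNO	4095
#define IS_ERR(ptr)	((unsigned long)(ptr) >= (unsigned long)-MAX_ERRNO)
```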
+2 -4
drivers/tee/optee/core.c
··· 48 48 goto err; 49 49 } 50 50 51 - for (i = 0; i < nr_pages; i++) { 52 - pages[i] = page; 53 - page++; 54 - } 51 + for (i = 0; i < nr_pages; i++) 52 + pages[i] = page + i; 55 53 56 54 shm->flags |= TEE_SHM_REGISTER; 57 55 rc = shm_register(shm->ctx, shm, pages, nr_pages,
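The tightened optee loop is plain pointer arithmetic: the i-th struct page of a contiguous allocation is `page + i`, so the walking `page++` variable is unnecessary. A trivial check of the idiom, with `int` standing in for `struct page`:

```c
#include <assert.h>

/* Fill a page-pointer array from a contiguous run, as the fixed loop
 * does; int stands in for struct page here. */
static int fill_pages_ok(void)
{
	int base[4];
	int *pages[4];

	for (int i = 0; i < 4; i++)
		pages[i] = base + i;

	for (int i = 0; i < 4; i++)
		if (pages[i] != &base[i])
			return 0;
	return 1;
}
```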
+2
drivers/tee/optee/smc_abi.c
··· 23 23 #include "optee_private.h" 24 24 #include "optee_smc.h" 25 25 #include "optee_rpc_cmd.h" 26 + #include <linux/kmemleak.h> 26 27 #define CREATE_TRACE_POINTS 27 28 #include "optee_trace.h" 28 29 ··· 784 783 param->a4 = 0; 785 784 param->a5 = 0; 786 785 } 786 + kmemleak_not_leak(shm); 787 787 break; 788 788 case OPTEE_SMC_RPC_FUNC_FREE: 789 789 shm = reg_pair_to_ptr(param->a1, param->a2);
+66 -108
drivers/tee/tee_shm.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0-only 2 2 /* 3 - * Copyright (c) 2015-2016, Linaro Limited 3 + * Copyright (c) 2015-2017, 2019-2021 Linaro Limited 4 4 */ 5 + #include <linux/anon_inodes.h> 5 6 #include <linux/device.h> 6 - #include <linux/dma-buf.h> 7 - #include <linux/fdtable.h> 8 7 #include <linux/idr.h> 8 + #include <linux/mm.h> 9 9 #include <linux/sched.h> 10 10 #include <linux/slab.h> 11 11 #include <linux/tee_drv.h> 12 12 #include <linux/uio.h> 13 - #include <linux/module.h> 14 13 #include "tee_private.h" 15 - 16 - MODULE_IMPORT_NS(DMA_BUF); 17 14 18 15 static void release_registered_pages(struct tee_shm *shm) 19 16 { ··· 28 31 } 29 32 } 30 33 31 - static void tee_shm_release(struct tee_shm *shm) 34 + static void tee_shm_release(struct tee_device *teedev, struct tee_shm *shm) 32 35 { 33 - struct tee_device *teedev = shm->ctx->teedev; 34 - 35 - if (shm->flags & TEE_SHM_DMA_BUF) { 36 - mutex_lock(&teedev->mutex); 37 - idr_remove(&teedev->idr, shm->id); 38 - mutex_unlock(&teedev->mutex); 39 - } 40 - 41 36 if (shm->flags & TEE_SHM_POOL) { 42 37 struct tee_shm_pool_mgr *poolm; 43 38 ··· 55 66 56 67 tee_device_put(teedev); 57 68 } 58 - 59 - static struct sg_table *tee_shm_op_map_dma_buf(struct dma_buf_attachment 60 - *attach, enum dma_data_direction dir) 61 - { 62 - return NULL; 63 - } 64 - 65 - static void tee_shm_op_unmap_dma_buf(struct dma_buf_attachment *attach, 66 - struct sg_table *table, 67 - enum dma_data_direction dir) 68 - { 69 - } 70 - 71 - static void tee_shm_op_release(struct dma_buf *dmabuf) 72 - { 73 - struct tee_shm *shm = dmabuf->priv; 74 - 75 - tee_shm_release(shm); 76 - } 77 - 78 - static int tee_shm_op_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma) 79 - { 80 - struct tee_shm *shm = dmabuf->priv; 81 - size_t size = vma->vm_end - vma->vm_start; 82 - 83 - /* Refuse sharing shared memory provided by application */ 84 - if (shm->flags & TEE_SHM_USER_MAPPED) 85 - return -EINVAL; 86 - 87 - return remap_pfn_range(vma, vma->vm_start, 
shm->paddr >> PAGE_SHIFT, 88 - size, vma->vm_page_prot); 89 - } 90 - 91 - static const struct dma_buf_ops tee_shm_dma_buf_ops = { 92 - .map_dma_buf = tee_shm_op_map_dma_buf, 93 - .unmap_dma_buf = tee_shm_op_unmap_dma_buf, 94 - .release = tee_shm_op_release, 95 - .mmap = tee_shm_op_mmap, 96 - }; 97 69 98 70 struct tee_shm *tee_shm_alloc(struct tee_context *ctx, size_t size, u32 flags) 99 71 { ··· 90 140 goto err_dev_put; 91 141 } 92 142 143 + refcount_set(&shm->refcount, 1); 93 144 shm->flags = flags | TEE_SHM_POOL; 94 145 shm->ctx = ctx; 95 146 if (flags & TEE_SHM_DMA_BUF) ··· 104 153 goto err_kfree; 105 154 } 106 155 107 - 108 156 if (flags & TEE_SHM_DMA_BUF) { 109 - DEFINE_DMA_BUF_EXPORT_INFO(exp_info); 110 - 111 157 mutex_lock(&teedev->mutex); 112 158 shm->id = idr_alloc(&teedev->idr, shm, 1, 0, GFP_KERNEL); 113 159 mutex_unlock(&teedev->mutex); ··· 112 164 ret = ERR_PTR(shm->id); 113 165 goto err_pool_free; 114 166 } 115 - 116 - exp_info.ops = &tee_shm_dma_buf_ops; 117 - exp_info.size = shm->size; 118 - exp_info.flags = O_RDWR; 119 - exp_info.priv = shm; 120 - 121 - shm->dmabuf = dma_buf_export(&exp_info); 122 - if (IS_ERR(shm->dmabuf)) { 123 - ret = ERR_CAST(shm->dmabuf); 124 - goto err_rem; 125 - } 126 167 } 127 168 128 169 teedev_ctx_get(ctx); 129 170 130 171 return shm; 131 - err_rem: 132 - if (flags & TEE_SHM_DMA_BUF) { 133 - mutex_lock(&teedev->mutex); 134 - idr_remove(&teedev->idr, shm->id); 135 - mutex_unlock(&teedev->mutex); 136 - } 137 172 err_pool_free: 138 173 poolm->ops->free(poolm, shm); 139 174 err_kfree: ··· 177 246 goto err; 178 247 } 179 248 249 + refcount_set(&shm->refcount, 1); 180 250 shm->flags = flags | TEE_SHM_REGISTER; 181 251 shm->ctx = ctx; 182 252 shm->id = -1; ··· 238 306 goto err; 239 307 } 240 308 241 - if (flags & TEE_SHM_DMA_BUF) { 242 - DEFINE_DMA_BUF_EXPORT_INFO(exp_info); 243 - 244 - exp_info.ops = &tee_shm_dma_buf_ops; 245 - exp_info.size = shm->size; 246 - exp_info.flags = O_RDWR; 247 - exp_info.priv = shm; 248 - 249 - 
shm->dmabuf = dma_buf_export(&exp_info); 250 - if (IS_ERR(shm->dmabuf)) { 251 - ret = ERR_CAST(shm->dmabuf); 252 - teedev->desc->ops->shm_unregister(ctx, shm); 253 - goto err; 254 - } 255 - } 256 - 257 309 return shm; 258 310 err: 259 311 if (shm) { ··· 255 339 } 256 340 EXPORT_SYMBOL_GPL(tee_shm_register); 257 341 342 + static int tee_shm_fop_release(struct inode *inode, struct file *filp) 343 + { 344 + tee_shm_put(filp->private_data); 345 + return 0; 346 + } 347 + 348 + static int tee_shm_fop_mmap(struct file *filp, struct vm_area_struct *vma) 349 + { 350 + struct tee_shm *shm = filp->private_data; 351 + size_t size = vma->vm_end - vma->vm_start; 352 + 353 + /* Refuse sharing shared memory provided by application */ 354 + if (shm->flags & TEE_SHM_USER_MAPPED) 355 + return -EINVAL; 356 + 357 + /* check for overflowing the buffer's size */ 358 + if (vma->vm_pgoff + vma_pages(vma) > shm->size >> PAGE_SHIFT) 359 + return -EINVAL; 360 + 361 + return remap_pfn_range(vma, vma->vm_start, shm->paddr >> PAGE_SHIFT, 362 + size, vma->vm_page_prot); 363 + } 364 + 365 + static const struct file_operations tee_shm_fops = { 366 + .owner = THIS_MODULE, 367 + .release = tee_shm_fop_release, 368 + .mmap = tee_shm_fop_mmap, 369 + }; 370 + 258 371 /** 259 372 * tee_shm_get_fd() - Increase reference count and return file descriptor 260 373 * @shm: Shared memory handle ··· 296 351 if (!(shm->flags & TEE_SHM_DMA_BUF)) 297 352 return -EINVAL; 298 353 299 - get_dma_buf(shm->dmabuf); 300 - fd = dma_buf_fd(shm->dmabuf, O_CLOEXEC); 354 + /* matched by tee_shm_put() in tee_shm_op_release() */ 355 + refcount_inc(&shm->refcount); 356 + fd = anon_inode_getfd("tee_shm", &tee_shm_fops, shm, O_RDWR); 301 357 if (fd < 0) 302 - dma_buf_put(shm->dmabuf); 358 + tee_shm_put(shm); 303 359 return fd; 304 360 } 305 361 ··· 310 364 */ 311 365 void tee_shm_free(struct tee_shm *shm) 312 366 { 313 - /* 314 - * dma_buf_put() decreases the dmabuf reference counter and will 315 - * call tee_shm_release() when the 
last reference is gone. 316 - * 317 - * In the case of driver private memory we call tee_shm_release 318 - * directly instead as it doesn't have a reference counter. 319 - */ 320 - if (shm->flags & TEE_SHM_DMA_BUF) 321 - dma_buf_put(shm->dmabuf); 322 - else 323 - tee_shm_release(shm); 367 + tee_shm_put(shm); 324 368 } 325 369 EXPORT_SYMBOL_GPL(tee_shm_free); 326 370 ··· 417 481 teedev = ctx->teedev; 418 482 mutex_lock(&teedev->mutex); 419 483 shm = idr_find(&teedev->idr, id); 484 + /* 485 + * If the tee_shm was found in the IDR it must have a refcount 486 + * larger than 0 due to the guarantee in tee_shm_put() below. So 487 + * it's safe to use refcount_inc(). 488 + */ 420 489 if (!shm || shm->ctx != ctx) 421 490 shm = ERR_PTR(-EINVAL); 422 - else if (shm->flags & TEE_SHM_DMA_BUF) 423 - get_dma_buf(shm->dmabuf); 491 + else 492 + refcount_inc(&shm->refcount); 424 493 mutex_unlock(&teedev->mutex); 425 494 return shm; 426 495 } ··· 437 496 */ 438 497 void tee_shm_put(struct tee_shm *shm) 439 498 { 440 - if (shm->flags & TEE_SHM_DMA_BUF) 441 - dma_buf_put(shm->dmabuf); 499 + struct tee_device *teedev = shm->ctx->teedev; 500 + bool do_release = false; 501 + 502 + mutex_lock(&teedev->mutex); 503 + if (refcount_dec_and_test(&shm->refcount)) { 504 + /* 505 + * refcount has reached 0, we must now remove it from the 506 + * IDR before releasing the mutex. This will guarantee that 507 + * the refcount_inc() in tee_shm_get_from_id() never starts 508 + * from 0. 509 + */ 510 + if (shm->flags & TEE_SHM_DMA_BUF) 511 + idr_remove(&teedev->idr, shm->id); 512 + do_release = true; 513 + } 514 + mutex_unlock(&teedev->mutex); 515 + 516 + if (do_release) 517 + tee_shm_release(teedev, shm); 442 518 } 443 519 EXPORT_SYMBOL_GPL(tee_shm_put);
+27 -3
drivers/tty/hvc/hvc_xen.c
··· 37 37 struct xenbus_device *xbdev; 38 38 struct xencons_interface *intf; 39 39 unsigned int evtchn; 40 + XENCONS_RING_IDX out_cons; 41 + unsigned int out_cons_same; 40 42 struct hvc_struct *hvc; 41 43 int irq; 42 44 int vtermno; ··· 140 138 XENCONS_RING_IDX cons, prod; 141 139 int recv = 0; 142 140 struct xencons_info *xencons = vtermno_to_xencons(vtermno); 141 + unsigned int eoiflag = 0; 142 + 143 143 if (xencons == NULL) 144 144 return -EINVAL; 145 145 intf = xencons->intf; ··· 161 157 mb(); /* read ring before consuming */ 162 158 intf->in_cons = cons; 163 159 164 - notify_daemon(xencons); 160 + /* 161 + * When to mark interrupt having been spurious: 162 + * - there was no new data to be read, and 163 + * - the backend did not consume some output bytes, and 164 + * - the previous round with no read data didn't see consumed bytes 165 + * (we might have a race with an interrupt being in flight while 166 + * updating xencons->out_cons, so account for that by allowing one 167 + * round without any visible reason) 168 + */ 169 + if (intf->out_cons != xencons->out_cons) { 170 + xencons->out_cons = intf->out_cons; 171 + xencons->out_cons_same = 0; 172 + } 173 + if (recv) { 174 + notify_daemon(xencons); 175 + } else if (xencons->out_cons_same++ > 1) { 176 + eoiflag = XEN_EOI_FLAG_SPURIOUS; 177 + } 178 + 179 + xen_irq_lateeoi(xencons->irq, eoiflag); 180 + 165 181 return recv; 166 182 } 167 183 ··· 410 386 if (ret) 411 387 return ret; 412 388 info->evtchn = evtchn; 413 - irq = bind_evtchn_to_irq(evtchn); 389 + irq = bind_interdomain_evtchn_to_irq_lateeoi(dev, evtchn); 414 390 if (irq < 0) 415 391 return irq; 416 392 info->irq = irq; ··· 575 551 return r; 576 552 577 553 info = vtermno_to_xencons(HVC_COOKIE); 578 - info->irq = bind_evtchn_to_irq(info->evtchn); 554 + info->irq = bind_evtchn_to_irq_lateeoi(info->evtchn); 579 555 } 580 556 if (info->irq < 0) 581 557 info->irq = 0; /* NO_IRQ */
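The hvc_xen handler above flags an event as spurious only after more than one consecutive round in which nothing was read and the backend's output consumer index did not move, allowing one free round to cover an in-flight race. The heuristic as a pure function; the state struct and names are illustrative, not the driver's:

```c
#include <assert.h>

#define EOI_SPURIOUS 1	/* stands in for XEN_EOI_FLAG_SPURIOUS */

struct cons_state {
	unsigned int out_cons;		/* last seen backend consumer index */
	unsigned int out_cons_same;	/* idle rounds with no movement */
};

static unsigned int cons_eoi_flag(struct cons_state *st,
				  unsigned int ring_out_cons, int recv)
{
	unsigned int eoiflag = 0;

	if (ring_out_cons != st->out_cons) {
		st->out_cons = ring_out_cons;
		st->out_cons_same = 0;
	}
	if (!recv && st->out_cons_same++ > 1)
		eoiflag = EOI_SPURIOUS;
	return eoiflag;
}

static int heuristic_demo(void)
{
	struct cons_state st = { 0, 0 };

	if (cons_eoi_flag(&st, 0, 0) != 0)	/* 1st idle round: ok */
		return 0;
	if (cons_eoi_flag(&st, 0, 0) != 0)	/* 2nd idle round: ok */
		return 0;
	if (cons_eoi_flag(&st, 0, 0) != EOI_SPURIOUS)
		return 0;
	if (cons_eoi_flag(&st, 4, 0) != 0)	/* backend moved: reset */
		return 0;
	return 1;
}
```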
+22 -1
drivers/tty/n_hdlc.c
··· 140 140 struct n_hdlc_buf_list rx_buf_list; 141 141 struct n_hdlc_buf_list tx_free_buf_list; 142 142 struct n_hdlc_buf_list rx_free_buf_list; 143 + struct work_struct write_work; 144 + struct tty_struct *tty_for_write_work; 143 145 }; 144 146 145 147 /* ··· 156 154 /* Local functions */ 157 155 158 156 static struct n_hdlc *n_hdlc_alloc(void); 157 + static void n_hdlc_tty_write_work(struct work_struct *work); 159 158 160 159 /* max frame size for memory allocations */ 161 160 static int maxframe = 4096; ··· 213 210 wake_up_interruptible(&tty->read_wait); 214 211 wake_up_interruptible(&tty->write_wait); 215 212 213 + cancel_work_sync(&n_hdlc->write_work); 214 + 216 215 n_hdlc_free_buf_list(&n_hdlc->rx_free_buf_list); 217 216 n_hdlc_free_buf_list(&n_hdlc->tx_free_buf_list); 218 217 n_hdlc_free_buf_list(&n_hdlc->rx_buf_list); ··· 246 241 return -ENFILE; 247 242 } 248 243 244 + INIT_WORK(&n_hdlc->write_work, n_hdlc_tty_write_work); 245 + n_hdlc->tty_for_write_work = tty; 249 246 tty->disc_data = n_hdlc; 250 247 tty->receive_room = 65536; 251 248 ··· 342 335 } /* end of n_hdlc_send_frames() */ 343 336 344 337 /** 338 + * n_hdlc_tty_write_work - Asynchronous callback for transmit wakeup 339 + * @work: pointer to work_struct 340 + * 341 + * Called when low level device driver can accept more send data. 342 + */ 343 + static void n_hdlc_tty_write_work(struct work_struct *work) 344 + { 345 + struct n_hdlc *n_hdlc = container_of(work, struct n_hdlc, write_work); 346 + struct tty_struct *tty = n_hdlc->tty_for_write_work; 347 + 348 + n_hdlc_send_frames(n_hdlc, tty); 349 + } /* end of n_hdlc_tty_write_work() */ 350 + 351 + /** 345 352 * n_hdlc_tty_wakeup - Callback for transmit wakeup 346 353 * @tty: pointer to associated tty instance data 347 354 * ··· 365 344 { 366 345 struct n_hdlc *n_hdlc = tty->disc_data; 367 346 368 - n_hdlc_send_frames(n_hdlc, tty); 347 + schedule_work(&n_hdlc->write_work); 369 348 } /* end of n_hdlc_tty_wakeup() */ 370 349 371 350 /**
-20
drivers/tty/serial/8250/8250_fintek.c
··· 290 290 }
291 291 }
292 292 
293 - static void fintek_8250_goto_highspeed(struct uart_8250_port *uart,
294 - struct fintek_8250 *pdata)
295 - {
296 - sio_write_reg(pdata, LDN, pdata->index);
297 - 
298 - switch (pdata->pid) {
299 - case CHIP_ID_F81966:
300 - case CHIP_ID_F81866: /* set uart clock for high speed serial mode */
301 - sio_write_mask_reg(pdata, F81866_UART_CLK,
302 - F81866_UART_CLK_MASK,
303 - F81866_UART_CLK_14_769MHZ);
304 - 
305 - uart->port.uartclk = 921600 * 16;
306 - break;
307 - default: /* leave clock speed untouched */
308 - break;
309 - }
310 - }
311 - 
312 293 static void fintek_8250_set_termios(struct uart_port *port,
313 294 struct ktermios *termios,
314 295 struct ktermios *old)
··· 411 430 
412 431 fintek_8250_set_irq_mode(pdata, level_mode);
413 432 fintek_8250_set_max_fifo(pdata);
414 - fintek_8250_goto_highspeed(uart, pdata);
415 433 
416 434 fintek_8250_exit_key(addr[i]);
417 435 
+12
drivers/usb/cdns3/cdnsp-gadget.c
··· 1541 1541 {
1542 1542 struct cdnsp_device *pdev = gadget_to_cdnsp(gadget);
1543 1543 struct cdns *cdns = dev_get_drvdata(pdev->dev);
1544 + unsigned long flags;
1544 1545 
1545 1546 trace_cdnsp_pullup(is_on);
1547 + 
1548 + /*
1549 + * Disable events handling while controller is being
1550 + * enabled/disabled.
1551 + */
1552 + disable_irq(cdns->dev_irq);
1553 + spin_lock_irqsave(&pdev->lock, flags);
1546 1554 
1547 1555 if (!is_on) {
1548 1556 cdnsp_reset_device(pdev);
··· 1558 1550 } else {
1559 1551 cdns_set_vbus(cdns);
1560 1552 }
1553 + 
1554 + spin_unlock_irqrestore(&pdev->lock, flags);
1555 + enable_irq(cdns->dev_irq);
1556 + 
1561 1557 return 0;
1562 1558 }
1563 1559 
+10 -1
drivers/usb/cdns3/cdnsp-ring.c
··· 1029 1029 return;
1030 1030 }
1031 1031 
1032 + *status = 0;
1033 + 
1032 1034 cdnsp_finish_td(pdev, td, event, pep, status);
1033 1035 }
1034 1036 
··· 1525 1523 spin_lock_irqsave(&pdev->lock, flags);
1526 1524 
1527 1525 if (pdev->cdnsp_state & (CDNSP_STATE_HALTED | CDNSP_STATE_DYING)) {
1528 - cdnsp_died(pdev);
1526 + /*
1527 + * While removing or stopping driver there may still be deferred
1528 + * not handled interrupt which should not be treated as error.
1529 + * Driver should simply ignore it.
1530 + */
1531 + if (pdev->gadget_driver)
1532 + cdnsp_died(pdev);
1533 + 
1529 1534 spin_unlock_irqrestore(&pdev->lock, flags);
1530 1535 return IRQ_HANDLED;
1531 1536 }
+2 -2
drivers/usb/cdns3/cdnsp-trace.h
··· 57 57 __entry->first_prime_det = pep->stream_info.first_prime_det;
58 58 __entry->drbls_count = pep->stream_info.drbls_count;
59 59 ),
60 - TP_printk("%s: SID: %08x ep state: %x stream: enabled: %d num %d "
60 + TP_printk("%s: SID: %08x, ep state: %x, stream: enabled: %d num %d "
61 61 "tds %d, first prime: %d drbls %d",
62 - __get_str(name), __entry->state, __entry->stream_id,
62 + __get_str(name), __entry->stream_id, __entry->state,
63 63 __entry->enabled, __entry->num_streams, __entry->td_count,
64 64 __entry->first_prime_det, __entry->drbls_count)
65 65 );
+3
drivers/usb/core/quirks.c
··· 434 434 { USB_DEVICE(0x1532, 0x0116), .driver_info =
435 435 USB_QUIRK_LINEAR_UFRAME_INTR_BINTERVAL },
436 436 
437 + /* Lenovo USB-C to Ethernet Adapter RTL8153-04 */
438 + { USB_DEVICE(0x17ef, 0x720c), .driver_info = USB_QUIRK_NO_LPM },
439 + 
437 440 /* Lenovo Powered USB-C Travel Hub (4X90S92381, RTL8153 GigE) */
438 441 { USB_DEVICE(0x17ef, 0x721e), .driver_info = USB_QUIRK_NO_LPM },
439 442 
+3
drivers/usb/dwc2/platform.c
··· 575 575 ggpio |= GGPIO_STM32_OTG_GCCFG_IDEN;
576 576 ggpio |= GGPIO_STM32_OTG_GCCFG_VBDEN;
577 577 dwc2_writel(hsotg, ggpio, GGPIO);
578 + 
579 + /* ID/VBUS detection startup time */
580 + usleep_range(5000, 7000);
578 581 }
579 582 
580 583 retval = dwc2_drd_init(hsotg);
+11 -4
drivers/usb/early/xhci-dbc.c
··· 14 14 #include <linux/pci_ids.h>
15 15 #include <linux/memblock.h>
16 16 #include <linux/io.h>
17 - #include <linux/iopoll.h>
18 17 #include <asm/pci-direct.h>
19 18 #include <asm/fixmap.h>
20 19 #include <linux/bcd.h>
··· 135 136 {
136 137 u32 result;
137 138 
138 - return readl_poll_timeout_atomic(ptr, result,
139 - ((result & mask) == done),
140 - delay, wait);
139 + /* Can not use readl_poll_timeout_atomic() for early boot things */
140 + do {
141 + result = readl(ptr);
142 + result &= mask;
143 + if (result == done)
144 + return 0;
145 + udelay(delay);
146 + wait -= delay;
147 + } while (wait > 0);
148 + 
149 + return -ETIMEDOUT;
141 150 }
142 151 
143 152 static void __init xdbc_bios_handoff(void)
+3 -3
drivers/usb/gadget/composite.c
··· 1680 1680 u8 endp;
1681 1681 
1682 1682 if (w_length > USB_COMP_EP0_BUFSIZ) {
1683 - if (ctrl->bRequestType == USB_DIR_OUT) {
1684 - goto done;
1685 - } else {
1683 + if (ctrl->bRequestType & USB_DIR_IN) {
1686 1684 /* Cast away the const, we are going to overwrite on purpose. */
1687 1685 __le16 *temp = (__le16 *)&ctrl->wLength;
1688 1686 
1689 1687 *temp = cpu_to_le16(USB_COMP_EP0_BUFSIZ);
1690 1688 w_length = USB_COMP_EP0_BUFSIZ;
1689 + } else {
1690 + goto done;
1691 1691 }
1692 1692 }
1693 1693 
+6 -3
drivers/usb/gadget/function/f_fs.c
··· 1773 1773 
1774 1774 BUG_ON(ffs->gadget);
1775 1775 
1776 - if (ffs->epfiles)
1776 + if (ffs->epfiles) {
1777 1777 ffs_epfiles_destroy(ffs->epfiles, ffs->eps_count);
1778 + ffs->epfiles = NULL;
1779 + }
1778 1780 
1779 - if (ffs->ffs_eventfd)
1781 + if (ffs->ffs_eventfd) {
1780 1782 eventfd_ctx_put(ffs->ffs_eventfd);
1783 + ffs->ffs_eventfd = NULL;
1784 + }
1781 1785 
1782 1786 kfree(ffs->raw_descs_data);
1783 1787 kfree(ffs->raw_strings);
··· 1794 1790 
1795 1791 ffs_data_clear(ffs);
1796 1792 
1797 - ffs->epfiles = NULL;
1798 1793 ffs->raw_descs_data = NULL;
1799 1794 ffs->raw_descs = NULL;
1800 1795 ffs->raw_strings = NULL;
+6 -10
drivers/usb/gadget/function/u_ether.c
··· 17 17 #include <linux/etherdevice.h>
18 18 #include <linux/ethtool.h>
19 19 #include <linux/if_vlan.h>
20 + #include <linux/etherdevice.h>
20 21 
21 22 #include "u_ether.h"
22 23 
··· 864 863 {
865 864 struct eth_dev *dev;
866 865 struct usb_gadget *g;
867 - struct sockaddr sa;
868 866 int status;
869 867 
870 868 if (!net->dev.parent)
871 869 return -EINVAL;
872 870 dev = netdev_priv(net);
873 871 g = dev->gadget;
872 + 
873 + net->addr_assign_type = NET_ADDR_RANDOM;
874 + eth_hw_addr_set(net, dev->dev_mac);
875 + 
874 876 status = register_netdev(net);
875 877 if (status < 0) {
876 878 dev_dbg(&g->dev, "register_netdev failed, %d\n", status);
877 879 return status;
878 880 } else {
879 881 INFO(dev, "HOST MAC %pM\n", dev->host_mac);
882 + INFO(dev, "MAC %pM\n", dev->dev_mac);
880 883 
881 884 /* two kinds of host-initiated state changes:
882 885 * - iff DATA transfer is active, carrier is "on"
··· 888 883 */
889 884 netif_carrier_off(net);
890 885 }
891 - sa.sa_family = net->type;
892 - memcpy(sa.sa_data, dev->dev_mac, ETH_ALEN);
893 - rtnl_lock();
894 - status = dev_set_mac_address(net, &sa, NULL);
895 - rtnl_unlock();
896 - if (status)
897 - pr_warn("cannot set self ethernet address: %d\n", status);
898 - else
899 - INFO(dev, "MAC %pM\n", dev->dev_mac);
900 886 
901 887 return status;
902 888 }
+3 -3
drivers/usb/gadget/legacy/dbgp.c
··· 346 346 u16 len = 0;
347 347 
348 348 if (length > DBGP_REQ_LEN) {
349 - if (ctrl->bRequestType == USB_DIR_OUT) {
350 - return err;
351 - } else {
349 + if (ctrl->bRequestType & USB_DIR_IN) {
352 350 /* Cast away the const, we are going to overwrite on purpose. */
353 351 __le16 *temp = (__le16 *)&ctrl->wLength;
354 352 
355 353 *temp = cpu_to_le16(DBGP_REQ_LEN);
356 354 length = DBGP_REQ_LEN;
355 + } else {
356 + return err;
357 357 }
358 358 }
359 359 
+3 -3
drivers/usb/gadget/legacy/inode.c
··· 1334 1334 u16 w_length = le16_to_cpu(ctrl->wLength);
1335 1335 
1336 1336 if (w_length > RBUF_SIZE) {
1337 - if (ctrl->bRequestType == USB_DIR_OUT) {
1338 - return value;
1339 - } else {
1337 + if (ctrl->bRequestType & USB_DIR_IN) {
1340 1338 /* Cast away the const, we are going to overwrite on purpose. */
1341 1339 __le16 *temp = (__le16 *)&ctrl->wLength;
1342 1340 
1343 1341 *temp = cpu_to_le16(RBUF_SIZE);
1344 1342 w_length = RBUF_SIZE;
1343 + } else {
1344 + return value;
1345 1345 }
1346 1346 }
1347 1347 
+1 -1
drivers/usb/host/xhci-mtk-sch.c
··· 781 781 
782 782 ret = xhci_check_bandwidth(hcd, udev);
783 783 if (!ret)
784 - INIT_LIST_HEAD(&mtk->bw_ep_chk_list);
784 + list_del_init(&mtk->bw_ep_chk_list);
785 785 
786 786 return ret;
787 787 }
+9 -2
drivers/usb/host/xhci-pci.c
··· 71 71 #define PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_4 0x161e
72 72 #define PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_5 0x15d6
73 73 #define PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_6 0x15d7
74 + #define PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_7 0x161c
75 + #define PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_8 0x161f
74 76 
75 77 #define PCI_DEVICE_ID_ASMEDIA_1042_XHCI 0x1042
76 78 #define PCI_DEVICE_ID_ASMEDIA_1042A_XHCI 0x1142
··· 123 121 /* Look for vendor-specific quirks */
124 122 if (pdev->vendor == PCI_VENDOR_ID_FRESCO_LOGIC &&
125 123 (pdev->device == PCI_DEVICE_ID_FRESCO_LOGIC_PDK ||
126 - pdev->device == PCI_DEVICE_ID_FRESCO_LOGIC_FL1100 ||
127 124 pdev->device == PCI_DEVICE_ID_FRESCO_LOGIC_FL1400)) {
128 125 if (pdev->device == PCI_DEVICE_ID_FRESCO_LOGIC_PDK &&
129 126 pdev->revision == 0x0) {
··· 156 155 if (pdev->vendor == PCI_VENDOR_ID_FRESCO_LOGIC &&
157 156 pdev->device == PCI_DEVICE_ID_FRESCO_LOGIC_FL1009)
158 157 xhci->quirks |= XHCI_BROKEN_STREAMS;
158 + 
159 + if (pdev->vendor == PCI_VENDOR_ID_FRESCO_LOGIC &&
160 + pdev->device == PCI_DEVICE_ID_FRESCO_LOGIC_FL1100)
161 + xhci->quirks |= XHCI_TRUST_TX_LENGTH;
159 162 
160 163 if (pdev->vendor == PCI_VENDOR_ID_NEC)
161 164 xhci->quirks |= XHCI_NEC_HOST;
··· 335 330 pdev->device == PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_3 ||
336 331 pdev->device == PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_4 ||
337 332 pdev->device == PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_5 ||
338 - pdev->device == PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_6))
333 + pdev->device == PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_6 ||
334 + pdev->device == PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_7 ||
335 + pdev->device == PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_8))
339 336 xhci->quirks |= XHCI_DEFAULT_PM_RUNTIME_ALLOW;
340 337 
341 338 if (xhci->quirks & XHCI_RESET_ON_RESUME)
+10 -2
drivers/usb/mtu3/mtu3_gadget.c
··· 77 77 if (usb_endpoint_xfer_int(desc) ||
78 78 usb_endpoint_xfer_isoc(desc)) {
79 79 interval = desc->bInterval;
80 - interval = clamp_val(interval, 1, 16) - 1;
80 + interval = clamp_val(interval, 1, 16);
81 81 if (usb_endpoint_xfer_isoc(desc) && comp_desc)
82 82 mult = comp_desc->bmAttributes;
83 83 }
··· 89 89 if (usb_endpoint_xfer_isoc(desc) ||
90 90 usb_endpoint_xfer_int(desc)) {
91 91 interval = desc->bInterval;
92 - interval = clamp_val(interval, 1, 16) - 1;
92 + interval = clamp_val(interval, 1, 16);
93 93 mult = usb_endpoint_maxp_mult(desc) - 1;
94 94 }
95 + break;
96 + case USB_SPEED_FULL:
97 + if (usb_endpoint_xfer_isoc(desc))
98 + interval = clamp_val(desc->bInterval, 1, 16);
99 + else if (usb_endpoint_xfer_int(desc))
100 + interval = clamp_val(desc->bInterval, 1, 255);
101 + 
95 102 break;
96 103 default:
97 104 break; /*others are ignored */
··· 242 235 mreq->request.dma = DMA_ADDR_INVALID;
243 236 mreq->epnum = mep->epnum;
244 237 mreq->mep = mep;
238 + INIT_LIST_HEAD(&mreq->list);
245 239 trace_mtu3_alloc_request(mreq);
246 240 
247 241 return &mreq->request;
+6 -1
drivers/usb/mtu3/mtu3_qmu.c
··· 273 273 gpd->dw3_info |= cpu_to_le32(GPD_EXT_FLAG_ZLP);
274 274 }
275 275 
276 + /* prevent reorder, make sure GPD's HWO is set last */
277 + mb();
276 278 gpd->dw0_info |= cpu_to_le32(GPD_FLAGS_IOC | GPD_FLAGS_HWO);
277 279 
278 280 mreq->gpd = gpd;
··· 308 306 gpd->next_gpd = cpu_to_le32(lower_32_bits(enq_dma));
309 307 ext_addr |= GPD_EXT_NGP(mtu, upper_32_bits(enq_dma));
310 308 gpd->dw3_info = cpu_to_le32(ext_addr);
309 + /* prevent reorder, make sure GPD's HWO is set last */
310 + mb();
311 311 gpd->dw0_info |= cpu_to_le32(GPD_FLAGS_IOC | GPD_FLAGS_HWO);
312 312 
313 313 mreq->gpd = gpd;
··· 449 445 return;
450 446 }
451 447 mtu3_setbits(mbase, MU3D_EP_TXCR0(mep->epnum), TX_TXPKTRDY);
452 - 
448 + /* prevent reorder, make sure GPD's HWO is set last */
449 + mb();
453 450 /* by pass the current GDP */
454 451 gpd_current->dw0_info |= cpu_to_le32(GPD_FLAGS_BPS | GPD_FLAGS_HWO);
455 452 
+4 -2
drivers/usb/serial/cp210x.c
··· 1635 1635 
1636 1636 /* 2 banks of GPIO - One for the pins taken from each serial port */
1637 1637 if (intf_num == 0) {
1638 + priv->gc.ngpio = 2;
1639 + 
1638 1640 if (mode.eci == CP210X_PIN_MODE_MODEM) {
1639 1641 /* mark all GPIOs of this interface as reserved */
1640 1642 priv->gpio_altfunc = 0xff;
··· 1647 1645 priv->gpio_pushpull = (u8)((le16_to_cpu(config.gpio_mode) &
1648 1646 CP210X_ECI_GPIO_MODE_MASK) >>
1649 1647 CP210X_ECI_GPIO_MODE_OFFSET);
1650 - priv->gc.ngpio = 2;
1651 1648 } else if (intf_num == 1) {
1649 + priv->gc.ngpio = 3;
1650 + 
1652 1651 if (mode.sci == CP210X_PIN_MODE_MODEM) {
1653 1652 /* mark all GPIOs of this interface as reserved */
1654 1653 priv->gpio_altfunc = 0xff;
··· 1660 1657 priv->gpio_pushpull = (u8)((le16_to_cpu(config.gpio_mode) &
1661 1658 CP210X_SCI_GPIO_MODE_MASK) >>
1662 1659 CP210X_SCI_GPIO_MODE_OFFSET);
1663 - priv->gc.ngpio = 3;
1664 1660 } else {
1665 1661 return -ENODEV;
1666 1662 }
+8
drivers/usb/serial/option.c
··· 1219 1219 .driver_info = NCTRL(2) | RSVD(3) },
1220 1220 { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1063, 0xff), /* Telit LN920 (ECM) */
1221 1221 .driver_info = NCTRL(0) | RSVD(1) },
1222 + { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1070, 0xff), /* Telit FN990 (rmnet) */
1223 + .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) },
1224 + { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1071, 0xff), /* Telit FN990 (MBIM) */
1225 + .driver_info = NCTRL(0) | RSVD(1) },
1226 + { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1072, 0xff), /* Telit FN990 (RNDIS) */
1227 + .driver_info = NCTRL(2) | RSVD(3) },
1228 + { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1073, 0xff), /* Telit FN990 (ECM) */
1229 + .driver_info = NCTRL(0) | RSVD(1) },
1222 1230 { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910),
1223 1231 .driver_info = NCTRL(0) | RSVD(1) | RSVD(3) },
1224 1232 { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910_DUAL_MODEM),
+13 -5
drivers/usb/typec/tcpm/tcpm.c
··· 324 324 
325 325 bool attached;
326 326 bool connected;
327 + bool registered;
327 328 bool pd_supported;
328 329 enum typec_port_type port_type;
329 330 
··· 6292 6291 {
6293 6292 struct tcpm_port *port = container_of(timer, struct tcpm_port, state_machine_timer);
6294 6293 
6295 - kthread_queue_work(port->wq, &port->state_machine);
6294 + if (port->registered)
6295 + kthread_queue_work(port->wq, &port->state_machine);
6296 6296 return HRTIMER_NORESTART;
6297 6297 }
6298 6298 
··· 6301 6299 {
6302 6300 struct tcpm_port *port = container_of(timer, struct tcpm_port, vdm_state_machine_timer);
6303 6301 
6304 - kthread_queue_work(port->wq, &port->vdm_state_machine);
6302 + if (port->registered)
6303 + kthread_queue_work(port->wq, &port->vdm_state_machine);
6305 6304 return HRTIMER_NORESTART;
6306 6305 }
6307 6306 
··· 6310 6307 {
6311 6308 struct tcpm_port *port = container_of(timer, struct tcpm_port, enable_frs_timer);
6312 6309 
6313 - kthread_queue_work(port->wq, &port->enable_frs);
6310 + if (port->registered)
6311 + kthread_queue_work(port->wq, &port->enable_frs);
6314 6312 return HRTIMER_NORESTART;
6315 6313 }
6316 6314 
··· 6319 6315 {
6320 6316 struct tcpm_port *port = container_of(timer, struct tcpm_port, send_discover_timer);
6321 6317 
6322 - kthread_queue_work(port->wq, &port->send_discover_work);
6318 + if (port->registered)
6319 + kthread_queue_work(port->wq, &port->send_discover_work);
6323 6320 return HRTIMER_NORESTART;
6324 6321 }
6325 6322 
··· 6408 6403 typec_port_register_altmodes(port->typec_port,
6409 6404 &tcpm_altmode_ops, port,
6410 6405 port->port_altmode, ALTMODE_DISCOVERY_MAX);
6406 + port->registered = true;
6411 6407 
6412 6408 mutex_lock(&port->lock);
6413 6409 tcpm_init(port);
··· 6430 6424 {
6431 6425 int i;
6432 6426 
6427 + port->registered = false;
6428 + kthread_destroy_worker(port->wq);
6429 + 
6433 6430 hrtimer_cancel(&port->send_discover_timer);
6434 6431 hrtimer_cancel(&port->enable_frs_timer);
6435 6432 hrtimer_cancel(&port->vdm_state_machine_timer);
··· 6444 6435 typec_unregister_port(port->typec_port);
6445 6436 usb_role_switch_put(port->role_sw);
6446 6437 tcpm_debugfs_exit(port);
6447 - kthread_destroy_worker(port->wq);
6448 6438 }
6449 6439 EXPORT_SYMBOL_GPL(tcpm_unregister_port);
6450 6440 
+3 -1
drivers/usb/typec/ucsi/ucsi.c
··· 1150 1150 ret = 0;
1151 1151 }
1152 1152 
1153 - if (UCSI_CONSTAT_PWR_OPMODE(con->status.flags) == UCSI_CONSTAT_PWR_OPMODE_PD) {
1153 + if (con->partner &&
1154 + UCSI_CONSTAT_PWR_OPMODE(con->status.flags) ==
1155 + UCSI_CONSTAT_PWR_OPMODE_PD) {
1154 1156 ucsi_get_src_pdos(con);
1155 1157 ucsi_check_altmodes(con);
1156 1158 }
+2 -1
drivers/vdpa/vdpa.c
··· 404 404 goto msg_err;
405 405 
406 406 while (mdev->id_table[i].device) {
407 - supported_classes |= BIT(mdev->id_table[i].device);
407 + if (mdev->id_table[i].device <= 63)
408 + supported_classes |= BIT_ULL(mdev->id_table[i].device);
408 409 i++;
409 410 }
410 411 
+4 -2
drivers/vdpa/vdpa_user/vduse_dev.c
··· 655 655 {
656 656 struct vduse_dev *dev = vdpa_to_vduse(vdpa);
657 657 
658 - if (len > dev->config_size - offset)
658 + if (offset > dev->config_size ||
659 + len > dev->config_size - offset)
659 660 return;
660 661 
661 662 memcpy(buf, dev->config + offset, len);
··· 976 975 break;
977 976 
978 977 ret = -EINVAL;
979 - if (config.length == 0 ||
978 + if (config.offset > dev->config_size ||
979 + config.length == 0 ||
980 980 config.length > dev->config_size - config.offset)
981 981 break;
982 982 
+1 -1
drivers/vhost/vdpa.c
··· 197 197 struct vdpa_device *vdpa = v->vdpa;
198 198 long size = vdpa->config->get_config_size(vdpa);
199 199 
200 - if (c->len == 0)
200 + if (c->len == 0 || c->off > size)
201 201 return -EINVAL;
202 202 
203 203 if (c->len > size - c->off)
+3 -2
drivers/virt/nitro_enclaves/ne_misc_dev.c
··· 963 963 goto put_pages;
964 964 }
965 965 
966 - gup_rc = get_user_pages(mem_region.userspace_addr + memory_size, 1, FOLL_GET,
967 - ne_mem_region->pages + i, NULL);
966 + gup_rc = get_user_pages_unlocked(mem_region.userspace_addr + memory_size, 1,
967 + ne_mem_region->pages + i, FOLL_GET);
968 + 
968 969 if (gup_rc < 0) {
969 970 rc = gup_rc;
970 971 
+1 -1
drivers/virtio/virtio_ring.c
··· 268 268 size_t max_segment_size = SIZE_MAX;
269 269 
270 270 if (vring_use_dma_api(vdev))
271 - max_segment_size = dma_max_mapping_size(&vdev->dev);
271 + max_segment_size = dma_max_mapping_size(vdev->dev.parent);
272 272 
273 273 return max_segment_size;
274 274 }
+6
drivers/xen/events/events_base.c
··· 1251 1251 }
1252 1252 EXPORT_SYMBOL_GPL(bind_evtchn_to_irq);
1253 1253 
1254 + int bind_evtchn_to_irq_lateeoi(evtchn_port_t evtchn)
1255 + {
1256 + return bind_evtchn_to_irq_chip(evtchn, &xen_lateeoi_chip, NULL);
1257 + }
1258 + EXPORT_SYMBOL_GPL(bind_evtchn_to_irq_lateeoi);
1259 + 
1254 1260 static int bind_ipi_to_irq(unsigned int ipi, unsigned int cpu)
1255 1261 {
1256 1262 struct evtchn_bind_ipi bind_ipi;
+3 -2
fs/afs/file.c
··· 514 514 if (atomic_inc_return(&vnode->cb_nr_mmap) == 1) {
515 515 down_write(&vnode->volume->cell->fs_open_mmaps_lock);
516 516 
517 - list_add_tail(&vnode->cb_mmap_link,
518 - &vnode->volume->cell->fs_open_mmaps);
517 + if (list_empty(&vnode->cb_mmap_link))
518 + list_add_tail(&vnode->cb_mmap_link,
519 + &vnode->volume->cell->fs_open_mmaps);
519 520 
520 521 up_write(&vnode->volume->cell->fs_open_mmaps_lock);
521 522 }
+1
fs/afs/super.c
··· 667 667 INIT_LIST_HEAD(&vnode->pending_locks);
668 668 INIT_LIST_HEAD(&vnode->granted_locks);
669 669 INIT_DELAYED_WORK(&vnode->lock_work, afs_lock_work);
670 + INIT_LIST_HEAD(&vnode->cb_mmap_link);
670 671 seqlock_init(&vnode->cb_lock);
671 672 }
672 673 
+9 -8
fs/btrfs/ctree.c
··· 463 463 BUG_ON(ret < 0);
464 464 rcu_assign_pointer(root->node, cow);
465 465 
466 - btrfs_free_tree_block(trans, root, buf, parent_start,
467 - last_ref);
466 + btrfs_free_tree_block(trans, btrfs_root_id(root), buf,
467 + parent_start, last_ref);
468 468 free_extent_buffer(buf);
469 469 add_root_to_dirty_list(root);
470 470 } else {
··· 485 485 return ret;
486 486 }
487 487 }
488 - btrfs_free_tree_block(trans, root, buf, parent_start,
489 - last_ref);
488 + btrfs_free_tree_block(trans, btrfs_root_id(root), buf,
489 + parent_start, last_ref);
490 490 }
491 491 if (unlock_orig)
492 492 btrfs_tree_unlock(buf);
··· 927 927 free_extent_buffer(mid);
928 928 
929 929 root_sub_used(root, mid->len);
930 - btrfs_free_tree_block(trans, root, mid, 0, 1);
930 + btrfs_free_tree_block(trans, btrfs_root_id(root), mid, 0, 1);
931 931 /* once for the root ptr */
932 932 free_extent_buffer_stale(mid);
933 933 return 0;
··· 986 986 btrfs_tree_unlock(right);
987 987 del_ptr(root, path, level + 1, pslot + 1);
988 988 root_sub_used(root, right->len);
989 - btrfs_free_tree_block(trans, root, right, 0, 1);
989 + btrfs_free_tree_block(trans, btrfs_root_id(root), right,
990 + 0, 1);
990 991 free_extent_buffer_stale(right);
991 992 right = NULL;
992 993 } else {
··· 1032 1031 btrfs_tree_unlock(mid);
1033 1032 del_ptr(root, path, level + 1, pslot);
1034 1033 root_sub_used(root, mid->len);
1035 - btrfs_free_tree_block(trans, root, mid, 0, 1);
1034 + btrfs_free_tree_block(trans, btrfs_root_id(root), mid, 0, 1);
1036 1035 free_extent_buffer_stale(mid);
1037 1036 mid = NULL;
1038 1037 } else {
··· 4033 4032 root_sub_used(root, leaf->len);
4034 4033 
4035 4034 atomic_inc(&leaf->refs);
4036 - btrfs_free_tree_block(trans, root, leaf, 0, 1);
4035 + btrfs_free_tree_block(trans, btrfs_root_id(root), leaf, 0, 1);
4037 4036 free_extent_buffer_stale(leaf);
4038 4037 }
4039 4038 /*
+6 -1
fs/btrfs/ctree.h
··· 2257 2257 return (root->root_item.flags & cpu_to_le64(BTRFS_ROOT_SUBVOL_DEAD)) != 0;
2258 2258 }
2259 2259 
2260 + static inline u64 btrfs_root_id(const struct btrfs_root *root)
2261 + {
2262 + return root->root_key.objectid;
2263 + }
2264 + 
2260 2265 /* struct btrfs_root_backup */
2261 2266 BTRFS_SETGET_STACK_FUNCS(backup_tree_root, struct btrfs_root_backup,
2262 2267 tree_root, 64);
··· 2724 2719 u64 empty_size,
2725 2720 enum btrfs_lock_nesting nest);
2726 2721 void btrfs_free_tree_block(struct btrfs_trans_handle *trans,
2727 - struct btrfs_root *root,
2722 + u64 root_id,
2728 2723 struct extent_buffer *buf,
2729 2724 u64 parent, int last_ref);
2730 2725 int btrfs_alloc_reserved_file_extent(struct btrfs_trans_handle *trans,
+8
fs/btrfs/disk-io.c
··· 1732 1732 }
1733 1733 return root;
1734 1734 fail:
1735 + /*
1736 + * If our caller provided us an anonymous device, then it's his
1737 + * responsability to free it in case we fail. So we have to set our
1738 + * root's anon_dev to 0 to avoid a double free, once by btrfs_put_root()
1739 + * and once again by our caller.
1740 + */
1741 + if (anon_dev)
1742 + root->anon_dev = 0;
1735 1743 btrfs_put_root(root);
1736 1744 return ERR_PTR(ret);
1737 1745 }
+7 -6
fs/btrfs/extent-tree.c
··· 3275 3275 }
3276 3276 
3277 3277 void btrfs_free_tree_block(struct btrfs_trans_handle *trans,
3278 - struct btrfs_root *root,
3278 + u64 root_id,
3279 3279 struct extent_buffer *buf,
3280 3280 u64 parent, int last_ref)
3281 3281 {
3282 - struct btrfs_fs_info *fs_info = root->fs_info;
3282 + struct btrfs_fs_info *fs_info = trans->fs_info;
3283 3283 struct btrfs_ref generic_ref = { 0 };
3284 3284 int ret;
3285 3285 
3286 3286 btrfs_init_generic_ref(&generic_ref, BTRFS_DROP_DELAYED_REF,
3287 3287 buf->start, buf->len, parent);
3288 3288 btrfs_init_tree_ref(&generic_ref, btrfs_header_level(buf),
3289 - root->root_key.objectid, 0, false);
3289 + root_id, 0, false);
3290 3290 
3291 - if (root->root_key.objectid != BTRFS_TREE_LOG_OBJECTID) {
3291 + if (root_id != BTRFS_TREE_LOG_OBJECTID) {
3292 3292 btrfs_ref_tree_mod(fs_info, &generic_ref);
3293 3293 ret = btrfs_add_delayed_tree_ref(trans, &generic_ref, NULL);
3294 3294 BUG_ON(ret); /* -ENOMEM */
··· 3298 3298 struct btrfs_block_group *cache;
3299 3299 bool must_pin = false;
3300 3300 
3301 - if (root->root_key.objectid != BTRFS_TREE_LOG_OBJECTID) {
3301 + if (root_id != BTRFS_TREE_LOG_OBJECTID) {
3302 3302 ret = check_ref_cleanup(trans, buf->start);
3303 3303 if (!ret) {
3304 3304 btrfs_redirty_list_add(trans->transaction, buf);
··· 5472 5472 goto owner_mismatch;
5473 5473 }
5474 5474 
5475 - btrfs_free_tree_block(trans, root, eb, parent, wc->refs[level] == 1);
5475 + btrfs_free_tree_block(trans, btrfs_root_id(root), eb, parent,
5476 + wc->refs[level] == 1);
5476 5477 out:
5477 5478 wc->refs[level] = 0;
5478 5479 wc->flags[level] = 0;
+8
fs/btrfs/extent_io.c
··· 6611 6611 if (test_bit(EXTENT_BUFFER_UPTODATE, &eb->bflags))
6612 6612 return 0;
6613 6613 
6614 + /*
6615 + * We could have had EXTENT_BUFFER_UPTODATE cleared by the write
6616 + * operation, which could potentially still be in flight. In this case
6617 + * we simply want to return an error.
6618 + */
6619 + if (unlikely(test_bit(EXTENT_BUFFER_WRITE_ERR, &eb->bflags)))
6620 + return -EIO;
6621 + 
6614 6622 if (eb->fs_info->sectorsize < PAGE_SIZE)
6615 6623 return read_extent_buffer_subpage(eb, wait, mirror_num);
6616 6624 
+2 -2
fs/btrfs/free-space-tree.c
··· 1256 1256 btrfs_tree_lock(free_space_root->node);
1257 1257 btrfs_clean_tree_block(free_space_root->node);
1258 1258 btrfs_tree_unlock(free_space_root->node);
1259 - btrfs_free_tree_block(trans, free_space_root, free_space_root->node,
1260 - 0, 1);
1259 + btrfs_free_tree_block(trans, btrfs_root_id(free_space_root),
1260 + free_space_root->node, 0, 1);
1261 1261 
1262 1262 btrfs_put_root(free_space_root);
1263 1263 
+6 -4
fs/btrfs/ioctl.c
··· 617 617 * Since we don't abort the transaction in this case, free the
618 618 * tree block so that we don't leak space and leave the
619 619 * filesystem in an inconsistent state (an extent item in the
620 - * extent tree without backreferences). Also no need to have
621 - * the tree block locked since it is not in any tree at this
622 - * point, so no other task can find it and use it.
620 + * extent tree with a backreference for a root that does not
621 + * exists).
623 622 */
624 - btrfs_free_tree_block(trans, root, leaf, 0, 1);
623 + btrfs_tree_lock(leaf);
624 + btrfs_clean_tree_block(leaf);
625 + btrfs_tree_unlock(leaf);
626 + btrfs_free_tree_block(trans, objectid, leaf, 0, 1);
625 627 free_extent_buffer(leaf);
626 628 goto fail;
627 629 }
+2 -1
fs/btrfs/qgroup.c
··· 1219 1219 btrfs_tree_lock(quota_root->node);
1220 1220 btrfs_clean_tree_block(quota_root->node);
1221 1221 btrfs_tree_unlock(quota_root->node);
1222 - btrfs_free_tree_block(trans, quota_root, quota_root->node, 0, 1);
1222 + btrfs_free_tree_block(trans, btrfs_root_id(quota_root),
1223 + quota_root->node, 0, 1);
1223 1224 
1224 1225 btrfs_put_root(quota_root);
1225 1226 
+2
fs/btrfs/tree-log.c
··· 1181 1181 parent_objectid, victim_name,
1182 1182 victim_name_len);
1183 1183 if (ret < 0) {
1184 + kfree(victim_name);
1184 1185 return ret;
1185 1186 } else if (!ret) {
1186 1187 ret = -ENOENT;
··· 3978 3977 goto done;
3979 3978 }
3980 3979 if (btrfs_header_generation(path->nodes[0]) != trans->transid) {
3980 + ctx->last_dir_item_offset = min_key.offset;
3981 3981 ret = overwrite_item(trans, log, dst_path,
3982 3982 path->nodes[0], path->slots[0],
3983 3983 &min_key);
+4 -2
fs/btrfs/volumes.c
··· 1370 1370 
1371 1371 bytenr_orig = btrfs_sb_offset(0);
1372 1372 ret = btrfs_sb_log_location_bdev(bdev, 0, READ, &bytenr);
1373 - if (ret)
1374 - return ERR_PTR(ret);
1373 + if (ret) {
1374 + device = ERR_PTR(ret);
1375 + goto error_bdev_put;
1376 + }
1375 1377 
1376 1378 disk_super = btrfs_read_disk_super(bdev, bytenr, bytenr_orig);
1377 1379 if (IS_ERR(disk_super)) {
+8 -8
fs/ceph/caps.c
··· 4350 4350 {
4351 4351 struct ceph_mds_client *mdsc = ceph_sb_to_mdsc(ci->vfs_inode.i_sb);
4352 4352 int bits = (fmode << 1) | 1;
4353 - bool is_opened = false;
4353 + bool already_opened = false;
4354 4354 int i;
4355 4355 
4356 4356 if (count == 1)
··· 4358 4358 
4359 4359 spin_lock(&ci->i_ceph_lock);
4360 4360 for (i = 0; i < CEPH_FILE_MODE_BITS; i++) {
4361 - if (bits & (1 << i))
4362 - ci->i_nr_by_mode[i] += count;
4363 - 
4364 4361 /*
4365 - * If any of the mode ref is larger than 1,
4362 + * If any of the mode ref is larger than 0,
4366 4363 * that means it has been already opened by
4367 4364 * others. Just skip checking the PIN ref.
4368 4365 */
4369 - if (i && ci->i_nr_by_mode[i] > 1)
4370 - is_opened = true;
4366 + if (i && ci->i_nr_by_mode[i])
4367 + already_opened = true;
4368 + 
4369 + if (bits & (1 << i))
4370 + ci->i_nr_by_mode[i] += count;
4371 4371 }
4372 4372 
4373 - if (!is_opened)
4373 + if (!already_opened)
4374 4374 percpu_counter_inc(&mdsc->metric.opened_inodes);
4375 4375 spin_unlock(&ci->i_ceph_lock);
4376 4376 }
+16 -4
fs/ceph/file.c
··· 605 605 in.cap.realm = cpu_to_le64(ci->i_snap_realm->ino);
606 606 in.cap.flags = CEPH_CAP_FLAG_AUTH;
607 607 in.ctime = in.mtime = in.atime = iinfo.btime;
608 - in.mode = cpu_to_le32((u32)mode);
609 608 in.truncate_seq = cpu_to_le32(1);
610 609 in.truncate_size = cpu_to_le64(-1ULL);
611 610 in.xattr_version = cpu_to_le64(1);
612 611 in.uid = cpu_to_le32(from_kuid(&init_user_ns, current_fsuid()));
613 - in.gid = cpu_to_le32(from_kgid(&init_user_ns, dir->i_mode & S_ISGID ?
614 - dir->i_gid : current_fsgid()));
612 + if (dir->i_mode & S_ISGID) {
613 + in.gid = cpu_to_le32(from_kgid(&init_user_ns, dir->i_gid));
614 + 
615 + /* Directories always inherit the setgid bit. */
616 + if (S_ISDIR(mode))
617 + mode |= S_ISGID;
618 + else if ((mode & (S_ISGID | S_IXGRP)) == (S_ISGID | S_IXGRP) &&
619 + !in_group_p(dir->i_gid) &&
620 + !capable_wrt_inode_uidgid(&init_user_ns, dir, CAP_FSETID))
621 + mode &= ~S_ISGID;
622 + } else {
623 + in.gid = cpu_to_le32(from_kgid(&init_user_ns, current_fsgid()));
624 + }
625 + in.mode = cpu_to_le32((u32)mode);
626 + 
615 627 in.nlink = cpu_to_le32(1);
616 628 in.max_size = cpu_to_le64(lo->stripe_unit);
617 629 
··· 859 847 ssize_t ret;
860 848 u64 off = iocb->ki_pos;
861 849 u64 len = iov_iter_count(to);
862 - u64 i_size;
850 + u64 i_size = i_size_read(inode);
863 851 
864 852 dout("sync_read on file %p %llu~%u %s\n", file, off, (unsigned)len,
865 853 (file->f_flags & O_DIRECT) ? "O_DIRECT" : "");
+1 -2
fs/ceph/mds_client.c
··· 3683 3683 struct ceph_pagelist *pagelist = recon_state->pagelist;
3684 3684 struct dentry *dentry;
3685 3685 char *path;
3686 - int pathlen, err;
3686 + int pathlen = 0, err;
3687 3687 u64 pathbase;
3688 3688 u64 snap_follows;
3689 3689 
··· 3703 3703 }
3704 3704 } else {
3705 3705 path = NULL;
3706 - pathlen = 0;
3707 3706 pathbase = 0;
3708 3707 }
3709 3708 
+7
fs/cifs/connect.c
··· 3064 3064 (cifs_sb->ctx->rsize > server->ops->negotiate_rsize(tcon, ctx)))
3065 3065 cifs_sb->ctx->rsize = server->ops->negotiate_rsize(tcon, ctx);
3066 3066 
3067 + /*
3068 + * The cookie is initialized from volume info returned above.
3069 + * Inside cifs_fscache_get_super_cookie it checks
3070 + * that we do not get super cookie twice.
3071 + */
3072 + cifs_fscache_get_super_cookie(tcon);
3073 + 
3067 3074 out:
3068 3075 mnt_ctx->server = server;
3069 3076 mnt_ctx->ses = ses;
+37 -1
fs/cifs/fs_context.c
··· 435 435 }
436 436 
437 437 /*
438 + * Remove duplicate path delimiters. Windows is supposed to do that
439 + * but there are some bugs that prevent rename from working if there are
440 + * multiple delimiters.
441 + *
442 + * Returns a sanitized duplicate of @path. The caller is responsible for
443 + * cleaning up the original.
444 + */
445 + #define IS_DELIM(c) ((c) == '/' || (c) == '\\')
446 + static char *sanitize_path(char *path)
447 + {
448 + char *cursor1 = path, *cursor2 = path;
449 + 
450 + /* skip all prepended delimiters */
451 + while (IS_DELIM(*cursor1))
452 + cursor1++;
453 + 
454 + /* copy the first letter */
455 + *cursor2 = *cursor1;
456 + 
457 + /* copy the remainder... */
458 + while (*(cursor1++)) {
459 + /* ... skipping all duplicated delimiters */
460 + if (IS_DELIM(*cursor1) && IS_DELIM(*cursor2))
461 + continue;
462 + *(++cursor2) = *cursor1;
463 + }
464 + 
465 + /* if the last character is a delimiter, skip it */
466 + if (IS_DELIM(*(cursor2 - 1)))
467 + cursor2--;
468 + 
469 + *(cursor2) = '\0';
470 + return kstrdup(path, GFP_KERNEL);
471 + }
472 + 
473 + /*
438 474 * Parse a devname into substrings and populate the ctx->UNC and ctx->prepath
439 475 * fields with the result. Returns 0 on success and an error otherwise
440 476 * (e.g. ENOMEM or EINVAL)
··· 529 493 if (!*pos)
530 494 return 0;
531 495 
532 - ctx->prepath = kstrdup(pos, GFP_KERNEL);
496 + ctx->prepath = sanitize_path(pos);
533 497 if (!ctx->prepath)
534 498 return -ENOMEM;
535 499 
-13
fs/cifs/inode.c
··· 1356 1356 goto out; 1357 1357 } 1358 1358 1359 - #ifdef CONFIG_CIFS_FSCACHE 1360 - /* populate tcon->resource_id */ 1361 - tcon->resource_id = CIFS_I(inode)->uniqueid; 1362 - #endif 1363 - 1364 1359 if (rc && tcon->pipe) { 1365 1360 cifs_dbg(FYI, "ipc connection - fake read inode\n"); 1366 1361 spin_lock(&inode->i_lock); ··· 1370 1375 iget_failed(inode); 1371 1376 inode = ERR_PTR(rc); 1372 1377 } 1373 - 1374 - /* 1375 - * The cookie is initialized from volume info returned above. 1376 - * Inside cifs_fscache_get_super_cookie it checks 1377 - * that we do not get super cookie twice. 1378 - */ 1379 - cifs_fscache_get_super_cookie(tcon); 1380 - 1381 1378 out: 1382 1379 kfree(path); 1383 1380 free_xid(xid);
+56 -16
fs/file.c
··· 841 841 spin_unlock(&files->file_lock); 842 842 } 843 843 844 + static inline struct file *__fget_files_rcu(struct files_struct *files, 845 + unsigned int fd, fmode_t mask, unsigned int refs) 846 + { 847 + for (;;) { 848 + struct file *file; 849 + struct fdtable *fdt = rcu_dereference_raw(files->fdt); 850 + struct file __rcu **fdentry; 851 + 852 + if (unlikely(fd >= fdt->max_fds)) 853 + return NULL; 854 + 855 + fdentry = fdt->fd + array_index_nospec(fd, fdt->max_fds); 856 + file = rcu_dereference_raw(*fdentry); 857 + if (unlikely(!file)) 858 + return NULL; 859 + 860 + if (unlikely(file->f_mode & mask)) 861 + return NULL; 862 + 863 + /* 864 + * Ok, we have a file pointer. However, because we do 865 + * this all locklessly under RCU, we may be racing with 866 + * that file being closed. 867 + * 868 + * Such a race can take two forms: 869 + * 870 + * (a) the file ref already went down to zero, 871 + * and get_file_rcu_many() fails. Just try 872 + * again: 873 + */ 874 + if (unlikely(!get_file_rcu_many(file, refs))) 875 + continue; 876 + 877 + /* 878 + * (b) the file table entry has changed under us. 879 + * Note that we don't need to re-check the 'fdt->fd' 880 + * pointer having changed, because it always goes 881 + * hand-in-hand with 'fdt'. 882 + * 883 + * If so, we need to put our refs and try again. 884 + */ 885 + if (unlikely(rcu_dereference_raw(files->fdt) != fdt) || 886 + unlikely(rcu_dereference_raw(*fdentry) != file)) { 887 + fput_many(file, refs); 888 + continue; 889 + } 890 + 891 + /* 892 + * Ok, we have a ref to the file, and checked that it 893 + * still exists. 894 + */ 895 + return file; 896 + } 897 + } 898 + 844 899 static struct file *__fget_files(struct files_struct *files, unsigned int fd, 845 900 fmode_t mask, unsigned int refs) 846 901 { 847 902 struct file *file; 848 903 849 904 rcu_read_lock(); 850 - loop: 851 - file = files_lookup_fd_rcu(files, fd); 852 - if (file) { 853 - /* File object ref couldn't be taken. 854 - * dup2() atomicity guarantee is the reason 855 - * we loop to catch the new file (or NULL pointer) 856 - */ 857 - if (file->f_mode & mask) 858 - file = NULL; 859 - else if (!get_file_rcu_many(file, refs)) 860 - goto loop; 861 - else if (files_lookup_fd_raw(files, fd) != file) { 862 - fput_many(file, refs); 863 - goto loop; 864 - } 865 - } 905 + file = __fget_files_rcu(files, fd, mask, refs); 866 906 rcu_read_unlock(); 867 907 868 908 return file;
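The __fget_files_rcu() rewrite above is an instance of the classic lockless-lookup pattern: load the pointer, try to take a reference, then re-check that the slot still points at the same object, retrying on either failure. A single-threaded model of the loop's shape (demo types and refcount; no real RCU, and the two retry paths are shown for structure only since nothing races here):

```c
#include <stdbool.h>
#include <stddef.h>

struct file_demo {
	int f_count;	/* 0 means the file is being torn down */
};

/* Model of get_file_rcu_many(): fails if the count already hit zero. */
static bool get_ref(struct file_demo *f)
{
	if (f->f_count == 0)
		return false;
	f->f_count++;
	return true;
}

/* Model of the lookup/ref/re-check loop in __fget_files_rcu().
 * @table stands in for fdt->fd; a concurrent close would make
 * get_ref() fail and/or swap the slot, forcing a retry. */
static struct file_demo *fget_demo(struct file_demo **table, unsigned int fd,
				   unsigned int max_fds)
{
	for (;;) {
		struct file_demo *file;

		if (fd >= max_fds)
			return NULL;
		file = table[fd];
		if (!file)
			return NULL;
		if (!get_ref(file))
			continue;		/* race (a): ref already gone */
		if (table[fd] != file) {	/* race (b): slot changed */
			file->f_count--;	/* drop our ref and retry */
			continue;
		}
		return file;			/* ref held, entry re-verified */
	}
}
```

Pulling the loop into its own function also lets the compiler keep the hot path straight-line, which is part of why the patch replaces the goto-based version.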
+2
fs/io-wq.c
··· 395 395 if (atomic_dec_and_test(&acct->nr_running) && io_acct_run_queue(acct)) { 396 396 atomic_inc(&acct->nr_running); 397 397 atomic_inc(&wqe->wq->worker_refs); 398 + raw_spin_unlock(&wqe->lock); 398 399 io_queue_worker_create(worker, acct, create_worker_cb); 400 + raw_spin_lock(&wqe->lock); 399 401 } 400 402 } 401 403
+7 -3
fs/io_uring.c
··· 2891 2891 req->flags |= io_file_get_flags(file) << REQ_F_SUPPORT_NOWAIT_BIT; 2892 2892 2893 2893 kiocb->ki_pos = READ_ONCE(sqe->off); 2894 - if (kiocb->ki_pos == -1 && !(file->f_mode & FMODE_STREAM)) { 2895 - req->flags |= REQ_F_CUR_POS; 2896 - kiocb->ki_pos = file->f_pos; 2894 + if (kiocb->ki_pos == -1) { 2895 + if (!(file->f_mode & FMODE_STREAM)) { 2896 + req->flags |= REQ_F_CUR_POS; 2897 + kiocb->ki_pos = file->f_pos; 2898 + } else { 2899 + kiocb->ki_pos = 0; 2900 + } 2897 2901 } 2898 2902 kiocb->ki_flags = iocb_flags(file); 2899 2903 ret = kiocb_set_rw_flags(kiocb, READ_ONCE(sqe->rw_flags));
+1 -1
fs/ksmbd/ndr.c
··· 148 148 static int ndr_read_int32(struct ndr *n, __u32 *value) 149 149 { 150 150 if (n->offset + sizeof(__u32) > n->length) 151 - return 0; 151 + return -EINVAL; 152 152 153 153 if (value) 154 154 *value = le32_to_cpu(*(__le32 *)ndr_get_field(n));
-3
fs/ksmbd/smb2ops.c
··· 271 271 if (server_conf.flags & KSMBD_GLOBAL_FLAG_SMB2_LEASES) 272 272 conn->vals->capabilities |= SMB2_GLOBAL_CAP_LEASING; 273 273 274 - if (conn->cipher_type) 275 - conn->vals->capabilities |= SMB2_GLOBAL_CAP_ENCRYPTION; 276 - 277 274 if (server_conf.flags & KSMBD_GLOBAL_FLAG_SMB3_MULTICHANNEL) 278 275 conn->vals->capabilities |= SMB2_GLOBAL_CAP_MULTI_CHANNEL; 279 276
+25 -4
fs/ksmbd/smb2pdu.c
··· 915 915 } 916 916 } 917 917 918 + /** 919 + * smb3_encryption_negotiated() - checks if server and client agreed on enabling encryption 920 + * @conn: smb connection 921 + * 922 + * Return: true if connection should be encrypted, else false 923 + */ 924 + static bool smb3_encryption_negotiated(struct ksmbd_conn *conn) 925 + { 926 + if (!conn->ops->generate_encryptionkey) 927 + return false; 928 + 929 + /* 930 + * SMB 3.0 and 3.0.2 dialects use the SMB2_GLOBAL_CAP_ENCRYPTION flag. 931 + * SMB 3.1.1 uses the cipher_type field. 932 + */ 933 + return (conn->vals->capabilities & SMB2_GLOBAL_CAP_ENCRYPTION) || 934 + conn->cipher_type; 935 + } 936 + 918 937 static void decode_compress_ctxt(struct ksmbd_conn *conn, 919 938 struct smb2_compression_capabilities_context *pneg_ctxt) 920 939 { ··· 1488 1469 (req->SecurityMode & SMB2_NEGOTIATE_SIGNING_REQUIRED)) 1489 1470 sess->sign = true; 1490 1471 1491 - if (conn->vals->capabilities & SMB2_GLOBAL_CAP_ENCRYPTION && 1492 - conn->ops->generate_encryptionkey && 1472 + if (smb3_encryption_negotiated(conn) && 1493 1473 !(req->Flags & SMB2_SESSION_REQ_FLAG_BINDING)) { 1494 1474 rc = conn->ops->generate_encryptionkey(sess); 1495 1475 if (rc) { ··· 1577 1559 (req->SecurityMode & SMB2_NEGOTIATE_SIGNING_REQUIRED)) 1578 1560 sess->sign = true; 1579 1561 1580 - if ((conn->vals->capabilities & SMB2_GLOBAL_CAP_ENCRYPTION) && 1581 - conn->ops->generate_encryptionkey) { 1562 + if (smb3_encryption_negotiated(conn)) { 1582 1563 retval = conn->ops->generate_encryptionkey(sess); 1583 1564 if (retval) { 1584 1565 ksmbd_debug(SMB, ··· 2979 2962 &pntsd_size, &fattr); 2980 2963 posix_acl_release(fattr.cf_acls); 2981 2964 posix_acl_release(fattr.cf_dacls); 2965 + if (rc) { 2966 + kfree(pntsd); 2967 + goto err_out; 2968 + } 2982 2969 2983 2970 rc = ksmbd_vfs_set_sd_xattr(conn, 2984 2971 user_ns,
+4 -5
fs/namespace.c
··· 4263 4263 return err; 4264 4264 4265 4265 err = user_path_at(dfd, path, kattr.lookup_flags, &target); 4266 - if (err) 4267 - return err; 4268 - 4269 - err = do_mount_setattr(&target, &kattr); 4266 + if (!err) { 4267 + err = do_mount_setattr(&target, &kattr); 4268 + path_put(&target); 4269 + } 4270 4270 finish_mount_kattr(&kattr); 4271 - path_put(&target); 4272 4271 return err; 4273 4272 } 4274 4273
+4 -7
fs/nfsd/nfs3proc.c
··· 438 438 439 439 static void nfsd3_init_dirlist_pages(struct svc_rqst *rqstp, 440 440 struct nfsd3_readdirres *resp, 441 - int count) 441 + u32 count) 442 442 { 443 443 struct xdr_buf *buf = &resp->dirlist; 444 444 struct xdr_stream *xdr = &resp->xdr; 445 445 446 - count = min_t(u32, count, svc_max_payload(rqstp)); 446 + count = clamp(count, (u32)(XDR_UNIT * 2), svc_max_payload(rqstp)); 447 447 448 448 memset(buf, 0, sizeof(*buf)); 449 449 450 450 /* Reserve room for the NULL ptr & eof flag (-2 words) */ 451 451 buf->buflen = count - XDR_UNIT * 2; 452 452 buf->pages = rqstp->rq_next_page; 453 - while (count > 0) { 454 - rqstp->rq_next_page++; 455 - count -= PAGE_SIZE; 456 - } 453 + rqstp->rq_next_page += (buf->buflen + PAGE_SIZE - 1) >> PAGE_SHIFT; 457 454 458 455 /* This is xdr_init_encode(), but it assumes that 459 456 * the head kvec has already been consumed. */ ··· 459 462 xdr->page_ptr = buf->pages; 460 463 xdr->iov = NULL; 461 464 xdr->p = page_address(*buf->pages); 462 - xdr->end = xdr->p + (PAGE_SIZE >> 2); 465 + xdr->end = (void *)xdr->p + min_t(u32, buf->buflen, PAGE_SIZE); 463 466 xdr->rqst = NULL; 464 467 } 465 468
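Two details in the readdir hunk above are easy to miss: count is now clamped from below as well (a client-supplied count smaller than two XDR words previously underflowed `buflen = count - XDR_UNIT * 2`), and rq_next_page advances by the number of pages the buffer spans instead of looping PAGE_SIZE at a time. A sketch of both idioms, assuming the usual 4 KiB page:

```c
#define PAGE_SHIFT 12
#define PAGE_SIZE  (1u << PAGE_SHIFT)
#define XDR_UNIT   4u

/* clamp(): bound a value to [lo, hi], like the kernel macro. */
static unsigned int clamp_u32(unsigned int v, unsigned int lo, unsigned int hi)
{
	return v < lo ? lo : (v > hi ? hi : v);
}

/* Pages spanned by a buffer of @buflen bytes: round up once
 * instead of decrementing PAGE_SIZE at a time in a loop. */
static unsigned int pages_spanned(unsigned int buflen)
{
	return (buflen + PAGE_SIZE - 1) >> PAGE_SHIFT;
}
```

With count = 0, `clamp_u32(0, XDR_UNIT * 2, max_payload)` yields 8, so the subsequent `count - XDR_UNIT * 2` can no longer wrap around to a huge unsigned value.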
+4 -4
fs/nfsd/nfsproc.c
··· 556 556 557 557 static void nfsd_init_dirlist_pages(struct svc_rqst *rqstp, 558 558 struct nfsd_readdirres *resp, 559 - int count) 559 + u32 count) 560 560 { 561 561 struct xdr_buf *buf = &resp->dirlist; 562 562 struct xdr_stream *xdr = &resp->xdr; 563 563 564 - count = min_t(u32, count, PAGE_SIZE); 564 + count = clamp(count, (u32)(XDR_UNIT * 2), svc_max_payload(rqstp)); 565 565 566 566 memset(buf, 0, sizeof(*buf)); 567 567 568 568 /* Reserve room for the NULL ptr & eof flag (-2 words) */ 569 - buf->buflen = count - sizeof(__be32) * 2; 569 + buf->buflen = count - XDR_UNIT * 2; 570 570 buf->pages = rqstp->rq_next_page; 571 571 rqstp->rq_next_page++; 572 572 ··· 577 577 xdr->page_ptr = buf->pages; 578 578 xdr->iov = NULL; 579 579 xdr->p = page_address(*buf->pages); 580 - xdr->end = xdr->p + (PAGE_SIZE >> 2); 580 + xdr->end = (void *)xdr->p + min_t(u32, buf->buflen, PAGE_SIZE); 581 581 xdr->rqst = NULL; 582 582 } 583 583
+1
fs/zonefs/super.c
··· 1787 1787 MODULE_AUTHOR("Damien Le Moal"); 1788 1788 MODULE_DESCRIPTION("Zone file system for zoned block devices"); 1789 1789 MODULE_LICENSE("GPL"); 1790 + MODULE_ALIAS_FS("zonefs"); 1790 1791 module_init(zonefs_init); 1791 1792 module_exit(zonefs_exit);
+2 -2
include/linux/compiler.h
··· 121 121 asm volatile(__stringify_label(c) ":\n\t" \ 122 122 ".pushsection .discard.reachable\n\t" \ 123 123 ".long " __stringify_label(c) "b - .\n\t" \ 124 - ".popsection\n\t"); \ 124 + ".popsection\n\t" : : "i" (c)); \ 125 125 }) 126 126 #define annotate_reachable() __annotate_reachable(__COUNTER__) 127 127 ··· 129 129 asm volatile(__stringify_label(c) ":\n\t" \ 130 130 ".pushsection .discard.unreachable\n\t" \ 131 131 ".long " __stringify_label(c) "b - .\n\t" \ 132 - ".popsection\n\t"); \ 132 + ".popsection\n\t" : : "i" (c)); \ 133 133 }) 134 134 #define annotate_unreachable() __annotate_unreachable(__COUNTER__) 135 135
+6
include/linux/efi.h
··· 1283 1283 } 1284 1284 #endif 1285 1285 1286 + #ifdef CONFIG_SYSFB 1287 + extern void efifb_setup_from_dmi(struct screen_info *si, const char *opt); 1288 + #else 1289 + static inline void efifb_setup_from_dmi(struct screen_info *si, const char *opt) { } 1290 + #endif 1291 + 1286 1292 #endif /* _LINUX_EFI_H */
+1 -1
include/linux/gfp.h
··· 624 624 625 625 void *alloc_pages_exact(size_t size, gfp_t gfp_mask) __alloc_size(1); 626 626 void free_pages_exact(void *virt, size_t size); 627 - __meminit void *alloc_pages_exact_nid(int nid, size_t size, gfp_t gfp_mask) __alloc_size(1); 627 + __meminit void *alloc_pages_exact_nid(int nid, size_t size, gfp_t gfp_mask) __alloc_size(2); 628 628 629 629 #define __get_free_page(gfp_mask) \ 630 630 __get_free_pages((gfp_mask), 0)
+2 -2
include/linux/instrumentation.h
··· 11 11 asm volatile(__stringify(c) ": nop\n\t" \ 12 12 ".pushsection .discard.instr_begin\n\t" \ 13 13 ".long " __stringify(c) "b - .\n\t" \ 14 - ".popsection\n\t"); \ 14 + ".popsection\n\t" : : "i" (c)); \ 15 15 }) 16 16 #define instrumentation_begin() __instrumentation_begin(__COUNTER__) 17 17 ··· 50 50 asm volatile(__stringify(c) ": nop\n\t" \ 51 51 ".pushsection .discard.instr_end\n\t" \ 52 52 ".long " __stringify(c) "b - .\n\t" \ 53 - ".popsection\n\t"); \ 53 + ".popsection\n\t" : : "i" (c)); \ 54 54 }) 55 55 #define instrumentation_end() __instrumentation_end(__COUNTER__) 56 56 #else
+2 -2
include/linux/memblock.h
··· 405 405 phys_addr_t end, int nid, bool exact_nid); 406 406 phys_addr_t memblock_phys_alloc_try_nid(phys_addr_t size, phys_addr_t align, int nid); 407 407 408 - static inline phys_addr_t memblock_phys_alloc(phys_addr_t size, 409 - phys_addr_t align) 408 + static __always_inline phys_addr_t memblock_phys_alloc(phys_addr_t size, 409 + phys_addr_t align) 410 410 { 411 411 return memblock_phys_alloc_range(size, align, 0, 412 412 MEMBLOCK_ALLOC_ACCESSIBLE);
+1
include/linux/mmzone.h
··· 277 277 VMSCAN_THROTTLE_WRITEBACK, 278 278 VMSCAN_THROTTLE_ISOLATED, 279 279 VMSCAN_THROTTLE_NOPROGRESS, 280 + VMSCAN_THROTTLE_CONGESTED, 280 281 NR_VMSCAN_THROTTLE, 281 282 }; 282 283
+1 -1
include/linux/netdevice.h
··· 1937 1937 * @udp_tunnel_nic: UDP tunnel offload state 1938 1938 * @xdp_state: stores info on attached XDP BPF programs 1939 1939 * 1940 - * @nested_level: Used as as a parameter of spin_lock_nested() of 1940 + * @nested_level: Used as a parameter of spin_lock_nested() of 1941 1941 * dev->addr_list_lock. 1942 1942 * @unlink_list: As netif_addr_lock() can be called recursively, 1943 1943 * keep a list of interfaces to be deleted.
-1
include/linux/pagemap.h
··· 285 285 286 286 static inline bool page_cache_add_speculative(struct page *page, int count) 287 287 { 288 - VM_BUG_ON_PAGE(PageTail(page), page); 289 288 return folio_ref_try_add_rcu((struct folio *)page, count); 290 289 } 291 290
+2 -1
include/linux/skbuff.h
··· 286 286 struct tc_skb_ext { 287 287 __u32 chain; 288 288 __u16 mru; 289 + __u16 zone; 289 290 bool post_ct; 290 291 }; 291 292 #endif ··· 1381 1380 struct flow_dissector *flow_dissector, 1382 1381 void *target_container, 1383 1382 u16 *ctinfo_map, size_t mapsize, 1384 - bool post_ct); 1383 + bool post_ct, u16 zone); 1385 1384 void 1386 1385 skb_flow_dissect_tunnel_info(const struct sk_buff *skb, 1387 1386 struct flow_dissector *flow_dissector,
+2 -2
include/linux/tee_drv.h
··· 195 195 * @offset: offset of buffer in user space 196 196 * @pages: locked pages from userspace 197 197 * @num_pages: number of locked pages 198 - * @dmabuf: dmabuf used to for exporting to user space 198 + * @refcount: reference counter 199 199 * @flags: defined by TEE_SHM_* in tee_drv.h 200 200 * @id: unique id of a shared memory object on this device, shared 201 201 * with user space ··· 214 214 unsigned int offset; 215 215 struct page **pages; 216 216 size_t num_pages; 217 - struct dma_buf *dmabuf; 217 + refcount_t refcount; 218 218 u32 flags; 219 219 int id; 220 220 u64 sec_world_id;
+23 -2
include/linux/virtio_net.h
··· 7 7 #include <uapi/linux/udp.h> 8 8 #include <uapi/linux/virtio_net.h> 9 9 10 + static inline bool virtio_net_hdr_match_proto(__be16 protocol, __u8 gso_type) 11 + { 12 + switch (gso_type & ~VIRTIO_NET_HDR_GSO_ECN) { 13 + case VIRTIO_NET_HDR_GSO_TCPV4: 14 + return protocol == cpu_to_be16(ETH_P_IP); 15 + case VIRTIO_NET_HDR_GSO_TCPV6: 16 + return protocol == cpu_to_be16(ETH_P_IPV6); 17 + case VIRTIO_NET_HDR_GSO_UDP: 18 + return protocol == cpu_to_be16(ETH_P_IP) || 19 + protocol == cpu_to_be16(ETH_P_IPV6); 20 + default: 21 + return false; 22 + } 23 + } 24 + 10 25 static inline int virtio_net_hdr_set_proto(struct sk_buff *skb, 11 26 const struct virtio_net_hdr *hdr) 12 27 { 28 + if (skb->protocol) 29 + return 0; 30 + 13 31 switch (hdr->gso_type & ~VIRTIO_NET_HDR_GSO_ECN) { 14 32 case VIRTIO_NET_HDR_GSO_TCPV4: 15 33 case VIRTIO_NET_HDR_GSO_UDP: ··· 106 88 if (!skb->protocol) { 107 89 __be16 protocol = dev_parse_header_protocol(skb); 108 90 109 - virtio_net_hdr_set_proto(skb, hdr); 110 - if (protocol && protocol != skb->protocol) 91 + if (!protocol) 92 + virtio_net_hdr_set_proto(skb, hdr); 93 + else if (!virtio_net_hdr_match_proto(protocol, hdr->gso_type)) 111 94 return -EINVAL; 95 + else 96 + skb->protocol = protocol; 112 97 } 113 98 retry: 114 99 if (!skb_flow_dissect_flow_keys_basic(NULL, skb, &keys,
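The virtio_net change above stops trusting a guest-supplied gso_type that contradicts the link-layer protocol parsed from the packet. A host-endian model of the new match helper (constants mirrored from the UAPI headers; the kernel compares __be16 values instead):

```c
#include <stdbool.h>
#include <stdint.h>

/* Values mirrored from if_ether.h and virtio_net.h */
#define ETH_P_IP		 0x0800
#define ETH_P_IPV6		 0x86DD
#define VIRTIO_NET_HDR_GSO_TCPV4 1
#define VIRTIO_NET_HDR_GSO_UDP	 3
#define VIRTIO_NET_HDR_GSO_TCPV6 4
#define VIRTIO_NET_HDR_GSO_ECN	 0x80

/* A GSO type is only acceptable for the L3 family it segments;
 * the ECN bit is an orthogonal flag and is masked off first. */
static bool match_proto(uint16_t protocol, uint8_t gso_type)
{
	switch (gso_type & ~VIRTIO_NET_HDR_GSO_ECN) {
	case VIRTIO_NET_HDR_GSO_TCPV4:
		return protocol == ETH_P_IP;
	case VIRTIO_NET_HDR_GSO_TCPV6:
		return protocol == ETH_P_IPV6;
	case VIRTIO_NET_HDR_GSO_UDP:
		return protocol == ETH_P_IP ||
		       protocol == ETH_P_IPV6;
	default:
		return false;
	}
}
```

Previously a mismatched header could make the stack run, say, the TCPv4 GSO path over an IPv6 packet; the helper rejects such frames with -EINVAL instead.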
+16
include/net/pkt_sched.h
··· 193 193 skb->tstamp = ktime_set(0, 0); 194 194 } 195 195 196 + struct tc_skb_cb { 197 + struct qdisc_skb_cb qdisc_cb; 198 + 199 + u16 mru; 200 + bool post_ct; 201 + u16 zone; /* Only valid if post_ct = true */ 202 + }; 203 + 204 + static inline struct tc_skb_cb *tc_skb_cb(const struct sk_buff *skb) 205 + { 206 + struct tc_skb_cb *cb = (struct tc_skb_cb *)skb->cb; 207 + 208 + BUILD_BUG_ON(sizeof(*cb) > sizeof_field(struct sk_buff, cb)); 209 + return cb; 210 + } 211 + 196 212 #endif
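The new tc_skb_cb overlays skb->cb, so the accessor guards the cast with BUILD_BUG_ON(). The same compile-time size check can be written in plain C11 with static_assert; the struct sizes below are illustrative stand-ins, not the real kernel layouts:

```c
#include <assert.h>	/* static_assert (C11) */
#include <stdbool.h>
#include <stdint.h>

#define SKB_CB_SIZE 48	/* sizeof_field(struct sk_buff, cb) */

struct qdisc_cb_demo {
	uint8_t data[28];	/* stand-in for struct qdisc_skb_cb */
};

struct tc_cb_demo {
	struct qdisc_cb_demo qdisc_cb;
	uint16_t mru;
	bool post_ct;
	uint16_t zone;		/* only valid if post_ct */
};

/* Compile-time equivalent of the BUILD_BUG_ON() in tc_skb_cb():
 * refuse to build if the overlay ever outgrows skb->cb. */
static_assert(sizeof(struct tc_cb_demo) <= SKB_CB_SIZE,
	      "tc control block must fit in skb->cb");
```

Checking at compile time is what makes it safe to move mru/post_ct out of the shared qdisc_skb_cb (see the sch_generic.h hunk below removing them) into a tc-private block.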
-2
include/net/sch_generic.h
··· 447 447 }; 448 448 #define QDISC_CB_PRIV_LEN 20 449 449 unsigned char data[QDISC_CB_PRIV_LEN]; 450 - u16 mru; 451 - bool post_ct; 452 450 }; 453 451 454 452 typedef void tcf_chain_head_change_t(struct tcf_proto *tp_head, void *priv);
+3 -3
include/net/sctp/sctp.h
··· 105 105 int sctp_asconf_mgmt(struct sctp_sock *, struct sctp_sockaddr_entry *); 106 106 struct sk_buff *sctp_skb_recv_datagram(struct sock *, int, int, int *); 107 107 108 + typedef int (*sctp_callback_t)(struct sctp_endpoint *, struct sctp_transport *, void *); 108 109 void sctp_transport_walk_start(struct rhashtable_iter *iter); 109 110 void sctp_transport_walk_stop(struct rhashtable_iter *iter); 110 111 struct sctp_transport *sctp_transport_get_next(struct net *net, ··· 116 115 struct net *net, 117 116 const union sctp_addr *laddr, 118 117 const union sctp_addr *paddr, void *p); 119 - int sctp_for_each_transport(int (*cb)(struct sctp_transport *, void *), 120 - int (*cb_done)(struct sctp_transport *, void *), 121 - struct net *net, int *pos, void *p); 118 + int sctp_transport_traverse_process(sctp_callback_t cb, sctp_callback_t cb_done, 119 + struct net *net, int *pos, void *p); 122 120 int sctp_for_each_endpoint(int (*cb)(struct sctp_endpoint *, void *), void *p); 123 121 int sctp_get_sctp_info(struct sock *sk, struct sctp_association *asoc, 124 122 struct sctp_info *info);
+2 -1
include/net/sctp/structs.h
··· 1355 1355 reconf_enable:1; 1356 1356 1357 1357 __u8 strreset_enable; 1358 + struct rcu_head rcu; 1358 1359 }; 1359 1360 1360 1361 /* Recover the outter endpoint structure. */ ··· 1371 1370 struct sctp_endpoint *sctp_endpoint_new(struct sock *, gfp_t); 1372 1371 void sctp_endpoint_free(struct sctp_endpoint *); 1373 1372 void sctp_endpoint_put(struct sctp_endpoint *); 1374 - void sctp_endpoint_hold(struct sctp_endpoint *); 1373 + int sctp_endpoint_hold(struct sctp_endpoint *ep); 1375 1374 void sctp_endpoint_add_asoc(struct sctp_endpoint *, struct sctp_association *); 1376 1375 struct sctp_association *sctp_endpoint_lookup_assoc( 1377 1376 const struct sctp_endpoint *ep,
+1 -1
include/net/sock.h
··· 431 431 #ifdef CONFIG_XFRM 432 432 struct xfrm_policy __rcu *sk_policy[2]; 433 433 #endif 434 - struct dst_entry *sk_rx_dst; 434 + struct dst_entry __rcu *sk_rx_dst; 435 435 int sk_rx_dst_ifindex; 436 436 u32 sk_rx_dst_cookie; 437 437
+3 -1
include/trace/events/vmscan.h
··· 30 30 #define _VMSCAN_THROTTLE_WRITEBACK (1 << VMSCAN_THROTTLE_WRITEBACK) 31 31 #define _VMSCAN_THROTTLE_ISOLATED (1 << VMSCAN_THROTTLE_ISOLATED) 32 32 #define _VMSCAN_THROTTLE_NOPROGRESS (1 << VMSCAN_THROTTLE_NOPROGRESS) 33 + #define _VMSCAN_THROTTLE_CONGESTED (1 << VMSCAN_THROTTLE_CONGESTED) 33 34 34 35 #define show_throttle_flags(flags) \ 35 36 (flags) ? __print_flags(flags, "|", \ 36 37 {_VMSCAN_THROTTLE_WRITEBACK, "VMSCAN_THROTTLE_WRITEBACK"}, \ 37 38 {_VMSCAN_THROTTLE_ISOLATED, "VMSCAN_THROTTLE_ISOLATED"}, \ 38 - {_VMSCAN_THROTTLE_NOPROGRESS, "VMSCAN_THROTTLE_NOPROGRESS"} \ 39 + {_VMSCAN_THROTTLE_NOPROGRESS, "VMSCAN_THROTTLE_NOPROGRESS"}, \ 40 + {_VMSCAN_THROTTLE_CONGESTED, "VMSCAN_THROTTLE_CONGESTED"} \ 39 41 ) : "VMSCAN_THROTTLE_NONE" 40 42 41 43
+1
include/uapi/linux/byteorder/big_endian.h
··· 9 9 #define __BIG_ENDIAN_BITFIELD 10 10 #endif 11 11 12 + #include <linux/stddef.h> 12 13 #include <linux/types.h> 13 14 #include <linux/swab.h> 14 15
+1
include/uapi/linux/byteorder/little_endian.h
··· 9 9 #define __LITTLE_ENDIAN_BITFIELD 10 10 #endif 11 11 12 + #include <linux/stddef.h> 12 13 #include <linux/types.h> 13 14 #include <linux/swab.h> 14 15
+10 -8
include/uapi/linux/mptcp.h
··· 136 136 * MPTCP_EVENT_REMOVED: token, rem_id 137 137 * An address has been lost by the peer. 138 138 * 139 - * MPTCP_EVENT_SUB_ESTABLISHED: token, family, saddr4 | saddr6, 140 - * daddr4 | daddr6, sport, dport, backup, 141 - * if_idx [, error] 139 + * MPTCP_EVENT_SUB_ESTABLISHED: token, family, loc_id, rem_id, 140 + * saddr4 | saddr6, daddr4 | daddr6, sport, 141 + * dport, backup, if_idx [, error] 142 142 * A new subflow has been established. 'error' should not be set. 143 143 * 144 - * MPTCP_EVENT_SUB_CLOSED: token, family, saddr4 | saddr6, daddr4 | daddr6, 145 - * sport, dport, backup, if_idx [, error] 144 + * MPTCP_EVENT_SUB_CLOSED: token, family, loc_id, rem_id, saddr4 | saddr6, 145 + * daddr4 | daddr6, sport, dport, backup, if_idx 146 + * [, error] 146 147 * A subflow has been closed. An error (copy of sk_err) could be set if an 147 148 * error has been detected for this subflow. 148 149 * 149 - * MPTCP_EVENT_SUB_PRIORITY: token, family, saddr4 | saddr6, daddr4 | daddr6, 150 - * sport, dport, backup, if_idx [, error] 151 - * The priority of a subflow has changed. 'error' should not be set. 150 + * MPTCP_EVENT_SUB_PRIORITY: token, family, loc_id, rem_id, saddr4 | saddr6, 151 + * daddr4 | daddr6, sport, dport, backup, if_idx 152 + * [, error] 153 + * The priority of a subflow has changed. 'error' should not be set. 152 154 */ 153 155 enum mptcp_event_type { 154 156 MPTCP_EVENT_UNSPEC = 0,
+3 -3
include/uapi/linux/nfc.h
··· 263 263 #define NFC_SE_ENABLED 0x1 264 264 265 265 struct sockaddr_nfc { 266 - sa_family_t sa_family; 266 + __kernel_sa_family_t sa_family; 267 267 __u32 dev_idx; 268 268 __u32 target_idx; 269 269 __u32 nfc_protocol; ··· 271 271 272 272 #define NFC_LLCP_MAX_SERVICE_NAME 63 273 273 struct sockaddr_nfc_llcp { 274 - sa_family_t sa_family; 274 + __kernel_sa_family_t sa_family; 275 275 __u32 dev_idx; 276 276 __u32 target_idx; 277 277 __u32 nfc_protocol; 278 278 __u8 dsap; /* Destination SAP, if known */ 279 279 __u8 ssap; /* Source SAP to be bound to */ 280 280 char service_name[NFC_LLCP_MAX_SERVICE_NAME]; /* Service name URI */; 281 - size_t service_name_len; 281 + __kernel_size_t service_name_len; 282 282 }; 283 283 284 284 /* NFC socket protocols */
+1
include/xen/events.h
··· 17 17 unsigned xen_evtchn_nr_channels(void); 18 18 19 19 int bind_evtchn_to_irq(evtchn_port_t evtchn); 20 + int bind_evtchn_to_irq_lateeoi(evtchn_port_t evtchn); 20 21 int bind_evtchn_to_irqhandler(evtchn_port_t evtchn, 21 22 irq_handler_t handler, 22 23 unsigned long irqflags, const char *devname,
+10 -11
kernel/audit.c
··· 718 718 { 719 719 int rc = 0; 720 720 struct sk_buff *skb; 721 - static unsigned int failed = 0; 721 + unsigned int failed = 0; 722 722 723 723 /* NOTE: kauditd_thread takes care of all our locking, we just use 724 724 * the netlink info passed to us (e.g. sk and portid) */ ··· 735 735 continue; 736 736 } 737 737 738 + retry: 738 739 /* grab an extra skb reference in case of error */ 739 740 skb_get(skb); 740 741 rc = netlink_unicast(sk, skb, portid, 0); 741 742 if (rc < 0) { 742 - /* fatal failure for our queue flush attempt? */ 743 + /* send failed - try a few times unless fatal error */ 743 744 if (++failed >= retry_limit || 744 745 rc == -ECONNREFUSED || rc == -EPERM) { 745 - /* yes - error processing for the queue */ 746 746 sk = NULL; 747 747 if (err_hook) 748 748 (*err_hook)(skb); 749 - if (!skb_hook) 750 - goto out; 751 - /* keep processing with the skb_hook */ 749 + if (rc == -EAGAIN) 750 + rc = 0; 751 + /* continue to drain the queue */ 752 752 continue; 753 753 } else 754 - /* no - requeue to preserve ordering */ 755 - skb_queue_head(queue, skb); 754 + goto retry; 756 755 } else { 757 - /* it worked - drop the extra reference and continue */ 756 + /* skb sent - drop the extra reference and continue */ 758 757 consume_skb(skb); 759 758 failed = 0; 760 759 } 761 760 } 762 761 763 - out: 764 762 return (rc >= 0 ? 0 : rc); 765 763 } 766 764 ··· 1607 1609 audit_panic("cannot initialize netlink socket in namespace"); 1608 1610 return -ENOMEM; 1609 1611 } 1610 - aunet->sk->sk_sndtimeo = MAX_SCHEDULE_TIMEOUT; 1612 + /* limit the timeout in case auditd is blocked/stopped */ 1613 + aunet->sk->sk_sndtimeo = HZ / 10; 1611 1614 1612 1615 return 0; 1613 1616 }
+36 -17
kernel/bpf/verifier.c
··· 1366 1366 reg->var_off = tnum_or(tnum_clear_subreg(var64_off), var32_off); 1367 1367 } 1368 1368 1369 + static bool __reg32_bound_s64(s32 a) 1370 + { 1371 + return a >= 0 && a <= S32_MAX; 1372 + } 1373 + 1369 1374 static void __reg_assign_32_into_64(struct bpf_reg_state *reg) 1370 1375 { 1371 1376 reg->umin_value = reg->u32_min_value; 1372 1377 reg->umax_value = reg->u32_max_value; 1373 - /* Attempt to pull 32-bit signed bounds into 64-bit bounds 1374 - * but must be positive otherwise set to worse case bounds 1375 - * and refine later from tnum. 1378 + 1379 + /* Attempt to pull 32-bit signed bounds into 64-bit bounds but must 1380 + * be positive otherwise set to worse case bounds and refine later 1381 + * from tnum. 1376 1382 */ 1377 - if (reg->s32_min_value >= 0 && reg->s32_max_value >= 0) 1378 - reg->smax_value = reg->s32_max_value; 1379 - else 1380 - reg->smax_value = U32_MAX; 1381 - if (reg->s32_min_value >= 0) 1383 + if (__reg32_bound_s64(reg->s32_min_value) && 1384 + __reg32_bound_s64(reg->s32_max_value)) { 1382 1385 reg->smin_value = reg->s32_min_value; 1386 + reg->smax_value = reg->s32_max_value; 1387 + } else { 1383 1388 reg->smin_value = 0; 1389 + reg->smax_value = U32_MAX; 1390 + } 1384 1391 } 1385 1392 1386 1393 static void __reg_combine_32_into_64(struct bpf_reg_state *reg) ··· 2385 2379 */ 2386 2380 if (insn->src_reg != BPF_REG_FP) 2387 2381 return 0; 2388 - if (BPF_SIZE(insn->code) != BPF_DW) 2389 - return 0; 2390 2382 2391 2383 /* dreg = *(u64 *)[fp - off] was a fill from the stack. 2392 2384 * that [fp - off] slot contains scalar that needs to be ··· 2406 2402 return -ENOTSUPP; 2407 2403 /* scalars can only be spilled into stack */ 2408 2404 if (insn->dst_reg != BPF_REG_FP) 2409 - return 0; 2410 - if (BPF_SIZE(insn->code) != BPF_DW) 2411 2405 return 0; 2412 2406 spi = (-insn->off - 1) / BPF_REG_SIZE; 2413 2407 if (spi >= 64) { ··· 4553 4551 4554 4552 if (insn->imm == BPF_CMPXCHG) { 4555 4553 /* Check comparison of R0 with memory location */ 4556 - err = check_reg_arg(env, BPF_REG_0, SRC_OP); 4554 + const u32 aux_reg = BPF_REG_0; 4555 + 4556 + err = check_reg_arg(env, aux_reg, SRC_OP); 4557 4557 if (err) 4558 4558 return err; 4559 + 4560 + if (is_pointer_value(env, aux_reg)) { 4561 + verbose(env, "R%d leaks addr into mem\n", aux_reg); 4562 + return -EACCES; 4563 + } 4559 4564 } 4560 4565 4561 4566 if (is_pointer_value(env, insn->src_reg)) { ··· 4597 4588 load_reg = -1; 4598 4589 } 4599 4590 4600 - /* check whether we can read the memory */ 4591 + /* Check whether we can read the memory, with second call for fetch 4592 + * case to simulate the register fill. 4593 + */ 4601 4594 err = check_mem_access(env, insn_idx, insn->dst_reg, insn->off, 4602 - BPF_SIZE(insn->code), BPF_READ, load_reg, true); 4595 + BPF_SIZE(insn->code), BPF_READ, -1, true); 4596 + if (!err && load_reg >= 0) 4597 + err = check_mem_access(env, insn_idx, insn->dst_reg, insn->off, 4598 + BPF_SIZE(insn->code), BPF_READ, load_reg, 4599 + true); 4603 4600 if (err) 4604 4601 return err; 4605 4602 4606 - /* check whether we can write into the same memory */ 4603 + /* Check whether we can write into the same memory. */ 4607 4604 err = check_mem_access(env, insn_idx, insn->dst_reg, insn->off, 4608 4605 BPF_SIZE(insn->code), BPF_WRITE, -1, true); 4609 4606 if (err) ··· 8323 8308 insn->dst_reg); 8324 8309 } 8325 8310 zext_32_to_64(dst_reg); 8311 + 8312 + __update_reg_bounds(dst_reg); 8313 + __reg_deduce_bounds(dst_reg); 8314 + __reg_bound_offset(dst_reg); 8326 8315 } 8327 8316 } else { 8328 8317 /* case: R = imm
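The __reg32_bound_s64() part of the verifier hunk above tightens how 32-bit signed bounds are widened to 64 bits: both bounds must be non-negative (hence valid as s64 bounds) before they are carried over; otherwise the verifier falls back to the conservative [0, U32_MAX] and refines later from the tnum. A minimal model of that decision:

```c
#include <stdbool.h>
#include <stdint.h>

#define S32_MAX 0x7fffffff
#define U32_MAX 0xffffffffu

static bool reg32_bound_s64(int32_t a)
{
	return a >= 0 && a <= S32_MAX;
}

/* Model of __reg_assign_32_into_64(): reuse the signed 32-bit
 * bounds as 64-bit bounds only when both are non-negative,
 * else fall back to the worst-case unsigned 32-bit range. */
static void widen_bounds(int32_t s32_min, int32_t s32_max,
			 int64_t *smin, int64_t *smax)
{
	if (reg32_bound_s64(s32_min) && reg32_bound_s64(s32_max)) {
		*smin = s32_min;
		*smax = s32_max;
	} else {
		*smin = 0;
		*smax = U32_MAX;
	}
}
```

The old code decided smin and smax independently, which could combine a trusted smin with an untrusted smax; making it an all-or-nothing choice closes that gap.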
+11
kernel/crash_core.c
··· 6 6 7 7 #include <linux/buildid.h> 8 8 #include <linux/crash_core.h> 9 + #include <linux/init.h> 9 10 #include <linux/utsname.h> 10 11 #include <linux/vmalloc.h> 11 12 ··· 295 294 return __parse_crashkernel(cmdline, system_ram, crash_size, crash_base, 296 295 "crashkernel=", suffix_tbl[SUFFIX_LOW]); 297 296 } 297 + 298 + /* 299 + * Add a dummy early_param handler to mark crashkernel= as a known command line 300 + * parameter and suppress incorrect warnings in init/main.c. 301 + */ 302 + static int __init parse_crashkernel_dummy(char *arg) 303 + { 304 + return 0; 305 + } 306 + early_param("crashkernel", parse_crashkernel_dummy); 298 307 299 308 Elf_Word *append_elf_note(Elf_Word *buf, char *name, unsigned int type, 300 309 void *data, size_t data_len)
+1 -1
kernel/locking/rtmutex.c
··· 1380 1380 * - the VCPU on which owner runs is preempted 1381 1381 */ 1382 1382 if (!owner->on_cpu || need_resched() || 1383 - rt_mutex_waiter_is_top_waiter(lock, waiter) || 1383 + !rt_mutex_waiter_is_top_waiter(lock, waiter) || 1384 1384 vcpu_is_preempted(task_cpu(owner))) { 1385 1385 res = false; 1386 1386 break;
+9
kernel/signal.c
··· 4185 4185 ss_mode != 0)) 4186 4186 return -EINVAL; 4187 4187 4188 + /* 4189 + * Return before taking any locks if no actual 4190 + * sigaltstack changes were requested. 4191 + */ 4192 + if (t->sas_ss_sp == (unsigned long)ss_sp && 4193 + t->sas_ss_size == ss_size && 4194 + t->sas_ss_flags == ss_flags) 4195 + return 0; 4196 + 4188 4197 sigaltstack_lock(); 4189 4198 if (ss_mode == SS_DISABLE) { 4190 4199 ss_size = 0;
+1 -2
kernel/time/timekeeping.c
··· 1306 1306 timekeeping_forward_now(tk); 1307 1307 1308 1308 xt = tk_xtime(tk); 1309 - ts_delta.tv_sec = ts->tv_sec - xt.tv_sec; 1310 - ts_delta.tv_nsec = ts->tv_nsec - xt.tv_nsec; 1309 + ts_delta = timespec64_sub(*ts, xt); 1311 1310 1312 1311 if (timespec64_compare(&tk->wall_to_monotonic, &ts_delta) > 0) { 1313 1312 ret = -EINVAL;
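The timekeeping fix above replaces the field-wise subtraction with timespec64_sub(), which normalizes tv_nsec back into [0, NSEC_PER_SEC); a raw per-field difference can leave tv_nsec negative and mislead the timespec64_compare() that follows. A sketch of the normalization with a demo struct, using the same arithmetic as set_normalized_timespec64():

```c
#include <stdint.h>

#define NSEC_PER_SEC 1000000000L

struct ts64 { int64_t tv_sec; long tv_nsec; };

/* Normalized subtraction: carry/borrow whole seconds so that
 * tv_nsec always lands in [0, NSEC_PER_SEC). */
static struct ts64 ts64_sub(struct ts64 a, struct ts64 b)
{
	struct ts64 d = { a.tv_sec - b.tv_sec, a.tv_nsec - b.tv_nsec };

	while (d.tv_nsec < 0) {
		d.tv_nsec += NSEC_PER_SEC;
		d.tv_sec--;
	}
	while (d.tv_nsec >= NSEC_PER_SEC) {
		d.tv_nsec -= NSEC_PER_SEC;
		d.tv_sec++;
	}
	return d;
}
```

For example, 5s 100ns minus 2s 200ns normalizes to 2s 999999900ns rather than the un-normalized 3s −100ns the old code could produce.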
+9 -6
kernel/ucount.c
··· 264 264 long inc_rlimit_ucounts(struct ucounts *ucounts, enum ucount_type type, long v) 265 265 { 266 266 struct ucounts *iter; 267 + long max = LONG_MAX; 267 268 long ret = 0; 268 269 269 270 for (iter = ucounts; iter; iter = iter->ns->ucounts) { 270 - long max = READ_ONCE(iter->ns->ucount_max[type]); 271 271 long new = atomic_long_add_return(v, &iter->ucount[type]); 272 272 if (new < 0 || new > max) 273 273 ret = LONG_MAX; 274 274 else if (iter == ucounts) 275 275 ret = new; 276 + max = READ_ONCE(iter->ns->ucount_max[type]); 276 277 } 277 278 return ret; 278 279 } ··· 313 312 { 314 313 /* Caller must hold a reference to ucounts */ 315 314 struct ucounts *iter; 315 + long max = LONG_MAX; 316 316 long dec, ret = 0; 317 317 318 318 for (iter = ucounts; iter; iter = iter->ns->ucounts) { 319 - long max = READ_ONCE(iter->ns->ucount_max[type]); 320 319 long new = atomic_long_add_return(1, &iter->ucount[type]); 321 320 if (new < 0 || new > max) 322 321 goto unwind; 323 322 if (iter == ucounts) 324 323 ret = new; 324 + max = READ_ONCE(iter->ns->ucount_max[type]); 325 325 /* 326 326 * Grab an extra ucount reference for the caller when 327 327 * the rlimit count was previously 0. ··· 341 339 return 0; 342 340 } 343 341 344 - bool is_ucounts_overlimit(struct ucounts *ucounts, enum ucount_type type, unsigned long max) 342 + bool is_ucounts_overlimit(struct ucounts *ucounts, enum ucount_type type, unsigned long rlimit) 345 343 { 346 344 struct ucounts *iter; 347 - if (get_ucounts_value(ucounts, type) > max) 348 - return true; 345 + long max = rlimit; 346 + if (rlimit > LONG_MAX) 347 + max = LONG_MAX; 349 348 for (iter = ucounts; iter; iter = iter->ns->ucounts) { 350 - max = READ_ONCE(iter->ns->ucount_max[type]); 351 349 if (get_ucounts_value(iter, type) > max) 352 350 return true; 351 + max = READ_ONCE(iter->ns->ucount_max[type]); 353 352 } 354 353 return false; 355 354 }
+9 -2
mm/damon/dbgfs.c
··· 353 353 const char __user *buf, size_t count, loff_t *ppos) 354 354 { 355 355 struct damon_ctx *ctx = file->private_data; 356 + struct damon_target *t, *next_t; 356 357 bool id_is_pid = true; 357 358 char *kbuf, *nrs; 358 359 unsigned long *targets; ··· 398 397 goto unlock_out; 399 398 } 400 399 401 - /* remove targets with previously-set primitive */ 402 - damon_set_targets(ctx, NULL, 0); 400 + /* remove previously set targets */ 401 + damon_for_each_target_safe(t, next_t, ctx) { 402 + if (targetid_is_pid(ctx)) 403 + put_pid((struct pid *)t->id); 404 + damon_destroy_target(t); 405 + } 403 406 404 407 /* Configure the context for the address space type */ 405 408 if (id_is_pid) ··· 655 650 if (!targetid_is_pid(ctx)) 656 651 return; 657 652 653 + mutex_lock(&ctx->kdamond_lock); 658 654 damon_for_each_target_safe(t, next, ctx) { 659 655 put_pid((struct pid *)t->id); 660 656 damon_destroy_target(t); 661 657 } 658 + mutex_unlock(&ctx->kdamond_lock); 662 659 } 663 660 664 661 static struct damon_ctx *dbgfs_new_ctx(void)
+1
mm/kfence/core.c
··· 683 683 .open = open_objects, 684 684 .read = seq_read, 685 685 .llseek = seq_lseek, 686 + .release = seq_release, 686 687 }; 687 688 688 689 static int __init kfence_debugfs_init(void)
+5 -9
mm/memory-failure.c
··· 1470 1470 if (!(flags & MF_COUNT_INCREASED)) { 1471 1471 res = get_hwpoison_page(p, flags); 1472 1472 if (!res) { 1473 - /* 1474 - * Check "filter hit" and "race with other subpage." 1475 - */ 1476 1473 lock_page(head); 1477 - if (PageHWPoison(head)) { 1478 - if ((hwpoison_filter(p) && TestClearPageHWPoison(p)) 1479 - || (p != head && TestSetPageHWPoison(head))) { 1474 + if (hwpoison_filter(p)) { 1475 + if (TestClearPageHWPoison(head)) 1480 1476 num_poisoned_pages_dec(); 1481 - unlock_page(head); 1482 - return 0; 1483 - } 1477 + unlock_page(head); 1478 + return 0; 1484 1479 } 1485 1480 unlock_page(head); 1486 1481 res = MF_FAILED; ··· 2234 2239 } else if (ret == 0) { 2235 2240 if (soft_offline_free_page(page) && try_again) { 2236 2241 try_again = false; 2242 + flags &= ~MF_COUNT_INCREASED; 2237 2243 goto retry; 2238 2244 } 2239 2245 }
+1 -2
mm/mempolicy.c
··· 2140 2140 * memory with both reclaim and compact as well. 2141 2141 */ 2142 2142 if (!page && (gfp & __GFP_DIRECT_RECLAIM)) 2143 - page = __alloc_pages_node(hpage_node, 2144 - gfp, order); 2143 + page = __alloc_pages(gfp, order, hpage_node, nmask); 2145 2144 2146 2145 goto out; 2147 2146 }
+56 -9
mm/vmscan.c
··· 1021 1021 unlock_page(page); 1022 1022 } 1023 1023 1024 + static bool skip_throttle_noprogress(pg_data_t *pgdat) 1025 + { 1026 + int reclaimable = 0, write_pending = 0; 1027 + int i; 1028 + 1029 + /* 1030 + * If kswapd is disabled, reschedule if necessary but do not 1031 + * throttle as the system is likely near OOM. 1032 + */ 1033 + if (pgdat->kswapd_failures >= MAX_RECLAIM_RETRIES) 1034 + return true; 1035 + 1036 + /* 1037 + * If there are a lot of dirty/writeback pages then do not 1038 + * throttle as throttling will occur when the pages cycle 1039 + * towards the end of the LRU if still under writeback. 1040 + */ 1041 + for (i = 0; i < MAX_NR_ZONES; i++) { 1042 + struct zone *zone = pgdat->node_zones + i; 1043 + 1044 + if (!populated_zone(zone)) 1045 + continue; 1046 + 1047 + reclaimable += zone_reclaimable_pages(zone); 1048 + write_pending += zone_page_state_snapshot(zone, 1049 + NR_ZONE_WRITE_PENDING); 1050 + } 1051 + if (2 * write_pending <= reclaimable) 1052 + return true; 1053 + 1054 + return false; 1055 + } 1056 + 1024 1057 void reclaim_throttle(pg_data_t *pgdat, enum vmscan_throttle_state reason) 1025 1058 { 1026 1059 wait_queue_head_t *wqh = &pgdat->reclaim_wait[reason]; ··· 1089 1056 } 1090 1057 1091 1058 break; 1059 + case VMSCAN_THROTTLE_CONGESTED: 1060 + fallthrough; 1092 1061 case VMSCAN_THROTTLE_NOPROGRESS: 1093 - timeout = HZ/2; 1062 + if (skip_throttle_noprogress(pgdat)) { 1063 + cond_resched(); 1064 + return; 1065 + } 1066 + 1067 + timeout = 1; 1068 + 1094 1069 break; 1095 1070 case VMSCAN_THROTTLE_ISOLATED: 1096 1071 timeout = HZ/50; ··· 3362 3321 if (!current_is_kswapd() && current_may_throttle() && 3363 3322 !sc->hibernation_mode && 3364 3323 test_bit(LRUVEC_CONGESTED, &target_lruvec->flags)) 3365 - reclaim_throttle(pgdat, VMSCAN_THROTTLE_WRITEBACK); 3324 + reclaim_throttle(pgdat, VMSCAN_THROTTLE_CONGESTED); 3366 3325 3367 3326 if (should_continue_reclaim(pgdat, sc->nr_reclaimed - nr_reclaimed, 3368 3327 sc)) ··· 3427 3386 } 3428 3387
3429 3388 /* 3430 - * Do not throttle kswapd on NOPROGRESS as it will throttle on 3431 - * VMSCAN_THROTTLE_WRITEBACK if there are too many pages under 3432 - * writeback and marked for immediate reclaim at the tail of 3433 - * the LRU. 3389 + * Do not throttle kswapd or cgroup reclaim on NOPROGRESS as it will 3390 + * throttle on VMSCAN_THROTTLE_WRITEBACK if there are too many pages 3391 + * under writeback and marked for immediate reclaim at the tail of the 3392 + * LRU. 3434 3393 */ 3435 - if (current_is_kswapd()) 3394 + if (current_is_kswapd() || cgroup_reclaim(sc)) 3436 3395 return; 3437 3396 3438 3397 /* Throttle if making no progress at high prioities. */ 3439 - if (sc->priority < DEF_PRIORITY - 2) 3398 + if (sc->priority == 1 && !sc->nr_reclaimed) 3440 3399 reclaim_throttle(pgdat, VMSCAN_THROTTLE_NOPROGRESS); 3441 3400 } 3442 3401 ··· 3456 3415 unsigned long nr_soft_scanned; 3457 3416 gfp_t orig_mask; 3458 3417 pg_data_t *last_pgdat = NULL; 3418 + pg_data_t *first_pgdat = NULL; 3459 3419 3460 3420 /* 3461 3421 * If the number of buffer_heads in the machine exceeds the maximum ··· 3520 3478 /* need some check for avoid more shrink_zone() */ 3521 3479 } 3522 3480 3481 + if (!first_pgdat) 3482 + first_pgdat = zone->zone_pgdat; 3483 + 3523 3484 /* See comment about same check for global reclaim above */ 3524 3485 if (zone->zone_pgdat == last_pgdat) 3525 3486 continue; 3526 3487 last_pgdat = zone->zone_pgdat; 3527 3488 shrink_node(zone->zone_pgdat, sc); 3529 3489 } 3490 + 3491 + if (first_pgdat) 3492 + consider_reclaim_throttle(first_pgdat, sc); 3530 3493 3531 3494 /* 3532 3495 * Restore to original mask to avoid the impact on the caller if we
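Editorial note: the heart of the new skip_throttle_noprogress() is the `2 * write_pending <= reclaimable` test. Inverted, it says: only throttle a no-progress reclaimer when more than half of the reclaimable pages are still pending writeback. A trivial sketch of that predicate, with plain longs standing in for the per-zone counters:

```c
#include <assert.h>
#include <stdbool.h>

/* Throttle only when dirty/writeback pages dominate: strictly more
 * than half of the reclaimable pages are still pending writeback.
 * This mirrors the inverse of skip_throttle_noprogress() above. */
static bool should_throttle_noprogress(long reclaimable, long write_pending)
{
	return 2 * write_pending > reclaimable;
}
```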
+3 -1
net/ax25/af_ax25.c
··· 85 85 again: 86 86 ax25_for_each(s, &ax25_list) { 87 87 if (s->ax25_dev == ax25_dev) { 88 - s->ax25_dev = NULL; 89 88 spin_unlock_bh(&ax25_list_lock); 89 + lock_sock(s->sk); 90 + s->ax25_dev = NULL; 91 + release_sock(s->sk); 90 92 ax25_disconnect(s, ENETUNREACH); 91 93 spin_lock_bh(&ax25_list_lock); 92 94
+1 -1
net/bridge/br_ioctl.c
··· 337 337 338 338 args[2] = get_bridge_ifindices(net, indices, args[2]); 339 339 340 - ret = copy_to_user(uarg, indices, 340 + ret = copy_to_user((void __user *)args[1], indices, 341 341 array_size(args[2], sizeof(int))) 342 342 ? -EFAULT : args[2]; 343 343
+32
net/bridge/br_multicast.c
··· 4522 4522 } 4523 4523 #endif 4524 4524 4525 + void br_multicast_set_query_intvl(struct net_bridge_mcast *brmctx, 4526 + unsigned long val) 4527 + { 4528 + unsigned long intvl_jiffies = clock_t_to_jiffies(val); 4529 + 4530 + if (intvl_jiffies < BR_MULTICAST_QUERY_INTVL_MIN) { 4531 + br_info(brmctx->br, 4532 + "trying to set multicast query interval below minimum, setting to %lu (%ums)\n", 4533 + jiffies_to_clock_t(BR_MULTICAST_QUERY_INTVL_MIN), 4534 + jiffies_to_msecs(BR_MULTICAST_QUERY_INTVL_MIN)); 4535 + intvl_jiffies = BR_MULTICAST_QUERY_INTVL_MIN; 4536 + } 4537 + 4538 + brmctx->multicast_query_interval = intvl_jiffies; 4539 + } 4540 + 4541 + void br_multicast_set_startup_query_intvl(struct net_bridge_mcast *brmctx, 4542 + unsigned long val) 4543 + { 4544 + unsigned long intvl_jiffies = clock_t_to_jiffies(val); 4545 + 4546 + if (intvl_jiffies < BR_MULTICAST_STARTUP_QUERY_INTVL_MIN) { 4547 + br_info(brmctx->br, 4548 + "trying to set multicast startup query interval below minimum, setting to %lu (%ums)\n", 4549 + jiffies_to_clock_t(BR_MULTICAST_STARTUP_QUERY_INTVL_MIN), 4550 + jiffies_to_msecs(BR_MULTICAST_STARTUP_QUERY_INTVL_MIN)); 4551 + intvl_jiffies = BR_MULTICAST_STARTUP_QUERY_INTVL_MIN; 4552 + } 4553 + 4554 + brmctx->multicast_startup_query_interval = intvl_jiffies; 4555 + } 4556 + 4525 4557 /** 4526 4558 * br_multicast_list_adjacent - Returns snooped multicast addresses 4527 4559 * @dev: The bridge port adjacent to which to retrieve addresses
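Editorial note: both new br_multicast helpers follow the same shape — convert, clamp to a floor, warn when the requested value was raised. A stripped-down sketch of the clamp, where a plain macro and an out-flag stand in for the real jiffies conversion and br_info() logging:

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in for BR_MULTICAST_QUERY_INTVL_MIN; the real floor is
 * msecs_to_jiffies(1000). */
#define QUERY_INTVL_MIN 1000UL

/* Raise a configured interval to the floor, reporting whether it was
 * clamped (the kernel helper logs via br_info() instead). */
static unsigned long clamp_query_interval(unsigned long val, bool *clamped)
{
	*clamped = val < QUERY_INTVL_MIN;
	return *clamped ? QUERY_INTVL_MIN : val;
}
```

Centralizing the clamp in one helper is what lets the netlink, sysfs, and per-vlan setters in the following hunks all enforce the same minimum.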
+2 -2
net/bridge/br_netlink.c
··· 1357 1357 if (data[IFLA_BR_MCAST_QUERY_INTVL]) { 1358 1358 u64 val = nla_get_u64(data[IFLA_BR_MCAST_QUERY_INTVL]); 1359 1359 1360 - br->multicast_ctx.multicast_query_interval = clock_t_to_jiffies(val); 1360 + br_multicast_set_query_intvl(&br->multicast_ctx, val); 1361 1361 } 1362 1362 1363 1363 if (data[IFLA_BR_MCAST_QUERY_RESPONSE_INTVL]) { ··· 1369 1369 if (data[IFLA_BR_MCAST_STARTUP_QUERY_INTVL]) { 1370 1370 u64 val = nla_get_u64(data[IFLA_BR_MCAST_STARTUP_QUERY_INTVL]); 1371 1371 1372 - br->multicast_ctx.multicast_startup_query_interval = clock_t_to_jiffies(val); 1372 + br_multicast_set_startup_query_intvl(&br->multicast_ctx, val); 1373 1373 } 1374 1374 1375 1375 if (data[IFLA_BR_MCAST_STATS_ENABLED]) {
+9 -3
net/bridge/br_private.h
··· 28 28 #define BR_MAX_PORTS (1<<BR_PORT_BITS) 29 29 30 30 #define BR_MULTICAST_DEFAULT_HASH_MAX 4096 31 + #define BR_MULTICAST_QUERY_INTVL_MIN msecs_to_jiffies(1000) 32 + #define BR_MULTICAST_STARTUP_QUERY_INTVL_MIN BR_MULTICAST_QUERY_INTVL_MIN 31 33 32 34 #define BR_HWDOM_MAX BITS_PER_LONG 33 35 ··· 965 963 int nest_attr); 966 964 size_t br_multicast_querier_state_size(void); 967 965 size_t br_rports_size(const struct net_bridge_mcast *brmctx); 966 + void br_multicast_set_query_intvl(struct net_bridge_mcast *brmctx, 967 + unsigned long val); 968 + void br_multicast_set_startup_query_intvl(struct net_bridge_mcast *brmctx, 969 + unsigned long val); 968 970 969 971 static inline bool br_group_is_l2(const struct br_ip *group) 970 972 { ··· 1153 1147 static inline bool 1154 1148 br_multicast_ctx_vlan_global_disabled(const struct net_bridge_mcast *brmctx) 1155 1149 { 1156 - return br_opt_get(brmctx->br, BROPT_MCAST_VLAN_SNOOPING_ENABLED) && 1157 - br_multicast_ctx_is_vlan(brmctx) && 1158 - !(brmctx->vlan->priv_flags & BR_VLFLAG_GLOBAL_MCAST_ENABLED); 1150 + return br_multicast_ctx_is_vlan(brmctx) && 1151 + (!br_opt_get(brmctx->br, BROPT_MCAST_VLAN_SNOOPING_ENABLED) || 1152 + !(brmctx->vlan->priv_flags & BR_VLFLAG_GLOBAL_MCAST_ENABLED)); 1159 1153 } 1160 1154 1161 1155 static inline bool
+2 -2
net/bridge/br_sysfs_br.c
··· 658 658 static int set_query_interval(struct net_bridge *br, unsigned long val, 659 659 struct netlink_ext_ack *extack) 660 660 { 661 - br->multicast_ctx.multicast_query_interval = clock_t_to_jiffies(val); 661 + br_multicast_set_query_intvl(&br->multicast_ctx, val); 662 662 return 0; 663 663 } 664 664 ··· 706 706 static int set_startup_query_interval(struct net_bridge *br, unsigned long val, 707 707 struct netlink_ext_ack *extack) 708 708 { 709 - br->multicast_ctx.multicast_startup_query_interval = clock_t_to_jiffies(val); 709 + br_multicast_set_startup_query_intvl(&br->multicast_ctx, val); 710 710 return 0; 711 711 } 712 712
+2 -2
net/bridge/br_vlan_options.c
··· 521 521 u64 val; 522 522 523 523 val = nla_get_u64(tb[BRIDGE_VLANDB_GOPTS_MCAST_QUERY_INTVL]); 524 - v->br_mcast_ctx.multicast_query_interval = clock_t_to_jiffies(val); 524 + br_multicast_set_query_intvl(&v->br_mcast_ctx, val); 525 525 *changed = true; 526 526 } 527 527 if (tb[BRIDGE_VLANDB_GOPTS_MCAST_QUERY_RESPONSE_INTVL]) { ··· 535 535 u64 val; 536 536 537 537 val = nla_get_u64(tb[BRIDGE_VLANDB_GOPTS_MCAST_STARTUP_QUERY_INTVL]); 538 - v->br_mcast_ctx.multicast_startup_query_interval = clock_t_to_jiffies(val); 538 + br_multicast_set_startup_query_intvl(&v->br_mcast_ctx, val); 539 539 *changed = true; 540 540 } 541 541 if (tb[BRIDGE_VLANDB_GOPTS_MCAST_QUERIER]) {
+4 -4
net/core/dev.c
··· 3941 3941 return skb; 3942 3942 3943 3943 /* qdisc_skb_cb(skb)->pkt_len was already set by the caller. */ 3944 - qdisc_skb_cb(skb)->mru = 0; 3945 - qdisc_skb_cb(skb)->post_ct = false; 3944 + tc_skb_cb(skb)->mru = 0; 3945 + tc_skb_cb(skb)->post_ct = false; 3946 3946 mini_qdisc_bstats_cpu_update(miniq, skb); 3947 3947 3948 3948 switch (tcf_classify(skb, miniq->block, miniq->filter_list, &cl_res, false)) { ··· 5103 5103 } 5104 5104 5105 5105 qdisc_skb_cb(skb)->pkt_len = skb->len; 5106 - qdisc_skb_cb(skb)->mru = 0; 5107 - qdisc_skb_cb(skb)->post_ct = false; 5106 + tc_skb_cb(skb)->mru = 0; 5107 + tc_skb_cb(skb)->post_ct = false; 5108 5108 skb->tc_at_ingress = 1; 5109 5109 mini_qdisc_bstats_cpu_update(miniq, skb); 5110 5110
+2 -1
net/core/flow_dissector.c
··· 238 238 skb_flow_dissect_ct(const struct sk_buff *skb, 239 239 struct flow_dissector *flow_dissector, 240 240 void *target_container, u16 *ctinfo_map, 241 - size_t mapsize, bool post_ct) 241 + size_t mapsize, bool post_ct, u16 zone) 242 242 { 243 243 #if IS_ENABLED(CONFIG_NF_CONNTRACK) 244 244 struct flow_dissector_key_ct *key; ··· 260 260 if (!ct) { 261 261 key->ct_state = TCA_FLOWER_KEY_CT_FLAGS_TRACKED | 262 262 TCA_FLOWER_KEY_CT_FLAGS_INVALID; 263 + key->ct_zone = zone; 263 264 return; 264 265 } 265 266
+1 -1
net/core/skbuff.c
··· 832 832 ntohs(skb->protocol), skb->pkt_type, skb->skb_iif); 833 833 834 834 if (dev) 835 - printk("%sdev name=%s feat=0x%pNF\n", 835 + printk("%sdev name=%s feat=%pNF\n", 836 836 level, dev->name, &dev->features); 837 837 if (sk) 838 838 printk("%ssk family=%hu type=%u proto=%u\n",
+5 -1
net/dsa/tag_ocelot.c
··· 47 47 void *injection; 48 48 __be32 *prefix; 49 49 u32 rew_op = 0; 50 + u64 qos_class; 50 51 51 52 ocelot_xmit_get_vlan_info(skb, dp, &vlan_tci, &tag_type); 53 + 54 + qos_class = netdev_get_num_tc(netdev) ? 55 + netdev_get_prio_tc_map(netdev, skb->priority) : skb->priority; 52 56 53 57 injection = skb_push(skb, OCELOT_TAG_LEN); 54 58 prefix = skb_push(skb, OCELOT_SHORT_PREFIX_LEN); ··· 61 57 memset(injection, 0, OCELOT_TAG_LEN); 62 58 ocelot_ifh_set_bypass(injection, 1); 63 59 ocelot_ifh_set_src(injection, ds->num_ports); 64 - ocelot_ifh_set_qos_class(injection, skb->priority); 60 + ocelot_ifh_set_qos_class(injection, qos_class); 65 61 ocelot_ifh_set_vlan_tci(injection, vlan_tci); 66 62 ocelot_ifh_set_tag_type(injection, tag_type); 67 63
+5 -7
net/ipv4/af_inet.c
··· 154 154 155 155 kfree(rcu_dereference_protected(inet->inet_opt, 1)); 156 156 dst_release(rcu_dereference_protected(sk->sk_dst_cache, 1)); 157 - dst_release(sk->sk_rx_dst); 157 + dst_release(rcu_dereference_protected(sk->sk_rx_dst, 1)); 158 158 sk_refcnt_debug_dec(sk); 159 159 } 160 160 EXPORT_SYMBOL(inet_sock_destruct); ··· 1994 1994 1995 1995 ip_init(); 1996 1996 1997 + /* Initialise per-cpu ipv4 mibs */ 1998 + if (init_ipv4_mibs()) 1999 + panic("%s: Cannot init ipv4 mibs\n", __func__); 2000 + 1997 2001 /* Setup TCP slab cache for open requests. */ 1998 2002 tcp_init(); 1999 2003 ··· 2028 2024 2029 2025 if (init_inet_pernet_ops()) 2030 2026 pr_crit("%s: Cannot init ipv4 inet pernet ops\n", __func__); 2031 - /* 2032 - * Initialise per-cpu ipv4 mibs 2033 - */ 2034 - 2035 - if (init_ipv4_mibs()) 2036 - pr_crit("%s: Cannot init ipv4 mibs\n", __func__); 2037 2027 2038 2028 ipv4_proc_init(); 2039 2029
+1 -3
net/ipv4/inet_diag.c
··· 261 261 r->idiag_state = sk->sk_state; 262 262 r->idiag_timer = 0; 263 263 r->idiag_retrans = 0; 264 + r->idiag_expires = 0; 264 265 265 266 if (inet_diag_msg_attrs_fill(sk, skb, r, ext, 266 267 sk_user_ns(NETLINK_CB(cb->skb).sk), ··· 315 314 r->idiag_retrans = icsk->icsk_probes_out; 316 315 r->idiag_expires = 317 316 jiffies_delta_to_msecs(sk->sk_timer.expires - jiffies); 318 - } else { 319 - r->idiag_timer = 0; 320 - r->idiag_expires = 0; 321 317 } 322 318 323 319 if ((ext & (1 << (INET_DIAG_INFO - 1))) && handler->idiag_info_size) {
+1 -2
net/ipv4/tcp.c
··· 3012 3012 icsk->icsk_ack.rcv_mss = TCP_MIN_MSS; 3013 3013 memset(&tp->rx_opt, 0, sizeof(tp->rx_opt)); 3014 3014 __sk_dst_reset(sk); 3015 - dst_release(sk->sk_rx_dst); 3016 - sk->sk_rx_dst = NULL; 3015 + dst_release(xchg((__force struct dst_entry **)&sk->sk_rx_dst, NULL)); 3017 3016 tcp_saved_syn_free(tp); 3018 3017 tp->compressed_ack = 0; 3019 3018 tp->segs_in = 0;
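Editorial note: the tcp.c change collapses a racy read-then-NULL pair into a single `dst_release(xchg(..., NULL))`. A userspace analogue of that swap-and-release idiom, using C11 atomics and a made-up refcounted `obj` in place of `struct dst_entry`:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stddef.h>

/* Hypothetical refcounted object standing in for struct dst_entry. */
struct obj {
	int refs;
};

static void obj_release(struct obj *o)
{
	if (o)			/* dst_release() likewise tolerates NULL */
		o->refs--;
}

/* Atomically take ownership of the cached pointer and drop its
 * reference in one step, mirroring
 * dst_release(xchg(&sk->sk_rx_dst, NULL)): no window exists in which
 * two callers could both read the old pointer and double-release it. */
static void clear_cached(struct obj *_Atomic *slot)
{
	obj_release(atomic_exchange(slot, NULL));
}
```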
+1 -1
net/ipv4/tcp_input.c
··· 5787 5787 trace_tcp_probe(sk, skb); 5788 5788 5789 5789 tcp_mstamp_refresh(tp); 5790 - if (unlikely(!sk->sk_rx_dst)) 5790 + if (unlikely(!rcu_access_pointer(sk->sk_rx_dst))) 5791 5791 inet_csk(sk)->icsk_af_ops->sk_rx_dst_set(sk, skb); 5792 5792 /* 5793 5793 * Header prediction.
+7 -4
net/ipv4/tcp_ipv4.c
··· 1701 1701 struct sock *rsk; 1702 1702 1703 1703 if (sk->sk_state == TCP_ESTABLISHED) { /* Fast path */ 1704 - struct dst_entry *dst = sk->sk_rx_dst; 1704 + struct dst_entry *dst; 1705 + 1706 + dst = rcu_dereference_protected(sk->sk_rx_dst, 1707 + lockdep_sock_is_held(sk)); 1705 1708 1706 1709 sock_rps_save_rxhash(sk, skb); 1707 1710 sk_mark_napi_id(sk, skb); ··· 1712 1709 if (sk->sk_rx_dst_ifindex != skb->skb_iif || 1713 1710 !INDIRECT_CALL_1(dst->ops->check, ipv4_dst_check, 1714 1711 dst, 0)) { 1712 + RCU_INIT_POINTER(sk->sk_rx_dst, NULL); 1715 1713 dst_release(dst); 1716 - sk->sk_rx_dst = NULL; 1717 1714 } 1718 1715 } 1719 1716 tcp_rcv_established(sk, skb); ··· 1789 1786 skb->sk = sk; 1790 1787 skb->destructor = sock_edemux; 1791 1788 if (sk_fullsock(sk)) { 1792 - struct dst_entry *dst = READ_ONCE(sk->sk_rx_dst); 1789 + struct dst_entry *dst = rcu_dereference(sk->sk_rx_dst); 1793 1790 1794 1791 if (dst) 1795 1792 dst = dst_check(dst, 0); ··· 2204 2201 struct dst_entry *dst = skb_dst(skb); 2205 2202 2206 2203 if (dst && dst_hold_safe(dst)) { 2207 - sk->sk_rx_dst = dst; 2204 + rcu_assign_pointer(sk->sk_rx_dst, dst); 2208 2205 sk->sk_rx_dst_ifindex = skb->skb_iif; 2209 2206 } 2210 2207 }
+4 -4
net/ipv4/udp.c
··· 2250 2250 struct dst_entry *old; 2251 2251 2252 2252 if (dst_hold_safe(dst)) { 2253 - old = xchg(&sk->sk_rx_dst, dst); 2253 + old = xchg((__force struct dst_entry **)&sk->sk_rx_dst, dst); 2254 2254 dst_release(old); 2255 2255 return old != dst; 2256 2256 } ··· 2440 2440 struct dst_entry *dst = skb_dst(skb); 2441 2441 int ret; 2442 2442 2443 - if (unlikely(sk->sk_rx_dst != dst)) 2443 + if (unlikely(rcu_dereference(sk->sk_rx_dst) != dst)) 2444 2444 udp_sk_rx_dst_set(sk, dst); 2445 2445 2446 2446 ret = udp_unicast_rcv_skb(sk, skb, uh); ··· 2599 2599 2600 2600 skb->sk = sk; 2601 2601 skb->destructor = sock_efree; 2602 - dst = READ_ONCE(sk->sk_rx_dst); 2602 + dst = rcu_dereference(sk->sk_rx_dst); 2603 2603 2604 2604 if (dst) 2605 2605 dst = dst_check(dst, 0); ··· 3075 3075 { 3076 3076 seq_setwidth(seq, 127); 3077 3077 if (v == SEQ_START_TOKEN) 3078 - seq_puts(seq, " sl local_address rem_address st tx_queue " 3078 + seq_puts(seq, " sl local_address rem_address st tx_queue " 3079 3079 "rx_queue tr tm->when retrnsmt uid timeout " 3080 3080 "inode ref pointer drops"); 3081 3081 else {
+2
net/ipv6/ip6_vti.c
··· 808 808 struct net *net = dev_net(dev); 809 809 struct vti6_net *ip6n = net_generic(net, vti6_net_id); 810 810 811 + memset(&p1, 0, sizeof(p1)); 812 + 811 813 switch (cmd) { 812 814 case SIOCGETTUNNEL: 813 815 if (dev == ip6n->fb_tnl_dev) {
+3
net/ipv6/raw.c
··· 1020 1020 struct raw6_sock *rp = raw6_sk(sk); 1021 1021 int val; 1022 1022 1023 + if (optlen < sizeof(val)) 1024 + return -EINVAL; 1025 + 1023 1026 if (copy_from_sockptr(&val, optval, sizeof(val))) 1024 1027 return -EFAULT; 1025 1028
-1
net/ipv6/sit.c
··· 1933 1933 return 0; 1934 1934 1935 1935 err_reg_dev: 1936 - ipip6_dev_free(sitn->fb_tunnel_dev); 1937 1936 free_netdev(sitn->fb_tunnel_dev); 1938 1937 err_alloc_dev: 1939 1938 return err;
+7 -4
net/ipv6/tcp_ipv6.c
··· 107 107 if (dst && dst_hold_safe(dst)) { 108 108 const struct rt6_info *rt = (const struct rt6_info *)dst; 109 109 110 - sk->sk_rx_dst = dst; 110 + rcu_assign_pointer(sk->sk_rx_dst, dst); 111 111 sk->sk_rx_dst_ifindex = skb->skb_iif; 112 112 sk->sk_rx_dst_cookie = rt6_get_cookie(rt); 113 113 } ··· 1505 1505 opt_skb = skb_clone(skb, sk_gfp_mask(sk, GFP_ATOMIC)); 1506 1506 1507 1507 if (sk->sk_state == TCP_ESTABLISHED) { /* Fast path */ 1508 - struct dst_entry *dst = sk->sk_rx_dst; 1508 + struct dst_entry *dst; 1509 + 1510 + dst = rcu_dereference_protected(sk->sk_rx_dst, 1511 + lockdep_sock_is_held(sk)); 1509 1512 1510 1513 sock_rps_save_rxhash(sk, skb); 1511 1514 sk_mark_napi_id(sk, skb); ··· 1516 1513 if (sk->sk_rx_dst_ifindex != skb->skb_iif || 1517 1514 INDIRECT_CALL_1(dst->ops->check, ip6_dst_check, 1518 1515 dst, sk->sk_rx_dst_cookie) == NULL) { 1516 + RCU_INIT_POINTER(sk->sk_rx_dst, NULL); 1519 1517 dst_release(dst); 1520 - sk->sk_rx_dst = NULL; 1521 1518 } 1522 1519 } 1523 1520 ··· 1877 1874 skb->sk = sk; 1878 1875 skb->destructor = sock_edemux; 1879 1876 if (sk_fullsock(sk)) { 1880 - struct dst_entry *dst = READ_ONCE(sk->sk_rx_dst); 1877 + struct dst_entry *dst = rcu_dereference(sk->sk_rx_dst); 1881 1878 1882 1879 if (dst) 1883 1880 dst = dst_check(dst, sk->sk_rx_dst_cookie);
+3 -3
net/ipv6/udp.c
··· 956 956 struct dst_entry *dst = skb_dst(skb); 957 957 int ret; 958 958 959 - if (unlikely(sk->sk_rx_dst != dst)) 959 + if (unlikely(rcu_dereference(sk->sk_rx_dst) != dst)) 960 960 udp6_sk_rx_dst_set(sk, dst); 961 961 962 962 if (!uh->check && !udp_sk(sk)->no_check6_rx) { ··· 1070 1070 1071 1071 skb->sk = sk; 1072 1072 skb->destructor = sock_efree; 1073 - dst = READ_ONCE(sk->sk_rx_dst); 1073 + dst = rcu_dereference(sk->sk_rx_dst); 1074 1074 1075 1075 if (dst) 1076 1076 dst = dst_check(dst, sk->sk_rx_dst_cookie); ··· 1204 1204 kfree_skb(skb); 1205 1205 return -EINVAL; 1206 1206 } 1207 - if (skb->len > cork->gso_size * UDP_MAX_SEGMENTS) { 1207 + if (datalen > cork->gso_size * UDP_MAX_SEGMENTS) { 1208 1208 kfree_skb(skb); 1209 1209 return -EINVAL; 1210 1210 }
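Editorial note: the last chunk of the udp.c hunk swaps `skb->len` for `datalen` in the GSO bound, because `skb->len` includes protocol headers and so overstates how much payload the segment budget must cover. A sketch of the corrected check, with the segment cap treated as an assumption:

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in for UDP_MAX_SEGMENTS; treat the exact value as an
 * assumption of this sketch. */
#define MAX_SEGS 64u

/* Bound the *payload* length, not the full packet length: headers
 * must not count toward the per-send segment budget. */
static bool gso_len_ok(unsigned int datalen, unsigned int gso_size)
{
	return datalen <= gso_size * MAX_SEGS;
}
```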
+3 -2
net/mac80211/agg-rx.c
··· 9 9 * Copyright 2007, Michael Wu <flamingice@sourmilk.net> 10 10 * Copyright 2007-2010, Intel Corporation 11 11 * Copyright(c) 2015-2017 Intel Deutschland GmbH 12 - * Copyright (C) 2018-2020 Intel Corporation 12 + * Copyright (C) 2018-2021 Intel Corporation 13 13 */ 14 14 15 15 /** ··· 191 191 sband = ieee80211_get_sband(sdata); 192 192 if (!sband) 193 193 return; 194 - he_cap = ieee80211_get_he_iftype_cap(sband, sdata->vif.type); 194 + he_cap = ieee80211_get_he_iftype_cap(sband, 195 + ieee80211_vif_type_p2p(&sdata->vif)); 195 196 if (!he_cap) 196 197 return; 197 198
+11 -5
net/mac80211/agg-tx.c
··· 9 9 * Copyright 2007, Michael Wu <flamingice@sourmilk.net> 10 10 * Copyright 2007-2010, Intel Corporation 11 11 * Copyright(c) 2015-2017 Intel Deutschland GmbH 12 - * Copyright (C) 2018 - 2020 Intel Corporation 12 + * Copyright (C) 2018 - 2021 Intel Corporation 13 13 */ 14 14 15 15 #include <linux/ieee80211.h> ··· 106 106 mgmt->u.action.u.addba_req.start_seq_num = 107 107 cpu_to_le16(start_seq_num << 4); 108 108 109 - ieee80211_tx_skb(sdata, skb); 109 + ieee80211_tx_skb_tid(sdata, skb, tid); 110 110 } 111 111 112 112 void ieee80211_send_bar(struct ieee80211_vif *vif, u8 *ra, u16 tid, u16 ssn) ··· 213 213 struct ieee80211_txq *txq = sta->sta.txq[tid]; 214 214 struct txq_info *txqi; 215 215 216 + lockdep_assert_held(&sta->ampdu_mlme.mtx); 217 + 216 218 if (!txq) 217 219 return; 218 220 ··· 292 290 ieee80211_assign_tid_tx(sta, tid, NULL); 293 291 294 292 ieee80211_agg_splice_finish(sta->sdata, tid); 295 - ieee80211_agg_start_txq(sta, tid, false); 296 293 297 294 kfree_rcu(tid_tx, rcu_head); 298 295 } ··· 481 480 482 481 /* send AddBA request */ 483 482 ieee80211_send_addba_request(sdata, sta->sta.addr, tid, 484 - tid_tx->dialog_token, 485 - sta->tid_seq[tid] >> 4, 483 + tid_tx->dialog_token, tid_tx->ssn, 486 484 buf_size, tid_tx->timeout); 487 485 488 486 WARN_ON(test_and_set_bit(HT_AGG_STATE_SENT_ADDBA, &tid_tx->state)); ··· 523 523 524 524 params.ssn = sta->tid_seq[tid] >> 4; 525 525 ret = drv_ampdu_action(local, sdata, &params); 526 + tid_tx->ssn = params.ssn; 526 527 if (ret == IEEE80211_AMPDU_TX_START_DELAY_ADDBA) { 527 528 return; 528 529 } else if (ret == IEEE80211_AMPDU_TX_START_IMMEDIATE) { ··· 890 889 { 891 890 struct ieee80211_sub_if_data *sdata = sta->sdata; 892 891 bool send_delba = false; 892 + bool start_txq = false; 893 893 894 894 ht_dbg(sdata, "Stopping Tx BA session for %pM tid %d\n", 895 895 sta->sta.addr, tid); ··· 908 906 send_delba = true; 909 907 910 908 ieee80211_remove_tid_tx(sta, tid); 909 + start_txq = true; 911 910 912 911 unlock_sta:
913 912 spin_unlock_bh(&sta->lock); 913 + 914 + if (start_txq) 915 + ieee80211_agg_start_txq(sta, tid, false); 914 916 915 917 if (send_delba) 916 918 ieee80211_send_delba(sdata, sta->sta.addr, tid,
+3
net/mac80211/cfg.c
··· 1264 1264 return 0; 1265 1265 1266 1266 error: 1267 + mutex_lock(&local->mtx); 1267 1268 ieee80211_vif_release_channel(sdata); 1269 + mutex_unlock(&local->mtx); 1270 + 1268 1271 return err; 1269 1272 } 1270 1273
+4 -1
net/mac80211/driver-ops.h
··· 1219 1219 { 1220 1220 struct ieee80211_sub_if_data *sdata = vif_to_sdata(txq->txq.vif); 1221 1221 1222 - if (local->in_reconfig) 1222 + /* In reconfig don't transmit now, but mark for waking later */ 1223 + if (local->in_reconfig) { 1224 + set_bit(IEEE80211_TXQ_STOP_NETIF_TX, &txq->flags); 1223 1225 return; 1226 + } 1224 1227 1225 1228 if (!check_sdata_in_driver(sdata)) 1226 1229 return;
+10 -3
net/mac80211/mlme.c
··· 2452 2452 u16 tx_time) 2453 2453 { 2454 2454 struct ieee80211_if_managed *ifmgd = &sdata->u.mgd; 2455 - u16 tid = ieee80211_get_tid(hdr); 2456 - int ac = ieee80211_ac_from_tid(tid); 2457 - struct ieee80211_sta_tx_tspec *tx_tspec = &ifmgd->tx_tspec[ac]; 2455 + u16 tid; 2456 + int ac; 2457 + struct ieee80211_sta_tx_tspec *tx_tspec; 2458 2458 unsigned long now = jiffies; 2459 + 2460 + if (!ieee80211_is_data_qos(hdr->frame_control)) 2461 + return; 2462 + 2463 + tid = ieee80211_get_tid(hdr); 2464 + ac = ieee80211_ac_from_tid(tid); 2465 + tx_tspec = &ifmgd->tx_tspec[ac]; 2459 2466 2460 2467 if (likely(!tx_tspec->admitted_time)) 2461 2468 return;
+1
net/mac80211/rx.c
··· 2944 2944 if (!fwd_skb) 2945 2945 goto out; 2946 2946 2947 + fwd_skb->dev = sdata->dev; 2947 2948 fwd_hdr = (struct ieee80211_hdr *) fwd_skb->data; 2948 2949 fwd_hdr->frame_control &= ~cpu_to_le16(IEEE80211_FCTL_RETRY); 2949 2950 info = IEEE80211_SKB_CB(fwd_skb);
+12 -9
net/mac80211/sta_info.c
··· 644 644 /* check if STA exists already */ 645 645 if (sta_info_get_bss(sdata, sta->sta.addr)) { 646 646 err = -EEXIST; 647 - goto out_err; 647 + goto out_cleanup; 648 648 } 649 649 650 650 sinfo = kzalloc(sizeof(struct station_info), GFP_KERNEL); 651 651 if (!sinfo) { 652 652 err = -ENOMEM; 653 - goto out_err; 653 + goto out_cleanup; 654 654 } 655 655 656 656 local->num_sta++; ··· 667 667 668 668 list_add_tail_rcu(&sta->list, &local->sta_list); 669 669 670 + /* update channel context before notifying the driver about state 671 + * change, this enables driver using the updated channel context right away. 672 + */ 673 + if (sta->sta_state >= IEEE80211_STA_ASSOC) { 674 + ieee80211_recalc_min_chandef(sta->sdata); 675 + if (!sta->sta.support_p2p_ps) 676 + ieee80211_recalc_p2p_go_ps_allowed(sta->sdata); 677 + } 678 + 670 679 /* notify driver */ 671 680 err = sta_info_insert_drv_state(local, sdata, sta); 672 681 if (err) 673 682 goto out_remove; 674 683 675 684 set_sta_flag(sta, WLAN_STA_INSERTED); 676 - 677 - if (sta->sta_state >= IEEE80211_STA_ASSOC) { 678 - ieee80211_recalc_min_chandef(sta->sdata); 679 - if (!sta->sta.support_p2p_ps) 680 - ieee80211_recalc_p2p_go_ps_allowed(sta->sdata); 681 - } 682 685 683 686 /* accept BA sessions now */ 684 687 clear_sta_flag(sta, WLAN_STA_BLOCK_BA); ··· 709 706 out_drop_sta: 710 707 local->num_sta--; 711 708 synchronize_net(); 709 + out_cleanup: 712 710 cleanup_single_sta(sta); 713 - out_err: 714 711 mutex_unlock(&local->sta_mtx); 715 712 kfree(sinfo); 716 713 rcu_read_lock();
+2
net/mac80211/sta_info.h
··· 176 176 * @failed_bar_ssn: ssn of the last failed BAR tx attempt 177 177 * @bar_pending: BAR needs to be re-sent 178 178 * @amsdu: support A-MSDU withing A-MDPU 179 + * @ssn: starting sequence number of the session 179 180 * 180 181 * This structure's lifetime is managed by RCU, assignments to 181 182 * the array holding it must hold the aggregation mutex. ··· 200 199 u8 stop_initiator; 201 200 bool tx_stop; 202 201 u16 buf_size; 202 + u16 ssn; 203 203 204 204 u16 failed_bar_ssn; 205 205 bool bar_pending;
+5 -5
net/mac80211/tx.c
··· 1822 1822 struct ieee80211_tx_info *info = IEEE80211_SKB_CB(tx->skb); 1823 1823 ieee80211_tx_result res = TX_CONTINUE; 1824 1824 1825 + if (!ieee80211_hw_check(&tx->local->hw, HAS_RATE_CONTROL)) 1826 + CALL_TXH(ieee80211_tx_h_rate_ctrl); 1827 + 1825 1828 if (unlikely(info->flags & IEEE80211_TX_INTFL_RETRANSMISSION)) { 1826 1829 __skb_queue_tail(&tx->skbs, tx->skb); 1827 1830 tx->skb = NULL; 1828 1831 goto txh_done; 1829 1832 } 1830 - 1831 - if (!ieee80211_hw_check(&tx->local->hw, HAS_RATE_CONTROL)) 1832 - CALL_TXH(ieee80211_tx_h_rate_ctrl); 1833 1833 1834 1834 CALL_TXH(ieee80211_tx_h_michael_mic_add); 1835 1835 CALL_TXH(ieee80211_tx_h_sequence); ··· 4191 4191 4192 4192 ieee80211_aggr_check(sdata, sta, skb); 4193 4193 4194 + sk_pacing_shift_update(skb->sk, sdata->local->hw.tx_sk_pacing_shift); 4195 + 4194 4196 if (sta) { 4195 4197 struct ieee80211_fast_tx *fast_tx; 4196 - 4197 - sk_pacing_shift_update(skb->sk, sdata->local->hw.tx_sk_pacing_shift); 4198 4198 4199 4199 fast_tx = rcu_dereference(sta->fast_tx); 4200 4200
+14 -9
net/mac80211/util.c
··· 943 943 struct ieee802_11_elems *elems) 944 944 { 945 945 const void *data = elem->data + 1; 946 - u8 len = elem->datalen - 1; 946 + u8 len; 947 + 948 + if (!elem->datalen) 949 + return; 950 + 951 + len = elem->datalen - 1; 947 952 948 953 switch (elem->data[0]) { 949 954 case WLAN_EID_EXT_HE_MU_EDCA: ··· 2068 2063 chandef.chan = chan; 2069 2064 2070 2065 skb = ieee80211_probereq_get(&local->hw, src, ssid, ssid_len, 2071 - 100 + ie_len); 2066 + local->scan_ies_len + ie_len); 2072 2067 if (!skb) 2073 2068 return NULL; 2074 2069 ··· 2651 2646 mutex_unlock(&local->sta_mtx); 2652 2647 } 2653 2648 2649 + /* 2650 + * If this is for hw restart things are still running. 2651 + * We may want to change that later, however. 2652 + */ 2653 + if (local->open_count && (!suspended || reconfig_due_to_wowlan)) 2654 + drv_reconfig_complete(local, IEEE80211_RECONFIG_TYPE_RESTART); 2655 + 2654 2656 if (local->in_reconfig) { 2655 2657 local->in_reconfig = false; 2656 2658 barrier(); ··· 2675 2663 ieee80211_wake_queues_by_reason(hw, IEEE80211_MAX_QUEUE_MAP, 2676 2664 IEEE80211_QUEUE_STOP_REASON_SUSPEND, 2677 2665 false); 2678 - 2679 - /* 2680 - * If this is for hw restart things are still running. 2681 - * We may want to change that later, however. 2682 - */ 2683 - if (local->open_count && (!suspended || reconfig_due_to_wowlan)) 2684 - drv_reconfig_complete(local, IEEE80211_RECONFIG_TYPE_RESTART); 2685 2666 2686 2667 if (!suspended) 2687 2668 return 0;
+3
net/mptcp/pm_netlink.c
··· 700 700 701 701 msk_owned_by_me(msk); 702 702 703 + if (sk->sk_state == TCP_LISTEN) 704 + return; 705 + 703 706 if (!rm_list->nr) 704 707 return; 705 708
+4 -2
net/mptcp/protocol.c
··· 1524 1524 int ret = 0; 1525 1525 1526 1526 prev_ssk = ssk; 1527 - mptcp_flush_join_list(msk); 1527 + __mptcp_flush_join_list(msk); 1528 1528 ssk = mptcp_subflow_get_send(msk); 1529 1529 1530 1530 /* First check. If the ssk has changed since ··· 2879 2879 */ 2880 2880 if (WARN_ON_ONCE(!new_mptcp_sock)) { 2881 2881 tcp_sk(newsk)->is_mptcp = 0; 2882 - return newsk; 2882 + goto out; 2883 2883 } 2884 2884 2885 2885 /* acquire the 2nd reference for the owning socket */ ··· 2891 2891 MPTCP_MIB_MPCAPABLEPASSIVEFALLBACK); 2892 2892 } 2893 2893 2894 + out: 2895 + newsk->sk_kern_sock = kern; 2894 2896 return newsk; 2895 2897 } 2896 2898
-1
net/mptcp/sockopt.c
··· 525 525 case TCP_NODELAY: 526 526 case TCP_THIN_LINEAR_TIMEOUTS: 527 527 case TCP_CONGESTION: 528 - case TCP_ULP: 529 528 case TCP_CORK: 530 529 case TCP_KEEPIDLE: 531 530 case TCP_KEEPINTVL:
+5 -1
net/ncsi/ncsi-netlink.c
··· 112 112 pnest = nla_nest_start_noflag(skb, NCSI_PKG_ATTR); 113 113 if (!pnest) 114 114 return -ENOMEM; 115 - nla_put_u32(skb, NCSI_PKG_ATTR_ID, np->id); 115 + rc = nla_put_u32(skb, NCSI_PKG_ATTR_ID, np->id); 116 + if (rc) { 117 + nla_nest_cancel(skb, pnest); 118 + return rc; 119 + } 116 120 if ((0x1 << np->id) == ndp->package_whitelist) 117 121 nla_put_flag(skb, NCSI_PKG_ATTR_FORCED); 118 122 cnest = nla_nest_start_noflag(skb, NCSI_PKG_ATTR_CHANNEL_LIST);
+3 -2
net/netfilter/nf_conntrack_netlink.c
··· 1195 1195 } 1196 1196 hlist_nulls_for_each_entry(h, n, &nf_conntrack_hash[cb->args[0]], 1197 1197 hnnode) { 1198 - if (NF_CT_DIRECTION(h) != IP_CT_DIR_ORIGINAL) 1199 - continue; 1200 1198 ct = nf_ct_tuplehash_to_ctrack(h); 1201 1199 if (nf_ct_is_expired(ct)) { 1202 1200 if (i < ARRAY_SIZE(nf_ct_evict) && ··· 1204 1206 } 1205 1207 1206 1208 if (!net_eq(net, nf_ct_net(ct))) 1209 + continue; 1210 + 1211 + if (NF_CT_DIRECTION(h) != IP_CT_DIR_ORIGINAL) 1207 1212 continue; 1208 1213 1209 1214 if (cb->args[1]) {
+2 -2
net/netfilter/nf_tables_api.c
··· 4481 4481 static void nft_set_catchall_destroy(const struct nft_ctx *ctx, 4482 4482 struct nft_set *set) 4483 4483 { 4484 - struct nft_set_elem_catchall *catchall; 4484 + struct nft_set_elem_catchall *next, *catchall; 4485 4485 4486 - list_for_each_entry_rcu(catchall, &set->catchall_list, list) { 4486 + list_for_each_entry_safe(catchall, next, &set->catchall_list, list) { 4487 4487 list_del_rcu(&catchall->list); 4488 4488 nft_set_elem_destroy(set, catchall->elem, true); 4489 4489 kfree_rcu(catchall);
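Editorial note: the nf_tables fix swaps list_for_each_entry_rcu() for list_for_each_entry_safe() because the loop body deletes the entry it stands on; the _safe variant keeps a lookahead cursor so the traversal survives the deletion. The same rule in miniature, on a hand-rolled singly linked list (node/push/free_all are invented for the sketch):

```c
#include <assert.h>
#include <stdlib.h>

/* Minimal singly linked list; names are made up for the sketch. */
struct node {
	int v;
	struct node *next;
};

static struct node *push(struct node *head, int v)
{
	struct node *n = malloc(sizeof(*n));

	n->v = v;
	n->next = head;
	return n;
}

/* Destroy every node. The next pointer must be cached before free();
 * reading n->next afterwards would be a use-after-free -- the hazard
 * the list_for_each_entry_safe() conversion above avoids. */
static int free_all(struct node *head)
{
	struct node *n = head, *next;
	int freed = 0;

	while (n) {
		next = n->next;	/* grabbed before n is destroyed */
		free(n);
		n = next;
		freed++;
	}
	return freed;
}
```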
+7 -1
net/openvswitch/flow.c
··· 34 34 #include <net/mpls.h> 35 35 #include <net/ndisc.h> 36 36 #include <net/nsh.h> 37 + #include <net/netfilter/nf_conntrack_zones.h> 37 38 38 39 #include "conntrack.h" 39 40 #include "datapath.h" ··· 861 860 #endif 862 861 bool post_ct = false; 863 862 int res, err; 863 + u16 zone = 0; 864 864 865 865 /* Extract metadata from packet. */ 866 866 if (tun_info) { ··· 900 898 key->recirc_id = tc_ext ? tc_ext->chain : 0; 901 899 OVS_CB(skb)->mru = tc_ext ? tc_ext->mru : 0; 902 900 post_ct = tc_ext ? tc_ext->post_ct : false; 901 + zone = post_ct ? tc_ext->zone : 0; 903 902 } else { 904 903 key->recirc_id = 0; 905 904 } ··· 909 906 #endif 910 907 911 908 err = key_extract(skb, key); 912 - if (!err) 909 + if (!err) { 913 910 ovs_ct_fill_key(skb, key, post_ct); /* Must be after key_extract(). */ 911 + if (post_ct && !skb_get_nfct(skb)) 912 + key->ct_zone = zone; 913 + } 914 914 return err; 915 915 } 916 916
+3 -2
net/packet/af_packet.c
··· 4492 4492 } 4493 4493 4494 4494 out_free_pg_vec: 4495 - bitmap_free(rx_owner_map); 4496 - if (pg_vec) 4495 + if (pg_vec) { 4496 + bitmap_free(rx_owner_map); 4497 4497 free_pg_vec(pg_vec, order, req->tp_block_nr); 4498 + } 4498 4499 out: 4499 4500 return err; 4500 4501 }
+3
net/phonet/pep.c
··· 868 868 869 869 err = pep_accept_conn(newsk, skb); 870 870 if (err) { 871 + __sock_put(sk); 871 872 sock_put(newsk); 872 873 newsk = NULL; 873 874 goto drop; ··· 947 946 ret = -EBUSY; 948 947 else if (sk->sk_state == TCP_ESTABLISHED) 949 948 ret = -EISCONN; 949 + else if (!pn->pn_sk.sobject) 950 + ret = -EADDRNOTAVAIL; 950 951 else 951 952 ret = pep_sock_enable(sk, NULL, 0); 952 953 release_sock(sk);
+1
net/rds/connection.c
··· 253 253 * should end up here, but if it 254 254 * does, reset/destroy the connection. 255 255 */ 256 + kfree(conn->c_path); 256 257 kmem_cache_free(rds_conn_slab, conn); 257 258 conn = ERR_PTR(-EOPNOTSUPP); 258 259 goto out;
+8 -7
net/sched/act_ct.c
··· 690 690 u8 family, u16 zone, bool *defrag) 691 691 { 692 692 enum ip_conntrack_info ctinfo; 693 - struct qdisc_skb_cb cb; 694 693 struct nf_conn *ct; 695 694 int err = 0; 696 695 bool frag; 696 + u16 mru; 697 697 698 698 /* Previously seen (loopback)? Ignore. */ 699 699 ct = nf_ct_get(skb, &ctinfo); ··· 708 708 return err; 709 709 710 710 skb_get(skb); 711 - cb = *qdisc_skb_cb(skb); 711 + mru = tc_skb_cb(skb)->mru; 712 712 713 713 if (family == NFPROTO_IPV4) { 714 714 enum ip_defrag_users user = IP_DEFRAG_CONNTRACK_IN + zone; ··· 722 722 723 723 if (!err) { 724 724 *defrag = true; 725 - cb.mru = IPCB(skb)->frag_max_size; 725 + mru = IPCB(skb)->frag_max_size; 726 726 } 727 727 } else { /* NFPROTO_IPV6 */ 728 728 #if IS_ENABLED(CONFIG_NF_DEFRAG_IPV6) ··· 735 735 736 736 if (!err) { 737 737 *defrag = true; 738 - cb.mru = IP6CB(skb)->frag_max_size; 738 + mru = IP6CB(skb)->frag_max_size; 739 739 } 740 740 #else 741 741 err = -EOPNOTSUPP; ··· 744 744 } 745 745 746 746 if (err != -EINPROGRESS) 747 - *qdisc_skb_cb(skb) = cb; 747 + tc_skb_cb(skb)->mru = mru; 748 748 skb_clear_hash(skb); 749 749 skb->ignore_df = 1; 750 750 return err; ··· 963 963 tcf_action_update_bstats(&c->common, skb); 964 964 965 965 if (clear) { 966 - qdisc_skb_cb(skb)->post_ct = false; 966 + tc_skb_cb(skb)->post_ct = false; 967 967 ct = nf_ct_get(skb, &ctinfo); 968 968 if (ct) { 969 969 nf_conntrack_put(&ct->ct_general); ··· 1048 1048 out_push: 1049 1049 skb_push_rcsum(skb, nh_ofs); 1050 1050 1051 - qdisc_skb_cb(skb)->post_ct = true; 1051 + tc_skb_cb(skb)->post_ct = true; 1052 + tc_skb_cb(skb)->zone = p->zone; 1052 1053 out_clear: 1053 1054 if (defrag) 1054 1055 qdisc_skb_cb(skb)->pkt_len = skb->len;
+6 -2
net/sched/cls_api.c
··· 1617 1617 1618 1618 /* If we missed on some chain */ 1619 1619 if (ret == TC_ACT_UNSPEC && last_executed_chain) { 1620 + struct tc_skb_cb *cb = tc_skb_cb(skb); 1621 + 1620 1622 ext = tc_skb_ext_alloc(skb); 1621 1623 if (WARN_ON_ONCE(!ext)) 1622 1624 return TC_ACT_SHOT; 1623 1625 ext->chain = last_executed_chain; 1624 - ext->mru = qdisc_skb_cb(skb)->mru; 1625 - ext->post_ct = qdisc_skb_cb(skb)->post_ct; 1626 + ext->mru = cb->mru; 1627 + ext->post_ct = cb->post_ct; 1628 + ext->zone = cb->zone; 1626 1629 } 1627 1630 1628 1631 return ret; ··· 3690 3687 entry->mpls_mangle.ttl = tcf_mpls_ttl(act); 3691 3688 break; 3692 3689 default: 3690 + err = -EOPNOTSUPP; 3693 3691 goto err_out_locked; 3694 3692 } 3695 3693 } else if (is_tcf_skbedit_ptype(act)) {
+4 -2
net/sched/cls_flower.c
··· 19 19 20 20 #include <net/sch_generic.h> 21 21 #include <net/pkt_cls.h> 22 + #include <net/pkt_sched.h> 22 23 #include <net/ip.h> 23 24 #include <net/flow_dissector.h> 24 25 #include <net/geneve.h> ··· 310 309 struct tcf_result *res) 311 310 { 312 311 struct cls_fl_head *head = rcu_dereference_bh(tp->root); 313 - bool post_ct = qdisc_skb_cb(skb)->post_ct; 312 + bool post_ct = tc_skb_cb(skb)->post_ct; 313 + u16 zone = tc_skb_cb(skb)->zone; 314 314 struct fl_flow_key skb_key; 315 315 struct fl_flow_mask *mask; 316 316 struct cls_fl_filter *f; ··· 329 327 skb_flow_dissect_ct(skb, &mask->dissector, &skb_key, 330 328 fl_ct_info_to_flower_map, 331 329 ARRAY_SIZE(fl_ct_info_to_flower_map), 332 - post_ct); 330 + post_ct, zone); 333 331 skb_flow_dissect_hash(skb, &mask->dissector, &skb_key); 334 332 skb_flow_dissect(skb, &mask->dissector, &skb_key, 335 333 FLOW_DISSECTOR_F_STOP_BEFORE_ENCAP);
+1 -5
net/sched/sch_cake.c
··· 2736 2736 q->tins = kvcalloc(CAKE_MAX_TINS, sizeof(struct cake_tin_data), 2737 2737 GFP_KERNEL); 2738 2738 if (!q->tins) 2739 - goto nomem; 2739 + return -ENOMEM; 2740 2740 2741 2741 for (i = 0; i < CAKE_MAX_TINS; i++) { 2742 2742 struct cake_tin_data *b = q->tins + i; ··· 2766 2766 q->min_netlen = ~0; 2767 2767 q->min_adjlen = ~0; 2768 2768 return 0; 2769 - 2770 - nomem: 2771 - cake_destroy(sch); 2772 - return -ENOMEM; 2773 2769 } 2774 2770 2775 2771 static int cake_dump(struct Qdisc *sch, struct sk_buff *skb)
+2 -2
net/sched/sch_ets.c
··· 666 666 } 667 667 } 668 668 for (i = q->nbands; i < oldbands; i++) { 669 - qdisc_tree_flush_backlog(q->classes[i].qdisc); 670 - if (i >= q->nstrict) 669 + if (i >= q->nstrict && q->classes[i].qdisc->q.qlen) 671 670 list_del(&q->classes[i].alist); 671 + qdisc_tree_flush_backlog(q->classes[i].qdisc); 672 672 } 673 673 q->nstrict = nstrict; 674 674 memcpy(q->prio2band, priomap, sizeof(priomap));
+2 -1
net/sched/sch_frag.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB 2 2 #include <net/netlink.h> 3 3 #include <net/sch_generic.h> 4 + #include <net/pkt_sched.h> 4 5 #include <net/dst.h> 5 6 #include <net/ip.h> 6 7 #include <net/ip6_fib.h> ··· 138 137 139 138 int sch_frag_xmit_hook(struct sk_buff *skb, int (*xmit)(struct sk_buff *skb)) 140 139 { 141 - u16 mru = qdisc_skb_cb(skb)->mru; 140 + u16 mru = tc_skb_cb(skb)->mru; 142 141 int err; 143 142 144 143 if (mru && skb->len > mru + skb->dev->hard_header_len)
+6 -6
net/sctp/diag.c
··· 290 290 return err; 291 291 } 292 292 293 - static int sctp_sock_dump(struct sctp_transport *tsp, void *p) 293 + static int sctp_sock_dump(struct sctp_endpoint *ep, struct sctp_transport *tsp, void *p) 294 294 { 295 - struct sctp_endpoint *ep = tsp->asoc->ep; 296 295 struct sctp_comm_param *commp = p; 297 296 struct sock *sk = ep->base.sk; 298 297 struct sk_buff *skb = commp->skb; ··· 301 302 int err = 0; 302 303 303 304 lock_sock(sk); 305 + if (ep != tsp->asoc->ep) 306 + goto release; 304 307 list_for_each_entry(assoc, &ep->asocs, asocs) { 305 308 if (cb->args[4] < cb->args[1]) 306 309 goto next; ··· 345 344 return err; 346 345 } 347 346 348 - static int sctp_sock_filter(struct sctp_transport *tsp, void *p) 347 + static int sctp_sock_filter(struct sctp_endpoint *ep, struct sctp_transport *tsp, void *p) 349 348 { 350 - struct sctp_endpoint *ep = tsp->asoc->ep; 351 349 struct sctp_comm_param *commp = p; 352 350 struct sock *sk = ep->base.sk; 353 351 const struct inet_diag_req_v2 *r = commp->r; ··· 505 505 if (!(idiag_states & ~(TCPF_LISTEN | TCPF_CLOSE))) 506 506 goto done; 507 507 508 - sctp_for_each_transport(sctp_sock_filter, sctp_sock_dump, 509 - net, &pos, &commp); 508 + sctp_transport_traverse_process(sctp_sock_filter, sctp_sock_dump, 509 + net, &pos, &commp); 510 510 cb->args[2] = pos; 511 511 512 512 done:
+15 -8
net/sctp/endpointola.c
··· 184 184 } 185 185 186 186 /* Final destructor for endpoint. */ 187 + static void sctp_endpoint_destroy_rcu(struct rcu_head *head) 188 + { 189 + struct sctp_endpoint *ep = container_of(head, struct sctp_endpoint, rcu); 190 + struct sock *sk = ep->base.sk; 191 + 192 + sctp_sk(sk)->ep = NULL; 193 + sock_put(sk); 194 + 195 + kfree(ep); 196 + SCTP_DBG_OBJCNT_DEC(ep); 197 + } 198 + 187 199 static void sctp_endpoint_destroy(struct sctp_endpoint *ep) 188 200 { 189 201 struct sock *sk; ··· 225 213 if (sctp_sk(sk)->bind_hash) 226 214 sctp_put_port(sk); 227 215 228 - sctp_sk(sk)->ep = NULL; 229 - /* Give up our hold on the sock */ 230 - sock_put(sk); 231 - 232 - kfree(ep); 233 - SCTP_DBG_OBJCNT_DEC(ep); 216 + call_rcu(&ep->rcu, sctp_endpoint_destroy_rcu); 234 217 } 235 218 236 219 /* Hold a reference to an endpoint. */ 237 - void sctp_endpoint_hold(struct sctp_endpoint *ep) 220 + int sctp_endpoint_hold(struct sctp_endpoint *ep) 238 221 { 239 - refcount_inc(&ep->base.refcnt); 222 + return refcount_inc_not_zero(&ep->base.refcnt); 240 223 } 241 224 242 225 /* Release a reference to an endpoint and clean up if there are
+15 -8
net/sctp/socket.c
··· 5338 5338 } 5339 5339 EXPORT_SYMBOL_GPL(sctp_transport_lookup_process); 5340 5340 5341 - int sctp_for_each_transport(int (*cb)(struct sctp_transport *, void *), 5342 - int (*cb_done)(struct sctp_transport *, void *), 5343 - struct net *net, int *pos, void *p) { 5341 + int sctp_transport_traverse_process(sctp_callback_t cb, sctp_callback_t cb_done, 5342 + struct net *net, int *pos, void *p) 5343 + { 5344 5344 struct rhashtable_iter hti; 5345 5345 struct sctp_transport *tsp; 5346 + struct sctp_endpoint *ep; 5346 5347 int ret; 5347 5348 5348 5349 again: ··· 5352 5351 5353 5352 tsp = sctp_transport_get_idx(net, &hti, *pos + 1); 5354 5353 for (; !IS_ERR_OR_NULL(tsp); tsp = sctp_transport_get_next(net, &hti)) { 5355 - ret = cb(tsp, p); 5356 - if (ret) 5357 - break; 5354 + ep = tsp->asoc->ep; 5355 + if (sctp_endpoint_hold(ep)) { /* asoc can be peeled off */ 5356 + ret = cb(ep, tsp, p); 5357 + if (ret) 5358 + break; 5359 + sctp_endpoint_put(ep); 5360 + } 5358 5361 (*pos)++; 5359 5362 sctp_transport_put(tsp); 5360 5363 } 5361 5364 sctp_transport_walk_stop(&hti); 5362 5365 5363 5366 if (ret) { 5364 - if (cb_done && !cb_done(tsp, p)) { 5367 + if (cb_done && !cb_done(ep, tsp, p)) { 5365 5368 (*pos)++; 5369 + sctp_endpoint_put(ep); 5366 5370 sctp_transport_put(tsp); 5367 5371 goto again; 5368 5372 } 5373 + sctp_endpoint_put(ep); 5369 5374 sctp_transport_put(tsp); 5370 5375 } 5371 5376 5372 5377 return ret; 5373 5378 } 5374 - EXPORT_SYMBOL_GPL(sctp_for_each_transport); 5379 + EXPORT_SYMBOL_GPL(sctp_transport_traverse_process); 5375 5380 5376 5381 /* 7.2.1 Association Status (SCTP_STATUS) 5377 5382
+3 -1
net/smc/af_smc.c
··· 194 194 /* cleanup for a dangling non-blocking connect */ 195 195 if (smc->connect_nonblock && sk->sk_state == SMC_INIT) 196 196 tcp_abort(smc->clcsock->sk, ECONNABORTED); 197 - flush_work(&smc->connect_work); 197 + 198 + if (cancel_work_sync(&smc->connect_work)) 199 + sock_put(&smc->sk); /* sock_hold in smc_connect for passive closing */ 198 200 199 201 if (sk->sk_state == SMC_LISTEN) 200 202 /* smc_close_non_accepted() is called and acquires
+5
net/smc/smc.h
··· 180 180 u16 tx_cdc_seq; /* sequence # for CDC send */ 181 181 u16 tx_cdc_seq_fin; /* sequence # - tx completed */ 182 182 spinlock_t send_lock; /* protect wr_sends */ 183 + atomic_t cdc_pend_tx_wr; /* number of pending tx CDC wqe 184 + * - inc when post wqe, 185 + * - dec on polled tx cqe 186 + */ 187 + wait_queue_head_t cdc_pend_tx_wq; /* wakeup on no cdc_pend_tx_wr*/ 183 188 struct delayed_work tx_work; /* retry of smc_cdc_msg_send */ 184 189 u32 tx_off; /* base offset in peer rmb */ 185 190
+24 -28
net/smc/smc_cdc.c
··· 31 31 struct smc_sock *smc; 32 32 int diff; 33 33 34 - if (!conn) 35 - /* already dismissed */ 36 - return; 37 - 38 34 smc = container_of(conn, struct smc_sock, conn); 39 35 bh_lock_sock(&smc->sk); 40 36 if (!wc_status) { ··· 47 51 conn); 48 52 conn->tx_cdc_seq_fin = cdcpend->ctrl_seq; 49 53 } 54 + 55 + if (atomic_dec_and_test(&conn->cdc_pend_tx_wr) && 56 + unlikely(wq_has_sleeper(&conn->cdc_pend_tx_wq))) 57 + wake_up(&conn->cdc_pend_tx_wq); 58 + WARN_ON(atomic_read(&conn->cdc_pend_tx_wr) < 0); 59 + 50 60 smc_tx_sndbuf_nonfull(smc); 51 61 bh_unlock_sock(&smc->sk); 52 62 } ··· 109 107 conn->tx_cdc_seq++; 110 108 conn->local_tx_ctrl.seqno = conn->tx_cdc_seq; 111 109 smc_host_msg_to_cdc((struct smc_cdc_msg *)wr_buf, conn, &cfed); 110 + 111 + atomic_inc(&conn->cdc_pend_tx_wr); 112 + smp_mb__after_atomic(); /* Make sure cdc_pend_tx_wr added before post */ 113 + 112 114 rc = smc_wr_tx_send(link, (struct smc_wr_tx_pend_priv *)pend); 113 115 if (!rc) { 114 116 smc_curs_copy(&conn->rx_curs_confirmed, &cfed, conn); ··· 120 114 } else { 121 115 conn->tx_cdc_seq--; 122 116 conn->local_tx_ctrl.seqno = conn->tx_cdc_seq; 117 + atomic_dec(&conn->cdc_pend_tx_wr); 123 118 } 124 119 125 120 return rc; ··· 143 136 peer->token = htonl(local->token); 144 137 peer->prod_flags.failover_validation = 1; 145 138 139 + /* We need to set pend->conn here to make sure smc_cdc_tx_handler() 140 + * can handle properly 141 + */ 142 + smc_cdc_add_pending_send(conn, pend); 143 + 144 + atomic_inc(&conn->cdc_pend_tx_wr); 145 + smp_mb__after_atomic(); /* Make sure cdc_pend_tx_wr added before post */ 146 + 146 147 rc = smc_wr_tx_send(link, (struct smc_wr_tx_pend_priv *)pend); 148 + if (unlikely(rc)) 149 + atomic_dec(&conn->cdc_pend_tx_wr); 150 + 147 151 return rc; 148 152 } 149 153 ··· 211 193 return rc; 212 194 } 213 195 214 - static bool smc_cdc_tx_filter(struct smc_wr_tx_pend_priv *tx_pend, 215 - unsigned long data) 196 + void smc_cdc_wait_pend_tx_wr(struct smc_connection *conn) 216 197 { 217 - struct smc_connection *conn = (struct smc_connection *)data; 218 - struct smc_cdc_tx_pend *cdc_pend = 219 - (struct smc_cdc_tx_pend *)tx_pend; 220 - 221 - return cdc_pend->conn == conn; 222 - } 223 - 224 - static void smc_cdc_tx_dismisser(struct smc_wr_tx_pend_priv *tx_pend) 225 - { 226 - struct smc_cdc_tx_pend *cdc_pend = 227 - (struct smc_cdc_tx_pend *)tx_pend; 228 - 229 - cdc_pend->conn = NULL; 230 - } 231 - 232 - void smc_cdc_tx_dismiss_slots(struct smc_connection *conn) 233 - { 234 - struct smc_link *link = conn->lnk; 235 - 236 - smc_wr_tx_dismiss_slots(link, SMC_CDC_MSG_TYPE, 237 - smc_cdc_tx_filter, smc_cdc_tx_dismisser, 238 - (unsigned long)conn); 198 + wait_event(conn->cdc_pend_tx_wq, !atomic_read(&conn->cdc_pend_tx_wr)); 239 199 } 240 200 241 201 /* Send a SMC-D CDC header.
+1 -1
net/smc/smc_cdc.h
··· 291 291 struct smc_wr_buf **wr_buf, 292 292 struct smc_rdma_wr **wr_rdma_buf, 293 293 struct smc_cdc_tx_pend **pend); 294 - void smc_cdc_tx_dismiss_slots(struct smc_connection *conn); 294 + void smc_cdc_wait_pend_tx_wr(struct smc_connection *conn); 295 295 int smc_cdc_msg_send(struct smc_connection *conn, struct smc_wr_buf *wr_buf, 296 296 struct smc_cdc_tx_pend *pend); 297 297 int smc_cdc_get_slot_and_msg_send(struct smc_connection *conn);
+21 -6
net/smc/smc_core.c
··· 647 647 for (i = 0; i < SMC_LINKS_PER_LGR_MAX; i++) { 648 648 struct smc_link *lnk = &lgr->lnk[i]; 649 649 650 - if (smc_link_usable(lnk)) 650 + if (smc_link_sendable(lnk)) 651 651 lnk->state = SMC_LNK_INACTIVE; 652 652 } 653 653 wake_up_all(&lgr->llc_msg_waiter); ··· 1127 1127 smc_ism_unset_conn(conn); 1128 1128 tasklet_kill(&conn->rx_tsklet); 1129 1129 } else { 1130 - smc_cdc_tx_dismiss_slots(conn); 1130 + smc_cdc_wait_pend_tx_wr(conn); 1131 1131 if (current_work() != &conn->abort_work) 1132 1132 cancel_work_sync(&conn->abort_work); 1133 1133 } ··· 1204 1204 smc_llc_link_clear(lnk, log); 1205 1205 smcr_buf_unmap_lgr(lnk); 1206 1206 smcr_rtoken_clear_link(lnk); 1207 - smc_ib_modify_qp_reset(lnk); 1207 + smc_ib_modify_qp_error(lnk); 1208 1208 smc_wr_free_link(lnk); 1209 1209 smc_ib_destroy_queue_pair(lnk); 1210 1210 smc_ib_dealloc_protection_domain(lnk); ··· 1336 1336 else 1337 1337 tasklet_unlock_wait(&conn->rx_tsklet); 1338 1338 } else { 1339 - smc_cdc_tx_dismiss_slots(conn); 1339 + smc_cdc_wait_pend_tx_wr(conn); 1340 1340 } 1341 1341 smc_lgr_unregister_conn(conn); 1342 1342 smc_close_active_abort(smc); ··· 1459 1459 /* Called when an SMCR device is removed or the smc module is unloaded. 1460 1460 * If smcibdev is given, all SMCR link groups using this device are terminated. 1461 1461 * If smcibdev is NULL, all SMCR link groups are terminated. 1462 + * 1463 + * We must wait here for the QPs to be destroyed before we destroy the CQs, 1464 + * or we won't receive any CQEs and cdc_pend_tx_wr cannot reach 0 thus 1465 + * smc_sock cannot be released. 1462 1466 */ 1463 1467 void smc_smcr_terminate_all(struct smc_ib_device *smcibdev) 1464 1468 { 1465 1469 struct smc_link_group *lgr, *lg; 1466 1470 LIST_HEAD(lgr_free_list); 1471 + LIST_HEAD(lgr_linkdown_list); 1467 1472 int i; 1468 1473 1469 1474 spin_lock_bh(&smc_lgr_list.lock); ··· 1480 1475 list_for_each_entry_safe(lgr, lg, &smc_lgr_list.list, list) { 1481 1476 for (i = 0; i < SMC_LINKS_PER_LGR_MAX; i++) { 1482 1477 if (lgr->lnk[i].smcibdev == smcibdev) 1483 - smcr_link_down_cond_sched(&lgr->lnk[i]); 1478 + list_move_tail(&lgr->list, &lgr_linkdown_list); 1484 1479 } 1485 1480 } 1486 1481 } ··· 1490 1485 list_del_init(&lgr->list); 1491 1486 smc_llc_set_termination_rsn(lgr, SMC_LLC_DEL_OP_INIT_TERM); 1492 1487 __smc_lgr_terminate(lgr, false); 1488 + } 1489 + 1490 + list_for_each_entry_safe(lgr, lg, &lgr_linkdown_list, list) { 1491 + for (i = 0; i < SMC_LINKS_PER_LGR_MAX; i++) { 1492 + if (lgr->lnk[i].smcibdev == smcibdev) { 1493 + mutex_lock(&lgr->llc_conf_mutex); 1494 + smcr_link_down_cond(&lgr->lnk[i]); 1495 + mutex_unlock(&lgr->llc_conf_mutex); 1496 + } 1497 + } 1493 1498 } 1494 1499 1495 1500 if (smcibdev) { ··· 1601 1586 if (!lgr || lnk->state == SMC_LNK_UNUSED || list_empty(&lgr->list)) 1602 1587 return; 1603 1588 1604 - smc_ib_modify_qp_reset(lnk); 1605 1589 to_lnk = smc_switch_conns(lgr, lnk, true); 1606 1590 if (!to_lnk) { /* no backup link available */ 1607 1591 smcr_link_clear(lnk, true); ··· 1838 1824 conn->local_tx_ctrl.common.type = SMC_CDC_MSG_TYPE; 1839 1825 conn->local_tx_ctrl.len = SMC_WR_TX_SIZE; 1840 1826 conn->urg_state = SMC_URG_READ; 1827 + init_waitqueue_head(&conn->cdc_pend_tx_wq); 1841 1828 INIT_WORK(&smc->conn.abort_work, smc_conn_abort_work); 1842 1829 if (ini->is_smcd) { 1843 1830 conn->rx_off = sizeof(struct smcd_cdc_msg);
+6
net/smc/smc_core.h
··· 415 415 return true; 416 416 } 417 417 418 + static inline bool smc_link_sendable(struct smc_link *lnk) 419 + { 420 + return smc_link_usable(lnk) && 421 + lnk->qp_attr.cur_qp_state == IB_QPS_RTS; 422 + } 423 + 418 424 static inline bool smc_link_active(struct smc_link *lnk) 419 425 { 420 426 return lnk->state == SMC_LNK_ACTIVE;
+2 -2
net/smc/smc_ib.c
··· 109 109 IB_QP_MAX_QP_RD_ATOMIC); 110 110 } 111 111 112 - int smc_ib_modify_qp_reset(struct smc_link *lnk) 112 + int smc_ib_modify_qp_error(struct smc_link *lnk) 113 113 { 114 114 struct ib_qp_attr qp_attr; 115 115 116 116 memset(&qp_attr, 0, sizeof(qp_attr)); 117 - qp_attr.qp_state = IB_QPS_RESET; 117 + qp_attr.qp_state = IB_QPS_ERR; 118 118 return ib_modify_qp(lnk->roce_qp, &qp_attr, IB_QP_STATE); 119 119 } 120 120
+1
net/smc/smc_ib.h
··· 90 90 int smc_ib_ready_link(struct smc_link *lnk); 91 91 int smc_ib_modify_qp_rts(struct smc_link *lnk); 92 92 int smc_ib_modify_qp_reset(struct smc_link *lnk); 93 + int smc_ib_modify_qp_error(struct smc_link *lnk); 93 94 long smc_ib_setup_per_ibdev(struct smc_ib_device *smcibdev); 94 95 int smc_ib_get_memory_region(struct ib_pd *pd, int access_flags, 95 96 struct smc_buf_desc *buf_slot, u8 link_idx);
+1 -1
net/smc/smc_llc.c
··· 1630 1630 delllc.reason = htonl(rsn); 1631 1631 1632 1632 for (i = 0; i < SMC_LINKS_PER_LGR_MAX; i++) { 1633 - if (!smc_link_usable(&lgr->lnk[i])) 1633 + if (!smc_link_sendable(&lgr->lnk[i])) 1634 1634 continue; 1635 1635 if (!smc_llc_send_message_wait(&lgr->lnk[i], &delllc)) 1636 1636 break;
+9 -42
net/smc/smc_wr.c
··· 62 62 } 63 63 64 64 /* wait till all pending tx work requests on the given link are completed */ 65 - int smc_wr_tx_wait_no_pending_sends(struct smc_link *link) 65 + void smc_wr_tx_wait_no_pending_sends(struct smc_link *link) 66 66 { 67 - if (wait_event_timeout(link->wr_tx_wait, !smc_wr_is_tx_pend(link), 68 - SMC_WR_TX_WAIT_PENDING_TIME)) 69 - return 0; 70 - else /* timeout */ 71 - return -EPIPE; 67 + wait_event(link->wr_tx_wait, !smc_wr_is_tx_pend(link)); 72 68 } 73 69 74 70 static inline int smc_wr_tx_find_pending_index(struct smc_link *link, u64 wr_id) ··· 83 87 struct smc_wr_tx_pend pnd_snd; 84 88 struct smc_link *link; 85 89 u32 pnd_snd_idx; 86 - int i; 87 90 88 91 link = wc->qp->qp_context; 89 92 ··· 123 128 } 124 129 125 130 if (wc->status) { 126 - for_each_set_bit(i, link->wr_tx_mask, link->wr_tx_cnt) { 127 - /* clear full struct smc_wr_tx_pend including .priv */ 128 - memset(&link->wr_tx_pends[i], 0, 129 - sizeof(link->wr_tx_pends[i])); 130 - memset(&link->wr_tx_bufs[i], 0, 131 - sizeof(link->wr_tx_bufs[i])); 132 - clear_bit(i, link->wr_tx_mask); 133 - } 134 131 if (link->lgr->smc_version == SMC_V2) { 135 132 memset(link->wr_tx_v2_pend, 0, 136 133 sizeof(*link->wr_tx_v2_pend)); ··· 175 188 static inline int smc_wr_tx_get_free_slot_index(struct smc_link *link, u32 *idx) 176 189 { 177 190 *idx = link->wr_tx_cnt; 178 - if (!smc_link_usable(link)) 191 + if (!smc_link_sendable(link)) 179 192 return -ENOLINK; 180 193 for_each_clear_bit(*idx, link->wr_tx_mask, link->wr_tx_cnt) { 181 194 if (!test_and_set_bit(*idx, link->wr_tx_mask)) ··· 218 231 } else { 219 232 rc = wait_event_interruptible_timeout( 220 233 link->wr_tx_wait, 221 - !smc_link_usable(link) || 234 + !smc_link_sendable(link) || 222 235 lgr->terminating || 223 236 (smc_wr_tx_get_free_slot_index(link, &idx) != -EBUSY), 224 237 SMC_WR_TX_WAIT_FREE_SLOT_TIME); ··· 345 358 unsigned long timeout) 346 359 { 347 360 struct smc_wr_tx_pend *pend; 361 + u32 pnd_idx; 348 362 int rc; 349 363 350 364 pend = container_of(priv, struct smc_wr_tx_pend, priv); 351 365 pend->compl_requested = 1; 352 - init_completion(&link->wr_tx_compl[pend->idx]); 366 + pnd_idx = pend->idx; 367 + init_completion(&link->wr_tx_compl[pnd_idx]); 353 368 354 369 rc = smc_wr_tx_send(link, priv); 355 370 if (rc) 356 371 return rc; 357 372 /* wait for completion by smc_wr_tx_process_cqe() */ 358 373 rc = wait_for_completion_interruptible_timeout( 359 374 &link->wr_tx_compl[pnd_idx], timeout); 360 375 if (rc <= 0) 361 376 rc = -ENODATA; 362 377 if (rc > 0) ··· 406 417 break; 407 418 } 408 419 return rc; 409 - } 410 - 411 - void smc_wr_tx_dismiss_slots(struct smc_link *link, u8 wr_tx_hdr_type, 412 - smc_wr_tx_filter filter, 413 - smc_wr_tx_dismisser dismisser, 414 - unsigned long data) 415 - { 416 - struct smc_wr_tx_pend_priv *tx_pend; 417 - struct smc_wr_rx_hdr *wr_tx; 418 - int i; 419 - 420 - for_each_set_bit(i, link->wr_tx_mask, link->wr_tx_cnt) { 421 - wr_tx = (struct smc_wr_rx_hdr *)&link->wr_tx_bufs[i]; 422 - if (wr_tx->type != wr_tx_hdr_type) 423 - continue; 424 - tx_pend = &link->wr_tx_pends[i].priv; 425 - if (filter(tx_pend, data)) 426 - dismisser(tx_pend); 427 - } 428 420 } 429 421 430 422 /****************************** receive queue ********************************/ ··· 643 673 smc_wr_wakeup_reg_wait(lnk); 644 674 smc_wr_wakeup_tx_wait(lnk); 645 675 646 - if (smc_wr_tx_wait_no_pending_sends(lnk)) 647 - memset(lnk->wr_tx_mask, 0, 648 - BITS_TO_LONGS(SMC_WR_BUF_CNT) * 649 - sizeof(*lnk->wr_tx_mask)); 676 + smc_wr_tx_wait_no_pending_sends(lnk); 650 677 wait_event(lnk->wr_reg_wait, (!atomic_read(&lnk->wr_reg_refcnt))); 651 678 wait_event(lnk->wr_tx_wait, (!atomic_read(&lnk->wr_tx_refcnt))); 652 679
+2 -3
net/smc/smc_wr.h
··· 22 22 #define SMC_WR_BUF_CNT 16 /* # of ctrl buffers per link */ 23 23 24 24 #define SMC_WR_TX_WAIT_FREE_SLOT_TIME (10 * HZ) 25 - #define SMC_WR_TX_WAIT_PENDING_TIME (5 * HZ) 26 25 27 26 #define SMC_WR_TX_SIZE 44 /* actual size of wr_send data (<=SMC_WR_BUF_SIZE) */ 28 27 ··· 61 62 62 63 static inline bool smc_wr_tx_link_hold(struct smc_link *link) 63 64 { 64 - if (!smc_link_usable(link)) 65 + if (!smc_link_sendable(link)) 65 66 return false; 66 67 atomic_inc(&link->wr_tx_refcnt); 67 68 return true; ··· 129 130 smc_wr_tx_filter filter, 130 131 smc_wr_tx_dismisser dismisser, 131 132 unsigned long data); 132 - int smc_wr_tx_wait_no_pending_sends(struct smc_link *link); 133 + void smc_wr_tx_wait_no_pending_sends(struct smc_link *link); 133 134 134 135 int smc_wr_rx_register_handler(struct smc_wr_rx_handler *handler); 135 136 int smc_wr_rx_post_init(struct smc_link *link);
+4 -4
net/tipc/crypto.c
··· 524 524 return -EEXIST; 525 525 526 526 /* Allocate a new AEAD */ 527 - tmp = kzalloc(sizeof(*tmp), GFP_KERNEL); 527 + tmp = kzalloc(sizeof(*tmp), GFP_ATOMIC); 528 528 if (unlikely(!tmp)) 529 529 return -ENOMEM; 530 530 ··· 1474 1474 return -EEXIST; 1475 1475 1476 1476 /* Allocate crypto */ 1477 - c = kzalloc(sizeof(*c), GFP_KERNEL); 1477 + c = kzalloc(sizeof(*c), GFP_ATOMIC); 1478 1478 if (!c) 1479 1479 return -ENOMEM; 1480 1480 ··· 1488 1488 } 1489 1489 1490 1490 /* Allocate statistic structure */ 1491 - c->stats = alloc_percpu(struct tipc_crypto_stats); 1491 + c->stats = alloc_percpu_gfp(struct tipc_crypto_stats, GFP_ATOMIC); 1492 1492 if (!c->stats) { 1493 1493 if (c->wq) 1494 1494 destroy_workqueue(c->wq); ··· 2461 2461 } 2462 2462 2463 2463 /* Lets duplicate it first */ 2464 - skey = kmemdup(aead->key, tipc_aead_key_size(aead->key), GFP_KERNEL); 2464 + skey = kmemdup(aead->key, tipc_aead_key_size(aead->key), GFP_ATOMIC); 2465 2465 rcu_read_unlock(); 2466 2466 2467 2467 /* Now, generate new key, initiate & distribute it */
+2 -1
net/vmw_vsock/virtio_transport_common.c
··· 1299 1299 space_available = virtio_transport_space_update(sk, pkt); 1300 1300 1301 1301 /* Update CID in case it has changed after a transport reset event */ 1302 - vsk->local_addr.svm_cid = dst.svm_cid; 1302 + if (vsk->local_addr.svm_cid != VMADDR_CID_ANY) 1303 + vsk->local_addr.svm_cid = dst.svm_cid; 1303 1304 1304 1305 if (space_available) 1305 1306 sk->sk_write_space(sk);
+28 -2
net/wireless/reg.c
··· 133 133 134 134 static void restore_regulatory_settings(bool reset_user, bool cached); 135 135 static void print_regdomain(const struct ieee80211_regdomain *rd); 136 + static void reg_process_hint(struct regulatory_request *reg_request); 136 137 137 138 static const struct ieee80211_regdomain *get_cfg80211_regdom(void) 138 139 { ··· 1099 1098 const struct firmware *fw; 1100 1099 void *db; 1101 1100 int err; 1102 + const struct ieee80211_regdomain *current_regdomain; 1103 + struct regulatory_request *request; 1102 1103 1103 1104 err = request_firmware(&fw, "regulatory.db", &reg_pdev->dev); 1104 1105 if (err) ··· 1121 1118 if (!IS_ERR_OR_NULL(regdb)) 1122 1119 kfree(regdb); 1123 1120 regdb = db; 1124 - rtnl_unlock(); 1125 1121 1122 + /* reset regulatory domain */ 1123 + current_regdomain = get_cfg80211_regdom(); 1124 + 1125 + request = kzalloc(sizeof(*request), GFP_KERNEL); 1126 + if (!request) { 1127 + err = -ENOMEM; 1128 + goto out_unlock; 1129 + } 1130 + 1131 + request->wiphy_idx = WIPHY_IDX_INVALID; 1132 + request->alpha2[0] = current_regdomain->alpha2[0]; 1133 + request->alpha2[1] = current_regdomain->alpha2[1]; 1134 + request->initiator = NL80211_REGDOM_SET_BY_CORE; 1135 + request->user_reg_hint_type = NL80211_USER_REG_HINT_USER; 1136 + 1137 + reg_process_hint(request); 1138 + 1139 + out_unlock: 1140 + rtnl_unlock(); 1126 1141 out: 1127 1142 release_firmware(fw); 1128 1143 return err; ··· 2359 2338 struct cfg80211_chan_def chandef = {}; 2360 2339 struct cfg80211_registered_device *rdev = wiphy_to_rdev(wiphy); 2361 2340 enum nl80211_iftype iftype; 2341 + bool ret; 2362 2342 2363 2343 wdev_lock(wdev); 2364 2344 iftype = wdev->iftype; ··· 2409 2387 case NL80211_IFTYPE_AP: 2410 2388 case NL80211_IFTYPE_P2P_GO: 2411 2389 case NL80211_IFTYPE_ADHOC: 2412 - return cfg80211_reg_can_beacon_relax(wiphy, &chandef, iftype); 2390 + wiphy_lock(wiphy); 2391 + ret = cfg80211_reg_can_beacon_relax(wiphy, &chandef, iftype); 2392 + wiphy_unlock(wiphy); 2393 + 2394 + return ret; 2413 2395 case NL80211_IFTYPE_STATION: 2414 2396 case NL80211_IFTYPE_P2P_CLIENT: 2415 2397 return cfg80211_chandef_usable(wiphy, &chandef,
+2 -2
net/xdp/xsk.c
··· 677 677 struct xdp_sock *xs = xdp_sk(sk); 678 678 struct xsk_buff_pool *pool; 679 679 680 - sock_poll_wait(file, sock, wait); 681 - 682 680 if (unlikely(!xsk_is_bound(xs))) 683 681 return mask; 684 682 ··· 688 690 else 689 691 /* Poll needs to drive Tx also in copy mode */ 690 692 __xsk_sendmsg(sk); 693 + } else { 694 + sock_poll_wait(file, sock, wait); 691 695 } 692 696 693 697 if (xs->rx && !xskq_prod_is_empty(xs->rx))
+1
net/xdp/xsk_buff_pool.c
··· 83 83 xskb = &pool->heads[i]; 84 84 xskb->pool = pool; 85 85 xskb->xdp.frame_sz = umem->chunk_size - umem->headroom; 86 + INIT_LIST_HEAD(&xskb->free_list_node); 86 87 if (pool->unaligned) 87 88 pool->free_heads[i] = xskb; 88 89 else
+1 -1
scripts/recordmcount.pl
··· 219 219 220 220 } elsif ($arch eq "s390" && $bits == 64) { 221 221 if ($cc =~ /-DCC_USING_HOTPATCH/) { 222 - $mcount_regex = "^\\s*([0-9a-fA-F]+):\\s*c0 04 00 00 00 00\\s*brcl\\s*0,[0-9a-f]+ <([^\+]*)>\$"; 222 + $mcount_regex = "^\\s*([0-9a-fA-F]+):\\s*c0 04 00 00 00 00\\s*(brcl\\s*0,|jgnop\\s*)[0-9a-f]+ <([^\+]*)>\$"; 223 223 $mcount_adjust = 0; 224 224 } 225 225 $alignment = 8;
+20 -15
security/selinux/hooks.c
··· 611 611 return 0; 612 612 } 613 613 614 - static int parse_sid(struct super_block *sb, const char *s, u32 *sid) 614 + static int parse_sid(struct super_block *sb, const char *s, u32 *sid, 615 + gfp_t gfp) 615 616 { 616 617 int rc = security_context_str_to_sid(&selinux_state, s, 617 - sid, GFP_KERNEL); 618 + sid, gfp); 618 619 if (rc) 619 620 pr_warn("SELinux: security_context_str_to_sid" 620 621 "(%s) failed for (dev %s, type %s) errno=%d\n", ··· 686 685 */ 687 686 if (opts) { 688 687 if (opts->fscontext) { 689 - rc = parse_sid(sb, opts->fscontext, &fscontext_sid); 688 + rc = parse_sid(sb, opts->fscontext, &fscontext_sid, 689 + GFP_KERNEL); 690 690 if (rc) 691 691 goto out; 692 692 if (bad_option(sbsec, FSCONTEXT_MNT, sbsec->sid, ··· 696 694 sbsec->flags |= FSCONTEXT_MNT; 697 695 } 698 696 if (opts->context) { 699 - rc = parse_sid(sb, opts->context, &context_sid); 697 + rc = parse_sid(sb, opts->context, &context_sid, 698 + GFP_KERNEL); 700 699 if (rc) 701 700 goto out; 702 701 if (bad_option(sbsec, CONTEXT_MNT, sbsec->mntpoint_sid, ··· 706 703 sbsec->flags |= CONTEXT_MNT; 707 704 } 708 705 if (opts->rootcontext) { 709 - rc = parse_sid(sb, opts->rootcontext, &rootcontext_sid); 706 + rc = parse_sid(sb, opts->rootcontext, &rootcontext_sid, 707 + GFP_KERNEL); 710 708 if (rc) 711 709 goto out; 712 710 if (bad_option(sbsec, ROOTCONTEXT_MNT, root_isec->sid, ··· 716 712 sbsec->flags |= ROOTCONTEXT_MNT; 717 713 } 718 714 if (opts->defcontext) { 719 - rc = parse_sid(sb, opts->defcontext, &defcontext_sid); 715 + rc = parse_sid(sb, opts->defcontext, &defcontext_sid, 716 + GFP_KERNEL); 720 717 if (rc) 721 718 goto out; 722 719 if (bad_option(sbsec, DEFCONTEXT_MNT, sbsec->def_sid, ··· 2707 2702 return (sbsec->flags & SE_MNTMASK) ? 1 : 0; 2708 2703 2709 2704 if (opts->fscontext) { 2710 - rc = parse_sid(sb, opts->fscontext, &sid); 2705 + rc = parse_sid(sb, opts->fscontext, &sid, GFP_NOWAIT); 2711 2706 if (rc) 2712 2707 return 1; 2713 2708 if (bad_option(sbsec, FSCONTEXT_MNT, sbsec->sid, sid)) 2714 2709 return 1; 2715 2710 } 2716 2711 if (opts->context) { 2717 - rc = parse_sid(sb, opts->context, &sid); 2712 + rc = parse_sid(sb, opts->context, &sid, GFP_NOWAIT); 2718 2713 if (rc) 2719 2714 return 1; 2720 2715 if (bad_option(sbsec, CONTEXT_MNT, sbsec->mntpoint_sid, sid)) ··· 2724 2719 struct inode_security_struct *root_isec; 2725 2720 2726 2721 root_isec = backing_inode_security(sb->s_root); 2727 - rc = parse_sid(sb, opts->rootcontext, &sid); 2722 + rc = parse_sid(sb, opts->rootcontext, &sid, GFP_NOWAIT); 2728 2723 if (rc) 2729 2724 return 1; 2730 2725 if (bad_option(sbsec, ROOTCONTEXT_MNT, root_isec->sid, sid)) 2731 2726 return 1; 2732 2727 } 2733 2728 if (opts->defcontext) { 2734 - rc = parse_sid(sb, opts->defcontext, &sid); 2729 + rc = parse_sid(sb, opts->defcontext, &sid, GFP_NOWAIT); 2735 2730 if (rc) 2736 2731 return 1; 2737 2732 if (bad_option(sbsec, DEFCONTEXT_MNT, sbsec->def_sid, sid)) ··· 2754 2749 return 0; 2755 2750 2756 2751 if (opts->fscontext) { 2757 - rc = parse_sid(sb, opts->fscontext, &sid); 2752 + rc = parse_sid(sb, opts->fscontext, &sid, GFP_KERNEL); 2758 2753 if (rc) 2759 2754 return rc; 2760 2755 if (bad_option(sbsec, FSCONTEXT_MNT, sbsec->sid, sid)) 2761 2756 goto out_bad_option; 2762 2757 } 2763 2758 if (opts->context) { 2764 - rc = parse_sid(sb, opts->context, &sid); 2759 + rc = parse_sid(sb, opts->context, &sid, GFP_KERNEL); 2765 2760 if (rc) 2766 2761 return rc; 2767 2762 if (bad_option(sbsec, CONTEXT_MNT, sbsec->mntpoint_sid, sid)) ··· 2770 2765 if (opts->rootcontext) { 2771 2766 struct inode_security_struct *root_isec; 2772 2767 root_isec = backing_inode_security(sb->s_root); 2773 - rc = parse_sid(sb, opts->rootcontext, &sid); 2768 + rc = parse_sid(sb, opts->rootcontext, &sid, GFP_KERNEL); 2774 2769 if (rc) 2775 2770 return rc; 2776 2771 if (bad_option(sbsec, ROOTCONTEXT_MNT, root_isec->sid, sid)) 2777 2772 goto out_bad_option; 2778 2773 } 2779 2774 if (opts->defcontext) { 2780 - rc = parse_sid(sb, opts->defcontext, &sid); 2775 + rc = parse_sid(sb, opts->defcontext, &sid, GFP_KERNEL); 2781 2776 if (rc) 2782 2777 return rc; 2783 2778 if (bad_option(sbsec, DEFCONTEXT_MNT, sbsec->def_sid, sid)) ··· 5785 5780 struct sk_security_struct *sksec; 5786 5781 struct common_audit_data ad; 5787 5782 struct lsm_network_audit net = {0,}; 5788 - u8 proto; 5783 + u8 proto = 0; 5789 5784 5790 5785 sk = skb_to_full_sk(skb); 5791 5786 if (sk == NULL)
+14 -17
security/tomoyo/util.c
··· 1051 1051 return false; 1052 1052 if (!domain) 1053 1053 return true; 1054 + if (READ_ONCE(domain->flags[TOMOYO_DIF_QUOTA_WARNED])) 1055 + return false; 1054 1056 list_for_each_entry_rcu(ptr, &domain->acl_info_list, list, 1055 1057 srcu_read_lock_held(&tomoyo_ss)) { 1056 1058 u16 perm; 1057 - u8 i; 1058 1059 1059 1060 if (ptr->is_deleted) 1060 1061 continue; ··· 1066 1065 */ 1067 1066 switch (ptr->type) { 1068 1067 case TOMOYO_TYPE_PATH_ACL: 1069 - data_race(perm = container_of(ptr, struct tomoyo_path_acl, head)->perm); 1068 + perm = data_race(container_of(ptr, struct tomoyo_path_acl, head)->perm); 1070 1069 break; 1071 1070 case TOMOYO_TYPE_PATH2_ACL: 1072 - data_race(perm = container_of(ptr, struct tomoyo_path2_acl, head)->perm); 1071 + perm = data_race(container_of(ptr, struct tomoyo_path2_acl, head)->perm); 1073 1072 break; 1074 1073 case TOMOYO_TYPE_PATH_NUMBER_ACL: 1075 - data_race(perm = container_of(ptr, struct tomoyo_path_number_acl, head) 1074 + perm = data_race(container_of(ptr, struct tomoyo_path_number_acl, head) 1076 1075 ->perm); 1077 1076 break; 1078 1077 case TOMOYO_TYPE_MKDEV_ACL: 1079 - data_race(perm = container_of(ptr, struct tomoyo_mkdev_acl, head)->perm); 1078 + perm = data_race(container_of(ptr, struct tomoyo_mkdev_acl, head)->perm); 1080 1079 break; 1081 1080 case TOMOYO_TYPE_INET_ACL: 1082 - data_race(perm = container_of(ptr, struct tomoyo_inet_acl, head)->perm); 1081 + perm = data_race(container_of(ptr, struct tomoyo_inet_acl, head)->perm); 1083 1082 break; 1084 1083 case TOMOYO_TYPE_UNIX_ACL: 1085 - data_race(perm = container_of(ptr, struct tomoyo_unix_acl, head)->perm); 1084 + perm = data_race(container_of(ptr, struct tomoyo_unix_acl, head)->perm); 1086 1085 break; 1087 1086 case TOMOYO_TYPE_MANUAL_TASK_ACL: 1088 1087 perm = 0; ··· 1090 1089 default: 1091 1090 perm = 1; 1092 1091 } 1093 - for (i = 0; i < 16; i++) 1094 - if (perm & (1 << i)) 1095 - count++; 1092 + count += hweight16(perm); 1096 1093 } 1097 1094 if (count < 
tomoyo_profile(domain->ns, domain->profile)-> 1098 1095 pref[TOMOYO_PREF_MAX_LEARNING_ENTRY]) 1099 1096 return true; 1100 - if (!domain->flags[TOMOYO_DIF_QUOTA_WARNED]) { 1101 - domain->flags[TOMOYO_DIF_QUOTA_WARNED] = true; 1102 - /* r->granted = false; */ 1103 - tomoyo_write_log(r, "%s", tomoyo_dif[TOMOYO_DIF_QUOTA_WARNED]); 1097 + WRITE_ONCE(domain->flags[TOMOYO_DIF_QUOTA_WARNED], true); 1098 + /* r->granted = false; */ 1099 + tomoyo_write_log(r, "%s", tomoyo_dif[TOMOYO_DIF_QUOTA_WARNED]); 1104 1100 #ifndef CONFIG_SECURITY_TOMOYO_INSECURE_BUILTIN_SETTING 1105 - pr_warn("WARNING: Domain '%s' has too many ACLs to hold. Stopped learning mode.\n", 1106 - domain->domainname->name); 1101 + pr_warn("WARNING: Domain '%s' has too many ACLs to hold. Stopped learning mode.\n", 1102 + domain->domainname->name); 1107 1103 #endif 1108 - } 1109 1104 return false; 1110 1105 }
+4
sound/core/jack.c
··· 509 509 return -ENOMEM; 510 510 511 511 jack->id = kstrdup(id, GFP_KERNEL); 512 + if (jack->id == NULL) { 513 + kfree(jack); 514 + return -ENOMEM; 515 + } 512 516 513 517 /* don't creat input device for phantom jack */ 514 518 if (!phantom_jack) {
+1
sound/core/rawmidi.c
··· 447 447 err = -ENOMEM; 448 448 goto __error; 449 449 } 450 + rawmidi_file->user_pversion = 0; 450 451 init_waitqueue_entry(&wait, current); 451 452 add_wait_queue(&rmidi->open_wait, &wait); 452 453 while (1) {
+1 -1
sound/drivers/opl3/opl3_midi.c
··· 397 397 } 398 398 if (instr_4op) { 399 399 vp2 = &opl3->voices[voice + 3]; 400 - if (vp->state > 0) { 400 + if (vp2->state > 0) { 401 401 opl3_reg = reg_side | (OPL3_REG_KEYON_BLOCK + 402 402 voice_offset + 3); 403 403 reg_val = vp->keyon_reg & ~OPL3_KEYON_BIT;
+10 -3
sound/hda/intel-sdw-acpi.c
··· 132 132 return AE_NOT_FOUND; 133 133 } 134 134 135 - info->handle = handle; 136 - 137 135 /* 138 136 * On some Intel platforms, multiple children of the HDAS 139 137 * device can be found, but only one of them is the SoundWire ··· 141 143 */ 142 144 if (FIELD_GET(GENMASK(31, 28), adr) != SDW_LINK_TYPE) 143 145 return AE_OK; /* keep going */ 146 + 147 + /* found the correct SoundWire controller */ 148 + info->handle = handle; 144 149 145 150 /* device found, stop namespace walk */ 146 151 return AE_CTRL_TERMINATE; ··· 165 164 acpi_status status; 166 165 167 166 info->handle = NULL; 167 + /* 168 + * In the HDAS ACPI scope, 'SNDW' may be either the child of 169 + * 'HDAS' or the grandchild of 'HDAS'. So let's go through 170 + * the ACPI from 'HDAS' at max depth of 2 to find the 'SNDW' 171 + * device. 172 + */ 168 173 status = acpi_walk_namespace(ACPI_TYPE_DEVICE, 169 - parent_handle, 1, 174 + parent_handle, 2, 170 175 sdw_intel_acpi_cb, 171 176 NULL, info, NULL); 172 177 if (ACPI_FAILURE(status) || info->handle == NULL)
+15 -6
sound/pci/hda/patch_hdmi.c
··· 2947 2947 2948 2948 /* Intel Haswell and onwards; audio component with eld notifier */ 2949 2949 static int intel_hsw_common_init(struct hda_codec *codec, hda_nid_t vendor_nid, 2950 - const int *port_map, int port_num, int dev_num) 2950 + const int *port_map, int port_num, int dev_num, 2951 + bool send_silent_stream) 2951 2952 { 2952 2953 struct hdmi_spec *spec; 2953 2954 int err; ··· 2981 2980 * Enable silent stream feature, if it is enabled via 2982 2981 * module param or Kconfig option 2983 2982 */ 2984 - if (enable_silent_stream) 2983 + if (send_silent_stream) 2985 2984 spec->send_silent_stream = true; 2986 2985 2987 2986 return parse_intel_hdmi(codec); ··· 2989 2988 2990 2989 static int patch_i915_hsw_hdmi(struct hda_codec *codec) 2991 2990 { 2992 - return intel_hsw_common_init(codec, 0x08, NULL, 0, 3); 2991 + return intel_hsw_common_init(codec, 0x08, NULL, 0, 3, 2992 + enable_silent_stream); 2993 2993 } 2994 2994 2995 2995 static int patch_i915_glk_hdmi(struct hda_codec *codec) 2996 2996 { 2997 - return intel_hsw_common_init(codec, 0x0b, NULL, 0, 3); 2997 + /* 2998 + * Silent stream calls audio component .get_power() from 2999 + * .pin_eld_notify(). On GLK this will deadlock in i915 due 3000 + * to the audio vs. CDCLK workaround. 
3001 + */ 3002 + return intel_hsw_common_init(codec, 0x0b, NULL, 0, 3, false); 2998 3003 } 2999 3004 3000 3005 static int patch_i915_icl_hdmi(struct hda_codec *codec) ··· 3011 3004 */ 3012 3005 static const int map[] = {0x0, 0x4, 0x6, 0x8, 0xa, 0xb}; 3013 3006 3014 - return intel_hsw_common_init(codec, 0x02, map, ARRAY_SIZE(map), 3); 3007 + return intel_hsw_common_init(codec, 0x02, map, ARRAY_SIZE(map), 3, 3008 + enable_silent_stream); 3015 3009 } 3016 3010 3017 3011 static int patch_i915_tgl_hdmi(struct hda_codec *codec) ··· 3024 3016 static const int map[] = {0x4, 0x6, 0x8, 0xa, 0xb, 0xc, 0xd, 0xe, 0xf}; 3025 3017 int ret; 3026 3018 3027 - ret = intel_hsw_common_init(codec, 0x02, map, ARRAY_SIZE(map), 4); 3019 + ret = intel_hsw_common_init(codec, 0x02, map, ARRAY_SIZE(map), 4, 3020 + enable_silent_stream); 3028 3021 if (!ret) { 3029 3022 struct hdmi_spec *spec = codec->spec; 3030 3023
+28 -1
sound/pci/hda/patch_realtek.c
··· 6546 6546 alc_process_coef_fw(codec, alc233_fixup_no_audio_jack_coefs); 6547 6547 } 6548 6548 6549 + static void alc256_fixup_mic_no_presence_and_resume(struct hda_codec *codec, 6550 + const struct hda_fixup *fix, 6551 + int action) 6552 + { 6553 + /* 6554 + * The Clevo NJ51CU comes either with the ALC293 or the ALC256 codec, 6555 + * but uses the 0x8686 subproduct id in both cases. The ALC256 codec 6556 + * needs an additional quirk for sound working after suspend and resume. 6557 + */ 6558 + if (codec->core.vendor_id == 0x10ec0256) { 6559 + alc_update_coef_idx(codec, 0x10, 1<<9, 0); 6560 + snd_hda_codec_set_pincfg(codec, 0x19, 0x04a11120); 6561 + } else { 6562 + snd_hda_codec_set_pincfg(codec, 0x1a, 0x04a1113c); 6563 + } 6564 + } 6565 + 6549 6566 enum { 6550 6567 ALC269_FIXUP_GPIO2, 6551 6568 ALC269_FIXUP_SONY_VAIO, ··· 6783 6766 ALC256_FIXUP_SET_COEF_DEFAULTS, 6784 6767 ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE, 6785 6768 ALC233_FIXUP_NO_AUDIO_JACK, 6769 + ALC256_FIXUP_MIC_NO_PRESENCE_AND_RESUME, 6786 6770 }; 6787 6771 6788 6772 static const struct hda_fixup alc269_fixups[] = { ··· 8508 8490 .type = HDA_FIXUP_FUNC, 8509 8491 .v.func = alc233_fixup_no_audio_jack, 8510 8492 }, 8493 + [ALC256_FIXUP_MIC_NO_PRESENCE_AND_RESUME] = { 8494 + .type = HDA_FIXUP_FUNC, 8495 + .v.func = alc256_fixup_mic_no_presence_and_resume, 8496 + .chained = true, 8497 + .chain_id = ALC269_FIXUP_HEADSET_MODE_NO_HP_MIC 8498 + }, 8511 8499 }; 8512 8500 8513 8501 static const struct snd_pci_quirk alc269_fixup_tbl[] = { ··· 8684 8660 SND_PCI_QUIRK(0x103c, 0x84da, "HP OMEN dc0019-ur", ALC295_FIXUP_HP_OMEN), 8685 8661 SND_PCI_QUIRK(0x103c, 0x84e7, "HP Pavilion 15", ALC269_FIXUP_HP_MUTE_LED_MIC3), 8686 8662 SND_PCI_QUIRK(0x103c, 0x8519, "HP Spectre x360 15-df0xxx", ALC285_FIXUP_HP_SPECTRE_X360), 8663 + SND_PCI_QUIRK(0x103c, 0x860f, "HP ZBook 15 G6", ALC285_FIXUP_HP_GPIO_AMP_INIT), 8687 8664 SND_PCI_QUIRK(0x103c, 0x861f, "HP Elite Dragonfly G1", ALC285_FIXUP_HP_GPIO_AMP_INIT), 8688 8665 
SND_PCI_QUIRK(0x103c, 0x869d, "HP", ALC236_FIXUP_HP_MUTE_LED), 8689 8666 SND_PCI_QUIRK(0x103c, 0x86c7, "HP Envy AiO 32", ALC274_FIXUP_HP_ENVY_GPIO), ··· 8730 8705 SND_PCI_QUIRK(0x103c, 0x8896, "HP EliteBook 855 G8 Notebook PC", ALC285_FIXUP_HP_MUTE_LED), 8731 8706 SND_PCI_QUIRK(0x103c, 0x8898, "HP EliteBook 845 G8 Notebook PC", ALC285_FIXUP_HP_LIMIT_INT_MIC_BOOST), 8732 8707 SND_PCI_QUIRK(0x103c, 0x88d0, "HP Pavilion 15-eh1xxx (mainboard 88D0)", ALC287_FIXUP_HP_GPIO_LED), 8708 + SND_PCI_QUIRK(0x103c, 0x89ca, "HP", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF), 8733 8709 SND_PCI_QUIRK(0x1043, 0x103e, "ASUS X540SA", ALC256_FIXUP_ASUS_MIC), 8734 8710 SND_PCI_QUIRK(0x1043, 0x103f, "ASUS TX300", ALC282_FIXUP_ASUS_TX300), 8735 8711 SND_PCI_QUIRK(0x1043, 0x106d, "Asus K53BE", ALC269_FIXUP_LIMIT_INT_MIC_BOOST), ··· 8855 8829 SND_PCI_QUIRK(0x1558, 0x8562, "Clevo NH[57][0-9]RZ[Q]", ALC269_FIXUP_DMIC), 8856 8830 SND_PCI_QUIRK(0x1558, 0x8668, "Clevo NP50B[BE]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 8857 8831 SND_PCI_QUIRK(0x1558, 0x8680, "Clevo NJ50LU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 8858 - SND_PCI_QUIRK(0x1558, 0x8686, "Clevo NH50[CZ]U", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 8832 + SND_PCI_QUIRK(0x1558, 0x8686, "Clevo NH50[CZ]U", ALC256_FIXUP_MIC_NO_PRESENCE_AND_RESUME), 8859 8833 SND_PCI_QUIRK(0x1558, 0x8a20, "Clevo NH55DCQ-Y", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 8860 8834 SND_PCI_QUIRK(0x1558, 0x8a51, "Clevo NH70RCQ-Y", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 8861 8835 SND_PCI_QUIRK(0x1558, 0x8d50, "Clevo NH55RCQ-M", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), ··· 9149 9123 {.id = ALC287_FIXUP_IDEAPAD_BASS_SPK_AMP, .name = "alc287-ideapad-bass-spk-amp"}, 9150 9124 {.id = ALC623_FIXUP_LENOVO_THINKSTATION_P340, .name = "alc623-lenovo-thinkstation-p340"}, 9151 9125 {.id = ALC255_FIXUP_ACER_HEADPHONE_AND_MIC, .name = "alc255-acer-headphone-and-mic"}, 9126 + {.id = ALC285_FIXUP_HP_GPIO_AMP_INIT, .name = "alc285-hp-amp-init"}, 9152 9127 {} 9153 9128 }; 9154 9129 #define 
ALC225_STANDARD_PINS \
+4
sound/soc/codecs/rt5682.c
··· 929 929 unsigned int val, count; 930 930 931 931 if (jack_insert) { 932 + snd_soc_dapm_mutex_lock(dapm); 933 + 932 934 snd_soc_component_update_bits(component, RT5682_PWR_ANLG_1, 933 935 RT5682_PWR_VREF2 | RT5682_PWR_MB, 934 936 RT5682_PWR_VREF2 | RT5682_PWR_MB); ··· 981 979 snd_soc_component_update_bits(component, RT5682_MICBIAS_2, 982 980 RT5682_PWR_CLK25M_MASK | RT5682_PWR_CLK1M_MASK, 983 981 RT5682_PWR_CLK25M_PU | RT5682_PWR_CLK1M_PU); 982 + 983 + snd_soc_dapm_mutex_unlock(dapm); 984 984 } else { 985 985 rt5682_enable_push_button_irq(component, false); 986 986 snd_soc_component_update_bits(component, RT5682_CBJ_CTRL_1,
+2 -2
sound/soc/codecs/tas2770.c
··· 291 291 ramp_rate_val = TAS2770_TDM_CFG_REG0_SMP_44_1KHZ | 292 292 TAS2770_TDM_CFG_REG0_31_88_2_96KHZ; 293 293 break; 294 - case 19200: 294 + case 192000: 295 295 ramp_rate_val = TAS2770_TDM_CFG_REG0_SMP_48KHZ | 296 296 TAS2770_TDM_CFG_REG0_31_176_4_192KHZ; 297 297 break; 298 - case 17640: 298 + case 176400: 299 299 ramp_rate_val = TAS2770_TDM_CFG_REG0_SMP_44_1KHZ | 300 300 TAS2770_TDM_CFG_REG0_31_176_4_192KHZ; 301 301 break;
-33
sound/soc/meson/aiu-encoder-i2s.c
··· 18 18 #define AIU_RST_SOFT_I2S_FAST BIT(0) 19 19 20 20 #define AIU_I2S_DAC_CFG_MSB_FIRST BIT(2) 21 - #define AIU_I2S_MISC_HOLD_EN BIT(2) 22 21 #define AIU_CLK_CTRL_I2S_DIV_EN BIT(0) 23 22 #define AIU_CLK_CTRL_I2S_DIV GENMASK(3, 2) 24 23 #define AIU_CLK_CTRL_AOCLK_INVERT BIT(6) ··· 33 34 snd_soc_component_update_bits(component, AIU_CLK_CTRL, 34 35 AIU_CLK_CTRL_I2S_DIV_EN, 35 36 enable ? AIU_CLK_CTRL_I2S_DIV_EN : 0); 36 - } 37 - 38 - static void aiu_encoder_i2s_hold(struct snd_soc_component *component, 39 - bool enable) 40 - { 41 - snd_soc_component_update_bits(component, AIU_I2S_MISC, 42 - AIU_I2S_MISC_HOLD_EN, 43 - enable ? AIU_I2S_MISC_HOLD_EN : 0); 44 - } 45 - 46 - static int aiu_encoder_i2s_trigger(struct snd_pcm_substream *substream, int cmd, 47 - struct snd_soc_dai *dai) 48 - { 49 - struct snd_soc_component *component = dai->component; 50 - 51 - switch (cmd) { 52 - case SNDRV_PCM_TRIGGER_START: 53 - case SNDRV_PCM_TRIGGER_RESUME: 54 - case SNDRV_PCM_TRIGGER_PAUSE_RELEASE: 55 - aiu_encoder_i2s_hold(component, false); 56 - return 0; 57 - 58 - case SNDRV_PCM_TRIGGER_STOP: 59 - case SNDRV_PCM_TRIGGER_SUSPEND: 60 - case SNDRV_PCM_TRIGGER_PAUSE_PUSH: 61 - aiu_encoder_i2s_hold(component, true); 62 - return 0; 63 - 64 - default: 65 - return -EINVAL; 66 - } 67 37 } 68 38 69 39 static int aiu_encoder_i2s_setup_desc(struct snd_soc_component *component, ··· 321 353 } 322 354 323 355 const struct snd_soc_dai_ops aiu_encoder_i2s_dai_ops = { 324 - .trigger = aiu_encoder_i2s_trigger, 325 356 .hw_params = aiu_encoder_i2s_hw_params, 326 357 .hw_free = aiu_encoder_i2s_hw_free, 327 358 .set_fmt = aiu_encoder_i2s_set_fmt,
+19
sound/soc/meson/aiu-fifo-i2s.c
··· 20 20 #define AIU_MEM_I2S_CONTROL_MODE_16BIT BIT(6) 21 21 #define AIU_MEM_I2S_BUF_CNTL_INIT BIT(0) 22 22 #define AIU_RST_SOFT_I2S_FAST BIT(0) 23 + #define AIU_I2S_MISC_HOLD_EN BIT(2) 24 + #define AIU_I2S_MISC_FORCE_LEFT_RIGHT BIT(4) 23 25 24 26 #define AIU_FIFO_I2S_BLOCK 256 25 27 ··· 92 90 unsigned int val; 93 91 int ret; 94 92 93 + snd_soc_component_update_bits(component, AIU_I2S_MISC, 94 + AIU_I2S_MISC_HOLD_EN, 95 + AIU_I2S_MISC_HOLD_EN); 96 + 95 97 ret = aiu_fifo_hw_params(substream, params, dai); 96 98 if (ret) 97 99 return ret; ··· 122 116 val = FIELD_PREP(AIU_MEM_I2S_MASKS_IRQ_BLOCK, val); 123 117 snd_soc_component_update_bits(component, AIU_MEM_I2S_MASKS, 124 118 AIU_MEM_I2S_MASKS_IRQ_BLOCK, val); 119 + 120 + /* 121 + * Most (all?) supported SoCs have this bit set by default. The vendor 122 + * driver however sets it manually (depending on the version either 123 + * while un-setting AIU_I2S_MISC_HOLD_EN or right before that). Follow 124 + * the same approach for consistency with the vendor driver. 125 + */ 126 + snd_soc_component_update_bits(component, AIU_I2S_MISC, 127 + AIU_I2S_MISC_FORCE_LEFT_RIGHT, 128 + AIU_I2S_MISC_FORCE_LEFT_RIGHT); 129 + 130 + snd_soc_component_update_bits(component, AIU_I2S_MISC, 131 + AIU_I2S_MISC_HOLD_EN, 0); 125 132 126 133 return 0; 127 134 }
+6
sound/soc/meson/aiu-fifo.c
··· 5 5 6 6 #include <linux/bitfield.h> 7 7 #include <linux/clk.h> 8 + #include <linux/dma-mapping.h> 8 9 #include <sound/pcm_params.h> 9 10 #include <sound/soc.h> 10 11 #include <sound/soc-dai.h> ··· 180 179 struct snd_card *card = rtd->card->snd_card; 181 180 struct aiu_fifo *fifo = dai->playback_dma_data; 182 181 size_t size = fifo->pcm->buffer_bytes_max; 182 + int ret; 183 + 184 + ret = dma_coerce_mask_and_coherent(card->dev, DMA_BIT_MASK(32)); 185 + if (ret) 186 + return ret; 183 187 184 188 snd_pcm_set_managed_buffer_all(rtd->pcm, SNDRV_DMA_TYPE_DEV, 185 189 card->dev, size, size);
+4
sound/soc/sof/intel/pci-tgl.c
··· 112 112 .driver_data = (unsigned long)&adls_desc}, 113 113 { PCI_DEVICE(0x8086, 0x51c8), /* ADL-P */ 114 114 .driver_data = (unsigned long)&adl_desc}, 115 + { PCI_DEVICE(0x8086, 0x51cd), /* ADL-P */ 116 + .driver_data = (unsigned long)&adl_desc}, 115 117 { PCI_DEVICE(0x8086, 0x51cc), /* ADL-M */ 118 + .driver_data = (unsigned long)&adl_desc}, 119 + { PCI_DEVICE(0x8086, 0x54c8), /* ADL-N */ 116 120 .driver_data = (unsigned long)&adl_desc}, 117 121 { 0, } 118 122 };
+10 -1
sound/soc/tegra/tegra_asoc_machine.c
··· 116 116 SOC_DAPM_PIN_SWITCH("Headset Mic"), 117 117 SOC_DAPM_PIN_SWITCH("Internal Mic 1"), 118 118 SOC_DAPM_PIN_SWITCH("Internal Mic 2"), 119 + SOC_DAPM_PIN_SWITCH("Headphones"), 120 + SOC_DAPM_PIN_SWITCH("Mic Jack"), 119 121 }; 120 122 121 123 int tegra_asoc_machine_init(struct snd_soc_pcm_runtime *rtd) 122 124 { 123 125 struct snd_soc_card *card = rtd->card; 124 126 struct tegra_machine *machine = snd_soc_card_get_drvdata(card); 127 + const char *jack_name; 125 128 int err; 126 129 127 130 if (machine->gpiod_hp_det && machine->asoc->add_hp_jack) { 128 - err = snd_soc_card_jack_new(card, "Headphones Jack", 131 + if (machine->asoc->hp_jack_name) 132 + jack_name = machine->asoc->hp_jack_name; 133 + else 134 + jack_name = "Headphones Jack"; 135 + 136 + err = snd_soc_card_jack_new(card, jack_name, 129 137 SND_JACK_HEADPHONE, 130 138 &tegra_machine_hp_jack, 131 139 tegra_machine_hp_jack_pins, ··· 666 658 static const struct tegra_asoc_data tegra_max98090_data = { 667 659 .mclk_rate = tegra_machine_mclk_rate_12mhz, 668 660 .card = &snd_soc_tegra_max98090, 661 + .hp_jack_name = "Headphones", 669 662 .add_common_dapm_widgets = true, 670 663 .add_common_controls = true, 671 664 .add_common_snd_ops = true,
+1
sound/soc/tegra/tegra_asoc_machine.h
··· 14 14 struct tegra_asoc_data { 15 15 unsigned int (*mclk_rate)(unsigned int srate); 16 16 const char *codec_dev_name; 17 + const char *hp_jack_name; 17 18 struct snd_soc_card *card; 18 19 unsigned int mclk_id; 19 20 bool hp_jack_gpio_active_low;
+9 -4
tools/perf/builtin-inject.c
··· 755 755 return inject->itrace_synth_opts.vm_tm_corr_args ? 0 : -ENOMEM; 756 756 } 757 757 758 + static int output_fd(struct perf_inject *inject) 759 + { 760 + return inject->in_place_update ? -1 : perf_data__fd(&inject->output); 761 + } 762 + 758 763 static int __cmd_inject(struct perf_inject *inject) 759 764 { 760 765 int ret = -EINVAL; 761 766 struct perf_session *session = inject->session; 762 - struct perf_data *data_out = &inject->output; 763 - int fd = inject->in_place_update ? -1 : perf_data__fd(data_out); 767 + int fd = output_fd(inject); 764 768 u64 output_data_offset; 765 769 766 770 signal(SIGINT, sig_handler); ··· 1019 1015 } 1020 1016 1021 1017 inject.session = __perf_session__new(&data, repipe, 1022 - perf_data__fd(&inject.output), 1018 + output_fd(&inject), 1023 1019 &inject.tool); 1024 1020 if (IS_ERR(inject.session)) { 1025 1021 ret = PTR_ERR(inject.session); ··· 1082 1078 zstd_fini(&(inject.session->zstd_data)); 1083 1079 perf_session__delete(inject.session); 1084 1080 out_close_output: 1085 - perf_data__close(&inject.output); 1081 + if (!inject.in_place_update) 1082 + perf_data__close(&inject.output); 1086 1083 free(inject.itrace_synth_opts.vm_tm_corr_args); 1087 1084 return ret; 1088 1085 }
+1 -1
tools/perf/builtin-script.c
··· 2473 2473 if (perf_event__process_switch(tool, event, sample, machine) < 0) 2474 2474 return -1; 2475 2475 2476 - if (scripting_ops && scripting_ops->process_switch) 2476 + if (scripting_ops && scripting_ops->process_switch && !filter_cpu(sample)) 2477 2477 scripting_ops->process_switch(event, sample, machine); 2478 2478 2479 2479 if (!script->show_switch_events)
+13 -10
tools/perf/scripts/python/intel-pt-events.py
··· 32 32 except: 33 33 broken_pipe_exception = IOError 34 34 35 - glb_switch_str = None 36 - glb_switch_printed = True 35 + glb_switch_str = {} 37 36 glb_insn = False 38 37 glb_disassembler = None 39 38 glb_src = False ··· 69 70 ap = argparse.ArgumentParser(usage = "", add_help = False) 70 71 ap.add_argument("--insn-trace", action='store_true') 71 72 ap.add_argument("--src-trace", action='store_true') 73 + ap.add_argument("--all-switch-events", action='store_true') 72 74 global glb_args 73 75 global glb_insn 74 76 global glb_src ··· 256 256 print(start_str, src_str) 257 257 258 258 def do_process_event(param_dict): 259 - global glb_switch_printed 260 - if not glb_switch_printed: 261 - print(glb_switch_str) 262 - glb_switch_printed = True 263 259 event_attr = param_dict["attr"] 264 260 sample = param_dict["sample"] 265 261 raw_buf = param_dict["raw_buf"] ··· 269 273 # Symbol and dso info are not always resolved 270 274 dso = get_optional(param_dict, "dso") 271 275 symbol = get_optional(param_dict, "symbol") 276 + 277 + cpu = sample["cpu"] 278 + if cpu in glb_switch_str: 279 + print(glb_switch_str[cpu]) 280 + del glb_switch_str[cpu] 272 281 273 282 if name[0:12] == "instructions": 274 283 if glb_src: ··· 337 336 sys.exit(1) 338 337 339 338 def context_switch(ts, cpu, pid, tid, np_pid, np_tid, machine_pid, out, out_preempt, *x): 340 - global glb_switch_printed 341 - global glb_switch_str 342 339 if out: 343 340 out_str = "Switch out " 344 341 else: ··· 349 350 machine_str = "" 350 351 else: 351 352 machine_str = "machine PID %d" % machine_pid 352 - glb_switch_str = "%16s %5d/%-5d [%03u] %9u.%09u %5d/%-5d %s %s" % \ 353 + switch_str = "%16s %5d/%-5d [%03u] %9u.%09u %5d/%-5d %s %s" % \ 353 354 (out_str, pid, tid, cpu, ts / 1000000000, ts %1000000000, np_pid, np_tid, machine_str, preempt_str) 354 - glb_switch_printed = False 355 + if glb_args.all_switch_events: 356 + print(switch_str); 357 + else: 358 + global glb_switch_str 359 + glb_switch_str[cpu] = switch_str
+5 -3
tools/perf/ui/tui/setup.c
··· 170 170 "Press any key...", 0); 171 171 172 172 SLtt_set_cursor_visibility(1); 173 - SLsmg_refresh(); 174 - SLsmg_reset_smg(); 173 + if (!pthread_mutex_trylock(&ui__lock)) { 174 + SLsmg_refresh(); 175 + SLsmg_reset_smg(); 176 + pthread_mutex_unlock(&ui__lock); 177 + } 175 178 SLang_reset_tty(); 176 - 177 179 perf_error__unregister(&perf_tui_eops); 178 180 }
+11 -1
tools/perf/util/expr.c
··· 12 12 #include "expr-bison.h" 13 13 #include "expr-flex.h" 14 14 #include "smt.h" 15 + #include <linux/err.h> 15 16 #include <linux/kernel.h> 16 17 #include <linux/zalloc.h> 17 18 #include <ctype.h> ··· 66 65 67 66 struct hashmap *ids__new(void) 68 67 { 69 - return hashmap__new(key_hash, key_equal, NULL); 68 + struct hashmap *hash; 69 + 70 + hash = hashmap__new(key_hash, key_equal, NULL); 71 + if (IS_ERR(hash)) 72 + return NULL; 73 + return hash; 70 74 } 71 75 72 76 void ids__free(struct hashmap *ids) ··· 305 299 return NULL; 306 300 307 301 ctx->ids = hashmap__new(key_hash, key_equal, NULL); 302 + if (IS_ERR(ctx->ids)) { 303 + free(ctx); 304 + return NULL; 305 + } 308 306 ctx->runtime = 0; 309 307 310 308 return ctx;
+1
tools/perf/util/intel-pt.c
··· 3625 3625 *args = p; 3626 3626 return 0; 3627 3627 } 3628 + p += 1; 3628 3629 while (1) { 3629 3630 vmcs = strtoull(p, &p, 0); 3630 3631 if (errno)
+17 -6
tools/perf/util/pmu.c
··· 1659 1659 return !strcmp(name, "cpu") || is_arm_pmu_core(name); 1660 1660 } 1661 1661 1662 + static bool pmu_alias_is_duplicate(struct sevent *alias_a, 1663 + struct sevent *alias_b) 1664 + { 1665 + /* Different names -> never duplicates */ 1666 + if (strcmp(alias_a->name, alias_b->name)) 1667 + return false; 1668 + 1669 + /* Don't remove duplicates for hybrid PMUs */ 1670 + if (perf_pmu__is_hybrid(alias_a->pmu) && 1671 + perf_pmu__is_hybrid(alias_b->pmu)) 1672 + return false; 1673 + 1674 + return true; 1675 + } 1676 + 1662 1677 void print_pmu_events(const char *event_glob, bool name_only, bool quiet_flag, 1663 1678 bool long_desc, bool details_flag, bool deprecated, 1664 1679 const char *pmu_name) ··· 1759 1744 qsort(aliases, len, sizeof(struct sevent), cmp_sevent); 1760 1745 for (j = 0; j < len; j++) { 1761 1746 /* Skip duplicates */ 1762 - if (j > 0 && !strcmp(aliases[j].name, aliases[j - 1].name)) { 1763 - if (!aliases[j].pmu || !aliases[j - 1].pmu || 1764 - !strcmp(aliases[j].pmu, aliases[j - 1].pmu)) { 1765 - continue; 1766 - } 1767 - } 1747 + if (j > 0 && pmu_alias_is_duplicate(&aliases[j], &aliases[j - 1])) 1748 + continue; 1768 1749 1769 1750 if (name_only) { 1770 1751 printf("%s ", aliases[j].name);
+20
tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c
··· 33 33 return sum; 34 34 } 35 35 36 + __weak noinline struct file *bpf_testmod_return_ptr(int arg) 37 + { 38 + static struct file f = {}; 39 + 40 + switch (arg) { 41 + case 1: return (void *)EINVAL; /* user addr */ 42 + case 2: return (void *)0xcafe4a11; /* user addr */ 43 + case 3: return (void *)-EINVAL; /* canonical, but invalid */ 44 + case 4: return (void *)(1ull << 60); /* non-canonical and invalid */ 45 + case 5: return (void *)~(1ull << 30); /* trigger extable */ 46 + case 6: return &f; /* valid addr */ 47 + case 7: return (void *)((long)&f | 1); /* kernel tricks */ 48 + default: return NULL; 49 + } 50 + } 51 + 36 52 noinline ssize_t 37 53 bpf_testmod_test_read(struct file *file, struct kobject *kobj, 38 54 struct bin_attribute *bin_attr, ··· 59 43 .off = off, 60 44 .len = len, 61 45 }; 46 + int i = 1; 47 + 48 + while (bpf_testmod_return_ptr(i)) 49 + i++; 62 50 63 51 /* This is always true. Use the check to make sure the compiler 64 52 * doesn't remove bpf_testmod_loop_test.
+14 -2
tools/testing/selftests/bpf/prog_tests/btf_skc_cls_ingress.c
··· 90 90 91 91 static void test_conn(void) 92 92 { 93 - int listen_fd = -1, cli_fd = -1, err; 93 + int listen_fd = -1, cli_fd = -1, srv_fd = -1, err; 94 94 socklen_t addrlen = sizeof(srv_sa6); 95 95 int srv_port; 96 96 ··· 110 110 111 111 cli_fd = connect_to_fd(listen_fd, 0); 112 112 if (CHECK_FAIL(cli_fd == -1)) 113 + goto done; 114 + 115 + srv_fd = accept(listen_fd, NULL, NULL); 116 + if (CHECK_FAIL(srv_fd == -1)) 113 117 goto done; 114 118 115 119 if (CHECK(skel->bss->listen_tp_sport != srv_port || ··· 138 134 close(listen_fd); 139 135 if (cli_fd != -1) 140 136 close(cli_fd); 137 + if (srv_fd != -1) 138 + close(srv_fd); 141 139 } 142 140 143 141 static void test_syncookie(void) 144 142 { 145 - int listen_fd = -1, cli_fd = -1, err; 143 + int listen_fd = -1, cli_fd = -1, srv_fd = -1, err; 146 144 socklen_t addrlen = sizeof(srv_sa6); 147 145 int srv_port; 148 146 ··· 165 159 166 160 cli_fd = connect_to_fd(listen_fd, 0); 167 161 if (CHECK_FAIL(cli_fd == -1)) 162 + goto done; 163 + 164 + srv_fd = accept(listen_fd, NULL, NULL); 165 + if (CHECK_FAIL(srv_fd == -1)) 168 166 goto done; 169 167 170 168 if (CHECK(skel->bss->listen_tp_sport != srv_port, ··· 198 188 close(listen_fd); 199 189 if (cli_fd != -1) 200 190 close(cli_fd); 191 + if (srv_fd != -1) 192 + close(srv_fd); 201 193 } 202 194 203 195 struct test {
+12
tools/testing/selftests/bpf/progs/test_module_attach.c
··· 87 87 return 0; 88 88 } 89 89 90 + SEC("fexit/bpf_testmod_return_ptr") 91 + int BPF_PROG(handle_fexit_ret, int arg, struct file *ret) 92 + { 93 + long buf = 0; 94 + 95 + bpf_probe_read_kernel(&buf, 8, ret); 96 + bpf_probe_read_kernel(&buf, 8, (char *)ret + 256); 97 + *(volatile long long *)ret; 98 + *(volatile int *)&ret->f_mode; 99 + return 0; 100 + } 101 + 90 102 __u32 fmod_ret_read_sz = 0; 91 103 92 104 SEC("fmod_ret/bpf_testmod_test_read")
+1 -1
tools/testing/selftests/bpf/test_verifier.c
··· 54 54 #define MAX_INSNS BPF_MAXINSNS 55 55 #define MAX_TEST_INSNS 1000000 56 56 #define MAX_FIXUPS 8 57 - #define MAX_NR_MAPS 21 57 + #define MAX_NR_MAPS 22 58 58 #define MAX_TEST_RUNS 8 59 59 #define POINTER_VALUE 0xcafe4all 60 60 #define TEST_DATA_LEN 64
+86
tools/testing/selftests/bpf/verifier/atomic_cmpxchg.c
··· 138 138 BPF_EXIT_INSN(), 139 139 }, 140 140 .result = ACCEPT, 141 + .result_unpriv = REJECT, 142 + .errstr_unpriv = "R0 leaks addr into mem", 141 143 }, 142 144 { 143 145 "Dest pointer in r0 - succeed", ··· 158 156 BPF_EXIT_INSN(), 159 157 }, 160 158 .result = ACCEPT, 159 + .result_unpriv = REJECT, 160 + .errstr_unpriv = "R0 leaks addr into mem", 161 + }, 162 + { 163 + "Dest pointer in r0 - succeed, check 2", 164 + .insns = { 165 + /* r0 = &val */ 166 + BPF_MOV64_REG(BPF_REG_0, BPF_REG_10), 167 + /* val = r0; */ 168 + BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -8), 169 + /* r5 = &val */ 170 + BPF_MOV64_REG(BPF_REG_5, BPF_REG_10), 171 + /* r0 = atomic_cmpxchg(&val, r0, r5); */ 172 + BPF_ATOMIC_OP(BPF_DW, BPF_CMPXCHG, BPF_REG_10, BPF_REG_5, -8), 173 + /* r1 = *r0 */ 174 + BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, -8), 175 + /* exit(0); */ 176 + BPF_MOV64_IMM(BPF_REG_0, 0), 177 + BPF_EXIT_INSN(), 178 + }, 179 + .result = ACCEPT, 180 + .result_unpriv = REJECT, 181 + .errstr_unpriv = "R0 leaks addr into mem", 182 + }, 183 + { 184 + "Dest pointer in r0 - succeed, check 3", 185 + .insns = { 186 + /* r0 = &val */ 187 + BPF_MOV64_REG(BPF_REG_0, BPF_REG_10), 188 + /* val = r0; */ 189 + BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -8), 190 + /* r5 = &val */ 191 + BPF_MOV64_REG(BPF_REG_5, BPF_REG_10), 192 + /* r0 = atomic_cmpxchg(&val, r0, r5); */ 193 + BPF_ATOMIC_OP(BPF_W, BPF_CMPXCHG, BPF_REG_10, BPF_REG_5, -8), 194 + /* exit(0); */ 195 + BPF_MOV64_IMM(BPF_REG_0, 0), 196 + BPF_EXIT_INSN(), 197 + }, 198 + .result = REJECT, 199 + .errstr = "invalid size of register fill", 200 + .errstr_unpriv = "R0 leaks addr into mem", 201 + }, 202 + { 203 + "Dest pointer in r0 - succeed, check 4", 204 + .insns = { 205 + /* r0 = &val */ 206 + BPF_MOV32_REG(BPF_REG_0, BPF_REG_10), 207 + /* val = r0; */ 208 + BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_0, -8), 209 + /* r5 = &val */ 210 + BPF_MOV32_REG(BPF_REG_5, BPF_REG_10), 211 + /* r0 = atomic_cmpxchg(&val, r0, r5); */ 212 + BPF_ATOMIC_OP(BPF_W, 
BPF_CMPXCHG, BPF_REG_10, BPF_REG_5, -8), 213 + /* r1 = *r10 */ 214 + BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_10, -8), 215 + /* exit(0); */ 216 + BPF_MOV64_IMM(BPF_REG_0, 0), 217 + BPF_EXIT_INSN(), 218 + }, 219 + .result = ACCEPT, 220 + .result_unpriv = REJECT, 221 + .errstr_unpriv = "R10 partial copy of pointer", 222 + }, 223 + { 224 + "Dest pointer in r0 - succeed, check 5", 225 + .insns = { 226 + /* r0 = &val */ 227 + BPF_MOV32_REG(BPF_REG_0, BPF_REG_10), 228 + /* val = r0; */ 229 + BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_0, -8), 230 + /* r5 = &val */ 231 + BPF_MOV32_REG(BPF_REG_5, BPF_REG_10), 232 + /* r0 = atomic_cmpxchg(&val, r0, r5); */ 233 + BPF_ATOMIC_OP(BPF_W, BPF_CMPXCHG, BPF_REG_10, BPF_REG_5, -8), 234 + /* r1 = *r0 */ 235 + BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, -8), 236 + /* exit(0); */ 237 + BPF_MOV64_IMM(BPF_REG_0, 0), 238 + BPF_EXIT_INSN(), 239 + }, 240 + .result = REJECT, 241 + .errstr = "R0 invalid mem access", 242 + .errstr_unpriv = "R10 partial copy of pointer", 161 243 },
+94
tools/testing/selftests/bpf/verifier/atomic_fetch.c
··· 1 + { 2 + "atomic dw/fetch and address leakage of (map ptr & -1) via stack slot", 3 + .insns = { 4 + BPF_LD_IMM64(BPF_REG_1, -1), 5 + BPF_LD_MAP_FD(BPF_REG_8, 0), 6 + BPF_LD_MAP_FD(BPF_REG_9, 0), 7 + BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), 8 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), 9 + BPF_STX_MEM(BPF_DW, BPF_REG_2, BPF_REG_9, 0), 10 + BPF_ATOMIC_OP(BPF_DW, BPF_AND | BPF_FETCH, BPF_REG_2, BPF_REG_1, 0), 11 + BPF_LDX_MEM(BPF_DW, BPF_REG_9, BPF_REG_2, 0), 12 + BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0), 13 + BPF_MOV64_REG(BPF_REG_1, BPF_REG_8), 14 + BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), 15 + BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1), 16 + BPF_STX_MEM(BPF_DW, BPF_REG_0, BPF_REG_9, 0), 17 + BPF_MOV64_IMM(BPF_REG_0, 0), 18 + BPF_EXIT_INSN(), 19 + }, 20 + .fixup_map_array_48b = { 2, 4 }, 21 + .result = ACCEPT, 22 + .result_unpriv = REJECT, 23 + .errstr_unpriv = "leaking pointer from stack off -8", 24 + }, 25 + { 26 + "atomic dw/fetch and address leakage of (map ptr & -1) via returned value", 27 + .insns = { 28 + BPF_LD_IMM64(BPF_REG_1, -1), 29 + BPF_LD_MAP_FD(BPF_REG_8, 0), 30 + BPF_LD_MAP_FD(BPF_REG_9, 0), 31 + BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), 32 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), 33 + BPF_STX_MEM(BPF_DW, BPF_REG_2, BPF_REG_9, 0), 34 + BPF_ATOMIC_OP(BPF_DW, BPF_AND | BPF_FETCH, BPF_REG_2, BPF_REG_1, 0), 35 + BPF_MOV64_REG(BPF_REG_9, BPF_REG_1), 36 + BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0), 37 + BPF_MOV64_REG(BPF_REG_1, BPF_REG_8), 38 + BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), 39 + BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1), 40 + BPF_STX_MEM(BPF_DW, BPF_REG_0, BPF_REG_9, 0), 41 + BPF_MOV64_IMM(BPF_REG_0, 0), 42 + BPF_EXIT_INSN(), 43 + }, 44 + .fixup_map_array_48b = { 2, 4 }, 45 + .result = ACCEPT, 46 + .result_unpriv = REJECT, 47 + .errstr_unpriv = "leaking pointer from stack off -8", 48 + }, 49 + { 50 + "atomic w/fetch and address leakage of (map ptr & -1) via stack slot", 51 + .insns = { 52 + BPF_LD_IMM64(BPF_REG_1, -1), 53 + BPF_LD_MAP_FD(BPF_REG_8, 0), 54 + 
BPF_LD_MAP_FD(BPF_REG_9, 0), 55 + BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), 56 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), 57 + BPF_STX_MEM(BPF_DW, BPF_REG_2, BPF_REG_9, 0), 58 + BPF_ATOMIC_OP(BPF_W, BPF_AND | BPF_FETCH, BPF_REG_2, BPF_REG_1, 0), 59 + BPF_LDX_MEM(BPF_DW, BPF_REG_9, BPF_REG_2, 0), 60 + BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0), 61 + BPF_MOV64_REG(BPF_REG_1, BPF_REG_8), 62 + BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), 63 + BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1), 64 + BPF_STX_MEM(BPF_DW, BPF_REG_0, BPF_REG_9, 0), 65 + BPF_MOV64_IMM(BPF_REG_0, 0), 66 + BPF_EXIT_INSN(), 67 + }, 68 + .fixup_map_array_48b = { 2, 4 }, 69 + .result = REJECT, 70 + .errstr = "invalid size of register fill", 71 + }, 72 + { 73 + "atomic w/fetch and address leakage of (map ptr & -1) via returned value", 74 + .insns = { 75 + BPF_LD_IMM64(BPF_REG_1, -1), 76 + BPF_LD_MAP_FD(BPF_REG_8, 0), 77 + BPF_LD_MAP_FD(BPF_REG_9, 0), 78 + BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), 79 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), 80 + BPF_STX_MEM(BPF_DW, BPF_REG_2, BPF_REG_9, 0), 81 + BPF_ATOMIC_OP(BPF_W, BPF_AND | BPF_FETCH, BPF_REG_2, BPF_REG_1, 0), 82 + BPF_MOV64_REG(BPF_REG_9, BPF_REG_1), 83 + BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0), 84 + BPF_MOV64_REG(BPF_REG_1, BPF_REG_8), 85 + BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), 86 + BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1), 87 + BPF_STX_MEM(BPF_DW, BPF_REG_0, BPF_REG_9, 0), 88 + BPF_MOV64_IMM(BPF_REG_0, 0), 89 + BPF_EXIT_INSN(), 90 + }, 91 + .fixup_map_array_48b = { 2, 4 }, 92 + .result = REJECT, 93 + .errstr = "invalid size of register fill", 94 + }, 1 95 #define __ATOMIC_FETCH_OP_TEST(src_reg, dst_reg, operand1, op, operand2, expect) \ 2 96 { \ 3 97 "atomic fetch " #op ", src=" #dst_reg " dst=" #dst_reg, \
+71
tools/testing/selftests/bpf/verifier/search_pruning.c
··· 133 133 .prog_type = BPF_PROG_TYPE_TRACEPOINT, 134 134 }, 135 135 { 136 + "precision tracking for u32 spill/fill", 137 + .insns = { 138 + BPF_MOV64_REG(BPF_REG_7, BPF_REG_1), 139 + BPF_EMIT_CALL(BPF_FUNC_get_prandom_u32), 140 + BPF_MOV32_IMM(BPF_REG_6, 32), 141 + BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1), 142 + BPF_MOV32_IMM(BPF_REG_6, 4), 143 + /* Additional insns to introduce a pruning point. */ 144 + BPF_EMIT_CALL(BPF_FUNC_get_prandom_u32), 145 + BPF_MOV64_IMM(BPF_REG_3, 0), 146 + BPF_MOV64_IMM(BPF_REG_3, 0), 147 + BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1), 148 + BPF_MOV64_IMM(BPF_REG_3, 0), 149 + /* u32 spill/fill */ 150 + BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_6, -8), 151 + BPF_LDX_MEM(BPF_W, BPF_REG_8, BPF_REG_10, -8), 152 + /* out-of-bound map value access for r6=32 */ 153 + BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, 0), 154 + BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), 155 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -16), 156 + BPF_LD_MAP_FD(BPF_REG_1, 0), 157 + BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), 158 + BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2), 159 + BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_8), 160 + BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0), 161 + BPF_MOV64_IMM(BPF_REG_0, 0), 162 + BPF_EXIT_INSN(), 163 + }, 164 + .fixup_map_hash_8b = { 15 }, 165 + .result = REJECT, 166 + .errstr = "R0 min value is outside of the allowed memory range", 167 + .prog_type = BPF_PROG_TYPE_TRACEPOINT, 168 + }, 169 + { 170 + "precision tracking for u32 spills, u64 fill", 171 + .insns = { 172 + BPF_EMIT_CALL(BPF_FUNC_get_prandom_u32), 173 + BPF_MOV64_REG(BPF_REG_6, BPF_REG_0), 174 + BPF_MOV32_IMM(BPF_REG_7, 0xffffffff), 175 + /* Additional insns to introduce a pruning point. 
*/ 176 + BPF_MOV64_IMM(BPF_REG_3, 1), 177 + BPF_MOV64_IMM(BPF_REG_3, 1), 178 + BPF_MOV64_IMM(BPF_REG_3, 1), 179 + BPF_MOV64_IMM(BPF_REG_3, 1), 180 + BPF_EMIT_CALL(BPF_FUNC_get_prandom_u32), 181 + BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1), 182 + BPF_MOV64_IMM(BPF_REG_3, 1), 183 + BPF_ALU32_IMM(BPF_DIV, BPF_REG_3, 0), 184 + /* u32 spills, u64 fill */ 185 + BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_6, -4), 186 + BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_7, -8), 187 + BPF_LDX_MEM(BPF_DW, BPF_REG_8, BPF_REG_10, -8), 188 + /* if r8 != X goto pc+1 r8 known in fallthrough branch */ 189 + BPF_JMP_IMM(BPF_JNE, BPF_REG_8, 0xffffffff, 1), 190 + BPF_MOV64_IMM(BPF_REG_3, 1), 191 + /* if r8 == X goto pc+1 condition always true on first 192 + * traversal, so starts backtracking to mark r8 as requiring 193 + * precision. r7 marked as needing precision. r6 not marked 194 + * since it's not tracked. 195 + */ 196 + BPF_JMP_IMM(BPF_JEQ, BPF_REG_8, 0xffffffff, 1), 197 + /* fails if r8 correctly marked unknown after fill. */ 198 + BPF_ALU32_IMM(BPF_DIV, BPF_REG_3, 0), 199 + BPF_MOV64_IMM(BPF_REG_0, 0), 200 + BPF_EXIT_INSN(), 201 + }, 202 + .result = REJECT, 203 + .errstr = "div by zero", 204 + .prog_type = BPF_PROG_TYPE_TRACEPOINT, 205 + }, 206 + { 136 207 "allocated_stack", 137 208 .insns = { 138 209 BPF_ALU64_REG(BPF_MOV, BPF_REG_6, BPF_REG_1),
+32
tools/testing/selftests/bpf/verifier/spill_fill.c
··· 176 176 .prog_type = BPF_PROG_TYPE_SCHED_CLS, 177 177 }, 178 178 { 179 + "Spill u32 const scalars. Refill as u64. Offset to skb->data", 180 + .insns = { 181 + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, 182 + offsetof(struct __sk_buff, data)), 183 + BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, 184 + offsetof(struct __sk_buff, data_end)), 185 + /* r6 = 0 */ 186 + BPF_MOV32_IMM(BPF_REG_6, 0), 187 + /* r7 = 20 */ 188 + BPF_MOV32_IMM(BPF_REG_7, 20), 189 + /* *(u32 *)(r10 -4) = r6 */ 190 + BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_6, -4), 191 + /* *(u32 *)(r10 -8) = r7 */ 192 + BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_7, -8), 193 + /* r4 = *(u64 *)(r10 -8) */ 194 + BPF_LDX_MEM(BPF_H, BPF_REG_4, BPF_REG_10, -8), 195 + /* r0 = r2 */ 196 + BPF_MOV64_REG(BPF_REG_0, BPF_REG_2), 197 + /* r0 += r4 R0=pkt R2=pkt R3=pkt_end R4=inv,umax=65535 */ 198 + BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_4), 199 + /* if (r0 > r3) R0=pkt,umax=65535 R2=pkt R3=pkt_end R4=inv,umax=65535 */ 200 + BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1), 201 + /* r0 = *(u32 *)r2 R0=pkt,umax=65535 R2=pkt R3=pkt_end R4=inv20 */ 202 + BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_2, 0), 203 + BPF_MOV64_IMM(BPF_REG_0, 0), 204 + BPF_EXIT_INSN(), 205 + }, 206 + .result = REJECT, 207 + .errstr = "invalid access to packet", 208 + .prog_type = BPF_PROG_TYPE_SCHED_CLS, 209 + }, 210 + { 179 211 "Spill a u32 const scalar. Refill as u16 from fp-6. Offset to skb->data", 180 212 .insns = { 181 213 BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
+23
tools/testing/selftests/bpf/verifier/value_ptr_arith.c
··· 1078 1078 .errstr_unpriv = "R0 pointer -= pointer prohibited", 1079 1079 }, 1080 1080 { 1081 + "map access: trying to leak tainted dst reg", 1082 + .insns = { 1083 + BPF_MOV64_IMM(BPF_REG_0, 0), 1084 + BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), 1085 + BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), 1086 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), 1087 + BPF_LD_MAP_FD(BPF_REG_1, 0), 1088 + BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), 1089 + BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1), 1090 + BPF_EXIT_INSN(), 1091 + BPF_MOV64_REG(BPF_REG_2, BPF_REG_0), 1092 + BPF_MOV32_IMM(BPF_REG_1, 0xFFFFFFFF), 1093 + BPF_MOV32_REG(BPF_REG_1, BPF_REG_1), 1094 + BPF_ALU64_REG(BPF_SUB, BPF_REG_2, BPF_REG_1), 1095 + BPF_STX_MEM(BPF_DW, BPF_REG_0, BPF_REG_2, 0), 1096 + BPF_MOV64_IMM(BPF_REG_0, 0), 1097 + BPF_EXIT_INSN(), 1098 + }, 1099 + .fixup_map_array_48b = { 4 }, 1100 + .result = REJECT, 1101 + .errstr = "math between map_value pointer and 4294967295 is not allowed", 1102 + }, 1103 + { 1081 1104 "32bit pkt_ptr -= scalar", 1082 1105 .insns = { 1083 1106 BPF_LDX_MEM(BPF_W, BPF_REG_8, BPF_REG_1,
+30
tools/testing/selftests/drivers/net/mlxsw/rif_mac_profiles_occ.sh
··· 72 72 ip link set $h1.10 address $h1_10_mac 73 73 } 74 74 75 + rif_mac_profile_consolidation_test() 76 + { 77 + local count=$1; shift 78 + local h1_20_mac 79 + 80 + RET=0 81 + 82 + if [[ $count -eq 1 ]]; then 83 + return 84 + fi 85 + 86 + h1_20_mac=$(mac_get $h1.20) 87 + 88 + # Set the MAC of $h1.20 to that of $h1.10 and confirm that they are 89 + # using the same MAC profile. 90 + ip link set $h1.20 address 00:11:11:11:11:11 91 + check_err $? 92 + 93 + occ=$(devlink -j resource show $DEVLINK_DEV \ 94 + | jq '.[][][] | select(.name=="rif_mac_profiles") |.["occ"]') 95 + 96 + [[ $occ -eq $((count - 1)) ]] 97 + check_err $? "MAC profile occupancy did not decrease" 98 + 99 + log_test "RIF MAC profile consolidation" 100 + 101 + ip link set $h1.20 address $h1_20_mac 102 + } 103 + 75 104 rif_mac_profile_shared_replacement_test() 76 105 { 77 106 local count=$1; shift ··· 133 104 create_max_rif_mac_profiles $count 134 105 135 106 rif_mac_profile_replacement_test 107 + rif_mac_profile_consolidation_test $count 136 108 rif_mac_profile_shared_replacement_test $count 137 109 } 138 110
+1
tools/testing/selftests/kvm/.gitignore
··· 35 35 /x86_64/vmx_apic_access_test 36 36 /x86_64/vmx_close_while_nested_test 37 37 /x86_64/vmx_dirty_log_test 38 + /x86_64/vmx_invalid_nested_guest_state 38 39 /x86_64/vmx_preemption_timer_test 39 40 /x86_64/vmx_set_nested_state_test 40 41 /x86_64/vmx_tsc_adjust_test
+1
tools/testing/selftests/kvm/Makefile
··· 64 64 TEST_GEN_PROGS_x86_64 += x86_64/vmx_apic_access_test 65 65 TEST_GEN_PROGS_x86_64 += x86_64/vmx_close_while_nested_test 66 66 TEST_GEN_PROGS_x86_64 += x86_64/vmx_dirty_log_test 67 + TEST_GEN_PROGS_x86_64 += x86_64/vmx_invalid_nested_guest_state 67 68 TEST_GEN_PROGS_x86_64 += x86_64/vmx_set_nested_state_test 68 69 TEST_GEN_PROGS_x86_64 += x86_64/vmx_tsc_adjust_test 69 70 TEST_GEN_PROGS_x86_64 += x86_64/vmx_nested_tsc_scaling_test
+1 -9
tools/testing/selftests/kvm/include/kvm_util.h
··· 71 71 72 72 #endif 73 73 74 - #if defined(__x86_64__) 75 - unsigned long vm_compute_max_gfn(struct kvm_vm *vm); 76 - #else 77 - static inline unsigned long vm_compute_max_gfn(struct kvm_vm *vm) 78 - { 79 - return ((1ULL << vm->pa_bits) >> vm->page_shift) - 1; 80 - } 81 - #endif 82 - 83 74 #define MIN_PAGE_SIZE (1U << MIN_PAGE_SHIFT) 84 75 #define PTES_PER_MIN_PAGE ptes_per_page(MIN_PAGE_SIZE) 85 76 ··· 321 330 322 331 unsigned int vm_get_page_size(struct kvm_vm *vm); 323 332 unsigned int vm_get_page_shift(struct kvm_vm *vm); 333 + unsigned long vm_compute_max_gfn(struct kvm_vm *vm); 324 334 uint64_t vm_get_max_gfn(struct kvm_vm *vm); 325 335 int vm_get_fd(struct kvm_vm *vm); 326 336
+5
tools/testing/selftests/kvm/lib/kvm_util.c
··· 2328 2328 return vm->page_shift; 2329 2329 } 2330 2330 2331 + unsigned long __attribute__((weak)) vm_compute_max_gfn(struct kvm_vm *vm) 2332 + { 2333 + return ((1ULL << vm->pa_bits) >> vm->page_shift) - 1; 2334 + } 2335 + 2331 2336 uint64_t vm_get_max_gfn(struct kvm_vm *vm) 2332 2337 { 2333 2338 return vm->max_gfn;
+105
tools/testing/selftests/kvm/x86_64/vmx_invalid_nested_guest_state.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + #include "test_util.h" 3 + #include "kvm_util.h" 4 + #include "processor.h" 5 + #include "vmx.h" 6 + 7 + #include <string.h> 8 + #include <sys/ioctl.h> 9 + 10 + #include "kselftest.h" 11 + 12 + #define VCPU_ID 0 13 + #define ARBITRARY_IO_PORT 0x2000 14 + 15 + static struct kvm_vm *vm; 16 + 17 + static void l2_guest_code(void) 18 + { 19 + /* 20 + * Generate an exit to L0 userspace, i.e. main(), via I/O to an 21 + * arbitrary port. 22 + */ 23 + asm volatile("inb %%dx, %%al" 24 + : : [port] "d" (ARBITRARY_IO_PORT) : "rax"); 25 + } 26 + 27 + static void l1_guest_code(struct vmx_pages *vmx_pages) 28 + { 29 + #define L2_GUEST_STACK_SIZE 64 30 + unsigned long l2_guest_stack[L2_GUEST_STACK_SIZE]; 31 + 32 + GUEST_ASSERT(prepare_for_vmx_operation(vmx_pages)); 33 + GUEST_ASSERT(load_vmcs(vmx_pages)); 34 + 35 + /* Prepare the VMCS for L2 execution. */ 36 + prepare_vmcs(vmx_pages, l2_guest_code, 37 + &l2_guest_stack[L2_GUEST_STACK_SIZE]); 38 + 39 + /* 40 + * L2 must be run without unrestricted guest, verify that the selftests 41 + * library hasn't enabled it. Because KVM selftests jump directly to 42 + * 64-bit mode, unrestricted guest support isn't required. 43 + */ 44 + GUEST_ASSERT(!(vmreadz(CPU_BASED_VM_EXEC_CONTROL) & CPU_BASED_ACTIVATE_SECONDARY_CONTROLS) || 45 + !(vmreadz(SECONDARY_VM_EXEC_CONTROL) & SECONDARY_EXEC_UNRESTRICTED_GUEST)); 46 + 47 + GUEST_ASSERT(!vmlaunch()); 48 + 49 + /* L2 should triple fault after main() stuffs invalid guest state. */ 50 + GUEST_ASSERT(vmreadz(VM_EXIT_REASON) == EXIT_REASON_TRIPLE_FAULT); 51 + GUEST_DONE(); 52 + } 53 + 54 + int main(int argc, char *argv[]) 55 + { 56 + vm_vaddr_t vmx_pages_gva; 57 + struct kvm_sregs sregs; 58 + struct kvm_run *run; 59 + struct ucall uc; 60 + 61 + nested_vmx_check_supported(); 62 + 63 + vm = vm_create_default(VCPU_ID, 0, (void *) l1_guest_code); 64 + 65 + /* Allocate VMX pages and shared descriptors (vmx_pages). 
*/ 66 + vcpu_alloc_vmx(vm, &vmx_pages_gva); 67 + vcpu_args_set(vm, VCPU_ID, 1, vmx_pages_gva); 68 + 69 + vcpu_run(vm, VCPU_ID); 70 + 71 + run = vcpu_state(vm, VCPU_ID); 72 + 73 + /* 74 + * The first exit to L0 userspace should be an I/O access from L2. 75 + * Running L1 should launch L2 without triggering an exit to userspace. 76 + */ 77 + TEST_ASSERT(run->exit_reason == KVM_EXIT_IO, 78 + "Expected KVM_EXIT_IO, got: %u (%s)\n", 79 + run->exit_reason, exit_reason_str(run->exit_reason)); 80 + 81 + TEST_ASSERT(run->io.port == ARBITRARY_IO_PORT, 82 + "Expected IN from port %d from L2, got port %d", 83 + ARBITRARY_IO_PORT, run->io.port); 84 + 85 + /* 86 + * Stuff invalid guest state for L2 by making TR unusable. The next 87 + * KVM_RUN should induce a TRIPLE_FAULT in L2 as KVM doesn't support 88 + * emulating invalid guest state for L2. 89 + */ 90 + memset(&sregs, 0, sizeof(sregs)); 91 + vcpu_sregs_get(vm, VCPU_ID, &sregs); 92 + sregs.tr.unusable = 1; 93 + vcpu_sregs_set(vm, VCPU_ID, &sregs); 94 + 95 + vcpu_run(vm, VCPU_ID); 96 + 97 + switch (get_ucall(vm, VCPU_ID, &uc)) { 98 + case UCALL_DONE: 99 + break; 100 + case UCALL_ABORT: 101 + TEST_FAIL("%s", (const char *)uc.args[0]); 102 + default: 103 + TEST_FAIL("Unexpected ucall: %lu", uc.cmd); 104 + } 105 + }
-17
tools/testing/selftests/kvm/x86_64/vmx_pmu_msrs_test.c
··· 110 110 ret = _vcpu_set_msr(vm, 0, MSR_IA32_PERF_CAPABILITIES, PMU_CAP_LBR_FMT); 111 111 TEST_ASSERT(ret == 0, "Bad PERF_CAPABILITIES didn't fail."); 112 112 113 - /* testcase 4, set capabilities when we don't have PDCM bit */ 114 - entry_1_0->ecx &= ~X86_FEATURE_PDCM; 115 - vcpu_set_cpuid(vm, VCPU_ID, cpuid); 116 - ret = _vcpu_set_msr(vm, 0, MSR_IA32_PERF_CAPABILITIES, host_cap.capabilities); 117 - TEST_ASSERT(ret == 0, "Bad PERF_CAPABILITIES didn't fail."); 118 - 119 - /* testcase 5, set capabilities when we don't have PMU version bits */ 120 - entry_1_0->ecx |= X86_FEATURE_PDCM; 121 - eax.split.version_id = 0; 122 - entry_1_0->ecx = eax.full; 123 - vcpu_set_cpuid(vm, VCPU_ID, cpuid); 124 - ret = _vcpu_set_msr(vm, 0, MSR_IA32_PERF_CAPABILITIES, PMU_CAP_FW_WRITES); 125 - TEST_ASSERT(ret == 0, "Bad PERF_CAPABILITIES didn't fail."); 126 - 127 - vcpu_set_msr(vm, 0, MSR_IA32_PERF_CAPABILITIES, 0); 128 - ASSERT_EQ(vcpu_get_msr(vm, VCPU_ID, MSR_IA32_PERF_CAPABILITIES), 0); 129 - 130 113 kvm_vm_free(vm); 131 114 }
+34 -11
tools/testing/selftests/net/fcnal-test.sh
··· 455 455 ip netns del ${NSC} >/dev/null 2>&1 456 456 } 457 457 458 + cleanup_vrf_dup() 459 + { 460 + ip link del ${NSA_DEV2} >/dev/null 2>&1 461 + ip netns pids ${NSC} | xargs kill 2>/dev/null 462 + ip netns del ${NSC} >/dev/null 2>&1 463 + } 464 + 465 + setup_vrf_dup() 466 + { 467 + # some VRF tests use ns-C which has the same config as 468 + # ns-B but for a device NOT in the VRF 469 + create_ns ${NSC} "-" "-" 470 + connect_ns ${NSA} ${NSA_DEV2} ${NSA_IP}/24 ${NSA_IP6}/64 \ 471 + ${NSC} ${NSC_DEV} ${NSB_IP}/24 ${NSB_IP6}/64 472 + } 473 + 458 474 setup() 459 475 { 460 476 local with_vrf=${1} ··· 500 484 501 485 ip -netns ${NSB} ro add ${VRF_IP}/32 via ${NSA_IP} dev ${NSB_DEV} 502 486 ip -netns ${NSB} -6 ro add ${VRF_IP6}/128 via ${NSA_IP6} dev ${NSB_DEV} 503 - 504 - # some VRF tests use ns-C which has the same config as 505 - # ns-B but for a device NOT in the VRF 506 - create_ns ${NSC} "-" "-" 507 - connect_ns ${NSA} ${NSA_DEV2} ${NSA_IP}/24 ${NSA_IP6}/64 \ 508 - ${NSC} ${NSC_DEV} ${NSB_IP}/24 ${NSB_IP6}/64 509 487 else 510 488 ip -netns ${NSA} ro add ${NSB_LO_IP}/32 via ${NSB_IP} dev ${NSA_DEV} 511 489 ip -netns ${NSA} ro add ${NSB_LO_IP6}/128 via ${NSB_IP6} dev ${NSA_DEV} ··· 1250 1240 log_test_addr ${a} $? 1 "Global server, local connection" 1251 1241 1252 1242 # run MD5 tests 1243 + setup_vrf_dup 1253 1244 ipv4_tcp_md5 1245 + cleanup_vrf_dup 1254 1246 1255 1247 # 1256 1248 # enable VRF global server ··· 1810 1798 for a in ${NSA_IP} ${VRF_IP} 1811 1799 do 1812 1800 log_start 1801 + show_hint "Socket not bound to VRF, but address is in VRF" 1813 1802 run_cmd nettest -s -R -P icmp -l ${a} -b 1814 - log_test_addr ${a} $? 0 "Raw socket bind to local address" 1803 + log_test_addr ${a} $? 
1 "Raw socket bind to local address" 1815 1804 1816 1805 log_start 1817 1806 run_cmd nettest -s -R -P icmp -l ${a} -I ${NSA_DEV} -b ··· 2204 2191 log_start 2205 2192 show_hint "Fails since VRF device does not support linklocal or multicast" 2206 2193 run_cmd ${ping6} -c1 -w1 ${a} 2207 - log_test_addr ${a} $? 2 "ping out, VRF bind" 2194 + log_test_addr ${a} $? 1 "ping out, VRF bind" 2208 2195 done 2209 2196 2210 2197 for a in ${NSB_IP6} ${NSB_LO_IP6} ${NSB_LINKIP6}%${NSA_DEV} ${MCAST}%${NSA_DEV} ··· 2732 2719 log_test_addr ${a} $? 1 "Global server, local connection" 2733 2720 2734 2721 # run MD5 tests 2722 + setup_vrf_dup 2735 2723 ipv6_tcp_md5 2724 + cleanup_vrf_dup 2736 2725 2737 2726 # 2738 2727 # enable VRF global server ··· 3429 3414 run_cmd nettest -6 -s -l ${a} -I ${NSA_DEV} -t1 -b 3430 3415 log_test_addr ${a} $? 0 "TCP socket bind to local address after device bind" 3431 3416 3417 + # Sadly, the kernel allows binding a socket to a device and then 3418 + # binding to an address not on the device. So this test passes 3419 + # when it really should not 3432 3420 a=${NSA_LO_IP6} 3433 3421 log_start 3434 - show_hint "Should fail with 'Cannot assign requested address'" 3422 + show_hint "Technically should fail since address is not on device but kernel allows" 3435 3423 run_cmd nettest -6 -s -l ${a} -I ${NSA_DEV} -t1 -b 3436 - log_test_addr ${a} $? 1 "TCP socket bind to out of scope local address" 3424 + log_test_addr ${a} $? 0 "TCP socket bind to out of scope local address" 3437 3425 } 3438 3426 3439 3427 ipv6_addr_bind_vrf() ··· 3477 3459 run_cmd nettest -6 -s -l ${a} -I ${NSA_DEV} -t1 -b 3478 3460 log_test_addr ${a} $? 0 "TCP socket bind to local address with device bind" 3479 3461 3462 + # Sadly, the kernel allows binding a socket to a device and then 3463 + # binding to an address not on the device. The only restriction 3464 + # is that the address is valid in the L3 domain. 
So this test 3465 + passes when it really should not 3480 3466 a=${VRF_IP6} 3481 3467 log_start 3468 + show_hint "Technically should fail since address is not on device but kernel allows" 3482 3469 run_cmd nettest -6 -s -l ${a} -I ${NSA_DEV} -t1 -b 3483 3470 log_test_addr ${a} $? 1 "TCP socket bind to VRF address with device bind" 3470 + log_test_addr ${a} $? 0 "TCP socket bind to VRF address with device bind" 3484 3471 3485 3472 a=${NSA_LO_IP6} 3486 3473 log_start
+2
tools/testing/selftests/net/forwarding/forwarding.config.sample
··· 13 13 NETIFS[p6]=veth5 14 14 NETIFS[p7]=veth6 15 15 NETIFS[p8]=veth7 16 + NETIFS[p9]=veth8 17 + NETIFS[p10]=veth9 16 18 17 19 # Port that does not have a cable connected. 18 20 NETIF_NO_CABLE=eth8
+1 -1
tools/testing/selftests/net/icmp_redirect.sh
··· 311 311 ip -netns h1 ro get ${H1_VRF_ARG} ${H2_N2_IP} | \ 312 312 grep -E -v 'mtu|redirected' | grep -q "cache" 313 313 fi 314 - log_test $? 0 "IPv4: ${desc}" 314 + log_test $? 0 "IPv4: ${desc}" 0 315 315 316 316 # No PMTU info for test "redirect" and "mtu exception plus redirect" 317 317 if [ "$with_redirect" = "yes" ] && [ "$desc" != "redirect exception plus mtu" ]; then
-1
tools/testing/selftests/net/mptcp/config
··· 9 9 CONFIG_NETFILTER_ADVANCED=y 10 10 CONFIG_NETFILTER_NETLINK=m 11 11 CONFIG_NF_TABLES=m 12 - CONFIG_NFT_COUNTER=m 13 12 CONFIG_NFT_COMPAT=m 14 13 CONFIG_NETFILTER_XTABLES=m 15 14 CONFIG_NETFILTER_XT_MATCH_BPF=m
+1 -1
tools/testing/selftests/net/toeplitz.c
··· 498 498 bool have_toeplitz = false; 499 499 int index, c; 500 500 501 - while ((c = getopt_long(argc, argv, "46C:d:i:k:r:stT:u:v", long_options, &index)) != -1) { 501 + while ((c = getopt_long(argc, argv, "46C:d:i:k:r:stT:uv", long_options, &index)) != -1) { 502 502 switch (c) { 503 503 case '4': 504 504 cfg_family = AF_INET;
+4 -2
tools/testing/selftests/net/udpgro_fwd.sh
··· 132 132 local rcv=`ip netns exec $NS_DST $ipt"-save" -c | grep 'dport 8000' | \ 133 133 sed -e 's/\[//' -e 's/:.*//'` 134 134 if [ $rcv != $pkts ]; then 135 - echo " fail - received $rvs packets, expected $pkts" 135 + echo " fail - received $rcv packets, expected $pkts" 136 136 ret=1 137 137 return 138 138 fi ··· 185 185 IPT=iptables 186 186 SUFFIX=24 187 187 VXDEV=vxlan 188 + PING=ping 188 189 189 190 if [ $family = 6 ]; then 190 191 BM_NET=$BM_NET_V6 ··· 193 192 SUFFIX="64 nodad" 194 193 VXDEV=vxlan6 195 194 IPT=ip6tables 195 + PING="ping6" 196 196 fi 197 197 198 198 echo "IPv$family" ··· 239 237 240 238 # load arp cache before running the test to reduce the amount of 241 239 # stray traffic on top of the UDP tunnel 242 - ip netns exec $NS_SRC ping -q -c 1 $OL_NET$DST_NAT >/dev/null 240 + ip netns exec $NS_SRC $PING -q -c 1 $OL_NET$DST_NAT >/dev/null 243 241 run_test "GRO fwd over UDP tunnel" $OL_NET$DST_NAT 1 1 $OL_NET$DST 244 242 cleanup 245 243
+6 -6
tools/testing/selftests/net/udpgso.c
··· 156 156 }, 157 157 { 158 158 /* send max number of min sized segments */ 159 - .tlen = UDP_MAX_SEGMENTS - CONST_HDRLEN_V4, 159 + .tlen = UDP_MAX_SEGMENTS, 160 160 .gso_len = 1, 161 - .r_num_mss = UDP_MAX_SEGMENTS - CONST_HDRLEN_V4, 161 + .r_num_mss = UDP_MAX_SEGMENTS, 162 162 }, 163 163 { 164 164 /* send max number + 1 of min sized segments: fail */ 165 - .tlen = UDP_MAX_SEGMENTS - CONST_HDRLEN_V4 + 1, 165 + .tlen = UDP_MAX_SEGMENTS + 1, 166 166 .gso_len = 1, 167 167 .tfail = true, 168 168 }, ··· 259 259 }, 260 260 { 261 261 /* send max number of min sized segments */ 262 - .tlen = UDP_MAX_SEGMENTS - CONST_HDRLEN_V6, 262 + .tlen = UDP_MAX_SEGMENTS, 263 263 .gso_len = 1, 264 - .r_num_mss = UDP_MAX_SEGMENTS - CONST_HDRLEN_V6, 264 + .r_num_mss = UDP_MAX_SEGMENTS, 265 265 }, 266 266 { 267 267 /* send max number + 1 of min sized segments: fail */ 268 - .tlen = UDP_MAX_SEGMENTS - CONST_HDRLEN_V6 + 1, 268 + .tlen = UDP_MAX_SEGMENTS + 1, 269 269 .gso_len = 1, 270 270 .tfail = true, 271 271 },
+7 -1
tools/testing/selftests/net/udpgso_bench_tx.c
··· 419 419 420 420 static void parse_opts(int argc, char **argv) 421 421 { 422 + const char *bind_addr = NULL; 422 423 int max_len, hdrlen; 423 424 int c; 424 425 ··· 447 446 cfg_cpu = strtol(optarg, NULL, 0); 448 447 break; 449 448 case 'D': 450 - setup_sockaddr(cfg_family, optarg, &cfg_dst_addr); 449 + bind_addr = optarg; 451 450 break; 452 451 case 'l': 453 452 cfg_runtime_ms = strtoul(optarg, NULL, 10) * 1000; ··· 492 491 break; 493 492 } 494 493 } 494 + 495 + if (!bind_addr) 496 + bind_addr = cfg_family == PF_INET6 ? "::" : "0.0.0.0"; 497 + 498 + setup_sockaddr(cfg_family, bind_addr, &cfg_dst_addr); 495 499 496 500 if (optind != argc) 497 501 usage(argv[0]);
+10 -6
tools/testing/selftests/vm/userfaultfd.c
··· 87 87 88 88 static bool map_shared; 89 89 static int shm_fd; 90 - static int huge_fd; 90 + static int huge_fd = -1; /* only used for hugetlb_shared test */ 91 91 static char *huge_fd_off0; 92 92 static unsigned long long *count_verify; 93 93 static int uffd = -1; ··· 223 223 224 224 static void hugetlb_release_pages(char *rel_area) 225 225 { 226 + if (huge_fd == -1) 227 + return; 228 + 226 229 if (fallocate(huge_fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE, 227 230 rel_area == huge_fd_off0 ? 0 : nr_pages * page_size, 228 231 nr_pages * page_size)) ··· 238 235 char **alloc_area_alias; 239 236 240 237 *alloc_area = mmap(NULL, nr_pages * page_size, PROT_READ | PROT_WRITE, 241 - (map_shared ? MAP_SHARED : MAP_PRIVATE) | 242 - MAP_HUGETLB, 243 - huge_fd, *alloc_area == area_src ? 0 : 244 - nr_pages * page_size); 238 + map_shared ? MAP_SHARED : 239 + MAP_PRIVATE | MAP_HUGETLB | 240 + (*alloc_area == area_src ? 0 : MAP_NORESERVE), 241 + huge_fd, 242 + *alloc_area == area_src ? 0 : nr_pages * page_size); 245 243 if (*alloc_area == MAP_FAILED) 246 244 err("mmap of hugetlbfs file failed"); 247 245 248 246 if (map_shared) { 249 247 area_alias = mmap(NULL, nr_pages * page_size, PROT_READ | PROT_WRITE, 250 - MAP_SHARED | MAP_HUGETLB, 248 + MAP_SHARED, 251 249 huge_fd, *alloc_area == area_src ? 0 : 252 250 nr_pages * page_size); 253 251 if (area_alias == MAP_FAILED)