Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

No conflicts.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>

+3141 -1195
+2
.mailmap
··· 126 126 Greg Kroah-Hartman <greg@kroah.com> 127 127 Greg Kurz <groug@kaod.org> <gkurz@linux.vnet.ibm.com> 128 128 Gregory CLEMENT <gregory.clement@bootlin.com> <gregory.clement@free-electrons.com> 129 + Guo Ren <guoren@kernel.org> <guoren@linux.alibaba.com> 130 + Guo Ren <guoren@kernel.org> <ren_guo@c-sky.com> 129 131 Gustavo Padovan <gustavo@las.ic.unicamp.br> 130 132 Gustavo Padovan <padovan@profusion.mobi> 131 133 Hanjun Guo <guohanjun@huawei.com> <hanjun.guo@linaro.org>
+4 -4
Documentation/devicetree/bindings/i2c/apple,i2c.yaml
··· 20 20 21 21 properties: 22 22 compatible: 23 - enum: 24 - - apple,t8103-i2c 25 - - apple,i2c 23 + items: 24 + - const: apple,t8103-i2c 25 + - const: apple,i2c 26 26 27 27 reg: 28 28 maxItems: 1 ··· 51 51 examples: 52 52 - | 53 53 i2c@35010000 { 54 - compatible = "apple,t8103-i2c"; 54 + compatible = "apple,t8103-i2c", "apple,i2c"; 55 55 reg = <0x35010000 0x4000>; 56 56 interrupt-parent = <&aic>; 57 57 interrupts = <0 627 4>;
+1 -1
Documentation/devicetree/bindings/iio/adc/samsung,exynos-adc.yaml
··· 136 136 samsung,syscon-phandle = <&pmu_system_controller>; 137 137 138 138 /* NTC thermistor is a hwmon device */ 139 - ncp15wb473 { 139 + thermistor { 140 140 compatible = "murata,ncp15wb473"; 141 141 pullup-uv = <1800000>; 142 142 pullup-ohm = <47000>;
+1 -1
Documentation/devicetree/bindings/input/gpio-keys.yaml
··· 142 142 down { 143 143 label = "GPIO Key DOWN"; 144 144 linux,code = <108>; 145 - interrupts = <1 IRQ_TYPE_LEVEL_HIGH 7>; 145 + interrupts = <1 IRQ_TYPE_EDGE_FALLING>; 146 146 }; 147 147 }; 148 148
+2 -12
Documentation/devicetree/bindings/media/nxp,imx7-mipi-csi2.yaml
··· 79 79 80 80 properties: 81 81 data-lanes: 82 + description: 83 + Note that 'fsl,imx7-mipi-csi2' only supports up to 2 data lines. 82 84 items: 83 85 minItems: 1 84 86 maxItems: 4 ··· 92 90 93 91 required: 94 92 - data-lanes 95 - 96 - allOf: 97 - - if: 98 - properties: 99 - compatible: 100 - contains: 101 - const: fsl,imx7-mipi-csi2 102 - then: 103 - properties: 104 - data-lanes: 105 - items: 106 - maxItems: 2 107 93 108 94 port@1: 109 95 $ref: /schemas/graph.yaml#/properties/port
+8
Documentation/devicetree/bindings/net/ethernet-phy.yaml
··· 91 91 compensate for the board being designed with the lanes 92 92 swapped. 93 93 94 + enet-phy-lane-no-swap: 95 + $ref: /schemas/types.yaml#/definitions/flag 96 + description: 97 + If set, indicates that PHY will disable swap of the 98 + TX/RX lanes. This property allows the PHY to work correctly after 99 + e.g. wrong bootstrap configuration caused by issues in PCB 100 + layout design. 101 + 94 102 eee-broken-100tx: 95 103 $ref: /schemas/types.yaml#/definitions/flag 96 104 description:
+1 -1
Documentation/devicetree/bindings/phy/xlnx,zynqmp-psgtr.yaml
··· 29 29 - PHY_TYPE_PCIE 30 30 - PHY_TYPE_SATA 31 31 - PHY_TYPE_SGMII 32 - - PHY_TYPE_USB 32 + - PHY_TYPE_USB3 33 33 - description: The PHY instance 34 34 minimum: 0 35 35 maximum: 1 # for DP, SATA or USB
+1 -1
Documentation/devicetree/bindings/power/supply/bq25980.yaml
··· 105 105 reg = <0x65>; 106 106 interrupt-parent = <&gpio1>; 107 107 interrupts = <16 IRQ_TYPE_EDGE_FALLING>; 108 - ti,watchdog-timer = <0>; 108 + ti,watchdog-timeout-ms = <0>; 109 109 ti,sc-ocp-limit-microamp = <2000000>; 110 110 ti,sc-ovp-limit-microvolt = <17800000>; 111 111 monitored-battery = <&bat>;
+3
Documentation/devicetree/bindings/sound/wlf,wm8962.yaml
··· 19 19 clocks: 20 20 maxItems: 1 21 21 22 + interrupts: 23 + maxItems: 1 24 + 22 25 "#sound-dai-cells": 23 26 const: 0 24 27
+16
Documentation/networking/device_drivers/ethernet/intel/ixgbe.rst
··· 440 440 a virtual function (VF), jumbo frames must first be enabled in the physical 441 441 function (PF). The VF MTU setting cannot be larger than the PF MTU. 442 442 443 + NBASE-T Support 444 + --------------- 445 + The ixgbe driver supports NBASE-T on some devices. However, the advertisement 446 + of NBASE-T speeds is suppressed by default, to accommodate broken network 447 + switches which cannot cope with advertised NBASE-T speeds. Use the ethtool 448 + command to enable advertising NBASE-T speeds on devices which support it:: 449 + 450 + ethtool -s eth? advertise 0x1800000001028 451 + 452 + On Linux systems with INTERFACES(5), this can be specified as a pre-up command 453 + in /etc/network/interfaces so that the interface is always brought up with 454 + NBASE-T support, e.g.:: 455 + 456 + iface eth? inet dhcp 457 + pre-up ethtool -s eth? advertise 0x1800000001028 || true 458 + 443 459 Generic Receive Offload, aka GRO 444 460 -------------------------------- 445 461 The driver supports the in-kernel software implementation of GRO. GRO has
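On systems that do not use ifupdown, the same ethtool setting from the documentation above can be made persistent with a oneshot systemd template unit bound to the network device. This is a sketch under stated assumptions: the unit name, the ethtool binary path, and the interface instance (`eth0`) are illustrative, and the advertise mask is the one quoted in the ixgbe documentation (it may differ per device).

```ini
# /etc/systemd/system/ixgbe-nbaset@.service  (hypothetical unit name)
# Runs once when the named network device appears and enables
# NBASE-T speed advertisement, mirroring the pre-up approach above.
[Unit]
Description=Advertise NBASE-T speeds on %i (ixgbe)
BindsTo=sys-subsystem-net-devices-%i.device
After=sys-subsystem-net-devices-%i.device

[Service]
Type=oneshot
# Mask taken from the ixgbe documentation; adjust for your device.
ExecStart=/usr/sbin/ethtool -s %i advertise 0x1800000001028

[Install]
WantedBy=sys-subsystem-net-devices-%i.device
```

Enable it for a given interface with `systemctl enable ixgbe-nbaset@eth0.service`; the advertised modes can then be checked in the "Advertised link modes" section of plain `ethtool eth0` output.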
+16 -8
MAINTAINERS
··· 3066 3066 F: drivers/phy/qualcomm/phy-ath79-usb.c 3067 3067 3068 3068 ATHEROS ATH GENERIC UTILITIES 3069 - M: Kalle Valo <kvalo@codeaurora.org> 3069 + M: Kalle Valo <kvalo@kernel.org> 3070 3070 L: linux-wireless@vger.kernel.org 3071 3071 S: Supported 3072 3072 F: drivers/net/wireless/ath/* ··· 3081 3081 F: drivers/net/wireless/ath/ath5k/ 3082 3082 3083 3083 ATHEROS ATH6KL WIRELESS DRIVER 3084 - M: Kalle Valo <kvalo@codeaurora.org> 3084 + M: Kalle Valo <kvalo@kernel.org> 3085 3085 L: linux-wireless@vger.kernel.org 3086 3086 S: Supported 3087 3087 W: https://wireless.wiki.kernel.org/en/users/Drivers/ath6kl ··· 9329 9329 F: drivers/iio/pressure/dps310.c 9330 9330 9331 9331 INFINIBAND SUBSYSTEM 9332 - M: Doug Ledford <dledford@redhat.com> 9333 9332 M: Jason Gunthorpe <jgg@nvidia.com> 9334 9333 L: linux-rdma@vger.kernel.org 9335 9334 S: Supported ··· 10279 10280 F: scripts/Makefile.kcsan 10280 10281 10281 10282 KDUMP 10282 - M: Dave Young <dyoung@redhat.com> 10283 10283 M: Baoquan He <bhe@redhat.com> 10284 10284 R: Vivek Goyal <vgoyal@redhat.com> 10285 + R: Dave Young <dyoung@redhat.com> 10285 10286 L: kexec@lists.infradead.org 10286 10287 S: Maintained 10287 10288 W: http://lse.sourceforge.net/kdump/ ··· 13255 13256 F: include/uapi/linux/netdevice.h 13256 13257 13257 13258 NETWORKING DRIVERS (WIRELESS) 13258 - M: Kalle Valo <kvalo@codeaurora.org> 13259 + M: Kalle Valo <kvalo@kernel.org> 13259 13260 L: linux-wireless@vger.kernel.org 13260 13261 S: Maintained 13261 13262 Q: http://patchwork.kernel.org/project/linux-wireless/list/ ··· 15711 15712 F: drivers/media/tuners/qt1010* 15712 15713 15713 15714 QUALCOMM ATHEROS ATH10K WIRELESS DRIVER 15714 - M: Kalle Valo <kvalo@codeaurora.org> 15715 + M: Kalle Valo <kvalo@kernel.org> 15715 15716 L: ath10k@lists.infradead.org 15716 15717 S: Supported 15717 15718 W: https://wireless.wiki.kernel.org/en/users/Drivers/ath10k ··· 15719 15720 F: drivers/net/wireless/ath/ath10k/ 15720 15721 15721 15722 QUALCOMM ATHEROS ATH11K WIRELESS DRIVER 15722 - M: Kalle Valo <kvalo@codeaurora.org> 15723 + M: Kalle Valo <kvalo@kernel.org> 15723 15724 L: ath11k@lists.infradead.org 15724 15725 S: Supported 15725 15726 T: git git://git.kernel.org/pub/scm/linux/kernel/git/kvalo/ath.git ··· 15784 15785 S: Maintained 15785 15786 F: Documentation/devicetree/bindings/net/qcom,ethqos.txt 15786 15787 F: drivers/net/ethernet/stmicro/stmmac/dwmac-qcom-ethqos.c 15788 + 15789 + QUALCOMM FASTRPC DRIVER 15790 + M: Srinivas Kandagatla <srinivas.kandagatla@linaro.org> 15791 + M: Amol Maheshwari <amahesh@qti.qualcomm.com> 15792 + L: linux-arm-msm@vger.kernel.org 15793 + S: Maintained 15794 + F: Documentation/devicetree/bindings/misc/qcom,fastrpc.txt 15795 + F: drivers/misc/fastrpc.c 15796 + F: include/uapi/misc/fastrpc.h 15787 15797 15788 15798 QUALCOMM GENERIC INTERFACE I2C DRIVER 15789 15799 M: Akash Asthana <akashast@codeaurora.org> ··· 15900 15892 F: drivers/media/platform/qcom/venus/ 15901 15893 15902 15894 QUALCOMM WCN36XX WIRELESS DRIVER 15903 - M: Kalle Valo <kvalo@codeaurora.org> 15895 + M: Kalle Valo <kvalo@kernel.org> 15904 15896 L: wcn36xx@lists.infradead.org 15905 15897 S: Supported 15906 15898 W: https://wireless.wiki.kernel.org/en/users/Drivers/wcn36xx
+6 -6
Makefile
··· 2 2 VERSION = 5 3 3 PATCHLEVEL = 16 4 4 SUBLEVEL = 0 5 - EXTRAVERSION = -rc4 5 + EXTRAVERSION = -rc5 6 6 NAME = Gobble Gobble 7 7 8 8 # *DOCUMENTATION* ··· 1374 1374 1375 1375 ifneq ($(dtstree),) 1376 1376 1377 - %.dtb: dt_binding_check include/config/kernel.release scripts_dtc 1378 - $(Q)$(MAKE) $(build)=$(dtstree) $(dtstree)/$@ $(dtstree)/$*.dt.yaml 1377 + %.dtb: include/config/kernel.release scripts_dtc 1378 + $(Q)$(MAKE) $(build)=$(dtstree) $(dtstree)/$@ 1379 1379 1380 - %.dtbo: dt_binding_check include/config/kernel.release scripts_dtc 1381 - $(Q)$(MAKE) $(build)=$(dtstree) $(dtstree)/$@ $(dtstree)/$*.dt.yaml 1380 + %.dtbo: include/config/kernel.release scripts_dtc 1381 + $(Q)$(MAKE) $(build)=$(dtstree) $(dtstree)/$@ 1382 1382 1383 1383 PHONY += dtbs dtbs_install dtbs_check 1384 1384 dtbs: include/config/kernel.release scripts_dtc 1385 1385 $(Q)$(MAKE) $(build)=$(dtstree) 1386 1386 1387 - ifneq ($(filter dtbs_check %.dtb %.dtbo, $(MAKECMDGOALS)),) 1387 + ifneq ($(filter dtbs_check, $(MAKECMDGOALS)),) 1388 1388 export CHECK_DTBS=y 1389 1389 dtbs: dt_binding_check 1390 1390 endif
+2
arch/arm/boot/dts/imx6qp-prtwd3.dts
··· 178 178 label = "cpu"; 179 179 ethernet = <&fec>; 180 180 phy-mode = "rgmii-id"; 181 + rx-internal-delay-ps = <2000>; 182 + tx-internal-delay-ps = <2000>; 181 183 182 184 fixed-link { 183 185 speed = <100>;
+1 -1
arch/arm/boot/dts/imx6ull-pinfunc.h
··· 82 82 #define MX6ULL_PAD_CSI_DATA04__ESAI_TX_FS 0x01F4 0x0480 0x0000 0x9 0x0 83 83 #define MX6ULL_PAD_CSI_DATA05__ESAI_TX_CLK 0x01F8 0x0484 0x0000 0x9 0x0 84 84 #define MX6ULL_PAD_CSI_DATA06__ESAI_TX5_RX0 0x01FC 0x0488 0x0000 0x9 0x0 85 - #define MX6ULL_PAD_CSI_DATA07__ESAI_T0 0x0200 0x048C 0x0000 0x9 0x0 85 + #define MX6ULL_PAD_CSI_DATA07__ESAI_TX0 0x0200 0x048C 0x0000 0x9 0x0 86 86 87 87 #endif /* __DTS_IMX6ULL_PINFUNC_H */
+2
arch/arm/boot/dts/ls1021a-tsn.dts
··· 91 91 /* Internal port connected to eth2 */ 92 92 ethernet = <&enet2>; 93 93 phy-mode = "rgmii"; 94 + rx-internal-delay-ps = <0>; 95 + tx-internal-delay-ps = <0>; 94 96 reg = <4>; 95 97 96 98 fixed-link {
+1 -1
arch/arm/boot/dts/socfpga_arria10_socdk_qspi.dts
··· 12 12 flash0: n25q00@0 { 13 13 #address-cells = <1>; 14 14 #size-cells = <1>; 15 - compatible = "n25q00aa"; 15 + compatible = "micron,mt25qu02g", "jedec,spi-nor"; 16 16 reg = <0>; 17 17 spi-max-frequency = <100000000>; 18 18
+1 -1
arch/arm/boot/dts/socfpga_arria5_socdk.dts
··· 119 119 flash: flash@0 { 120 120 #address-cells = <1>; 121 121 #size-cells = <1>; 122 - compatible = "n25q256a"; 122 + compatible = "micron,n25q256a", "jedec,spi-nor"; 123 123 reg = <0>; 124 124 spi-max-frequency = <100000000>; 125 125
+1 -1
arch/arm/boot/dts/socfpga_cyclone5_socdk.dts
··· 124 124 flash0: n25q00@0 { 125 125 #address-cells = <1>; 126 126 #size-cells = <1>; 127 - compatible = "n25q00"; 127 + compatible = "micron,mt25qu02g", "jedec,spi-nor"; 128 128 reg = <0>; /* chip select */ 129 129 spi-max-frequency = <100000000>; 130 130
+1 -1
arch/arm/boot/dts/socfpga_cyclone5_sockit.dts
··· 169 169 flash: flash@0 { 170 170 #address-cells = <1>; 171 171 #size-cells = <1>; 172 - compatible = "n25q00"; 172 + compatible = "micron,mt25qu02g", "jedec,spi-nor"; 173 173 reg = <0>; 174 174 spi-max-frequency = <100000000>; 175 175
+1 -1
arch/arm/boot/dts/socfpga_cyclone5_socrates.dts
··· 80 80 flash: flash@0 { 81 81 #address-cells = <1>; 82 82 #size-cells = <1>; 83 - compatible = "n25q256a"; 83 + compatible = "micron,n25q256a", "jedec,spi-nor"; 84 84 reg = <0>; 85 85 spi-max-frequency = <100000000>; 86 86 m25p,fast-read;
+1 -1
arch/arm/boot/dts/socfpga_cyclone5_sodia.dts
··· 116 116 flash0: n25q512a@0 { 117 117 #address-cells = <1>; 118 118 #size-cells = <1>; 119 - compatible = "n25q512a"; 119 + compatible = "micron,n25q512a", "jedec,spi-nor"; 120 120 reg = <0>; 121 121 spi-max-frequency = <100000000>; 122 122
+2 -2
arch/arm/boot/dts/socfpga_cyclone5_vining_fpga.dts
··· 224 224 n25q128@0 { 225 225 #address-cells = <1>; 226 226 #size-cells = <1>; 227 - compatible = "n25q128"; 227 + compatible = "micron,n25q128", "jedec,spi-nor"; 228 228 reg = <0>; /* chip select */ 229 229 spi-max-frequency = <100000000>; 230 230 m25p,fast-read; ··· 241 241 n25q00@1 { 242 242 #address-cells = <1>; 243 243 #size-cells = <1>; 244 - compatible = "n25q00"; 244 + compatible = "micron,mt25qu02g", "jedec,spi-nor"; 245 245 reg = <1>; /* chip select */ 246 246 spi-max-frequency = <100000000>; 247 247 m25p,fast-read;
+1 -1
arch/arm/mach-rockchip/platsmp.c
··· 189 189 rockchip_boot_fn = __pa_symbol(secondary_startup); 190 190 191 191 /* copy the trampoline to sram, that runs during startup of the core */ 192 - memcpy(sram_base_addr, &rockchip_secondary_trampoline, trampoline_sz); 192 + memcpy_toio(sram_base_addr, &rockchip_secondary_trampoline, trampoline_sz); 193 193 flush_cache_all(); 194 194 outer_clean_range(0, trampoline_sz); 195 195
-1
arch/arm64/Kconfig.platforms
··· 161 161 162 162 config ARCH_MESON 163 163 bool "Amlogic Platforms" 164 - select COMMON_CLK 165 164 help 166 165 This enables support for the arm64 based Amlogic SoCs 167 166 such as the s905, S905X/D, S912, A113X/D or S905X/D2
+15 -15
arch/arm64/boot/dts/amlogic/meson-axg-jethome-jethub-j100.dts
··· 134 134 type = "critical"; 135 135 }; 136 136 }; 137 - }; 138 137 139 - cpu_cooling_maps: cooling-maps { 140 - map0 { 141 - trip = <&cpu_passive>; 142 - cooling-device = <&cpu0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>, 143 - <&cpu1 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>, 144 - <&cpu2 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>, 145 - <&cpu3 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>; 146 - }; 138 + cpu_cooling_maps: cooling-maps { 139 + map0 { 140 + trip = <&cpu_passive>; 141 + cooling-device = <&cpu0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>, 142 + <&cpu1 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>, 143 + <&cpu2 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>, 144 + <&cpu3 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>; 145 + }; 147 146 148 - map1 { 149 - trip = <&cpu_hot>; 150 - cooling-device = <&cpu0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>, 151 - <&cpu1 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>, 152 - <&cpu2 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>, 153 - <&cpu3 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>; 147 + map1 { 148 + trip = <&cpu_hot>; 149 + cooling-device = <&cpu0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>, 150 + <&cpu1 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>, 151 + <&cpu2 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>, 152 + <&cpu3 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>; 153 + }; 154 154 }; 155 155 }; 156 156 };
+1 -1
arch/arm64/boot/dts/apple/t8103-j274.dts
··· 60 60 61 61 &port02 { 62 62 bus-range = <3 3>; 63 - ethernet0: pci@0,0 { 63 + ethernet0: ethernet@0,0 { 64 64 reg = <0x30000 0x0 0x0 0x0 0x0>; 65 65 /* To be filled by the loader */ 66 66 local-mac-address = [00 10 18 00 00 00];
+8 -3
arch/arm64/boot/dts/apple/t8103.dtsi
··· 7 7 * Copyright The Asahi Linux Contributors 8 8 */ 9 9 10 + #include <dt-bindings/gpio/gpio.h> 10 11 #include <dt-bindings/interrupt-controller/apple-aic.h> 11 12 #include <dt-bindings/interrupt-controller/irq.h> 12 13 #include <dt-bindings/pinctrl/apple.h> ··· 144 143 apple,npins = <212>; 145 144 146 145 interrupt-controller; 146 + #interrupt-cells = <2>; 147 147 interrupt-parent = <&aic>; 148 148 interrupts = <AIC_IRQ 190 IRQ_TYPE_LEVEL_HIGH>, 149 149 <AIC_IRQ 191 IRQ_TYPE_LEVEL_HIGH>, ··· 171 169 apple,npins = <42>; 172 170 173 171 interrupt-controller; 172 + #interrupt-cells = <2>; 174 173 interrupt-parent = <&aic>; 175 174 interrupts = <AIC_IRQ 268 IRQ_TYPE_LEVEL_HIGH>, 176 175 <AIC_IRQ 269 IRQ_TYPE_LEVEL_HIGH>, ··· 192 189 apple,npins = <23>; 193 190 194 191 interrupt-controller; 192 + #interrupt-cells = <2>; 195 193 interrupt-parent = <&aic>; 196 194 interrupts = <AIC_IRQ 330 IRQ_TYPE_LEVEL_HIGH>, 197 195 <AIC_IRQ 331 IRQ_TYPE_LEVEL_HIGH>, ··· 213 209 apple,npins = <16>; 214 210 215 211 interrupt-controller; 212 + #interrupt-cells = <2>; 216 213 interrupt-parent = <&aic>; 217 214 interrupts = <AIC_IRQ 391 IRQ_TYPE_LEVEL_HIGH>, 218 215 <AIC_IRQ 392 IRQ_TYPE_LEVEL_HIGH>, ··· 286 281 port00: pci@0,0 { 287 282 device_type = "pci"; 288 283 reg = <0x0 0x0 0x0 0x0 0x0>; 289 - reset-gpios = <&pinctrl_ap 152 0>; 284 + reset-gpios = <&pinctrl_ap 152 GPIO_ACTIVE_LOW>; 290 285 max-link-speed = <2>; 291 286 292 287 #address-cells = <3>; ··· 306 301 port01: pci@1,0 { 307 302 device_type = "pci"; 308 303 reg = <0x800 0x0 0x0 0x0 0x0>; 309 - reset-gpios = <&pinctrl_ap 153 0>; 304 + reset-gpios = <&pinctrl_ap 153 GPIO_ACTIVE_LOW>; 310 305 max-link-speed = <2>; 311 306 312 307 #address-cells = <3>; ··· 326 321 port02: pci@2,0 { 327 322 device_type = "pci"; 328 323 reg = <0x1000 0x0 0x0 0x0 0x0>; 329 - reset-gpios = <&pinctrl_ap 33 0>; 324 + reset-gpios = <&pinctrl_ap 33 GPIO_ACTIVE_LOW>; 330 325 max-link-speed = <1>; 331 326 332 327 #address-cells = <3>;
-2
arch/arm64/boot/dts/freescale/fsl-ls1088a-ten64.dts
··· 38 38 powerdn { 39 39 label = "External Power Down"; 40 40 gpios = <&gpio1 17 GPIO_ACTIVE_LOW>; 41 - interrupts = <&gpio1 17 IRQ_TYPE_EDGE_FALLING>; 42 41 linux,code = <KEY_POWER>; 43 42 }; 44 43 ··· 45 46 admin { 46 47 label = "ADMIN button"; 47 48 gpios = <&gpio3 8 GPIO_ACTIVE_HIGH>; 48 - interrupts = <&gpio3 8 IRQ_TYPE_EDGE_RISING>; 49 49 linux,code = <KEY_WPS_BUTTON>; 50 50 }; 51 51 };
+4
arch/arm64/boot/dts/freescale/fsl-lx2160a-bluebox3.dts
··· 386 386 reg = <2>; 387 387 ethernet = <&dpmac17>; 388 388 phy-mode = "rgmii-id"; 389 + rx-internal-delay-ps = <2000>; 390 + tx-internal-delay-ps = <2000>; 389 391 390 392 fixed-link { 391 393 speed = <1000>; ··· 531 529 reg = <2>; 532 530 ethernet = <&dpmac18>; 533 531 phy-mode = "rgmii-id"; 532 + rx-internal-delay-ps = <2000>; 533 + tx-internal-delay-ps = <2000>; 534 534 535 535 fixed-link { 536 536 speed = <1000>;
-2
arch/arm64/boot/dts/freescale/imx8mq.dtsi
··· 524 524 <&clk IMX8MQ_VIDEO_PLL1>, 525 525 <&clk IMX8MQ_VIDEO_PLL1_OUT>; 526 526 assigned-clock-rates = <0>, <0>, <0>, <594000000>; 527 - interconnects = <&noc IMX8MQ_ICM_LCDIF &noc IMX8MQ_ICS_DRAM>; 528 - interconnect-names = "dram"; 529 527 status = "disabled"; 530 528 531 529 port@0 {
+1 -1
arch/arm64/boot/dts/rockchip/rk3308-roc-cc.dts
··· 97 97 regulator-max-microvolt = <3300000>; 98 98 regulator-always-on; 99 99 regulator-boot-on; 100 - vim-supply = <&vcc_io>; 100 + vin-supply = <&vcc_io>; 101 101 }; 102 102 103 103 vdd_core: vdd-core {
-1
arch/arm64/boot/dts/rockchip/rk3399-khadas-edge.dtsi
··· 705 705 &sdhci { 706 706 bus-width = <8>; 707 707 mmc-hs400-1_8v; 708 - mmc-hs400-enhanced-strobe; 709 708 non-removable; 710 709 status = "okay"; 711 710 };
+1
arch/arm64/boot/dts/rockchip/rk3399-kobol-helios64.dts
··· 276 276 clock-output-names = "xin32k", "rk808-clkout2"; 277 277 pinctrl-names = "default"; 278 278 pinctrl-0 = <&pmic_int_l>; 279 + rockchip,system-power-controller; 279 280 vcc1-supply = <&vcc5v0_sys>; 280 281 vcc2-supply = <&vcc5v0_sys>; 281 282 vcc3-supply = <&vcc5v0_sys>;
+1 -1
arch/arm64/boot/dts/rockchip/rk3399-leez-p710.dts
··· 55 55 regulator-boot-on; 56 56 regulator-min-microvolt = <3300000>; 57 57 regulator-max-microvolt = <3300000>; 58 - vim-supply = <&vcc3v3_sys>; 58 + vin-supply = <&vcc3v3_sys>; 59 59 }; 60 60 61 61 vcc3v3_sys: vcc3v3-sys {
+1 -1
arch/arm64/boot/dts/rockchip/rk3399-rock-pi-4.dtsi
··· 502 502 status = "okay"; 503 503 504 504 bt656-supply = <&vcc_3v0>; 505 - audio-supply = <&vcc_3v0>; 505 + audio-supply = <&vcc1v8_codec>; 506 506 sdmmc-supply = <&vcc_sdio>; 507 507 gpio1830-supply = <&vcc_3v0>; 508 508 };
+1
arch/arm64/kernel/machine_kexec_file.c
··· 149 149 initrd_len, cmdline, 0); 150 150 if (!dtb) { 151 151 pr_err("Preparing for new dtb failed\n"); 152 + ret = -EINVAL; 152 153 goto out_err; 153 154 } 154 155
+2 -2
arch/csky/kernel/traps.c
··· 209 209 210 210 asmlinkage void do_trap_fpe(struct pt_regs *regs) 211 211 { 212 - #ifdef CONFIG_CPU_HAS_FP 212 + #ifdef CONFIG_CPU_HAS_FPU 213 213 return fpu_fpe(regs); 214 214 #else 215 215 do_trap_error(regs, SIGILL, ILL_ILLOPC, regs->pc, ··· 219 219 220 220 asmlinkage void do_trap_priv(struct pt_regs *regs) 221 221 { 222 - #ifdef CONFIG_CPU_HAS_FP 222 + #ifdef CONFIG_CPU_HAS_FPU 223 223 if (user_mode(regs) && fpu_libc_helper(regs)) 224 224 return; 225 225 #endif
+2
arch/s390/configs/debug_defconfig
··· 117 117 CONFIG_UNIX_DIAG=m 118 118 CONFIG_XFRM_USER=m 119 119 CONFIG_NET_KEY=m 120 + CONFIG_NET_SWITCHDEV=y 120 121 CONFIG_SMC=m 121 122 CONFIG_SMC_DIAG=m 122 123 CONFIG_INET=y ··· 512 511 CONFIG_MLX4_EN=m 513 512 CONFIG_MLX5_CORE=m 514 513 CONFIG_MLX5_CORE_EN=y 514 + CONFIG_MLX5_ESWITCH=y 515 515 # CONFIG_NET_VENDOR_MICREL is not set 516 516 # CONFIG_NET_VENDOR_MICROCHIP is not set 517 517 # CONFIG_NET_VENDOR_MICROSEMI is not set
+2
arch/s390/configs/defconfig
··· 109 109 CONFIG_UNIX_DIAG=m 110 110 CONFIG_XFRM_USER=m 111 111 CONFIG_NET_KEY=m 112 + CONFIG_NET_SWITCHDEV=y 112 113 CONFIG_SMC=m 113 114 CONFIG_SMC_DIAG=m 114 115 CONFIG_INET=y ··· 503 502 CONFIG_MLX4_EN=m 504 503 CONFIG_MLX5_CORE=m 505 504 CONFIG_MLX5_CORE_EN=y 505 + CONFIG_MLX5_ESWITCH=y 506 506 # CONFIG_NET_VENDOR_MICREL is not set 507 507 # CONFIG_NET_VENDOR_MICROCHIP is not set 508 508 # CONFIG_NET_VENDOR_MICROSEMI is not set
-2
arch/s390/kernel/ftrace.c
··· 290 290 return; 291 291 292 292 regs = ftrace_get_regs(fregs); 293 - preempt_disable_notrace(); 294 293 p = get_kprobe((kprobe_opcode_t *)ip); 295 294 if (unlikely(!p) || kprobe_disabled(p)) 296 295 goto out; ··· 317 318 } 318 319 __this_cpu_write(current_kprobe, NULL); 319 320 out: 320 - preempt_enable_notrace(); 321 321 ftrace_test_recursion_unlock(bit); 322 322 } 323 323 NOKPROBE_SYMBOL(kprobe_ftrace_handler);
+5 -4
arch/s390/kernel/irq.c
··· 138 138 struct pt_regs *old_regs = set_irq_regs(regs); 139 139 int from_idle; 140 140 141 - irq_enter(); 141 + irq_enter_rcu(); 142 142 143 143 if (user_mode(regs)) { 144 144 update_timer_sys(); ··· 158 158 do_irq_async(regs, IO_INTERRUPT); 159 159 } while (MACHINE_IS_LPAR && irq_pending(regs)); 160 160 161 - irq_exit(); 161 + irq_exit_rcu(); 162 + 162 163 set_irq_regs(old_regs); 163 164 irqentry_exit(regs, state); 164 165 ··· 173 172 struct pt_regs *old_regs = set_irq_regs(regs); 174 173 int from_idle; 175 174 176 - irq_enter(); 175 + irq_enter_rcu(); 177 176 178 177 if (user_mode(regs)) { 179 178 update_timer_sys(); ··· 191 190 192 191 do_irq_async(regs, EXT_INTERRUPT); 193 192 194 - irq_exit(); 193 + irq_exit_rcu(); 195 194 set_irq_regs(old_regs); 196 195 irqentry_exit(regs, state); 197 196
+35 -5
arch/s390/kernel/machine_kexec_file.c
··· 7 7 * Author(s): Philipp Rudo <prudo@linux.vnet.ibm.com> 8 8 */ 9 9 10 + #define pr_fmt(fmt) "kexec: " fmt 11 + 10 12 #include <linux/elf.h> 11 13 #include <linux/errno.h> 12 14 #include <linux/kexec.h> ··· 292 290 const Elf_Shdr *relsec, 293 291 const Elf_Shdr *symtab) 294 292 { 293 + const char *strtab, *name, *shstrtab; 294 + const Elf_Shdr *sechdrs; 295 295 Elf_Rela *relas; 296 296 int i, r_type; 297 + int ret; 298 + 299 + /* String & section header string table */ 300 + sechdrs = (void *)pi->ehdr + pi->ehdr->e_shoff; 301 + strtab = (char *)pi->ehdr + sechdrs[symtab->sh_link].sh_offset; 302 + shstrtab = (char *)pi->ehdr + sechdrs[pi->ehdr->e_shstrndx].sh_offset; 297 303 298 304 relas = (void *)pi->ehdr + relsec->sh_offset; 299 305 ··· 314 304 sym = (void *)pi->ehdr + symtab->sh_offset; 315 305 sym += ELF64_R_SYM(relas[i].r_info); 316 306 317 - if (sym->st_shndx == SHN_UNDEF) 318 - return -ENOEXEC; 307 + if (sym->st_name) 308 + name = strtab + sym->st_name; 309 + else 310 + name = shstrtab + sechdrs[sym->st_shndx].sh_name; 319 311 320 - if (sym->st_shndx == SHN_COMMON) 312 + if (sym->st_shndx == SHN_UNDEF) { 313 + pr_err("Undefined symbol: %s\n", name); 321 314 return -ENOEXEC; 315 + } 316 + 317 + if (sym->st_shndx == SHN_COMMON) { 318 + pr_err("symbol '%s' in common section\n", name); 319 + return -ENOEXEC; 320 + } 322 321 323 322 if (sym->st_shndx >= pi->ehdr->e_shnum && 324 - sym->st_shndx != SHN_ABS) 323 + sym->st_shndx != SHN_ABS) { 324 + pr_err("Invalid section %d for symbol %s\n", 325 + sym->st_shndx, name); 325 326 return -ENOEXEC; 327 + } 326 328 327 329 loc = pi->purgatory_buf; 328 330 loc += section->sh_offset; ··· 348 326 addr = section->sh_addr + relas[i].r_offset; 349 327 350 328 r_type = ELF64_R_TYPE(relas[i].r_info); 351 - arch_kexec_do_relocs(r_type, loc, val, addr); 329 + 330 + if (r_type == R_390_PLT32DBL) 331 + r_type = R_390_PC32DBL; 332 + 333 + ret = arch_kexec_do_relocs(r_type, loc, val, addr); 334 + if (ret) { 335 + pr_err("Unknown rela relocation: %d\n", r_type); 336 + return -ENOEXEC; 337 + } 352 338 } 353 339 return 0; 354 340 }
+1 -1
arch/x86/include/asm/kvm_host.h
··· 97 97 KVM_ARCH_REQ_FLAGS(25, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP) 98 98 #define KVM_REQ_TLB_FLUSH_CURRENT KVM_ARCH_REQ(26) 99 99 #define KVM_REQ_TLB_FLUSH_GUEST \ 100 - KVM_ARCH_REQ_FLAGS(27, KVM_REQUEST_NO_WAKEUP) 100 + KVM_ARCH_REQ_FLAGS(27, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP) 101 101 #define KVM_REQ_APF_READY KVM_ARCH_REQ(28) 102 102 #define KVM_REQ_MSR_FILTER_CHANGED KVM_ARCH_REQ(29) 103 103 #define KVM_REQ_UPDATE_CPU_DIRTY_LOGGING \
+14
arch/x86/kernel/smpboot.c
··· 579 579 { NULL, }, 580 580 }; 581 581 582 + static struct sched_domain_topology_level x86_hybrid_topology[] = { 583 + #ifdef CONFIG_SCHED_SMT 584 + { cpu_smt_mask, x86_smt_flags, SD_INIT_NAME(SMT) }, 585 + #endif 586 + #ifdef CONFIG_SCHED_MC 587 + { cpu_coregroup_mask, x86_core_flags, SD_INIT_NAME(MC) }, 588 + #endif 589 + { cpu_cpu_mask, SD_INIT_NAME(DIE) }, 590 + { NULL, }, 591 + }; 592 + 582 593 static struct sched_domain_topology_level x86_topology[] = { 583 594 #ifdef CONFIG_SCHED_SMT 584 595 { cpu_smt_mask, x86_smt_flags, SD_INIT_NAME(SMT) }, ··· 1480 1469 1481 1470 calculate_max_logical_packages(); 1482 1471 1472 + /* XXX for now assume numa-in-package and hybrid don't overlap */ 1483 1473 if (x86_has_numa_in_package) 1484 1474 set_sched_topology(x86_numa_in_package_topology); 1475 + if (cpu_feature_enabled(X86_FEATURE_HYBRID_CPU)) 1476 + set_sched_topology(x86_hybrid_topology); 1485 1477 1486 1478 nmi_selftest(); 1487 1479 impress_friends();
+5 -2
arch/x86/kvm/hyperv.c
··· 1922 1922 1923 1923 all_cpus = send_ipi_ex.vp_set.format == HV_GENERIC_SET_ALL; 1924 1924 1925 + if (all_cpus) 1926 + goto check_and_send_ipi; 1927 + 1925 1928 if (!sparse_banks_len) 1926 1929 goto ret_success; 1927 1930 1928 - if (!all_cpus && 1929 - kvm_read_guest(kvm, 1931 + if (kvm_read_guest(kvm, 1930 1932 hc->ingpa + offsetof(struct hv_send_ipi_ex, 1931 1933 vp_set.bank_contents), 1932 1934 sparse_banks, ··· 1936 1934 return HV_STATUS_INVALID_HYPERCALL_INPUT; 1937 1935 } 1938 1936 1937 + check_and_send_ipi: 1939 1938 if ((vector < HV_IPI_LOW_VECTOR) || (vector > HV_IPI_HIGH_VECTOR)) 1940 1939 return HV_STATUS_INVALID_HYPERCALL_INPUT; 1941 1940
+13 -9
arch/x86/kvm/vmx/vmx.c
··· 2646 2646 if (!loaded_vmcs->msr_bitmap) 2647 2647 goto out_vmcs; 2648 2648 memset(loaded_vmcs->msr_bitmap, 0xff, PAGE_SIZE); 2649 - 2650 - if (IS_ENABLED(CONFIG_HYPERV) && 2651 - static_branch_unlikely(&enable_evmcs) && 2652 - (ms_hyperv.nested_features & HV_X64_NESTED_MSR_BITMAP)) { 2653 - struct hv_enlightened_vmcs *evmcs = 2654 - (struct hv_enlightened_vmcs *)loaded_vmcs->vmcs; 2655 - 2656 - evmcs->hv_enlightenments_control.msr_bitmap = 1; 2657 - } 2658 2649 } 2659 2650 2660 2651 memset(&loaded_vmcs->host_state, 0, sizeof(struct vmcs_host_state)); ··· 6832 6841 err = alloc_loaded_vmcs(&vmx->vmcs01); 6833 6842 if (err < 0) 6834 6843 goto free_pml; 6844 + 6845 + /* 6846 + * Use Hyper-V 'Enlightened MSR Bitmap' feature when KVM runs as a 6847 + * nested (L1) hypervisor and Hyper-V in L0 supports it. Enable the 6848 + * feature only for vmcs01, KVM currently isn't equipped to realize any 6849 + * performance benefits from enabling it for vmcs02. 6850 + */ 6851 + if (IS_ENABLED(CONFIG_HYPERV) && static_branch_unlikely(&enable_evmcs) && 6852 + (ms_hyperv.nested_features & HV_X64_NESTED_MSR_BITMAP)) { 6853 + struct hv_enlightened_vmcs *evmcs = (void *)vmx->vmcs01.vmcs; 6854 + 6855 + evmcs->hv_enlightenments_control.msr_bitmap = 1; 6856 + } 6835 6857 6836 6858 /* The MSR bitmap starts with all ones */ 6837 6859 bitmap_fill(vmx->shadow_msr_intercept.read, MAX_POSSIBLE_PASSTHROUGH_MSRS);
+9 -3
arch/x86/kvm/x86.c
··· 890 890 !load_pdptrs(vcpu, vcpu->arch.walk_mmu, kvm_read_cr3(vcpu))) 891 891 return 1; 892 892 893 - if (!(cr0 & X86_CR0_PG) && kvm_read_cr4_bits(vcpu, X86_CR4_PCIDE)) 893 + if (!(cr0 & X86_CR0_PG) && 894 + (is_64_bit_mode(vcpu) || kvm_read_cr4_bits(vcpu, X86_CR4_PCIDE))) 894 895 return 1; 895 896 896 897 static_call(kvm_x86_set_cr0)(vcpu, cr0); ··· 7122 7121 unsigned short port, void *val, unsigned int count) 7123 7122 { 7124 7123 if (vcpu->arch.pio.count) { 7125 - /* Complete previous iteration. */ 7124 + /* 7125 + * Complete a previous iteration that required userspace I/O. 7126 + * Note, @count isn't guaranteed to match pio.count as userspace 7127 + * can modify ECX before rerunning the vCPU. Ignore any such 7128 + * shenanigans as KVM doesn't support modifying the rep count, 7129 + * and the emulator ensures @count doesn't overflow the buffer. 7130 + */ 7126 7131 } else { 7127 7132 int r = __emulator_pio_in(vcpu, size, port, count); 7128 7133 if (!r) ··· 7137 7130 /* Results already available, fall through. */ 7138 7131 } 7139 7132 7140 - WARN_ON(count != vcpu->arch.pio.count); 7141 7133 complete_emulator_pio_in(vcpu, val); 7142 7134 return 1; 7143 7135 }
+43 -8
arch/x86/net/bpf_jit_comp.c
··· 1252 1252 case BPF_LDX | BPF_MEM | BPF_DW: 1253 1253 case BPF_LDX | BPF_PROBE_MEM | BPF_DW: 1254 1254 if (BPF_MODE(insn->code) == BPF_PROBE_MEM) { 1255 - /* test src_reg, src_reg */ 1256 - maybe_emit_mod(&prog, src_reg, src_reg, true); /* always 1 byte */ 1257 - EMIT2(0x85, add_2reg(0xC0, src_reg, src_reg)); 1258 - /* jne start_of_ldx */ 1259 - EMIT2(X86_JNE, 0); 1255 + /* Though the verifier prevents negative insn->off in BPF_PROBE_MEM 1256 + * add abs(insn->off) to the limit to make sure that negative 1257 + * offset won't be an issue. 1258 + * insn->off is s16, so it won't affect valid pointers. 1259 + */ 1260 + u64 limit = TASK_SIZE_MAX + PAGE_SIZE + abs(insn->off); 1261 + u8 *end_of_jmp1, *end_of_jmp2; 1262 + 1263 + /* Conservatively check that src_reg + insn->off is a kernel address: 1264 + * 1. src_reg + insn->off >= limit 1265 + * 2. src_reg + insn->off doesn't become small positive. 1266 + * Cannot do src_reg + insn->off >= limit in one branch, 1267 + * since it needs two spare registers, but JIT has only one. 1268 + */ 1269 + 1270 + /* movabsq r11, limit */ 1271 + EMIT2(add_1mod(0x48, AUX_REG), add_1reg(0xB8, AUX_REG)); 1272 + EMIT((u32)limit, 4); 1273 + EMIT(limit >> 32, 4); 1274 + /* cmp src_reg, r11 */ 1275 + maybe_emit_mod(&prog, src_reg, AUX_REG, true); 1276 + EMIT2(0x39, add_2reg(0xC0, src_reg, AUX_REG)); 1277 + /* if unsigned '<' goto end_of_jmp2 */ 1278 + EMIT2(X86_JB, 0); 1279 + end_of_jmp1 = prog; 1280 + 1281 + /* mov r11, src_reg */ 1282 + emit_mov_reg(&prog, true, AUX_REG, src_reg); 1283 + /* add r11, insn->off */ 1284 + maybe_emit_1mod(&prog, AUX_REG, true); 1285 + EMIT2_off32(0x81, add_1reg(0xC0, AUX_REG), insn->off); 1286 + /* jmp if not carry to start_of_ldx 1287 + * Otherwise ERR_PTR(-EINVAL) + 128 will be the user addr 1288 + * that has to be rejected. 1289 + */ 1290 + EMIT2(0x73 /* JNC */, 0); 1291 + end_of_jmp2 = prog; 1292 + 1260 1293 /* xor dst_reg, dst_reg */ 1261 1294 emit_mov_imm32(&prog, false, dst_reg, 0); 1262 1295 /* jmp byte_after_ldx */ 1263 1296 EMIT2(0xEB, 0); 1264 1297 1265 - /* populate jmp_offset for JNE above */ 1266 - temp[4] = prog - temp - 5 /* sizeof(test + jne) */; 1298 + /* populate jmp_offset for JB above to jump to xor dst_reg */ 1299 + end_of_jmp1[-1] = end_of_jmp2 - end_of_jmp1; 1300 + /* populate jmp_offset for JNC above to jump to start_of_ldx */ 1267 1301 start_of_ldx = prog; 1302 + end_of_jmp2[-1] = start_of_ldx - end_of_jmp2; 1268 1303 } 1269 1304 emit_ldx(&prog, BPF_SIZE(insn->code), dst_reg, src_reg, insn->off); 1270 1305 if (BPF_MODE(insn->code) == BPF_PROBE_MEM) { ··· 1340 1305 * End result: x86 insn "mov rbx, qword ptr [rax+0x14]" 1341 1306 * of 4 bytes will be ignored and rbx will be zero inited. 1342 1307 */ 1343 - ex->fixup = (prog - start_of_ldx) | (reg2pt_regs[dst_reg] << 8); 1344 1309 } 1345 1310 break; 1346 1311
+1 -2
block/fops.c
··· 341 341 } else { 342 342 ret = bio_iov_iter_get_pages(bio, iter); 343 343 if (unlikely(ret)) { 344 - bio->bi_status = BLK_STS_IOERR; 345 - bio_endio(bio); 344 + bio_put(bio); 346 345 return ret; 347 346 } 348 347 }
+3
block/ioprio.c
··· 220 220 pgrp = task_pgrp(current); 221 221 else 222 222 pgrp = find_vpid(who); 223 + read_lock(&tasklist_lock); 223 224 do_each_pid_thread(pgrp, PIDTYPE_PGID, p) { 224 225 tmpio = get_task_ioprio(p); 225 226 if (tmpio < 0) ··· 230 229 else 231 230 ret = ioprio_best(ret, tmpio); 232 231 } while_each_pid_thread(pgrp, PIDTYPE_PGID, p); 232 + read_unlock(&tasklist_lock); 233 + 233 234 break; 234 235 case IOPRIO_WHO_USER: 235 236 uid = make_kuid(current_user_ns(), who);
+1 -2
drivers/Makefile
··· 41 41 # SOC specific infrastructure drivers. 42 42 obj-y += soc/ 43 43 44 - obj-$(CONFIG_VIRTIO) += virtio/ 45 - obj-$(CONFIG_VIRTIO_PCI_LIB) += virtio/ 44 + obj-y += virtio/ 46 45 obj-$(CONFIG_VDPA) += vdpa/ 47 46 obj-$(CONFIG_XEN) += xen/ 48 47
+9 -12
drivers/android/binder.c
··· 4422 4422 __release(&t->lock); 4423 4423 4424 4424 /* 4425 - * If this thread used poll, make sure we remove the waitqueue 4426 - * from any epoll data structures holding it with POLLFREE. 4427 - * waitqueue_active() is safe to use here because we're holding 4428 - * the inner lock. 4425 + * If this thread used poll, make sure we remove the waitqueue from any 4426 + * poll data structures holding it. 4429 4427 */ 4430 - if ((thread->looper & BINDER_LOOPER_STATE_POLL) && 4431 - waitqueue_active(&thread->wait)) { 4432 - wake_up_poll(&thread->wait, EPOLLHUP | POLLFREE); 4433 - } 4428 + if (thread->looper & BINDER_LOOPER_STATE_POLL) 4429 + wake_up_pollfree(&thread->wait); 4434 4430 4435 4431 binder_inner_proc_unlock(thread->proc); 4436 4432 4437 4433 /* 4438 - * This is needed to avoid races between wake_up_poll() above and 4439 - * and ep_remove_waitqueue() called for other reasons (eg the epoll file 4440 - * descriptor being closed); ep_remove_waitqueue() holds an RCU read 4441 - * lock, so we can be sure it's done after calling synchronize_rcu(). 4434 + * This is needed to avoid races between wake_up_pollfree() above and 4435 + * someone else removing the last entry from the queue for other reasons 4436 + * (e.g. ep_remove_wait_queue() being called due to an epoll file 4437 + * descriptor being closed). Such other users hold an RCU read lock, so 4438 + * we can be sure they're done after we call synchronize_rcu(). 4442 4439 */ 4443 4440 if (thread->looper & BINDER_LOOPER_STATE_POLL) 4444 4441 synchronize_rcu();
+2 -1
drivers/ata/ahci_ceva.c
··· 94 94 static unsigned int ceva_ahci_read_id(struct ata_device *dev, 95 95 struct ata_taskfile *tf, u16 *id) 96 96 { 97 + __le16 *__id = (__le16 *)id; 97 98 u32 err_mask; 98 99 99 100 err_mask = ata_do_dev_read_id(dev, tf, id); ··· 104 103 * Since CEVA controller does not support device sleep feature, we 105 104 * need to clear DEVSLP (bit 8) in word78 of the IDENTIFY DEVICE data. 106 105 */ 107 - id[ATA_ID_FEATURE_SUPP] &= cpu_to_le16(~(1 << 8)); 106 + __id[ATA_ID_FEATURE_SUPP] &= cpu_to_le16(~(1 << 8)); 108 107 109 108 return 0; 110 109 }
+2
drivers/ata/libata-core.c
··· 3920 3920 { "VRFDFC22048UCHC-TE*", NULL, ATA_HORKAGE_NODMA }, 3921 3921 /* Odd clown on sil3726/4726 PMPs */ 3922 3922 { "Config Disk", NULL, ATA_HORKAGE_DISABLE }, 3923 + /* Similar story with ASMedia 1092 */ 3924 + { "ASMT109x- Config", NULL, ATA_HORKAGE_DISABLE }, 3923 3925 3924 3926 /* Weird ATAPI devices */ 3925 3927 { "TORiSAN DVD-ROM DRD-N216", NULL, ATA_HORKAGE_MAX_SEC_128 },
+18 -3
drivers/bus/mhi/core/pm.c
··· 881 881 } 882 882 EXPORT_SYMBOL_GPL(mhi_pm_suspend); 883 883 884 - int mhi_pm_resume(struct mhi_controller *mhi_cntrl) 884 + static int __mhi_pm_resume(struct mhi_controller *mhi_cntrl, bool force) 885 885 { 886 886 struct mhi_chan *itr, *tmp; 887 887 struct device *dev = &mhi_cntrl->mhi_dev->dev; ··· 898 898 if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) 899 899 return -EIO; 900 900 901 - if (mhi_get_mhi_state(mhi_cntrl) != MHI_STATE_M3) 902 - return -EINVAL; 901 + if (mhi_get_mhi_state(mhi_cntrl) != MHI_STATE_M3) { 902 + dev_warn(dev, "Resuming from non M3 state (%s)\n", 903 + TO_MHI_STATE_STR(mhi_get_mhi_state(mhi_cntrl))); 904 + if (!force) 905 + return -EINVAL; 906 + } 903 907 904 908 /* Notify clients about exiting LPM */ 905 909 list_for_each_entry_safe(itr, tmp, &mhi_cntrl->lpm_chans, node) { ··· 944 940 945 941 return 0; 946 942 } 943 + 944 + int mhi_pm_resume(struct mhi_controller *mhi_cntrl) 945 + { 946 + return __mhi_pm_resume(mhi_cntrl, false); 947 + } 947 948 EXPORT_SYMBOL_GPL(mhi_pm_resume); 949 + 950 + int mhi_pm_resume_force(struct mhi_controller *mhi_cntrl) 951 + { 952 + return __mhi_pm_resume(mhi_cntrl, true); 953 + } 954 + EXPORT_SYMBOL_GPL(mhi_pm_resume_force); 948 955 949 956 int __mhi_device_get_sync(struct mhi_controller *mhi_cntrl) 950 957 {
+1 -1
drivers/bus/mhi/pci_generic.c
··· 20 20 21 21 #define MHI_PCI_DEFAULT_BAR_NUM 0 22 22 23 - #define MHI_POST_RESET_DELAY_MS 500 23 + #define MHI_POST_RESET_DELAY_MS 2000 24 24 25 25 #define HEALTH_CHECK_PERIOD (HZ * 2) 26 26
+12 -3
drivers/clk/clk.c
··· 3418 3418 3419 3419 clk_prepare_lock(); 3420 3420 3421 + /* 3422 + * Set hw->core after grabbing the prepare_lock to synchronize with 3423 + * callers of clk_core_fill_parent_index() where we treat hw->core 3424 + * being NULL as the clk not being registered yet. This is crucial so 3425 + * that clks aren't parented until their parent is fully registered. 3426 + */ 3427 + core->hw->core = core; 3428 + 3421 3429 ret = clk_pm_runtime_get(core); 3422 3430 if (ret) 3423 3431 goto unlock; ··· 3590 3582 out: 3591 3583 clk_pm_runtime_put(core); 3592 3584 unlock: 3593 - if (ret) 3585 + if (ret) { 3594 3586 hlist_del_init(&core->child_node); 3587 + core->hw->core = NULL; 3588 + } 3595 3589 3596 3590 clk_prepare_unlock(); 3597 3591 ··· 3857 3847 core->num_parents = init->num_parents; 3858 3848 core->min_rate = 0; 3859 3849 core->max_rate = ULONG_MAX; 3860 - hw->core = core; 3861 3850 3862 3851 ret = clk_core_populate_parent_map(core, init); 3863 3852 if (ret) ··· 3874 3865 goto fail_create_clk; 3875 3866 } 3876 3867 3877 - clk_core_link_consumer(hw->core, hw->clk); 3868 + clk_core_link_consumer(core, hw->clk); 3878 3869 3879 3870 ret = __clk_core_init(core); 3880 3871 if (!ret)
+1 -1
drivers/clk/imx/clk-imx8qxp-lpcg.c
··· 370 370 .probe = imx8qxp_lpcg_clk_probe, 371 371 }; 372 372 373 - builtin_platform_driver(imx8qxp_lpcg_clk_driver); 373 + module_platform_driver(imx8qxp_lpcg_clk_driver); 374 374 375 375 MODULE_AUTHOR("Aisheng Dong <aisheng.dong@nxp.com>"); 376 376 MODULE_DESCRIPTION("NXP i.MX8QXP LPCG clock driver");
+1 -1
drivers/clk/imx/clk-imx8qxp.c
··· 308 308 }, 309 309 .probe = imx8qxp_clk_probe, 310 310 }; 311 - builtin_platform_driver(imx8qxp_clk_driver); 311 + module_platform_driver(imx8qxp_clk_driver); 312 312 313 313 MODULE_AUTHOR("Aisheng Dong <aisheng.dong@nxp.com>"); 314 314 MODULE_DESCRIPTION("NXP i.MX8QXP clock driver");
+9
drivers/clk/qcom/clk-alpha-pll.c
··· 1429 1429 void clk_trion_pll_configure(struct clk_alpha_pll *pll, struct regmap *regmap, 1430 1430 const struct alpha_pll_config *config) 1431 1431 { 1432 + /* 1433 + * If the bootloader left the PLL enabled it's likely that there are 1434 + * RCGs that will lock up if we disable the PLL below. 1435 + */ 1436 + if (trion_pll_is_enabled(pll, regmap)) { 1437 + pr_debug("Trion PLL is already enabled, skipping configuration\n"); 1438 + return; 1439 + } 1440 + 1432 1441 clk_alpha_pll_write_config(regmap, PLL_L_VAL(pll), config->l); 1433 1442 regmap_write(regmap, PLL_CAL_L_VAL(pll), TRION_PLL_CAL_VAL); 1434 1443 clk_alpha_pll_write_config(regmap, PLL_ALPHA_VAL(pll), config->alpha);
+1 -1
drivers/clk/qcom/clk-regmap-mux.c
··· 28 28 val &= mask; 29 29 30 30 if (mux->parent_map) 31 - return qcom_find_src_index(hw, mux->parent_map, val); 31 + return qcom_find_cfg_index(hw, mux->parent_map, val); 32 32 33 33 return val; 34 34 }
+12
drivers/clk/qcom/common.c
··· 69 69 } 70 70 EXPORT_SYMBOL_GPL(qcom_find_src_index); 71 71 72 + int qcom_find_cfg_index(struct clk_hw *hw, const struct parent_map *map, u8 cfg) 73 + { 74 + int i, num_parents = clk_hw_get_num_parents(hw); 75 + 76 + for (i = 0; i < num_parents; i++) 77 + if (cfg == map[i].cfg) 78 + return i; 79 + 80 + return -ENOENT; 81 + } 82 + EXPORT_SYMBOL_GPL(qcom_find_cfg_index); 83 + 72 84 struct regmap * 73 85 qcom_cc_map(struct platform_device *pdev, const struct qcom_cc_desc *desc) 74 86 {
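The new helper pairs with the clk-regmap-mux fix above: the mux register field holds a `cfg` value, so looking it up by the `src` field (as `qcom_find_src_index()` does) can return the wrong parent index or no match at all. A minimal standalone sketch, using a hypothetical miniature of the `parent_map` table:

```c
/* Hypothetical miniature of qcom's parent_map: each entry pairs a
 * clock-framework parent key (src) with the value programmed into
 * the mux register field (cfg). */
struct parent_map_model {
	unsigned char src;
	unsigned char cfg;
};

/* Look up the array index whose entry has the given src key. */
int find_src_index(const struct parent_map_model *map, int n, unsigned char src)
{
	for (int i = 0; i < n; i++)
		if (map[i].src == src)
			return i;
	return -1;	/* stands in for -ENOENT */
}

/* Look up the array index whose entry has the given register cfg value. */
int find_cfg_index(const struct parent_map_model *map, int n, unsigned char cfg)
{
	for (int i = 0; i < n; i++)
		if (map[i].cfg == cfg)
			return i;
	return -1;
}
```

With a map of `{{0,0},{1,4},{5,2}}` and a register readback of 4, `find_cfg_index()` returns parent 1, while `find_src_index()` on the same value finds no entry; that mismatch is what `mux_get_parent()` hit before the one-line fix.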
+2
drivers/clk/qcom/common.h
··· 49 49 qcom_pll_set_fsm_mode(struct regmap *m, u32 reg, u8 bias_count, u8 lock_count); 50 50 extern int qcom_find_src_index(struct clk_hw *hw, const struct parent_map *map, 51 51 u8 src); 52 + extern int qcom_find_cfg_index(struct clk_hw *hw, const struct parent_map *map, 53 + u8 cfg); 52 54 53 55 extern int qcom_cc_register_board_clk(struct device *dev, const char *path, 54 56 const char *name, unsigned long rate);
+2 -2
drivers/clk/qcom/gcc-sm6125.c
··· 1121 1121 .name = "gcc_sdcc1_apps_clk_src", 1122 1122 .parent_data = gcc_parent_data_1, 1123 1123 .num_parents = ARRAY_SIZE(gcc_parent_data_1), 1124 - .ops = &clk_rcg2_ops, 1124 + .ops = &clk_rcg2_floor_ops, 1125 1125 }, 1126 1126 }; 1127 1127 ··· 1143 1143 .name = "gcc_sdcc1_ice_core_clk_src", 1144 1144 .parent_data = gcc_parent_data_0, 1145 1145 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 1146 - .ops = &clk_rcg2_floor_ops, 1146 + .ops = &clk_rcg2_ops, 1147 1147 }, 1148 1148 }; 1149 1149
+1 -1
drivers/clk/versatile/clk-icst.c
··· 543 543 544 544 regclk = icst_clk_setup(NULL, &icst_desc, name, parent_name, map, ctype); 545 545 if (IS_ERR(regclk)) { 546 - kfree(name); 547 546 pr_err("error setting up syscon ICST clock %s\n", name); 547 + kfree(name); 548 548 return; 549 549 } 550 550 of_clk_add_provider(np, of_clk_src_simple_get, regclk);
+7 -2
drivers/clocksource/arm_arch_timer.c
··· 394 394 395 395 static atomic_t timer_unstable_counter_workaround_in_use = ATOMIC_INIT(0); 396 396 397 - static void erratum_set_next_event_generic(const int access, unsigned long evt, 398 - struct clock_event_device *clk) 397 + /* 398 + * Force the inlining of this function so that the register accesses 399 + * can be themselves correctly inlined. 400 + */ 401 + static __always_inline 402 + void erratum_set_next_event_generic(const int access, unsigned long evt, 403 + struct clock_event_device *clk) 399 404 { 400 405 unsigned long ctrl; 401 406 u64 cval;
+1 -1
drivers/clocksource/dw_apb_timer_of.c
··· 47 47 pr_warn("pclk for %pOFn is present, but could not be activated\n", 48 48 np); 49 49 50 - if (!of_property_read_u32(np, "clock-freq", rate) && 50 + if (!of_property_read_u32(np, "clock-freq", rate) || 51 51 !of_property_read_u32(np, "clock-frequency", rate)) 52 52 return 0; 53 53
+7 -3
drivers/firmware/scpi_pm_domain.c
··· 16 16 struct generic_pm_domain genpd; 17 17 struct scpi_ops *ops; 18 18 u32 domain; 19 - char name[30]; 20 19 }; 21 20 22 21 /* ··· 109 110 110 111 scpi_pd->domain = i; 111 112 scpi_pd->ops = scpi_ops; 112 - sprintf(scpi_pd->name, "%pOFn.%d", np, i); 113 - scpi_pd->genpd.name = scpi_pd->name; 113 + scpi_pd->genpd.name = devm_kasprintf(dev, GFP_KERNEL, 114 + "%pOFn.%d", np, i); 115 + if (!scpi_pd->genpd.name) { 116 + dev_err(dev, "Failed to allocate genpd name:%pOFn.%d\n", 117 + np, i); 118 + continue; 119 + } 114 120 scpi_pd->genpd.power_off = scpi_pd_power_off; 115 121 scpi_pd->genpd.power_on = scpi_pd_power_on; 116 122
+3 -2
drivers/firmware/tegra/bpmp-debugfs.c
··· 77 77 const char *root_path, *filename = NULL; 78 78 char *root_path_buf; 79 79 size_t root_len; 80 + size_t root_path_buf_len = 512; 80 81 81 - root_path_buf = kzalloc(512, GFP_KERNEL); 82 + root_path_buf = kzalloc(root_path_buf_len, GFP_KERNEL); 82 83 if (!root_path_buf) 83 84 goto out; 84 85 85 86 root_path = dentry_path(bpmp->debugfs_mirror, root_path_buf, 86 - sizeof(root_path_buf)); 87 + root_path_buf_len); 87 88 if (IS_ERR(root_path)) 88 89 goto out; 89 90
+6 -1
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
··· 2576 2576 */ 2577 2577 link_enc_cfg_init(dm->dc, dc_state); 2578 2578 2579 - amdgpu_dm_outbox_init(adev); 2579 + if (dc_enable_dmub_notifications(adev->dm.dc)) 2580 + amdgpu_dm_outbox_init(adev); 2580 2581 2581 2582 r = dm_dmub_hw_init(adev); 2582 2583 if (r) ··· 2625 2624 dm_state->context = dc_create_state(dm->dc); 2626 2625 /* TODO: Remove dc_state->dccg, use dc->dccg directly. */ 2627 2626 dc_resource_state_construct(dm->dc, dm_state->context); 2627 + 2628 + /* Re-enable outbox interrupts for DPIA. */ 2629 + if (dc_enable_dmub_notifications(adev->dm.dc)) 2630 + amdgpu_dm_outbox_init(adev); 2628 2631 2629 2632 /* Before powering on DC we need to re-initialize DMUB. */ 2630 2633 r = dm_dmub_hw_init(adev);
+2
drivers/gpu/drm/amd/display/dc/dc_link.h
··· 226 226 *edp_num = 0; 227 227 for (i = 0; i < dc->link_count; i++) { 228 228 // report any eDP links, even unconnected DDI's 229 + if (!dc->links[i]) 230 + continue; 229 231 if (dc->links[i]->connector_signal == SIGNAL_TYPE_EDP) { 230 232 edp_links[*edp_num] = dc->links[i]; 231 233 if (++(*edp_num) == MAX_NUM_EDP)
+10 -1
drivers/gpu/drm/drm_syncobj.c
··· 404 404 405 405 if (*fence) { 406 406 ret = dma_fence_chain_find_seqno(fence, point); 407 - if (!ret) 407 + if (!ret) { 408 + /* If the requested seqno is already signaled 409 + * drm_syncobj_find_fence may return a NULL 410 + * fence. To make sure the recipient gets 411 + * signalled, use a new fence instead. 412 + */ 413 + if (!*fence) 414 + *fence = dma_fence_get_stub(); 415 + 408 416 goto out; 417 + } 409 418 dma_fence_put(*fence); 410 419 } else { 411 420 ret = -EINVAL;
+1
drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
··· 3277 3277 out_fence = eb_requests_create(&eb, in_fence, out_fence_fd); 3278 3278 if (IS_ERR(out_fence)) { 3279 3279 err = PTR_ERR(out_fence); 3280 + out_fence = NULL; 3280 3281 if (eb.requests[0]) 3281 3282 goto err_request; 3282 3283 else
+9 -9
drivers/gpu/drm/i915/gt/intel_workarounds.c
··· 1127 1127 GAMT_CHKN_BIT_REG, 1128 1128 GAMT_CHKN_DISABLE_L3_COH_PIPE); 1129 1129 1130 + /* Wa_1407352427:icl,ehl */ 1131 + wa_write_or(wal, UNSLICE_UNIT_LEVEL_CLKGATE2, 1132 + PSDUNIT_CLKGATE_DIS); 1133 + 1134 + /* Wa_1406680159:icl,ehl */ 1135 + wa_write_or(wal, 1136 + SUBSLICE_UNIT_LEVEL_CLKGATE, 1137 + GWUNIT_CLKGATE_DIS); 1138 + 1130 1139 /* Wa_1607087056:icl,ehl,jsl */ 1131 1140 if (IS_ICELAKE(i915) || 1132 1141 IS_JSL_EHL_GT_STEP(i915, STEP_A0, STEP_B0)) ··· 1860 1851 */ 1861 1852 wa_write_or(wal, UNSLICE_UNIT_LEVEL_CLKGATE, 1862 1853 VSUNIT_CLKGATE_DIS | HSUNIT_CLKGATE_DIS); 1863 - 1864 - /* Wa_1407352427:icl,ehl */ 1865 - wa_write_or(wal, UNSLICE_UNIT_LEVEL_CLKGATE2, 1866 - PSDUNIT_CLKGATE_DIS); 1867 - 1868 - /* Wa_1406680159:icl,ehl */ 1869 - wa_write_or(wal, 1870 - SUBSLICE_UNIT_LEVEL_CLKGATE, 1871 - GWUNIT_CLKGATE_DIS); 1872 1854 1873 1855 /* 1874 1856 * Wa_1408767742:icl[a2..forever],ehl[all]
+2 -1
drivers/gpu/drm/ttm/ttm_bo.c
··· 1103 1103 * as an indication that we're about to swap out. 1104 1104 */ 1105 1105 memset(&place, 0, sizeof(place)); 1106 - place.mem_type = TTM_PL_SYSTEM; 1106 + place.mem_type = bo->resource->mem_type; 1107 1107 if (!ttm_bo_evict_swapout_allowable(bo, ctx, &place, &locked, NULL)) 1108 1108 return -EBUSY; 1109 1109 ··· 1135 1135 struct ttm_place hop; 1136 1136 1137 1137 memset(&hop, 0, sizeof(hop)); 1138 + place.mem_type = TTM_PL_SYSTEM; 1138 1139 ret = ttm_resource_alloc(bo, &place, &evict_mem); 1139 1140 if (unlikely(ret)) 1140 1141 goto out;
+1
drivers/hv/Kconfig
··· 19 19 config HYPERV_UTILS 20 20 tristate "Microsoft Hyper-V Utilities driver" 21 21 depends on HYPERV && CONNECTOR && NLS 22 + depends on PTP_1588_CLOCK_OPTIONAL 22 23 help 23 24 Select this option to enable the Hyper-V Utilities. 24 25
+1 -1
drivers/hwmon/corsair-psu.c
··· 729 729 corsairpsu_check_cmd_support(priv); 730 730 731 731 priv->hwmon_dev = hwmon_device_register_with_info(&hdev->dev, "corsairpsu", priv, 732 - &corsairpsu_chip_info, 0); 732 + &corsairpsu_chip_info, NULL); 733 733 734 734 if (IS_ERR(priv->hwmon_dev)) { 735 735 ret = PTR_ERR(priv->hwmon_dev);
+3 -4
drivers/hwmon/dell-smm-hwmon.c
··· 627 627 { 628 628 struct dell_smm_data *data = dev_get_drvdata(dev); 629 629 630 - /* Register the proc entry */ 631 - proc_create_data("i8k", 0, NULL, &i8k_proc_ops, data); 632 - 633 - devm_add_action_or_reset(dev, i8k_exit_procfs, NULL); 630 + /* Only register exit function if creation was successful */ 631 + if (proc_create_data("i8k", 0, NULL, &i8k_proc_ops, data)) 632 + devm_add_action_or_reset(dev, i8k_exit_procfs, NULL); 634 633 } 635 634 636 635 #else
+1 -1
drivers/hwmon/nct6775.c
··· 1527 1527 1528 1528 nct6775_wmi_set_bank(data, reg); 1529 1529 1530 - err = nct6775_asuswmi_read(data->bank, reg, &tmp); 1530 + err = nct6775_asuswmi_read(data->bank, reg & 0xff, &tmp); 1531 1531 if (err) 1532 1532 return 0; 1533 1533
-2
drivers/hwmon/pwm-fan.c
··· 336 336 return ret; 337 337 } 338 338 339 - ctx->pwm_value = MAX_PWM; 340 - 341 339 pwm_init_state(ctx->pwm, &ctx->pwm_state); 342 340 343 341 /*
+2 -2
drivers/hwmon/sht4x.c
··· 23 23 /* 24 24 * I2C command delays (in microseconds) 25 25 */ 26 - #define SHT4X_MEAS_DELAY 1000 26 + #define SHT4X_MEAS_DELAY_HPM 8200 /* see t_MEAS,h in datasheet */ 27 27 #define SHT4X_DELAY_EXTRA 10000 28 28 29 29 /* ··· 90 90 if (ret < 0) 91 91 goto unlock; 92 92 93 - usleep_range(SHT4X_MEAS_DELAY, SHT4X_MEAS_DELAY + SHT4X_DELAY_EXTRA); 93 + usleep_range(SHT4X_MEAS_DELAY_HPM, SHT4X_MEAS_DELAY_HPM + SHT4X_DELAY_EXTRA); 94 94 95 95 ret = i2c_master_recv(client, raw_data, SHT4X_RESPONSE_LENGTH); 96 96 if (ret != SHT4X_RESPONSE_LENGTH) {
+1 -1
drivers/i2c/busses/i2c-mpc.c
··· 636 636 status = readb(i2c->base + MPC_I2C_SR); 637 637 if (status & CSR_MIF) { 638 638 /* Wait up to 100us for transfer to properly complete */ 639 - readb_poll_timeout(i2c->base + MPC_I2C_SR, status, !(status & CSR_MCF), 0, 100); 639 + readb_poll_timeout_atomic(i2c->base + MPC_I2C_SR, status, status & CSR_MCF, 0, 100); 640 640 writeb(0, i2c->base + MPC_I2C_SR); 641 641 mpc_i2c_do_intr(i2c, status); 642 642 return IRQ_HANDLED;
+12 -20
drivers/i2c/busses/i2c-virtio.c
··· 22 22 /** 23 23 * struct virtio_i2c - virtio I2C data 24 24 * @vdev: virtio device for this controller 25 - * @completion: completion of virtio I2C message 26 25 * @adap: I2C adapter for this controller 27 26 * @vq: the virtio virtqueue for communication 28 27 */ 29 28 struct virtio_i2c { 30 29 struct virtio_device *vdev; 31 - struct completion completion; 32 30 struct i2c_adapter adap; 33 31 struct virtqueue *vq; 34 32 }; 35 33 36 34 /** 37 35 * struct virtio_i2c_req - the virtio I2C request structure 36 + * @completion: completion of virtio I2C message 38 37 * @out_hdr: the OUT header of the virtio I2C message 39 38 * @buf: the buffer into which data is read, or from which it's written 40 39 * @in_hdr: the IN header of the virtio I2C message 41 40 */ 42 41 struct virtio_i2c_req { 42 + struct completion completion; 43 43 struct virtio_i2c_out_hdr out_hdr ____cacheline_aligned; 44 44 uint8_t *buf ____cacheline_aligned; 45 45 struct virtio_i2c_in_hdr in_hdr ____cacheline_aligned; ··· 47 47 48 48 static void virtio_i2c_msg_done(struct virtqueue *vq) 49 49 { 50 - struct virtio_i2c *vi = vq->vdev->priv; 50 + struct virtio_i2c_req *req; 51 + unsigned int len; 51 52 52 - complete(&vi->completion); 53 + while ((req = virtqueue_get_buf(vq, &len))) 54 + complete(&req->completion); 53 55 } 54 56 55 57 static int virtio_i2c_prepare_reqs(struct virtqueue *vq, ··· 63 61 64 62 for (i = 0; i < num; i++) { 65 63 int outcnt = 0, incnt = 0; 64 + 65 + init_completion(&reqs[i].completion); 66 66 67 67 /* 68 68 * Only 7-bit mode supported for this moment. For the address ··· 110 106 struct virtio_i2c_req *reqs, 111 107 struct i2c_msg *msgs, int num) 112 108 { 113 - struct virtio_i2c_req *req; 114 109 bool failed = false; 115 - unsigned int len; 116 110 int i, j = 0; 117 111 118 112 for (i = 0; i < num; i++) { 119 - /* Detach the ith request from the vq */ 120 - req = virtqueue_get_buf(vq, &len); 113 + struct virtio_i2c_req *req = &reqs[i]; 121 114 122 - /* 123 - * Condition req == &reqs[i] should always meet since we have 124 - * total num requests in the vq. reqs[i] can never be NULL here. 125 - */ 126 - if (!failed && (WARN_ON(req != &reqs[i]) || 127 - req->in_hdr.status != VIRTIO_I2C_MSG_OK)) 115 + wait_for_completion(&req->completion); 116 + 117 + if (!failed && req->in_hdr.status != VIRTIO_I2C_MSG_OK) 128 118 failed = true; 129 119 130 120 i2c_put_dma_safe_msg_buf(reqs[i].buf, &msgs[i], !failed); ··· 154 156 * remote here to clear the virtqueue, so we can try another set of 155 157 * messages later on. 156 158 */ 157 - 158 - reinit_completion(&vi->completion); 159 159 virtqueue_kick(vq); 160 - 161 - wait_for_completion(&vi->completion); 162 160 163 161 count = virtio_i2c_complete_reqs(vq, reqs, msgs, count); 164 162 ··· 203 209 204 210 vdev->priv = vi; 205 211 vi->vdev = vdev; 206 - 207 - init_completion(&vi->completion); 208 212 209 213 ret = virtio_i2c_setup_vqs(vi); 210 214 if (ret)

+2 -3
drivers/iio/accel/kxcjk-1013.c
··· 1595 1595 return 0; 1596 1596 1597 1597 err_buffer_cleanup: 1598 - if (data->dready_trig) 1599 - iio_triggered_buffer_cleanup(indio_dev); 1598 + iio_triggered_buffer_cleanup(indio_dev); 1600 1599 err_trigger_unregister: 1601 1600 if (data->dready_trig) 1602 1601 iio_trigger_unregister(data->dready_trig); ··· 1617 1618 pm_runtime_disable(&client->dev); 1618 1619 pm_runtime_set_suspended(&client->dev); 1619 1620 1621 + iio_triggered_buffer_cleanup(indio_dev); 1620 1622 if (data->dready_trig) { 1621 - iio_triggered_buffer_cleanup(indio_dev); 1622 1623 iio_trigger_unregister(data->dready_trig); 1623 1624 iio_trigger_unregister(data->motion_trig); 1624 1625 }
+3 -3
drivers/iio/accel/kxsd9.c
··· 224 224 hw_values.chan, 225 225 sizeof(hw_values.chan)); 226 226 if (ret) { 227 - dev_err(st->dev, 228 - "error reading data\n"); 229 - return ret; 227 + dev_err(st->dev, "error reading data: %d\n", ret); 228 + goto out; 230 229 } 231 230 232 231 iio_push_to_buffers_with_timestamp(indio_dev, 233 232 &hw_values, 234 233 iio_get_time_ns(indio_dev)); 234 + out: 235 235 iio_trigger_notify_done(indio_dev->trig); 236 236 237 237 return IRQ_HANDLED;
+1 -1
drivers/iio/accel/mma8452.c
··· 1470 1470 if (ret) 1471 1471 return ret; 1472 1472 1473 - indio_dev->trig = trig; 1473 + indio_dev->trig = iio_trigger_get(trig); 1474 1474 1475 1475 return 0; 1476 1476 }
+1 -1
drivers/iio/adc/Kconfig
··· 532 532 533 533 config IMX8QXP_ADC 534 534 tristate "NXP IMX8QXP ADC driver" 535 - depends on ARCH_MXC_ARM64 || COMPILE_TEST 535 + depends on ARCH_MXC || COMPILE_TEST 536 536 depends on HAS_IOMEM 537 537 help 538 538 Say yes here to build support for IMX8QXP ADC.
+1 -1
drivers/iio/adc/ad7768-1.c
··· 480 480 iio_push_to_buffers_with_timestamp(indio_dev, &st->data.scan, 481 481 iio_get_time_ns(indio_dev)); 482 482 483 - iio_trigger_notify_done(indio_dev->trig); 484 483 err_unlock: 484 + iio_trigger_notify_done(indio_dev->trig); 485 485 mutex_unlock(&st->lock); 486 486 487 487 return IRQ_HANDLED;
+2 -1
drivers/iio/adc/at91-sama5d2_adc.c
··· 1586 1586 *val = st->conversion_value; 1587 1587 ret = at91_adc_adjust_val_osr(st, val); 1588 1588 if (chan->scan_type.sign == 's') 1589 - *val = sign_extend32(*val, 11); 1589 + *val = sign_extend32(*val, 1590 + chan->scan_type.realbits - 1); 1590 1591 st->conversion_done = false; 1591 1592 } 1592 1593
+3 -15
drivers/iio/adc/axp20x_adc.c
··· 251 251 struct iio_chan_spec const *chan, int *val) 252 252 { 253 253 struct axp20x_adc_iio *info = iio_priv(indio_dev); 254 - int size; 255 254 256 - /* 257 - * N.B.: Unlike the Chinese datasheets tell, the charging current is 258 - * stored on 12 bits, not 13 bits. Only discharging current is on 13 259 - * bits. 260 - */ 261 - if (chan->type == IIO_CURRENT && chan->channel == AXP22X_BATT_DISCHRG_I) 262 - size = 13; 263 - else 264 - size = 12; 265 - 266 - *val = axp20x_read_variable_width(info->regmap, chan->address, size); 255 + *val = axp20x_read_variable_width(info->regmap, chan->address, 12); 267 256 if (*val < 0) 268 257 return *val; 269 258 ··· 375 386 return IIO_VAL_INT_PLUS_MICRO; 376 387 377 388 case IIO_CURRENT: 378 - *val = 0; 379 - *val2 = 500000; 380 - return IIO_VAL_INT_PLUS_MICRO; 389 + *val = 1; 390 + return IIO_VAL_INT; 381 391 382 392 case IIO_TEMP: 383 393 *val = 100;
+12 -9
drivers/iio/adc/dln2-adc.c
··· 248 248 static int dln2_adc_read(struct dln2_adc *dln2, unsigned int channel) 249 249 { 250 250 int ret, i; 251 - struct iio_dev *indio_dev = platform_get_drvdata(dln2->pdev); 252 251 u16 conflict; 253 252 __le16 value; 254 253 int olen = sizeof(value); ··· 256 257 .chan = channel, 257 258 }; 258 259 259 - ret = iio_device_claim_direct_mode(indio_dev); 260 - if (ret < 0) 261 - return ret; 262 - 263 260 ret = dln2_adc_set_chan_enabled(dln2, channel, true); 264 261 if (ret < 0) 265 - goto release_direct; 262 + return ret; 266 263 267 264 ret = dln2_adc_set_port_enabled(dln2, true, &conflict); 268 265 if (ret < 0) { ··· 295 300 dln2_adc_set_port_enabled(dln2, false, NULL); 296 301 disable_chan: 297 302 dln2_adc_set_chan_enabled(dln2, channel, false); 298 - release_direct: 299 - iio_device_release_direct_mode(indio_dev); 300 303 301 304 return ret; 302 305 } ··· 330 337 331 338 switch (mask) { 332 339 case IIO_CHAN_INFO_RAW: 340 + ret = iio_device_claim_direct_mode(indio_dev); 341 + if (ret < 0) 342 + return ret; 343 + 333 344 mutex_lock(&dln2->mutex); 334 345 ret = dln2_adc_read(dln2, chan->channel); 335 346 mutex_unlock(&dln2->mutex); 347 + 348 + iio_device_release_direct_mode(indio_dev); 336 349 337 350 if (ret < 0) 338 351 return ret; ··· 655 656 return -ENOMEM; 656 657 } 657 658 iio_trigger_set_drvdata(dln2->trig, dln2); 658 - devm_iio_trigger_register(dev, dln2->trig); 659 + ret = devm_iio_trigger_register(dev, dln2->trig); 660 + if (ret) { 661 + dev_err(dev, "failed to register trigger: %d\n", ret); 662 + return ret; 663 + } 659 664 iio_trigger_set_immutable(indio_dev, dln2->trig); 660 665 661 666 ret = devm_iio_triggered_buffer_setup(dev, indio_dev, NULL,
+2 -1
drivers/iio/adc/stm32-adc.c
··· 1117 1117 { 1118 1118 struct stm32_adc *adc = iio_priv(indio_dev); 1119 1119 1120 + stm32_adc_writel(adc, STM32H7_ADC_PCSEL, 0); 1120 1121 stm32h7_adc_disable(indio_dev); 1121 1122 stm32_adc_int_ch_disable(adc); 1122 1123 stm32h7_adc_enter_pwr_down(adc); ··· 1987 1986 /* Get calibration data for vrefint channel */ 1988 1987 ret = nvmem_cell_read_u16(&indio_dev->dev, "vrefint", &vrefint); 1989 1988 if (ret && ret != -ENOENT) { 1990 - return dev_err_probe(&indio_dev->dev, ret, 1989 + return dev_err_probe(indio_dev->dev.parent, ret, 1991 1990 "nvmem access error\n"); 1992 1991 } 1993 1992 if (ret == -ENOENT)
+3 -2
drivers/iio/gyro/adxrs290.c
··· 7 7 */ 8 8 9 9 #include <linux/bitfield.h> 10 + #include <linux/bitops.h> 10 11 #include <linux/delay.h> 11 12 #include <linux/device.h> 12 13 #include <linux/kernel.h> ··· 125 124 goto err_unlock; 126 125 } 127 126 128 - *val = temp; 127 + *val = sign_extend32(temp, 15); 129 128 130 129 err_unlock: 131 130 mutex_unlock(&st->lock); ··· 147 146 } 148 147 149 148 /* extract lower 12 bits temperature reading */ 150 - *val = temp & 0x0FFF; 149 + *val = sign_extend32(temp, 11); 151 150 152 151 err_unlock: 153 152 mutex_unlock(&st->lock);
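Both this hunk and the at91-sama5d2 fix above rely on `sign_extend32()` to widen a narrow two's-complement field: bit `realbits - 1` is the sign bit, so masking with `0x0FFF` alone left negative readings positive. The helper's shift-based behavior can be modeled in a few lines (an illustrative sketch; like the kernel, it relies on arithmetic right shift of signed values):

```c
#include <stdint.h>

/* Model of the kernel's sign_extend32(): treat bit `index` as the
 * sign bit of `value` and propagate it through the upper bits via a
 * left shift followed by an arithmetic right shift. */
int32_t model_sign_extend32(uint32_t value, int index)
{
	uint8_t shift = 31 - index;

	return (int32_t)(value << shift) >> shift;
}
```

For the 12-bit temperature field, `model_sign_extend32(temp, 11)` maps `0x800` to -2048 while leaving `0x7FF` at 2047; for the 16-bit rate reading, bit 15 is the sign bit.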
+1 -1
drivers/iio/gyro/itg3200_buffer.c
··· 61 61 62 62 iio_push_to_buffers_with_timestamp(indio_dev, &scan, pf->timestamp); 63 63 64 + error_ret: 64 65 iio_trigger_notify_done(indio_dev->trig); 65 66 66 - error_ret: 67 67 return IRQ_HANDLED; 68 68 } 69 69
-1
drivers/iio/industrialio-trigger.c
··· 556 556 irq_modify_status(trig->subirq_base + i, 557 557 IRQ_NOREQUEST | IRQ_NOAUTOEN, IRQ_NOPROBE); 558 558 } 559 - get_device(&trig->dev); 560 559 561 560 return trig; 562 561
+1 -1
drivers/iio/light/ltr501.c
··· 1275 1275 ret = regmap_bulk_read(data->regmap, LTR501_ALS_DATA1, 1276 1276 als_buf, sizeof(als_buf)); 1277 1277 if (ret < 0) 1278 - return ret; 1278 + goto done; 1279 1279 if (test_bit(0, indio_dev->active_scan_mask)) 1280 1280 scan.channels[j++] = le16_to_cpu(als_buf[1]); 1281 1281 if (test_bit(1, indio_dev->active_scan_mask))
+3 -3
drivers/iio/light/stk3310.c
··· 546 546 mutex_lock(&data->lock); 547 547 ret = regmap_field_read(data->reg_flag_nf, &dir); 548 548 if (ret < 0) { 549 - dev_err(&data->client->dev, "register read failed\n"); 550 - mutex_unlock(&data->lock); 551 - return ret; 549 + dev_err(&data->client->dev, "register read failed: %d\n", ret); 550 + goto out; 552 551 } 553 552 event = IIO_UNMOD_EVENT_CODE(IIO_PROXIMITY, 1, 554 553 IIO_EV_TYPE_THRESH, ··· 559 560 ret = regmap_field_write(data->reg_flag_psint, 0); 560 561 if (ret < 0) 561 562 dev_err(&data->client->dev, "failed to reset interrupts\n"); 563 + out: 562 564 mutex_unlock(&data->lock); 563 565 564 566 return IRQ_HANDLED;
+1 -1
drivers/iio/trigger/stm32-timer-trigger.c
··· 912 912 }; 913 913 module_platform_driver(stm32_timer_trigger_driver); 914 914 915 - MODULE_ALIAS("platform: stm32-timer-trigger"); 915 + MODULE_ALIAS("platform:stm32-timer-trigger"); 916 916 MODULE_DESCRIPTION("STMicroelectronics STM32 Timer Trigger driver"); 917 917 MODULE_LICENSE("GPL v2");
+2
drivers/infiniband/hw/hfi1/chip.c
··· 8415 8415 */ 8416 8416 static void __hfi1_rcd_eoi_intr(struct hfi1_ctxtdata *rcd) 8417 8417 { 8418 + if (!rcd->rcvhdrq) 8419 + return; 8418 8420 clear_recv_intr(rcd); 8419 8421 if (check_packet_present(rcd)) 8420 8422 force_recv_intr(rcd);
+2
drivers/infiniband/hw/hfi1/driver.c
··· 1012 1012 struct hfi1_packet packet; 1013 1013 int skip_pkt = 0; 1014 1014 1015 + if (!rcd->rcvhdrq) 1016 + return RCV_PKT_OK; 1015 1017 /* Control context will always use the slow path interrupt handler */ 1016 1018 needset = (rcd->ctxt == HFI1_CTRL_CTXT) ? 0 : 1; 1017 1019
+17 -23
drivers/infiniband/hw/hfi1/init.c
··· 113 113 rcd->fast_handler = get_dma_rtail_setting(rcd) ? 114 114 handle_receive_interrupt_dma_rtail : 115 115 handle_receive_interrupt_nodma_rtail; 116 - rcd->slow_handler = handle_receive_interrupt; 117 116 118 117 hfi1_set_seq_cnt(rcd, 1); 119 118 ··· 333 334 rcd->numa_id = numa; 334 335 rcd->rcv_array_groups = dd->rcv_entries.ngroups; 335 336 rcd->rhf_rcv_function_map = normal_rhf_rcv_functions; 337 + rcd->slow_handler = handle_receive_interrupt; 338 + rcd->do_interrupt = rcd->slow_handler; 336 339 rcd->msix_intr = CCE_NUM_MSIX_VECTORS; 337 340 338 341 mutex_init(&rcd->exp_mutex); ··· 875 874 if (ret) 876 875 goto done; 877 876 878 - /* allocate dummy tail memory for all receive contexts */ 879 - dd->rcvhdrtail_dummy_kvaddr = dma_alloc_coherent(&dd->pcidev->dev, 880 - sizeof(u64), 881 - &dd->rcvhdrtail_dummy_dma, 882 - GFP_KERNEL); 883 - 884 - if (!dd->rcvhdrtail_dummy_kvaddr) { 885 - dd_dev_err(dd, "cannot allocate dummy tail memory\n"); 886 - ret = -ENOMEM; 887 - goto done; 888 - } 889 - 890 877 /* dd->rcd can be NULL if early initialization failed */ 891 878 for (i = 0; dd->rcd && i < dd->first_dyn_alloc_ctxt; ++i) { 892 879 /* ··· 886 897 rcd = hfi1_rcd_get_by_index(dd, i); 887 898 if (!rcd) 888 899 continue; 889 - 890 - rcd->do_interrupt = &handle_receive_interrupt; 891 900 892 901 lastfail = hfi1_create_rcvhdrq(dd, rcd); 893 902 if (!lastfail) ··· 1107 1120 rcd->egrbufs.rcvtids = NULL; 1108 1121 1109 1122 for (e = 0; e < rcd->egrbufs.alloced; e++) { 1110 - if (rcd->egrbufs.buffers[e].dma) 1123 + if (rcd->egrbufs.buffers[e].addr) 1111 1124 dma_free_coherent(&dd->pcidev->dev, 1112 1125 rcd->egrbufs.buffers[e].len, 1113 1126 rcd->egrbufs.buffers[e].addr, ··· 1188 1201 dd->tx_opstats = NULL; 1189 1202 kfree(dd->comp_vect); 1190 1203 dd->comp_vect = NULL; 1204 + if (dd->rcvhdrtail_dummy_kvaddr) 1205 + dma_free_coherent(&dd->pcidev->dev, sizeof(u64), 1206 + (void *)dd->rcvhdrtail_dummy_kvaddr, 1207 + dd->rcvhdrtail_dummy_dma); 1208 + 
dd->rcvhdrtail_dummy_kvaddr = NULL; 1191 1209 sdma_clean(dd, dd->num_sdma); 1192 1210 rvt_dealloc_device(&dd->verbs_dev.rdi); 1193 1211 } ··· 1286 1294 1287 1295 dd->comp_vect = kzalloc(sizeof(*dd->comp_vect), GFP_KERNEL); 1288 1296 if (!dd->comp_vect) { 1297 + ret = -ENOMEM; 1298 + goto bail; 1299 + } 1300 + 1301 + /* allocate dummy tail memory for all receive contexts */ 1302 + dd->rcvhdrtail_dummy_kvaddr = 1303 + dma_alloc_coherent(&dd->pcidev->dev, sizeof(u64), 1304 + &dd->rcvhdrtail_dummy_dma, GFP_KERNEL); 1305 + if (!dd->rcvhdrtail_dummy_kvaddr) { 1289 1306 ret = -ENOMEM; 1290 1307 goto bail; 1291 1308 } ··· 1505 1504 } 1506 1505 1507 1506 free_credit_return(dd); 1508 - 1509 - if (dd->rcvhdrtail_dummy_kvaddr) { 1510 - dma_free_coherent(&dd->pcidev->dev, sizeof(u64), 1511 - (void *)dd->rcvhdrtail_dummy_kvaddr, 1512 - dd->rcvhdrtail_dummy_dma); 1513 - dd->rcvhdrtail_dummy_kvaddr = NULL; 1514 - } 1515 1507 1516 1508 /* 1517 1509 * Free any resources still in use (usually just kernel contexts)
+1 -1
drivers/infiniband/hw/hfi1/sdma.c
··· 838 838 if (current->nr_cpus_allowed != 1) 839 839 goto out; 840 840 841 - cpu_id = smp_processor_id(); 842 841 rcu_read_lock(); 842 + cpu_id = smp_processor_id(); 843 843 rht_node = rhashtable_lookup(dd->sdma_rht, &cpu_id, 844 844 sdma_rht_params); 845 845
+11 -3
drivers/infiniband/hw/hns/hns_roce_hw_v2.c
··· 33 33 #include <linux/acpi.h> 34 34 #include <linux/etherdevice.h> 35 35 #include <linux/interrupt.h> 36 + #include <linux/iopoll.h> 36 37 #include <linux/kernel.h> 37 38 #include <linux/types.h> 38 39 #include <net/addrconf.h> ··· 1051 1050 unsigned long instance_stage, 1052 1051 unsigned long reset_stage) 1053 1052 { 1053 + #define HW_RESET_TIMEOUT_US 1000000 1054 + #define HW_RESET_SLEEP_US 1000 1055 + 1054 1056 struct hns_roce_v2_priv *priv = hr_dev->priv; 1055 1057 struct hnae3_handle *handle = priv->handle; 1056 1058 const struct hnae3_ae_ops *ops = handle->ae_algo->ops; 1059 + unsigned long val; 1060 + int ret; 1057 1061 1058 1062 /* When hardware reset is detected, we should stop sending mailbox&cmq& 1059 1063 * doorbell to hardware. If now in .init_instance() function, we should ··· 1070 1064 * again. 1071 1065 */ 1072 1066 hr_dev->dis_db = true; 1073 - if (!ops->get_hw_reset_stat(handle)) 1067 + 1068 + ret = read_poll_timeout(ops->ae_dev_reset_cnt, val, 1069 + val > hr_dev->reset_cnt, HW_RESET_SLEEP_US, 1070 + HW_RESET_TIMEOUT_US, false, handle); 1071 + if (!ret) 1074 1072 hr_dev->is_reset = true; 1075 1073 1076 1074 if (!hr_dev->is_reset || reset_stage == HNS_ROCE_STATE_RST_INIT || ··· 6397 6387 if (!hr_dev) 6398 6388 return 0; 6399 6389 6400 - hr_dev->is_reset = true; 6401 6390 hr_dev->active = false; 6402 6391 hr_dev->dis_db = true; 6403 - 6404 6392 hr_dev->state = HNS_ROCE_DEVICE_STATE_RST_DOWN; 6405 6393 6406 6394 return 0;
+6 -1
drivers/infiniband/hw/irdma/hw.c
··· 60 60 { 61 61 struct irdma_cq *cq = iwcq->back_cq; 62 62 63 + if (!cq->user_mode) 64 + cq->armed = false; 63 65 if (cq->ibcq.comp_handler) 64 66 cq->ibcq.comp_handler(&cq->ibcq, cq->ibcq.cq_context); 65 67 } ··· 148 146 qp->flush_code = FLUSH_PROT_ERR; 149 147 break; 150 148 case IRDMA_AE_AMP_BAD_QP: 149 + case IRDMA_AE_WQE_UNEXPECTED_OPCODE: 151 150 qp->flush_code = FLUSH_LOC_QP_OP_ERR; 152 151 break; 153 152 case IRDMA_AE_AMP_BAD_STAG_KEY: ··· 159 156 case IRDMA_AE_PRIV_OPERATION_DENIED: 160 157 case IRDMA_AE_IB_INVALID_REQUEST: 161 158 case IRDMA_AE_IB_REMOTE_ACCESS_ERROR: 162 - case IRDMA_AE_IB_REMOTE_OP_ERROR: 163 159 qp->flush_code = FLUSH_REM_ACCESS_ERR; 164 160 qp->event_type = IRDMA_QP_EVENT_ACCESS_ERR; 165 161 break; ··· 185 183 case IRDMA_AE_AMP_MWBIND_BIND_DISABLED: 186 184 case IRDMA_AE_AMP_MWBIND_INVALID_BOUNDS: 187 185 qp->flush_code = FLUSH_MW_BIND_ERR; 186 + break; 187 + case IRDMA_AE_IB_REMOTE_OP_ERROR: 188 + qp->flush_code = FLUSH_REM_OP_ERR; 188 189 break; 189 190 default: 190 191 qp->flush_code = FLUSH_FATAL_ERR;
+1
drivers/infiniband/hw/irdma/main.h
··· 542 542 void (*callback_fcn)(struct irdma_cqp_request *cqp_request), 543 543 void *cb_param); 544 544 void irdma_gsi_ud_qp_ah_cb(struct irdma_cqp_request *cqp_request); 545 + bool irdma_cq_empty(struct irdma_cq *iwcq); 545 546 int irdma_inetaddr_event(struct notifier_block *notifier, unsigned long event, 546 547 void *ptr); 547 548 int irdma_inet6addr_event(struct notifier_block *notifier, unsigned long event,
+3 -5
drivers/infiniband/hw/irdma/pble.c
··· 25 25 list_del(&chunk->list); 26 26 if (chunk->type == PBLE_SD_PAGED) 27 27 irdma_pble_free_paged_mem(chunk); 28 - if (chunk->bitmapbuf) 29 - kfree(chunk->bitmapmem.va); 28 + bitmap_free(chunk->bitmapbuf); 30 29 kfree(chunk->chunkmem.va); 31 30 } 32 31 } ··· 282 283 "PBLE: next_fpm_addr = %llx chunk_size[%llu] = 0x%llx\n", 283 284 pble_rsrc->next_fpm_addr, chunk->size, chunk->size); 284 285 pble_rsrc->unallocated_pble -= (u32)(chunk->size >> 3); 285 - list_add(&chunk->list, &pble_rsrc->pinfo.clist); 286 286 sd_reg_val = (sd_entry_type == IRDMA_SD_TYPE_PAGED) ? 287 287 sd_entry->u.pd_table.pd_page_addr.pa : 288 288 sd_entry->u.bp.addr.pa; ··· 293 295 goto error; 294 296 } 295 297 298 + list_add(&chunk->list, &pble_rsrc->pinfo.clist); 296 299 sd_entry->valid = true; 297 300 return 0; 298 301 299 302 error: 300 - if (chunk->bitmapbuf) 301 - kfree(chunk->bitmapmem.va); 303 + bitmap_free(chunk->bitmapbuf); 302 304 kfree(chunk->chunkmem.va); 303 305 304 306 return ret_code;
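The pble.c hunk above drops a hand-rolled bitmap buffer (`kzalloc(sizeofbitmap >> 3)`) in favour of `bitmap_zalloc()`/`bitmap_free()`. The kernel helper sizes the buffer in whole longs, so partial words are always covered. A rough userspace sketch of that sizing rule (helper names here are illustrative, not the kernel API):

```c
#include <limits.h>
#include <stdlib.h>

/* Userspace sketch of bitmap_zalloc()'s sizing: allocate enough longs to
 * hold nbits bits, rounding up to a whole number of longs. */
static size_t bits_to_longs(size_t nbits)
{
	size_t bits_per_long = sizeof(long) * CHAR_BIT;

	return (nbits + bits_per_long - 1) / bits_per_long;
}

static unsigned long *bitmap_zalloc_sketch(size_t nbits)
{
	/* calloc gives the zeroed buffer that the _zalloc suffix promises */
	return calloc(bits_to_longs(nbits), sizeof(long));
}
```

By contrast, the removed `sizeofbitmap >> 3` computed a byte count by rounding down, which only works when the bit count is a multiple of 8; the helper also removes the need for the separate `bitmapmem` bookkeeping struct deleted in pble.h below.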
-1
drivers/infiniband/hw/irdma/pble.h
··· 78 78 u32 pg_cnt; 79 79 enum irdma_alloc_type type; 80 80 struct irdma_sc_dev *dev; 81 - struct irdma_virt_mem bitmapmem; 82 81 struct irdma_virt_mem chunkmem; 83 82 }; 84 83
+17 -7
drivers/infiniband/hw/irdma/utils.c
··· 2239 2239 2240 2240 sizeofbitmap = (u64)pchunk->size >> pprm->pble_shift; 2241 2241 2242 - pchunk->bitmapmem.size = sizeofbitmap >> 3; 2243 - pchunk->bitmapmem.va = kzalloc(pchunk->bitmapmem.size, GFP_KERNEL); 2244 - 2245 - if (!pchunk->bitmapmem.va) 2242 + pchunk->bitmapbuf = bitmap_zalloc(sizeofbitmap, GFP_KERNEL); 2243 + if (!pchunk->bitmapbuf) 2246 2244 return IRDMA_ERR_NO_MEMORY; 2247 - 2248 - pchunk->bitmapbuf = pchunk->bitmapmem.va; 2249 - bitmap_zero(pchunk->bitmapbuf, sizeofbitmap); 2250 2245 2251 2246 pchunk->sizeofbitmap = sizeofbitmap; 2252 2247 /* each pble is 8 bytes hence shift by 3 */ ··· 2485 2490 ibevent.device = iwqp->ibqp.device; 2486 2491 ibevent.element.qp = &iwqp->ibqp; 2487 2492 iwqp->ibqp.event_handler(&ibevent, iwqp->ibqp.qp_context); 2493 + } 2494 + 2495 + bool irdma_cq_empty(struct irdma_cq *iwcq) 2496 + { 2497 + struct irdma_cq_uk *ukcq; 2498 + u64 qword3; 2499 + __le64 *cqe; 2500 + u8 polarity; 2501 + 2502 + ukcq = &iwcq->sc_cq.cq_uk; 2503 + cqe = IRDMA_GET_CURRENT_CQ_ELEM(ukcq); 2504 + get_64bit_val(cqe, 24, &qword3); 2505 + polarity = (u8)FIELD_GET(IRDMA_CQ_VALID, qword3); 2506 + 2507 + return polarity != ukcq->polarity; 2488 2508 }
+18 -5
drivers/infiniband/hw/irdma/verbs.c
··· 3584 3584 struct irdma_cq *iwcq; 3585 3585 struct irdma_cq_uk *ukcq; 3586 3586 unsigned long flags; 3587 - enum irdma_cmpl_notify cq_notify = IRDMA_CQ_COMPL_EVENT; 3587 + enum irdma_cmpl_notify cq_notify; 3588 + bool promo_event = false; 3589 + int ret = 0; 3588 3590 3591 + cq_notify = notify_flags == IB_CQ_SOLICITED ? 3592 + IRDMA_CQ_COMPL_SOLICITED : IRDMA_CQ_COMPL_EVENT; 3589 3593 iwcq = to_iwcq(ibcq); 3590 3594 ukcq = &iwcq->sc_cq.cq_uk; 3591 - if (notify_flags == IB_CQ_SOLICITED) 3592 - cq_notify = IRDMA_CQ_COMPL_SOLICITED; 3593 3595 3594 3596 spin_lock_irqsave(&iwcq->lock, flags); 3595 - irdma_uk_cq_request_notification(ukcq, cq_notify); 3597 + /* Only promote to arm the CQ for any event if the last arm event was solicited. */ 3598 + if (iwcq->last_notify == IRDMA_CQ_COMPL_SOLICITED && notify_flags != IB_CQ_SOLICITED) 3599 + promo_event = true; 3600 + 3601 + if (!iwcq->armed || promo_event) { 3602 + iwcq->armed = true; 3603 + iwcq->last_notify = cq_notify; 3604 + irdma_uk_cq_request_notification(ukcq, cq_notify); 3605 + } 3606 + 3607 + if ((notify_flags & IB_CQ_REPORT_MISSED_EVENTS) && !irdma_cq_empty(iwcq)) 3608 + ret = 1; 3596 3609 spin_unlock_irqrestore(&iwcq->lock, flags); 3597 3610 3598 - return 0; 3611 + return ret; 3599 3612 } 3600 3613 3601 3614 static int irdma_roce_port_immutable(struct ib_device *ibdev, u32 port_num,
+2
drivers/infiniband/hw/irdma/verbs.h
··· 110 110 u16 cq_size; 111 111 u16 cq_num; 112 112 bool user_mode; 113 + bool armed; 114 + enum irdma_cmpl_notify last_notify; 113 115 u32 polled_cmpls; 114 116 u32 cq_mem_size; 115 117 struct irdma_dma_mem kmem;
+3 -3
drivers/infiniband/hw/mlx5/mlx5_ib.h
··· 665 665 666 666 /* User MR data */ 667 667 struct mlx5_cache_ent *cache_ent; 668 - struct ib_umem *umem; 669 668 670 669 /* This is zero'd when the MR is allocated */ 671 670 union { ··· 676 677 struct list_head list; 677 678 }; 678 679 679 - /* Used only by kernel MRs (umem == NULL) */ 680 + /* Used only by kernel MRs */ 680 681 struct { 681 682 void *descs; 682 683 void *descs_alloc; ··· 697 698 int data_length; 698 699 }; 699 700 700 - /* Used only by User MRs (umem != NULL) */ 701 + /* Used only by User MRs */ 701 702 struct { 703 + struct ib_umem *umem; 702 704 unsigned int page_shift; 703 705 /* Current access_flags */ 704 706 int access_flags;
+12 -14
drivers/infiniband/hw/mlx5/mr.c
··· 1904 1904 return ret; 1905 1905 } 1906 1906 1907 - static void 1908 - mlx5_free_priv_descs(struct mlx5_ib_mr *mr) 1907 + static void mlx5_free_priv_descs(struct mlx5_ib_mr *mr) 1909 1908 { 1910 - if (!mr->umem && mr->descs) { 1911 - struct ib_device *device = mr->ibmr.device; 1912 - int size = mr->max_descs * mr->desc_size; 1913 - struct mlx5_ib_dev *dev = to_mdev(device); 1909 + struct mlx5_ib_dev *dev = to_mdev(mr->ibmr.device); 1910 + int size = mr->max_descs * mr->desc_size; 1914 1911 1915 - dma_unmap_single(&dev->mdev->pdev->dev, mr->desc_map, size, 1916 - DMA_TO_DEVICE); 1917 - kfree(mr->descs_alloc); 1918 - mr->descs = NULL; 1919 - } 1912 + if (!mr->descs) 1913 + return; 1914 + 1915 + dma_unmap_single(&dev->mdev->pdev->dev, mr->desc_map, size, 1916 + DMA_TO_DEVICE); 1917 + kfree(mr->descs_alloc); 1918 + mr->descs = NULL; 1920 1919 } 1921 1920 1922 1921 int mlx5_ib_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata) ··· 1991 1992 if (mr->cache_ent) { 1992 1993 mlx5_mr_cache_free(dev, mr); 1993 1994 } else { 1994 - mlx5_free_priv_descs(mr); 1995 + if (!udata) 1996 + mlx5_free_priv_descs(mr); 1995 1997 kfree(mr); 1996 1998 } 1997 1999 return 0; ··· 2079 2079 if (err) 2080 2080 goto err_free_in; 2081 2081 2082 - mr->umem = NULL; 2083 2082 kfree(in); 2084 2083 2085 2084 return mr; ··· 2205 2206 } 2206 2207 2207 2208 mr->ibmr.device = pd->device; 2208 - mr->umem = NULL; 2209 2209 2210 2210 switch (mr_type) { 2211 2211 case IB_MR_TYPE_MEM_REG:
+1
drivers/infiniband/sw/rxe/rxe_qp.c
··· 359 359 360 360 err2: 361 361 rxe_queue_cleanup(qp->sq.queue); 362 + qp->sq.queue = NULL; 362 363 err1: 363 364 qp->pd = NULL; 364 365 qp->rcq = NULL;
+6 -3
drivers/infiniband/ulp/rtrs/rtrs-clt-stats.c
··· 19 19 int cpu; 20 20 21 21 cpu = raw_smp_processor_id(); 22 - s = this_cpu_ptr(stats->pcpu_stats); 22 + s = get_cpu_ptr(stats->pcpu_stats); 23 23 if (con->cpu != cpu) { 24 24 s->cpu_migr.to++; 25 25 ··· 27 27 s = per_cpu_ptr(stats->pcpu_stats, con->cpu); 28 28 atomic_inc(&s->cpu_migr.from); 29 29 } 30 + put_cpu_ptr(stats->pcpu_stats); 30 31 } 31 32 32 33 void rtrs_clt_inc_failover_cnt(struct rtrs_clt_stats *stats) 33 34 { 34 35 struct rtrs_clt_stats_pcpu *s; 35 36 36 - s = this_cpu_ptr(stats->pcpu_stats); 37 + s = get_cpu_ptr(stats->pcpu_stats); 37 38 s->rdma.failover_cnt++; 39 + put_cpu_ptr(stats->pcpu_stats); 38 40 } 39 41 40 42 int rtrs_clt_stats_migration_from_cnt_to_str(struct rtrs_clt_stats *stats, char *buf) ··· 171 169 { 172 170 struct rtrs_clt_stats_pcpu *s; 173 171 174 - s = this_cpu_ptr(stats->pcpu_stats); 172 + s = get_cpu_ptr(stats->pcpu_stats); 175 173 s->rdma.dir[d].cnt++; 176 174 s->rdma.dir[d].size_total += size; 175 + put_cpu_ptr(stats->pcpu_stats); 177 176 } 178 177 179 178 void rtrs_clt_update_all_stats(struct rtrs_clt_io_req *req, int dir)
+1 -1
drivers/irqchip/irq-apple-aic.c
··· 707 707 .free = aic_ipi_free, 708 708 }; 709 709 710 - static int aic_init_smp(struct aic_irq_chip *irqc, struct device_node *node) 710 + static int __init aic_init_smp(struct aic_irq_chip *irqc, struct device_node *node) 711 711 { 712 712 struct irq_domain *ipi_domain; 713 713 int base_ipi;
+7 -11
drivers/irqchip/irq-armada-370-xp.c
··· 232 232 int hwirq, i; 233 233 234 234 mutex_lock(&msi_used_lock); 235 - 236 - hwirq = bitmap_find_next_zero_area(msi_used, PCI_MSI_DOORBELL_NR, 237 - 0, nr_irqs, 0); 238 - if (hwirq >= PCI_MSI_DOORBELL_NR) { 239 - mutex_unlock(&msi_used_lock); 240 - return -ENOSPC; 241 - } 242 - 243 - bitmap_set(msi_used, hwirq, nr_irqs); 235 + hwirq = bitmap_find_free_region(msi_used, PCI_MSI_DOORBELL_NR, 236 + order_base_2(nr_irqs)); 244 237 mutex_unlock(&msi_used_lock); 238 + 239 + if (hwirq < 0) 240 + return -ENOSPC; 245 241 246 242 for (i = 0; i < nr_irqs; i++) { 247 243 irq_domain_set_info(domain, virq + i, hwirq + i, ··· 246 250 NULL, NULL); 247 251 } 248 252 249 - return hwirq; 253 + return 0; 250 254 } 251 255 252 256 static void armada_370_xp_msi_free(struct irq_domain *domain, ··· 255 259 struct irq_data *d = irq_domain_get_irq_data(domain, virq); 256 260 257 261 mutex_lock(&msi_used_lock); 258 - bitmap_clear(msi_used, d->hwirq, nr_irqs); 262 + bitmap_release_region(msi_used, d->hwirq, order_base_2(nr_irqs)); 259 263 mutex_unlock(&msi_used_lock); 260 264 } 261 265
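The armada-370-xp hunk replaces an open-coded `bitmap_find_next_zero_area()` + `bitmap_set()` pair with `bitmap_find_free_region()`, which hands out naturally aligned power-of-two regions — hence the `order_base_2(nr_irqs)` rounding (multi-MSI blocks must be power-of-two sized and aligned). A sketch of that rounding, as an illustrative reimplementation rather than the kernel's `log2.h` version:

```c
/* order_base_2(n): the smallest order such that (1UL << order) >= n.
 * A request for 3 MSI vectors therefore reserves a 4-vector region. */
static unsigned int order_base_2_sketch(unsigned long n)
{
	unsigned int order = 0;

	while ((1UL << order) < n)
		order++;
	return order;
}
```

Note the return-value fix in the same hunk: `.alloc` now returns 0 on success instead of leaking the hardware IRQ number to the caller.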
+2 -2
drivers/irqchip/irq-aspeed-scu-ic.c
··· 76 76 generic_handle_domain_irq(scu_ic->irq_domain, 77 77 bit - scu_ic->irq_shift); 78 78 79 - regmap_update_bits(scu_ic->scu, scu_ic->reg, mask, 80 - BIT(bit + ASPEED_SCU_IC_STATUS_SHIFT)); 79 + regmap_write_bits(scu_ic->scu, scu_ic->reg, mask, 80 + BIT(bit + ASPEED_SCU_IC_STATUS_SHIFT)); 81 81 } 82 82 83 83 chained_irq_exit(chip, desc);
+1
drivers/irqchip/irq-bcm7120-l2.c
··· 238 238 } 239 239 240 240 data->num_parent_irqs = platform_irq_count(pdev); 241 + put_device(&pdev->dev); 241 242 if (data->num_parent_irqs <= 0) { 242 243 pr_err("invalid number of parent interrupts\n"); 243 244 ret = -ENOMEM;
+1 -1
drivers/irqchip/irq-gic-v3-its.c
··· 742 742 743 743 its_fixup_cmd(cmd); 744 744 745 - return NULL; 745 + return desc->its_invall_cmd.col; 746 746 } 747 747 748 748 static struct its_vpe *its_build_vinvall_cmd(struct its_node *its,
+2 -2
drivers/irqchip/irq-mips-gic.c
··· 9 9 10 10 #define pr_fmt(fmt) "irq-mips-gic: " fmt 11 11 12 + #include <linux/bitfield.h> 12 13 #include <linux/bitmap.h> 13 14 #include <linux/clocksource.h> 14 15 #include <linux/cpuhotplug.h> ··· 736 735 mips_gic_base = ioremap(gic_base, gic_len); 737 736 738 737 gicconfig = read_gic_config(); 739 - gic_shared_intrs = gicconfig & GIC_CONFIG_NUMINTERRUPTS; 740 - gic_shared_intrs >>= __ffs(GIC_CONFIG_NUMINTERRUPTS); 738 + gic_shared_intrs = FIELD_GET(GIC_CONFIG_NUMINTERRUPTS, gicconfig); 741 739 gic_shared_intrs = (gic_shared_intrs + 1) * 8; 742 740 743 741 if (cpu_has_veic) {
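The mips-gic hunk converts a manual mask-and-shift into `FIELD_GET(GIC_CONFIG_NUMINTERRUPTS, gicconfig)`. For a contiguous bitmask, `FIELD_GET()` extracts the masked bits and shifts them down to bit zero; a sketch of that behaviour (illustrative only, not the `<linux/bitfield.h>` macro, which resolves the shift at compile time):

```c
/* For a contiguous mask, extracting a field is (reg & mask) shifted down
 * by the mask's lowest set bit; dividing by (mask & -mask) is one
 * branch-free way to express that shift. */
static unsigned long field_get_sketch(unsigned long mask, unsigned long reg)
{
	return (reg & mask) / (mask & -mask);
}
```

The driver then derives the shared interrupt count from the extracted field as `(val + 1) * 8`, exactly as the replaced two-line version did.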
+1 -1
drivers/irqchip/irq-nvic.c
··· 26 26 27 27 #define NVIC_ISER 0x000 28 28 #define NVIC_ICER 0x080 29 - #define NVIC_IPR 0x300 29 + #define NVIC_IPR 0x400 30 30 31 31 #define NVIC_MAX_BANKS 16 32 32 /*
+1 -1
drivers/md/dm-integrity.c
··· 1963 1963 n_sectors -= bv.bv_len >> SECTOR_SHIFT; 1964 1964 bio_advance_iter(bio, &bio->bi_iter, bv.bv_len); 1965 1965 retry_kmap: 1966 - mem = bvec_kmap_local(&bv); 1966 + mem = kmap_local_page(bv.bv_page); 1967 1967 if (likely(dio->op == REQ_OP_WRITE)) 1968 1968 flush_dcache_page(bv.bv_page); 1969 1969
+3 -1
drivers/md/md.c
··· 2189 2189 2190 2190 if (!num_sectors || num_sectors > max_sectors) 2191 2191 num_sectors = max_sectors; 2192 + rdev->sb_start = sb_start; 2192 2193 } 2193 2194 sb = page_address(rdev->sb_page); 2194 2195 sb->data_size = cpu_to_le64(num_sectors); ··· 6271 6270 spin_lock(&mddev->lock); 6272 6271 mddev->pers = NULL; 6273 6272 spin_unlock(&mddev->lock); 6274 - pers->free(mddev, mddev->private); 6273 + if (mddev->private) 6274 + pers->free(mddev, mddev->private); 6275 6275 mddev->private = NULL; 6276 6276 if (pers->sync_request && mddev->to_remove == NULL) 6277 6277 mddev->to_remove = &md_redundancy_group;
+1 -1
drivers/md/persistent-data/dm-btree-remove.c
··· 423 423 424 424 memcpy(n, dm_block_data(child), 425 425 dm_bm_block_size(dm_tm_get_bm(info->tm))); 426 - dm_tm_unlock(info->tm, child); 427 426 428 427 dm_tm_dec(info->tm, dm_block_location(child)); 428 + dm_tm_unlock(info->tm, child); 429 429 return 0; 430 430 } 431 431
-4
drivers/misc/cardreader/rtsx_pcr.c
··· 1803 1803 mutex_lock(&pcr->pcr_mutex); 1804 1804 rtsx_pci_power_off(pcr, HOST_ENTER_S3); 1805 1805 1806 - free_irq(pcr->irq, (void *)pcr); 1807 - 1808 1806 mutex_unlock(&pcr->pcr_mutex); 1809 1807 1810 1808 pcr->is_runtime_suspended = true; ··· 1823 1825 mutex_lock(&pcr->pcr_mutex); 1824 1826 1825 1827 rtsx_pci_write_register(pcr, HOST_SLEEP_STATE, 0x03, 0x00); 1826 - rtsx_pci_acquire_irq(pcr); 1827 - synchronize_irq(pcr->irq); 1828 1828 1829 1829 if (pcr->ops->fetch_vendor_settings) 1830 1830 pcr->ops->fetch_vendor_settings(pcr);
+18 -20
drivers/misc/eeprom/at25.c
··· 376 376 static int at25_probe(struct spi_device *spi) 377 377 { 378 378 struct at25_data *at25 = NULL; 379 - struct spi_eeprom chip; 380 379 int err; 381 380 int sr; 382 381 u8 id[FM25_ID_LEN]; ··· 388 389 if (match && !strcmp(match->compatible, "cypress,fm25")) 389 390 is_fram = 1; 390 391 392 + at25 = devm_kzalloc(&spi->dev, sizeof(struct at25_data), GFP_KERNEL); 393 + if (!at25) 394 + return -ENOMEM; 395 + 391 396 /* Chip description */ 392 - if (!spi->dev.platform_data) { 393 - if (!is_fram) { 394 - err = at25_fw_to_chip(&spi->dev, &chip); 395 - if (err) 396 - return err; 397 - } 398 - } else 399 - chip = *(struct spi_eeprom *)spi->dev.platform_data; 397 + if (spi->dev.platform_data) { 398 + memcpy(&at25->chip, spi->dev.platform_data, sizeof(at25->chip)); 399 + } else if (!is_fram) { 400 + err = at25_fw_to_chip(&spi->dev, &at25->chip); 401 + if (err) 402 + return err; 403 + } 400 404 401 405 /* Ping the chip ... the status register is pretty portable, 402 406 * unlike probing manufacturer IDs. We do expect that system ··· 411 409 return -ENXIO; 412 410 } 413 411 414 - at25 = devm_kzalloc(&spi->dev, sizeof(struct at25_data), GFP_KERNEL); 415 - if (!at25) 416 - return -ENOMEM; 417 - 418 412 mutex_init(&at25->lock); 419 - at25->chip = chip; 420 413 at25->spi = spi; 421 414 spi_set_drvdata(spi, at25); 422 415 ··· 428 431 dev_err(&spi->dev, "Error: unsupported size (id %02x)\n", id[7]); 429 432 return -ENODEV; 430 433 } 431 - chip.byte_len = int_pow(2, id[7] - 0x21 + 4) * 1024; 434 + at25->chip.byte_len = int_pow(2, id[7] - 0x21 + 4) * 1024; 432 435 433 436 if (at25->chip.byte_len > 64 * 1024) 434 437 at25->chip.flags |= EE_ADDR3; ··· 461 464 at25->nvmem_config.type = is_fram ? 
NVMEM_TYPE_FRAM : NVMEM_TYPE_EEPROM; 462 465 at25->nvmem_config.name = dev_name(&spi->dev); 463 466 at25->nvmem_config.dev = &spi->dev; 464 - at25->nvmem_config.read_only = chip.flags & EE_READONLY; 467 + at25->nvmem_config.read_only = at25->chip.flags & EE_READONLY; 465 468 at25->nvmem_config.root_only = true; 466 469 at25->nvmem_config.owner = THIS_MODULE; 467 470 at25->nvmem_config.compat = true; ··· 471 474 at25->nvmem_config.priv = at25; 472 475 at25->nvmem_config.stride = 1; 473 476 at25->nvmem_config.word_size = 1; 474 - at25->nvmem_config.size = chip.byte_len; 477 + at25->nvmem_config.size = at25->chip.byte_len; 475 478 476 479 at25->nvmem = devm_nvmem_register(&spi->dev, &at25->nvmem_config); 477 480 if (IS_ERR(at25->nvmem)) 478 481 return PTR_ERR(at25->nvmem); 479 482 480 483 dev_info(&spi->dev, "%d %s %s %s%s, pagesize %u\n", 481 - (chip.byte_len < 1024) ? chip.byte_len : (chip.byte_len / 1024), 482 - (chip.byte_len < 1024) ? "Byte" : "KByte", 484 + (at25->chip.byte_len < 1024) ? 485 + at25->chip.byte_len : (at25->chip.byte_len / 1024), 486 + (at25->chip.byte_len < 1024) ? "Byte" : "KByte", 483 487 at25->chip.name, is_fram ? "fram" : "eeprom", 484 - (chip.flags & EE_READONLY) ? " (readonly)" : "", 488 + (at25->chip.flags & EE_READONLY) ? " (readonly)" : "", 485 489 at25->chip.page_size); 486 490 return 0; 487 491 }
+6 -4
drivers/misc/fastrpc.c
··· 719 719 static u64 fastrpc_get_payload_size(struct fastrpc_invoke_ctx *ctx, int metalen) 720 720 { 721 721 u64 size = 0; 722 - int i; 722 + int oix; 723 723 724 724 size = ALIGN(metalen, FASTRPC_ALIGN); 725 - for (i = 0; i < ctx->nscalars; i++) { 725 + for (oix = 0; oix < ctx->nbufs; oix++) { 726 + int i = ctx->olaps[oix].raix; 727 + 726 728 if (ctx->args[i].fd == 0 || ctx->args[i].fd == -1) { 727 729 728 - if (ctx->olaps[i].offset == 0) 730 + if (ctx->olaps[oix].offset == 0) 729 731 size = ALIGN(size, FASTRPC_ALIGN); 730 732 731 - size += (ctx->olaps[i].mend - ctx->olaps[i].mstart); 733 + size += (ctx->olaps[oix].mend - ctx->olaps[oix].mstart); 732 734 } 733 735 } 734 736
+3 -1
drivers/mmc/host/mtk-sd.c
··· 2291 2291 sdr_set_field(host->base + PAD_DS_TUNE, 2292 2292 PAD_DS_TUNE_DLY1, i); 2293 2293 ret = mmc_get_ext_csd(card, &ext_csd); 2294 - if (!ret) 2294 + if (!ret) { 2295 2295 result_dly1 |= (1 << i); 2296 + kfree(ext_csd); 2297 + } 2296 2298 } 2297 2299 host->hs400_tuning = false; 2298 2300
+1 -1
drivers/mmc/host/renesas_sdhi_core.c
··· 673 673 674 674 /* Issue CMD19 twice for each tap */ 675 675 for (i = 0; i < 2 * priv->tap_num; i++) { 676 - int cmd_error; 676 + int cmd_error = 0; 677 677 678 678 /* Set sampling clock position */ 679 679 sd_scc_write32(host, priv, SH_MOBILE_SDHI_SCC_TAPSET, i % priv->tap_num);
+4
drivers/net/dsa/mv88e6xxx/chip.c
··· 768 768 if ((!mv88e6xxx_port_ppu_updates(chip, port) || 769 769 mode == MLO_AN_FIXED) && ops->port_sync_link) 770 770 err = ops->port_sync_link(chip, port, mode, false); 771 + 772 + if (!err && ops->port_set_speed_duplex) 773 + err = ops->port_set_speed_duplex(chip, port, SPEED_UNFORCED, 774 + DUPLEX_UNFORCED); 771 775 mv88e6xxx_reg_unlock(chip); 772 776 773 777 if (err)
+2 -2
drivers/net/dsa/mv88e6xxx/port.c
··· 283 283 if (err) 284 284 return err; 285 285 286 - if (speed) 286 + if (speed != SPEED_UNFORCED) 287 287 dev_dbg(chip->dev, "p%d: Speed set to %d Mbps\n", port, speed); 288 288 else 289 289 dev_dbg(chip->dev, "p%d: Speed unforced\n", port); ··· 516 516 if (err) 517 517 return err; 518 518 519 - if (speed) 519 + if (speed != SPEED_UNFORCED) 520 520 dev_dbg(chip->dev, "p%d: Speed set to %d Mbps\n", port, speed); 521 521 else 522 522 dev_dbg(chip->dev, "p%d: Speed unforced\n", port);
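The port.c hunk fixes a truthiness test: `SPEED_UNFORCED` is a nonzero sentinel, so `if (speed)` logged "Speed set" even when the speed was being unforced. A minimal sketch of the pattern (the sentinel's value below is assumed for illustration; the real definition lives in the mv88e6xxx headers):

```c
#define SPEED_UNFORCED_SKETCH (-2)	/* assumed sentinel value */

/* Comparing against the sentinel, not against zero, is what
 * distinguishes "unforced" from a real (possibly zero) speed. */
static const char *speed_msg(int speed)
{
	return (speed != SPEED_UNFORCED_SKETCH) ? "Speed set" : "Speed unforced";
}
```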
+4 -1
drivers/net/ethernet/broadcom/bcmsysport.c
··· 1309 1309 struct bcm_sysport_priv *priv = netdev_priv(dev); 1310 1310 struct device *kdev = &priv->pdev->dev; 1311 1311 struct bcm_sysport_tx_ring *ring; 1312 + unsigned long flags, desc_flags; 1312 1313 struct bcm_sysport_cb *cb; 1313 1314 struct netdev_queue *txq; 1314 1315 u32 len_status, addr_lo; 1315 1316 unsigned int skb_len; 1316 - unsigned long flags; 1317 1317 dma_addr_t mapping; 1318 1318 u16 queue; 1319 1319 int ret; ··· 1373 1373 ring->desc_count--; 1374 1374 1375 1375 /* Ports are latched, so write upper address first */ 1376 + spin_lock_irqsave(&priv->desc_lock, desc_flags); 1376 1377 tdma_writel(priv, len_status, TDMA_WRITE_PORT_HI(ring->index)); 1377 1378 tdma_writel(priv, addr_lo, TDMA_WRITE_PORT_LO(ring->index)); 1379 + spin_unlock_irqrestore(&priv->desc_lock, desc_flags); 1378 1380 1379 1381 /* Check ring space and update SW control flow */ 1380 1382 if (ring->desc_count == 0) ··· 2015 2013 } 2016 2014 2017 2015 /* Initialize both hardware and software ring */ 2016 + spin_lock_init(&priv->desc_lock); 2018 2017 for (i = 0; i < dev->num_tx_queues; i++) { 2019 2018 ret = bcm_sysport_init_tx_ring(priv, i); 2020 2019 if (ret) {
+1
drivers/net/ethernet/broadcom/bcmsysport.h
··· 711 711 int wol_irq; 712 712 713 713 /* Transmit rings */ 714 + spinlock_t desc_lock; 714 715 struct bcm_sysport_tx_ring *tx_rings; 715 716 716 717 /* Receive queue */
+2 -2
drivers/net/ethernet/broadcom/genet/bcmmii.c
··· 589 589 * Internal or external PHY with MDIO access 590 590 */ 591 591 phydev = phy_attach(priv->dev, phy_name, pd->phy_interface); 592 - if (!phydev) { 592 + if (IS_ERR(phydev)) { 593 593 dev_err(kdev, "failed to register PHY device\n"); 594 - return -ENODEV; 594 + return PTR_ERR(phydev); 595 595 } 596 596 } else { 597 597 /*
+2
drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.h
··· 388 388 __u64 bytes_per_cdan; 389 389 }; 390 390 391 + #define DPAA2_ETH_CH_STATS 7 392 + 391 393 /* Maximum number of queues associated with a DPNI */ 392 394 #define DPAA2_ETH_MAX_TCS 8 393 395 #define DPAA2_ETH_MAX_RX_QUEUES_PER_TC 16
+1 -1
drivers/net/ethernet/freescale/dpaa2/dpaa2-ethtool.c
··· 278 278 /* Per-channel stats */ 279 279 for (k = 0; k < priv->num_channels; k++) { 280 280 ch_stats = &priv->channel[k]->stats; 281 - for (j = 0; j < sizeof(*ch_stats) / sizeof(__u64) - 1; j++) 281 + for (j = 0; j < DPAA2_ETH_CH_STATS; j++) 282 282 *((__u64 *)data + i + j) += *((__u64 *)ch_stats + j); 283 283 } 284 284 i += j;
+2
drivers/net/ethernet/hisilicon/hns3/hnae3.h
··· 839 839 840 840 u8 netdev_flags; 841 841 struct dentry *hnae3_dbgfs; 842 + /* protects concurrent contention between debugfs commands */ 843 + struct mutex dbgfs_lock; 842 844 843 845 /* Network interface message level enabled bits */ 844 846 u32 msg_enable;
+14 -6
drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.c
··· 1226 1226 if (ret) 1227 1227 return ret; 1228 1228 1229 + mutex_lock(&handle->dbgfs_lock); 1229 1230 save_buf = &hns3_dbg_cmd[index].buf; 1230 1231 1231 1232 if (!test_bit(HNS3_NIC_STATE_INITED, &priv->state) || ··· 1239 1238 read_buf = *save_buf; 1240 1239 } else { 1241 1240 read_buf = kvzalloc(hns3_dbg_cmd[index].buf_len, GFP_KERNEL); 1242 - if (!read_buf) 1243 - return -ENOMEM; 1241 + if (!read_buf) { 1242 + ret = -ENOMEM; 1243 + goto out; 1244 + } 1244 1245 1245 1246 /* save the buffer addr until the last read operation */ 1246 1247 *save_buf = read_buf; 1247 - } 1248 1248 1249 - /* get data ready for the first time to read */ 1250 - if (!*ppos) { 1249 + /* get data ready for the first time to read */ 1251 1250 ret = hns3_dbg_read_cmd(dbg_data, hns3_dbg_cmd[index].cmd, 1252 1251 read_buf, hns3_dbg_cmd[index].buf_len); 1253 1252 if (ret) ··· 1256 1255 1257 1256 size = simple_read_from_buffer(buffer, count, ppos, read_buf, 1258 1257 strlen(read_buf)); 1259 - if (size > 0) 1258 + if (size > 0) { 1259 + mutex_unlock(&handle->dbgfs_lock); 1260 1260 return size; 1261 + } 1261 1262 1262 1263 out: 1263 1264 /* free the buffer for the last read operation */ ··· 1268 1265 *save_buf = NULL; 1269 1266 } 1270 1267 1268 + mutex_unlock(&handle->dbgfs_lock); 1271 1269 return ret; 1272 1270 } 1273 1271 ··· 1341 1337 debugfs_create_dir(hns3_dbg_dentry[i].name, 1342 1338 handle->hnae3_dbgfs); 1343 1339 1340 + mutex_init(&handle->dbgfs_lock); 1341 + 1344 1342 for (i = 0; i < ARRAY_SIZE(hns3_dbg_cmd); i++) { 1345 1343 if ((hns3_dbg_cmd[i].cmd == HNAE3_DBG_CMD_TM_NODES && 1346 1344 ae_dev->dev_version <= HNAE3_DEVICE_VERSION_V2) || ··· 1369 1363 return 0; 1370 1364 1371 1365 out: 1366 + mutex_destroy(&handle->dbgfs_lock); 1372 1367 debugfs_remove_recursive(handle->hnae3_dbgfs); 1373 1368 handle->hnae3_dbgfs = NULL; 1374 1369 return ret; ··· 1385 1378 hns3_dbg_cmd[i].buf = NULL; 1386 1379 } 1387 1380 1381 + mutex_destroy(&handle->dbgfs_lock); 1388 1382 
debugfs_remove_recursive(handle->hnae3_dbgfs); 1389 1383 handle->hnae3_dbgfs = NULL; 1390 1384 }
+2 -1
drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_mbx.c
··· 114 114 115 115 memcpy(&req->msg, send_msg, sizeof(struct hclge_vf_to_pf_msg)); 116 116 117 - trace_hclge_vf_mbx_send(hdev, req); 117 + if (test_bit(HCLGEVF_STATE_NIC_REGISTERED, &hdev->state)) 118 + trace_hclge_vf_mbx_send(hdev, req); 118 119 119 120 /* synchronous send */ 120 121 if (need_resp) {
+2 -3
drivers/net/ethernet/intel/iavf/iavf_main.c
··· 2046 2046 } 2047 2047 adapter->aq_required = 0; 2048 2048 adapter->current_op = VIRTCHNL_OP_UNKNOWN; 2049 + mutex_unlock(&adapter->crit_lock); 2049 2050 queue_delayed_work(iavf_wq, 2050 2051 &adapter->watchdog_task, 2051 2052 msecs_to_jiffies(10)); ··· 2077 2076 iavf_detect_recover_hung(&adapter->vsi); 2078 2077 break; 2079 2078 case __IAVF_REMOVE: 2080 - mutex_unlock(&adapter->crit_lock); 2081 - return; 2082 2079 default: 2080 + mutex_unlock(&adapter->crit_lock); 2083 2081 return; 2084 2082 } 2085 2083 2086 2084 /* check for hw reset */ 2087 2085 reg_val = rd32(hw, IAVF_VF_ARQLEN1) & IAVF_VF_ARQLEN1_ARQENABLE_MASK; 2088 2086 if (!reg_val) { 2089 - iavf_change_state(adapter, __IAVF_RESETTING); 2090 2087 adapter->flags |= IAVF_FLAG_RESET_PENDING; 2091 2088 adapter->aq_required = 0; 2092 2089 adapter->current_op = VIRTCHNL_OP_UNKNOWN;
+5 -8
drivers/net/ethernet/intel/ice/ice_ptp.c
··· 705 705 scaled_ppm = -scaled_ppm; 706 706 } 707 707 708 - while ((u64)scaled_ppm > div_u64(U64_MAX, incval)) { 708 + while ((u64)scaled_ppm > div64_u64(U64_MAX, incval)) { 709 709 /* handle overflow by scaling down the scaled_ppm and 710 710 * the divisor, losing some precision 711 711 */ ··· 1536 1536 if (err) 1537 1537 continue; 1538 1538 1539 - /* Check if the timestamp is valid */ 1540 - if (!(raw_tstamp & ICE_PTP_TS_VALID)) 1539 + /* Check if the timestamp is invalid or stale */ 1540 + if (!(raw_tstamp & ICE_PTP_TS_VALID) || 1541 + raw_tstamp == tx->tstamps[idx].cached_tstamp) 1541 1542 continue; 1542 - 1543 - /* clear the timestamp register, so that it won't show valid 1544 - * again when re-used. 1545 - */ 1546 - ice_clear_phy_tstamp(hw, tx->quad, phy_idx); 1547 1543 1548 1544 /* The timestamp is valid, so we'll go ahead and clear this 1549 1545 * index and then send the timestamp up to the stack. 1550 1546 */ 1551 1547 spin_lock(&tx->lock); 1548 + tx->tstamps[idx].cached_tstamp = raw_tstamp; 1552 1549 clear_bit(idx, tx->in_use); 1553 1550 skb = tx->tstamps[idx].skb; 1554 1551 tx->tstamps[idx].skb = NULL;
+6
drivers/net/ethernet/intel/ice/ice_ptp.h
··· 55 55 * struct ice_tx_tstamp - Tracking for a single Tx timestamp 56 56 * @skb: pointer to the SKB for this timestamp request 57 57 * @start: jiffies when the timestamp was first requested 58 + * @cached_tstamp: last read timestamp 58 59 * 59 60 * This structure tracks a single timestamp request. The SKB pointer is 60 61 * provided when initiating a request. The start time is used to ensure that 61 62 * we discard old requests that were not fulfilled within a 2 second time 62 63 * window. 64 + * Timestamp values in the PHY are read only and do not get cleared except at 65 + * hardware reset or when a new timestamp value is captured. The cached_tstamp 66 + * field is used to detect the case where a new timestamp has not yet been 67 + * captured, ensuring that we avoid sending stale timestamp data to the stack. 63 68 */ 64 69 struct ice_tx_tstamp { 65 70 struct sk_buff *skb; 66 71 unsigned long start; 72 + u64 cached_tstamp; 67 73 }; 68 74 69 75 /**
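The `cached_tstamp` field documented above drives the freshness check in the ice_ptp.c hunk: a raw PHY value is delivered only when its valid bit is set and it differs from the last value seen for that index, since the read-only PHY registers keep the old capture. A sketch of that predicate (the valid bit's position is an assumption here, taken as bit 0):

```c
#define TS_VALID_BIT 0x1ULL	/* assumed position of the valid flag */

/* A timestamp is fresh if it is marked valid and is not a stale repeat
 * of the previously cached capture for the same index. */
static int tstamp_is_fresh(unsigned long long raw, unsigned long long cached)
{
	return (raw & TS_VALID_BIT) && raw != cached;
}
```

This replaces the earlier approach of clearing the PHY register after each read, which the hunk in ice_ptp.c removes.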
+14 -14
drivers/net/ethernet/intel/igb/igb_main.c
··· 7648 7648 struct vf_mac_filter *entry = NULL; 7649 7649 int ret = 0; 7650 7650 7651 + if ((vf_data->flags & IGB_VF_FLAG_PF_SET_MAC) && 7652 + !vf_data->trusted) { 7653 + dev_warn(&pdev->dev, 7654 + "VF %d requested MAC filter but is administratively denied\n", 7655 + vf); 7656 + return -EINVAL; 7657 + } 7658 + if (!is_valid_ether_addr(addr)) { 7659 + dev_warn(&pdev->dev, 7660 + "VF %d attempted to set invalid MAC filter\n", 7661 + vf); 7662 + return -EINVAL; 7663 + } 7664 + 7651 7665 switch (info) { 7652 7666 case E1000_VF_MAC_FILTER_CLR: 7653 7667 /* remove all unicast MAC filters related to the current VF */ ··· 7675 7661 } 7676 7662 break; 7677 7663 case E1000_VF_MAC_FILTER_ADD: 7678 - if ((vf_data->flags & IGB_VF_FLAG_PF_SET_MAC) && 7679 - !vf_data->trusted) { 7680 - dev_warn(&pdev->dev, 7681 - "VF %d requested MAC filter but is administratively denied\n", 7682 - vf); 7683 - return -EINVAL; 7684 - } 7685 - if (!is_valid_ether_addr(addr)) { 7686 - dev_warn(&pdev->dev, 7687 - "VF %d attempted to set invalid MAC filter\n", 7688 - vf); 7689 - return -EINVAL; 7690 - } 7691 - 7692 7664 /* try to find empty slot in the list */ 7693 7665 list_for_each(pos, &adapter->vf_macs.l) { 7694 7666 entry = list_entry(pos, struct vf_mac_filter, l);
+1
drivers/net/ethernet/intel/igbvf/netdev.c
··· 2859 2859 return 0; 2860 2860 2861 2861 err_hw_init: 2862 + netif_napi_del(&adapter->rx_ring->napi); 2862 2863 kfree(adapter->tx_ring); 2863 2864 kfree(adapter->rx_ring); 2864 2865 err_sw_init:
+1 -1
drivers/net/ethernet/intel/igc/igc_i225.c
··· 636 636 ltrv = rd32(IGC_LTRMAXV); 637 637 if (ltr_max != (ltrv & IGC_LTRMAXV_LTRV_MASK)) { 638 638 ltrv = IGC_LTRMAXV_LSNP_REQ | ltr_max | 639 - (scale_min << IGC_LTRMAXV_SCALE_SHIFT); 639 + (scale_max << IGC_LTRMAXV_SCALE_SHIFT); 640 640 wr32(IGC_LTRMAXV, ltrv); 641 641 } 642 642 }
+4
drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
··· 5531 5531 if (!speed && hw->mac.ops.get_link_capabilities) { 5532 5532 ret = hw->mac.ops.get_link_capabilities(hw, &speed, 5533 5533 &autoneg); 5534 + /* remove NBASE-T speeds from default autonegotiation 5535 + * to accommodate broken network switches in the field 5536 + * which cannot cope with advertised NBASE-T speeds 5537 + */ 5534 5538 speed &= ~(IXGBE_LINK_SPEED_5GB_FULL | 5535 5539 IXGBE_LINK_SPEED_2_5GB_FULL); 5536 5540 }
+3
drivers/net/ethernet/intel/ixgbe/ixgbe_x550.c
··· 3405 3405 /* flush pending Tx transactions */ 3406 3406 ixgbe_clear_tx_pending(hw); 3407 3407 3408 + /* set MDIO speed before talking to the PHY in case it's the 1st time */ 3409 + ixgbe_set_mdio_speed(hw); 3410 + 3408 3411 /* PHY ops must be identified and initialized prior to reset */ 3409 3412 status = hw->phy.ops.init(hw); 3410 3413 if (status == IXGBE_ERR_SFP_NOT_SUPPORTED ||
+2 -1
drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
··· 8505 8505 u8 mac_profile; 8506 8506 int err; 8507 8507 8508 - if (!mlxsw_sp_rif_mac_profile_is_shared(rif)) 8508 + if (!mlxsw_sp_rif_mac_profile_is_shared(rif) && 8509 + !mlxsw_sp_rif_mac_profile_find(mlxsw_sp, new_mac)) 8509 8510 return mlxsw_sp_rif_mac_profile_edit(rif, new_mac); 8510 8511 8511 8512 err = mlxsw_sp_rif_mac_profile_get(mlxsw_sp, new_mac,
+3
drivers/net/ethernet/sfc/ef100_nic.c
··· 609 609 ef100_common_stat_mask(mask); 610 610 ef100_ethtool_stat_mask(mask); 611 611 612 + if (!mc_stats) 613 + return 0; 614 + 612 615 efx_nic_copy_stats(efx, mc_stats); 613 616 efx_nic_update_stats(ef100_stat_desc, EF100_STAT_COUNT, mask, 614 617 stats, mc_stats, false);
+3 -1
drivers/net/ethernet/stmicro/stmmac/dwmac-rk.c
··· 33 33 void (*set_rgmii_speed)(struct rk_priv_data *bsp_priv, int speed); 34 34 void (*set_rmii_speed)(struct rk_priv_data *bsp_priv, int speed); 35 35 void (*integrated_phy_powerup)(struct rk_priv_data *bsp_priv); 36 + bool regs_valid; 36 37 u32 regs[]; 37 38 }; 38 39 ··· 1093 1092 .set_to_rmii = rk3568_set_to_rmii, 1094 1093 .set_rgmii_speed = rk3568_set_gmac_speed, 1095 1094 .set_rmii_speed = rk3568_set_gmac_speed, 1095 + .regs_valid = true, 1096 1096 .regs = { 1097 1097 0xfe2a0000, /* gmac0 */ 1098 1098 0xfe010000, /* gmac1 */ ··· 1385 1383 * to be distinguished. 1386 1384 */ 1387 1385 res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 1388 - if (res) { 1386 + if (res && ops->regs_valid) { 1389 1387 int i = 0; 1390 1388 1391 1389 while (ops->regs[i]) {
+17
drivers/net/ethernet/stmicro/stmmac/stmmac.h
··· 171 171 int is_l4; 172 172 }; 173 173 174 + /* Rx Frame Steering */ 175 + enum stmmac_rfs_type { 176 + STMMAC_RFS_T_VLAN, 177 + STMMAC_RFS_T_MAX, 178 + }; 179 + 180 + struct stmmac_rfs_entry { 181 + unsigned long cookie; 182 + int in_use; 183 + int type; 184 + int tc; 185 + }; 186 + 174 187 struct stmmac_priv { 175 188 /* Frequently used values are kept adjacent for cache effect */ 176 189 u32 tx_coal_frames[MTL_MAX_TX_QUEUES]; ··· 301 288 struct stmmac_tc_entry *tc_entries; 302 289 unsigned int flow_entries_max; 303 290 struct stmmac_flow_entry *flow_entries; 291 + unsigned int rfs_entries_max[STMMAC_RFS_T_MAX]; 292 + unsigned int rfs_entries_cnt[STMMAC_RFS_T_MAX]; 293 + unsigned int rfs_entries_total; 294 + struct stmmac_rfs_entry *rfs_entries; 304 295 305 296 /* Pulse Per Second output */ 306 297 struct stmmac_pps_cfg pps[STMMAC_PPS_MAX];
+12 -4
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
··· 1451 1451 { 1452 1452 struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue]; 1453 1453 struct stmmac_rx_buffer *buf = &rx_q->buf_pool[i]; 1454 + gfp_t gfp = (GFP_ATOMIC | __GFP_NOWARN); 1455 + 1456 + if (priv->dma_cap.addr64 <= 32) 1457 + gfp |= GFP_DMA32; 1454 1458 1455 1459 if (!buf->page) { 1456 - buf->page = page_pool_dev_alloc_pages(rx_q->page_pool); 1460 + buf->page = page_pool_alloc_pages(rx_q->page_pool, gfp); 1457 1461 if (!buf->page) 1458 1462 return -ENOMEM; 1459 1463 buf->page_offset = stmmac_rx_offset(priv); 1460 1464 } 1461 1465 1462 1466 if (priv->sph && !buf->sec_page) { 1463 - buf->sec_page = page_pool_dev_alloc_pages(rx_q->page_pool); 1467 + buf->sec_page = page_pool_alloc_pages(rx_q->page_pool, gfp); 1464 1468 if (!buf->sec_page) 1465 1469 return -ENOMEM; 1466 1470 ··· 4479 4475 struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue]; 4480 4476 int dirty = stmmac_rx_dirty(priv, queue); 4481 4477 unsigned int entry = rx_q->dirty_rx; 4478 + gfp_t gfp = (GFP_ATOMIC | __GFP_NOWARN); 4479 + 4480 + if (priv->dma_cap.addr64 <= 32) 4481 + gfp |= GFP_DMA32; 4482 4482 4483 4483 while (dirty-- > 0) { 4484 4484 struct stmmac_rx_buffer *buf = &rx_q->buf_pool[entry]; ··· 4495 4487 p = rx_q->dma_rx + entry; 4496 4488 4497 4489 if (!buf->page) { 4498 - buf->page = page_pool_dev_alloc_pages(rx_q->page_pool); 4490 + buf->page = page_pool_alloc_pages(rx_q->page_pool, gfp); 4499 4491 if (!buf->page) 4500 4492 break; 4501 4493 } 4502 4494 4503 4495 if (priv->sph && !buf->sec_page) { 4504 - buf->sec_page = page_pool_dev_alloc_pages(rx_q->page_pool); 4496 + buf->sec_page = page_pool_alloc_pages(rx_q->page_pool, gfp); 4505 4497 if (!buf->sec_page) 4506 4498 break; 4507 4499
+73 -13
drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c
··· 232 232 } 233 233 } 234 234 235 + static int tc_rfs_init(struct stmmac_priv *priv) 236 + { 237 + int i; 238 + 239 + priv->rfs_entries_max[STMMAC_RFS_T_VLAN] = 8; 240 + 241 + for (i = 0; i < STMMAC_RFS_T_MAX; i++) 242 + priv->rfs_entries_total += priv->rfs_entries_max[i]; 243 + 244 + priv->rfs_entries = devm_kcalloc(priv->device, 245 + priv->rfs_entries_total, 246 + sizeof(*priv->rfs_entries), 247 + GFP_KERNEL); 248 + if (!priv->rfs_entries) 249 + return -ENOMEM; 250 + 251 + dev_info(priv->device, "Enabled RFS Flow TC (entries=%d)\n", 252 + priv->rfs_entries_total); 253 + 254 + return 0; 255 + } 256 + 235 257 static int tc_init(struct stmmac_priv *priv) 236 258 { 237 259 struct dma_features *dma_cap = &priv->dma_cap; 238 260 unsigned int count; 239 - int i; 261 + int ret, i; 240 262 241 263 if (dma_cap->l3l4fnum) { 242 264 priv->flow_entries_max = dma_cap->l3l4fnum; ··· 272 250 for (i = 0; i < priv->flow_entries_max; i++) 273 251 priv->flow_entries[i].idx = i; 274 252 275 - dev_info(priv->device, "Enabled Flow TC (entries=%d)\n", 253 + dev_info(priv->device, "Enabled L3L4 Flow TC (entries=%d)\n", 276 254 priv->flow_entries_max); 277 255 } 256 + 257 + ret = tc_rfs_init(priv); 258 + if (ret) 259 + return -ENOMEM; 278 260 279 261 if (!priv->plat->fpe_cfg) { 280 262 priv->plat->fpe_cfg = devm_kzalloc(priv->device, ··· 633 607 return ret; 634 608 } 635 609 610 + static struct stmmac_rfs_entry *tc_find_rfs(struct stmmac_priv *priv, 611 + struct flow_cls_offload *cls, 612 + bool get_free) 613 + { 614 + int i; 615 + 616 + for (i = 0; i < priv->rfs_entries_total; i++) { 617 + struct stmmac_rfs_entry *entry = &priv->rfs_entries[i]; 618 + 619 + if (entry->cookie == cls->cookie) 620 + return entry; 621 + if (get_free && entry->in_use == false) 622 + return entry; 623 + } 624 + 625 + return NULL; 626 + } 627 + 636 628 #define VLAN_PRIO_FULL_MASK (0x07) 637 629 638 630 static int tc_add_vlan_flow(struct stmmac_priv *priv, 639 631 struct flow_cls_offload *cls) 640 632 { 633 + struct stmmac_rfs_entry *entry = tc_find_rfs(priv, cls, false); 641 634 struct flow_rule *rule = flow_cls_offload_flow_rule(cls); 642 635 struct flow_dissector *dissector = rule->match.dissector; 643 636 int tc = tc_classid_to_hwtc(priv->dev, cls->classid); 644 637 struct flow_match_vlan match; 638 + 639 + if (!entry) { 640 + entry = tc_find_rfs(priv, cls, true); 641 + if (!entry) 642 + return -ENOENT; 643 + } 644 + 645 + if (priv->rfs_entries_cnt[STMMAC_RFS_T_VLAN] >= 646 + priv->rfs_entries_max[STMMAC_RFS_T_VLAN]) 647 + return -ENOENT; 645 648 646 649 /* Nothing to do here */ 647 650 if (!dissector_uses_key(dissector, FLOW_DISSECTOR_KEY_VLAN)) ··· 693 638 694 639 prio = BIT(match.key->vlan_priority); 695 640 stmmac_rx_queue_prio(priv, priv->hw, prio, tc); 641 + 642 + entry->in_use = true; 643 + entry->cookie = cls->cookie; 644 + entry->tc = tc; 645 + entry->type = STMMAC_RFS_T_VLAN; 646 + priv->rfs_entries_cnt[STMMAC_RFS_T_VLAN]++; 696 647 } 697 648 698 649 return 0; ··· 707 646 static int tc_del_vlan_flow(struct stmmac_priv *priv, 708 647 struct flow_cls_offload *cls) 709 648 { 710 - struct flow_rule *rule = flow_cls_offload_flow_rule(cls); 711 - struct flow_dissector *dissector = rule->match.dissector; 712 - int tc = tc_classid_to_hwtc(priv->dev, cls->classid); 649 + struct stmmac_rfs_entry *entry = tc_find_rfs(priv, cls, false); 713 650 714 - /* Nothing to do here */ 715 - if (!dissector_uses_key(dissector, FLOW_DISSECTOR_KEY_VLAN)) 716 - return -EINVAL; 651 + if (!entry || !entry->in_use || entry->type != STMMAC_RFS_T_VLAN) 652 + return -ENOENT; 717 653 718 - if (tc < 0) { 719 - netdev_err(priv->dev, "Invalid traffic class\n"); 720 - return -EINVAL; 721 - } 654 + stmmac_rx_queue_prio(priv, priv->hw, 0, entry->tc); 722 655 723 - stmmac_rx_queue_prio(priv, priv->hw, 0, tc); 656 + entry->in_use = false; 657 + entry->cookie = 0; 658 + entry->tc = 0; 659 + entry->type = 0; 660 + 661 + priv->rfs_entries_cnt[STMMAC_RFS_T_VLAN]--; 724 662 725 663 return 0; 726 664 }
+20 -9
drivers/net/ethernet/ti/am65-cpsw-nuss.c
··· 1844 1844 if (ret < 0) { 1845 1845 dev_err(dev, "%pOF error reading port_id %d\n", 1846 1846 port_np, ret); 1847 - return ret; 1847 + goto of_node_put; 1848 1848 } 1849 1849 1850 1850 if (!port_id || port_id > common->port_num) { 1851 1851 dev_err(dev, "%pOF has invalid port_id %u %s\n", 1852 1852 port_np, port_id, port_np->name); 1853 - return -EINVAL; 1853 + ret = -EINVAL; 1854 + goto of_node_put; 1854 1855 } 1855 1856 1856 1857 port = am65_common_get_port(common, port_id); ··· 1867 1866 (AM65_CPSW_NU_FRAM_PORT_OFFSET * (port_id - 1)); 1868 1867 1869 1868 port->slave.mac_sl = cpsw_sl_get("am65", dev, port->port_base); 1870 - if (IS_ERR(port->slave.mac_sl)) 1871 - return PTR_ERR(port->slave.mac_sl); 1869 + if (IS_ERR(port->slave.mac_sl)) { 1870 + ret = PTR_ERR(port->slave.mac_sl); 1871 + goto of_node_put; 1872 + } 1872 1873 1873 1874 port->disabled = !of_device_is_available(port_np); 1874 1875 if (port->disabled) { ··· 1883 1880 ret = PTR_ERR(port->slave.ifphy); 1884 1881 dev_err(dev, "%pOF error retrieving port phy: %d\n", 1885 1882 port_np, ret); 1886 - return ret; 1883 + goto of_node_put; 1887 1884 } 1888 1885 1889 1886 port->slave.mac_only = ··· 1892 1889 /* get phy/link info */ 1893 1890 if (of_phy_is_fixed_link(port_np)) { 1894 1891 ret = of_phy_register_fixed_link(port_np); 1895 - if (ret) 1896 - return dev_err_probe(dev, ret, 1892 + if (ret) { 1893 + ret = dev_err_probe(dev, ret, 1897 1894 "failed to register fixed-link phy %pOF\n", 1898 1895 port_np); 1896 + goto of_node_put; 1897 + } 1899 1898 port->slave.phy_node = of_node_get(port_np); 1900 1899 } else { 1901 1900 port->slave.phy_node = ··· 1907 1902 if (!port->slave.phy_node) { 1908 1903 dev_err(dev, 1909 1904 "slave[%d] no phy found\n", port_id); 1910 - return -ENODEV; 1905 + ret = -ENODEV; 1906 + goto of_node_put; 1911 1907 } 1912 1908 1913 1909 ret = of_get_phy_mode(port_np, &port->slave.phy_if); 1914 1910 if (ret) { 1915 1911 dev_err(dev, "%pOF read phy-mode err %d\n", 1916 1912 port_np, ret); 1917 - return ret; 1913 + goto of_node_put; 1918 1914 } 1919 1915 1920 1916 ret = of_get_mac_address(port_np, port->slave.mac_addr); ··· 1938 1932 } 1939 1933 1940 1934 return 0; 1935 + 1936 + of_node_put: 1937 + of_node_put(port_np); 1938 + of_node_put(node); 1939 + return ret; 1941 1940 } 1942 1941 1943 1942 static void am65_cpsw_pcpu_stats_free(void *data)
+1
drivers/net/netdevsim/bpf.c
··· 514 514 goto err_free; 515 515 key = nmap->entry[i].key; 516 516 *key = i; 517 + memset(nmap->entry[i].value, 0, offmap->map.value_size); 517 518 } 518 519 } 519 520
+4 -1
drivers/net/netdevsim/ethtool.c
··· 81 81 { 82 82 struct netdevsim *ns = netdev_priv(dev); 83 83 84 - memcpy(&ns->ethtool.ring, ring, sizeof(ns->ethtool.ring)); 84 + ns->ethtool.ring.rx_pending = ring->rx_pending; 85 + ns->ethtool.ring.rx_jumbo_pending = ring->rx_jumbo_pending; 86 + ns->ethtool.ring.rx_mini_pending = ring->rx_mini_pending; 87 + ns->ethtool.ring.tx_pending = ring->tx_pending; 85 88 return 0; 86 89 } 87 90
+3
drivers/net/phy/mdio_bus.c
··· 462 462 463 463 if (addr == mdiodev->addr) { 464 464 device_set_node(dev, of_fwnode_handle(child)); 465 + /* The refcount on "child" is passed to the mdio 466 + * device. Do _not_ use of_node_put(child) here. 467 + */ 465 468 return; 466 469 } 467 470 }
+6
drivers/net/usb/lan78xx.c
··· 76 76 #define LAN7801_USB_PRODUCT_ID (0x7801) 77 77 #define LAN78XX_EEPROM_MAGIC (0x78A5) 78 78 #define LAN78XX_OTP_MAGIC (0x78F3) 79 + #define AT29M2AF_USB_VENDOR_ID (0x07C9) 80 + #define AT29M2AF_USB_PRODUCT_ID (0x0012) 79 81 80 82 #define MII_READ 1 81 83 #define MII_WRITE 0 ··· 5061 5059 { 5062 5060 /* LAN7801 USB Gigabit Ethernet Device */ 5063 5061 USB_DEVICE(LAN78XX_USB_VENDOR_ID, LAN7801_USB_PRODUCT_ID), 5062 + }, 5063 + { 5064 + /* ATM2-AF USB Gigabit Ethernet Device */ 5065 + USB_DEVICE(AT29M2AF_USB_VENDOR_ID, AT29M2AF_USB_PRODUCT_ID), 5064 5066 }, 5065 5067 {}, 5066 5068 };
+1
drivers/net/usb/qmi_wwan.c
··· 1358 1358 {QMI_QUIRK_SET_DTR(0x1bc7, 0x1040, 2)}, /* Telit LE922A */ 1359 1359 {QMI_QUIRK_SET_DTR(0x1bc7, 0x1050, 2)}, /* Telit FN980 */ 1360 1360 {QMI_QUIRK_SET_DTR(0x1bc7, 0x1060, 2)}, /* Telit LN920 */ 1361 + {QMI_QUIRK_SET_DTR(0x1bc7, 0x1070, 2)}, /* Telit FN990 */ 1361 1362 {QMI_FIXED_INTF(0x1bc7, 0x1100, 3)}, /* Telit ME910 */ 1362 1363 {QMI_FIXED_INTF(0x1bc7, 0x1101, 3)}, /* Telit ME910 dual modem */ 1363 1364 {QMI_FIXED_INTF(0x1bc7, 0x1200, 5)}, /* Telit LE920 */
+3 -6
drivers/net/virtio_net.c
··· 733 733 pr_debug("%s: rx error: len %u exceeds max size %d\n", 734 734 dev->name, len, GOOD_PACKET_LEN); 735 735 dev->stats.rx_length_errors++; 736 - goto err_len; 736 + goto err; 737 737 } 738 738 739 739 if (likely(!vi->xdp_enabled)) { ··· 825 825 826 826 skip_xdp: 827 827 skb = build_skb(buf, buflen); 828 - if (!skb) { 829 - put_page(page); 828 + if (!skb) 830 829 goto err; 831 - } 832 830 skb_reserve(skb, headroom - delta); 833 831 skb_put(skb, len); 834 832 if (!xdp_prog) { ··· 837 839 if (metasize) 838 840 skb_metadata_set(skb, metasize); 839 841 840 - err: 841 842 return skb; 842 843 843 844 err_xdp: 844 845 rcu_read_unlock(); 845 846 stats->xdp_drops++; 846 - err_len: 847 + err: 847 848 stats->drops++; 848 849 put_page(page); 849 850 xdp_xmit:
+5 -1
drivers/net/wireless/ath/ath11k/mhi.c
··· 533 533 ret = mhi_pm_suspend(ab_pci->mhi_ctrl); 534 534 break; 535 535 case ATH11K_MHI_RESUME: 536 - ret = mhi_pm_resume(ab_pci->mhi_ctrl); 536 + /* Do force MHI resume as some devices like QCA6390, WCN6855 537 + * are not in M3 state but they are functional. So just ignore 538 + * the MHI state while resuming. 539 + */ 540 + ret = mhi_pm_resume_force(ab_pci->mhi_ctrl); 537 541 break; 538 542 case ATH11K_MHI_TRIGGER_RDDM: 539 543 ret = mhi_force_rddm_mode(ab_pci->mhi_ctrl);
+9 -5
drivers/net/wireless/broadcom/brcm80211/Kconfig
··· 7 7 depends on MAC80211 8 8 depends on BCMA_POSSIBLE 9 9 select BCMA 10 - select NEW_LEDS if BCMA_DRIVER_GPIO 11 - select LEDS_CLASS if BCMA_DRIVER_GPIO 12 10 select BRCMUTIL 13 11 select FW_LOADER 14 12 select CORDIC 15 13 help 16 14 This module adds support for PCIe wireless adapters based on Broadcom 17 - IEEE802.11n SoftMAC chipsets. It also has WLAN led support, which will 18 - be available if you select BCMA_DRIVER_GPIO. If you choose to build a 19 - module, the driver will be called brcmsmac.ko. 15 + IEEE802.11n SoftMAC chipsets. If you choose to build a module, the 16 + driver will be called brcmsmac.ko. 17 + 18 + config BRCMSMAC_LEDS 19 + def_bool BRCMSMAC && BCMA_DRIVER_GPIO && MAC80211_LEDS 20 + help 21 + The brcmsmac LED support depends on the presence of the 22 + BCMA_DRIVER_GPIO driver, and it only works if LED support 23 + is enabled and reachable from the driver module. 20 24 21 25 source "drivers/net/wireless/broadcom/brcm80211/brcmfmac/Kconfig" 22 26
+1 -1
drivers/net/wireless/broadcom/brcm80211/brcmsmac/Makefile
··· 42 42 brcms_trace_events.o \ 43 43 debug.o 44 44 45 - brcmsmac-$(CONFIG_BCMA_DRIVER_GPIO) += led.o 45 + brcmsmac-$(CONFIG_BRCMSMAC_LEDS) += led.o 46 46 47 47 obj-$(CONFIG_BRCMSMAC) += brcmsmac.o
+1 -1
drivers/net/wireless/broadcom/brcm80211/brcmsmac/led.h
··· 24 24 struct gpio_desc *gpiod; 25 25 }; 26 26 27 - #ifdef CONFIG_BCMA_DRIVER_GPIO 27 + #ifdef CONFIG_BRCMSMAC_LEDS 28 28 void brcms_led_unregister(struct brcms_info *wl); 29 29 int brcms_led_register(struct brcms_info *wl); 30 30 #else
+2 -2
drivers/net/wireless/intel/iwlegacy/Kconfig
··· 2 2 config IWLEGACY 3 3 tristate 4 4 select FW_LOADER 5 - select NEW_LEDS 6 - select LEDS_CLASS 7 5 select LEDS_TRIGGERS 8 6 select MAC80211_LEDS 9 7 10 8 config IWL4965 11 9 tristate "Intel Wireless WiFi 4965AGN (iwl4965)" 12 10 depends on PCI && MAC80211 11 + depends on LEDS_CLASS=y || LEDS_CLASS=MAC80211 13 12 select IWLEGACY 14 13 help 15 14 This option enables support for ··· 37 38 config IWL3945 38 39 tristate "Intel PRO/Wireless 3945ABG/BG Network Connection (iwl3945)" 39 40 depends on PCI && MAC80211 41 + depends on LEDS_CLASS=y || LEDS_CLASS=MAC80211 40 42 select IWLEGACY 41 43 help 42 44 Select to build the driver supporting the:
+1 -1
drivers/net/wireless/intel/iwlwifi/Kconfig
··· 47 47 48 48 config IWLWIFI_LEDS 49 49 bool 50 - depends on LEDS_CLASS=y || LEDS_CLASS=IWLWIFI 50 + depends on LEDS_CLASS=y || LEDS_CLASS=MAC80211 51 51 depends on IWLMVM || IWLDVM 52 52 select LEDS_TRIGGERS 53 53 select MAC80211_LEDS
+3 -2
drivers/net/wireless/intel/iwlwifi/mvm/tx.c
··· 269 269 u8 rate_plcp; 270 270 u32 rate_flags = 0; 271 271 bool is_cck; 272 - struct iwl_mvm_sta *mvmsta = iwl_mvm_sta_from_mac80211(sta); 273 272 274 273 /* info->control is only relevant for non HW rate control */ 275 274 if (!ieee80211_hw_check(mvm->hw, HAS_RATE_CONTROL)) { 275 + struct iwl_mvm_sta *mvmsta = iwl_mvm_sta_from_mac80211(sta); 276 + 276 277 /* HT rate doesn't make sense for a non data frame */ 277 278 WARN_ONCE(info->control.rates[0].flags & IEEE80211_TX_RC_MCS && 278 279 !ieee80211_is_data(fc), 279 280 "Got a HT rate (flags:0x%x/mcs:%d/fc:0x%x/state:%d) for a non data frame\n", 280 281 info->control.rates[0].flags, 281 282 info->control.rates[0].idx, 282 - le16_to_cpu(fc), mvmsta->sta_state); 283 + le16_to_cpu(fc), sta ? mvmsta->sta_state : -1); 283 284 284 285 rate_idx = info->control.rates[0].idx; 285 286 }
+1 -1
drivers/net/wireless/mediatek/mt76/Makefile
··· 34 34 obj-$(CONFIG_MT7603E) += mt7603/ 35 35 obj-$(CONFIG_MT7615_COMMON) += mt7615/ 36 36 obj-$(CONFIG_MT7915E) += mt7915/ 37 - obj-$(CONFIG_MT7921E) += mt7921/ 37 + obj-$(CONFIG_MT7921_COMMON) += mt7921/
+18 -5
drivers/nvme/host/core.c
··· 666 666 struct request *rq) 667 667 { 668 668 if (ctrl->state != NVME_CTRL_DELETING_NOIO && 669 + ctrl->state != NVME_CTRL_DELETING && 669 670 ctrl->state != NVME_CTRL_DEAD && 670 671 !test_bit(NVME_CTRL_FAILFAST_EXPIRED, &ctrl->flags) && 671 672 !blk_noretry_request(rq) && !(rq->cmd_flags & REQ_NVME_MPATH)) ··· 1750 1749 */ 1751 1750 if (WARN_ON_ONCE(!(id->flbas & NVME_NS_FLBAS_META_EXT))) 1752 1751 return -EINVAL; 1753 - if (ctrl->max_integrity_segments) 1754 - ns->features |= 1755 - (NVME_NS_METADATA_SUPPORTED | NVME_NS_EXT_LBAS); 1752 + 1753 + ns->features |= NVME_NS_EXT_LBAS; 1754 + 1755 + /* 1756 + * The current fabrics transport drivers support namespace 1757 + * metadata formats only if nvme_ns_has_pi() returns true. 1758 + * Suppress support for all other formats so the namespace will 1759 + * have a 0 capacity and not be usable through the block stack. 1760 + * 1761 + * Note, this check will need to be modified if any drivers 1762 + * gain the ability to use other metadata formats. 1763 + */ 1764 + if (ctrl->max_integrity_segments && nvme_ns_has_pi(ns)) 1765 + ns->features |= NVME_NS_METADATA_SUPPORTED; 1756 1766 } else { 1757 1767 /* 1758 1768 * For PCIe controllers, we can't easily remap the separate ··· 2708 2696 2709 2697 if (tmp->cntlid == ctrl->cntlid) { 2710 2698 dev_err(ctrl->device, 2711 - "Duplicate cntlid %u with %s, rejecting\n", 2712 - ctrl->cntlid, dev_name(tmp->device)); 2699 + "Duplicate cntlid %u with %s, subsys %s, rejecting\n", 2700 + ctrl->cntlid, dev_name(tmp->device), 2701 + subsys->subnqn); 2713 2702 return false; 2714 2703 } 2715 2704
+2 -1
drivers/nvme/host/multipath.c
··· 866 866 } 867 867 if (ana_log_size > ctrl->ana_log_size) { 868 868 nvme_mpath_stop(ctrl); 869 - kfree(ctrl->ana_log_buf); 869 + nvme_mpath_uninit(ctrl); 870 870 ctrl->ana_log_buf = kmalloc(ana_log_size, GFP_KERNEL); 871 871 if (!ctrl->ana_log_buf) 872 872 return -ENOMEM; ··· 886 886 { 887 887 kfree(ctrl->ana_log_buf); 888 888 ctrl->ana_log_buf = NULL; 889 + ctrl->ana_log_size = 0; 889 890 }
+1 -1
drivers/nvme/host/nvme.h
··· 709 709 return true; 710 710 if (ctrl->ops->flags & NVME_F_FABRICS && 711 711 ctrl->state == NVME_CTRL_DELETING) 712 - return true; 712 + return queue_live; 713 713 return __nvme_check_ready(ctrl, rq, queue_live); 714 714 } 715 715 int nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd,
+4 -1
drivers/nvme/host/zns.c
··· 166 166 zone.len = ns->zsze; 167 167 zone.capacity = nvme_lba_to_sect(ns, le64_to_cpu(entry->zcap)); 168 168 zone.start = nvme_lba_to_sect(ns, le64_to_cpu(entry->zslba)); 169 - zone.wp = nvme_lba_to_sect(ns, le64_to_cpu(entry->wp)); 169 + if (zone.cond == BLK_ZONE_COND_FULL) 170 + zone.wp = zone.start + zone.len; 171 + else 172 + zone.wp = nvme_lba_to_sect(ns, le64_to_cpu(entry->wp)); 170 173 171 174 return cb(&zone, idx, data); 172 175 }
+8 -1
drivers/nvme/target/tcp.c
··· 922 922 size_t data_len = le32_to_cpu(req->cmd->common.dptr.sgl.length); 923 923 int ret; 924 924 925 - if (!nvme_is_write(cmd->req.cmd) || 925 + /* 926 + * This command has not been processed yet, hence we are trying to 927 + * figure out if there is still pending data left to receive. If 928 + * we don't, we can simply prepare for the next pdu and bail out, 929 + * otherwise we will need to prepare a buffer and receive the 930 + * stale data before continuing forward. 931 + */ 932 + if (!nvme_is_write(cmd->req.cmd) || !data_len || 926 933 data_len > cmd->req.port->inline_data_size) { 927 934 nvmet_prepare_receive_pdu(queue); 928 935 return;
+25 -2
drivers/of/irq.c
··· 76 76 } 77 77 EXPORT_SYMBOL_GPL(of_irq_find_parent); 78 78 79 + /* 80 + * These interrupt controllers abuse interrupt-map for unspeakable 81 + * reasons and rely on the core code to *ignore* it (the drivers do 82 + * their own parsing of the property). 83 + * 84 + * If you think of adding to the list for something *new*, think 85 + * again. There is a high chance that you will be sent back to the 86 + * drawing board. 87 + */ 88 + static const char * const of_irq_imap_abusers[] = { 89 + "CBEA,platform-spider-pic", 90 + "sti,platform-spider-pic", 91 + "realtek,rtl-intc", 92 + "fsl,ls1021a-extirq", 93 + "fsl,ls1043a-extirq", 94 + "fsl,ls1088a-extirq", 95 + "renesas,rza1-irqc", 96 + NULL, 97 + }; 98 + 79 99 /** 80 100 * of_irq_parse_raw - Low level interrupt tree parsing 81 101 * @addr: address specifier (start of "reg" property of the device) in be32 format ··· 179 159 /* 180 160 * Now check if cursor is an interrupt-controller and 181 161 * if it is then we are done, unless there is an 182 - * interrupt-map which takes precedence. 162 + * interrupt-map which takes precedence except on one 163 + * of these broken platforms that want to parse 164 + * interrupt-map themselves for $reason. 183 165 */ 184 166 bool intc = of_property_read_bool(ipar, "interrupt-controller"); 185 167 186 168 imap = of_get_property(ipar, "interrupt-map", &imaplen); 187 - if (imap == NULL && intc) { 169 + if (intc && 170 + (!imap || of_device_compatible_match(ipar, of_irq_imap_abusers))) { 188 171 pr_debug(" -> got it !\n"); 189 172 return 0; 190 173 }
+2 -2
drivers/pci/controller/Kconfig
··· 332 332 If unsure, say Y if you have an Apple Silicon system. 333 333 334 334 config PCIE_MT7621 335 - tristate "MediaTek MT7621 PCIe Controller" 336 - depends on (RALINK && SOC_MT7621) || (MIPS && COMPILE_TEST) 335 + bool "MediaTek MT7621 PCIe Controller" 336 + depends on SOC_MT7621 || (MIPS && COMPILE_TEST) 337 337 select PHY_MT7621_PCI 338 338 default SOC_MT7621 339 339 help
-9
drivers/pci/controller/pci-aardvark.c
··· 32 32 #define PCIE_CORE_DEV_ID_REG 0x0 33 33 #define PCIE_CORE_CMD_STATUS_REG 0x4 34 34 #define PCIE_CORE_DEV_REV_REG 0x8 35 - #define PCIE_CORE_EXP_ROM_BAR_REG 0x30 36 35 #define PCIE_CORE_PCIEXP_CAP 0xc0 37 36 #define PCIE_CORE_ERR_CAPCTL_REG 0x118 38 37 #define PCIE_CORE_ERR_CAPCTL_ECRC_CHK_TX BIT(5) ··· 773 774 *value = advk_readl(pcie, PCIE_CORE_CMD_STATUS_REG); 774 775 return PCI_BRIDGE_EMUL_HANDLED; 775 776 776 - case PCI_ROM_ADDRESS1: 777 - *value = advk_readl(pcie, PCIE_CORE_EXP_ROM_BAR_REG); 778 - return PCI_BRIDGE_EMUL_HANDLED; 779 - 780 777 case PCI_INTERRUPT_LINE: { 781 778 /* 782 779 * From the whole 32bit register we support reading from HW only ··· 803 808 switch (reg) { 804 809 case PCI_COMMAND: 805 810 advk_writel(pcie, new, PCIE_CORE_CMD_STATUS_REG); 806 - break; 807 - 808 - case PCI_ROM_ADDRESS1: 809 - advk_writel(pcie, new, PCIE_CORE_EXP_ROM_BAR_REG); 810 811 break; 811 812 812 813 case PCI_INTERRUPT_LINE:
+12 -2
drivers/pci/controller/pcie-apple.c
··· 516 516 int ret, i; 517 517 518 518 reset = gpiod_get_from_of_node(np, "reset-gpios", 0, 519 - GPIOD_OUT_LOW, "#PERST"); 519 + GPIOD_OUT_LOW, "PERST#"); 520 520 if (IS_ERR(reset)) 521 521 return PTR_ERR(reset); 522 522 ··· 539 539 540 540 rmw_set(PORT_APPCLK_EN, port->base + PORT_APPCLK); 541 541 542 + /* Assert PERST# before setting up the clock */ 543 + gpiod_set_value(reset, 1); 544 + 542 545 ret = apple_pcie_setup_refclk(pcie, port); 543 546 if (ret < 0) 544 547 return ret; 545 548 549 + /* The minimal Tperst-clk value is 100us (PCIe CEM r5.0, 2.9.2) */ 550 + usleep_range(100, 200); 551 + 552 + /* Deassert PERST# */ 546 553 rmw_set(PORT_PERST_OFF, port->base + PORT_PERST); 547 - gpiod_set_value(reset, 1); 554 + gpiod_set_value(reset, 0); 555 + 556 + /* Wait for 100ms after PERST# deassertion (PCIe r5.0, 6.6.1) */ 557 + msleep(100); 548 558 549 559 ret = readl_relaxed_poll_timeout(port->base + PORT_STATUS, stat, 550 560 stat & PORT_STATUS_READY, 100, 250000);
+2 -2
drivers/phy/hisilicon/phy-hi3670-pcie.c
··· 757 757 return PTR_ERR(phy->sysctrl); 758 758 759 759 phy->pmctrl = syscon_regmap_lookup_by_compatible("hisilicon,hi3670-pmctrl"); 760 - if (IS_ERR(phy->sysctrl)) 761 - return PTR_ERR(phy->sysctrl); 760 + if (IS_ERR(phy->pmctrl)) 761 + return PTR_ERR(phy->pmctrl); 762 762 763 763 /* clocks */ 764 764 phy->phy_ref_clk = devm_clk_get(dev, "phy_ref");
+2 -2
drivers/phy/marvell/phy-mvebu-cp110-utmi.c
··· 82 82 * struct mvebu_cp110_utmi - PHY driver data 83 83 * 84 84 * @regs: PHY registers 85 - * @syscom: Regmap with system controller registers 85 + * @syscon: Regmap with system controller registers 86 86 * @dev: device driver handle 87 - * @caps: PHY capabilities 87 + * @ops: phy ops 88 88 */ 89 89 struct mvebu_cp110_utmi { 90 90 void __iomem *regs;
+14 -12
drivers/phy/qualcomm/phy-qcom-ipq806x-usb.c
··· 127 127 }; 128 128 129 129 /** 130 - * Write register and read back masked value to confirm it is written 130 + * usb_phy_write_readback() - Write register and read back masked value to 131 + * confirm it is written 131 132 * 132 - * @base - QCOM DWC3 PHY base virtual address. 133 - * @offset - register offset. 134 - * @mask - register bitmask specifying what should be updated 135 - * @val - value to write. 133 + * @phy_dwc3: QCOM DWC3 phy context 134 + * @offset: register offset. 135 + * @mask: register bitmask specifying what should be updated 136 + * @val: value to write. 136 137 */ 137 138 static inline void usb_phy_write_readback(struct usb_phy *phy_dwc3, 138 139 u32 offset, ··· 172 171 } 173 172 174 173 /** 175 - * Write SSPHY register 174 + * usb_ss_write_phycreg() - Write SSPHY register 176 175 * 177 - * @base - QCOM DWC3 PHY base virtual address. 178 - * @addr - SSPHY address to write. 179 - * @val - value to write. 176 + * @phy_dwc3: QCOM DWC3 phy context 177 + * @addr: SSPHY address to write. 178 + * @val: value to write. 180 179 */ 181 180 static int usb_ss_write_phycreg(struct usb_phy *phy_dwc3, 182 181 u32 addr, u32 val) ··· 210 209 } 211 210 212 211 /** 213 - * Read SSPHY register. 212 + * usb_ss_read_phycreg() - Read SSPHY register. 214 213 * 215 - * @base - QCOM DWC3 PHY base virtual address. 216 - * @addr - SSPHY address to read. 214 + * @phy_dwc3: QCOM DWC3 phy context 215 + * @addr: SSPHY address to read. 216 + * @val: pointer in which read is store. 217 217 */ 218 218 static int usb_ss_read_phycreg(struct usb_phy *phy_dwc3, 219 219 u32 addr, u32 *val)
+3
drivers/phy/qualcomm/phy-qcom-qmp.c
··· 2973 2973 * @qmp: QMP phy to which this lane belongs 2974 2974 * @lane_rst: lane's reset controller 2975 2975 * @mode: current PHY mode 2976 + * @dp_aux_cfg: Display port aux config 2977 + * @dp_opts: Display port optional config 2978 + * @dp_clks: Display port clocks 2976 2979 */ 2977 2980 struct qmp_phy { 2978 2981 struct phy *phy;
+1 -1
drivers/phy/qualcomm/phy-qcom-usb-hsic.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0-only 2 - /** 2 + /* 3 3 * Copyright (C) 2016 Linaro Ltd 4 4 */ 5 5 #include <linux/module.h>
+1 -1
drivers/phy/st/phy-stm32-usbphyc.c
··· 478 478 if (!of_property_read_bool(np, "st,no-lsfs-fb-cap")) 479 479 usbphyc_phy->tune |= LFSCAPEN; 480 480 481 - if (of_property_read_bool(np, "st,slow-hs-slew-rate")) 481 + if (of_property_read_bool(np, "st,decrease-hs-slew-rate")) 482 482 usbphyc_phy->tune |= HSDRVSLEW; 483 483 484 484 ret = of_property_read_u32(np, "st,tune-hs-dc-level", &val);
+1 -1
drivers/phy/ti/phy-am654-serdes.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 - /** 2 + /* 3 3 * PCIe SERDES driver for AM654x SoC 4 4 * 5 5 * Copyright (C) 2018 - 2019 Texas Instruments Incorporated - http://www.ti.com/
+1 -1
drivers/phy/ti/phy-j721e-wiz.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 - /** 2 + /* 3 3 * Wrapper driver for SERDES used in J721E 4 4 * 5 5 * Copyright (C) 2019 Texas Instruments Incorporated - http://www.ti.com/
+3 -3
drivers/phy/ti/phy-omap-usb2.c
··· 89 89 } 90 90 91 91 /** 92 - * omap_usb2_set_comparator - links the comparator present in the system with 93 - * this phy 94 - * @comparator - the companion phy(comparator) for this phy 92 + * omap_usb2_set_comparator() - links the comparator present in the system with this phy 93 + * 94 + * @comparator: the companion phy(comparator) for this phy 95 95 * 96 96 * The phy companion driver should call this API passing the phy_companion 97 97 * filled with set_vbus and start_srp to be used by usb phy.
+1 -1
drivers/phy/ti/phy-tusb1210.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0-only 2 - /** 2 + /* 3 3 * tusb1210.c - TUSB1210 USB ULPI PHY driver 4 4 * 5 5 * Copyright (C) 2015 Intel Corporation
+1 -8
drivers/reset/tegra/reset-bpmp.c
··· 20 20 struct tegra_bpmp *bpmp = to_tegra_bpmp(rstc); 21 21 struct mrq_reset_request request; 22 22 struct tegra_bpmp_message msg; 23 - int err; 24 23 25 24 memset(&request, 0, sizeof(request)); 26 25 request.cmd = command; ··· 30 31 msg.tx.data = &request; 31 32 msg.tx.size = sizeof(request); 32 33 33 - err = tegra_bpmp_transfer(bpmp, &msg); 34 - if (err) 35 - return err; 36 - if (msg.rx.ret) 37 - return -EINVAL; 38 - 39 - return 0; 34 + return tegra_bpmp_transfer(bpmp, &msg); 40 35 } 41 36 42 37 static int tegra_bpmp_reset_module(struct reset_controller_dev *rstc,
+2 -4
drivers/scsi/pm8001/pm8001_init.c
··· 282 282 if (rc) { 283 283 pm8001_dbg(pm8001_ha, FAIL, 284 284 "pm8001_setup_irq failed [ret: %d]\n", rc); 285 - goto err_out_shost; 285 + goto err_out; 286 286 } 287 287 /* Request Interrupt */ 288 288 rc = pm8001_request_irq(pm8001_ha); 289 289 if (rc) 290 - goto err_out_shost; 290 + goto err_out; 291 291 292 292 count = pm8001_ha->max_q_num; 293 293 /* Queues are chosen based on the number of cores/msix availability */ ··· 423 423 pm8001_tag_init(pm8001_ha); 424 424 return 0; 425 425 426 - err_out_shost: 427 - scsi_remove_host(pm8001_ha->shost); 428 426 err_out_nodev: 429 427 for (i = 0; i < pm8001_ha->max_memcnt; i++) { 430 428 if (pm8001_ha->memoryMap.region[i].virt_ptr != NULL) {
+15 -22
drivers/scsi/qedi/qedi_fw.c
··· 732 732 { 733 733 struct qedi_work_map *work, *work_tmp; 734 734 u32 proto_itt = cqe->itid; 735 - itt_t protoitt = 0; 736 735 int found = 0; 737 736 struct qedi_cmd *qedi_cmd = NULL; 738 737 u32 iscsi_cid; ··· 811 812 return; 812 813 813 814 check_cleanup_reqs: 814 - if (qedi_conn->cmd_cleanup_req > 0) { 815 - QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_TID, 815 + if (atomic_inc_return(&qedi_conn->cmd_cleanup_cmpl) == 816 + qedi_conn->cmd_cleanup_req) { 817 + QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_SCSI_TM, 816 818 "Freeing tid=0x%x for cid=0x%x\n", 817 819 cqe->itid, qedi_conn->iscsi_conn_id); 818 - qedi_conn->cmd_cleanup_cmpl++; 819 820 wake_up(&qedi_conn->wait_queue); 820 - } else { 821 - QEDI_ERR(&qedi->dbg_ctx, 822 - "Delayed or untracked cleanup response, itt=0x%x, tid=0x%x, cid=0x%x\n", 823 - protoitt, cqe->itid, qedi_conn->iscsi_conn_id); 824 821 } 825 822 } ··· 1158 1163 } 1159 1164 1160 1165 qedi_conn->cmd_cleanup_req = 0; 1161 - qedi_conn->cmd_cleanup_cmpl = 0; 1166 + atomic_set(&qedi_conn->cmd_cleanup_cmpl, 0); 1162 1167 1163 1168 QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_SCSI_TM, 1164 1169 "active_cmd_count=%d, cid=0x%x, in_recovery=%d, lun_reset=%d\n", ··· 1210 1215 qedi_conn->iscsi_conn_id); 1211 1216 1212 1217 rval = wait_event_interruptible_timeout(qedi_conn->wait_queue, 1213 - ((qedi_conn->cmd_cleanup_req == 1214 - qedi_conn->cmd_cleanup_cmpl) || 1215 - test_bit(QEDI_IN_RECOVERY, 1216 - &qedi->flags)), 1217 - 5 * HZ); 1218 + (qedi_conn->cmd_cleanup_req == 1219 + atomic_read(&qedi_conn->cmd_cleanup_cmpl)) || 1220 + test_bit(QEDI_IN_RECOVERY, &qedi->flags), 1221 + 5 * HZ); 1218 1222 if (rval) { 1219 1223 QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_SCSI_TM, 1220 1224 "i/o cmd_cleanup_req=%d, equal to cmd_cleanup_cmpl=%d, cid=0x%x\n", 1221 1225 qedi_conn->cmd_cleanup_req, 1222 1226 atomic_read(&qedi_conn->cmd_cleanup_cmpl), 1223 1227 qedi_conn->iscsi_conn_id); 1224 1228 1225 1229 return 0; ··· 1227 1233 QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_SCSI_TM, 1228 1234 "i/o cmd_cleanup_req=%d, not equal to cmd_cleanup_cmpl=%d, cid=0x%x\n", 1229 1235 qedi_conn->cmd_cleanup_req, 1230 - qedi_conn->cmd_cleanup_cmpl, 1236 + atomic_read(&qedi_conn->cmd_cleanup_cmpl), 1231 1237 qedi_conn->iscsi_conn_id); 1232 1238 1233 1239 iscsi_host_for_each_session(qedi->shost, ··· 1236 1242 1237 1243 /* Enable IOs for all other sessions except current.*/ 1238 1244 if (!wait_event_interruptible_timeout(qedi_conn->wait_queue, 1239 - (qedi_conn->cmd_cleanup_req == 1240 - qedi_conn->cmd_cleanup_cmpl) || 1241 - test_bit(QEDI_IN_RECOVERY, 1242 - &qedi->flags), 1243 - 5 * HZ)) { 1245 + (qedi_conn->cmd_cleanup_req == 1246 + atomic_read(&qedi_conn->cmd_cleanup_cmpl)) || 1247 + test_bit(QEDI_IN_RECOVERY, &qedi->flags), 1248 + 5 * HZ)) { 1244 1249 iscsi_host_for_each_session(qedi->shost, 1245 1250 qedi_mark_device_available); 1246 1251 return -1; ··· 1259 1266 1260 1267 qedi_ep = qedi_conn->ep; 1261 1268 qedi_conn->cmd_cleanup_req = 0; 1262 - qedi_conn->cmd_cleanup_cmpl = 0; 1269 + atomic_set(&qedi_conn->cmd_cleanup_cmpl, 0); 1263 1270 1264 1271 if (!qedi_ep) { 1265 1272 QEDI_WARN(&qedi->dbg_ctx,
+1 -1
drivers/scsi/qedi/qedi_iscsi.c
··· 412 412 qedi_conn->iscsi_conn_id = qedi_ep->iscsi_cid; 413 413 qedi_conn->fw_cid = qedi_ep->fw_cid; 414 414 qedi_conn->cmd_cleanup_req = 0; 415 - qedi_conn->cmd_cleanup_cmpl = 0; 415 + atomic_set(&qedi_conn->cmd_cleanup_cmpl, 0); 416 416 417 417 if (qedi_bind_conn_to_iscsi_cid(qedi, qedi_conn)) { 418 418 rc = -EINVAL;
+1 -1
drivers/scsi/qedi/qedi_iscsi.h
··· 155 155 spinlock_t list_lock; /* internal conn lock */ 156 156 u32 active_cmd_count; 157 157 u32 cmd_cleanup_req; 158 - u32 cmd_cleanup_cmpl; 158 + atomic_t cmd_cleanup_cmpl; 159 159 160 160 u32 iscsi_conn_id; 161 161 int itt;
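The qedi hunks above replace a plain `u32` completion counter with an `atomic_t` so that concurrent cleanup completions cannot lose increments, and so the completion that brings the count up to `cmd_cleanup_req` is the one that issues the wake-up. A hypothetical userspace model of that pattern, using C11 atomics in place of the kernel's `atomic_inc_return()` (names here are illustrative, not the driver's):

```c
#include <assert.h>
#include <stdatomic.h>

/* Illustrative stand-in for the qedi cleanup accounting: each completion
 * atomically bumps the counter; the read-modify-write return value lets
 * exactly one caller observe that it was the last outstanding cleanup. */
struct cleanup_state {
	int cleanup_req;          /* set once before cleanups are posted */
	atomic_int cleanup_cmpl;  /* bumped once per completed cleanup */
};

/* Returns 1 if this completion was the last outstanding one (the point
 * at which the kernel code calls wake_up()). */
static int cleanup_complete_one(struct cleanup_state *s)
{
	return atomic_fetch_add(&s->cleanup_cmpl, 1) + 1 == s->cleanup_req;
}
```

With a non-atomic `u32` and two CPUs completing cleanups at once, both could read the same old value and the waiter comparing `cmd_cleanup_req == cmd_cleanup_cmpl` would sleep forever; the atomic increment makes the "last one wakes" decision race-free.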
+3
drivers/scsi/qla2xxx/qla_dbg.c
··· 2491 2491 struct va_format vaf; 2492 2492 char pbuf[64]; 2493 2493 2494 + if (!ql_mask_match(level) && !trace_ql_dbg_log_enabled()) 2495 + return; 2496 + 2494 2497 va_start(va, fmt); 2495 2498 2496 2499 vaf.fmt = fmt;
+1 -1
drivers/scsi/scsi_debug.c
··· 4342 4342 rep_max_zones = min((alloc_len - 64) >> ilog2(RZONES_DESC_HD), 4343 4343 max_zones); 4344 4344 4345 - arr = kcalloc(RZONES_DESC_HD, alloc_len, GFP_ATOMIC); 4345 + arr = kzalloc(alloc_len, GFP_ATOMIC); 4346 4346 if (!arr) { 4347 4347 mk_sense_buffer(scp, ILLEGAL_REQUEST, INSUFF_RES_ASC, 4348 4348 INSUFF_RES_ASCQ);
+19
drivers/soc/imx/imx8m-blk-ctrl.c
··· 17 17 18 18 #define BLK_SFT_RSTN 0x0 19 19 #define BLK_CLK_EN 0x4 20 + #define BLK_MIPI_RESET_DIV 0x8 /* Mini/Nano DISPLAY_BLK_CTRL only */ 20 21 21 22 struct imx8m_blk_ctrl_domain; 22 23 ··· 37 36 const char *gpc_name; 38 37 u32 rst_mask; 39 38 u32 clk_mask; 39 + 40 + /* 41 + * i.MX8M Mini and Nano have a third DISPLAY_BLK_CTRL register 42 + * which is used to control the reset for the MIPI Phy. 43 + * Since it's only present in certain circumstances, 44 + * an if-statement should be used before setting and clearing this 45 + * register. 46 + */ 47 + u32 mipi_phy_rst_mask; 40 48 }; 41 49 42 50 #define DOMAIN_MAX_CLKS 3 ··· 88 78 89 79 /* put devices into reset */ 90 80 regmap_clear_bits(bc->regmap, BLK_SFT_RSTN, data->rst_mask); 81 + if (data->mipi_phy_rst_mask) 82 + regmap_clear_bits(bc->regmap, BLK_MIPI_RESET_DIV, data->mipi_phy_rst_mask); 91 83 92 84 /* enable upstream and blk-ctrl clocks to allow reset to propagate */ 93 85 ret = clk_bulk_prepare_enable(data->num_clks, domain->clks); ··· 111 99 112 100 /* release reset */ 113 101 regmap_set_bits(bc->regmap, BLK_SFT_RSTN, data->rst_mask); 102 + if (data->mipi_phy_rst_mask) 103 + regmap_set_bits(bc->regmap, BLK_MIPI_RESET_DIV, data->mipi_phy_rst_mask); 114 104 115 105 /* disable upstream clocks */ 116 106 clk_bulk_disable_unprepare(data->num_clks, domain->clks); ··· 134 120 struct imx8m_blk_ctrl *bc = domain->bc; 135 121 136 122 /* put devices into reset and disable clocks */ 123 + if (data->mipi_phy_rst_mask) 124 + regmap_clear_bits(bc->regmap, BLK_MIPI_RESET_DIV, data->mipi_phy_rst_mask); 125 + 137 126 regmap_clear_bits(bc->regmap, BLK_SFT_RSTN, data->rst_mask); 138 127 regmap_clear_bits(bc->regmap, BLK_CLK_EN, data->clk_mask); 139 128 ··· 497 480 .gpc_name = "mipi-dsi", 498 481 .rst_mask = BIT(5), 499 482 .clk_mask = BIT(8) | BIT(9), 483 + .mipi_phy_rst_mask = BIT(17), 500 484 }, 501 485 [IMX8MM_DISPBLK_PD_MIPI_CSI] = { 502 486 .name = "dispblk-mipi-csi", ··· 506 488 .gpc_name = "mipi-csi", 507 489 .rst_mask = BIT(3) | BIT(4), 508 490 .clk_mask = BIT(10) | BIT(11), 491 + .mipi_phy_rst_mask = BIT(16), 509 492 }, 510 493 };
+4
drivers/soc/imx/soc-imx.c
··· 36 36 int ret; 37 37 int i; 38 38 39 + /* Return early if this is running on devices with different SoCs */ 40 + if (!__mxc_cpu_type) 41 + return 0; 42 + 39 43 if (of_machine_is_compatible("fsl,ls1021a")) 40 44 return 0; 41 45
+1 -1
drivers/soc/tegra/fuse/fuse-tegra.c
··· 320 320 }; 321 321 builtin_platform_driver(tegra_fuse_driver); 322 322 323 - bool __init tegra_fuse_read_spare(unsigned int spare) 323 + u32 __init tegra_fuse_read_spare(unsigned int spare) 324 324 { 325 325 unsigned int offset = fuse->soc->info->spare + spare * 4; 326 326
+1 -1
drivers/soc/tegra/fuse/fuse.h
··· 65 65 void tegra_init_revision(void); 66 66 void tegra_init_apbmisc(void); 67 67 68 - bool __init tegra_fuse_read_spare(unsigned int spare); 68 + u32 __init tegra_fuse_read_spare(unsigned int spare); 69 69 u32 __init tegra_fuse_read_early(unsigned int offset); 70 70 71 71 u8 tegra_get_major_rev(void);
+2 -3
drivers/tee/amdtee/core.c
··· 203 203 204 204 *ta_size = roundup(fw->size, PAGE_SIZE); 205 205 *ta = (void *)__get_free_pages(GFP_KERNEL, get_order(*ta_size)); 206 - if (IS_ERR(*ta)) { 207 - pr_err("%s: get_free_pages failed 0x%llx\n", __func__, 208 - (u64)*ta); 206 + if (!*ta) { 207 + pr_err("%s: get_free_pages failed\n", __func__); 209 208 rc = -ENOMEM; 210 209 goto rel_fw; 211 210 }
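The amdtee hunk above fixes a check that could never fire: `__get_free_pages()` reports failure by returning zero/NULL, never an `ERR_PTR()`, so testing its result with `IS_ERR()` always passed. A userspace sketch of the kernel's error-pointer convention (illustrative constants, not the kernel headers) shows why the two failure styles must not be mixed:

```c
#include <assert.h>
#include <stdint.h>

#define MAX_ERRNO 4095

/* Sketch of the kernel's IS_ERR() convention: ERR_PTR(-errno) encodes the
 * error in the top MAX_ERRNO address values. An allocator that signals
 * failure with NULL (like __get_free_pages()) never lands in that range,
 * so its failures must be caught with a plain !ptr test instead. */
static int is_err_value(uintptr_t v)
{
	return v >= (uintptr_t)-MAX_ERRNO;
}
```

Applying `is_err_value()` to a NULL return yields false, which is exactly the bug the patch removes by switching to `if (!*ta)`.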
+1 -1
drivers/thermal/intel/int340x_thermal/processor_thermal_rfim.c
··· 29 29 }; 30 30 31 31 static const struct mmio_reg tgl_fivr_mmio_regs[] = { 32 - { 0, 0x5A18, 3, 0x7, 12}, /* vco_ref_code_lo */ 32 + { 0, 0x5A18, 3, 0x7, 11}, /* vco_ref_code_lo */ 33 33 { 0, 0x5A18, 8, 0xFF, 16}, /* vco_ref_code_hi */ 34 34 { 0, 0x5A08, 8, 0xFF, 0}, /* spread_spectrum_pct */ 35 35 { 0, 0x5A08, 1, 0x1, 8}, /* spread_spectrum_clk_enable */
+3 -3
drivers/usb/core/config.c
··· 406 406 * the USB-2 spec requires such endpoints to have wMaxPacketSize = 0 407 407 * (see the end of section 5.6.3), so don't warn about them. 408 408 */ 409 - maxp = usb_endpoint_maxp(&endpoint->desc); 409 + maxp = le16_to_cpu(endpoint->desc.wMaxPacketSize); 410 410 if (maxp == 0 && !(usb_endpoint_xfer_isoc(d) && asnum == 0)) { 411 411 dev_warn(ddev, "config %d interface %d altsetting %d endpoint 0x%X has invalid wMaxPacketSize 0\n", 412 412 cfgno, inum, asnum, d->bEndpointAddress); ··· 422 422 maxpacket_maxes = full_speed_maxpacket_maxes; 423 423 break; 424 424 case USB_SPEED_HIGH: 425 - /* Bits 12..11 are allowed only for HS periodic endpoints */ 425 + /* Multiple-transactions bits are allowed only for HS periodic endpoints */ 426 426 if (usb_endpoint_xfer_int(d) || usb_endpoint_xfer_isoc(d)) { 427 - i = maxp & (BIT(12) | BIT(11)); 427 + i = maxp & USB_EP_MAXP_MULT_MASK; 428 428 maxp &= ~i; 429 429 } 430 430 fallthrough;
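The config.c hunk above switches from open-coded `BIT(12) | BIT(11)` to the named `USB_EP_MAXP_MULT_MASK`. For high-speed periodic endpoints, `wMaxPacketSize` packs two fields: bits 10:0 are the packet size and bits 12:11 the additional transactions per microframe. A small sketch with the masks restated locally (constants mirror the USB 2.0 layout; names are illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* wMaxPacketSize field layout for high-speed periodic endpoints:
 * bits 10:0  - maximum packet size
 * bits 12:11 - additional (multiple) transactions per microframe */
#define EP_MAXP_MASK       0x07ff
#define EP_MAXP_MULT_MASK  0x1800

static uint16_t ep_maxp(uint16_t w)      { return w & EP_MAXP_MASK; }
static uint16_t ep_maxp_mult(uint16_t w) { return (w & EP_MAXP_MULT_MASK) >> 11; }
```

Using the full little-endian `wMaxPacketSize` (rather than a helper that strips the mult bits) is what lets the sanitizing code mask the field off before range-checking the size.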
-15
drivers/usb/dwc3/dwc3-qcom.c
··· 649 649 struct dwc3_qcom *qcom = platform_get_drvdata(pdev); 650 650 struct device_node *np = pdev->dev.of_node, *dwc3_np; 651 651 struct device *dev = &pdev->dev; 652 - struct property *prop; 653 652 int ret; 654 653 655 654 dwc3_np = of_get_compatible_child(np, "snps,dwc3"); 656 655 if (!dwc3_np) { 657 656 dev_err(dev, "failed to find dwc3 core child\n"); 658 657 return -ENODEV; 659 - } 660 - 661 - prop = devm_kzalloc(dev, sizeof(*prop), GFP_KERNEL); 662 - if (!prop) { 663 - ret = -ENOMEM; 664 - dev_err(dev, "unable to allocate memory for property\n"); 665 - goto node_put; 666 - } 667 - 668 - prop->name = "tx-fifo-resize"; 669 - ret = of_add_property(dwc3_np, prop); 670 - if (ret) { 671 - dev_err(dev, "unable to add property\n"); 672 - goto node_put; 673 658 } 674 659 675 660 ret = of_platform_populate(np, NULL, NULL, dev);
+13 -1
drivers/usb/gadget/composite.c
··· 1679 1679 struct usb_function *f = NULL; 1680 1680 u8 endp; 1681 1681 1682 + if (w_length > USB_COMP_EP0_BUFSIZ) { 1683 + if (ctrl->bRequestType == USB_DIR_OUT) { 1684 + goto done; 1685 + } else { 1686 + /* Cast away the const, we are going to overwrite on purpose. */ 1687 + __le16 *temp = (__le16 *)&ctrl->wLength; 1688 + 1689 + *temp = cpu_to_le16(USB_COMP_EP0_BUFSIZ); 1690 + w_length = USB_COMP_EP0_BUFSIZ; 1691 + } 1692 + } 1693 + 1682 1694 /* partial re-init of the response message; the function or the 1683 1695 * gadget might need to intercept e.g. a control-OUT completion 1684 1696 * when we delegate to it. ··· 2221 2209 if (!cdev->req) 2222 2210 return -ENOMEM; 2223 2211 2224 - cdev->req->buf = kmalloc(USB_COMP_EP0_BUFSIZ, GFP_KERNEL); 2212 + cdev->req->buf = kzalloc(USB_COMP_EP0_BUFSIZ, GFP_KERNEL); 2225 2213 if (!cdev->req->buf) 2226 2214 goto fail; 2227 2215
+14 -1
drivers/usb/gadget/legacy/dbgp.c
··· 137 137 goto fail_1; 138 138 } 139 139 140 - req->buf = kmalloc(DBGP_REQ_LEN, GFP_KERNEL); 140 + req->buf = kzalloc(DBGP_REQ_LEN, GFP_KERNEL); 141 141 if (!req->buf) { 142 142 err = -ENOMEM; 143 143 stp = 2; ··· 344 344 int err = -EOPNOTSUPP; 345 345 void *data = NULL; 346 346 u16 len = 0; 347 + 348 + if (length > DBGP_REQ_LEN) { 349 + if (ctrl->bRequestType == USB_DIR_OUT) { 350 + return err; 351 + } else { 352 + /* Cast away the const, we are going to overwrite on purpose. */ 353 + __le16 *temp = (__le16 *)&ctrl->wLength; 354 + 355 + *temp = cpu_to_le16(DBGP_REQ_LEN); 356 + length = DBGP_REQ_LEN; 357 + } 358 + } 359 + 347 360 348 361 if (request == USB_REQ_GET_DESCRIPTOR) { 349 362 switch (value>>8) {
+15 -1
drivers/usb/gadget/legacy/inode.c
··· 110 110 /* enough for the whole queue: most events invalidate others */ 111 111 #define N_EVENT 5 112 112 113 + #define RBUF_SIZE 256 114 + 113 115 struct dev_data { 114 116 spinlock_t lock; 115 117 refcount_t count; ··· 146 144 struct dentry *dentry; 147 145 148 146 /* except this scratch i/o buffer for ep0 */ 149 - u8 rbuf [256]; 147 + u8 rbuf[RBUF_SIZE]; 150 148 }; 151 149 152 150 static inline void get_dev (struct dev_data *data) ··· 1332 1330 struct usb_gadgetfs_event *event; 1333 1331 u16 w_value = le16_to_cpu(ctrl->wValue); 1334 1332 u16 w_length = le16_to_cpu(ctrl->wLength); 1333 + 1334 + if (w_length > RBUF_SIZE) { 1335 + if (ctrl->bRequestType == USB_DIR_OUT) { 1336 + return value; 1337 + } else { 1338 + /* Cast away the const, we are going to overwrite on purpose. */ 1339 + __le16 *temp = (__le16 *)&ctrl->wLength; 1340 + 1341 + *temp = cpu_to_le16(RBUF_SIZE); 1342 + w_length = RBUF_SIZE; 1343 + } 1344 + } 1335 1345 1336 1346 spin_lock (&dev->lock); 1337 1347 dev->setup_abort = 0;
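The composite.c, dbgp.c and inode.c hunks above all apply the same hardening: a control request whose `wLength` exceeds the ep0 buffer is clamped for device-to-host (IN) transfers, where truncation is safe, and refused for host-to-device (OUT) transfers, where it is not. A minimal sketch of that shared decision, assuming only the direction bit of `bRequestType` (helper name and return convention are illustrative):

```c
#include <assert.h>
#include <stdint.h>

#define DIR_IN 0x80  /* direction bit of bRequestType */

/* Returns the usable transfer length, or -1 if the request must be
 * rejected. An over-long IN request is clamped to the buffer size; an
 * over-long OUT request would overflow the ep0 buffer, so refuse it. */
static int clamp_ctrl_len(uint8_t bRequestType, uint16_t w_length,
			  uint16_t bufsiz)
{
	if (w_length <= bufsiz)
		return w_length;
	if (!(bRequestType & DIR_IN))
		return -1;
	return bufsiz;
}
```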
+1
drivers/usb/host/xhci-hub.c
··· 717 717 continue; 718 718 719 719 retval = xhci_disable_slot(xhci, i); 720 + xhci_free_virt_device(xhci, i); 720 721 if (retval) 721 722 xhci_err(xhci, "Failed to disable slot %d, %d. Enter test mode anyway\n", 722 723 i, retval);
-1
drivers/usb/host/xhci-ring.c
··· 1525 1525 if (xhci->quirks & XHCI_EP_LIMIT_QUIRK) 1526 1526 /* Delete default control endpoint resources */ 1527 1527 xhci_free_device_endpoint_resources(xhci, virt_dev, true); 1528 - xhci_free_virt_device(xhci, slot_id); 1529 1528 } 1530 1529 1531 1530 static void xhci_handle_cmd_config_ep(struct xhci_hcd *xhci, int slot_id,
+15 -11
drivers/usb/host/xhci.c
··· 3934 3934 struct xhci_slot_ctx *slot_ctx; 3935 3935 int i, ret; 3936 3936 3937 - #ifndef CONFIG_USB_DEFAULT_PERSIST 3938 3937 /* 3939 3938 * We called pm_runtime_get_noresume when the device was attached. 3940 3939 * Decrement the counter here to allow controller to runtime suspend ··· 3941 3942 */ 3942 3943 if (xhci->quirks & XHCI_RESET_ON_RESUME) 3943 3944 pm_runtime_put_noidle(hcd->self.controller); 3944 - #endif 3945 3945 3946 3946 ret = xhci_check_args(hcd, udev, NULL, 0, true, __func__); 3947 3947 /* If the host is halted due to driver unload, we still need to free the ··· 3959 3961 del_timer_sync(&virt_dev->eps[i].stop_cmd_timer); 3960 3962 } 3961 3963 virt_dev->udev = NULL; 3962 - ret = xhci_disable_slot(xhci, udev->slot_id); 3963 - if (ret) 3964 - xhci_free_virt_device(xhci, udev->slot_id); 3964 + xhci_disable_slot(xhci, udev->slot_id); 3965 + xhci_free_virt_device(xhci, udev->slot_id); 3965 3966 } 3966 3967 3967 3968 int xhci_disable_slot(struct xhci_hcd *xhci, u32 slot_id) ··· 3970 3973 u32 state; 3971 3974 int ret = 0; 3972 3975 3973 - command = xhci_alloc_command(xhci, false, GFP_KERNEL); 3976 + command = xhci_alloc_command(xhci, true, GFP_KERNEL); 3974 3977 if (!command) 3975 3978 return -ENOMEM; 3976 3979 ··· 3995 3998 } 3996 3999 xhci_ring_cmd_db(xhci); 3997 4000 spin_unlock_irqrestore(&xhci->lock, flags); 4001 + 4002 + wait_for_completion(command->completion); 4003 + 4004 + if (command->status != COMP_SUCCESS) 4005 + xhci_warn(xhci, "Unsuccessful disable slot %u command, status %d\n", 4006 + slot_id, command->status); 4007 + 4008 + xhci_free_command(xhci, command); 4009 + 3998 4010 return ret; 3999 4011 } 4000 4012 ··· 4100 4094 4101 4095 xhci_debugfs_create_slot(xhci, slot_id); 4102 4096 4103 - #ifndef CONFIG_USB_DEFAULT_PERSIST 4104 4097 /* 4105 4098 * If resetting upon resume, we can't put the controller into runtime 4106 4099 suspend if there is a device attached. 4107 4100 */ 4108 4101 if (xhci->quirks & XHCI_RESET_ON_RESUME) 4109 4102 pm_runtime_get_noresume(hcd->self.controller); 4110 - #endif 4111 4103 4112 4104 /* Is this a LS or FS device under a HS hub? */ 4113 4105 /* Hub or peripherial? */ 4114 4106 return 1; 4115 4107 4116 4108 disable_slot: 4117 - ret = xhci_disable_slot(xhci, udev->slot_id); 4118 - if (ret) 4119 - xhci_free_virt_device(xhci, udev->slot_id); 4109 + xhci_disable_slot(xhci, udev->slot_id); 4110 + xhci_free_virt_device(xhci, udev->slot_id); 4120 4111 4121 4112 return 0; 4122 4113 } ··· 4243 4240 4244 4241 mutex_unlock(&xhci->mutex); 4245 4242 ret = xhci_disable_slot(xhci, udev->slot_id); 4243 + xhci_free_virt_device(xhci, udev->slot_id); 4246 4244 if (!ret) 4247 4245 xhci_alloc_dev(hcd, udev); 4248 4246 kfree(command->completion);
+2 -1
drivers/vdpa/vdpa.c
··· 404 404 goto msg_err; 405 405 406 406 while (mdev->id_table[i].device) { 407 - supported_classes |= BIT(mdev->id_table[i].device); 407 + if (mdev->id_table[i].device <= 63) 408 + supported_classes |= BIT_ULL(mdev->id_table[i].device); 408 409 i++; 409 410 } 410 411
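The vdpa hunk above guards against two problems at once: `BIT()` shifts a 32-bit constant, which is undefined behaviour for device ids >= 32, and even `BIT_ULL()` is only defined for bit positions 0..63. A userspace model of the fixed logic (function name is illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Model of the vdpa fix: use a 64-bit constant for the shift and skip
 * ids that do not fit in the 64-bit class mask, instead of shifting a
 * 32-bit 1 by an unchecked amount (undefined behaviour for id >= 32). */
static uint64_t add_supported_class(uint64_t classes, unsigned int id)
{
	if (id <= 63)
		classes |= UINT64_C(1) << id;
	return classes;
}
```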
+4 -2
drivers/vdpa/vdpa_user/vduse_dev.c
··· 655 655 { 656 656 struct vduse_dev *dev = vdpa_to_vduse(vdpa); 657 657 658 - if (len > dev->config_size - offset) 658 + if (offset > dev->config_size || 659 + len > dev->config_size - offset) 659 660 return; 660 661 661 662 memcpy(buf, dev->config + offset, len); ··· 976 975 break; 977 976 978 977 ret = -EINVAL; 979 - if (config.length == 0 || 978 + if (config.offset > dev->config_size || 979 + config.length == 0 || 980 980 config.length > dev->config_size - config.offset) 981 981 break; 982 982
+1 -1
drivers/vhost/vdpa.c
··· 197 197 struct vdpa_device *vdpa = v->vdpa; 198 198 long size = vdpa->config->get_config_size(vdpa); 199 199 200 - if (c->len == 0) 200 + if (c->len == 0 || c->off > size) 201 201 return -EINVAL; 202 202 203 203 if (c->len > size - c->off)
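The vduse and vhost-vdpa hunks above share one bounds-check pattern: with unsigned arithmetic, testing only `len > size - off` wraps around when `off > size`, so the offset must be validated before the subtraction. A minimal model (names illustrative, not the drivers'):

```c
#include <assert.h>
#include <stdint.h>

/* Returns 1 if [off, off + len) lies within a config space of the given
 * size. Checking off > size first is what prevents size - off from
 * wrapping to a huge unsigned value and accepting a bad range. */
static int config_range_ok(uint32_t off, uint32_t len, uint32_t size)
{
	if (len == 0 || off > size)
		return 0;
	return len <= size - off;
}
```

Without the `off > size` check, `config_range_ok(20, 4, 16)` would compute `16 - 20` as a number near `UINT32_MAX` and wrongly accept the access.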
+1 -1
drivers/virtio/virtio_ring.c
··· 268 268 size_t max_segment_size = SIZE_MAX; 269 269 270 270 if (vring_use_dma_api(vdev)) 271 - max_segment_size = dma_max_mapping_size(&vdev->dev); 271 + max_segment_size = dma_max_mapping_size(vdev->dev.parent); 272 272 273 273 return max_segment_size; 274 274 }
+3 -2
fs/afs/file.c
··· 514 514 if (atomic_inc_return(&vnode->cb_nr_mmap) == 1) { 515 515 down_write(&vnode->volume->cell->fs_open_mmaps_lock); 516 516 517 - list_add_tail(&vnode->cb_mmap_link, 518 - &vnode->volume->cell->fs_open_mmaps); 517 + if (list_empty(&vnode->cb_mmap_link)) 518 + list_add_tail(&vnode->cb_mmap_link, 519 + &vnode->volume->cell->fs_open_mmaps); 519 520 520 521 up_write(&vnode->volume->cell->fs_open_mmaps_lock); 521 522 }
+1
fs/afs/super.c
··· 667 667 INIT_LIST_HEAD(&vnode->pending_locks); 668 668 INIT_LIST_HEAD(&vnode->granted_locks); 669 669 INIT_DELAYED_WORK(&vnode->lock_work, afs_lock_work); 670 + INIT_LIST_HEAD(&vnode->cb_mmap_link); 670 671 seqlock_init(&vnode->cb_lock); 671 672 } 672 673
+152 -34
fs/aio.c
··· 181 181 struct file *file; 182 182 struct wait_queue_head *head; 183 183 __poll_t events; 184 - bool done; 185 184 bool cancelled; 185 + bool work_scheduled; 186 + bool work_need_resched; 186 187 struct wait_queue_entry wait; 187 188 struct work_struct work; 188 189 }; ··· 1620 1619 iocb_put(iocb); 1621 1620 } 1622 1621 1622 + /* 1623 + * Safely lock the waitqueue which the request is on, synchronizing with the 1624 + * case where the ->poll() provider decides to free its waitqueue early. 1625 + * 1626 + * Returns true on success, meaning that req->head->lock was locked, req->wait 1627 + * is on req->head, and an RCU read lock was taken. Returns false if the 1628 + * request was already removed from its waitqueue (which might no longer exist). 1629 + */ 1630 + static bool poll_iocb_lock_wq(struct poll_iocb *req) 1631 + { 1632 + wait_queue_head_t *head; 1633 + 1634 + /* 1635 + * While we hold the waitqueue lock and the waitqueue is nonempty, 1636 + * wake_up_pollfree() will wait for us. However, taking the waitqueue 1637 + * lock in the first place can race with the waitqueue being freed. 1638 + * 1639 + * We solve this as eventpoll does: by taking advantage of the fact that 1640 + * all users of wake_up_pollfree() will RCU-delay the actual free. If 1641 + * we enter rcu_read_lock() and see that the pointer to the queue is 1642 + * non-NULL, we can then lock it without the memory being freed out from 1643 + * under us, then check whether the request is still on the queue. 1644 + * 1645 + * Keep holding rcu_read_lock() as long as we hold the queue lock, in 1646 + * case the caller deletes the entry from the queue, leaving it empty. 1647 + * In that case, only RCU prevents the queue memory from being freed. 1648 + */ 1649 + rcu_read_lock(); 1650 + head = smp_load_acquire(&req->head); 1651 + if (head) { 1652 + spin_lock(&head->lock); 1653 + if (!list_empty(&req->wait.entry)) 1654 + return true; 1655 + spin_unlock(&head->lock); 1656 + } 1657 + rcu_read_unlock(); 1658 + return false; 1659 + } 1660 + 1661 + static void poll_iocb_unlock_wq(struct poll_iocb *req) 1662 + { 1663 + spin_unlock(&req->head->lock); 1664 + rcu_read_unlock(); 1665 + } 1666 + 1623 1667 static void aio_poll_complete_work(struct work_struct *work) 1624 1668 { 1625 1669 struct poll_iocb *req = container_of(work, struct poll_iocb, work); ··· 1684 1638 * avoid further branches in the fast path. 1685 1639 */ 1686 1640 spin_lock_irq(&ctx->ctx_lock); 1687 - if (!mask && !READ_ONCE(req->cancelled)) { 1688 - add_wait_queue(req->head, &req->wait); 1689 - spin_unlock_irq(&ctx->ctx_lock); 1690 - return; 1691 - } 1641 + if (poll_iocb_lock_wq(req)) { 1642 + if (!mask && !READ_ONCE(req->cancelled)) { 1643 + /* 1644 + * The request isn't actually ready to be completed yet. 1645 + * Reschedule completion if another wakeup came in. 1646 + */ 1647 + if (req->work_need_resched) { 1648 + schedule_work(&req->work); 1649 + req->work_need_resched = false; 1650 + } else { 1651 + req->work_scheduled = false; 1652 + } 1653 + poll_iocb_unlock_wq(req); 1654 + spin_unlock_irq(&ctx->ctx_lock); 1655 + return; 1656 + } 1657 + list_del_init(&req->wait.entry); 1658 + poll_iocb_unlock_wq(req); 1659 + } /* else, POLLFREE has freed the waitqueue, so we must complete */ 1692 1660 list_del_init(&iocb->ki_list); 1693 1661 iocb->ki_res.res = mangle_poll(mask); 1694 - req->done = true; 1695 1662 spin_unlock_irq(&ctx->ctx_lock); 1696 1663 1697 1664 iocb_put(iocb); ··· 1716 1657 struct aio_kiocb *aiocb = container_of(iocb, struct aio_kiocb, rw); 1717 1658 struct poll_iocb *req = &aiocb->poll; 1718 1659 1719 - spin_lock(&req->head->lock); 1720 - WRITE_ONCE(req->cancelled, true); 1721 - if (!list_empty(&req->wait.entry)) { 1722 - list_del_init(&req->wait.entry); 1723 - schedule_work(&aiocb->poll.work); 1724 - } 1725 - spin_unlock(&req->head->lock); 1660 + if (poll_iocb_lock_wq(req)) { 1661 + WRITE_ONCE(req->cancelled, true); 1662 + if (!req->work_scheduled) { 1663 + schedule_work(&aiocb->poll.work); 1664 + req->work_scheduled = true; 1665 + } 1666 + poll_iocb_unlock_wq(req); 1667 + } /* else, the request was force-cancelled by POLLFREE already */ 1726 1668 1727 1669 return 0; 1728 1670 } ··· 1740 1680 if (mask && !(mask & req->events)) 1741 1681 return 0; 1742 1682 1743 - list_del_init(&req->wait.entry); 1744 - 1745 - if (mask && spin_trylock_irqsave(&iocb->ki_ctx->ctx_lock, flags)) { 1683 + /* 1684 + * Complete the request inline if possible. This requires that three 1685 + * conditions be met: 1686 + * 1. An event mask must have been passed. If a plain wakeup was done 1687 + * instead, then mask == 0 and we have to call vfs_poll() to get 1688 + * the events, so inline completion isn't possible. 1689 + * 2. The completion work must not have already been scheduled. 1690 + * 3. ctx_lock must not be busy. We have to use trylock because we 1691 + * already hold the waitqueue lock, so this inverts the normal 1692 + * locking order. Use irqsave/irqrestore because not all 1693 + * filesystems (e.g. fuse) call this function with IRQs disabled, 1694 + * yet IRQs have to be disabled before ctx_lock is obtained. 1695 + */ 1696 + if (mask && !req->work_scheduled && 1697 + spin_trylock_irqsave(&iocb->ki_ctx->ctx_lock, flags)) { 1746 1698 struct kioctx *ctx = iocb->ki_ctx; 1747 1699 1748 - /* 1749 - * Try to complete the iocb inline if we can. Use 1750 - * irqsave/irqrestore because not all filesystems (e.g. fuse) 1751 - * call this function with IRQs disabled and because IRQs 1752 - * have to be disabled before ctx_lock is obtained. 1753 - */ 1700 + list_del_init(&req->wait.entry); 1754 1701 list_del(&iocb->ki_list); 1755 1702 iocb->ki_res.res = mangle_poll(mask); 1756 - req->done = true; 1757 1703 if (iocb->ki_eventfd && !eventfd_signal_allowed()) { 1758 1704 iocb = NULL; 1759 1705 INIT_WORK(&req->work, aio_poll_put_work); 1760 1706 schedule_work(&req->work); ··· 1769 1703 if (iocb) 1770 1704 iocb_put(iocb); 1771 1705 } else { 1772 - schedule_work(&req->work); 1706 + /* 1707 + * Schedule the completion work if needed. If it was already 1708 + * scheduled, record that another wakeup came in. 1709 + * 1710 + * Don't remove the request from the waitqueue here, as it might 1711 + * not actually be complete yet (we won't know until vfs_poll() 1712 + * is called), and we must not miss any wakeups. POLLFREE is an 1713 + * exception to this; see below. 1714 + */ 1715 + if (req->work_scheduled) { 1716 + req->work_need_resched = true; 1717 + } else { 1718 + schedule_work(&req->work); 1719 + req->work_scheduled = true; 1720 + } 1721 + 1722 + /* 1723 + * If the waitqueue is being freed early but we can't complete 1724 + * the request inline, we have to tear down the request as best 1725 + * we can. That means immediately removing the request from its 1726 + * waitqueue and preventing all further accesses to the 1727 + * waitqueue via the request. We also need to schedule the 1728 + * completion work (done above). Also mark the request as 1729 + * cancelled, to potentially skip an unneeded call to ->poll(). 1730 + */ 1731 + if (mask & POLLFREE) { 1732 + WRITE_ONCE(req->cancelled, true); 1733 + list_del_init(&req->wait.entry); 1734 + 1735 + /* 1736 + * Careful: this *must* be the last step, since as soon 1737 + * as req->head is NULL'ed out, the request can be 1738 + * completed and freed, since aio_poll_complete_work() 1739 + * will no longer need to take the waitqueue lock. 1740 + */ 1741 + smp_store_release(&req->head, NULL); 1742 + } 1773 1743 } 1774 1744 return 1; 1775 1745 } ··· 1813 1711 struct aio_poll_table { 1814 1712 struct poll_table_struct pt; 1815 1713 struct aio_kiocb *iocb; 1714 + bool queued; 1816 1715 int error; 1817 1716 }; ··· 1824 1721 struct aio_poll_table *pt = container_of(p, struct aio_poll_table, pt); 1825 1722 1826 1723 /* multiple wait queues per file are not supported */ 1827 - if (unlikely(pt->iocb->poll.head)) { 1724 + if (unlikely(pt->queued)) { 1828 1725 pt->error = -EINVAL; 1829 1726 return; 1830 1727 } 1831 1728 1729 + pt->queued = true; 1832 1730 pt->error = 0; 1833 1731 pt->iocb->poll.head = head; 1834 1732 add_wait_queue(head, &pt->iocb->poll.wait); ··· 1854 1750 req->events = demangle_poll(iocb->aio_buf) | EPOLLERR | EPOLLHUP; 1855 1751 1856 1752 req->head = NULL; 1857 - req->done = false; 1858 1753 req->cancelled = false; 1754 + req->work_scheduled = false; 1755 + req->work_need_resched = false; 1859 1756 1860 1757 apt.pt._qproc = aio_poll_queue_proc; 1861 1758 apt.pt._key = req->events; 1862 1759 apt.iocb = aiocb; 1760 + apt.queued = false; 1863 1761 apt.error = -EINVAL; /* same as no support for IOCB_CMD_POLL */ 1864 1762 1865 1763 /* initialized the list so that we can do list_empty checks */ ··· 1870 1764 1871 1765 mask = vfs_poll(req->file, &apt.pt) & req->events; 1872 1766 spin_lock_irq(&ctx->ctx_lock); 1873 - if (likely(req->head)) { 1874 - spin_lock(&req->head->lock); 1875 - if (unlikely(list_empty(&req->wait.entry))) { 1876 - if (apt.error) 1767 + if (likely(apt.queued)) { 1768 + bool on_queue = poll_iocb_lock_wq(req); 1769 + 1770 + if (!on_queue || req->work_scheduled) { 1771 + /* 1772 + * aio_poll_wake() already either scheduled the async 1773 + * completion work, or completed the request inline. 1774 + */ 1775 + if (apt.error) /* unsupported case: multiple queues */ 1877 1776 cancel = true; 1878 1777 apt.error = 0; 1879 1778 mask = 0; 1880 1779 } 1881 1780 if (mask || apt.error) { 1781 + /* Steal to complete synchronously. */ 1882 1782 list_del_init(&req->wait.entry); 1883 1783 } else if (cancel) { 1784 + /* Cancel if possible (may be too late though). */ 1884 1785 WRITE_ONCE(req->cancelled, true); 1885 - } else if (!req->done) { /* actually waiting for an event */ 1786 + } else if (on_queue) { 1787 + /* 1788 + * Actually waiting for an event, so add the request to 1789 + * active_reqs so that it can be cancelled if needed. 1790 + */ 1886 1791 list_add_tail(&aiocb->ki_list, &ctx->active_reqs); 1887 1792 aiocb->ki_cancel = aio_poll_cancel; 1888 1793 } 1889 - spin_unlock(&req->head->lock); 1794 + if (on_queue) 1795 + poll_iocb_unlock_wq(req); 1890 1796 } 1891 1797 if (mask) { /* no async, we'd stolen it */ 1892 1798 aiocb->ki_res.res = mangle_poll(mask);
+9 -3
fs/btrfs/delalloc-space.c
··· 143 143 144 144 /* Use new btrfs_qgroup_reserve_data to reserve precious data space. */ 145 145 ret = btrfs_qgroup_reserve_data(inode, reserved, start, len); 146 - if (ret < 0) 146 + if (ret < 0) { 147 147 btrfs_free_reserved_data_space_noquota(fs_info, len); 148 - else 148 + extent_changeset_free(*reserved); 149 + *reserved = NULL; 150 + } else { 149 151 ret = 0; 152 + } 150 153 return ret; 151 154 } 152 155 ··· 455 452 if (ret < 0) 456 453 return ret; 457 454 ret = btrfs_delalloc_reserve_metadata(inode, len); 458 - if (ret < 0) 455 + if (ret < 0) { 459 456 btrfs_free_reserved_data_space(inode, *reserved, start, len); 457 + extent_changeset_free(*reserved); 458 + *reserved = NULL; 459 + } 460 460 return ret; 461 461 } 462 462
+3
fs/btrfs/extent-tree.c
··· 6051 6051 int dev_ret = 0; 6052 6052 int ret = 0; 6053 6053 6054 + if (range->start == U64_MAX) 6055 + return -EINVAL; 6056 + 6054 6057 /* 6055 6058 * Check range overflow if range->len is set. 6056 6059 * The default range->len is U64_MAX.
+14
fs/btrfs/extent_io.c
··· 4314 4314 return; 4315 4315 4316 4316 /* 4317 + * A read may stumble upon this buffer later, make sure that it gets an 4318 + * error and knows there was an error. 4319 + */ 4320 + clear_bit(EXTENT_BUFFER_UPTODATE, &eb->bflags); 4321 + 4322 + /* 4323 + * We need to set the mapping with the io error as well because a write 4324 + * error will flip the file system readonly, and then syncfs() will 4325 + * return a 0 because we are readonly if we don't modify the err seq for 4326 + * the superblock. 4327 + */ 4328 + mapping_set_error(page->mapping, -EIO); 4329 + 4330 + /* 4317 4331 * If we error out, we should add back the dirty_metadata_bytes 4318 4332 * to make it consistent. 4319 4333 */
+2 -4
fs/btrfs/ioctl.c
··· 3187 3187 return -EPERM; 3188 3188 3189 3189 vol_args = memdup_user(arg, sizeof(*vol_args)); 3190 - if (IS_ERR(vol_args)) { 3191 - ret = PTR_ERR(vol_args); 3192 - goto out; 3193 - } 3190 + if (IS_ERR(vol_args)) 3191 + return PTR_ERR(vol_args); 3194 3192 3195 3193 if (vol_args->flags & ~BTRFS_DEVICE_REMOVE_ARGS_MASK) { 3196 3194 ret = -EOPNOTSUPP;
+2 -1
fs/btrfs/root-tree.c
··· 334 334 key.offset = ref_id; 335 335 again: 336 336 ret = btrfs_search_slot(trans, tree_root, &key, path, -1, 1); 337 - BUG_ON(ret < 0); 337 + if (ret < 0) 338 + goto out; 338 339 if (ret == 0) { 339 340 leaf = path->nodes[0]; 340 341 ref = btrfs_item_ptr(leaf, path->slots[0],
+3 -2
fs/btrfs/tree-log.c
··· 2908 2908 path->nodes[*level]->len); 2909 2909 if (ret) 2910 2910 return ret; 2911 + btrfs_redirty_list_add(trans->transaction, 2912 + next); 2911 2913 } else { 2912 2914 if (test_and_clear_bit(EXTENT_BUFFER_DIRTY, &next->bflags)) 2913 2915 clear_extent_buffer_dirty(next); ··· 2990 2988 next->start, next->len); 2991 2989 if (ret) 2992 2990 goto out; 2991 + btrfs_redirty_list_add(trans->transaction, next); 2993 2992 } else { 2994 2993 if (test_and_clear_bit(EXTENT_BUFFER_DIRTY, &next->bflags)) 2995 2994 clear_extent_buffer_dirty(next); ··· 3441 3438 EXTENT_DIRTY | EXTENT_NEW | EXTENT_NEED_WAIT); 3442 3439 extent_io_tree_release(&log->log_csum_range); 3443 3440 3444 - if (trans && log->node) 3445 - btrfs_redirty_list_add(trans->transaction, log->node); 3446 3441 btrfs_put_root(log); 3447 3442 } 3448 3443
+2
fs/btrfs/zoned.c
··· 1860 1860 block_group->alloc_offset = block_group->zone_capacity; 1861 1861 block_group->free_space_ctl->free_space = 0; 1862 1862 btrfs_clear_treelog_bg(block_group); 1863 + btrfs_clear_data_reloc_bg(block_group); 1863 1864 spin_unlock(&block_group->lock); 1864 1865 1865 1866 ret = blkdev_zone_mgmt(device->bdev, REQ_OP_ZONE_FINISH, ··· 1943 1942 ASSERT(block_group->alloc_offset == block_group->zone_capacity); 1944 1943 ASSERT(block_group->free_space_ctl->free_space == 0); 1945 1944 btrfs_clear_treelog_bg(block_group); 1945 + btrfs_clear_data_reloc_bg(block_group); 1946 1946 spin_unlock(&block_group->lock); 1947 1947 1948 1948 map = block_group->physical_map;
+8 -8
fs/ceph/caps.c
··· 4350 4350 { 4351 4351 struct ceph_mds_client *mdsc = ceph_sb_to_mdsc(ci->vfs_inode.i_sb); 4352 4352 int bits = (fmode << 1) | 1; 4353 - bool is_opened = false; 4353 + bool already_opened = false; 4354 4354 int i; 4355 4355 4356 4356 if (count == 1) ··· 4358 4358 4359 4359 spin_lock(&ci->i_ceph_lock); 4360 4360 for (i = 0; i < CEPH_FILE_MODE_BITS; i++) { 4361 - if (bits & (1 << i)) 4362 - ci->i_nr_by_mode[i] += count; 4363 - 4364 4361 /* 4365 - * If any of the mode ref is larger than 1, 4362 + * If any of the mode ref is larger than 0, 4366 4363 * that means it has been already opened by 4367 4364 * others. Just skip checking the PIN ref. 4368 4365 */ 4369 - if (i && ci->i_nr_by_mode[i] > 1) 4370 - is_opened = true; 4366 + if (i && ci->i_nr_by_mode[i]) 4367 + already_opened = true; 4368 + 4369 + if (bits & (1 << i)) 4370 + ci->i_nr_by_mode[i] += count; 4371 4371 } 4372 4372 4373 - if (!is_opened) 4373 + if (!already_opened) 4374 4374 percpu_counter_inc(&mdsc->metric.opened_inodes); 4375 4375 spin_unlock(&ci->i_ceph_lock); 4376 4376 }
+16 -4
fs/ceph/file.c
··· 605 605 in.cap.realm = cpu_to_le64(ci->i_snap_realm->ino); 606 606 in.cap.flags = CEPH_CAP_FLAG_AUTH; 607 607 in.ctime = in.mtime = in.atime = iinfo.btime; 608 - in.mode = cpu_to_le32((u32)mode); 609 608 in.truncate_seq = cpu_to_le32(1); 610 609 in.truncate_size = cpu_to_le64(-1ULL); 611 610 in.xattr_version = cpu_to_le64(1); 612 611 in.uid = cpu_to_le32(from_kuid(&init_user_ns, current_fsuid())); 613 - in.gid = cpu_to_le32(from_kgid(&init_user_ns, dir->i_mode & S_ISGID ? 614 - dir->i_gid : current_fsgid())); 612 + if (dir->i_mode & S_ISGID) { 613 + in.gid = cpu_to_le32(from_kgid(&init_user_ns, dir->i_gid)); 614 + 615 + /* Directories always inherit the setgid bit. */ 616 + if (S_ISDIR(mode)) 617 + mode |= S_ISGID; 618 + else if ((mode & (S_ISGID | S_IXGRP)) == (S_ISGID | S_IXGRP) && 619 + !in_group_p(dir->i_gid) && 620 + !capable_wrt_inode_uidgid(&init_user_ns, dir, CAP_FSETID)) 621 + mode &= ~S_ISGID; 622 + } else { 623 + in.gid = cpu_to_le32(from_kgid(&init_user_ns, current_fsgid())); 624 + } 625 + in.mode = cpu_to_le32((u32)mode); 626 + 615 627 in.nlink = cpu_to_le32(1); 616 628 in.max_size = cpu_to_le64(lo->stripe_unit); 617 629 ··· 859 847 ssize_t ret; 860 848 u64 off = iocb->ki_pos; 861 849 u64 len = iov_iter_count(to); 862 - u64 i_size; 850 + u64 i_size = i_size_read(inode); 863 851 864 852 dout("sync_read on file %p %llu~%u %s\n", file, off, (unsigned)len, 865 853 (file->f_flags & O_DIRECT) ? "O_DIRECT" : "");
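The ceph hunk above makes asynchronous create follow the standard setgid inheritance rule instead of blindly copying the directory's gid. A sketch of just the mode computation, with the group-membership and capability checks passed in as plain flags for illustration (the kernel queries `in_group_p()` and `capable_wrt_inode_uidgid()` instead):

```c
#include <assert.h>
#include <sys/stat.h>

/* Mode inheritance inside a setgid (S_ISGID) directory: subdirectories
 * always inherit S_ISGID; a group-executable file keeps it only if the
 * creator is in the directory's group or holds CAP_FSETID. */
static unsigned int inherit_sgid_mode(unsigned int mode, int is_dir,
				      int in_group, int has_fsetid)
{
	if (is_dir)
		return mode | S_ISGID;
	if ((mode & (S_ISGID | S_IXGRP)) == (S_ISGID | S_IXGRP) &&
	    !in_group && !has_fsetid)
		return mode & ~S_ISGID;
	return mode;
}
```

Stripping S_ISGID from group-executable files created by outsiders is the same safeguard the VFS applies in `inode_init_owner()`; without it, a user could create setgid binaries owned by a group they do not belong to.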
+1 -2
fs/ceph/mds_client.c
··· 3683 3683 struct ceph_pagelist *pagelist = recon_state->pagelist; 3684 3684 struct dentry *dentry; 3685 3685 char *path; 3686 - int pathlen, err; 3686 + int pathlen = 0, err; 3687 3687 u64 pathbase; 3688 3688 u64 snap_follows; 3689 3689 ··· 3703 3703 } 3704 3704 } else { 3705 3705 path = NULL; 3706 - pathlen = 0; 3707 3706 pathbase = 0; 3708 3707 } 3709 3708
+36 -18
fs/cifs/sess.c
··· 590 590 { 591 591 unsigned int tioffset; /* challenge message target info area */ 592 592 unsigned int tilen; /* challenge message target info area length */ 593 - 594 593 CHALLENGE_MESSAGE *pblob = (CHALLENGE_MESSAGE *)bcc_ptr; 594 + __u32 server_flags; 595 595 596 596 if (blob_len < sizeof(CHALLENGE_MESSAGE)) { 597 597 cifs_dbg(VFS, "challenge blob len %d too small\n", blob_len); ··· 609 609 return -EINVAL; 610 610 } 611 611 612 + server_flags = le32_to_cpu(pblob->NegotiateFlags); 613 + cifs_dbg(FYI, "%s: negotiate=0x%08x challenge=0x%08x\n", __func__, 614 + ses->ntlmssp->client_flags, server_flags); 615 + 616 + if ((ses->ntlmssp->client_flags & (NTLMSSP_NEGOTIATE_SEAL | NTLMSSP_NEGOTIATE_SIGN)) && 617 + (!(server_flags & NTLMSSP_NEGOTIATE_56) && !(server_flags & NTLMSSP_NEGOTIATE_128))) { 618 + cifs_dbg(VFS, "%s: requested signing/encryption but server did not return either 56-bit or 128-bit session key size\n", 619 + __func__); 620 + return -EINVAL; 621 + } 622 + if (!(server_flags & NTLMSSP_NEGOTIATE_NTLM) && !(server_flags & NTLMSSP_NEGOTIATE_EXTENDED_SEC)) { 623 + cifs_dbg(VFS, "%s: server does not seem to support either NTLMv1 or NTLMv2\n", __func__); 624 + return -EINVAL; 625 + } 626 + if (ses->server->sign && !(server_flags & NTLMSSP_NEGOTIATE_SIGN)) { 627 + cifs_dbg(VFS, "%s: forced packet signing but server does not seem to support it\n", 628 + __func__); 629 + return -EOPNOTSUPP; 630 + } 631 + if ((ses->ntlmssp->client_flags & NTLMSSP_NEGOTIATE_KEY_XCH) && 632 + !(server_flags & NTLMSSP_NEGOTIATE_KEY_XCH)) 633 + pr_warn_once("%s: authentication has been weakened as server does not support key exchange\n", 634 + __func__); 635 + 636 + ses->ntlmssp->server_flags = server_flags; 637 + 612 638 memcpy(ses->ntlmssp->cryptkey, pblob->Challenge, CIFS_CRYPTO_KEY_SIZE); 613 - /* BB we could decode pblob->NegotiateFlags; some may be useful */ 614 639 /* In particular we can examine sign flags */ 615 640 /* BB spec says that if AvId field of MsvAvTimestamp is 
populated then 616 641 we must set the MIC field of the AUTHENTICATE_MESSAGE */ 617 - ses->ntlmssp->server_flags = le32_to_cpu(pblob->NegotiateFlags); 642 + 618 643 tioffset = le32_to_cpu(pblob->TargetInfoArray.BufferOffset); 619 644 tilen = le16_to_cpu(pblob->TargetInfoArray.Length); 620 645 if (tioffset > blob_len || tioffset + tilen > blob_len) { ··· 746 721 flags = NTLMSSP_NEGOTIATE_56 | NTLMSSP_REQUEST_TARGET | 747 722 NTLMSSP_NEGOTIATE_128 | NTLMSSP_NEGOTIATE_UNICODE | 748 723 NTLMSSP_NEGOTIATE_NTLM | NTLMSSP_NEGOTIATE_EXTENDED_SEC | 749 - NTLMSSP_NEGOTIATE_SEAL; 750 - if (server->sign) 751 - flags |= NTLMSSP_NEGOTIATE_SIGN; 724 + NTLMSSP_NEGOTIATE_ALWAYS_SIGN | NTLMSSP_NEGOTIATE_SEAL | 725 + NTLMSSP_NEGOTIATE_SIGN; 752 726 if (!server->session_estab || ses->ntlmssp->sesskey_per_smbsess) 753 727 flags |= NTLMSSP_NEGOTIATE_KEY_XCH; 754 728 755 729 tmp = *pbuffer + sizeof(NEGOTIATE_MESSAGE); 730 + ses->ntlmssp->client_flags = flags; 756 731 sec_blob->NegotiateFlags = cpu_to_le32(flags); 757 732 758 733 /* these fields should be null in negotiate phase MS-NLMP 3.1.5.1.1 */ ··· 804 779 memcpy(sec_blob->Signature, NTLMSSP_SIGNATURE, 8); 805 780 sec_blob->MessageType = NtLmAuthenticate; 806 781 807 - flags = NTLMSSP_NEGOTIATE_56 | 808 - NTLMSSP_REQUEST_TARGET | NTLMSSP_NEGOTIATE_TARGET_INFO | 809 - NTLMSSP_NEGOTIATE_128 | NTLMSSP_NEGOTIATE_UNICODE | 810 - NTLMSSP_NEGOTIATE_NTLM | NTLMSSP_NEGOTIATE_EXTENDED_SEC | 811 - NTLMSSP_NEGOTIATE_SEAL | NTLMSSP_NEGOTIATE_WORKSTATION_SUPPLIED; 812 - if (ses->server->sign) 813 - flags |= NTLMSSP_NEGOTIATE_SIGN; 814 - if (!ses->server->session_estab || ses->ntlmssp->sesskey_per_smbsess) 815 - flags |= NTLMSSP_NEGOTIATE_KEY_XCH; 782 + flags = ses->ntlmssp->server_flags | NTLMSSP_REQUEST_TARGET | 783 + NTLMSSP_NEGOTIATE_TARGET_INFO | NTLMSSP_NEGOTIATE_WORKSTATION_SUPPLIED; 816 784 817 785 tmp = *pbuffer + sizeof(AUTHENTICATE_MESSAGE); 818 786 sec_blob->NegotiateFlags = cpu_to_le32(flags); ··· 852 834 *pbuffer, &tmp, 853 835 
nls_cp); 854 836 855 - if (((ses->ntlmssp->server_flags & NTLMSSP_NEGOTIATE_KEY_XCH) || 856 - (ses->ntlmssp->server_flags & NTLMSSP_NEGOTIATE_EXTENDED_SEC)) 857 - && !calc_seckey(ses)) { 837 + if ((ses->ntlmssp->server_flags & NTLMSSP_NEGOTIATE_KEY_XCH) && 838 + (!ses->server->session_estab || ses->ntlmssp->sesskey_per_smbsess) && 839 + !calc_seckey(ses)) { 858 840 memcpy(tmp, ses->ntlmssp->ciphertext, CIFS_CPHTXT_SIZE); 859 841 sec_blob->SessionKey.BufferOffset = cpu_to_le32(tmp - *pbuffer); 860 842 sec_blob->SessionKey.Length = cpu_to_le16(CIFS_CPHTXT_SIZE);
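The checks added to the cifs challenge decoding amount to bitmask validation of the server's NegotiateFlags against what the client asked for. A simplified model of two of the hard failures (the `NEG_*` values here are illustrative stand-ins for the real `NTLMSSP_*` constants in fs/cifs/ntlmssp.h, and the error codes are returned as plain integers):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative flag bits only; not the real NTLMSSP_* values. */
#define NEG_SIGN	0x00000010u
#define NEG_SEAL	0x00000020u
#define NEG_NTLM	0x00000200u
#define NEG_EXT_SEC	0x00080000u
#define NEG_128		0x20000000u
#define NEG_56		0x80000000u

/*
 * Model of decode_ntlmssp_challenge()'s new validation: a client that
 * requested signing/sealing needs a 56- or 128-bit session key from the
 * server, and the server must support NTLMv1 or extended security.
 */
static int check_server_flags(uint32_t client_flags, uint32_t server_flags)
{
	if ((client_flags & (NEG_SEAL | NEG_SIGN)) &&
	    !(server_flags & (NEG_56 | NEG_128)))
		return -22;	/* -EINVAL: no usable session key size */
	if (!(server_flags & (NEG_NTLM | NEG_EXT_SEC)))
		return -22;	/* -EINVAL: neither NTLMv1 nor NTLMv2 */
	return 0;
}
```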
+56 -16
fs/file.c
··· 841 841 spin_unlock(&files->file_lock); 842 842 } 843 843 844 + static inline struct file *__fget_files_rcu(struct files_struct *files, 845 + unsigned int fd, fmode_t mask, unsigned int refs) 846 + { 847 + for (;;) { 848 + struct file *file; 849 + struct fdtable *fdt = rcu_dereference_raw(files->fdt); 850 + struct file __rcu **fdentry; 851 + 852 + if (unlikely(fd >= fdt->max_fds)) 853 + return NULL; 854 + 855 + fdentry = fdt->fd + array_index_nospec(fd, fdt->max_fds); 856 + file = rcu_dereference_raw(*fdentry); 857 + if (unlikely(!file)) 858 + return NULL; 859 + 860 + if (unlikely(file->f_mode & mask)) 861 + return NULL; 862 + 863 + /* 864 + * Ok, we have a file pointer. However, because we do 865 + * this all locklessly under RCU, we may be racing with 866 + * that file being closed. 867 + * 868 + * Such a race can take two forms: 869 + * 870 + * (a) the file ref already went down to zero, 871 + * and get_file_rcu_many() fails. Just try 872 + * again: 873 + */ 874 + if (unlikely(!get_file_rcu_many(file, refs))) 875 + continue; 876 + 877 + /* 878 + * (b) the file table entry has changed under us. 879 + * Note that we don't need to re-check the 'fdt->fd' 880 + * pointer having changed, because it always goes 881 + * hand-in-hand with 'fdt'. 882 + * 883 + * If so, we need to put our refs and try again. 884 + */ 885 + if (unlikely(rcu_dereference_raw(files->fdt) != fdt) || 886 + unlikely(rcu_dereference_raw(*fdentry) != file)) { 887 + fput_many(file, refs); 888 + continue; 889 + } 890 + 891 + /* 892 + * Ok, we have a ref to the file, and checked that it 893 + * still exists. 894 + */ 895 + return file; 896 + } 897 + } 898 + 844 899 static struct file *__fget_files(struct files_struct *files, unsigned int fd, 845 900 fmode_t mask, unsigned int refs) 846 901 { 847 902 struct file *file; 848 903 849 904 rcu_read_lock(); 850 - loop: 851 - file = files_lookup_fd_rcu(files, fd); 852 - if (file) { 853 - /* File object ref couldn't be taken. 
854 - * dup2() atomicity guarantee is the reason 855 - * we loop to catch the new file (or NULL pointer) 856 - */ 857 - if (file->f_mode & mask) 858 - file = NULL; 859 - else if (!get_file_rcu_many(file, refs)) 860 - goto loop; 861 - else if (files_lookup_fd_raw(files, fd) != file) { 862 - fput_many(file, refs); 863 - goto loop; 864 - } 865 - } 905 + file = __fget_files_rcu(files, fd, mask, refs); 866 906 rcu_read_unlock(); 867 907 868 908 return file;
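The retry loop in `__fget_files_rcu()` is the classic lockless pattern: take a speculative reference, then re-validate that the slot still points at the same file, dropping the refs and retrying on any race. A userspace sketch of just the reference-grab step using C11 atomics (`struct file_model` and `demo()` are invented for illustration; the kernel uses `get_file_rcu_many()` on `f_count`):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

struct file_model {
	atomic_long f_count;
};

/*
 * Grab 'refs' references, but only if the count is still positive:
 * a zero count means the file already went away and the caller must
 * retry its lookup, matching case (a) in the comment above.
 */
static bool get_file_refs(struct file_model *f, long refs)
{
	long old = atomic_load(&f->f_count);

	while (old > 0)
		if (atomic_compare_exchange_weak(&f->f_count, &old, old + refs))
			return true;
	return false;
}

/* Convenience wrapper for checking: returns the new count, or -1 on failure. */
static long demo(long initial, long refs)
{
	struct file_model f;

	atomic_init(&f.f_count, initial);
	if (!get_file_refs(&f, refs))
		return -1;
	return atomic_load(&f.f_count);
}
```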
+23 -6
fs/io-wq.c
··· 142 142 struct io_wqe_acct *acct, 143 143 struct io_cb_cancel_data *match); 144 144 static void create_worker_cb(struct callback_head *cb); 145 + static void io_wq_cancel_tw_create(struct io_wq *wq); 145 146 146 147 static bool io_worker_get(struct io_worker *worker) 147 148 { ··· 358 357 test_and_set_bit_lock(0, &worker->create_state)) 359 358 goto fail_release; 360 359 360 + atomic_inc(&wq->worker_refs); 361 361 init_task_work(&worker->create_work, func); 362 362 worker->create_index = acct->index; 363 363 if (!task_work_add(wq->task, &worker->create_work, TWA_SIGNAL)) { 364 - clear_bit_unlock(0, &worker->create_state); 364 + /* 365 + * EXIT may have been set after checking it above, check after 366 + * adding the task_work and remove any creation item if it is 367 + * now set. wq exit does that too, but we can have added this 368 + * work item after we canceled in io_wq_exit_workers(). 369 + */ 370 + if (test_bit(IO_WQ_BIT_EXIT, &wq->state)) 371 + io_wq_cancel_tw_create(wq); 372 + io_worker_ref_put(wq); 365 373 return true; 366 374 } 375 + io_worker_ref_put(wq); 367 376 clear_bit_unlock(0, &worker->create_state); 368 377 fail_release: 369 378 io_worker_release(worker); ··· 1209 1198 set_bit(IO_WQ_BIT_EXIT, &wq->state); 1210 1199 } 1211 1200 1212 - static void io_wq_exit_workers(struct io_wq *wq) 1201 + static void io_wq_cancel_tw_create(struct io_wq *wq) 1213 1202 { 1214 1203 struct callback_head *cb; 1215 - int node; 1216 - 1217 - if (!wq->task) 1218 - return; 1219 1204 1220 1205 while ((cb = task_work_cancel_match(wq->task, io_task_work_match, wq)) != NULL) { 1221 1206 struct io_worker *worker; ··· 1219 1212 worker = container_of(cb, struct io_worker, create_work); 1220 1213 io_worker_cancel_cb(worker); 1221 1214 } 1215 + } 1216 + 1217 + static void io_wq_exit_workers(struct io_wq *wq) 1218 + { 1219 + int node; 1220 + 1221 + if (!wq->task) 1222 + return; 1223 + 1224 + io_wq_cancel_tw_create(wq); 1222 1225 1223 1226 rcu_read_lock(); 1224 1227 
for_each_node(node) {
+4 -2
fs/io_uring.c
··· 9824 9824 9825 9825 /* 9826 9826 * Find any io_uring ctx that this task has registered or done IO on, and cancel 9827 - * requests. @sqd should be not-null IIF it's an SQPOLL thread cancellation. 9827 + * requests. @sqd should be not-null IFF it's an SQPOLL thread cancellation. 9828 9828 */ 9829 9829 static __cold void io_uring_cancel_generic(bool cancel_all, 9830 9830 struct io_sq_data *sqd) ··· 9866 9866 cancel_all); 9867 9867 } 9868 9868 9869 - prepare_to_wait(&tctx->wait, &wait, TASK_UNINTERRUPTIBLE); 9869 + prepare_to_wait(&tctx->wait, &wait, TASK_INTERRUPTIBLE); 9870 + io_run_task_work(); 9870 9871 io_uring_drop_tctx_refs(current); 9872 + 9871 9873 /* 9872 9874 * If we've seen completions, retry without waiting. This 9873 9875 * avoids a race where a completion comes in before we did
+1
fs/nfsd/nfs4recover.c
··· 2156 2156 int 2157 2157 register_cld_notifier(void) 2158 2158 { 2159 + WARN_ON(!nfsd_net_id); 2159 2160 return rpc_pipefs_notifier_register(&nfsd4_cld_block); 2160 2161 } 2161 2162
+7 -2
fs/nfsd/nfs4state.c
··· 1207 1207 return 0; 1208 1208 } 1209 1209 1210 + static bool delegation_hashed(struct nfs4_delegation *dp) 1211 + { 1212 + return !(list_empty(&dp->dl_perfile)); 1213 + } 1214 + 1210 1215 static bool 1211 1216 unhash_delegation_locked(struct nfs4_delegation *dp) 1212 1217 { ··· 1219 1214 1220 1215 lockdep_assert_held(&state_lock); 1221 1216 1222 - if (list_empty(&dp->dl_perfile)) 1217 + if (!delegation_hashed(dp)) 1223 1218 return false; 1224 1219 1225 1220 dp->dl_stid.sc_type = NFS4_CLOSED_DELEG_STID; ··· 4603 4598 * queued for a lease break. Don't queue it again. 4604 4599 */ 4605 4600 spin_lock(&state_lock); 4606 - if (dp->dl_time == 0) { 4601 + if (delegation_hashed(dp) && dp->dl_time == 0) { 4607 4602 dp->dl_time = ktime_get_boottime_seconds(); 4608 4603 list_add_tail(&dp->dl_recall_lru, &nn->del_recall_lru); 4609 4604 }
+7 -7
fs/nfsd/nfsctl.c
··· 1521 1521 int retval; 1522 1522 printk(KERN_INFO "Installing knfsd (copyright (C) 1996 okir@monad.swb.de).\n"); 1523 1523 1524 - retval = register_cld_notifier(); 1525 - if (retval) 1526 - return retval; 1527 1524 retval = nfsd4_init_slabs(); 1528 1525 if (retval) 1529 - goto out_unregister_notifier; 1526 + return retval; 1530 1527 retval = nfsd4_init_pnfs(); 1531 1528 if (retval) 1532 1529 goto out_free_slabs; ··· 1542 1545 goto out_free_exports; 1543 1546 retval = register_pernet_subsys(&nfsd_net_ops); 1544 1547 if (retval < 0) 1548 + goto out_free_filesystem; 1549 + retval = register_cld_notifier(); 1550 + if (retval) 1545 1551 goto out_free_all; 1546 1552 return 0; 1547 1553 out_free_all: 1554 + unregister_pernet_subsys(&nfsd_net_ops); 1555 + out_free_filesystem: 1548 1556 unregister_filesystem(&nfsd_fs_type); 1549 1557 out_free_exports: 1550 1558 remove_proc_entry("fs/nfs/exports", NULL); ··· 1563 1561 nfsd4_exit_pnfs(); 1564 1562 out_free_slabs: 1565 1563 nfsd4_free_slabs(); 1566 - out_unregister_notifier: 1567 - unregister_cld_notifier(); 1568 1564 return retval; 1569 1565 } 1570 1566 1571 1567 static void __exit exit_nfsd(void) 1572 1568 { 1569 + unregister_cld_notifier(); 1573 1570 unregister_pernet_subsys(&nfsd_net_ops); 1574 1571 nfsd_drc_slab_free(); 1575 1572 remove_proc_entry("fs/nfs/exports", NULL); ··· 1578 1577 nfsd4_free_slabs(); 1579 1578 nfsd4_exit_pnfs(); 1580 1579 unregister_filesystem(&nfsd_fs_type); 1581 - unregister_cld_notifier(); 1582 1580 } 1583 1581 1584 1582 MODULE_AUTHOR("Olaf Kirch <okir@monad.swb.de>");
+1 -11
fs/signalfd.c
··· 35 35 36 36 void signalfd_cleanup(struct sighand_struct *sighand) 37 37 { 38 - wait_queue_head_t *wqh = &sighand->signalfd_wqh; 39 - /* 40 - * The lockless check can race with remove_wait_queue() in progress, 41 - * but in this case its caller should run under rcu_read_lock() and 42 - * sighand_cachep is SLAB_TYPESAFE_BY_RCU, we can safely return. 43 - */ 44 - if (likely(!waitqueue_active(wqh))) 45 - return; 46 - 47 - /* wait_queue_entry_t->func(POLLFREE) should do remove_wait_queue() */ 48 - wake_up_poll(wqh, EPOLLHUP | POLLFREE); 38 + wake_up_pollfree(&sighand->signalfd_wqh); 49 39 } 50 40 51 41 struct signalfd_ctx {
-13
fs/smbfs_common/cifs_arc4.c
··· 72 72 ctx->y = y; 73 73 } 74 74 EXPORT_SYMBOL_GPL(cifs_arc4_crypt); 75 - 76 - static int __init 77 - init_smbfs_common(void) 78 - { 79 - return 0; 80 - } 81 - static void __init 82 - exit_smbfs_common(void) 83 - { 84 - } 85 - 86 - module_init(init_smbfs_common) 87 - module_exit(exit_smbfs_common)
+76
fs/tracefs/inode.c
··· 161 161 struct tracefs_mount_opts mount_opts; 162 162 }; 163 163 164 + static void change_gid(struct dentry *dentry, kgid_t gid) 165 + { 166 + if (!dentry->d_inode) 167 + return; 168 + dentry->d_inode->i_gid = gid; 169 + } 170 + 171 + /* 172 + * Taken from d_walk, but without the need for handling renames. 173 + * Nothing can be renamed while walking the list, as tracefs 174 + * does not support renames. This is only called when mounting 175 + * or remounting the file system, to set all the files to 176 + * the given gid. 177 + */ 178 + static void set_gid(struct dentry *parent, kgid_t gid) 179 + { 180 + struct dentry *this_parent; 181 + struct list_head *next; 182 + 183 + this_parent = parent; 184 + spin_lock(&this_parent->d_lock); 185 + 186 + change_gid(this_parent, gid); 187 + repeat: 188 + next = this_parent->d_subdirs.next; 189 + resume: 190 + while (next != &this_parent->d_subdirs) { 191 + struct list_head *tmp = next; 192 + struct dentry *dentry = list_entry(tmp, struct dentry, d_child); 193 + next = tmp->next; 194 + 195 + spin_lock_nested(&dentry->d_lock, DENTRY_D_LOCK_NESTED); 196 + 197 + change_gid(dentry, gid); 198 + 199 + if (!list_empty(&dentry->d_subdirs)) { 200 + spin_unlock(&this_parent->d_lock); 201 + spin_release(&dentry->d_lock.dep_map, _RET_IP_); 202 + this_parent = dentry; 203 + spin_acquire(&this_parent->d_lock.dep_map, 0, 1, _RET_IP_); 204 + goto repeat; 205 + } 206 + spin_unlock(&dentry->d_lock); 207 + } 208 + /* 209 + * All done at this level ... ascend and resume the search. 

210 + */ 211 + rcu_read_lock(); 212 + ascend: 213 + if (this_parent != parent) { 214 + struct dentry *child = this_parent; 215 + this_parent = child->d_parent; 216 + 217 + spin_unlock(&child->d_lock); 218 + spin_lock(&this_parent->d_lock); 219 + 220 + /* go into the first sibling still alive */ 221 + do { 222 + next = child->d_child.next; 223 + if (next == &this_parent->d_subdirs) 224 + goto ascend; 225 + child = list_entry(next, struct dentry, d_child); 226 + } while (unlikely(child->d_flags & DCACHE_DENTRY_KILLED)); 227 + rcu_read_unlock(); 228 + goto resume; 229 + } 230 + rcu_read_unlock(); 231 + spin_unlock(&this_parent->d_lock); 232 + return; 233 + } 234 + 164 235 static int tracefs_parse_options(char *data, struct tracefs_mount_opts *opts) 165 236 { 166 237 substring_t args[MAX_OPT_ARGS]; ··· 264 193 if (!gid_valid(gid)) 265 194 return -EINVAL; 266 195 opts->gid = gid; 196 + set_gid(tracefs_mount->mnt_root, gid); 267 197 break; 268 198 case Opt_mode: 269 199 if (match_octal(&args[0], &option)) ··· 486 414 inode->i_mode = mode; 487 415 inode->i_fop = fops ? fops : &tracefs_file_operations; 488 416 inode->i_private = data; 417 + inode->i_uid = d_inode(dentry->d_parent)->i_uid; 418 + inode->i_gid = d_inode(dentry->d_parent)->i_gid; 489 419 d_instantiate(dentry, inode); 490 420 fsnotify_create(dentry->d_parent->d_inode, dentry); 491 421 return end_creating(dentry); ··· 510 436 inode->i_mode = S_IFDIR | S_IRWXU | S_IRUSR| S_IRGRP | S_IXUSR | S_IXGRP; 511 437 inode->i_op = ops; 512 438 inode->i_fop = &simple_dir_operations; 439 + inode->i_uid = d_inode(dentry->d_parent)->i_uid; 440 + inode->i_gid = d_inode(dentry->d_parent)->i_gid; 513 441 514 442 /* directory inodes start off with i_nlink == 2 (for "." entry) */ 515 443 inc_nlink(inode);
+11 -3
fs/xfs/xfs_super.c
··· 1765 1765 xfs_remount_ro( 1766 1766 struct xfs_mount *mp) 1767 1767 { 1768 - int error; 1768 + struct xfs_icwalk icw = { 1769 + .icw_flags = XFS_ICWALK_FLAG_SYNC, 1770 + }; 1771 + int error; 1769 1772 1770 1773 /* 1771 1774 * Cancel background eofb scanning so it cannot race with the final ··· 1776 1773 */ 1777 1774 xfs_blockgc_stop(mp); 1778 1775 1779 - /* Get rid of any leftover CoW reservations... */ 1780 - error = xfs_blockgc_free_space(mp, NULL); 1776 + /* 1777 + * Clear out all remaining COW staging extents and speculative post-EOF 1778 + * preallocations so that we don't leave inodes requiring inactivation 1779 + * cleanups during reclaim on a read-only mount. We must process every 1780 + * cached inode, so this requires a synchronous cache scan. 1781 + */ 1782 + error = xfs_blockgc_free_space(mp, &icw); 1781 1783 if (error) { 1782 1784 xfs_force_shutdown(mp, SHUTDOWN_CORRUPT_INCORE); 1783 1785 return error;
+13 -1
include/linux/delay.h
··· 20 20 */ 21 21 22 22 #include <linux/math.h> 23 + #include <linux/sched.h> 23 24 24 25 extern unsigned long loops_per_jiffy; 25 26 ··· 59 58 void __attribute__((weak)) calibration_delay_done(void); 60 59 void msleep(unsigned int msecs); 61 60 unsigned long msleep_interruptible(unsigned int msecs); 62 - void usleep_range(unsigned long min, unsigned long max); 61 + void usleep_range_state(unsigned long min, unsigned long max, 62 + unsigned int state); 63 + 64 + static inline void usleep_range(unsigned long min, unsigned long max) 65 + { 66 + usleep_range_state(min, max, TASK_UNINTERRUPTIBLE); 67 + } 68 + 69 + static inline void usleep_idle_range(unsigned long min, unsigned long max) 70 + { 71 + usleep_range_state(min, max, TASK_IDLE); 72 + } 63 73 64 74 static inline void ssleep(unsigned int seconds) 65 75 {
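The new `usleep_range()`/`usleep_idle_range()` helpers are thin shims that pass a task state through to `usleep_range_state()`. A sketch of the delegation pattern (the `TASK_*` values here are illustrative, and the stub returns the state it received purely so it can be checked — the real function sleeps and returns nothing):

```c
#include <assert.h>

/* Illustrative stand-ins for the <linux/sched.h> task-state constants. */
#define TASK_UNINTERRUPTIBLE	0x0002
#define TASK_NOLOAD		0x0400
#define TASK_IDLE		(TASK_UNINTERRUPTIBLE | TASK_NOLOAD)

/* Stub: a real implementation would sleep between min and max usecs. */
static unsigned int usleep_range_state(unsigned long min, unsigned long max,
				       unsigned int state)
{
	(void)min;
	(void)max;
	return state;
}

/* Classic uninterruptible sleep, contributing to loadavg. */
static unsigned int usleep_range(unsigned long min, unsigned long max)
{
	return usleep_range_state(min, max, TASK_UNINTERRUPTIBLE);
}

/* Same sleep, but TASK_NOLOAD keeps it out of the load average. */
static unsigned int usleep_idle_range(unsigned long min, unsigned long max)
{
	return usleep_range_state(min, max, TASK_IDLE);
}
```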
+13
include/linux/mhi.h
··· 664 664 int mhi_pm_resume(struct mhi_controller *mhi_cntrl); 665 665 666 666 /** 667 + * mhi_pm_resume_force - Force resume MHI from suspended state 668 + * @mhi_cntrl: MHI controller 669 + * 670 + * Resume the device irrespective of its MHI state. As per the MHI spec, devices 671 + * have to be in the M3 state during resume. But some devices seem to be in a 672 + * different MHI state than M3, yet they continue working fine if allowed. 673 + * This API is intended to be used for such devices. 674 + * 675 + * Return: 0 if the resume succeeds, a negative error code otherwise 676 + */ 677 + int mhi_pm_resume_force(struct mhi_controller *mhi_cntrl); 678 + 679 + /** 667 680 * mhi_download_rddm_image - Download ramdump image from device for 668 681 * debugging purpose. 669 682 * @mhi_cntrl: MHI controller
+1 -1
include/linux/percpu-refcount.h
··· 51 51 #define _LINUX_PERCPU_REFCOUNT_H 52 52 53 53 #include <linux/atomic.h> 54 - #include <linux/kernel.h> 55 54 #include <linux/percpu.h> 56 55 #include <linux/rcupdate.h> 56 + #include <linux/types.h> 57 57 #include <linux/gfp.h> 58 58 59 59 struct percpu_ref;
+1 -1
include/linux/pm_runtime.h
··· 129 129 * pm_runtime_active - Check whether or not a device is runtime-active. 130 130 * @dev: Target device. 131 131 * 132 - * Return %true if runtime PM is enabled for @dev and its runtime PM status is 132 + * Return %true if runtime PM is disabled for @dev or its runtime PM status is 133 133 * %RPM_ACTIVE, or %false otherwise. 134 134 * 135 135 * Note that the return value of this function can only be trusted if it is
+26
include/linux/wait.h
··· 217 217 void __wake_up_locked_sync_key(struct wait_queue_head *wq_head, unsigned int mode, void *key); 218 218 void __wake_up_locked(struct wait_queue_head *wq_head, unsigned int mode, int nr); 219 219 void __wake_up_sync(struct wait_queue_head *wq_head, unsigned int mode); 220 + void __wake_up_pollfree(struct wait_queue_head *wq_head); 220 221 221 222 #define wake_up(x) __wake_up(x, TASK_NORMAL, 1, NULL) 222 223 #define wake_up_nr(x, nr) __wake_up(x, TASK_NORMAL, nr, NULL) ··· 245 244 __wake_up_sync_key((x), TASK_INTERRUPTIBLE, poll_to_key(m)) 246 245 #define wake_up_interruptible_sync_poll_locked(x, m) \ 247 246 __wake_up_locked_sync_key((x), TASK_INTERRUPTIBLE, poll_to_key(m)) 247 + 248 + /** 249 + * wake_up_pollfree - signal that a polled waitqueue is going away 250 + * @wq_head: the wait queue head 251 + * 252 + * In the very rare cases where a ->poll() implementation uses a waitqueue whose 253 + * lifetime is tied to a task rather than to the 'struct file' being polled, 254 + * this function must be called before the waitqueue is freed so that 255 + * non-blocking polls (e.g. epoll) are notified that the queue is going away. 256 + * 257 + * The caller must also RCU-delay the freeing of the wait_queue_head, e.g. via 258 + * an explicit synchronize_rcu() or call_rcu(), or via SLAB_TYPESAFE_BY_RCU. 259 + */ 260 + static inline void wake_up_pollfree(struct wait_queue_head *wq_head) 261 + { 262 + /* 263 + * For performance reasons, we don't always take the queue lock here. 264 + * Therefore, we might race with someone removing the last entry from 265 + * the queue, and proceed while they still hold the queue lock. 266 + * However, rcu_read_lock() is required to be held in such cases, so we 267 + * can safely proceed with an RCU-delayed free. 268 + */ 269 + if (waitqueue_active(wq_head)) 270 + __wake_up_pollfree(wq_head); 271 + } 248 272 249 273 #define ___wait_cond_timeout(condition) \ 250 274 ({ \
+1 -1
include/uapi/asm-generic/poll.h
··· 29 29 #define POLLRDHUP 0x2000 30 30 #endif 31 31 32 - #define POLLFREE (__force __poll_t)0x4000 /* currently only for epoll */ 32 + #define POLLFREE (__force __poll_t)0x4000 33 33 34 34 #define POLL_BUSY_LOOP (__force __poll_t)0x8000 35 35
+10 -8
include/uapi/linux/mptcp.h
··· 136 136 * MPTCP_EVENT_REMOVED: token, rem_id 137 137 * An address has been lost by the peer. 138 138 * 139 - * MPTCP_EVENT_SUB_ESTABLISHED: token, family, saddr4 | saddr6, 140 - * daddr4 | daddr6, sport, dport, backup, 141 - * if_idx [, error] 139 + * MPTCP_EVENT_SUB_ESTABLISHED: token, family, loc_id, rem_id, 140 + * saddr4 | saddr6, daddr4 | daddr6, sport, 141 + * dport, backup, if_idx [, error] 142 142 * A new subflow has been established. 'error' should not be set. 143 143 * 144 - * MPTCP_EVENT_SUB_CLOSED: token, family, saddr4 | saddr6, daddr4 | daddr6, 145 - * sport, dport, backup, if_idx [, error] 144 + * MPTCP_EVENT_SUB_CLOSED: token, family, loc_id, rem_id, saddr4 | saddr6, 145 + * daddr4 | daddr6, sport, dport, backup, if_idx 146 + * [, error] 146 147 * A subflow has been closed. An error (copy of sk_err) could be set if an 147 148 * error has been detected for this subflow. 148 149 * 149 - * MPTCP_EVENT_SUB_PRIORITY: token, family, saddr4 | saddr6, daddr4 | daddr6, 150 - * sport, dport, backup, if_idx [, error] 151 - * The priority of a subflow has changed. 'error' should not be set. 150 + * MPTCP_EVENT_SUB_PRIORITY: token, family, loc_id, rem_id, saddr4 | saddr6, 151 + * daddr4 | daddr6, sport, dport, backup, if_idx 152 + * [, error] 153 + * The priority of a subflow has changed. 'error' should not be set. 152 154 */ 153 155 enum mptcp_event_type { 154 156 MPTCP_EVENT_UNSPEC = 0,
+10 -3
include/uapi/linux/resource.h
··· 66 66 #define _STK_LIM (8*1024*1024) 67 67 68 68 /* 69 - * GPG2 wants 64kB of mlocked memory, to make sure pass phrases 70 - * and other sensitive information are never written to disk. 69 + * Limit the amount of locked memory by some sane default: 70 + * root can always increase this limit if needed. 71 + * 72 + * The main use-cases are (1) preventing sensitive memory 73 + * from being swapped; (2) real-time operations; (3) via 74 + * IORING_REGISTER_BUFFERS. 75 + * 76 + * The first two don't need much. The latter will take as 77 + * much as it can get. 8MB is a reasonably sane default. 71 78 */ 72 - #define MLOCK_LIMIT ((PAGE_SIZE > 64*1024) ? PAGE_SIZE : 64*1024) 79 + #define MLOCK_LIMIT (8*1024*1024) 73 80 74 81 /* 75 82 * Due to binary compatibility, the actual resource numbers
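The old and new RLIMIT_MEMLOCK defaults are easy to compare numerically: the old one scaled with the page size (64 KiB minimum), the new one is a flat 8 MiB so io_uring buffer registration has room out of the box. A tiny sketch of both formulas (the helper names are invented for illustration):

```c
#include <assert.h>

/* Old default: at least 64 KiB, or one page if pages are larger. */
static long old_mlock_limit(long page_size)
{
	return page_size > 64 * 1024 ? page_size : 64 * 1024;
}

/* New default: a flat 8 MiB regardless of page size. */
static long new_mlock_limit(void)
{
	return 8 * 1024 * 1024;
}
```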
+10 -11
kernel/audit.c
··· 718 718 { 719 719 int rc = 0; 720 720 struct sk_buff *skb; 721 - static unsigned int failed = 0; 721 + unsigned int failed = 0; 722 722 723 723 /* NOTE: kauditd_thread takes care of all our locking, we just use 724 724 * the netlink info passed to us (e.g. sk and portid) */ ··· 735 735 continue; 736 736 } 737 737 738 + retry: 738 739 /* grab an extra skb reference in case of error */ 739 740 skb_get(skb); 740 741 rc = netlink_unicast(sk, skb, portid, 0); 741 742 if (rc < 0) { 742 - /* fatal failure for our queue flush attempt? */ 743 + /* send failed - try a few times unless fatal error */ 743 744 if (++failed >= retry_limit || 744 745 rc == -ECONNREFUSED || rc == -EPERM) { 745 - /* yes - error processing for the queue */ 746 746 sk = NULL; 747 747 if (err_hook) 748 748 (*err_hook)(skb); 749 - if (!skb_hook) 750 - goto out; 751 - /* keep processing with the skb_hook */ 749 + if (rc == -EAGAIN) 750 + rc = 0; 751 + /* continue to drain the queue */ 752 752 continue; 753 753 } else 754 - /* no - requeue to preserve ordering */ 755 - skb_queue_head(queue, skb); 754 + goto retry; 756 755 } else { 757 - /* it worked - drop the extra reference and continue */ 756 + /* skb sent - drop the extra reference and continue */ 758 757 consume_skb(skb); 759 758 failed = 0; 760 759 } 761 760 } 762 761 763 - out: 764 762 return (rc >= 0 ? 0 : rc); 765 763 } 766 764 ··· 1607 1609 audit_panic("cannot initialize netlink socket in namespace"); 1608 1610 return -ENOMEM; 1609 1611 } 1610 - aunet->sk->sk_sndtimeo = MAX_SCHEDULE_TIMEOUT; 1612 + /* limit the timeout in case auditd is blocked/stopped */ 1613 + aunet->sk->sk_sndtimeo = HZ / 10; 1611 1614 1612 1615 return 0; 1613 1616 }
+36 -17
kernel/bpf/verifier.c
··· 1368 1368 reg->var_off = tnum_or(tnum_clear_subreg(var64_off), var32_off); 1369 1369 } 1370 1370 1371 + static bool __reg32_bound_s64(s32 a) 1372 + { 1373 + return a >= 0 && a <= S32_MAX; 1374 + } 1375 + 1371 1376 static void __reg_assign_32_into_64(struct bpf_reg_state *reg) 1372 1377 { 1373 1378 reg->umin_value = reg->u32_min_value; 1374 1379 reg->umax_value = reg->u32_max_value; 1375 - /* Attempt to pull 32-bit signed bounds into 64-bit bounds 1376 - * but must be positive otherwise set to worse case bounds 1377 - * and refine later from tnum. 1380 + 1381 + /* Attempt to pull 32-bit signed bounds into 64-bit bounds but must 1382 + * be positive otherwise set to worse case bounds and refine later 1383 + * from tnum. 1378 1384 */ 1379 - if (reg->s32_min_value >= 0 && reg->s32_max_value >= 0) 1380 - reg->smax_value = reg->s32_max_value; 1381 - else 1382 - reg->smax_value = U32_MAX; 1383 - if (reg->s32_min_value >= 0) 1385 + if (__reg32_bound_s64(reg->s32_min_value) && 1386 + __reg32_bound_s64(reg->s32_max_value)) { 1384 1387 reg->smin_value = reg->s32_min_value; 1385 - else 1388 + reg->smax_value = reg->s32_max_value; 1389 + } else { 1386 1390 reg->smin_value = 0; 1391 + reg->smax_value = U32_MAX; 1392 + } 1387 1393 } 1388 1394 1389 1395 static void __reg_combine_32_into_64(struct bpf_reg_state *reg) ··· 2387 2381 */ 2388 2382 if (insn->src_reg != BPF_REG_FP) 2389 2383 return 0; 2390 - if (BPF_SIZE(insn->code) != BPF_DW) 2391 - return 0; 2392 2384 2393 2385 /* dreg = *(u64 *)[fp - off] was a fill from the stack. 
2394 2386 * that [fp - off] slot contains scalar that needs to be ··· 2408 2404 return -ENOTSUPP; 2409 2405 /* scalars can only be spilled into stack */ 2410 2406 if (insn->dst_reg != BPF_REG_FP) 2411 - return 0; 2412 - if (BPF_SIZE(insn->code) != BPF_DW) 2413 2407 return 0; 2414 2408 spi = (-insn->off - 1) / BPF_REG_SIZE; 2415 2409 if (spi >= 64) { ··· 4555 4553 4556 4554 if (insn->imm == BPF_CMPXCHG) { 4557 4555 /* Check comparison of R0 with memory location */ 4558 - err = check_reg_arg(env, BPF_REG_0, SRC_OP); 4556 + const u32 aux_reg = BPF_REG_0; 4557 + 4558 + err = check_reg_arg(env, aux_reg, SRC_OP); 4559 4559 if (err) 4560 4560 return err; 4561 + 4562 + if (is_pointer_value(env, aux_reg)) { 4563 + verbose(env, "R%d leaks addr into mem\n", aux_reg); 4564 + return -EACCES; 4565 + } 4561 4566 } 4562 4567 4563 4568 if (is_pointer_value(env, insn->src_reg)) { ··· 4599 4590 load_reg = -1; 4600 4591 } 4601 4592 4602 - /* check whether we can read the memory */ 4593 + /* Check whether we can read the memory, with second call for fetch 4594 + * case to simulate the register fill. 4595 + */ 4603 4596 err = check_mem_access(env, insn_idx, insn->dst_reg, insn->off, 4604 - BPF_SIZE(insn->code), BPF_READ, load_reg, true); 4597 + BPF_SIZE(insn->code), BPF_READ, -1, true); 4598 + if (!err && load_reg >= 0) 4599 + err = check_mem_access(env, insn_idx, insn->dst_reg, insn->off, 4600 + BPF_SIZE(insn->code), BPF_READ, load_reg, 4601 + true); 4605 4602 if (err) 4606 4603 return err; 4607 4604 4608 - /* check whether we can write into the same memory */ 4605 + /* Check whether we can write into the same memory. 
*/ 4609 4606 err = check_mem_access(env, insn_idx, insn->dst_reg, insn->off, 4610 4607 BPF_SIZE(insn->code), BPF_WRITE, -1, true); 4611 4608 if (err) ··· 8379 8364 insn->dst_reg); 8380 8365 } 8381 8366 zext_32_to_64(dst_reg); 8367 + 8368 + __update_reg_bounds(dst_reg); 8369 + __reg_deduce_bounds(dst_reg); 8370 + __reg_bound_offset(dst_reg); 8382 8371 } 8383 8372 } else { 8384 8373 /* case: R = imm
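The verifier fix above replaces the asymmetric per-bound checks with a single predicate applied to both 32-bit signed bounds: they may be copied into the 64-bit signed bounds only when both are non-negative, otherwise sign extension would make the 64-bit range wrong. A userspace model of the corrected assignment (the `demo_*` helpers are invented purely for checking):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Model of __reg32_bound_s64(): a 32-bit signed bound is usable as a
 * 64-bit signed bound only if it lies in [0, S32_MAX]. */
static bool reg32_bound_s64(int64_t a)
{
	return a >= 0 && a <= INT32_MAX;
}

/* Mirror of the fixed __reg_assign_32_into_64() signed-bounds logic. */
static void assign_32_into_64(int32_t s32_min, int32_t s32_max,
			      int64_t *smin, int64_t *smax)
{
	if (reg32_bound_s64(s32_min) && reg32_bound_s64(s32_max)) {
		*smin = s32_min;
		*smax = s32_max;
	} else {
		*smin = 0;
		*smax = UINT32_MAX;	/* worst case; refined from the tnum later */
	}
}

static int64_t demo_smin(int32_t mn, int32_t mx)
{
	int64_t lo, hi;

	assign_32_into_64(mn, mx, &lo, &hi);
	return lo;
}

static int64_t demo_smax(int32_t mn, int32_t mx)
{
	int64_t lo, hi;

	assign_32_into_64(mn, mx, &lo, &hi);
	return hi;
}
```

The pre-fix code could take only `s32_min` while leaving `smax` at `U32_MAX`, producing an inconsistent pair; the predicate makes the decision all-or-nothing.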
+7
kernel/sched/wait.c
··· 238 238 } 239 239 EXPORT_SYMBOL_GPL(__wake_up_sync); /* For internal use only */ 240 240 241 + void __wake_up_pollfree(struct wait_queue_head *wq_head) 242 + { 243 + __wake_up(wq_head, TASK_NORMAL, 0, poll_to_key(EPOLLHUP | POLLFREE)); 244 + /* POLLFREE must have cleared the queue. */ 245 + WARN_ON_ONCE(waitqueue_active(wq_head)); 246 + } 247 + 241 248 /* 242 249 * Note: we use "set_current_state()" _after_ the wait-queue add, 243 250 * because we need a memory barrier there on SMP, so that any
+9 -7
kernel/time/timer.c
··· 2054 2054 EXPORT_SYMBOL(msleep_interruptible); 2055 2055 2056 2056 /** 2057 - * usleep_range - Sleep for an approximate time 2058 - * @min: Minimum time in usecs to sleep 2059 - * @max: Maximum time in usecs to sleep 2057 + * usleep_range_state - Sleep for an approximate time in a given state 2058 + * @min: Minimum time in usecs to sleep 2059 + * @max: Maximum time in usecs to sleep 2060 + * @state: State of the current task that will be while sleeping 2060 2061 * 2061 2062 * In non-atomic context where the exact wakeup time is flexible, use 2062 - * usleep_range() instead of udelay(). The sleep improves responsiveness 2063 + * usleep_range_state() instead of udelay(). The sleep improves responsiveness 2063 2064 * by avoiding the CPU-hogging busy-wait of udelay(), and the range reduces 2064 2065 * power usage by allowing hrtimers to take advantage of an already- 2065 2066 * scheduled interrupt instead of scheduling a new one just for this sleep. 2066 2067 */ 2067 - void __sched usleep_range(unsigned long min, unsigned long max) 2068 + void __sched usleep_range_state(unsigned long min, unsigned long max, 2069 + unsigned int state) 2068 2070 { 2069 2071 ktime_t exp = ktime_add_us(ktime_get(), min); 2070 2072 u64 delta = (u64)(max - min) * NSEC_PER_USEC; 2071 2073 2072 2074 for (;;) { 2073 - __set_current_state(TASK_UNINTERRUPTIBLE); 2075 + __set_current_state(state); 2074 2076 /* Do not return before the requested sleep time has elapsed */ 2075 2077 if (!schedule_hrtimeout_range(&exp, delta, HRTIMER_MODE_ABS)) 2076 2078 break; 2077 2079 } 2078 2080 } 2079 - EXPORT_SYMBOL(usleep_range); 2081 + EXPORT_SYMBOL(usleep_range_state);
+7 -1
kernel/trace/ftrace.c
··· 5217 5217 { 5218 5218 struct ftrace_direct_func *direct; 5219 5219 struct ftrace_func_entry *entry; 5220 + struct ftrace_hash *hash; 5220 5221 int ret = -ENODEV; 5221 5222 5222 5223 mutex_lock(&direct_mutex); ··· 5226 5225 if (!entry) 5227 5226 goto out_unlock; 5228 5227 5229 - if (direct_functions->count == 1) 5228 + hash = direct_ops.func_hash->filter_hash; 5229 + if (hash->count == 1) 5230 5230 unregister_ftrace_function(&direct_ops); 5231 5231 5232 5232 ret = ftrace_set_filter_ip(&direct_ops, ip, 1, 0); ··· 5542 5540 err = unregister_ftrace_function(ops); 5543 5541 remove_direct_functions_hash(hash, addr); 5544 5542 mutex_unlock(&direct_mutex); 5543 + 5544 + /* clean up for a possible subsequent register call */ 5545 + ops->func = NULL; 5546 + ops->trampoline = 0; 5545 5547 return err; 5546 5548 } 5547 5549 EXPORT_SYMBOL_GPL(unregister_ftrace_direct_multi);
+6 -5
kernel/trace/trace_events_synth.c
··· 1237 1237 argv + consumed, &consumed, 1238 1238 &field_version); 1239 1239 if (IS_ERR(field)) { 1240 - argv_free(argv); 1241 1240 ret = PTR_ERR(field); 1242 - goto err; 1241 + goto err_free_arg; 1243 1242 } 1244 1243 1245 1244 /* ··· 1261 1262 if (cmd_version > 1 && n_fields_this_loop >= 1) { 1262 1263 synth_err(SYNTH_ERR_INVALID_CMD, errpos(field_str)); 1263 1264 ret = -EINVAL; 1264 - goto err; 1265 + goto err_free_arg; 1265 1266 } 1266 1267 1267 1268 fields[n_fields++] = field; 1268 1269 if (n_fields == SYNTH_FIELDS_MAX) { 1269 1270 synth_err(SYNTH_ERR_TOO_MANY_FIELDS, 0); 1270 1271 ret = -EINVAL; 1271 - goto err; 1272 + goto err_free_arg; 1272 1273 } 1273 1274 1274 1275 n_fields_this_loop++; 1275 1276 } 1277 + argv_free(argv); 1276 1278 1277 1279 if (consumed < argc) { 1278 1280 synth_err(SYNTH_ERR_INVALID_CMD, 0); ··· 1281 1281 goto err; 1282 1282 } 1283 1283 1284 - argv_free(argv); 1285 1284 } 1286 1285 1287 1286 if (n_fields == 0) { ··· 1306 1307 kfree(saved_fields); 1307 1308 1308 1309 return ret; 1310 + err_free_arg: 1311 + argv_free(argv); 1309 1312 err: 1310 1313 for (i = 0; i < n_fields; i++) 1311 1314 free_synth_field(fields[i]);
+1 -1
mm/Kconfig
··· 428 428 # UP and nommu archs use km based percpu allocator 429 429 # 430 430 config NEED_PER_CPU_KM 431 - depends on !SMP 431 + depends on !SMP || !MMU 432 432 bool 433 433 default y 434 434
+7
mm/backing-dev.c
··· 945 945 wb_shutdown(&bdi->wb); 946 946 cgwb_bdi_unregister(bdi); 947 947 948 + /* 949 + * If this BDI's min ratio has been set, use bdi_set_min_ratio() to 950 + * update the global bdi_min_ratio. 951 + */ 952 + if (bdi->min_ratio) 953 + bdi_set_min_ratio(bdi, 0); 954 + 948 955 if (bdi->dev) { 949 956 bdi_debug_unregister(bdi); 950 957 device_unregister(bdi->dev);
+7 -13
mm/damon/core.c
··· 282 282 for (i = 0; i < nr_ids; i++) { 283 283 t = damon_new_target(ids[i]); 284 284 if (!t) { 285 - pr_err("Failed to alloc damon_target\n"); 286 285 /* The caller should do cleanup of the ids itself */ 287 286 damon_for_each_target_safe(t, next, ctx) 288 287 damon_destroy_target(t); ··· 311 312 unsigned long aggr_int, unsigned long primitive_upd_int, 312 313 unsigned long min_nr_reg, unsigned long max_nr_reg) 313 314 { 314 - if (min_nr_reg < 3) { 315 - pr_err("min_nr_regions (%lu) must be at least 3\n", 316 - min_nr_reg); 315 + if (min_nr_reg < 3) 317 316 return -EINVAL; 318 - } 319 - if (min_nr_reg > max_nr_reg) { 320 - pr_err("invalid nr_regions. min (%lu) > max (%lu)\n", 321 - min_nr_reg, max_nr_reg); 317 + if (min_nr_reg > max_nr_reg) 322 318 return -EINVAL; 323 - } 324 319 325 320 ctx->sample_interval = sample_int; 326 321 ctx->aggr_interval = aggr_int; ··· 973 980 974 981 static void kdamond_usleep(unsigned long usecs) 975 982 { 976 - if (usecs > 100 * 1000) 977 - schedule_timeout_interruptible(usecs_to_jiffies(usecs)); 983 + /* See Documentation/timers/timers-howto.rst for the thresholds */ 984 + if (usecs > 20 * USEC_PER_MSEC) 985 + schedule_timeout_idle(usecs_to_jiffies(usecs)); 978 986 else 979 - usleep_range(usecs, usecs + 1); 987 + usleep_idle_range(usecs, usecs + 1); 980 988 } 981 989 982 990 /* Returns negative error code if it's not activated but should return */ ··· 1032 1038 ctx->callback.after_sampling(ctx)) 1033 1039 done = true; 1034 1040 1035 - usleep_range(ctx->sample_interval, ctx->sample_interval + 1); 1041 + kdamond_usleep(ctx->sample_interval); 1036 1042 1037 1043 if (ctx->primitive.check_accesses) 1038 1044 max_nr_accesses = ctx->primitive.check_accesses(ctx);
+1 -3
mm/damon/dbgfs.c
··· 210 210 &wmarks.low, &parsed); 211 211 if (ret != 18) 212 212 break; 213 - if (!damos_action_valid(action)) { 214 - pr_err("wrong action %d\n", action); 213 + if (!damos_action_valid(action)) 215 214 goto fail; 216 - } 217 215 218 216 pos += parsed; 219 217 scheme = damon_new_scheme(min_sz, max_sz, min_nr_a, max_nr_a,
+45 -50
mm/damon/vaddr-test.h
··· 135 135 struct damon_addr_range *three_regions, 136 136 unsigned long *expected, int nr_expected) 137 137 { 138 - struct damon_ctx *ctx = damon_new_ctx(); 139 138 struct damon_target *t; 140 139 struct damon_region *r; 141 140 int i; ··· 144 145 r = damon_new_region(regions[i * 2], regions[i * 2 + 1]); 145 146 damon_add_region(r, t); 146 147 } 147 - damon_add_target(ctx, t); 148 148 149 149 damon_va_apply_three_regions(t, three_regions); 150 150 ··· 152 154 KUNIT_EXPECT_EQ(test, r->ar.start, expected[i * 2]); 153 155 KUNIT_EXPECT_EQ(test, r->ar.end, expected[i * 2 + 1]); 154 156 } 155 - 156 - damon_destroy_ctx(ctx); 157 157 } 158 158 159 159 /* ··· 248 252 new_three_regions, expected, ARRAY_SIZE(expected)); 249 253 } 250 254 251 - static void damon_test_split_evenly(struct kunit *test) 255 + static void damon_test_split_evenly_fail(struct kunit *test, 256 + unsigned long start, unsigned long end, unsigned int nr_pieces) 252 257 { 253 - struct damon_ctx *c = damon_new_ctx(); 254 - struct damon_target *t; 255 - struct damon_region *r; 256 - unsigned long i; 257 - 258 - KUNIT_EXPECT_EQ(test, damon_va_evenly_split_region(NULL, NULL, 5), 259 - -EINVAL); 260 - 261 - t = damon_new_target(42); 262 - r = damon_new_region(0, 100); 263 - KUNIT_EXPECT_EQ(test, damon_va_evenly_split_region(t, r, 0), -EINVAL); 258 + struct damon_target *t = damon_new_target(42); 259 + struct damon_region *r = damon_new_region(start, end); 264 260 265 261 damon_add_region(r, t); 266 - KUNIT_EXPECT_EQ(test, damon_va_evenly_split_region(t, r, 10), 0); 267 - KUNIT_EXPECT_EQ(test, damon_nr_regions(t), 10u); 268 - 269 - i = 0; 270 - damon_for_each_region(r, t) { 271 - KUNIT_EXPECT_EQ(test, r->ar.start, i++ * 10); 272 - KUNIT_EXPECT_EQ(test, r->ar.end, i * 10); 273 - } 274 - damon_free_target(t); 275 - 276 - t = damon_new_target(42); 277 - r = damon_new_region(5, 59); 278 - damon_add_region(r, t); 279 - KUNIT_EXPECT_EQ(test, damon_va_evenly_split_region(t, r, 5), 0); 280 - KUNIT_EXPECT_EQ(test, 
damon_nr_regions(t), 5u); 281 - 282 - i = 0; 283 - damon_for_each_region(r, t) { 284 - if (i == 4) 285 - break; 286 - KUNIT_EXPECT_EQ(test, r->ar.start, 5 + 10 * i++); 287 - KUNIT_EXPECT_EQ(test, r->ar.end, 5 + 10 * i); 288 - } 289 - KUNIT_EXPECT_EQ(test, r->ar.start, 5 + 10 * i); 290 - KUNIT_EXPECT_EQ(test, r->ar.end, 59ul); 291 - damon_free_target(t); 292 - 293 - t = damon_new_target(42); 294 - r = damon_new_region(5, 6); 295 - damon_add_region(r, t); 296 - KUNIT_EXPECT_EQ(test, damon_va_evenly_split_region(t, r, 2), -EINVAL); 262 + KUNIT_EXPECT_EQ(test, 263 + damon_va_evenly_split_region(t, r, nr_pieces), -EINVAL); 297 264 KUNIT_EXPECT_EQ(test, damon_nr_regions(t), 1u); 298 265 299 266 damon_for_each_region(r, t) { 300 - KUNIT_EXPECT_EQ(test, r->ar.start, 5ul); 301 - KUNIT_EXPECT_EQ(test, r->ar.end, 6ul); 267 + KUNIT_EXPECT_EQ(test, r->ar.start, start); 268 + KUNIT_EXPECT_EQ(test, r->ar.end, end); 302 269 } 270 + 303 271 damon_free_target(t); 304 - damon_destroy_ctx(c); 272 + } 273 + 274 + static void damon_test_split_evenly_succ(struct kunit *test, 275 + unsigned long start, unsigned long end, unsigned int nr_pieces) 276 + { 277 + struct damon_target *t = damon_new_target(42); 278 + struct damon_region *r = damon_new_region(start, end); 279 + unsigned long expected_width = (end - start) / nr_pieces; 280 + unsigned long i = 0; 281 + 282 + damon_add_region(r, t); 283 + KUNIT_EXPECT_EQ(test, 284 + damon_va_evenly_split_region(t, r, nr_pieces), 0); 285 + KUNIT_EXPECT_EQ(test, damon_nr_regions(t), nr_pieces); 286 + 287 + damon_for_each_region(r, t) { 288 + if (i == nr_pieces - 1) 289 + break; 290 + KUNIT_EXPECT_EQ(test, 291 + r->ar.start, start + i++ * expected_width); 292 + KUNIT_EXPECT_EQ(test, r->ar.end, start + i * expected_width); 293 + } 294 + KUNIT_EXPECT_EQ(test, r->ar.start, start + i * expected_width); 295 + KUNIT_EXPECT_EQ(test, r->ar.end, end); 296 + damon_free_target(t); 297 + } 298 + 299 + static void damon_test_split_evenly(struct kunit *test) 300 + { 
301 + KUNIT_EXPECT_EQ(test, damon_va_evenly_split_region(NULL, NULL, 5), 302 + -EINVAL); 303 + 304 + damon_test_split_evenly_fail(test, 0, 100, 0); 305 + damon_test_split_evenly_succ(test, 0, 100, 10); 306 + damon_test_split_evenly_succ(test, 5, 59, 5); 307 + damon_test_split_evenly_fail(test, 5, 6, 2); 305 308 } 306 309 307 310 static struct kunit_case damon_test_cases[] = {
-1
mm/damon/vaddr.c
··· 627 627 case DAMOS_STAT: 628 628 return 0; 629 629 default: 630 - pr_warn("Wrong action %d\n", scheme->action); 631 630 return -EINVAL; 632 631 } 633 632
-2
mm/filemap.c
··· 3253 3253 goto skip; 3254 3254 if (!PageUptodate(page) || PageReadahead(page)) 3255 3255 goto skip; 3256 - if (PageHWPoison(page)) 3257 - goto skip; 3258 3256 if (!trylock_page(page)) 3259 3257 goto skip; 3260 3258 if (page->mapping != mapping)
+1 -1
mm/hugetlb.c
··· 2973 2973 struct huge_bootmem_page *m = NULL; /* initialize for clang */ 2974 2974 int nr_nodes, node; 2975 2975 2976 - if (nid >= nr_online_nodes) 2976 + if (nid != NUMA_NO_NODE && nid >= nr_online_nodes) 2977 2977 return 0; 2978 2978 /* do node specific alloc */ 2979 2979 if (nid != NUMA_NO_NODE) {
+53 -53
mm/memcontrol.c
··· 776 776 rcu_read_unlock(); 777 777 } 778 778 779 - /* 780 - * mod_objcg_mlstate() may be called with irq enabled, so 781 - * mod_memcg_lruvec_state() should be used. 782 - */ 783 - static inline void mod_objcg_mlstate(struct obj_cgroup *objcg, 784 - struct pglist_data *pgdat, 785 - enum node_stat_item idx, int nr) 786 - { 787 - struct mem_cgroup *memcg; 788 - struct lruvec *lruvec; 789 - 790 - rcu_read_lock(); 791 - memcg = obj_cgroup_memcg(objcg); 792 - lruvec = mem_cgroup_lruvec(memcg, pgdat); 793 - mod_memcg_lruvec_state(lruvec, idx, nr); 794 - rcu_read_unlock(); 795 - } 796 - 797 779 /** 798 780 * __count_memcg_events - account VM events in a cgroup 799 781 * @memcg: the memory cgroup ··· 2119 2137 } 2120 2138 #endif 2121 2139 2122 - /* 2123 - * Most kmem_cache_alloc() calls are from user context. The irq disable/enable 2124 - * sequence used in this case to access content from object stock is slow. 2125 - * To optimize for user context access, there are now two object stocks for 2126 - * task context and interrupt context access respectively. 2127 - * 2128 - * The task context object stock can be accessed by disabling preemption only 2129 - * which is cheap in non-preempt kernel. The interrupt context object stock 2130 - * can only be accessed after disabling interrupt. User context code can 2131 - * access interrupt object stock, but not vice versa. 
2132 - */ 2133 - static inline struct obj_stock *get_obj_stock(unsigned long *pflags) 2134 - { 2135 - struct memcg_stock_pcp *stock; 2136 - 2137 - if (likely(in_task())) { 2138 - *pflags = 0UL; 2139 - preempt_disable(); 2140 - stock = this_cpu_ptr(&memcg_stock); 2141 - return &stock->task_obj; 2142 - } 2143 - 2144 - local_irq_save(*pflags); 2145 - stock = this_cpu_ptr(&memcg_stock); 2146 - return &stock->irq_obj; 2147 - } 2148 - 2149 - static inline void put_obj_stock(unsigned long flags) 2150 - { 2151 - if (likely(in_task())) 2152 - preempt_enable(); 2153 - else 2154 - local_irq_restore(flags); 2155 - } 2156 - 2157 2140 /** 2158 2141 * consume_stock: Try to consume stocked charge on this cpu. 2159 2142 * @memcg: memcg to consume from. ··· 2762 2815 * reclaimable. So those GFP bits should be masked off. 2763 2816 */ 2764 2817 #define OBJCGS_CLEAR_MASK (__GFP_DMA | __GFP_RECLAIMABLE | __GFP_ACCOUNT) 2818 + 2819 + /* 2820 + * Most kmem_cache_alloc() calls are from user context. The irq disable/enable 2821 + * sequence used in this case to access content from object stock is slow. 2822 + * To optimize for user context access, there are now two object stocks for 2823 + * task context and interrupt context access respectively. 2824 + * 2825 + * The task context object stock can be accessed by disabling preemption only 2826 + * which is cheap in non-preempt kernel. The interrupt context object stock 2827 + * can only be accessed after disabling interrupt. User context code can 2828 + * access interrupt object stock, but not vice versa. 
2829 + */ 2830 + static inline struct obj_stock *get_obj_stock(unsigned long *pflags) 2831 + { 2832 + struct memcg_stock_pcp *stock; 2833 + 2834 + if (likely(in_task())) { 2835 + *pflags = 0UL; 2836 + preempt_disable(); 2837 + stock = this_cpu_ptr(&memcg_stock); 2838 + return &stock->task_obj; 2839 + } 2840 + 2841 + local_irq_save(*pflags); 2842 + stock = this_cpu_ptr(&memcg_stock); 2843 + return &stock->irq_obj; 2844 + } 2845 + 2846 + static inline void put_obj_stock(unsigned long flags) 2847 + { 2848 + if (likely(in_task())) 2849 + preempt_enable(); 2850 + else 2851 + local_irq_restore(flags); 2852 + } 2853 + 2854 + /* 2855 + * mod_objcg_mlstate() may be called with irq enabled, so 2856 + * mod_memcg_lruvec_state() should be used. 2857 + */ 2858 + static inline void mod_objcg_mlstate(struct obj_cgroup *objcg, 2859 + struct pglist_data *pgdat, 2860 + enum node_stat_item idx, int nr) 2861 + { 2862 + struct mem_cgroup *memcg; 2863 + struct lruvec *lruvec; 2864 + 2865 + rcu_read_lock(); 2866 + memcg = obj_cgroup_memcg(objcg); 2867 + lruvec = mem_cgroup_lruvec(memcg, pgdat); 2868 + mod_memcg_lruvec_state(lruvec, idx, nr); 2869 + rcu_read_unlock(); 2870 + } 2765 2871 2766 2872 int memcg_alloc_page_obj_cgroups(struct page *page, struct kmem_cache *s, 2767 2873 gfp_t gfp, bool new_page)
+9 -6
mm/slub.c
··· 5081 5081 unsigned long max; 5082 5082 unsigned long count; 5083 5083 struct location *loc; 5084 + loff_t idx; 5084 5085 }; 5085 5086 5086 5087 static struct dentry *slab_debugfs_root; ··· 6053 6052 #if defined(CONFIG_SLUB_DEBUG) && defined(CONFIG_DEBUG_FS) 6054 6053 static int slab_debugfs_show(struct seq_file *seq, void *v) 6055 6054 { 6056 - 6057 - struct location *l; 6058 - unsigned int idx = *(unsigned int *)v; 6059 6055 struct loc_track *t = seq->private; 6056 + struct location *l; 6057 + unsigned long idx; 6060 6058 6059 + idx = (unsigned long) t->idx; 6061 6060 if (idx < t->count) { 6062 6061 l = &t->loc[idx]; 6063 6062 ··· 6106 6105 { 6107 6106 struct loc_track *t = seq->private; 6108 6107 6109 - v = ppos; 6110 - ++*ppos; 6108 + t->idx = ++(*ppos); 6111 6109 if (*ppos <= t->count) 6112 - return v; 6110 + return ppos; 6113 6111 6114 6112 return NULL; 6115 6113 } 6116 6114 6117 6115 static void *slab_debugfs_start(struct seq_file *seq, loff_t *ppos) 6118 6116 { 6117 + struct loc_track *t = seq->private; 6118 + 6119 + t->idx = *ppos; 6119 6120 return ppos; 6120 6121 } 6121 6122
+1 -1
net/core/skbuff.c
··· 832 832 ntohs(skb->protocol), skb->pkt_type, skb->skb_iif); 833 833 834 834 if (dev) 835 - printk("%sdev name=%s feat=0x%pNF\n", 835 + printk("%sdev name=%s feat=%pNF\n", 836 836 level, dev->name, &dev->features); 837 837 if (sk) 838 838 printk("%ssk family=%hu type=%u proto=%u\n",
+1 -3
net/ipv4/inet_diag.c
··· 261 261 r->idiag_state = sk->sk_state; 262 262 r->idiag_timer = 0; 263 263 r->idiag_retrans = 0; 264 + r->idiag_expires = 0; 264 265 265 266 if (inet_diag_msg_attrs_fill(sk, skb, r, ext, 266 267 sk_user_ns(NETLINK_CB(cb->skb).sk), ··· 315 314 r->idiag_retrans = icsk->icsk_probes_out; 316 315 r->idiag_expires = 317 316 jiffies_delta_to_msecs(sk->sk_timer.expires - jiffies); 318 - } else { 319 - r->idiag_timer = 0; 320 - r->idiag_expires = 0; 321 317 } 322 318 323 319 if ((ext & (1 << (INET_DIAG_INFO - 1))) && handler->idiag_info_size) {
-1
net/ipv6/sit.c
··· 1933 1933 return 0; 1934 1934 1935 1935 err_reg_dev: 1936 - ipip6_dev_free(sitn->fb_tunnel_dev); 1937 1936 free_netdev(sitn->fb_tunnel_dev); 1938 1937 err_alloc_dev: 1939 1938 return err;
+3 -2
net/mac80211/agg-rx.c
··· 9 9 * Copyright 2007, Michael Wu <flamingice@sourmilk.net> 10 10 * Copyright 2007-2010, Intel Corporation 11 11 * Copyright(c) 2015-2017 Intel Deutschland GmbH 12 - * Copyright (C) 2018-2020 Intel Corporation 12 + * Copyright (C) 2018-2021 Intel Corporation 13 13 */ 14 14 15 15 /** ··· 191 191 sband = ieee80211_get_sband(sdata); 192 192 if (!sband) 193 193 return; 194 - he_cap = ieee80211_get_he_iftype_cap(sband, sdata->vif.type); 194 + he_cap = ieee80211_get_he_iftype_cap(sband, 195 + ieee80211_vif_type_p2p(&sdata->vif)); 195 196 if (!he_cap) 196 197 return; 197 198
+11 -5
net/mac80211/agg-tx.c
··· 9 9 * Copyright 2007, Michael Wu <flamingice@sourmilk.net> 10 10 * Copyright 2007-2010, Intel Corporation 11 11 * Copyright(c) 2015-2017 Intel Deutschland GmbH 12 - * Copyright (C) 2018 - 2020 Intel Corporation 12 + * Copyright (C) 2018 - 2021 Intel Corporation 13 13 */ 14 14 15 15 #include <linux/ieee80211.h> ··· 106 106 mgmt->u.action.u.addba_req.start_seq_num = 107 107 cpu_to_le16(start_seq_num << 4); 108 108 109 - ieee80211_tx_skb(sdata, skb); 109 + ieee80211_tx_skb_tid(sdata, skb, tid); 110 110 } 111 111 112 112 void ieee80211_send_bar(struct ieee80211_vif *vif, u8 *ra, u16 tid, u16 ssn) ··· 213 213 struct ieee80211_txq *txq = sta->sta.txq[tid]; 214 214 struct txq_info *txqi; 215 215 216 + lockdep_assert_held(&sta->ampdu_mlme.mtx); 217 + 216 218 if (!txq) 217 219 return; 218 220 ··· 292 290 ieee80211_assign_tid_tx(sta, tid, NULL); 293 291 294 292 ieee80211_agg_splice_finish(sta->sdata, tid); 295 - ieee80211_agg_start_txq(sta, tid, false); 296 293 297 294 kfree_rcu(tid_tx, rcu_head); 298 295 } ··· 481 480 482 481 /* send AddBA request */ 483 482 ieee80211_send_addba_request(sdata, sta->sta.addr, tid, 484 - tid_tx->dialog_token, 485 - sta->tid_seq[tid] >> 4, 483 + tid_tx->dialog_token, tid_tx->ssn, 486 484 buf_size, tid_tx->timeout); 487 485 488 486 WARN_ON(test_and_set_bit(HT_AGG_STATE_SENT_ADDBA, &tid_tx->state)); ··· 523 523 524 524 params.ssn = sta->tid_seq[tid] >> 4; 525 525 ret = drv_ampdu_action(local, sdata, &params); 526 + tid_tx->ssn = params.ssn; 526 527 if (ret == IEEE80211_AMPDU_TX_START_DELAY_ADDBA) { 527 528 return; 528 529 } else if (ret == IEEE80211_AMPDU_TX_START_IMMEDIATE) { ··· 890 889 { 891 890 struct ieee80211_sub_if_data *sdata = sta->sdata; 892 891 bool send_delba = false; 892 + bool start_txq = false; 893 893 894 894 ht_dbg(sdata, "Stopping Tx BA session for %pM tid %d\n", 895 895 sta->sta.addr, tid); ··· 908 906 send_delba = true; 909 907 910 908 ieee80211_remove_tid_tx(sta, tid); 909 + start_txq = true; 911 910 912 911 unlock_sta: 
913 912 spin_unlock_bh(&sta->lock); 913 + 914 + if (start_txq) 915 + ieee80211_agg_start_txq(sta, tid, false); 914 916 915 917 if (send_delba) 916 918 ieee80211_send_delba(sdata, sta->sta.addr, tid,
+4 -1
net/mac80211/driver-ops.h
··· 1219 1219 { 1220 1220 struct ieee80211_sub_if_data *sdata = vif_to_sdata(txq->txq.vif); 1221 1221 1222 - if (local->in_reconfig) 1222 + /* In reconfig don't transmit now, but mark for waking later */ 1223 + if (local->in_reconfig) { 1224 + set_bit(IEEE80211_TXQ_STOP_NETIF_TX, &txq->flags); 1223 1225 return; 1226 + } 1224 1227 1225 1228 if (!check_sdata_in_driver(sdata)) 1226 1229 return;
+10 -3
net/mac80211/mlme.c
··· 2452 2452 u16 tx_time) 2453 2453 { 2454 2454 struct ieee80211_if_managed *ifmgd = &sdata->u.mgd; 2455 - u16 tid = ieee80211_get_tid(hdr); 2456 - int ac = ieee80211_ac_from_tid(tid); 2457 - struct ieee80211_sta_tx_tspec *tx_tspec = &ifmgd->tx_tspec[ac]; 2455 + u16 tid; 2456 + int ac; 2457 + struct ieee80211_sta_tx_tspec *tx_tspec; 2458 2458 unsigned long now = jiffies; 2459 + 2460 + if (!ieee80211_is_data_qos(hdr->frame_control)) 2461 + return; 2462 + 2463 + tid = ieee80211_get_tid(hdr); 2464 + ac = ieee80211_ac_from_tid(tid); 2465 + tx_tspec = &ifmgd->tx_tspec[ac]; 2459 2466 2460 2467 if (likely(!tx_tspec->admitted_time)) 2461 2468 return;
+1
net/mac80211/rx.c
··· 2944 2944 if (!fwd_skb) 2945 2945 goto out; 2946 2946 2947 + fwd_skb->dev = sdata->dev; 2947 2948 fwd_hdr = (struct ieee80211_hdr *) fwd_skb->data; 2948 2949 fwd_hdr->frame_control &= ~cpu_to_le16(IEEE80211_FCTL_RETRY); 2949 2950 info = IEEE80211_SKB_CB(fwd_skb);
+12 -9
net/mac80211/sta_info.c
··· 644 644 /* check if STA exists already */ 645 645 if (sta_info_get_bss(sdata, sta->sta.addr)) { 646 646 err = -EEXIST; 647 - goto out_err; 647 + goto out_cleanup; 648 648 } 649 649 650 650 sinfo = kzalloc(sizeof(struct station_info), GFP_KERNEL); 651 651 if (!sinfo) { 652 652 err = -ENOMEM; 653 - goto out_err; 653 + goto out_cleanup; 654 654 } 655 655 656 656 local->num_sta++; ··· 667 667 668 668 list_add_tail_rcu(&sta->list, &local->sta_list); 669 669 670 + /* update channel context before notifying the driver about state 671 + * change; this lets the driver use the updated channel context right away. 672 + */ 673 + if (sta->sta_state >= IEEE80211_STA_ASSOC) { 674 + ieee80211_recalc_min_chandef(sta->sdata); 675 + if (!sta->sta.support_p2p_ps) 676 + ieee80211_recalc_p2p_go_ps_allowed(sta->sdata); 677 + } 678 + 670 679 /* notify driver */ 671 680 err = sta_info_insert_drv_state(local, sdata, sta); 672 681 if (err) 673 682 goto out_remove; 674 683 675 684 set_sta_flag(sta, WLAN_STA_INSERTED); 676 - 677 - if (sta->sta_state >= IEEE80211_STA_ASSOC) { 678 - ieee80211_recalc_min_chandef(sta->sdata); 679 - if (!sta->sta.support_p2p_ps) 680 - ieee80211_recalc_p2p_go_ps_allowed(sta->sdata); 681 - } 682 685 683 686 /* accept BA sessions now */ 684 687 clear_sta_flag(sta, WLAN_STA_BLOCK_BA); ··· 709 706 out_drop_sta: 710 707 local->num_sta--; 711 708 synchronize_net(); 709 + out_cleanup: 712 710 cleanup_single_sta(sta); 713 - out_err: 714 711 mutex_unlock(&local->sta_mtx); 715 712 kfree(sinfo); 716 713 rcu_read_lock();
+2
net/mac80211/sta_info.h
··· 176 176 * @failed_bar_ssn: ssn of the last failed BAR tx attempt 177 177 * @bar_pending: BAR needs to be re-sent 178 178 * @amsdu: support A-MSDU within A-MPDU 179 + * @ssn: starting sequence number of the session 179 180 * 180 181 * This structure's lifetime is managed by RCU, assignments to 181 182 * the array holding it must hold the aggregation mutex. ··· 200 199 u8 stop_initiator; 201 200 bool tx_stop; 202 201 u16 buf_size; 202 + u16 ssn; 203 203 204 204 u16 failed_bar_ssn; 205 205 bool bar_pending;
+5 -5
net/mac80211/tx.c
··· 1822 1822 struct ieee80211_tx_info *info = IEEE80211_SKB_CB(tx->skb); 1823 1823 ieee80211_tx_result res = TX_CONTINUE; 1824 1824 1825 + if (!ieee80211_hw_check(&tx->local->hw, HAS_RATE_CONTROL)) 1826 + CALL_TXH(ieee80211_tx_h_rate_ctrl); 1827 + 1825 1828 if (unlikely(info->flags & IEEE80211_TX_INTFL_RETRANSMISSION)) { 1826 1829 __skb_queue_tail(&tx->skbs, tx->skb); 1827 1830 tx->skb = NULL; 1828 1831 goto txh_done; 1829 1832 } 1830 - 1831 - if (!ieee80211_hw_check(&tx->local->hw, HAS_RATE_CONTROL)) 1832 - CALL_TXH(ieee80211_tx_h_rate_ctrl); 1833 1833 1834 1834 CALL_TXH(ieee80211_tx_h_michael_mic_add); 1835 1835 CALL_TXH(ieee80211_tx_h_sequence); ··· 4191 4191 4192 4192 ieee80211_aggr_check(sdata, sta, skb); 4193 4193 4194 + sk_pacing_shift_update(skb->sk, sdata->local->hw.tx_sk_pacing_shift); 4195 + 4194 4196 if (sta) { 4195 4197 struct ieee80211_fast_tx *fast_tx; 4196 - 4197 - sk_pacing_shift_update(skb->sk, sdata->local->hw.tx_sk_pacing_shift); 4198 4198 4199 4199 fast_tx = rcu_dereference(sta->fast_tx); 4200 4200
+14 -9
net/mac80211/util.c
··· 943 943 struct ieee802_11_elems *elems) 944 944 { 945 945 const void *data = elem->data + 1; 946 - u8 len = elem->datalen - 1; 946 + u8 len; 947 + 948 + if (!elem->datalen) 949 + return; 950 + 951 + len = elem->datalen - 1; 947 952 948 953 switch (elem->data[0]) { 949 954 case WLAN_EID_EXT_HE_MU_EDCA: ··· 2068 2063 chandef.chan = chan; 2069 2064 2070 2065 skb = ieee80211_probereq_get(&local->hw, src, ssid, ssid_len, 2071 - 100 + ie_len); 2066 + local->scan_ies_len + ie_len); 2072 2067 if (!skb) 2073 2068 return NULL; 2074 2069 ··· 2651 2646 mutex_unlock(&local->sta_mtx); 2652 2647 } 2653 2648 2649 + /* 2650 + * If this is for hw restart things are still running. 2651 + * We may want to change that later, however. 2652 + */ 2653 + if (local->open_count && (!suspended || reconfig_due_to_wowlan)) 2654 + drv_reconfig_complete(local, IEEE80211_RECONFIG_TYPE_RESTART); 2655 + 2654 2656 if (local->in_reconfig) { 2655 2657 local->in_reconfig = false; 2656 2658 barrier(); ··· 2675 2663 ieee80211_wake_queues_by_reason(hw, IEEE80211_MAX_QUEUE_MAP, 2676 2664 IEEE80211_QUEUE_STOP_REASON_SUSPEND, 2677 2665 false); 2678 - 2679 - /* 2680 - * If this is for hw restart things are still running. 2681 - * We may want to change that later, however. 2682 - */ 2683 - if (local->open_count && (!suspended || reconfig_due_to_wowlan)) 2684 - drv_reconfig_complete(local, IEEE80211_RECONFIG_TYPE_RESTART); 2685 2666 2686 2667 if (!suspended) 2687 2668 return 0;
+3
net/mptcp/pm_netlink.c
··· 700 700 701 701 msk_owned_by_me(msk); 702 702 703 + if (sk->sk_state == TCP_LISTEN) 704 + return; 705 + 703 706 if (!rm_list->nr) 704 707 return; 705 708
+4 -2
net/mptcp/protocol.c
··· 1527 1527 int ret = 0; 1528 1528 1529 1529 prev_ssk = ssk; 1530 - mptcp_flush_join_list(msk); 1530 + __mptcp_flush_join_list(msk); 1531 1531 ssk = mptcp_subflow_get_send(msk); 1532 1532 1533 1533 /* First check. If the ssk has changed since ··· 2914 2914 */ 2915 2915 if (WARN_ON_ONCE(!new_mptcp_sock)) { 2916 2916 tcp_sk(newsk)->is_mptcp = 0; 2917 - return newsk; 2917 + goto out; 2918 2918 } 2919 2919 2920 2920 /* acquire the 2nd reference for the owning socket */ ··· 2926 2926 MPTCP_MIB_MPCAPABLEPASSIVEFALLBACK); 2927 2927 } 2928 2928 2929 + out: 2930 + newsk->sk_kern_sock = kern; 2929 2931 return newsk; 2930 2932 } 2931 2933
-1
net/mptcp/sockopt.c
··· 543 543 case TCP_NODELAY: 544 544 case TCP_THIN_LINEAR_TIMEOUTS: 545 545 case TCP_CONGESTION: 546 - case TCP_ULP: 547 546 case TCP_CORK: 548 547 case TCP_KEEPIDLE: 549 548 case TCP_KEEPINTVL:
+3 -2
net/packet/af_packet.c
··· 4496 4496 } 4497 4497 4498 4498 out_free_pg_vec: 4499 - bitmap_free(rx_owner_map); 4500 - if (pg_vec) 4499 + if (pg_vec) { 4500 + bitmap_free(rx_owner_map); 4501 4501 free_pg_vec(pg_vec, order, req->tp_block_nr); 4502 + } 4502 4503 out: 4503 4504 return err; 4504 4505 }
+1
net/phonet/pep.c
··· 868 868 869 869 err = pep_accept_conn(newsk, skb); 870 870 if (err) { 871 + __sock_put(sk); 871 872 sock_put(newsk); 872 873 newsk = NULL; 873 874 goto drop;
+1
net/rds/connection.c
··· 253 253 * should end up here, but if it 254 254 * does, reset/destroy the connection. 255 255 */ 256 + kfree(conn->c_path); 256 257 kmem_cache_free(rds_conn_slab, conn); 257 258 conn = ERR_PTR(-EOPNOTSUPP); 258 259 goto out;
+1
net/sched/cls_api.c
··· 3687 3687 entry->mpls_mangle.ttl = tcf_mpls_ttl(act); 3688 3688 break; 3689 3689 default: 3690 + err = -EOPNOTSUPP; 3690 3691 goto err_out_locked; 3691 3692 } 3692 3693 } else if (is_tcf_skbedit_ptype(act)) {
+1 -5
net/sched/sch_cake.c
··· 2736 2736 q->tins = kvcalloc(CAKE_MAX_TINS, sizeof(struct cake_tin_data), 2737 2737 GFP_KERNEL); 2738 2738 if (!q->tins) 2739 - goto nomem; 2739 + return -ENOMEM; 2740 2740 2741 2741 for (i = 0; i < CAKE_MAX_TINS; i++) { 2742 2742 struct cake_tin_data *b = q->tins + i; ··· 2766 2766 q->min_netlen = ~0; 2767 2767 q->min_adjlen = ~0; 2768 2768 return 0; 2769 - 2770 - nomem: 2771 - cake_destroy(sch); 2772 - return -ENOMEM; 2773 2769 } 2774 2770 2775 2771 static int cake_dump(struct Qdisc *sch, struct sk_buff *skb)
+2 -2
net/sched/sch_ets.c
··· 666 666 } 667 667 } 668 668 for (i = q->nbands; i < oldbands; i++) { 669 - qdisc_tree_flush_backlog(q->classes[i].qdisc); 670 - if (i >= q->nstrict) 669 + if (i >= q->nstrict && q->classes[i].qdisc->q.qlen) 671 670 list_del(&q->classes[i].alist); 671 + qdisc_tree_flush_backlog(q->classes[i].qdisc); 672 672 } 673 673 q->nstrict = nstrict; 674 674 memcpy(q->prio2band, priomap, sizeof(priomap));
+3 -1
net/smc/af_smc.c
··· 194 194 /* cleanup for a dangling non-blocking connect */ 195 195 if (smc->connect_nonblock && sk->sk_state == SMC_INIT) 196 196 tcp_abort(smc->clcsock->sk, ECONNABORTED); 197 - flush_work(&smc->connect_work); 197 + 198 + if (cancel_work_sync(&smc->connect_work)) 199 + sock_put(&smc->sk); /* sock_hold in smc_connect for passive closing */ 198 200 199 201 if (sk->sk_state == SMC_LISTEN) 200 202 /* smc_close_non_accepted() is called and acquires
+2 -1
net/vmw_vsock/virtio_transport_common.c
··· 1299 1299 space_available = virtio_transport_space_update(sk, pkt); 1300 1300 1301 1301 /* Update CID in case it has changed after a transport reset event */ 1302 - vsk->local_addr.svm_cid = dst.svm_cid; 1302 + if (vsk->local_addr.svm_cid != VMADDR_CID_ANY) 1303 + vsk->local_addr.svm_cid = dst.svm_cid; 1303 1304 1304 1305 if (space_available) 1305 1306 sk->sk_write_space(sk);
+28 -2
net/wireless/reg.c
··· 133 133 134 134 static void restore_regulatory_settings(bool reset_user, bool cached); 135 135 static void print_regdomain(const struct ieee80211_regdomain *rd); 136 + static void reg_process_hint(struct regulatory_request *reg_request); 136 137 137 138 static const struct ieee80211_regdomain *get_cfg80211_regdom(void) 138 139 { ··· 1099 1098 const struct firmware *fw; 1100 1099 void *db; 1101 1100 int err; 1101 + const struct ieee80211_regdomain *current_regdomain; 1102 + struct regulatory_request *request; 1102 1103 1103 1104 err = request_firmware(&fw, "regulatory.db", &reg_pdev->dev); 1104 1105 if (err) ··· 1121 1118 if (!IS_ERR_OR_NULL(regdb)) 1122 1119 kfree(regdb); 1123 1120 regdb = db; 1124 - rtnl_unlock(); 1125 1121 1122 + /* reset regulatory domain */ 1123 + current_regdomain = get_cfg80211_regdom(); 1124 + 1125 + request = kzalloc(sizeof(*request), GFP_KERNEL); 1126 + if (!request) { 1127 + err = -ENOMEM; 1128 + goto out_unlock; 1129 + } 1130 + 1131 + request->wiphy_idx = WIPHY_IDX_INVALID; 1132 + request->alpha2[0] = current_regdomain->alpha2[0]; 1133 + request->alpha2[1] = current_regdomain->alpha2[1]; 1134 + request->initiator = NL80211_REGDOM_SET_BY_CORE; 1135 + request->user_reg_hint_type = NL80211_USER_REG_HINT_USER; 1136 + 1137 + reg_process_hint(request); 1138 + 1139 + out_unlock: 1140 + rtnl_unlock(); 1126 1141 out: 1127 1142 release_firmware(fw); 1128 1143 return err; ··· 2359 2338 struct cfg80211_chan_def chandef = {}; 2360 2339 struct cfg80211_registered_device *rdev = wiphy_to_rdev(wiphy); 2361 2340 enum nl80211_iftype iftype; 2341 + bool ret; 2362 2342 2363 2343 wdev_lock(wdev); 2364 2344 iftype = wdev->iftype; ··· 2409 2387 case NL80211_IFTYPE_AP: 2410 2388 case NL80211_IFTYPE_P2P_GO: 2411 2389 case NL80211_IFTYPE_ADHOC: 2412 - return cfg80211_reg_can_beacon_relax(wiphy, &chandef, iftype); 2390 + wiphy_lock(wiphy); 2391 + ret = cfg80211_reg_can_beacon_relax(wiphy, &chandef, iftype); 2392 + wiphy_unlock(wiphy); 2393 + 2394 + return ret; 
2413 2395 case NL80211_IFTYPE_STATION: 2414 2396 case NL80211_IFTYPE_P2P_CLIENT: 2415 2397 return cfg80211_chandef_usable(wiphy, &chandef,
+2 -2
net/xdp/xsk.c
··· 677 677 struct xdp_sock *xs = xdp_sk(sk); 678 678 struct xsk_buff_pool *pool; 679 679 680 - sock_poll_wait(file, sock, wait); 681 - 682 680 if (unlikely(!xsk_is_bound(xs))) 683 681 return mask; 684 682 ··· 688 690 else 689 691 /* Poll needs to drive Tx also in copy mode */ 690 692 __xsk_sendmsg(sk); 693 + } else { 694 + sock_poll_wait(file, sock, wait); 691 695 } 692 696 693 697 if (xs->rx && !xskq_prod_is_empty(xs->rx))
+1
samples/ftrace/Makefile
··· 4 4 obj-$(CONFIG_SAMPLE_FTRACE_DIRECT) += ftrace-direct-too.o 5 5 obj-$(CONFIG_SAMPLE_FTRACE_DIRECT) += ftrace-direct-modify.o 6 6 obj-$(CONFIG_SAMPLE_FTRACE_DIRECT_MULTI) += ftrace-direct-multi.o 7 + obj-$(CONFIG_SAMPLE_FTRACE_DIRECT_MULTI) += ftrace-direct-multi-modify.o 7 8 8 9 CFLAGS_sample-trace-array.o := -I$(src) 9 10 obj-$(CONFIG_SAMPLE_TRACE_ARRAY) += sample-trace-array.o
+152
samples/ftrace/ftrace-direct-multi-modify.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + #include <linux/module.h> 3 + #include <linux/kthread.h> 4 + #include <linux/ftrace.h> 5 + #include <asm/asm-offsets.h> 6 + 7 + void my_direct_func1(unsigned long ip) 8 + { 9 + trace_printk("my direct func1 ip %lx\n", ip); 10 + } 11 + 12 + void my_direct_func2(unsigned long ip) 13 + { 14 + trace_printk("my direct func2 ip %lx\n", ip); 15 + } 16 + 17 + extern void my_tramp1(void *); 18 + extern void my_tramp2(void *); 19 + 20 + #ifdef CONFIG_X86_64 21 + 22 + asm ( 23 + " .pushsection .text, \"ax\", @progbits\n" 24 + " .type my_tramp1, @function\n" 25 + " .globl my_tramp1\n" 26 + " my_tramp1:" 27 + " pushq %rbp\n" 28 + " movq %rsp, %rbp\n" 29 + " pushq %rdi\n" 30 + " movq 8(%rbp), %rdi\n" 31 + " call my_direct_func1\n" 32 + " popq %rdi\n" 33 + " leave\n" 34 + " ret\n" 35 + " .size my_tramp1, .-my_tramp1\n" 36 + " .type my_tramp2, @function\n" 37 + "\n" 38 + " .globl my_tramp2\n" 39 + " my_tramp2:" 40 + " pushq %rbp\n" 41 + " movq %rsp, %rbp\n" 42 + " pushq %rdi\n" 43 + " movq 8(%rbp), %rdi\n" 44 + " call my_direct_func2\n" 45 + " popq %rdi\n" 46 + " leave\n" 47 + " ret\n" 48 + " .size my_tramp2, .-my_tramp2\n" 49 + " .popsection\n" 50 + ); 51 + 52 + #endif /* CONFIG_X86_64 */ 53 + 54 + #ifdef CONFIG_S390 55 + 56 + asm ( 57 + " .pushsection .text, \"ax\", @progbits\n" 58 + " .type my_tramp1, @function\n" 59 + " .globl my_tramp1\n" 60 + " my_tramp1:" 61 + " lgr %r1,%r15\n" 62 + " stmg %r0,%r5,"__stringify(__SF_GPRS)"(%r15)\n" 63 + " stg %r14,"__stringify(__SF_GPRS+8*8)"(%r15)\n" 64 + " aghi %r15,"__stringify(-STACK_FRAME_OVERHEAD)"\n" 65 + " stg %r1,"__stringify(__SF_BACKCHAIN)"(%r15)\n" 66 + " lgr %r2,%r0\n" 67 + " brasl %r14,my_direct_func1\n" 68 + " aghi %r15,"__stringify(STACK_FRAME_OVERHEAD)"\n" 69 + " lmg %r0,%r5,"__stringify(__SF_GPRS)"(%r15)\n" 70 + " lg %r14,"__stringify(__SF_GPRS+8*8)"(%r15)\n" 71 + " lgr %r1,%r0\n" 72 + " br %r1\n" 73 + " .size my_tramp1, .-my_tramp1\n" 74 + "\n" 75 + " .type my_tramp2, 
@function\n" 76 + " .globl my_tramp2\n" 77 + " my_tramp2:" 78 + " lgr %r1,%r15\n" 79 + " stmg %r0,%r5,"__stringify(__SF_GPRS)"(%r15)\n" 80 + " stg %r14,"__stringify(__SF_GPRS+8*8)"(%r15)\n" 81 + " aghi %r15,"__stringify(-STACK_FRAME_OVERHEAD)"\n" 82 + " stg %r1,"__stringify(__SF_BACKCHAIN)"(%r15)\n" 83 + " lgr %r2,%r0\n" 84 + " brasl %r14,my_direct_func2\n" 85 + " aghi %r15,"__stringify(STACK_FRAME_OVERHEAD)"\n" 86 + " lmg %r0,%r5,"__stringify(__SF_GPRS)"(%r15)\n" 87 + " lg %r14,"__stringify(__SF_GPRS+8*8)"(%r15)\n" 88 + " lgr %r1,%r0\n" 89 + " br %r1\n" 90 + " .size my_tramp2, .-my_tramp2\n" 91 + " .popsection\n" 92 + ); 93 + 94 + #endif /* CONFIG_S390 */ 95 + 96 + static unsigned long my_tramp = (unsigned long)my_tramp1; 97 + static unsigned long tramps[2] = { 98 + (unsigned long)my_tramp1, 99 + (unsigned long)my_tramp2, 100 + }; 101 + 102 + static struct ftrace_ops direct; 103 + 104 + static int simple_thread(void *arg) 105 + { 106 + static int t; 107 + int ret = 0; 108 + 109 + while (!kthread_should_stop()) { 110 + set_current_state(TASK_INTERRUPTIBLE); 111 + schedule_timeout(2 * HZ); 112 + 113 + if (ret) 114 + continue; 115 + t ^= 1; 116 + ret = modify_ftrace_direct_multi(&direct, tramps[t]); 117 + if (!ret) 118 + my_tramp = tramps[t]; 119 + WARN_ON_ONCE(ret); 120 + } 121 + 122 + return 0; 123 + } 124 + 125 + static struct task_struct *simple_tsk; 126 + 127 + static int __init ftrace_direct_multi_init(void) 128 + { 129 + int ret; 130 + 131 + ftrace_set_filter_ip(&direct, (unsigned long) wake_up_process, 0, 0); 132 + ftrace_set_filter_ip(&direct, (unsigned long) schedule, 0, 0); 133 + 134 + ret = register_ftrace_direct_multi(&direct, my_tramp); 135 + 136 + if (!ret) 137 + simple_tsk = kthread_run(simple_thread, NULL, "event-sample-fn"); 138 + return ret; 139 + } 140 + 141 + static void __exit ftrace_direct_multi_exit(void) 142 + { 143 + kthread_stop(simple_tsk); 144 + unregister_ftrace_direct_multi(&direct, my_tramp); 145 + } 146 + 147 + 
module_init(ftrace_direct_multi_init); 148 + module_exit(ftrace_direct_multi_exit); 149 + 150 + MODULE_AUTHOR("Jiri Olsa"); 151 + MODULE_DESCRIPTION("Example use case of using modify_ftrace_direct_multi()"); 152 + MODULE_LICENSE("GPL");
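The sample's worker thread alternates between the two trampolines with `t ^= 1` and only commits the new `my_tramp` when `modify_ftrace_direct_multi()` reports success. That update step in isolation, as a sketch with stub addresses and a stubbed-out modify result (no ftrace dependency):

```c
#include <assert.h>

/* Stand-in trampoline addresses; the real sample uses my_tramp1/my_tramp2. */
static const unsigned long tramps[2] = { 0x1000UL, 0x2000UL };

/* One iteration of the sample's update logic: flip the index, and only
 * switch trampolines when the (stubbed) modify call returned 0. */
static unsigned long step(int *t, unsigned long my_tramp, int modify_ret)
{
	*t ^= 1;		/* alternate between index 0 and 1 */
	if (!modify_ret)
		my_tramp = tramps[*t];
	return my_tramp;
}
```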
+1 -1
scripts/recordmcount.pl
··· 219 219 220 220 } elsif ($arch eq "s390" && $bits == 64) { 221 221 if ($cc =~ /-DCC_USING_HOTPATCH/) { 222 - $mcount_regex = "^\\s*([0-9a-fA-F]+):\\s*c0 04 00 00 00 00\\s*brcl\\s*0,[0-9a-f]+ <([^\+]*)>\$"; 222 + $mcount_regex = "^\\s*([0-9a-fA-F]+):\\s*c0 04 00 00 00 00\\s*(brcl\\s*0,|jgnop\\s*)[0-9a-f]+ <([^\+]*)>\$"; 223 223 $mcount_adjust = 0; 224 224 } 225 225 $alignment = 8;
+3
sound/core/control_compat.c
··· 264 264 struct snd_ctl_elem_value *data, 265 265 int type, int count) 266 266 { 267 + struct snd_ctl_elem_value32 __user *data32 = userdata; 267 268 int i, size; 268 269 269 270 if (type == SNDRV_CTL_ELEM_TYPE_BOOLEAN || ··· 281 280 if (copy_to_user(valuep, data->value.bytes.data, size)) 282 281 return -EFAULT; 283 282 } 283 + if (copy_to_user(&data32->id, &data->id, sizeof(data32->id))) 284 + return -EFAULT; 284 285 return 0; 285 286 } 286 287
+25 -12
sound/core/oss/pcm_oss.c
··· 147 147 * 148 148 * Return the maximum value for field PAR. 149 149 */ 150 - static unsigned int 150 + static int 151 151 snd_pcm_hw_param_value_max(const struct snd_pcm_hw_params *params, 152 152 snd_pcm_hw_param_t var, int *dir) 153 153 { ··· 682 682 struct snd_pcm_hw_params *oss_params, 683 683 struct snd_pcm_hw_params *slave_params) 684 684 { 685 - size_t s; 686 - size_t oss_buffer_size, oss_period_size, oss_periods; 687 - size_t min_period_size, max_period_size; 685 + ssize_t s; 686 + ssize_t oss_buffer_size; 687 + ssize_t oss_period_size, oss_periods; 688 + ssize_t min_period_size, max_period_size; 688 689 struct snd_pcm_runtime *runtime = substream->runtime; 689 690 size_t oss_frame_size; 690 691 691 692 oss_frame_size = snd_pcm_format_physical_width(params_format(oss_params)) * 692 693 params_channels(oss_params) / 8; 693 694 695 + oss_buffer_size = snd_pcm_hw_param_value_max(slave_params, 696 + SNDRV_PCM_HW_PARAM_BUFFER_SIZE, 697 + NULL); 698 + if (oss_buffer_size <= 0) 699 + return -EINVAL; 694 700 oss_buffer_size = snd_pcm_plug_client_size(substream, 695 - snd_pcm_hw_param_value_max(slave_params, SNDRV_PCM_HW_PARAM_BUFFER_SIZE, NULL)) * oss_frame_size; 696 - if (!oss_buffer_size) 701 + oss_buffer_size * oss_frame_size); 702 + if (oss_buffer_size <= 0) 697 703 return -EINVAL; 698 704 oss_buffer_size = rounddown_pow_of_two(oss_buffer_size); 699 705 if (atomic_read(&substream->mmap_count)) { ··· 736 730 737 731 min_period_size = snd_pcm_plug_client_size(substream, 738 732 snd_pcm_hw_param_value_min(slave_params, SNDRV_PCM_HW_PARAM_PERIOD_SIZE, NULL)); 739 - if (min_period_size) { 733 + if (min_period_size > 0) { 740 734 min_period_size *= oss_frame_size; 741 735 min_period_size = roundup_pow_of_two(min_period_size); 742 736 if (oss_period_size < min_period_size) ··· 745 739 746 740 max_period_size = snd_pcm_plug_client_size(substream, 747 741 snd_pcm_hw_param_value_max(slave_params, SNDRV_PCM_HW_PARAM_PERIOD_SIZE, NULL)); 748 - if (max_period_size) { 
742 + if (max_period_size > 0) { 749 743 max_period_size *= oss_frame_size; 750 744 max_period_size = rounddown_pow_of_two(max_period_size); 751 745 if (oss_period_size > max_period_size) ··· 758 752 oss_periods = substream->oss.setup.periods; 759 753 760 754 s = snd_pcm_hw_param_value_max(slave_params, SNDRV_PCM_HW_PARAM_PERIODS, NULL); 761 - if (runtime->oss.maxfrags && s > runtime->oss.maxfrags) 755 + if (s > 0 && runtime->oss.maxfrags && s > runtime->oss.maxfrags) 762 756 s = runtime->oss.maxfrags; 763 757 if (oss_periods > s) 764 758 oss_periods = s; ··· 884 878 err = -EINVAL; 885 879 goto failure; 886 880 } 887 - choose_rate(substream, sparams, runtime->oss.rate); 888 - snd_pcm_hw_param_near(substream, sparams, SNDRV_PCM_HW_PARAM_CHANNELS, runtime->oss.channels, NULL); 881 + 882 + err = choose_rate(substream, sparams, runtime->oss.rate); 883 + if (err < 0) 884 + goto failure; 885 + err = snd_pcm_hw_param_near(substream, sparams, 886 + SNDRV_PCM_HW_PARAM_CHANNELS, 887 + runtime->oss.channels, NULL); 888 + if (err < 0) 889 + goto failure; 889 890 890 891 format = snd_pcm_oss_format_from(runtime->oss.format); 891 892 ··· 1969 1956 if (runtime->oss.subdivision || runtime->oss.fragshift) 1970 1957 return -EINVAL; 1971 1958 fragshift = val & 0xffff; 1972 - if (fragshift >= 31) 1959 + if (fragshift >= 25) /* should be large enough */ 1973 1960 return -EINVAL; 1974 1961 runtime->oss.fragshift = fragshift; 1975 1962 runtime->oss.maxfrags = (val >> 16) & 0xffff;
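The pcm_oss sizing code above clamps buffer and period sizes through `roundup_pow_of_two()` / `rounddown_pow_of_two()` (from `linux/log2.h`). A freestanding sketch of those helpers for 32-bit values, using the clear-lowest-set-bit trick:

```c
#include <assert.h>
#include <stdint.h>

/* Round down to the nearest power of two by clearing low set bits
 * until only the top one remains (n must be non-zero). */
static uint32_t rounddown_pow2(uint32_t n)
{
	while (n & (n - 1))
		n &= n - 1;
	return n;
}

/* Round up to the nearest power of two: a power of two is returned
 * unchanged, anything else goes to the next one up. */
static uint32_t roundup_pow2(uint32_t n)
{
	uint32_t p = rounddown_pow2(n);

	return (p == n) ? p : p << 1;
}
```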
+62 -18
sound/pci/hda/patch_realtek.c
··· 6503 6503 /* for alc285_fixup_ideapad_s740_coef() */ 6504 6504 #include "ideapad_s740_helper.c" 6505 6505 6506 - static void alc256_fixup_tongfang_reset_persistent_settings(struct hda_codec *codec, 6507 - const struct hda_fixup *fix, 6508 - int action) 6506 + static const struct coef_fw alc256_fixup_set_coef_defaults_coefs[] = { 6507 + WRITE_COEF(0x10, 0x0020), WRITE_COEF(0x24, 0x0000), 6508 + WRITE_COEF(0x26, 0x0000), WRITE_COEF(0x29, 0x3000), 6509 + WRITE_COEF(0x37, 0xfe05), WRITE_COEF(0x45, 0x5089), 6510 + {} 6511 + }; 6512 + 6513 + static void alc256_fixup_set_coef_defaults(struct hda_codec *codec, 6514 + const struct hda_fixup *fix, 6515 + int action) 6509 6516 { 6510 6517 /* 6511 - * A certain other OS sets these coeffs to different values. On at least one TongFang 6512 - * barebone these settings might survive even a cold reboot. So to restore a clean slate the 6513 - * values are explicitly reset to default here. Without this, the external microphone is 6514 - * always in a plugged-in state, while the internal microphone is always in an unplugged 6515 - * state, breaking the ability to use the internal microphone. 6516 - */ 6517 - alc_write_coef_idx(codec, 0x24, 0x0000); 6518 - alc_write_coef_idx(codec, 0x26, 0x0000); 6519 - alc_write_coef_idx(codec, 0x29, 0x3000); 6520 - alc_write_coef_idx(codec, 0x37, 0xfe05); 6521 - alc_write_coef_idx(codec, 0x45, 0x5089); 6518 + * A certain other OS sets these coeffs to different values. On at least 6519 + * one TongFang barebone these settings might survive even a cold 6520 + * reboot. So to restore a clean slate the values are explicitly reset 6521 + * to default here. Without this, the external microphone is always in a 6522 + * plugged-in state, while the internal microphone is always in an 6523 + * unplugged state, breaking the ability to use the internal microphone. 
6524 + */ 6525 + alc_process_coef_fw(codec, alc256_fixup_set_coef_defaults_coefs); 6522 6526 } 6523 6527 6524 6528 static const struct coef_fw alc233_fixup_no_audio_jack_coefs[] = { ··· 6763 6759 ALC287_FIXUP_LEGION_15IMHG05_AUTOMUTE, 6764 6760 ALC287_FIXUP_YOGA7_14ITL_SPEAKERS, 6765 6761 ALC287_FIXUP_13S_GEN2_SPEAKERS, 6766 - ALC256_FIXUP_TONGFANG_RESET_PERSISTENT_SETTINGS, 6762 + ALC256_FIXUP_SET_COEF_DEFAULTS, 6767 6763 ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE, 6768 6764 ALC233_FIXUP_NO_AUDIO_JACK, 6769 6765 }; ··· 8469 8465 .chained = true, 8470 8466 .chain_id = ALC269_FIXUP_HEADSET_MODE, 8471 8467 }, 8472 - [ALC256_FIXUP_TONGFANG_RESET_PERSISTENT_SETTINGS] = { 8468 + [ALC256_FIXUP_SET_COEF_DEFAULTS] = { 8473 8469 .type = HDA_FIXUP_FUNC, 8474 - .v.func = alc256_fixup_tongfang_reset_persistent_settings, 8470 + .v.func = alc256_fixup_set_coef_defaults, 8475 8471 }, 8476 8472 [ALC245_FIXUP_HP_GPIO_LED] = { 8477 8473 .type = HDA_FIXUP_FUNC, ··· 8933 8929 SND_PCI_QUIRK(0x1b7d, 0xa831, "Ordissimo EVE2 ", ALC269VB_FIXUP_ORDISSIMO_EVE2), /* Also known as Malata PC-B1303 */ 8934 8930 SND_PCI_QUIRK(0x1c06, 0x2013, "Lemote A1802", ALC269_FIXUP_LEMOTE_A1802), 8935 8931 SND_PCI_QUIRK(0x1c06, 0x2015, "Lemote A190X", ALC269_FIXUP_LEMOTE_A190X), 8936 - SND_PCI_QUIRK(0x1d05, 0x1132, "TongFang PHxTxX1", ALC256_FIXUP_TONGFANG_RESET_PERSISTENT_SETTINGS), 8932 + SND_PCI_QUIRK(0x1d05, 0x1132, "TongFang PHxTxX1", ALC256_FIXUP_SET_COEF_DEFAULTS), 8937 8933 SND_PCI_QUIRK(0x1d72, 0x1602, "RedmiBook", ALC255_FIXUP_XIAOMI_HEADSET_MIC), 8938 8934 SND_PCI_QUIRK(0x1d72, 0x1701, "XiaomiNotebook Pro", ALC298_FIXUP_DELL1_MIC_NO_PRESENCE), 8939 8935 SND_PCI_QUIRK(0x1d72, 0x1901, "RedmiBook 14", ALC256_FIXUP_ASUS_HEADSET_MIC), ··· 10235 10231 } 10236 10232 } 10237 10233 10234 + static void alc897_hp_automute_hook(struct hda_codec *codec, 10235 + struct hda_jack_callback *jack) 10236 + { 10237 + struct alc_spec *spec = codec->spec; 10238 + int vref; 10239 + 10240 + snd_hda_gen_hp_automute(codec, 
jack); 10241 + vref = spec->gen.hp_jack_present ? (PIN_HP | AC_PINCTL_VREF_100) : PIN_HP; 10242 + snd_hda_codec_write(codec, 0x1b, 0, AC_VERB_SET_PIN_WIDGET_CONTROL, 10243 + vref); 10244 + } 10245 + 10246 + static void alc897_fixup_lenovo_headset_mic(struct hda_codec *codec, 10247 + const struct hda_fixup *fix, int action) 10248 + { 10249 + struct alc_spec *spec = codec->spec; 10250 + if (action == HDA_FIXUP_ACT_PRE_PROBE) { 10251 + spec->gen.hp_automute_hook = alc897_hp_automute_hook; 10252 + } 10253 + } 10254 + 10238 10255 static const struct coef_fw alc668_coefs[] = { 10239 10256 WRITE_COEF(0x01, 0xbebe), WRITE_COEF(0x02, 0xaaaa), WRITE_COEF(0x03, 0x0), 10240 10257 WRITE_COEF(0x04, 0x0180), WRITE_COEF(0x06, 0x0), WRITE_COEF(0x07, 0x0f80), ··· 10336 10311 ALC668_FIXUP_ASUS_NO_HEADSET_MIC, 10337 10312 ALC668_FIXUP_HEADSET_MIC, 10338 10313 ALC668_FIXUP_MIC_DET_COEF, 10314 + ALC897_FIXUP_LENOVO_HEADSET_MIC, 10315 + ALC897_FIXUP_HEADSET_MIC_PIN, 10339 10316 }; 10340 10317 10341 10318 static const struct hda_fixup alc662_fixups[] = { ··· 10744 10717 {} 10745 10718 }, 10746 10719 }, 10720 + [ALC897_FIXUP_LENOVO_HEADSET_MIC] = { 10721 + .type = HDA_FIXUP_FUNC, 10722 + .v.func = alc897_fixup_lenovo_headset_mic, 10723 + }, 10724 + [ALC897_FIXUP_HEADSET_MIC_PIN] = { 10725 + .type = HDA_FIXUP_PINS, 10726 + .v.pins = (const struct hda_pintbl[]) { 10727 + { 0x1a, 0x03a11050 }, 10728 + { } 10729 + }, 10730 + .chained = true, 10731 + .chain_id = ALC897_FIXUP_LENOVO_HEADSET_MIC 10732 + }, 10747 10733 }; 10748 10734 10749 10735 static const struct snd_pci_quirk alc662_fixup_tbl[] = { ··· 10801 10761 SND_PCI_QUIRK(0x144d, 0xc051, "Samsung R720", ALC662_FIXUP_IDEAPAD), 10802 10762 SND_PCI_QUIRK(0x14cd, 0x5003, "USI", ALC662_FIXUP_USI_HEADSET_MODE), 10803 10763 SND_PCI_QUIRK(0x17aa, 0x1036, "Lenovo P520", ALC662_FIXUP_LENOVO_MULTI_CODECS), 10764 + SND_PCI_QUIRK(0x17aa, 0x32ca, "Lenovo ThinkCentre M80", ALC897_FIXUP_HEADSET_MIC_PIN), 10765 + SND_PCI_QUIRK(0x17aa, 0x32cb, "Lenovo 
ThinkCentre M70", ALC897_FIXUP_HEADSET_MIC_PIN), 10766 + SND_PCI_QUIRK(0x17aa, 0x32cf, "Lenovo ThinkCentre M950", ALC897_FIXUP_HEADSET_MIC_PIN), 10767 + SND_PCI_QUIRK(0x17aa, 0x32f7, "Lenovo ThinkCentre M90", ALC897_FIXUP_HEADSET_MIC_PIN), 10804 10768 SND_PCI_QUIRK(0x17aa, 0x38af, "Lenovo Ideapad Y550P", ALC662_FIXUP_IDEAPAD), 10805 10769 SND_PCI_QUIRK(0x17aa, 0x3a0d, "Lenovo Ideapad Y550", ALC662_FIXUP_IDEAPAD), 10806 10770 SND_PCI_QUIRK(0x1849, 0x5892, "ASRock B150M", ALC892_FIXUP_ASROCK_MOBO),
+2 -1
sound/soc/amd/yc/pci-acp6x.c
··· 146 146 { 147 147 struct acp6x_dev_data *adata; 148 148 struct platform_device_info pdevinfo[ACP6x_DEVS]; 149 - int ret, index; 149 + int index = 0; 150 150 int val = 0x00; 151 151 u32 addr; 152 152 unsigned int irqflags; 153 + int ret; 153 154 154 155 irqflags = IRQF_SHARED; 155 156 /* Yellow Carp device check */
+6 -4
sound/soc/codecs/rt5682.c
··· 2858 2858 2859 2859 for (i = 0; i < RT5682_DAI_NUM_CLKS; ++i) { 2860 2860 struct clk_init_data init = { }; 2861 + struct clk_parent_data parent_data; 2862 + const struct clk_hw *parent; 2861 2863 2862 2864 dai_clk_hw = &rt5682->dai_clks_hw[i]; 2863 2865 ··· 2867 2865 case RT5682_DAI_WCLK_IDX: 2868 2866 /* Make MCLK the parent of WCLK */ 2869 2867 if (rt5682->mclk) { 2870 - init.parent_data = &(struct clk_parent_data){ 2868 + parent_data = (struct clk_parent_data){ 2871 2869 .fw_name = "mclk", 2872 2870 }; 2871 + init.parent_data = &parent_data; 2873 2872 init.num_parents = 1; 2874 2873 } 2875 2874 break; 2876 2875 case RT5682_DAI_BCLK_IDX: 2877 2876 /* Make WCLK the parent of BCLK */ 2878 - init.parent_hws = &(const struct clk_hw *){ 2879 - &rt5682->dai_clks_hw[RT5682_DAI_WCLK_IDX] 2880 - }; 2877 + parent = &rt5682->dai_clks_hw[RT5682_DAI_WCLK_IDX]; 2878 + init.parent_hws = &parent; 2881 2879 init.num_parents = 1; 2882 2880 break; 2883 2881 default:
+6 -4
sound/soc/codecs/rt5682s.c
··· 2693 2693 2694 2694 for (i = 0; i < RT5682S_DAI_NUM_CLKS; ++i) { 2695 2695 struct clk_init_data init = { }; 2696 + struct clk_parent_data parent_data; 2697 + const struct clk_hw *parent; 2696 2698 2697 2699 dai_clk_hw = &rt5682s->dai_clks_hw[i]; 2698 2700 ··· 2702 2700 case RT5682S_DAI_WCLK_IDX: 2703 2701 /* Make MCLK the parent of WCLK */ 2704 2702 if (rt5682s->mclk) { 2705 - init.parent_data = &(struct clk_parent_data){ 2703 + parent_data = (struct clk_parent_data){ 2706 2704 .fw_name = "mclk", 2707 2705 }; 2706 + init.parent_data = &parent_data; 2708 2707 init.num_parents = 1; 2709 2708 } 2710 2709 break; 2711 2710 case RT5682S_DAI_BCLK_IDX: 2712 2711 /* Make WCLK the parent of BCLK */ 2713 - init.parent_hws = &(const struct clk_hw *){ 2714 - &rt5682s->dai_clks_hw[RT5682S_DAI_WCLK_IDX] 2715 - }; 2712 + parent = &rt5682s->dai_clks_hw[RT5682S_DAI_WCLK_IDX]; 2713 + init.parent_hws = &parent; 2716 2714 init.num_parents = 1; 2717 2715 break; 2718 2716 default:
+93 -33
sound/soc/codecs/wcd934x.c
··· 3256 3256 int value = ucontrol->value.integer.value[0]; 3257 3257 int sel; 3258 3258 3259 + if (wcd->comp_enabled[comp] == value) 3260 + return 0; 3261 + 3259 3262 wcd->comp_enabled[comp] = value; 3260 3263 sel = value ? WCD934X_HPH_GAIN_SRC_SEL_COMPANDER : 3261 3264 WCD934X_HPH_GAIN_SRC_SEL_REGISTER; ··· 3282 3279 case COMPANDER_8: 3283 3280 break; 3284 3281 default: 3285 - break; 3282 + return 0; 3286 3283 } 3287 3284 3288 - return 0; 3285 + return 1; 3289 3286 } 3290 3287 3291 3288 static int wcd934x_rx_hph_mode_get(struct snd_kcontrol *kc, ··· 3329 3326 return 0; 3330 3327 } 3331 3328 3329 + static int slim_rx_mux_to_dai_id(int mux) 3330 + { 3331 + int aif_id; 3332 + 3333 + switch (mux) { 3334 + case 1: 3335 + aif_id = AIF1_PB; 3336 + break; 3337 + case 2: 3338 + aif_id = AIF2_PB; 3339 + break; 3340 + case 3: 3341 + aif_id = AIF3_PB; 3342 + break; 3343 + case 4: 3344 + aif_id = AIF4_PB; 3345 + break; 3346 + default: 3347 + aif_id = -1; 3348 + break; 3349 + } 3350 + 3351 + return aif_id; 3352 + } 3353 + 3332 3354 static int slim_rx_mux_put(struct snd_kcontrol *kc, 3333 3355 struct snd_ctl_elem_value *ucontrol) 3334 3356 { ··· 3361 3333 struct wcd934x_codec *wcd = dev_get_drvdata(w->dapm->dev); 3362 3334 struct soc_enum *e = (struct soc_enum *)kc->private_value; 3363 3335 struct snd_soc_dapm_update *update = NULL; 3336 + struct wcd934x_slim_ch *ch, *c; 3364 3337 u32 port_id = w->shift; 3338 + bool found = false; 3339 + int mux_idx; 3340 + int prev_mux_idx = wcd->rx_port_value[port_id]; 3341 + int aif_id; 3365 3342 3366 - if (wcd->rx_port_value[port_id] == ucontrol->value.enumerated.item[0]) 3343 + mux_idx = ucontrol->value.enumerated.item[0]; 3344 + 3345 + if (mux_idx == prev_mux_idx) 3367 3346 return 0; 3368 3347 3369 - wcd->rx_port_value[port_id] = ucontrol->value.enumerated.item[0]; 3370 - 3371 - switch (wcd->rx_port_value[port_id]) { 3348 + switch(mux_idx) { 3372 3349 case 0: 3373 - list_del_init(&wcd->rx_chs[port_id].list); 3350 + aif_id = 
slim_rx_mux_to_dai_id(prev_mux_idx); 3351 + if (aif_id < 0) 3352 + return 0; 3353 + 3354 + list_for_each_entry_safe(ch, c, &wcd->dai[aif_id].slim_ch_list, list) { 3355 + if (ch->port == port_id + WCD934X_RX_START) { 3356 + found = true; 3357 + list_del_init(&ch->list); 3358 + break; 3359 + } 3360 + } 3361 + if (!found) 3362 + return 0; 3363 + 3374 3364 break; 3375 - case 1: 3376 - list_add_tail(&wcd->rx_chs[port_id].list, 3377 - &wcd->dai[AIF1_PB].slim_ch_list); 3365 + case 1 ... 4: 3366 + aif_id = slim_rx_mux_to_dai_id(mux_idx); 3367 + if (aif_id < 0) 3368 + return 0; 3369 + 3370 + if (list_empty(&wcd->rx_chs[port_id].list)) { 3371 + list_add_tail(&wcd->rx_chs[port_id].list, 3372 + &wcd->dai[aif_id].slim_ch_list); 3373 + } else { 3374 + dev_err(wcd->dev ,"SLIM_RX%d PORT is busy\n", port_id); 3375 + return 0; 3376 + } 3378 3377 break; 3379 - case 2: 3380 - list_add_tail(&wcd->rx_chs[port_id].list, 3381 - &wcd->dai[AIF2_PB].slim_ch_list); 3382 - break; 3383 - case 3: 3384 - list_add_tail(&wcd->rx_chs[port_id].list, 3385 - &wcd->dai[AIF3_PB].slim_ch_list); 3386 - break; 3387 - case 4: 3388 - list_add_tail(&wcd->rx_chs[port_id].list, 3389 - &wcd->dai[AIF4_PB].slim_ch_list); 3390 - break; 3378 + 3391 3379 default: 3392 - dev_err(wcd->dev, "Unknown AIF %d\n", 3393 - wcd->rx_port_value[port_id]); 3380 + dev_err(wcd->dev, "Unknown AIF %d\n", mux_idx); 3394 3381 goto err; 3395 3382 } 3396 3383 3384 + wcd->rx_port_value[port_id] = mux_idx; 3397 3385 snd_soc_dapm_mux_update_power(w->dapm, kc, wcd->rx_port_value[port_id], 3398 3386 e, update); 3399 3387 3400 - return 0; 3388 + return 1; 3401 3389 err: 3402 3390 return -EINVAL; 3403 3391 } ··· 3859 3815 struct soc_mixer_control *mixer = 3860 3816 (struct soc_mixer_control *)kc->private_value; 3861 3817 int enable = ucontrol->value.integer.value[0]; 3818 + struct wcd934x_slim_ch *ch, *c; 3862 3819 int dai_id = widget->shift; 3863 3820 int port_id = mixer->shift; 3864 3821 ··· 3867 3822 if (enable == wcd->tx_port_value[port_id]) 
3868 3823 return 0; 3869 3824 3825 + if (enable) { 3826 + if (list_empty(&wcd->tx_chs[port_id].list)) { 3827 + list_add_tail(&wcd->tx_chs[port_id].list, 3828 + &wcd->dai[dai_id].slim_ch_list); 3829 + } else { 3830 + dev_err(wcd->dev ,"SLIM_TX%d PORT is busy\n", port_id); 3831 + return 0; 3832 + } 3833 + } else { 3834 + bool found = false; 3835 + 3836 + list_for_each_entry_safe(ch, c, &wcd->dai[dai_id].slim_ch_list, list) { 3837 + if (ch->port == port_id) { 3838 + found = true; 3839 + list_del_init(&wcd->tx_chs[port_id].list); 3840 + break; 3841 + } 3842 + } 3843 + if (!found) 3844 + return 0; 3845 + } 3846 + 3870 3847 wcd->tx_port_value[port_id] = enable; 3871 - 3872 - if (enable) 3873 - list_add_tail(&wcd->tx_chs[port_id].list, 3874 - &wcd->dai[dai_id].slim_ch_list); 3875 - else 3876 - list_del_init(&wcd->tx_chs[port_id].list); 3877 - 3878 3848 snd_soc_dapm_mixer_update_power(widget->dapm, kc, enable, update); 3879 3849 3880 - return 0; 3850 + return 1; 3881 3851 } 3882 3852 3883 3853 static const struct snd_kcontrol_new aif1_slim_cap_mixer[] = {
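The new `slim_rx_mux_to_dai_id()` centralizes the mux-value to playback-DAI mapping that the old per-case `list_add_tail()` calls duplicated: 0 means disconnected, 1..4 select an AIF, anything else is invalid. The mapping in isolation (the enum values here are illustrative, not the driver's real constants):

```c
#include <assert.h>

/* Illustrative stand-ins for the driver's AIF1_PB..AIF4_PB ids. */
enum { AIF1_PB = 0, AIF2_PB, AIF3_PB, AIF4_PB };

/* Same shape as the driver's helper: valid mux values map to a DAI id,
 * everything else (including 0 = disconnected) reports -1. */
static int slim_rx_mux_to_dai_id(int mux)
{
	switch (mux) {
	case 1: return AIF1_PB;
	case 2: return AIF2_PB;
	case 3: return AIF3_PB;
	case 4: return AIF4_PB;
	default: return -1;
	}
}
```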
+12 -4
sound/soc/codecs/wsa881x.c
··· 772 772 773 773 usleep_range(1000, 1010); 774 774 } 775 - return 0; 775 + 776 + return 1; 776 777 } 777 778 778 779 static int wsa881x_get_port(struct snd_kcontrol *kcontrol, ··· 817 816 (struct soc_mixer_control *)kcontrol->private_value; 818 817 int portidx = mixer->reg; 819 818 820 - if (ucontrol->value.integer.value[0]) 819 + if (ucontrol->value.integer.value[0]) { 820 + if (data->port_enable[portidx]) 821 + return 0; 822 + 821 823 data->port_enable[portidx] = true; 822 - else 824 + } else { 825 + if (!data->port_enable[portidx]) 826 + return 0; 827 + 823 828 data->port_enable[portidx] = false; 829 + } 824 830 825 831 if (portidx == WSA881X_PORT_BOOST) /* Boost Switch */ 826 832 wsa881x_boost_ctrl(comp, data->port_enable[portidx]); 827 833 828 - return 0; 834 + return 1; 829 835 } 830 836 831 837 static const char * const smart_boost_lvl_text[] = {
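Several of these sound fixes (wsa881x, wcd934x, q6routing) share one theme: a kcontrol `.put` handler should return 1 only when the value actually changed, so a notification event is emitted, and 0 for a no-op rewrite of the current state. A minimal model of the corrected wsa881x port-enable logic:

```c
#include <assert.h>
#include <stdbool.h>

/* Model of the fixed put handler: 0 when userspace rewrites the
 * current state (no event), 1 when the enable bit really flips. */
static int port_enable_put(bool *port_enable, bool requested)
{
	if (*port_enable == requested)
		return 0;

	*port_enable = requested;
	return 1;
}
```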
+5 -3
sound/soc/qcom/qdsp6/q6routing.c
··· 498 498 struct session_data *session = &data->sessions[session_id]; 499 499 500 500 if (ucontrol->value.integer.value[0]) { 501 + if (session->port_id == be_id) 502 + return 0; 503 + 501 504 session->port_id = be_id; 502 505 snd_soc_dapm_mixer_update_power(dapm, kcontrol, 1, update); 503 506 } else { 504 - if (session->port_id == be_id) { 505 - session->port_id = -1; 507 + if (session->port_id == -1 || session->port_id != be_id) 506 508 return 0; 507 - } 508 509 510 + session->port_id = -1; 509 511 snd_soc_dapm_mixer_update_power(dapm, kcontrol, 0, update); 510 512 } 511 513
+31 -21
sound/soc/rockchip/rockchip_i2s_tdm.c
··· 95 95 spinlock_t lock; /* xfer lock */ 96 96 bool has_playback; 97 97 bool has_capture; 98 + struct snd_soc_dai_driver *dai; 98 99 }; 99 100 100 101 static int to_ch_num(unsigned int val) ··· 1311 1310 {}, 1312 1311 }; 1313 1312 1314 - static struct snd_soc_dai_driver i2s_tdm_dai = { 1313 + static const struct snd_soc_dai_driver i2s_tdm_dai = { 1315 1314 .probe = rockchip_i2s_tdm_dai_probe, 1316 - .playback = { 1317 - .stream_name = "Playback", 1318 - }, 1319 - .capture = { 1320 - .stream_name = "Capture", 1321 - }, 1322 1315 .ops = &rockchip_i2s_tdm_dai_ops, 1323 1316 }; 1324 1317 1325 - static void rockchip_i2s_tdm_init_dai(struct rk_i2s_tdm_dev *i2s_tdm) 1318 + static int rockchip_i2s_tdm_init_dai(struct rk_i2s_tdm_dev *i2s_tdm) 1326 1319 { 1320 + struct snd_soc_dai_driver *dai; 1327 1321 struct property *dma_names; 1328 1322 const char *dma_name; 1329 1323 u64 formats = (SNDRV_PCM_FMTBIT_S8 | SNDRV_PCM_FMTBIT_S16_LE | ··· 1333 1337 i2s_tdm->has_capture = true; 1334 1338 } 1335 1339 1340 + dai = devm_kmemdup(i2s_tdm->dev, &i2s_tdm_dai, 1341 + sizeof(*dai), GFP_KERNEL); 1342 + if (!dai) 1343 + return -ENOMEM; 1344 + 1336 1345 if (i2s_tdm->has_playback) { 1337 - i2s_tdm_dai.playback.channels_min = 2; 1338 - i2s_tdm_dai.playback.channels_max = 8; 1339 - i2s_tdm_dai.playback.rates = SNDRV_PCM_RATE_8000_192000; 1340 - i2s_tdm_dai.playback.formats = formats; 1346 + dai->playback.stream_name = "Playback"; 1347 + dai->playback.channels_min = 2; 1348 + dai->playback.channels_max = 8; 1349 + dai->playback.rates = SNDRV_PCM_RATE_8000_192000; 1350 + dai->playback.formats = formats; 1341 1351 } 1342 1352 1343 1353 if (i2s_tdm->has_capture) { 1344 - i2s_tdm_dai.capture.channels_min = 2; 1345 - i2s_tdm_dai.capture.channels_max = 8; 1346 - i2s_tdm_dai.capture.rates = SNDRV_PCM_RATE_8000_192000; 1347 - i2s_tdm_dai.capture.formats = formats; 1354 + dai->capture.stream_name = "Capture"; 1355 + dai->capture.channels_min = 2; 1356 + dai->capture.channels_max = 8; 1357 + 
dai->capture.rates = SNDRV_PCM_RATE_8000_192000; 1358 + dai->capture.formats = formats; 1348 1359 } 1360 + 1361 + if (i2s_tdm->clk_trcm != TRCM_TXRX) 1362 + dai->symmetric_rate = 1; 1363 + 1364 + i2s_tdm->dai = dai; 1365 + 1366 + return 0; 1349 1367 } 1350 1368 1351 1369 static int rockchip_i2s_tdm_path_check(struct rk_i2s_tdm_dev *i2s_tdm, ··· 1551 1541 spin_lock_init(&i2s_tdm->lock); 1552 1542 i2s_tdm->soc_data = (struct rk_i2s_soc_data *)of_id->data; 1553 1543 1554 - rockchip_i2s_tdm_init_dai(i2s_tdm); 1555 - 1556 1544 i2s_tdm->frame_width = 64; 1557 1545 1558 1546 i2s_tdm->clk_trcm = TRCM_TXRX; ··· 1563 1555 } 1564 1556 i2s_tdm->clk_trcm = TRCM_RX; 1565 1557 } 1566 - if (i2s_tdm->clk_trcm != TRCM_TXRX) 1567 - i2s_tdm_dai.symmetric_rate = 1; 1558 + 1559 + ret = rockchip_i2s_tdm_init_dai(i2s_tdm); 1560 + if (ret) 1561 + return ret; 1568 1562 1569 1563 i2s_tdm->grf = syscon_regmap_lookup_by_phandle(node, "rockchip,grf"); 1570 1564 if (IS_ERR(i2s_tdm->grf)) ··· 1688 1678 1689 1679 ret = devm_snd_soc_register_component(&pdev->dev, 1690 1680 &rockchip_i2s_tdm_component, 1691 - &i2s_tdm_dai, 1); 1681 + i2s_tdm->dai, 1); 1692 1682 1693 1683 if (ret) { 1694 1684 dev_err(&pdev->dev, "Could not register DAI\n");
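The rockchip fix stops mutating the file-scope `i2s_tdm_dai` template, which breaks as soon as two controller instances with different capabilities probe, and instead duplicates it per device with `devm_kmemdup()`. The same pattern outside the kernel, with plain `malloc`/`memcpy` standing in for `devm_kmemdup()`:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Cut-down stand-in for snd_soc_dai_driver. */
struct dai_template {
	int channels_max;
	int symmetric_rate;
};

/* Shared read-only template, analogous to the now-const i2s_tdm_dai. */
static const struct dai_template dai_tmpl = { .channels_max = 0 };

/* Per-instance copy, so configuring one device cannot leak into
 * another instance or corrupt the template itself. */
static struct dai_template *init_dai(int channels_max)
{
	struct dai_template *dai = malloc(sizeof(*dai));

	if (!dai)
		return NULL;
	memcpy(dai, &dai_tmpl, sizeof(*dai));
	dai->channels_max = channels_max;
	return dai;
}
```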
+9 -5
sound/soc/sof/intel/hda-codec.c
··· 22 22 23 23 #if IS_ENABLED(CONFIG_SND_SOC_SOF_HDA_AUDIO_CODEC) 24 24 #define IDISP_VID_INTEL 0x80860000 25 + #define CODEC_PROBE_RETRIES 3 25 26 26 27 /* load the legacy HDA codec driver */ 27 28 static int request_codec_module(struct hda_codec *codec) ··· 122 121 u32 hda_cmd = (address << 28) | (AC_NODE_ROOT << 20) | 123 122 (AC_VERB_PARAMETERS << 8) | AC_PAR_VENDOR_ID; 124 123 u32 resp = -1; 125 - int ret; 124 + int ret, retry = 0; 126 125 127 - mutex_lock(&hbus->core.cmd_mutex); 128 - snd_hdac_bus_send_cmd(&hbus->core, hda_cmd); 129 - snd_hdac_bus_get_response(&hbus->core, address, &resp); 130 - mutex_unlock(&hbus->core.cmd_mutex); 126 + do { 127 + mutex_lock(&hbus->core.cmd_mutex); 128 + snd_hdac_bus_send_cmd(&hbus->core, hda_cmd); 129 + snd_hdac_bus_get_response(&hbus->core, address, &resp); 130 + mutex_unlock(&hbus->core.cmd_mutex); 131 + } while (resp == -1 && retry++ < CODEC_PROBE_RETRIES); 132 + 131 133 if (resp == -1) 132 134 return -EIO; 133 135 dev_dbg(sdev->dev, "HDA codec #%d probed OK: response: %x\n",
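The codec-probe change wraps the verb transaction in a bounded retry loop: one initial attempt plus up to `CODEC_PROBE_RETRIES` re-tries before giving up with `-EIO`. The shape of that loop, with a hypothetical stub transport that fails a configurable number of times before answering:

```c
#include <assert.h>
#include <stdint.h>

#define CODEC_PROBE_RETRIES 3

/* Stub for the send-cmd/get-response pair: answers -1 (no response)
 * until *failures_left reaches zero, then a fake vendor id. */
static uint32_t stub_transact(int *failures_left)
{
	if (*failures_left > 0) {
		(*failures_left)--;
		return (uint32_t)-1;
	}
	return 0x80860000;	/* IDISP_VID_INTEL from the hunk above */
}

/* Same control flow as the fixed probe: retry while the response is -1
 * and the retry budget is not exhausted. */
static int probe_codec(int *failures_left, uint32_t *resp)
{
	int retry = 0;

	do {
		*resp = stub_transact(failures_left);
	} while (*resp == (uint32_t)-1 && retry++ < CODEC_PROBE_RETRIES);

	return *resp == (uint32_t)-1 ? -5 /* -EIO */ : 0;
}
```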
+2 -2
sound/soc/tegra/tegra210_adx.c
··· 514 514 static const struct dev_pm_ops tegra210_adx_pm_ops = { 515 515 SET_RUNTIME_PM_OPS(tegra210_adx_runtime_suspend, 516 516 tegra210_adx_runtime_resume, NULL) 517 - SET_LATE_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend, 518 - pm_runtime_force_resume) 517 + SET_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend, 518 + pm_runtime_force_resume) 519 519 }; 520 520 521 521 static struct platform_driver tegra210_adx_driver = {
+2 -2
sound/soc/tegra/tegra210_amx.c
··· 583 583 static const struct dev_pm_ops tegra210_amx_pm_ops = { 584 584 SET_RUNTIME_PM_OPS(tegra210_amx_runtime_suspend, 585 585 tegra210_amx_runtime_resume, NULL) 586 - SET_LATE_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend, 587 - pm_runtime_force_resume) 586 + SET_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend, 587 + pm_runtime_force_resume) 588 588 }; 589 589 590 590 static struct platform_driver tegra210_amx_driver = {
+2 -2
sound/soc/tegra/tegra210_mixer.c
··· 666 666 static const struct dev_pm_ops tegra210_mixer_pm_ops = { 667 667 SET_RUNTIME_PM_OPS(tegra210_mixer_runtime_suspend, 668 668 tegra210_mixer_runtime_resume, NULL) 669 - SET_LATE_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend, 670 - pm_runtime_force_resume) 669 + SET_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend, 670 + pm_runtime_force_resume) 671 671 }; 672 672 673 673 static struct platform_driver tegra210_mixer_driver = {
+4 -4
sound/soc/tegra/tegra210_mvc.c
··· 164 164 if (err < 0) 165 165 goto end; 166 166 167 - return 1; 167 + err = 1; 168 168 169 169 end: 170 170 pm_runtime_put(cmpnt->dev); ··· 236 236 TEGRA210_MVC_VOLUME_SWITCH_MASK, 237 237 TEGRA210_MVC_VOLUME_SWITCH_TRIGGER); 238 238 239 - return 1; 239 + err = 1; 240 240 241 241 end: 242 242 pm_runtime_put(cmpnt->dev); ··· 639 639 static const struct dev_pm_ops tegra210_mvc_pm_ops = { 640 640 SET_RUNTIME_PM_OPS(tegra210_mvc_runtime_suspend, 641 641 tegra210_mvc_runtime_resume, NULL) 642 - SET_LATE_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend, 643 - pm_runtime_force_resume) 642 + SET_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend, 643 + pm_runtime_force_resume) 644 644 }; 645 645 646 646 static struct platform_driver tegra210_mvc_driver = {
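The tegra210_mvc change turns two early `return 1` statements into `err = 1` fall-throughs, so the `pm_runtime_put()` at the common `end:` label always balances the earlier get. A model of that single-exit pattern, with counters standing in for the runtime-PM get/put pair:

```c
#include <assert.h>

static int pm_gets, pm_puts;

/* Model of the fixed control handler: every get is balanced by the put
 * at the single exit, whether we report "changed" (1) or an error. */
static int volume_put(int fail)
{
	int err;

	pm_gets++;	/* pm_runtime_get_sync() */

	if (fail) {
		err = -22;	/* -EINVAL */
		goto end;
	}

	/* register writes succeeded: report a value change */
	err = 1;

end:
	pm_puts++;	/* pm_runtime_put() always runs */
	return err;
}
```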
+2 -2
sound/soc/tegra/tegra210_sfc.c
··· 3594 3594 static const struct dev_pm_ops tegra210_sfc_pm_ops = { 3595 3595 SET_RUNTIME_PM_OPS(tegra210_sfc_runtime_suspend, 3596 3596 tegra210_sfc_runtime_resume, NULL) 3597 - SET_LATE_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend, 3598 - pm_runtime_force_resume) 3597 + SET_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend, 3598 + pm_runtime_force_resume) 3599 3599 }; 3600 3600 3601 3601 static struct platform_driver tegra210_sfc_driver = {
+5 -5
sound/usb/mixer_quirks.c
··· 3016 3016 3017 3017 3018 3018 static const struct snd_djm_device snd_djm_devices[] = { 3019 - SND_DJM_DEVICE(250mk2), 3020 - SND_DJM_DEVICE(750), 3021 - SND_DJM_DEVICE(750mk2), 3022 - SND_DJM_DEVICE(850), 3023 - SND_DJM_DEVICE(900nxs2) 3019 + [SND_DJM_250MK2_IDX] = SND_DJM_DEVICE(250mk2), 3020 + [SND_DJM_750_IDX] = SND_DJM_DEVICE(750), 3021 + [SND_DJM_850_IDX] = SND_DJM_DEVICE(850), 3022 + [SND_DJM_900NXS2_IDX] = SND_DJM_DEVICE(900nxs2), 3023 + [SND_DJM_750MK2_IDX] = SND_DJM_DEVICE(750mk2), 3024 3024 }; 3025 3025 3026 3026
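Indexing the DJM table with designated initializers pins each entry to its `SND_DJM_*_IDX` slot regardless of declaration order; note the 750MK2 entry is now listed last yet still lands at its own index, which positional initialization would have gotten wrong. A compact illustration (enum and struct names are illustrative):

```c
#include <assert.h>
#include <string.h>

enum { DEV_A_IDX, DEV_B_IDX, DEV_C_IDX };

struct dev {
	const char *name;
};

/* Entries deliberately listed out of order: designated initializers
 * still place each one at its enum slot. */
static const struct dev devs[] = {
	[DEV_B_IDX] = { "b" },
	[DEV_C_IDX] = { "c" },
	[DEV_A_IDX] = { "a" },
};
```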
+4 -1
tools/perf/util/event.h
··· 44 44 /* perf sample has 16 bits size limit */ 45 45 #define PERF_SAMPLE_MAX_SIZE (1 << 16) 46 46 47 + /* number of register is bound by the number of bits in regs_dump::mask (64) */ 48 + #define PERF_SAMPLE_REGS_CACHE_SIZE (8 * sizeof(u64)) 49 + 47 50 struct regs_dump { 48 51 u64 abi; 49 52 u64 mask; 50 53 u64 *regs; 51 54 52 55 /* Cached values/mask filled by first register access. */ 53 - u64 cache_regs[PERF_REGS_MAX]; 56 + u64 cache_regs[PERF_SAMPLE_REGS_CACHE_SIZE]; 54 57 u64 cache_mask; 55 58 }; 56 59
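The new `PERF_SAMPLE_REGS_CACHE_SIZE` ties the cache array length to the width of the 64-bit `regs_dump::mask`: at most one register can be cached per mask bit, so 64 slots always suffice. The arithmetic behind that invariant:

```c
#include <assert.h>
#include <stdint.h>

/* One cache slot per bit of regs_dump::mask (a u64), i.e. 64 slots. */
#define PERF_SAMPLE_REGS_CACHE_SIZE (8 * sizeof(uint64_t))

/* Number of registers a mask selects; this can never exceed the bit
 * width of the mask, which is what the define encodes. */
static unsigned int mask_reg_count(uint64_t mask)
{
	unsigned int n = 0;

	while (mask) {
		mask &= mask - 1;	/* clear lowest set bit */
		n++;
	}
	return n;
}
```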
+55 -30
tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
··· 1205 1205 1206 1206 static bool intel_pt_fup_event(struct intel_pt_decoder *decoder) 1207 1207 { 1208 + enum intel_pt_sample_type type = decoder->state.type; 1208 1209 bool ret = false; 1210 + 1211 + decoder->state.type &= ~INTEL_PT_BRANCH; 1209 1212 1210 1213 if (decoder->set_fup_tx_flags) { 1211 1214 decoder->set_fup_tx_flags = false; 1212 1215 decoder->tx_flags = decoder->fup_tx_flags; 1213 - decoder->state.type = INTEL_PT_TRANSACTION; 1216 + decoder->state.type |= INTEL_PT_TRANSACTION; 1214 1217 if (decoder->fup_tx_flags & INTEL_PT_ABORT_TX) 1215 1218 decoder->state.type |= INTEL_PT_BRANCH; 1216 - decoder->state.from_ip = decoder->ip; 1217 - decoder->state.to_ip = 0; 1218 1219 decoder->state.flags = decoder->fup_tx_flags; 1219 - return true; 1220 + ret = true; 1220 1221 } 1221 1222 if (decoder->set_fup_ptw) { 1222 1223 decoder->set_fup_ptw = false; 1223 - decoder->state.type = INTEL_PT_PTW; 1224 + decoder->state.type |= INTEL_PT_PTW; 1224 1225 decoder->state.flags |= INTEL_PT_FUP_IP; 1225 - decoder->state.from_ip = decoder->ip; 1226 - decoder->state.to_ip = 0; 1227 1226 decoder->state.ptw_payload = decoder->fup_ptw_payload; 1228 - return true; 1227 + ret = true; 1229 1228 } 1230 1229 if (decoder->set_fup_mwait) { 1231 1230 decoder->set_fup_mwait = false; 1232 - decoder->state.type = INTEL_PT_MWAIT_OP; 1233 - decoder->state.from_ip = decoder->ip; 1234 - decoder->state.to_ip = 0; 1231 + decoder->state.type |= INTEL_PT_MWAIT_OP; 1235 1232 decoder->state.mwait_payload = decoder->fup_mwait_payload; 1236 1233 ret = true; 1237 1234 } 1238 1235 if (decoder->set_fup_pwre) { 1239 1236 decoder->set_fup_pwre = false; 1240 1237 decoder->state.type |= INTEL_PT_PWR_ENTRY; 1241 - decoder->state.type &= ~INTEL_PT_BRANCH; 1242 - decoder->state.from_ip = decoder->ip; 1243 - decoder->state.to_ip = 0; 1244 1238 decoder->state.pwre_payload = decoder->fup_pwre_payload; 1245 1239 ret = true; 1246 1240 } 1247 1241 if (decoder->set_fup_exstop) { 1248 1242 decoder->set_fup_exstop = 
false; 1249 1243 decoder->state.type |= INTEL_PT_EX_STOP; 1250 - decoder->state.type &= ~INTEL_PT_BRANCH; 1251 1244 decoder->state.flags |= INTEL_PT_FUP_IP; 1252 - decoder->state.from_ip = decoder->ip; 1253 - decoder->state.to_ip = 0; 1254 1245 ret = true; 1255 1246 } 1256 1247 if (decoder->set_fup_bep) { 1257 1248 decoder->set_fup_bep = false; 1258 1249 decoder->state.type |= INTEL_PT_BLK_ITEMS; 1259 - decoder->state.type &= ~INTEL_PT_BRANCH; 1250 + ret = true; 1251 + } 1252 + if (decoder->overflow) { 1253 + decoder->overflow = false; 1254 + if (!ret && !decoder->pge) { 1255 + if (decoder->hop) { 1256 + decoder->state.type = 0; 1257 + decoder->pkt_state = INTEL_PT_STATE_RESAMPLE; 1258 + } 1259 + decoder->pge = true; 1260 + decoder->state.type |= INTEL_PT_BRANCH | INTEL_PT_TRACE_BEGIN; 1261 + decoder->state.from_ip = 0; 1262 + decoder->state.to_ip = decoder->ip; 1263 + return true; 1264 + } 1265 + } 1266 + if (ret) { 1260 1267 decoder->state.from_ip = decoder->ip; 1261 1268 decoder->state.to_ip = 0; 1262 - ret = true; 1269 + } else { 1270 + decoder->state.type = type; 1263 1271 } 1264 1272 return ret; 1265 1273 } ··· 1616 1608 intel_pt_clear_tx_flags(decoder); 1617 1609 intel_pt_set_nr(decoder); 1618 1610 decoder->timestamp_insn_cnt = 0; 1619 - decoder->pkt_state = INTEL_PT_STATE_ERR_RESYNC; 1611 + decoder->pkt_state = INTEL_PT_STATE_IN_SYNC; 1612 + decoder->state.from_ip = decoder->ip; 1613 + decoder->ip = 0; 1614 + decoder->pge = false; 1615 + decoder->set_fup_tx_flags = false; 1616 + decoder->set_fup_ptw = false; 1617 + decoder->set_fup_mwait = false; 1618 + decoder->set_fup_pwre = false; 1619 + decoder->set_fup_exstop = false; 1620 + decoder->set_fup_bep = false; 1620 1621 decoder->overflow = true; 1621 1622 return -EOVERFLOW; 1622 1623 } ··· 2683 2666 /* Hop mode: Ignore TNT, do not walk code, but get ip from FUPs and TIPs */ 2684 2667 static int intel_pt_hop_trace(struct intel_pt_decoder *decoder, bool *no_tip, int *err) 2685 2668 { 2669 + *err = 0; 2670 + 
2686 2671 /* Leap from PSB to PSB, getting ip from FUP within PSB+ */ 2687 2672 if (decoder->leap && !decoder->in_psb && decoder->packet.type != INTEL_PT_PSB) { 2688 2673 *err = intel_pt_scan_for_psb(decoder); ··· 2697 2678 return HOP_IGNORE; 2698 2679 2699 2680 case INTEL_PT_TIP_PGD: 2681 + decoder->pge = false; 2700 2682 if (!decoder->packet.count) { 2701 2683 intel_pt_set_nr(decoder); 2702 2684 return HOP_IGNORE; ··· 2725 2705 if (!decoder->packet.count) 2726 2706 return HOP_IGNORE; 2727 2707 intel_pt_set_ip(decoder); 2728 - if (intel_pt_fup_event(decoder)) 2729 - return HOP_RETURN; 2730 - if (!decoder->branch_enable) 2708 + if (decoder->set_fup_mwait || decoder->set_fup_pwre) 2709 + *no_tip = true; 2710 + if (!decoder->branch_enable || !decoder->pge) 2731 2711 *no_tip = true; 2732 2712 if (*no_tip) { 2733 2713 decoder->state.type = INTEL_PT_INSTRUCTION; 2734 2714 decoder->state.from_ip = decoder->ip; 2735 2715 decoder->state.to_ip = 0; 2716 + intel_pt_fup_event(decoder); 2736 2717 return HOP_RETURN; 2737 2718 } 2719 + intel_pt_fup_event(decoder); 2720 + decoder->state.type |= INTEL_PT_INSTRUCTION | INTEL_PT_BRANCH; 2738 2721 *err = intel_pt_walk_fup_tip(decoder); 2739 - if (!*err) 2722 + if (!*err && decoder->state.to_ip) 2740 2723 decoder->pkt_state = INTEL_PT_STATE_RESAMPLE; 2741 2724 return HOP_RETURN; 2742 2725 ··· 2920 2897 { 2921 2898 struct intel_pt_psb_info data = { .fup = false }; 2922 2899 2923 - if (!decoder->branch_enable || !decoder->pge) 2900 + if (!decoder->branch_enable) 2924 2901 return false; 2925 2902 2926 2903 intel_pt_pkt_lookahead(decoder, intel_pt_psb_lookahead_cb, &data); ··· 2947 2924 if (err) 2948 2925 return err; 2949 2926 next: 2927 + err = 0; 2950 2928 if (decoder->cyc_threshold) { 2951 2929 if (decoder->sample_cyc && last_packet_type != INTEL_PT_CYC) 2952 2930 decoder->sample_cyc = false; ··· 2986 2962 2987 2963 case INTEL_PT_TIP_PGE: { 2988 2964 decoder->pge = true; 2965 + decoder->overflow = false; 2989 2966 
intel_pt_mtc_cyc_cnt_pge(decoder); 2990 2967 intel_pt_set_nr(decoder); 2991 2968 if (decoder->packet.count == 0) { ··· 3024 2999 break; 3025 3000 } 3026 3001 intel_pt_set_last_ip(decoder); 3027 - if (!decoder->branch_enable) { 3002 + if (!decoder->branch_enable || !decoder->pge) { 3028 3003 decoder->ip = decoder->last_ip; 3029 3004 if (intel_pt_fup_event(decoder)) 3030 3005 return 0; ··· 3492 3467 decoder->set_fup_pwre = false; 3493 3468 decoder->set_fup_exstop = false; 3494 3469 decoder->set_fup_bep = false; 3470 + decoder->overflow = false; 3495 3471 3496 3472 if (!decoder->branch_enable) { 3497 3473 decoder->pkt_state = INTEL_PT_STATE_IN_SYNC; 3498 - decoder->overflow = false; 3499 3474 decoder->state.type = 0; /* Do not have a sample */ 3500 3475 return 0; 3501 3476 } ··· 3510 3485 decoder->pkt_state = INTEL_PT_STATE_RESAMPLE; 3511 3486 else 3512 3487 decoder->pkt_state = INTEL_PT_STATE_IN_SYNC; 3513 - decoder->overflow = false; 3514 3488 3515 3489 decoder->state.from_ip = 0; 3516 3490 decoder->state.to_ip = decoder->ip; ··· 3631 3607 } 3632 3608 3633 3609 decoder->have_last_ip = true; 3634 - decoder->pkt_state = INTEL_PT_STATE_NO_IP; 3610 + decoder->pkt_state = INTEL_PT_STATE_IN_SYNC; 3635 3611 3636 3612 err = intel_pt_walk_psb(decoder); 3637 3613 if (err) ··· 3728 3704 3729 3705 if (err) { 3730 3706 decoder->state.err = intel_pt_ext_err(err); 3731 - decoder->state.from_ip = decoder->ip; 3707 + if (err != -EOVERFLOW) 3708 + decoder->state.from_ip = decoder->ip; 3732 3709 intel_pt_update_sample_time(decoder); 3733 3710 decoder->sample_tot_cyc_cnt = decoder->tot_cyc_cnt; 3734 3711 intel_pt_set_nr(decoder);
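A recurring pattern in the intel_pt_fup_event() rework above: the incoming `state.type` is saved up front, event bits are OR-ed in rather than overwriting the field, and the saved value is restored when no event fires. A simplified sketch with hypothetical bit names (not the decoder's real flags):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define EV_BRANCH   (1u << 0)
#define EV_PTW      (1u << 1)
#define EV_EX_STOP  (1u << 2)

/* Accumulate pending events into *type; restore the original value
 * when nothing fired, instead of leaving it half-cleared. */
static bool fup_event(uint32_t *type, bool ptw, bool exstop)
{
	uint32_t saved = *type;
	bool ret = false;

	*type &= ~EV_BRANCH;   /* events replace the implicit branch */
	if (ptw)    { *type |= EV_PTW;     ret = true; }
	if (exstop) { *type |= EV_EX_STOP; ret = true; }
	if (!ret)
		*type = saved; /* no event: leave state untouched */
	return ret;
}
```

The restore-on-no-event branch is what keeps callers that see `false` from observing a mutated type.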
+1
tools/perf/util/intel-pt.c
··· 2565 2565 ptq->sync_switch = false; 2566 2566 intel_pt_next_tid(pt, ptq); 2567 2567 } 2568 + ptq->timestamp = state->est_timestamp; 2568 2569 if (pt->synth_opts.errors) { 2569 2570 err = intel_ptq_synth_error(ptq, state); 2570 2571 if (err)
+3
tools/perf/util/perf_regs.c
··· 25 25 int i, idx = 0; 26 26 u64 mask = regs->mask; 27 27 28 + if ((u64)id >= PERF_SAMPLE_REGS_CACHE_SIZE) 29 + return -EINVAL; 30 + 28 31 if (regs->cache_mask & (1ULL << id)) 29 32 goto out; 30 33
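The perf_regs.c fix rejects ids of 64 or more before they are used as a shift count: shifting a 64-bit value by its full width (or more) is undefined behavior in C, so the bound must be checked first. A sketch of the guard (helper name is illustrative):

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>
#include <stdint.h>

#define CACHE_BITS (8 * sizeof(uint64_t))   /* 64 */

/* Validate the bit index before computing 1ULL << id;
 * 1ULL << 64 is undefined behavior, not zero. */
static int bit_is_cached(uint64_t cache_mask, unsigned int id, bool *hit)
{
	if (id >= CACHE_BITS)
		return -EINVAL;
	*hit = cache_mask & (1ULL << id);
	return 0;
}
```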
+1 -1
tools/perf/util/python.c
··· 461 461 struct tep_event *tp_format; 462 462 463 463 tp_format = trace_event__tp_format_id(evsel->core.attr.config); 464 - if (!tp_format) 464 + if (IS_ERR_OR_NULL(tp_format)) 465 465 return NULL; 466 466 467 467 evsel->tp_format = tp_format;
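The python.c fix matters because `trace_event__tp_format_id()` can return an ERR_PTR-encoded errno rather than NULL, so a plain `!tp_format` check lets error pointers through. A userspace sketch of the kernel's pointer-encoded-error convention (helpers re-implemented here for illustration; the kernel's live in include/linux/err.h):

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

#define MAX_ERRNO 4095

/* A negative errno cast to a pointer lands in the top page of the
 * address space, which is never a valid kernel address. */
static void *err_ptr(long err)
{
	return (void *)err;
}

static bool is_err_or_null(const void *p)
{
	return !p || (unsigned long)p >= (unsigned long)-MAX_ERRNO;
}
```

Both NULL and an encoded error are rejected by the single combined check, which is exactly what the patch swaps in.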
+1
tools/power/acpi/Makefile.config
··· 69 69 ACPICA_INCLUDE := $(srctree)/../../../drivers/acpi/acpica 70 70 CFLAGS += -D_LINUX -I$(KERNEL_INCLUDE) -I$(ACPICA_INCLUDE) 71 71 CFLAGS += $(WARNINGS) 72 + MKDIR = mkdir 72 73 73 74 ifeq ($(strip $(V)),false) 74 75 QUIET=@
+1
tools/power/acpi/Makefile.rules
··· 21 21 22 22 $(objdir)%.o: %.c $(KERNEL_INCLUDE) 23 23 $(ECHO) " CC " $(subst $(OUTPUT),,$@) 24 + $(QUIET) $(MKDIR) -p $(objdir) 2>/dev/null 24 25 $(QUIET) $(CC) -c $(CFLAGS) -o $@ $< 25 26 26 27 all: $(OUTPUT)$(TOOL)
+20
tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c
··· 33 33 return sum; 34 34 } 35 35 36 + __weak noinline struct file *bpf_testmod_return_ptr(int arg) 37 + { 38 + static struct file f = {}; 39 + 40 + switch (arg) { 41 + case 1: return (void *)EINVAL; /* user addr */ 42 + case 2: return (void *)0xcafe4a11; /* user addr */ 43 + case 3: return (void *)-EINVAL; /* canonical, but invalid */ 44 + case 4: return (void *)(1ull << 60); /* non-canonical and invalid */ 45 + case 5: return (void *)~(1ull << 30); /* trigger extable */ 46 + case 6: return &f; /* valid addr */ 47 + case 7: return (void *)((long)&f | 1); /* kernel tricks */ 48 + default: return NULL; 49 + } 50 + } 51 + 36 52 noinline ssize_t 37 53 bpf_testmod_test_read(struct file *file, struct kobject *kobj, 38 54 struct bin_attribute *bin_attr, ··· 59 43 .off = off, 60 44 .len = len, 61 45 }; 46 + int i = 1; 47 + 48 + while (bpf_testmod_return_ptr(i)) 49 + i++; 62 50 63 51 /* This is always true. Use the check to make sure the compiler 64 52 * doesn't remove bpf_testmod_loop_test.
+14 -2
tools/testing/selftests/bpf/prog_tests/btf_skc_cls_ingress.c
··· 90 90 91 91 static void test_conn(void) 92 92 { 93 - int listen_fd = -1, cli_fd = -1, err; 93 + int listen_fd = -1, cli_fd = -1, srv_fd = -1, err; 94 94 socklen_t addrlen = sizeof(srv_sa6); 95 95 int srv_port; 96 96 ··· 110 110 111 111 cli_fd = connect_to_fd(listen_fd, 0); 112 112 if (CHECK_FAIL(cli_fd == -1)) 113 + goto done; 114 + 115 + srv_fd = accept(listen_fd, NULL, NULL); 116 + if (CHECK_FAIL(srv_fd == -1)) 113 117 goto done; 114 118 115 119 if (CHECK(skel->bss->listen_tp_sport != srv_port || ··· 138 134 close(listen_fd); 139 135 if (cli_fd != -1) 140 136 close(cli_fd); 137 + if (srv_fd != -1) 138 + close(srv_fd); 141 139 } 142 140 143 141 static void test_syncookie(void) 144 142 { 145 - int listen_fd = -1, cli_fd = -1, err; 143 + int listen_fd = -1, cli_fd = -1, srv_fd = -1, err; 146 144 socklen_t addrlen = sizeof(srv_sa6); 147 145 int srv_port; 148 146 ··· 165 159 166 160 cli_fd = connect_to_fd(listen_fd, 0); 167 161 if (CHECK_FAIL(cli_fd == -1)) 162 + goto done; 163 + 164 + srv_fd = accept(listen_fd, NULL, NULL); 165 + if (CHECK_FAIL(srv_fd == -1)) 168 166 goto done; 169 167 170 168 if (CHECK(skel->bss->listen_tp_sport != srv_port, ··· 198 188 close(listen_fd); 199 189 if (cli_fd != -1) 200 190 close(cli_fd); 191 + if (srv_fd != -1) 192 + close(srv_fd); 201 193 } 202 194 203 195 struct test {
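The btf_skc_cls_ingress fix adds an `accept()` so the server side of the connection actually owns a socket fd: `connect()` can succeed because the kernel completes the handshake into the listener's accept queue, but the established server socket only materializes for userspace once it is accepted. A minimal loopback sketch of the sequence (helper name is illustrative):

```c
#include <assert.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

/* Create a connected loopback TCP pair; fills *cli and *srv on success. */
static int tcp_pair(int *cli, int *srv)
{
	struct sockaddr_in a = { .sin_family = AF_INET,
				 .sin_addr.s_addr = htonl(INADDR_LOOPBACK) };
	socklen_t alen = sizeof(a);
	int lfd = socket(AF_INET, SOCK_STREAM, 0);

	if (lfd < 0 || bind(lfd, (struct sockaddr *)&a, alen) ||
	    listen(lfd, 1) || getsockname(lfd, (struct sockaddr *)&a, &alen))
		return -1;

	*cli = socket(AF_INET, SOCK_STREAM, 0);
	if (*cli < 0 || connect(*cli, (struct sockaddr *)&a, alen))
		return -1;

	*srv = accept(lfd, NULL, NULL);  /* the step the test was missing */
	close(lfd);
	return *srv < 0 ? -1 : 0;
}
```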
+12
tools/testing/selftests/bpf/progs/test_module_attach.c
··· 87 87 return 0; 88 88 } 89 89 90 + SEC("fexit/bpf_testmod_return_ptr") 91 + int BPF_PROG(handle_fexit_ret, int arg, struct file *ret) 92 + { 93 + long buf = 0; 94 + 95 + bpf_probe_read_kernel(&buf, 8, ret); 96 + bpf_probe_read_kernel(&buf, 8, (char *)ret + 256); 97 + *(volatile long long *)ret; 98 + *(volatile int *)&ret->f_mode; 99 + return 0; 100 + } 101 + 90 102 __u32 fmod_ret_read_sz = 0; 91 103 92 104 SEC("fmod_ret/bpf_testmod_test_read")
+1 -1
tools/testing/selftests/bpf/test_verifier.c
··· 54 54 #define MAX_INSNS BPF_MAXINSNS 55 55 #define MAX_TEST_INSNS 1000000 56 56 #define MAX_FIXUPS 8 57 - #define MAX_NR_MAPS 21 57 + #define MAX_NR_MAPS 22 58 58 #define MAX_TEST_RUNS 8 59 59 #define POINTER_VALUE 0xcafe4all 60 60 #define TEST_DATA_LEN 64
+86
tools/testing/selftests/bpf/verifier/atomic_cmpxchg.c
··· 138 138 BPF_EXIT_INSN(), 139 139 }, 140 140 .result = ACCEPT, 141 + .result_unpriv = REJECT, 142 + .errstr_unpriv = "R0 leaks addr into mem", 141 143 }, 142 144 { 143 145 "Dest pointer in r0 - succeed", ··· 158 156 BPF_EXIT_INSN(), 159 157 }, 160 158 .result = ACCEPT, 159 + .result_unpriv = REJECT, 160 + .errstr_unpriv = "R0 leaks addr into mem", 161 + }, 162 + { 163 + "Dest pointer in r0 - succeed, check 2", 164 + .insns = { 165 + /* r0 = &val */ 166 + BPF_MOV64_REG(BPF_REG_0, BPF_REG_10), 167 + /* val = r0; */ 168 + BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -8), 169 + /* r5 = &val */ 170 + BPF_MOV64_REG(BPF_REG_5, BPF_REG_10), 171 + /* r0 = atomic_cmpxchg(&val, r0, r5); */ 172 + BPF_ATOMIC_OP(BPF_DW, BPF_CMPXCHG, BPF_REG_10, BPF_REG_5, -8), 173 + /* r1 = *r0 */ 174 + BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, -8), 175 + /* exit(0); */ 176 + BPF_MOV64_IMM(BPF_REG_0, 0), 177 + BPF_EXIT_INSN(), 178 + }, 179 + .result = ACCEPT, 180 + .result_unpriv = REJECT, 181 + .errstr_unpriv = "R0 leaks addr into mem", 182 + }, 183 + { 184 + "Dest pointer in r0 - succeed, check 3", 185 + .insns = { 186 + /* r0 = &val */ 187 + BPF_MOV64_REG(BPF_REG_0, BPF_REG_10), 188 + /* val = r0; */ 189 + BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -8), 190 + /* r5 = &val */ 191 + BPF_MOV64_REG(BPF_REG_5, BPF_REG_10), 192 + /* r0 = atomic_cmpxchg(&val, r0, r5); */ 193 + BPF_ATOMIC_OP(BPF_W, BPF_CMPXCHG, BPF_REG_10, BPF_REG_5, -8), 194 + /* exit(0); */ 195 + BPF_MOV64_IMM(BPF_REG_0, 0), 196 + BPF_EXIT_INSN(), 197 + }, 198 + .result = REJECT, 199 + .errstr = "invalid size of register fill", 200 + .errstr_unpriv = "R0 leaks addr into mem", 201 + }, 202 + { 203 + "Dest pointer in r0 - succeed, check 4", 204 + .insns = { 205 + /* r0 = &val */ 206 + BPF_MOV32_REG(BPF_REG_0, BPF_REG_10), 207 + /* val = r0; */ 208 + BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_0, -8), 209 + /* r5 = &val */ 210 + BPF_MOV32_REG(BPF_REG_5, BPF_REG_10), 211 + /* r0 = atomic_cmpxchg(&val, r0, r5); */ 212 + BPF_ATOMIC_OP(BPF_W, 
BPF_CMPXCHG, BPF_REG_10, BPF_REG_5, -8), 213 + /* r1 = *r10 */ 214 + BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_10, -8), 215 + /* exit(0); */ 216 + BPF_MOV64_IMM(BPF_REG_0, 0), 217 + BPF_EXIT_INSN(), 218 + }, 219 + .result = ACCEPT, 220 + .result_unpriv = REJECT, 221 + .errstr_unpriv = "R10 partial copy of pointer", 222 + }, 223 + { 224 + "Dest pointer in r0 - succeed, check 5", 225 + .insns = { 226 + /* r0 = &val */ 227 + BPF_MOV32_REG(BPF_REG_0, BPF_REG_10), 228 + /* val = r0; */ 229 + BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_0, -8), 230 + /* r5 = &val */ 231 + BPF_MOV32_REG(BPF_REG_5, BPF_REG_10), 232 + /* r0 = atomic_cmpxchg(&val, r0, r5); */ 233 + BPF_ATOMIC_OP(BPF_W, BPF_CMPXCHG, BPF_REG_10, BPF_REG_5, -8), 234 + /* r1 = *r0 */ 235 + BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, -8), 236 + /* exit(0); */ 237 + BPF_MOV64_IMM(BPF_REG_0, 0), 238 + BPF_EXIT_INSN(), 239 + }, 240 + .result = REJECT, 241 + .errstr = "R0 invalid mem access", 242 + .errstr_unpriv = "R10 partial copy of pointer", 161 243 },
+94
tools/testing/selftests/bpf/verifier/atomic_fetch.c
··· 1 + { 2 + "atomic dw/fetch and address leakage of (map ptr & -1) via stack slot", 3 + .insns = { 4 + BPF_LD_IMM64(BPF_REG_1, -1), 5 + BPF_LD_MAP_FD(BPF_REG_8, 0), 6 + BPF_LD_MAP_FD(BPF_REG_9, 0), 7 + BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), 8 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), 9 + BPF_STX_MEM(BPF_DW, BPF_REG_2, BPF_REG_9, 0), 10 + BPF_ATOMIC_OP(BPF_DW, BPF_AND | BPF_FETCH, BPF_REG_2, BPF_REG_1, 0), 11 + BPF_LDX_MEM(BPF_DW, BPF_REG_9, BPF_REG_2, 0), 12 + BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0), 13 + BPF_MOV64_REG(BPF_REG_1, BPF_REG_8), 14 + BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), 15 + BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1), 16 + BPF_STX_MEM(BPF_DW, BPF_REG_0, BPF_REG_9, 0), 17 + BPF_MOV64_IMM(BPF_REG_0, 0), 18 + BPF_EXIT_INSN(), 19 + }, 20 + .fixup_map_array_48b = { 2, 4 }, 21 + .result = ACCEPT, 22 + .result_unpriv = REJECT, 23 + .errstr_unpriv = "leaking pointer from stack off -8", 24 + }, 25 + { 26 + "atomic dw/fetch and address leakage of (map ptr & -1) via returned value", 27 + .insns = { 28 + BPF_LD_IMM64(BPF_REG_1, -1), 29 + BPF_LD_MAP_FD(BPF_REG_8, 0), 30 + BPF_LD_MAP_FD(BPF_REG_9, 0), 31 + BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), 32 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), 33 + BPF_STX_MEM(BPF_DW, BPF_REG_2, BPF_REG_9, 0), 34 + BPF_ATOMIC_OP(BPF_DW, BPF_AND | BPF_FETCH, BPF_REG_2, BPF_REG_1, 0), 35 + BPF_MOV64_REG(BPF_REG_9, BPF_REG_1), 36 + BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0), 37 + BPF_MOV64_REG(BPF_REG_1, BPF_REG_8), 38 + BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), 39 + BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1), 40 + BPF_STX_MEM(BPF_DW, BPF_REG_0, BPF_REG_9, 0), 41 + BPF_MOV64_IMM(BPF_REG_0, 0), 42 + BPF_EXIT_INSN(), 43 + }, 44 + .fixup_map_array_48b = { 2, 4 }, 45 + .result = ACCEPT, 46 + .result_unpriv = REJECT, 47 + .errstr_unpriv = "leaking pointer from stack off -8", 48 + }, 49 + { 50 + "atomic w/fetch and address leakage of (map ptr & -1) via stack slot", 51 + .insns = { 52 + BPF_LD_IMM64(BPF_REG_1, -1), 53 + BPF_LD_MAP_FD(BPF_REG_8, 0), 54 + 
BPF_LD_MAP_FD(BPF_REG_9, 0), 55 + BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), 56 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), 57 + BPF_STX_MEM(BPF_DW, BPF_REG_2, BPF_REG_9, 0), 58 + BPF_ATOMIC_OP(BPF_W, BPF_AND | BPF_FETCH, BPF_REG_2, BPF_REG_1, 0), 59 + BPF_LDX_MEM(BPF_DW, BPF_REG_9, BPF_REG_2, 0), 60 + BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0), 61 + BPF_MOV64_REG(BPF_REG_1, BPF_REG_8), 62 + BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), 63 + BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1), 64 + BPF_STX_MEM(BPF_DW, BPF_REG_0, BPF_REG_9, 0), 65 + BPF_MOV64_IMM(BPF_REG_0, 0), 66 + BPF_EXIT_INSN(), 67 + }, 68 + .fixup_map_array_48b = { 2, 4 }, 69 + .result = REJECT, 70 + .errstr = "invalid size of register fill", 71 + }, 72 + { 73 + "atomic w/fetch and address leakage of (map ptr & -1) via returned value", 74 + .insns = { 75 + BPF_LD_IMM64(BPF_REG_1, -1), 76 + BPF_LD_MAP_FD(BPF_REG_8, 0), 77 + BPF_LD_MAP_FD(BPF_REG_9, 0), 78 + BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), 79 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), 80 + BPF_STX_MEM(BPF_DW, BPF_REG_2, BPF_REG_9, 0), 81 + BPF_ATOMIC_OP(BPF_W, BPF_AND | BPF_FETCH, BPF_REG_2, BPF_REG_1, 0), 82 + BPF_MOV64_REG(BPF_REG_9, BPF_REG_1), 83 + BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0), 84 + BPF_MOV64_REG(BPF_REG_1, BPF_REG_8), 85 + BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), 86 + BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1), 87 + BPF_STX_MEM(BPF_DW, BPF_REG_0, BPF_REG_9, 0), 88 + BPF_MOV64_IMM(BPF_REG_0, 0), 89 + BPF_EXIT_INSN(), 90 + }, 91 + .fixup_map_array_48b = { 2, 4 }, 92 + .result = REJECT, 93 + .errstr = "invalid size of register fill", 94 + }, 1 95 #define __ATOMIC_FETCH_OP_TEST(src_reg, dst_reg, operand1, op, operand2, expect) \ 2 96 { \ 3 97 "atomic fetch " #op ", src=" #dst_reg " dst=" #dst_reg, \
+71
tools/testing/selftests/bpf/verifier/search_pruning.c
··· 133 133 .prog_type = BPF_PROG_TYPE_TRACEPOINT, 134 134 }, 135 135 { 136 + "precision tracking for u32 spill/fill", 137 + .insns = { 138 + BPF_MOV64_REG(BPF_REG_7, BPF_REG_1), 139 + BPF_EMIT_CALL(BPF_FUNC_get_prandom_u32), 140 + BPF_MOV32_IMM(BPF_REG_6, 32), 141 + BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1), 142 + BPF_MOV32_IMM(BPF_REG_6, 4), 143 + /* Additional insns to introduce a pruning point. */ 144 + BPF_EMIT_CALL(BPF_FUNC_get_prandom_u32), 145 + BPF_MOV64_IMM(BPF_REG_3, 0), 146 + BPF_MOV64_IMM(BPF_REG_3, 0), 147 + BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1), 148 + BPF_MOV64_IMM(BPF_REG_3, 0), 149 + /* u32 spill/fill */ 150 + BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_6, -8), 151 + BPF_LDX_MEM(BPF_W, BPF_REG_8, BPF_REG_10, -8), 152 + /* out-of-bound map value access for r6=32 */ 153 + BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, 0), 154 + BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), 155 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -16), 156 + BPF_LD_MAP_FD(BPF_REG_1, 0), 157 + BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), 158 + BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2), 159 + BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_8), 160 + BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0), 161 + BPF_MOV64_IMM(BPF_REG_0, 0), 162 + BPF_EXIT_INSN(), 163 + }, 164 + .fixup_map_hash_8b = { 15 }, 165 + .result = REJECT, 166 + .errstr = "R0 min value is outside of the allowed memory range", 167 + .prog_type = BPF_PROG_TYPE_TRACEPOINT, 168 + }, 169 + { 170 + "precision tracking for u32 spills, u64 fill", 171 + .insns = { 172 + BPF_EMIT_CALL(BPF_FUNC_get_prandom_u32), 173 + BPF_MOV64_REG(BPF_REG_6, BPF_REG_0), 174 + BPF_MOV32_IMM(BPF_REG_7, 0xffffffff), 175 + /* Additional insns to introduce a pruning point. 
*/ 176 + BPF_MOV64_IMM(BPF_REG_3, 1), 177 + BPF_MOV64_IMM(BPF_REG_3, 1), 178 + BPF_MOV64_IMM(BPF_REG_3, 1), 179 + BPF_MOV64_IMM(BPF_REG_3, 1), 180 + BPF_EMIT_CALL(BPF_FUNC_get_prandom_u32), 181 + BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1), 182 + BPF_MOV64_IMM(BPF_REG_3, 1), 183 + BPF_ALU32_IMM(BPF_DIV, BPF_REG_3, 0), 184 + /* u32 spills, u64 fill */ 185 + BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_6, -4), 186 + BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_7, -8), 187 + BPF_LDX_MEM(BPF_DW, BPF_REG_8, BPF_REG_10, -8), 188 + /* if r8 != X goto pc+1 r8 known in fallthrough branch */ 189 + BPF_JMP_IMM(BPF_JNE, BPF_REG_8, 0xffffffff, 1), 190 + BPF_MOV64_IMM(BPF_REG_3, 1), 191 + /* if r8 == X goto pc+1 condition always true on first 192 + * traversal, so starts backtracking to mark r8 as requiring 193 + * precision. r7 marked as needing precision. r6 not marked 194 + * since it's not tracked. 195 + */ 196 + BPF_JMP_IMM(BPF_JEQ, BPF_REG_8, 0xffffffff, 1), 197 + /* fails if r8 correctly marked unknown after fill. */ 198 + BPF_ALU32_IMM(BPF_DIV, BPF_REG_3, 0), 199 + BPF_MOV64_IMM(BPF_REG_0, 0), 200 + BPF_EXIT_INSN(), 201 + }, 202 + .result = REJECT, 203 + .errstr = "div by zero", 204 + .prog_type = BPF_PROG_TYPE_TRACEPOINT, 205 + }, 206 + { 136 207 "allocated_stack", 137 208 .insns = { 138 209 BPF_ALU64_REG(BPF_MOV, BPF_REG_6, BPF_REG_1),
+32
tools/testing/selftests/bpf/verifier/spill_fill.c
··· 176 176 .prog_type = BPF_PROG_TYPE_SCHED_CLS, 177 177 }, 178 178 { 179 + "Spill u32 const scalars. Refill as u64. Offset to skb->data", 180 + .insns = { 181 + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, 182 + offsetof(struct __sk_buff, data)), 183 + BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, 184 + offsetof(struct __sk_buff, data_end)), 185 + /* r6 = 0 */ 186 + BPF_MOV32_IMM(BPF_REG_6, 0), 187 + /* r7 = 20 */ 188 + BPF_MOV32_IMM(BPF_REG_7, 20), 189 + /* *(u32 *)(r10 -4) = r6 */ 190 + BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_6, -4), 191 + /* *(u32 *)(r10 -8) = r7 */ 192 + BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_7, -8), 193 + /* r4 = *(u64 *)(r10 -8) */ 194 + BPF_LDX_MEM(BPF_H, BPF_REG_4, BPF_REG_10, -8), 195 + /* r0 = r2 */ 196 + BPF_MOV64_REG(BPF_REG_0, BPF_REG_2), 197 + /* r0 += r4 R0=pkt R2=pkt R3=pkt_end R4=inv,umax=65535 */ 198 + BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_4), 199 + /* if (r0 > r3) R0=pkt,umax=65535 R2=pkt R3=pkt_end R4=inv,umax=65535 */ 200 + BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1), 201 + /* r0 = *(u32 *)r2 R0=pkt,umax=65535 R2=pkt R3=pkt_end R4=inv20 */ 202 + BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_2, 0), 203 + BPF_MOV64_IMM(BPF_REG_0, 0), 204 + BPF_EXIT_INSN(), 205 + }, 206 + .result = REJECT, 207 + .errstr = "invalid access to packet", 208 + .prog_type = BPF_PROG_TYPE_SCHED_CLS, 209 + }, 210 + { 179 211 "Spill a u32 const scalar. Refill as u16 from fp-6. Offset to skb->data", 180 212 .insns = { 181 213 BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
+23
tools/testing/selftests/bpf/verifier/value_ptr_arith.c
··· 1078 1078 .errstr_unpriv = "R0 pointer -= pointer prohibited", 1079 1079 }, 1080 1080 { 1081 + "map access: trying to leak tained dst reg", 1082 + .insns = { 1083 + BPF_MOV64_IMM(BPF_REG_0, 0), 1084 + BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), 1085 + BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), 1086 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), 1087 + BPF_LD_MAP_FD(BPF_REG_1, 0), 1088 + BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), 1089 + BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1), 1090 + BPF_EXIT_INSN(), 1091 + BPF_MOV64_REG(BPF_REG_2, BPF_REG_0), 1092 + BPF_MOV32_IMM(BPF_REG_1, 0xFFFFFFFF), 1093 + BPF_MOV32_REG(BPF_REG_1, BPF_REG_1), 1094 + BPF_ALU64_REG(BPF_SUB, BPF_REG_2, BPF_REG_1), 1095 + BPF_STX_MEM(BPF_DW, BPF_REG_0, BPF_REG_2, 0), 1096 + BPF_MOV64_IMM(BPF_REG_0, 0), 1097 + BPF_EXIT_INSN(), 1098 + }, 1099 + .fixup_map_array_48b = { 4 }, 1100 + .result = REJECT, 1101 + .errstr = "math between map_value pointer and 4294967295 is not allowed", 1102 + }, 1103 + { 1081 1104 "32bit pkt_ptr -= scalar", 1082 1105 .insns = { 1083 1106 BPF_LDX_MEM(BPF_W, BPF_REG_8, BPF_REG_1,
+2
tools/testing/selftests/damon/.gitignore
··· 1 + # SPDX-License-Identifier: GPL-2.0-only 2 + huge_count_read_write
+5 -2
tools/testing/selftests/damon/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0 2 2 # Makefile for damon selftests 3 3 4 - TEST_FILES = _chk_dependency.sh 5 - TEST_PROGS = debugfs_attrs.sh 4 + TEST_GEN_FILES += huge_count_read_write 5 + 6 + TEST_FILES = _chk_dependency.sh _debugfs_common.sh 7 + TEST_PROGS = debugfs_attrs.sh debugfs_schemes.sh debugfs_target_ids.sh 8 + TEST_PROGS += debugfs_empty_targets.sh debugfs_huge_count_read_write.sh 6 9 7 10 include ../lib.mk
+52
tools/testing/selftests/damon/_debugfs_common.sh
··· 1 + #!/bin/bash 2 + # SPDX-License-Identifier: GPL-2.0 3 + 4 + test_write_result() { 5 + file=$1 6 + content=$2 7 + orig_content=$3 8 + expect_reason=$4 9 + expected=$5 10 + 11 + echo "$content" > "$file" 12 + if [ $? -ne "$expected" ] 13 + then 14 + echo "writing $content to $file doesn't return $expected" 15 + echo "expected because: $expect_reason" 16 + echo "$orig_content" > "$file" 17 + exit 1 18 + fi 19 + } 20 + 21 + test_write_succ() { 22 + test_write_result "$1" "$2" "$3" "$4" 0 23 + } 24 + 25 + test_write_fail() { 26 + test_write_result "$1" "$2" "$3" "$4" 1 27 + } 28 + 29 + test_content() { 30 + file=$1 31 + orig_content=$2 32 + expected=$3 33 + expect_reason=$4 34 + 35 + content=$(cat "$file") 36 + if [ "$content" != "$expected" ] 37 + then 38 + echo "reading $file expected $expected but $content" 39 + echo "expected because: $expect_reason" 40 + echo "$orig_content" > "$file" 41 + exit 1 42 + fi 43 + } 44 + 45 + source ./_chk_dependency.sh 46 + 47 + damon_onoff="$DBGFS/monitor_on" 48 + if [ $(cat "$damon_onoff") = "on" ] 49 + then 50 + echo "monitoring is on" 51 + exit $ksft_skip 52 + fi
+1 -72
tools/testing/selftests/damon/debugfs_attrs.sh
··· 1 1 #!/bin/bash 2 2 # SPDX-License-Identifier: GPL-2.0 3 3 4 - test_write_result() { 5 - file=$1 6 - content=$2 7 - orig_content=$3 8 - expect_reason=$4 9 - expected=$5 10 - 11 - echo "$content" > "$file" 12 - if [ $? -ne "$expected" ] 13 - then 14 - echo "writing $content to $file doesn't return $expected" 15 - echo "expected because: $expect_reason" 16 - echo "$orig_content" > "$file" 17 - exit 1 18 - fi 19 - } 20 - 21 - test_write_succ() { 22 - test_write_result "$1" "$2" "$3" "$4" 0 23 - } 24 - 25 - test_write_fail() { 26 - test_write_result "$1" "$2" "$3" "$4" 1 27 - } 28 - 29 - test_content() { 30 - file=$1 31 - orig_content=$2 32 - expected=$3 33 - expect_reason=$4 34 - 35 - content=$(cat "$file") 36 - if [ "$content" != "$expected" ] 37 - then 38 - echo "reading $file expected $expected but $content" 39 - echo "expected because: $expect_reason" 40 - echo "$orig_content" > "$file" 41 - exit 1 42 - fi 43 - } 44 - 45 - source ./_chk_dependency.sh 4 + source _debugfs_common.sh 46 5 47 6 # Test attrs file 48 7 # =============== ··· 15 56 "min_nr_regions > max_nr_regions" 16 57 test_content "$file" "$orig_content" "1 2 3 4 5" "successfully written" 17 58 echo "$orig_content" > "$file" 18 - 19 - # Test schemes file 20 - # ================= 21 - 22 - file="$DBGFS/schemes" 23 - orig_content=$(cat "$file") 24 - 25 - test_write_succ "$file" "1 2 3 4 5 6 4 0 0 0 1 2 3 1 100 3 2 1" \ 26 - "$orig_content" "valid input" 27 - test_write_fail "$file" "1 2 28 - 3 4 5 6 3 0 0 0 1 2 3 1 100 3 2 1" "$orig_content" "multi lines" 29 - test_write_succ "$file" "" "$orig_content" "disabling" 30 - echo "$orig_content" > "$file" 31 - 32 - # Test target_ids file 33 - # ==================== 34 - 35 - file="$DBGFS/target_ids" 36 - orig_content=$(cat "$file") 37 - 38 - test_write_succ "$file" "1 2 3 4" "$orig_content" "valid input" 39 - test_write_succ "$file" "1 2 abc 4" "$orig_content" "still valid input" 40 - test_content "$file" "$orig_content" "1 2" "non-integer was there" 41 - 
test_write_succ "$file" "abc 2 3" "$orig_content" "the file allows wrong input" 42 - test_content "$file" "$orig_content" "" "wrong input written" 43 - test_write_succ "$file" "" "$orig_content" "empty input" 44 - test_content "$file" "$orig_content" "" "empty input written" 45 - echo "$orig_content" > "$file" 46 - 47 - echo "PASS"
+13
tools/testing/selftests/damon/debugfs_empty_targets.sh
··· 1 + #!/bin/bash 2 + # SPDX-License-Identifier: GPL-2.0 3 + 4 + source _debugfs_common.sh 5 + 6 + # Test empty targets case 7 + # ======================= 8 + 9 + orig_target_ids=$(cat "$DBGFS/target_ids") 10 + echo "" > "$DBGFS/target_ids" 11 + orig_monitor_on=$(cat "$DBGFS/monitor_on") 12 + test_write_fail "$DBGFS/monitor_on" "on" "orig_monitor_on" "empty target ids" 13 + echo "$orig_target_ids" > "$DBGFS/target_ids"
+22
tools/testing/selftests/damon/debugfs_huge_count_read_write.sh
··· 1 + #!/bin/bash 2 + # SPDX-License-Identifier: GPL-2.0 3 + 4 + source _debugfs_common.sh 5 + 6 + # Test huge count read write 7 + # ========================== 8 + 9 + dmesg -C 10 + 11 + for file in "$DBGFS/"* 12 + do 13 + ./huge_count_read_write "$file" 14 + done 15 + 16 + if dmesg | grep -q WARNING 17 + then 18 + dmesg 19 + exit 1 20 + else 21 + exit 0 22 + fi
+19
tools/testing/selftests/damon/debugfs_schemes.sh
··· 1 + #!/bin/bash 2 + # SPDX-License-Identifier: GPL-2.0 3 + 4 + source _debugfs_common.sh 5 + 6 + # Test schemes file 7 + # ================= 8 + 9 + file="$DBGFS/schemes" 10 + orig_content=$(cat "$file") 11 + 12 + test_write_succ "$file" "1 2 3 4 5 6 4 0 0 0 1 2 3 1 100 3 2 1" \ 13 + "$orig_content" "valid input" 14 + test_write_fail "$file" "1 2 15 + 3 4 5 6 3 0 0 0 1 2 3 1 100 3 2 1" "$orig_content" "multi lines" 16 + test_write_succ "$file" "" "$orig_content" "disabling" 17 + test_write_fail "$file" "2 1 2 1 10 1 3 10 1 1 1 1 1 1 1 1 2 3" \ 18 + "$orig_content" "wrong condition ranges" 19 + echo "$orig_content" > "$file"
+19
tools/testing/selftests/damon/debugfs_target_ids.sh
··· 1 + #!/bin/bash 2 + # SPDX-License-Identifier: GPL-2.0 3 + 4 + source _debugfs_common.sh 5 + 6 + # Test target_ids file 7 + # ==================== 8 + 9 + file="$DBGFS/target_ids" 10 + orig_content=$(cat "$file") 11 + 12 + test_write_succ "$file" "1 2 3 4" "$orig_content" "valid input" 13 + test_write_succ "$file" "1 2 abc 4" "$orig_content" "still valid input" 14 + test_content "$file" "$orig_content" "1 2" "non-integer was there" 15 + test_write_succ "$file" "abc 2 3" "$orig_content" "the file allows wrong input" 16 + test_content "$file" "$orig_content" "" "wrong input written" 17 + test_write_succ "$file" "" "$orig_content" "empty input" 18 + test_content "$file" "$orig_content" "" "empty input written" 19 + echo "$orig_content" > "$file"
+39
tools/testing/selftests/damon/huge_count_read_write.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Author: SeongJae Park <sj@kernel.org> 4 + */ 5 + 6 + #include <fcntl.h> 7 + #include <stdlib.h> 8 + #include <unistd.h> 9 + #include <stdio.h> 10 + 11 + void write_read_with_huge_count(char *file) 12 + { 13 + int filedesc = open(file, O_RDWR); 14 + char buf[25]; 15 + int ret; 16 + 17 + printf("%s %s\n", __func__, file); 18 + if (filedesc < 0) { 19 + fprintf(stderr, "failed opening %s\n", file); 20 + exit(1); 21 + } 22 + 23 + write(filedesc, "", 0xfffffffful); 24 + perror("after write: "); 25 + ret = read(filedesc, buf, 0xfffffffful); 26 + perror("after read: "); 27 + close(filedesc); 28 + } 29 + 30 + int main(int argc, char *argv[]) 31 + { 32 + if (argc != 2) { 33 + fprintf(stderr, "Usage: %s <file>\n", argv[0]); 34 + exit(1); 35 + } 36 + write_read_with_huge_count(argv[1]); 37 + 38 + return 0; 39 + }
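huge_count_read_write deliberately passes a count of 0xffffffff to probe how the debugfs handlers cope with oversized requests. In normal use, `read()` simply returns however many bytes are actually available, never the count asked for. A safe userspace sketch of that short-read behavior:

```c
#include <assert.h>
#include <unistd.h>

/* read() returns the bytes actually available, not the count requested. */
static ssize_t short_read_demo(void)
{
	int fds[2];
	char buf[64];

	if (pipe(fds))
		return -1;
	if (write(fds[1], "hello", 5) != 5)
		return -1;
	close(fds[1]);                 /* writer done: EOF after 5 bytes */
	return read(fds[0], buf, sizeof(buf));  /* asks for 64, gets 5 */
}
```

The selftest's interesting case is the opposite one, where the count vastly exceeds any sane buffer, which is why it watches dmesg for WARNINGs rather than checking return values.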
+30
tools/testing/selftests/drivers/net/mlxsw/rif_mac_profiles_occ.sh
··· 72 72 ip link set $h1.10 address $h1_10_mac 73 73 } 74 74 75 + rif_mac_profile_consolidation_test() 76 + { 77 + local count=$1; shift 78 + local h1_20_mac 79 + 80 + RET=0 81 + 82 + if [[ $count -eq 1 ]]; then 83 + return 84 + fi 85 + 86 + h1_20_mac=$(mac_get $h1.20) 87 + 88 + # Set the MAC of $h1.20 to that of $h1.10 and confirm that they are 89 + # using the same MAC profile. 90 + ip link set $h1.20 address 00:11:11:11:11:11 91 + check_err $? 92 + 93 + occ=$(devlink -j resource show $DEVLINK_DEV \ 94 + | jq '.[][][] | select(.name=="rif_mac_profiles") |.["occ"]') 95 + 96 + [[ $occ -eq $((count - 1)) ]] 97 + check_err $? "MAC profile occupancy did not decrease" 98 + 99 + log_test "RIF MAC profile consolidation" 100 + 101 + ip link set $h1.20 address $h1_20_mac 102 + } 103 + 75 104 rif_mac_profile_shared_replacement_test() 76 105 { 77 106 local count=$1; shift ··· 133 104 create_max_rif_mac_profiles $count 134 105 135 106 rif_mac_profile_replacement_test 107 + rif_mac_profile_consolidation_test $count 136 108 rif_mac_profile_shared_replacement_test $count 137 109 } 138 110
+1
tools/testing/selftests/kvm/.gitignore
··· 30 30 /x86_64/svm_int_ctl_test 31 31 /x86_64/sync_regs_test 32 32 /x86_64/tsc_msrs_test 33 + /x86_64/userspace_io_test 33 34 /x86_64/userspace_msr_exit_test 34 35 /x86_64/vmx_apic_access_test 35 36 /x86_64/vmx_close_while_nested_test
+1
tools/testing/selftests/kvm/Makefile
··· 59 59 TEST_GEN_PROGS_x86_64 += x86_64/svm_vmcall_test 60 60 TEST_GEN_PROGS_x86_64 += x86_64/svm_int_ctl_test 61 61 TEST_GEN_PROGS_x86_64 += x86_64/sync_regs_test 62 + TEST_GEN_PROGS_x86_64 += x86_64/userspace_io_test 62 63 TEST_GEN_PROGS_x86_64 += x86_64/userspace_msr_exit_test 63 64 TEST_GEN_PROGS_x86_64 += x86_64/vmx_apic_access_test 64 65 TEST_GEN_PROGS_x86_64 += x86_64/vmx_close_while_nested_test
+9
tools/testing/selftests/kvm/include/kvm_util.h
··· 71 71 72 72 #endif 73 73 74 + #if defined(__x86_64__) 75 + unsigned long vm_compute_max_gfn(struct kvm_vm *vm); 76 + #else 77 + static inline unsigned long vm_compute_max_gfn(struct kvm_vm *vm) 78 + { 79 + return ((1ULL << vm->pa_bits) >> vm->page_shift) - 1; 80 + } 81 + #endif 82 + 74 83 #define MIN_PAGE_SIZE (1U << MIN_PAGE_SHIFT) 75 84 #define PTES_PER_MIN_PAGE ptes_per_page(MIN_PAGE_SIZE) 76 85
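The kvm_util.h hunk above gives non-x86 architectures an inline fallback that derives the highest guest frame number from the VM's physical-address width and page size. A small sketch of that arithmetic with free-standing parameters (the struct access is dropped here purely for illustration):

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the generic vm_compute_max_gfn() fallback: with pa_bits
 * physical address bits and page_shift page-size bits, the highest
 * addressable guest frame number is 2^(pa_bits - page_shift) - 1. */
static unsigned long compute_max_gfn(unsigned int pa_bits, unsigned int page_shift)
{
	return ((1ULL << pa_bits) >> page_shift) - 1;
}
```

For example, 40 PA bits with 4 KiB pages gives a max gfn of 2^28 - 1.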
+1 -1
tools/testing/selftests/kvm/lib/kvm_util.c
··· 302 302 (1ULL << (vm->va_bits - 1)) >> vm->page_shift); 303 303 304 304 /* Limit physical addresses to PA-bits. */ 305 - vm->max_gfn = ((1ULL << vm->pa_bits) >> vm->page_shift) - 1; 305 + vm->max_gfn = vm_compute_max_gfn(vm); 306 306 307 307 /* Allocate and setup memory for guest. */ 308 308 vm->vpages_mapped = sparsebit_alloc();
+68
tools/testing/selftests/kvm/lib/x86_64/processor.c
··· 1431 1431 1432 1432 return cpuid; 1433 1433 } 1434 + 1435 + #define X86EMUL_CPUID_VENDOR_AuthenticAMD_ebx 0x68747541 1436 + #define X86EMUL_CPUID_VENDOR_AuthenticAMD_ecx 0x444d4163 1437 + #define X86EMUL_CPUID_VENDOR_AuthenticAMD_edx 0x69746e65 1438 + 1439 + static inline unsigned x86_family(unsigned int eax) 1440 + { 1441 + unsigned int x86; 1442 + 1443 + x86 = (eax >> 8) & 0xf; 1444 + 1445 + if (x86 == 0xf) 1446 + x86 += (eax >> 20) & 0xff; 1447 + 1448 + return x86; 1449 + } 1450 + 1451 + unsigned long vm_compute_max_gfn(struct kvm_vm *vm) 1452 + { 1453 + const unsigned long num_ht_pages = 12 << (30 - vm->page_shift); /* 12 GiB */ 1454 + unsigned long ht_gfn, max_gfn, max_pfn; 1455 + uint32_t eax, ebx, ecx, edx, max_ext_leaf; 1456 + 1457 + max_gfn = (1ULL << (vm->pa_bits - vm->page_shift)) - 1; 1458 + 1459 + /* Avoid reserved HyperTransport region on AMD processors. */ 1460 + eax = ecx = 0; 1461 + cpuid(&eax, &ebx, &ecx, &edx); 1462 + if (ebx != X86EMUL_CPUID_VENDOR_AuthenticAMD_ebx || 1463 + ecx != X86EMUL_CPUID_VENDOR_AuthenticAMD_ecx || 1464 + edx != X86EMUL_CPUID_VENDOR_AuthenticAMD_edx) 1465 + return max_gfn; 1466 + 1467 + /* On parts with <40 physical address bits, the area is fully hidden */ 1468 + if (vm->pa_bits < 40) 1469 + return max_gfn; 1470 + 1471 + /* Before family 17h, the HyperTransport area is just below 1T. */ 1472 + ht_gfn = (1 << 28) - num_ht_pages; 1473 + eax = 1; 1474 + cpuid(&eax, &ebx, &ecx, &edx); 1475 + if (x86_family(eax) < 0x17) 1476 + goto done; 1477 + 1478 + /* 1479 + * Otherwise it's at the top of the physical address space, possibly 1480 + * reduced due to SME by bits 11:6 of CPUID[0x8000001f].EBX. Use 1481 + * the old conservative value if MAXPHYADDR is not enumerated. 
1482 + */ 1483 + eax = 0x80000000; 1484 + cpuid(&eax, &ebx, &ecx, &edx); 1485 + max_ext_leaf = eax; 1486 + if (max_ext_leaf < 0x80000008) 1487 + goto done; 1488 + 1489 + eax = 0x80000008; 1490 + cpuid(&eax, &ebx, &ecx, &edx); 1491 + max_pfn = (1ULL << ((eax & 0xff) - vm->page_shift)) - 1; 1492 + if (max_ext_leaf >= 0x8000001f) { 1493 + eax = 0x8000001f; 1494 + cpuid(&eax, &ebx, &ecx, &edx); 1495 + max_pfn >>= (ebx >> 6) & 0x3f; 1496 + } 1497 + 1498 + ht_gfn = max_pfn - num_ht_pages; 1499 + done: 1500 + return min(max_gfn, ht_gfn - 1); 1501 + }
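The x86_family() helper in the processor.c hunk above folds CPUID leaf 1's extended family field (EAX bits 27:20) into the 4-bit base family (EAX bits 11:8), which is how the patch detects family 0x17 and later when locating the HyperTransport hole. A standalone sketch of the decoding; the sample EAX values in the test are illustrative:

```c
#include <assert.h>

/* Sketch of the x86_family() decoding: the extended family is added
 * only when the base family field saturates at 0xf, per the CPUID
 * family/model/stepping encoding. */
static unsigned int family_from_eax(unsigned int eax)
{
	unsigned int x86 = (eax >> 8) & 0xf;

	if (x86 == 0xf)
		x86 += (eax >> 20) & 0xff;
	return x86;
}
```

A base family of 0xf plus an extended family of 0x8 yields 0x17, the cutoff the patch uses to pick between the two HyperTransport region locations.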
+1 -1
tools/testing/selftests/kvm/x86_64/svm_int_ctl_test.c
··· 75 75 vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK; 76 76 77 77 /* No intercepts for real and virtual interrupts */ 78 - vmcb->control.intercept &= ~(1ULL << INTERCEPT_INTR | INTERCEPT_VINTR); 78 + vmcb->control.intercept &= ~(BIT(INTERCEPT_INTR) | BIT(INTERCEPT_VINTR)); 79 79 80 80 /* Make a virtual interrupt VINTR_IRQ_NUMBER pending */ 81 81 vmcb->control.int_ctl |= V_IRQ_MASK | (0x1 << V_INTR_PRIO_SHIFT);
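The one-line svm_int_ctl_test.c fix above is a C precedence bug: `<<` binds tighter than `|`, so `1ULL << INTERCEPT_INTR | INTERCEPT_VINTR` parses as `(1ULL << INTERCEPT_INTR) | INTERCEPT_VINTR` rather than a mask of both intercept bits. A sketch with stand-in bit positions (0 and 4 below are illustrative, not the kernel's enum values):

```c
#include <assert.h>

#define BIT(n) (1ULL << (n))

/* Buggy form: only bit a is shifted; b is OR'd in as a raw value. */
static unsigned long long buggy_mask(unsigned int a, unsigned int b)
{
	return 1ULL << a | b;
}

/* Fixed form: both positions become single-bit masks before the OR. */
static unsigned long long fixed_mask(unsigned int a, unsigned int b)
{
	return BIT(a) | BIT(b);
}
```

With a = 0 and b = 4, the buggy mask is 0x5 while the intended mask is 0x11, so the wrong bits were being cleared from the intercept bitmap.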
+114
tools/testing/selftests/kvm/x86_64/userspace_io_test.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + #include <fcntl.h> 3 + #include <stdio.h> 4 + #include <stdlib.h> 5 + #include <string.h> 6 + #include <sys/ioctl.h> 7 + 8 + #include "test_util.h" 9 + 10 + #include "kvm_util.h" 11 + #include "processor.h" 12 + 13 + #define VCPU_ID 1 14 + 15 + static void guest_ins_port80(uint8_t *buffer, unsigned int count) 16 + { 17 + unsigned long end; 18 + 19 + if (count == 2) 20 + end = (unsigned long)buffer + 1; 21 + else 22 + end = (unsigned long)buffer + 8192; 23 + 24 + asm volatile("cld; rep; insb" : "+D"(buffer), "+c"(count) : "d"(0x80) : "memory"); 25 + GUEST_ASSERT_1(count == 0, count); 26 + GUEST_ASSERT_2((unsigned long)buffer == end, buffer, end); 27 + } 28 + 29 + static void guest_code(void) 30 + { 31 + uint8_t buffer[8192]; 32 + int i; 33 + 34 + /* 35 + * Special case tests. main() will adjust RCX 2 => 1 and 3 => 8192 to 36 + * test that KVM doesn't explode when userspace modifies the "count" on 37 + * a userspace I/O exit. KVM isn't required to play nice with the I/O 38 + * itself as KVM doesn't support manipulating the count, it just needs 39 + * to not explode or overflow a buffer. 40 + */ 41 + guest_ins_port80(buffer, 2); 42 + guest_ins_port80(buffer, 3); 43 + 44 + /* Verify KVM fills the buffer correctly when not stuffing RCX. 
*/ 45 + memset(buffer, 0, sizeof(buffer)); 46 + guest_ins_port80(buffer, 8192); 47 + for (i = 0; i < 8192; i++) 48 + GUEST_ASSERT_2(buffer[i] == 0xaa, i, buffer[i]); 49 + 50 + GUEST_DONE(); 51 + } 52 + 53 + int main(int argc, char *argv[]) 54 + { 55 + struct kvm_regs regs; 56 + struct kvm_run *run; 57 + struct kvm_vm *vm; 58 + struct ucall uc; 59 + int rc; 60 + 61 + /* Tell stdout not to buffer its content */ 62 + setbuf(stdout, NULL); 63 + 64 + /* Create VM */ 65 + vm = vm_create_default(VCPU_ID, 0, guest_code); 66 + run = vcpu_state(vm, VCPU_ID); 67 + 68 + memset(&regs, 0, sizeof(regs)); 69 + 70 + while (1) { 71 + rc = _vcpu_run(vm, VCPU_ID); 72 + 73 + TEST_ASSERT(rc == 0, "vcpu_run failed: %d\n", rc); 74 + TEST_ASSERT(run->exit_reason == KVM_EXIT_IO, 75 + "Unexpected exit reason: %u (%s),\n", 76 + run->exit_reason, 77 + exit_reason_str(run->exit_reason)); 78 + 79 + if (get_ucall(vm, VCPU_ID, &uc)) 80 + break; 81 + 82 + TEST_ASSERT(run->io.port == 0x80, 83 + "Expected I/O at port 0x80, got port 0x%x\n", run->io.port); 84 + 85 + /* 86 + * Modify the rep string count in RCX: 2 => 1 and 3 => 8192. 87 + * Note, this abuses KVM's batching of rep string I/O to avoid 88 + * getting stuck in an infinite loop. That behavior isn't in 89 + * scope from a testing perspective as it's not ABI in any way, 90 + * i.e. it really is abusing internal KVM knowledge. 91 + */ 92 + vcpu_regs_get(vm, VCPU_ID, &regs); 93 + if (regs.rcx == 2) 94 + regs.rcx = 1; 95 + if (regs.rcx == 3) 96 + regs.rcx = 8192; 97 + memset((void *)run + run->io.data_offset, 0xaa, 4096); 98 + vcpu_regs_set(vm, VCPU_ID, &regs); 99 + } 100 + 101 + switch (uc.cmd) { 102 + case UCALL_DONE: 103 + break; 104 + case UCALL_ABORT: 105 + TEST_FAIL("%s at %s:%ld : argN+1 = 0x%lx, argN+2 = 0x%lx", 106 + (const char *)uc.args[0], __FILE__, uc.args[1], 107 + uc.args[2], uc.args[3]); 108 + default: 109 + TEST_FAIL("Unknown ucall %lu", uc.cmd); 110 + } 111 + 112 + kvm_vm_free(vm); 113 + return 0; 114 + }
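For readers unfamiliar with rep string I/O, the guest assertions in userspace_io_test.c above rely on `rep insb` decrementing RCX once per byte while advancing RDI (with the direction flag clear). A host-side model of that loop, purely for illustration; the function name is hypothetical:

```c
#include <stdint.h>

/* Model of "rep insb" with DF clear: each iteration stores one byte
 * and advances the destination while decrementing the count, so on
 * completion the count is 0 and the returned pointer sits one past
 * the last byte written. 0xaa mirrors the fill pattern main() stuffs
 * into the shared I/O buffer. */
static uint8_t *rep_insb_model(uint8_t *dst, unsigned long *count, uint8_t data)
{
	while (*count) {
		*dst++ = data;
		(*count)--;
	}
	return dst;
}
```

This is exactly what the guest's GUEST_ASSERT calls check: count reached 0 and the buffer pointer advanced by the (possibly userspace-modified) iteration count.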
+34 -11
tools/testing/selftests/net/fcnal-test.sh
··· 462 462 ip netns del ${NSC} >/dev/null 2>&1 463 463 } 464 464 465 + cleanup_vrf_dup() 466 + { 467 + ip link del ${NSA_DEV2} >/dev/null 2>&1 468 + ip netns pids ${NSC} | xargs kill 2>/dev/null 469 + ip netns del ${NSC} >/dev/null 2>&1 470 + } 471 + 472 + setup_vrf_dup() 473 + { 474 + # some VRF tests use ns-C which has the same config as 475 + # ns-B but for a device NOT in the VRF 476 + create_ns ${NSC} "-" "-" 477 + connect_ns ${NSA} ${NSA_DEV2} ${NSA_IP}/24 ${NSA_IP6}/64 \ 478 + ${NSC} ${NSC_DEV} ${NSB_IP}/24 ${NSB_IP6}/64 479 + } 480 + 465 481 setup() 466 482 { 467 483 local with_vrf=${1} ··· 507 491 508 492 ip -netns ${NSB} ro add ${VRF_IP}/32 via ${NSA_IP} dev ${NSB_DEV} 509 493 ip -netns ${NSB} -6 ro add ${VRF_IP6}/128 via ${NSA_IP6} dev ${NSB_DEV} 510 - 511 - # some VRF tests use ns-C which has the same config as 512 - # ns-B but for a device NOT in the VRF 513 - create_ns ${NSC} "-" "-" 514 - connect_ns ${NSA} ${NSA_DEV2} ${NSA_IP}/24 ${NSA_IP6}/64 \ 515 - ${NSC} ${NSC_DEV} ${NSB_IP}/24 ${NSB_IP6}/64 516 494 else 517 495 ip -netns ${NSA} ro add ${NSB_LO_IP}/32 via ${NSB_IP} dev ${NSA_DEV} 518 496 ip -netns ${NSA} ro add ${NSB_LO_IP6}/128 via ${NSB_IP6} dev ${NSA_DEV} ··· 1257 1247 log_test_addr ${a} $? 1 "Global server, local connection" 1258 1248 1259 1249 # run MD5 tests 1250 + setup_vrf_dup 1260 1251 ipv4_tcp_md5 1252 + cleanup_vrf_dup 1261 1253 1262 1254 # 1263 1255 # enable VRF global server ··· 1825 1813 for a in ${NSA_IP} ${VRF_IP} 1826 1814 do 1827 1815 log_start 1816 + show_hint "Socket not bound to VRF, but address is in VRF" 1828 1817 run_cmd nettest -s -R -P icmp -l ${a} -b 1829 - log_test_addr ${a} $? 0 "Raw socket bind to local address" 1818 + log_test_addr ${a} $? 
1 "Raw socket bind to local address" 1830 1819 1831 1820 log_start 1832 1821 run_cmd nettest -s -R -P icmp -l ${a} -I ${NSA_DEV} -b ··· 2228 2215 log_start 2229 2216 show_hint "Fails since VRF device does not support linklocal or multicast" 2230 2217 run_cmd ${ping6} -c1 -w1 ${a} 2231 - log_test_addr ${a} $? 2 "ping out, VRF bind" 2218 + log_test_addr ${a} $? 1 "ping out, VRF bind" 2232 2219 done 2233 2220 2234 2221 for a in ${NSB_IP6} ${NSB_LO_IP6} ${NSB_LINKIP6}%${NSA_DEV} ${MCAST}%${NSA_DEV} ··· 2756 2743 log_test_addr ${a} $? 1 "Global server, local connection" 2757 2744 2758 2745 # run MD5 tests 2746 + setup_vrf_dup 2759 2747 ipv6_tcp_md5 2748 + cleanup_vrf_dup 2760 2749 2761 2750 # 2762 2751 # enable VRF global server ··· 3461 3446 run_cmd nettest -6 -s -l ${a} -I ${NSA_DEV} -t1 -b 3462 3447 log_test_addr ${a} $? 0 "TCP socket bind to local address after device bind" 3463 3448 3449 + # Sadly, the kernel allows binding a socket to a device and then 3450 + # binding to an address not on the device. So this test passes 3451 + # when it really should not 3464 3452 a=${NSA_LO_IP6} 3465 3453 log_start 3466 - show_hint "Should fail with 'Cannot assign requested address'" 3454 + show_hint "Technically should fail since address is not on device but kernel allows" 3467 3455 run_cmd nettest -6 -s -l ${a} -I ${NSA_DEV} -t1 -b 3468 - log_test_addr ${a} $? 1 "TCP socket bind to out of scope local address" 3456 + log_test_addr ${a} $? 0 "TCP socket bind to out of scope local address" 3469 3457 } 3470 3458 3471 3459 ipv6_addr_bind_vrf() ··· 3517 3499 run_cmd nettest -6 -s -l ${a} -I ${NSA_DEV} -t1 -b 3518 3500 log_test_addr ${a} $? 0 "TCP socket bind to local address with device bind" 3519 3501
3502 + # Sadly, the kernel allows binding a socket to a device and then 3503 + # binding to an address not on the device. The only restriction 3504 + # is that the address is valid in the L3 domain. So this test 3505 + # passes when it really should not 3520 3506 a=${VRF_IP6} 3521 3507 log_start 3508 + show_hint "Technically should fail since address is not on device but kernel allows" 3522 3509 run_cmd nettest -6 -s -l ${a} -I ${NSA_DEV} -t1 -b 3523 - log_test_addr ${a} $? 1 "TCP socket bind to VRF address with device bind" 3510 + log_test_addr ${a} $? 0 "TCP socket bind to VRF address with device bind" 3524 3511 3525 3512 a=${NSA_LO_IP6} 3526 3513 log_start
+2
tools/testing/selftests/net/forwarding/forwarding.config.sample
··· 13 13 NETIFS[p6]=veth5 14 14 NETIFS[p7]=veth6 15 15 NETIFS[p8]=veth7 16 + NETIFS[p9]=veth8 17 + NETIFS[p10]=veth9 16 18 17 19 # Port that does not have a cable connected. 18 20 NETIF_NO_CABLE=eth8
+1 -1
tools/testing/selftests/net/icmp_redirect.sh
··· 311 311 ip -netns h1 ro get ${H1_VRF_ARG} ${H2_N2_IP} | \ 312 312 grep -E -v 'mtu|redirected' | grep -q "cache" 313 313 fi 314 - log_test $? 0 "IPv4: ${desc}" 314 + log_test $? 0 "IPv4: ${desc}" 0 315 315 316 316 # No PMTU info for test "redirect" and "mtu exception plus redirect" 317 317 if [ "$with_redirect" = "yes" ] && [ "$desc" != "redirect exception plus mtu" ]; then
+1 -1
tools/testing/selftests/net/toeplitz.c
··· 498 498 bool have_toeplitz = false; 499 499 int index, c; 500 500 501 - while ((c = getopt_long(argc, argv, "46C:d:i:k:r:stT:u:v", long_options, &index)) != -1) { 501 + while ((c = getopt_long(argc, argv, "46C:d:i:k:r:stT:uv", long_options, &index)) != -1) { 502 502 switch (c) { 503 503 case '4': 504 504 cfg_family = AF_INET;