Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Cross-merge networking fixes after downstream PR.

Conflicts:

include/linux/mlx5/driver.h
617f5db1a626 ("RDMA/mlx5: Fix affinity assignment")
dc13180824b7 ("net/mlx5: Enable devlink port for embedded cpu VF vports")
https://lore.kernel.org/all/20230613125939.595e50b8@canb.auug.org.au/

tools/testing/selftests/net/mptcp/mptcp_join.sh
47867f0a7e83 ("selftests: mptcp: join: skip check if MIB counter not supported")
425ba803124b ("selftests: mptcp: join: support RM_ADDR for used endpoints or not")
45b1a1227a7a ("mptcp: introduces more address related mibs")
0639fa230a21 ("selftests: mptcp: add explicit check for new mibs")
https://lore.kernel.org/netdev/20230609-upstream-net-20230610-mptcp-selftests-support-old-kernels-part-3-v1-0-2896fe2ee8a3@tessares.net/

No adjacent changes.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
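The conflict resolution recorded above follows the usual netdev cross-merge workflow: merge the `net` tree into `net-next`, resolve the overlapping hunks by hand, and commit the result as a merge commit. A minimal, self-contained sketch of that workflow in a throwaway repository (the file, branch, and commit names here are illustrative only, not taken from the kernel trees):

```shell
# Sketch of a cross-merge with a manual conflict resolution.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q -b main repo && cd repo
git config user.email you@example.com && git config user.name you

echo base > driver.h
git add driver.h && git commit -qm "base"

# "net" carries a fix touching the same line as a "net-next" feature.
git checkout -qb net
echo net-fix > driver.h && git commit -qam "net: fix"
git checkout -q main
echo next-feature > driver.h && git commit -qam "net-next: feature"

# The cross-merge conflicts; resolve by hand and commit the merge.
if ! git merge net >/dev/null 2>&1; then
    echo merged-resolution > driver.h   # manual resolution
    git add driver.h
    git commit -qm "Merge branch 'net'"
fi
git log -1 --pretty=%s
```

The lore links in the commit message above serve the same purpose as the resolution file here: they document how the conflicting hunks were reconciled.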

+3266 -1621
+1
.mailmap
···
233 233  Johan Hovold <johan@kernel.org> <jhovold@gmail.com>
234 234  Johan Hovold <johan@kernel.org> <johan@hovoldconsulting.com>
235 235  John Crispin <john@phrozen.org> <blogic@openwrt.org>
236 +  John Keeping <john@keeping.me.uk> <john@metanate.com>
236 237  John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>
237 238  John Stultz <johnstul@us.ibm.com>
238 239  <jon.toppins+linux@gmail.com> <jtoppins@cumulusnetworks.com>
+10 -12
Documentation/admin-guide/cgroup-v2.rst
···
1213 1213  A read-write single value file which exists on non-root
1214 1214  cgroups. The default is "max".
1215 1215
1216 -  Memory usage throttle limit. This is the main mechanism to
1217 -  control memory usage of a cgroup. If a cgroup's usage goes
1216 +  Memory usage throttle limit. If a cgroup's usage goes
1218 1217  over the high boundary, the processes of the cgroup are
1219 1218  throttled and put under heavy reclaim pressure.
1220 1219
1221 1220  Going over the high limit never invokes the OOM killer and
1222 -  under extreme conditions the limit may be breached.
1221 +  under extreme conditions the limit may be breached. The high
1222 +  limit should be used in scenarios where an external process
1223 +  monitors the limited cgroup to alleviate heavy reclaim
1224 +  pressure.
1223 1225
1224 1226  memory.max
1225 1227  A read-write single value file which exists on non-root
1226 1228  cgroups. The default is "max".
1227 1229
1228 -  Memory usage hard limit. This is the final protection
1229 -  mechanism. If a cgroup's memory usage reaches this limit and
1230 -  can't be reduced, the OOM killer is invoked in the cgroup.
1231 -  Under certain circumstances, the usage may go over the limit
1232 -  temporarily.
1230 +  Memory usage hard limit. This is the main mechanism to limit
1231 +  memory usage of a cgroup. If a cgroup's memory usage reaches
1232 +  this limit and can't be reduced, the OOM killer is invoked in
1233 +  the cgroup. Under certain circumstances, the usage may go
1234 +  over the limit temporarily.
1233 1235
1234 1236  In default configuration regular 0-order allocations always
1235 1237  succeed unless OOM killer chooses current task as a victim.
···
1239 1237  Some kinds of allocations don't invoke the OOM killer.
1240 1238  Caller could retry them differently, return into userspace
1241 1239  as -ENOMEM or silently ignore in cases like disk readahead.
1242 -
1243 -  This is the ultimate protection mechanism. As long as the
1244 -  high limit is used and monitored properly, this limit's
1245 -  utility is limited to providing the final safety net.
1246 1240
1247 1241  memory.reclaim
1248 1242  A write-only nested-keyed file which exists for all cgroups.
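The documentation change above distinguishes `memory.high` (throttle plus reclaim pressure, intended to be paired with an external monitor) from `memory.max` (hard limit backed by the OOM killer). A sketch of the corresponding cgroup v2 configuration, as a config fragment only: it assumes root privileges and a cgroup2 hierarchy mounted at /sys/fs/cgroup, and the group name "demo" is made up.

```shell
# Hypothetical cgroup named "demo"; requires root and a cgroup2 mount.
mkdir /sys/fs/cgroup/demo
echo "4G" > /sys/fs/cgroup/demo/memory.high   # throttle + heavy reclaim above 4G
echo "5G" > /sys/fs/cgroup/demo/memory.max    # OOM-kill in the group if 5G can't be reclaimed
echo $$ > /sys/fs/cgroup/demo/cgroup.procs    # move the current shell into the group
```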
+1 -1
Documentation/devicetree/bindings/ata/ahci-common.yaml
···
8 8
9 9  maintainers:
10 10  - Hans de Goede <hdegoede@redhat.com>
11 -  - Damien Le Moal <damien.lemoal@opensource.wdc.com>
11 +  - Damien Le Moal <dlemoal@kernel.org>
12 12
13 13  description:
14 14  This document defines device tree properties for a common AHCI SATA
+1
Documentation/devicetree/bindings/cache/qcom,llcc.yaml
···
129 129  - qcom,sm8250-llcc
130 130  - qcom,sm8350-llcc
131 131  - qcom,sm8450-llcc
132 +  - qcom,sm8550-llcc
132 133  then:
133 134  properties:
134 135  reg:
+1 -1
Documentation/devicetree/bindings/clock/canaan,k210-clk.yaml
···
7 7  title: Canaan Kendryte K210 Clock
8 8
9 9  maintainers:
10 -  - Damien Le Moal <damien.lemoal@wdc.com>
10 +  - Damien Le Moal <dlemoal@kernel.org>
11 11
12 12  description: |
13 13  Canaan Kendryte K210 SoC clocks driver bindings. The clock
+1 -1
Documentation/devicetree/bindings/i3c/silvaco,i3c-master.yaml
···
44 44  - clock-names
45 45  - clocks
46 46
47 -  additionalProperties: true
47 +  unevaluatedProperties: false
48 48
49 49  examples:
50 50  - |
+1 -1
Documentation/devicetree/bindings/mfd/canaan,k210-sysctl.yaml
···
7 7  title: Canaan Kendryte K210 System Controller
8 8
9 9  maintainers:
10 -  - Damien Le Moal <damien.lemoal@wdc.com>
10 +  - Damien Le Moal <dlemoal@kernel.org>
11 11
12 12  description:
13 13  Canaan Inc. Kendryte K210 SoC system controller which provides a
+2 -2
Documentation/devicetree/bindings/net/realtek-bluetooth.yaml
···
11 11  - Alistair Francis <alistair@alistair23.me>
12 12
13 13  description:
14 -  RTL8723CS/RTL8723CS/RTL8821CS/RTL8822CS is a WiFi + BT chip. WiFi part
14 +  RTL8723BS/RTL8723CS/RTL8821CS/RTL8822CS is a WiFi + BT chip. WiFi part
15 15  is connected over SDIO, while BT is connected over serial. It speaks
16 16  H5 protocol with few extra commands to upload firmware and change
17 17  module speed.
···
27 27  - items:
28 28  - enum:
29 29  - realtek,rtl8821cs-bt
30 -  - const: realtek,rtl8822cs-bt
30 +  - const: realtek,rtl8723bs-bt
31 31
32 32  device-wake-gpios:
33 33  maxItems: 1
+1 -1
Documentation/devicetree/bindings/pinctrl/canaan,k210-fpioa.yaml
···
7 7  title: Canaan Kendryte K210 FPIOA
8 8
9 9  maintainers:
10 -  - Damien Le Moal <damien.lemoal@wdc.com>
10 +  - Damien Le Moal <dlemoal@kernel.org>
11 11
12 12  description:
13 13  The Canaan Kendryte K210 SoC Fully Programmable IO Array (FPIOA)
+3 -2
Documentation/devicetree/bindings/pinctrl/qcom,pmic-mpp.yaml
···
144 144  enum: [0, 1, 2, 3, 4, 5, 6, 7]
145 145
146 146  qcom,paired:
147 -  - description:
148 -  Indicates that the pin should be operating in paired mode.
147 +  type: boolean
148 +  description:
149 +  Indicates that the pin should be operating in paired mode.
149 150
150 151  required:
151 152  - pins
+1
Documentation/devicetree/bindings/power/qcom,rpmpd.yaml
···
29 29  - qcom,qcm2290-rpmpd
30 30  - qcom,qcs404-rpmpd
31 31  - qcom,qdu1000-rpmhpd
32 +  - qcom,sa8155p-rpmhpd
32 33  - qcom,sa8540p-rpmhpd
33 34  - qcom,sa8775p-rpmhpd
34 35  - qcom,sdm660-rpmpd
+1 -1
Documentation/devicetree/bindings/reset/canaan,k210-rst.yaml
···
7 7  title: Canaan Kendryte K210 Reset Controller
8 8
9 9  maintainers:
10 -  - Damien Le Moal <damien.lemoal@wdc.com>
10 +  - Damien Le Moal <dlemoal@kernel.org>
11 11
12 12  description: |
13 13  Canaan Kendryte K210 reset controller driver which supports the SoC
+1 -1
Documentation/devicetree/bindings/riscv/canaan.yaml
···
7 7  title: Canaan SoC-based boards
8 8
9 9  maintainers:
10 -  - Damien Le Moal <damien.lemoal@wdc.com>
10 +  - Damien Le Moal <dlemoal@kernel.org>
11 11
12 12  description:
13 13  Canaan Kendryte K210 SoC-based boards
+1 -1
Documentation/devicetree/usage-model.rst
···
415 415  because it must decide whether to register each node as either a
416 416  platform_device or an amba_device. This unfortunately complicates the
417 417  device creation model a little bit, but the solution turns out not to
418 -  be too invasive. If a node is compatible with "arm,amba-primecell", then
418 +  be too invasive. If a node is compatible with "arm,primecell", then
419 419  of_platform_populate() will register it as an amba_device instead of a
420 420  platform_device.
+1 -1
Documentation/translations/zh_CN/devicetree/usage-model.rst
···
325 325
326 326  当使用DT时,这给of_platform_populate()带来了问题,因为它必须决定是否将
327 327  每个节点注册为platform_device或amba_device。不幸的是,这使设备创建模型
328 -  变得有点复杂,但解决方案原来并不是太具有侵略性。如果一个节点与“arm,amba-primecell”
328 +  变得有点复杂,但解决方案原来并不是太具有侵略性。如果一个节点与“arm,primecell”
329 329  兼容,那么of_platform_populate()将把它注册为amba_device而不是
330 330  platform_device。
+14 -1
MAINTAINERS
···
8798 8798  GPIO SUBSYSTEM
8799 8799  M: Linus Walleij <linus.walleij@linaro.org>
8800 8800  M: Bartosz Golaszewski <brgl@bgdev.pl>
8801 +  R: Andy Shevchenko <andy@kernel.org>
8801 8802  L: linux-gpio@vger.kernel.org
8802 8803  S: Maintained
8803 8804  T: git git://git.kernel.org/pub/scm/linux/kernel/git/brgl/linux.git
···
9695 9694  F: include/uapi/linux/i2c.h
9696 9695
9697 9696  I2C SUBSYSTEM HOST DRIVERS
9697 +  M: Andi Shyti <andi.shyti@kernel.org>
9698 9698  L: linux-i2c@vger.kernel.org
9699 -  S: Odd Fixes
9699 +  S: Maintained
9700 9700  W: https://i2c.wiki.kernel.org/
9701 9701  Q: https://patchwork.ozlabs.org/project/linux-i2c/list/
9702 9702  T: git git://git.kernel.org/pub/scm/linux/kernel/git/wsa/linux.git
···
18062 18060  F: Documentation/devicetree/bindings/usb/renesas,rzn1-usbf.yaml
18063 18061  F: drivers/usb/gadget/udc/renesas_usbf.c
18064 18062
18063 +  RENESAS RZ/V2M I2C DRIVER
18064 +  M: Fabrizio Castro <fabrizio.castro.jz@renesas.com>
18065 +  L: linux-i2c@vger.kernel.org
18066 +  L: linux-renesas-soc@vger.kernel.org
18067 +  S: Supported
18068 +  F: Documentation/devicetree/bindings/i2c/renesas,rzv2m.yaml
18069 +  F: drivers/i2c/busses/i2c-rzv2m.c
18070 +
18065 18071  RENESAS USB PHY DRIVER
18066 18072  M: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
18067 18073  L: linux-renesas-soc@vger.kernel.org
···
19154 19144  M: Karsten Graul <kgraul@linux.ibm.com>
19155 19145  M: Wenjia Zhang <wenjia@linux.ibm.com>
19156 19146  M: Jan Karcher <jaka@linux.ibm.com>
19147 +  R: D. Wythe <alibuda@linux.alibaba.com>
19148 +  R: Tony Lu <tonylu@linux.alibaba.com>
19149 +  R: Wen Gu <guwen@linux.alibaba.com>
19157 19150  L: linux-s390@vger.kernel.org
19158 19151  S: Supported
19159 19152  F: net/smc/
+1 -1
Makefile
···
2 2  VERSION = 6
3 3  PATCHLEVEL = 4
4 4  SUBLEVEL = 0
5 -  EXTRAVERSION = -rc5
5 +  EXTRAVERSION = -rc6
6 6  NAME = Hurr durr I'ma ninja sloth
7 7
8 8  # *DOCUMENTATION*
+1 -1
arch/arm/boot/dts/am57xx-cl-som-am57x.dts
···
527 527
528 528  interrupt-parent = <&gpio1>;
529 529  interrupts = <31 0>;
530 -  pendown-gpio = <&gpio1 31 0>;
530 +  pendown-gpio = <&gpio1 31 GPIO_ACTIVE_LOW>;
531 531
532 532
533 533  ti,x-min = /bits/ 16 <0x0>;
+1 -1
arch/arm/boot/dts/at91-sama7g5ek.dts
···
792 792  };
793 793
794 794  &shdwc {
795 -  atmel,shdwc-debouncer = <976>;
795 +  debounce-delay-us = <976>;
796 796  status = "okay";
797 797
798 798  input@0 {
+1 -1
arch/arm/boot/dts/at91sam9261ek.dts
···
156 156  compatible = "ti,ads7843";
157 157  interrupts-extended = <&pioC 2 IRQ_TYPE_EDGE_BOTH>;
158 158  spi-max-frequency = <3000000>;
159 -  pendown-gpio = <&pioC 2 GPIO_ACTIVE_HIGH>;
159 +  pendown-gpio = <&pioC 2 GPIO_ACTIVE_LOW>;
160 160
161 161  ti,x-min = /bits/ 16 <150>;
162 162  ti,x-max = /bits/ 16 <3830>;
+1 -1
arch/arm/boot/dts/imx7d-pico-hobbit.dts
···
64 64  interrupt-parent = <&gpio2>;
65 65  interrupts = <7 0>;
66 66  spi-max-frequency = <1000000>;
67 -  pendown-gpio = <&gpio2 7 0>;
67 +  pendown-gpio = <&gpio2 7 GPIO_ACTIVE_LOW>;
68 68  vcc-supply = <&reg_3p3v>;
69 69  ti,x-min = /bits/ 16 <0>;
70 70  ti,x-max = /bits/ 16 <4095>;
+1 -1
arch/arm/boot/dts/imx7d-sdb.dts
···
205 205  pinctrl-0 = <&pinctrl_tsc2046_pendown>;
206 206  interrupt-parent = <&gpio2>;
207 207  interrupts = <29 0>;
208 -  pendown-gpio = <&gpio2 29 GPIO_ACTIVE_HIGH>;
208 +  pendown-gpio = <&gpio2 29 GPIO_ACTIVE_LOW>;
209 209  touchscreen-max-pressure = <255>;
210 210  wakeup-source;
211 211  };
+1 -1
arch/arm/boot/dts/omap3-cm-t3x.dtsi
···
227 227
228 228  interrupt-parent = <&gpio2>;
229 229  interrupts = <25 0>; /* gpio_57 */
230 -  pendown-gpio = <&gpio2 25 GPIO_ACTIVE_HIGH>;
230 +  pendown-gpio = <&gpio2 25 GPIO_ACTIVE_LOW>;
231 231
232 232  ti,x-min = /bits/ 16 <0x0>;
233 233  ti,x-max = /bits/ 16 <0x0fff>;
+1 -1
arch/arm/boot/dts/omap3-devkit8000-lcd-common.dtsi
···
54 54
55 55  interrupt-parent = <&gpio1>;
56 56  interrupts = <27 0>; /* gpio_27 */
57 -  pendown-gpio = <&gpio1 27 GPIO_ACTIVE_HIGH>;
57 +  pendown-gpio = <&gpio1 27 GPIO_ACTIVE_LOW>;
58 58
59 59  ti,x-min = /bits/ 16 <0x0>;
60 60  ti,x-max = /bits/ 16 <0x0fff>;
+1 -1
arch/arm/boot/dts/omap3-lilly-a83x.dtsi
···
311 311  interrupt-parent = <&gpio1>;
312 312  interrupts = <8 0>; /* boot6 / gpio_8 */
313 313  spi-max-frequency = <1000000>;
314 -  pendown-gpio = <&gpio1 8 GPIO_ACTIVE_HIGH>;
314 +  pendown-gpio = <&gpio1 8 GPIO_ACTIVE_LOW>;
315 315  vcc-supply = <&reg_vcc3>;
316 316  pinctrl-names = "default";
317 317  pinctrl-0 = <&tsc2048_pins>;
+1 -1
arch/arm/boot/dts/omap3-overo-common-lcd35.dtsi
···
149 149
150 150  interrupt-parent = <&gpio4>;
151 151  interrupts = <18 0>; /* gpio_114 */
152 -  pendown-gpio = <&gpio4 18 GPIO_ACTIVE_HIGH>;
152 +  pendown-gpio = <&gpio4 18 GPIO_ACTIVE_LOW>;
153 153
154 154  ti,x-min = /bits/ 16 <0x0>;
155 155  ti,x-max = /bits/ 16 <0x0fff>;
+1 -1
arch/arm/boot/dts/omap3-overo-common-lcd43.dtsi
···
160 160
161 161  interrupt-parent = <&gpio4>;
162 162  interrupts = <18 0>; /* gpio_114 */
163 -  pendown-gpio = <&gpio4 18 GPIO_ACTIVE_HIGH>;
163 +  pendown-gpio = <&gpio4 18 GPIO_ACTIVE_LOW>;
164 164
165 165  ti,x-min = /bits/ 16 <0x0>;
166 166  ti,x-max = /bits/ 16 <0x0fff>;
+1 -1
arch/arm/boot/dts/omap3-pandora-common.dtsi
···
651 651  pinctrl-0 = <&penirq_pins>;
652 652  interrupt-parent = <&gpio3>;
653 653  interrupts = <30 IRQ_TYPE_NONE>; /* GPIO_94 */
654 -  pendown-gpio = <&gpio3 30 GPIO_ACTIVE_HIGH>;
654 +  pendown-gpio = <&gpio3 30 GPIO_ACTIVE_LOW>;
655 655  vcc-supply = <&vaux4>;
656 656
657 657  ti,x-min = /bits/ 16 <0>;
+1 -1
arch/arm/boot/dts/omap5-cm-t54.dts
···
354 354
355 355  interrupt-parent = <&gpio1>;
356 356  interrupts = <15 0>; /* gpio1_wk15 */
357 -  pendown-gpio = <&gpio1 15 GPIO_ACTIVE_HIGH>;
357 +  pendown-gpio = <&gpio1 15 GPIO_ACTIVE_LOW>;
358 358
359 359
360 360  ti,x-min = /bits/ 16 <0x0>;
-2
arch/arm/boot/dts/qcom-apq8026-asus-sparrow.dts
···
268 268  function = "gpio";
269 269  drive-strength = <8>;
270 270  bias-disable;
271 -  input-enable;
272 271  };
273 272
274 273  wlan_hostwake_default_state: wlan-hostwake-default-state {
···
275 276  function = "gpio";
276 277  drive-strength = <2>;
277 278  bias-disable;
278 -  input-enable;
279 279  };
280 280
281 281  wlan_regulator_default_state: wlan-regulator-default-state {
-1
arch/arm/boot/dts/qcom-apq8026-huawei-sturgeon.dts
···
352 352  function = "gpio";
353 353  drive-strength = <2>;
354 354  bias-disable;
355 -  input-enable;
356 355  };
357 356
358 357  wlan_regulator_default_state: wlan-regulator-default-state {
-3
arch/arm/boot/dts/qcom-apq8026-lg-lenok.dts
···
307 307  function = "gpio";
308 308  drive-strength = <2>;
309 309  bias-disable;
310 -  input-enable;
311 310  };
312 311
313 312  touch_pins: touch-state {
···
316 317
317 318  drive-strength = <8>;
318 319  bias-pull-down;
319 -  input-enable;
320 320  };
321 321
322 322  reset-pins {
···
333 335  function = "gpio";
334 336  drive-strength = <2>;
335 337  bias-disable;
336 -  input-enable;
337 338  };
338 339
339 340  wlan_regulator_default_state: wlan-regulator-default-state {
+1
arch/arm/boot/dts/qcom-apq8064.dtsi
···
83 83  L2: l2-cache {
84 84  compatible = "cache";
85 85  cache-level = <2>;
86 +  cache-unified;
86 87  };
87 88
88 89  idle-states {
+1
arch/arm/boot/dts/qcom-apq8084.dtsi
···
74 74  L2: l2-cache {
75 75  compatible = "cache";
76 76  cache-level = <2>;
77 +  cache-unified;
77 78  qcom,saw = <&saw_l2>;
78 79  };
79 80
+1
arch/arm/boot/dts/qcom-ipq4019.dtsi
···
102 102  L2: l2-cache {
103 103  compatible = "cache";
104 104  cache-level = <2>;
105 +  cache-unified;
105 106  qcom,saw = <&saw_l2>;
106 107  };
107 108  };
+1
arch/arm/boot/dts/qcom-ipq8064.dtsi
···
45 45  L2: l2-cache {
46 46  compatible = "cache";
47 47  cache-level = <2>;
48 +  cache-unified;
48 49  };
49 50  };
50 51
-1
arch/arm/boot/dts/qcom-mdm9615-wp8548-mangoh-green.dts
···
49 49  gpioext1-pins {
50 50  pins = "gpio2";
51 51  function = "gpio";
52 -  input-enable;
53 52  bias-disable;
54 53  };
55 54  };
+1
arch/arm/boot/dts/qcom-msm8660.dtsi
···
36 36  L2: l2-cache {
37 37  compatible = "cache";
38 38  cache-level = <2>;
39 +  cache-unified;
39 40  };
40 41  };
41 42
+1
arch/arm/boot/dts/qcom-msm8960.dtsi
···
42 42  L2: l2-cache {
43 43  compatible = "cache";
44 44  cache-level = <2>;
45 +  cache-unified;
45 46  };
46 47  };
47 48
-2
arch/arm/boot/dts/qcom-msm8974-lge-nexus5-hammerhead.dts
···
592 592  pins = "gpio73";
593 593  function = "gpio";
594 594  bias-disable;
595 -  input-enable;
596 595  };
597 596
598 597  touch_pin: touch-state {
···
601 602
602 603  drive-strength = <2>;
603 604  bias-disable;
604 -  input-enable;
605 605  };
606 606
607 607  reset-pins {
-1
arch/arm/boot/dts/qcom-msm8974-sony-xperia-rhine.dtsi
···
433 433  function = "gpio";
434 434  drive-strength = <2>;
435 435  bias-disable;
436 -  input-enable;
437 436  };
438 437
439 438  sdc1_on: sdc1-on-state {
+1
arch/arm/boot/dts/qcom-msm8974.dtsi
···
80 80  L2: l2-cache {
81 81  compatible = "cache";
82 82  cache-level = <2>;
83 +  cache-unified;
83 84  qcom,saw = <&saw_l2>;
84 85  };
85 86
-1
arch/arm/boot/dts/qcom-msm8974pro-oneplus-bacon.dts
···
461 461  function = "gpio";
462 462  drive-strength = <2>;
463 463  bias-disable;
464 -  input-enable;
465 464  };
466 465
467 466  reset-pins {
-4
arch/arm/boot/dts/qcom-msm8974pro-samsung-klte.dts
···
704 704  pins = "gpio75";
705 705  function = "gpio";
706 706  drive-strength = <16>;
707 -  input-enable;
708 707  };
709 708
710 709  devwake-pins {
···
759 760  i2c_touchkey_pins: i2c-touchkey-state {
760 761  pins = "gpio95", "gpio96";
761 762  function = "gpio";
762 -  input-enable;
763 763  bias-pull-up;
764 764  };
765 765
766 766  i2c_led_gpioex_pins: i2c-led-gpioex-state {
767 767  pins = "gpio120", "gpio121";
768 768  function = "gpio";
769 -  input-enable;
770 769  bias-pull-down;
771 770  };
772 771
···
778 781  wifi_pin: wifi-state {
779 782  pins = "gpio92";
780 783  function = "gpio";
781 -  input-enable;
782 784  bias-pull-down;
783 786
-1
arch/arm/boot/dts/qcom-msm8974pro-sony-xperia-shinano-castor.dts
···
631 631  function = "gpio";
632 632  drive-strength = <2>;
633 633  bias-disable;
634 -  input-enable;
635 634  };
636 635
637 636  bt_host_wake_pin: bt-host-wake-state {
+9 -11
arch/arm/mach-at91/pm.c
···
334 334  pdev = of_find_device_by_node(eth->np);
335 335  if (!pdev)
336 336  return false;
337 +  /* put_device(eth->dev) is called at the end of suspend. */
337 338  eth->dev = &pdev->dev;
338 339  }
339 340
340 341  /* No quirks if device isn't a wakeup source. */
341 -  if (!device_may_wakeup(eth->dev)) {
342 -  put_device(eth->dev);
342 +  if (!device_may_wakeup(eth->dev))
343 343  return false;
344 -  }
345 344
346 -  /* put_device(eth->dev) is called at the end of suspend. */
347 345  return true;
348 346  }
349 347
···
437 439  pr_err("AT91: PM: failed to enable %s clocks\n",
438 440  j == AT91_PM_G_ETH ? "geth" : "eth");
439 441  }
440 -  } else {
441 -  /*
442 -  * Release the reference to eth->dev taken in
443 -  * at91_pm_eth_quirk_is_valid().
444 -  */
445 -  put_device(eth->dev);
446 -  eth->dev = NULL;
447 442  }
443 +
444 +  /*
445 +  * Release the reference to eth->dev taken in
446 +  * at91_pm_eth_quirk_is_valid().
447 +  */
448 +  put_device(eth->dev);
449 +  eth->dev = NULL;
448 450  }
449 451
450 452  return ret;
+1 -1
arch/arm64/Kconfig
···
1516 1516  #   16K  |       27       |      14      |       13        |         11         |
1517 1517  #   64K  |       29       |      16      |       13        |         13         |
1518 1518  config ARCH_FORCE_MAX_ORDER
1519 -  int "Order of maximal physically contiguous allocations" if EXPERT && (ARM64_4K_PAGES || ARM64_16K_PAGES)
1519 +  int
1520 1520  default "13" if ARM64_64K_PAGES
1521 1521  default "11" if ARM64_16K_PAGES
1522 1522  default "10"
+8
arch/arm64/boot/dts/freescale/imx8-ss-dma.dtsi
···
90 90  clocks = <&uart0_lpcg IMX_LPCG_CLK_4>,
91 91  <&uart0_lpcg IMX_LPCG_CLK_0>;
92 92  clock-names = "ipg", "baud";
93 +  assigned-clocks = <&clk IMX_SC_R_UART_0 IMX_SC_PM_CLK_PER>;
94 +  assigned-clock-rates = <80000000>;
93 95  power-domains = <&pd IMX_SC_R_UART_0>;
94 96  status = "disabled";
95 97  };
···
102 100  clocks = <&uart1_lpcg IMX_LPCG_CLK_4>,
103 101  <&uart1_lpcg IMX_LPCG_CLK_0>;
104 102  clock-names = "ipg", "baud";
103 +  assigned-clocks = <&clk IMX_SC_R_UART_1 IMX_SC_PM_CLK_PER>;
104 +  assigned-clock-rates = <80000000>;
105 105  power-domains = <&pd IMX_SC_R_UART_1>;
106 106  status = "disabled";
107 107  };
···
114 110  clocks = <&uart2_lpcg IMX_LPCG_CLK_4>,
115 111  <&uart2_lpcg IMX_LPCG_CLK_0>;
116 112  clock-names = "ipg", "baud";
113 +  assigned-clocks = <&clk IMX_SC_R_UART_2 IMX_SC_PM_CLK_PER>;
114 +  assigned-clock-rates = <80000000>;
117 115  power-domains = <&pd IMX_SC_R_UART_2>;
118 116  status = "disabled";
119 117  };
···
126 120  clocks = <&uart3_lpcg IMX_LPCG_CLK_4>,
127 121  <&uart3_lpcg IMX_LPCG_CLK_0>;
128 122  clock-names = "ipg", "baud";
123 +  assigned-clocks = <&clk IMX_SC_R_UART_3 IMX_SC_PM_CLK_PER>;
124 +  assigned-clock-rates = <80000000>;
129 125  power-domains = <&pd IMX_SC_R_UART_3>;
130 126  status = "disabled";
131 127  };
+2 -2
arch/arm64/boot/dts/freescale/imx8mn-beacon-baseboard.dtsi
···
81 81  &ecspi2 {
82 82  pinctrl-names = "default";
83 83  pinctrl-0 = <&pinctrl_espi2>;
84 -  cs-gpios = <&gpio5 9 GPIO_ACTIVE_LOW>;
84 +  cs-gpios = <&gpio5 13 GPIO_ACTIVE_LOW>;
85 85  status = "okay";
86 86
87 87  eeprom@0 {
···
202 202  MX8MN_IOMUXC_ECSPI2_SCLK_ECSPI2_SCLK 0x82
203 203  MX8MN_IOMUXC_ECSPI2_MOSI_ECSPI2_MOSI 0x82
204 204  MX8MN_IOMUXC_ECSPI2_MISO_ECSPI2_MISO 0x82
205 -  MX8MN_IOMUXC_ECSPI1_SS0_GPIO5_IO9 0x41
205 +  MX8MN_IOMUXC_ECSPI2_SS0_GPIO5_IO13 0x41
206 206  >;
207 207  };
208 208
+2 -2
arch/arm64/boot/dts/freescale/imx8qm-mek.dts
···
82 82  pinctrl-0 = <&pinctrl_usdhc2>;
83 83  bus-width = <4>;
84 84  vmmc-supply = <&reg_usdhc2_vmmc>;
85 -  cd-gpios = <&lsio_gpio4 22 GPIO_ACTIVE_LOW>;
86 -  wp-gpios = <&lsio_gpio4 21 GPIO_ACTIVE_HIGH>;
85 +  cd-gpios = <&lsio_gpio5 22 GPIO_ACTIVE_LOW>;
86 +  wp-gpios = <&lsio_gpio5 21 GPIO_ACTIVE_HIGH>;
87 87  status = "okay";
88 88  };
89 89
+1
arch/arm64/boot/dts/qcom/ipq5332.dtsi
···
73 73  L2_0: l2-cache {
74 74  compatible = "cache";
75 75  cache-level = <2>;
76 +  cache-unified;
76 77  };
77 78  };
78 79
+2 -1
arch/arm64/boot/dts/qcom/ipq6018.dtsi
···
83 83
84 84  L2_0: l2-cache {
85 85  compatible = "cache";
86 -  cache-level = <0x2>;
86 +  cache-level = <2>;
87 +  cache-unified;
87 88  };
88 89  };
89 90
+2 -1
arch/arm64/boot/dts/qcom/ipq8074.dtsi
···
66 66
67 67  L2_0: l2-cache {
68 68  compatible = "cache";
69 -  cache-level = <0x2>;
69 +  cache-level = <2>;
70 +  cache-unified;
70 71  };
71 72  };
72 73
+1
arch/arm64/boot/dts/qcom/ipq9574.dtsi
···
72 72  L2_0: l2-cache {
73 73  compatible = "cache";
74 74  cache-level = <2>;
75 +  cache-unified;
75 76  };
76 77  };
77 78
+1
arch/arm64/boot/dts/qcom/msm8916.dtsi
···
180 180  L2_0: l2-cache {
181 181  compatible = "cache";
182 182  cache-level = <2>;
183 +  cache-unified;
183 184  };
184 185
185 186  idle-states {
+2
arch/arm64/boot/dts/qcom/msm8953.dtsi
···
153 153  L2_0: l2-cache-0 {
154 154  compatible = "cache";
155 155  cache-level = <2>;
156 +  cache-unified;
156 157  };
157 158
158 159  L2_1: l2-cache-1 {
159 160  compatible = "cache";
160 161  cache-level = <2>;
162 +  cache-unified;
161 163  };
162 164  };
163 165
+2
arch/arm64/boot/dts/qcom/msm8976.dtsi
···
193 193  l2_0: l2-cache0 {
194 194  compatible = "cache";
195 195  cache-level = <2>;
196 +  cache-unified;
196 197  };
197 198
198 199  l2_1: l2-cache1 {
199 200  compatible = "cache";
200 201  cache-level = <2>;
202 +  cache-unified;
201 203  };
202 204  };
203 205
+2
arch/arm64/boot/dts/qcom/msm8994.dtsi
···
52 52  L2_0: l2-cache {
53 53  compatible = "cache";
54 54  cache-level = <2>;
55 +  cache-unified;
55 56  };
56 57  };
57 58
···
89 88  L2_1: l2-cache {
90 89  compatible = "cache";
91 90  cache-level = <2>;
91 +  cache-unified;
92 92  };
93 93  };
94 94
+6 -4
arch/arm64/boot/dts/qcom/msm8996.dtsi
···
53 53  #cooling-cells = <2>;
54 54  next-level-cache = <&L2_0>;
55 55  L2_0: l2-cache {
56 -  compatible = "cache";
57 -  cache-level = <2>;
56 +  compatible = "cache";
57 +  cache-level = <2>;
58 +  cache-unified;
58 59  };
59 60  };
60 61
···
84 83  #cooling-cells = <2>;
85 84  next-level-cache = <&L2_1>;
86 85  L2_1: l2-cache {
87 -  compatible = "cache";
88 -  cache-level = <2>;
86 +  compatible = "cache";
87 +  cache-level = <2>;
88 +  cache-unified;
89 89  };
90 90  };
91 91
+2
arch/arm64/boot/dts/qcom/msm8998.dtsi
···
146 146  L2_0: l2-cache {
147 147  compatible = "cache";
148 148  cache-level = <2>;
149 +  cache-unified;
149 150  };
150 151  };
151 152
···
191 190  L2_1: l2-cache {
192 191  compatible = "cache";
193 192  cache-level = <2>;
193 +  cache-unified;
194 194  };
195 195  };
196 196
+1
arch/arm64/boot/dts/qcom/qcm2290.dtsi
···
51 51  L2_0: l2-cache {
52 52  compatible = "cache";
53 53  cache-level = <2>;
54 +  cache-unified;
54 55  };
55 56  };
56 57
+1
arch/arm64/boot/dts/qcom/qcs404.dtsi
···
95 95  L2_0: l2-cache {
96 96  compatible = "cache";
97 97  cache-level = <2>;
98 +  cache-unified;
98 99  };
99 100
100 101  idle-states {
+10
arch/arm64/boot/dts/qcom/qdu1000.dtsi
···
35 35  next-level-cache = <&L2_0>;
36 36  L2_0: l2-cache {
37 37  compatible = "cache";
38 +  cache-level = <2>;
39 +  cache-unified;
38 40  next-level-cache = <&L3_0>;
39 41  L3_0: l3-cache {
40 42  compatible = "cache";
43 +  cache-level = <3>;
44 +  cache-unified;
41 45  };
42 46  };
43 47  };
···
58 54  next-level-cache = <&L2_100>;
59 55  L2_100: l2-cache {
60 56  compatible = "cache";
57 +  cache-level = <2>;
58 +  cache-unified;
61 59  next-level-cache = <&L3_0>;
62 60  };
63 61  };
···
76 70  next-level-cache = <&L2_200>;
77 71  L2_200: l2-cache {
78 72  compatible = "cache";
73 +  cache-level = <2>;
74 +  cache-unified;
79 75  next-level-cache = <&L3_0>;
80 76  };
81 77  };
···
94 86  next-level-cache = <&L2_300>;
95 87  L2_300: l2-cache {
96 88  compatible = "cache";
89 +  cache-level = <2>;
90 +  cache-unified;
97 91  next-level-cache = <&L3_0>;
98 92  };
99 93  };
+1 -1
arch/arm64/boot/dts/qcom/sa8155p-adp.dts
···
7 7
8 8  #include <dt-bindings/regulator/qcom,rpmh-regulator.h>
9 9  #include <dt-bindings/gpio/gpio.h>
10 -  #include "sm8150.dtsi"
10 +  #include "sa8155p.dtsi"
11 11  #include "pmm8155au_1.dtsi"
12 12  #include "pmm8155au_2.dtsi"
13 13
+40
arch/arm64/boot/dts/qcom/sa8155p.dtsi
···
1 +  // SPDX-License-Identifier: BSD-3-Clause
2 +  /*
3 +  * Copyright (c) 2023, Linaro Limited
4 +  *
5 +  * SA8155P is an automotive variant of SM8150, with some minor changes.
6 +  * Most notably, the RPMhPD setup differs: MMCX and LCX/LMX rails are gone,
7 +  * though the cmd-db doesn't reflect that and access attemps result in a bite.
8 +  */
9 +
10 +  #include "sm8150.dtsi"
11 +
12 +  &dispcc {
13 +  power-domains = <&rpmhpd SA8155P_CX>;
14 +  };
15 +
16 +  &mdss_dsi0 {
17 +  power-domains = <&rpmhpd SA8155P_CX>;
18 +  };
19 +
20 +  &mdss_dsi1 {
21 +  power-domains = <&rpmhpd SA8155P_CX>;
22 +  };
23 +
24 +  &mdss_mdp {
25 +  power-domains = <&rpmhpd SA8155P_CX>;
26 +  };
27 +
28 +  &remoteproc_slpi {
29 +  power-domains = <&rpmhpd SA8155P_CX>,
30 +  <&rpmhpd SA8155P_MX>;
31 +  };
32 +
33 +  &rpmhpd {
34 +  /*
35 +  * The bindings were crafted such that SA8155P PDs match their
36 +  * SM8150 counterparts to make it more maintainable and only
37 +  * necessitate adjusting entries that actually differ
38 +  */
39 +  compatible = "qcom,sa8155p-rpmhpd";
40 +  };
+20
arch/arm64/boot/dts/qcom/sa8775p.dtsi
···
42 42  next-level-cache = <&L2_0>;
43 43  L2_0: l2-cache {
44 44  compatible = "cache";
45 +  cache-level = <2>;
46 +  cache-unified;
45 47  next-level-cache = <&L3_0>;
46 48  L3_0: l3-cache {
47 49  compatible = "cache";
50 +  cache-level = <3>;
51 +  cache-unified;
48 52  };
49 53  };
50 54  };
···
62 58  next-level-cache = <&L2_1>;
63 59  L2_1: l2-cache {
64 60  compatible = "cache";
61 +  cache-level = <2>;
62 +  cache-unified;
65 63  next-level-cache = <&L3_0>;
66 64  };
67 65  };
···
77 71  next-level-cache = <&L2_2>;
78 72  L2_2: l2-cache {
79 73  compatible = "cache";
74 +  cache-level = <2>;
75 +  cache-unified;
80 76  next-level-cache = <&L3_0>;
81 77  };
82 78  };
···
92 84  next-level-cache = <&L2_3>;
93 85  L2_3: l2-cache {
94 86  compatible = "cache";
87 +  cache-level = <2>;
88 +  cache-unified;
95 89  next-level-cache = <&L3_0>;
96 90  };
97 91  };
···
107 97  next-level-cache = <&L2_4>;
108 98  L2_4: l2-cache {
109 99  compatible = "cache";
100 +  cache-level = <2>;
101 +  cache-unified;
110 102  next-level-cache = <&L3_1>;
111 103  L3_1: l3-cache {
112 104  compatible = "cache";
105 +  cache-level = <3>;
106 +  cache-unified;
113 107  };
114 108
115 109  };
···
128 114  next-level-cache = <&L2_5>;
129 115  L2_5: l2-cache {
130 116  compatible = "cache";
117 +  cache-level = <2>;
118 +  cache-unified;
131 119  next-level-cache = <&L3_1>;
132 120  };
133 121  };
···
143 127  next-level-cache = <&L2_6>;
144 128  L2_6: l2-cache {
145 129  compatible = "cache";
130 +  cache-level = <2>;
131 +  cache-unified;
146 132  next-level-cache = <&L3_1>;
147 133  };
148 134  };
···
158 140  next-level-cache = <&L2_7>;
159 141  L2_7: l2-cache {
160 142  compatible = "cache";
143 +  cache-level = <2>;
144 +  cache-unified;
161 145  next-level-cache = <&L3_1>;
162 146  };
163 147  };
+8
arch/arm64/boot/dts/qcom/sc7180-lite.dtsi
···
16 16  &cpu6_opp12 {
17 17  opp-peak-kBps = <8532000 23347200>;
18 18  };
19 +
20 +  &cpu6_opp13 {
21 +  opp-peak-kBps = <8532000 23347200>;
22 +  };
23 +
24 +  &cpu6_opp14 {
25 +  opp-peak-kBps = <8532000 23347200>;
26 +  };
+9
arch/arm64/boot/dts/qcom/sc7180.dtsi
···
92 92  L2_0: l2-cache {
93 93  compatible = "cache";
94 94  cache-level = <2>;
95 +  cache-unified;
95 96  next-level-cache = <&L3_0>;
96 97  L3_0: l3-cache {
97 98  compatible = "cache";
98 99  cache-level = <3>;
100 +  cache-unified;
99 101  };
100 102  };
101 103  };
···
122 120  L2_100: l2-cache {
123 121  compatible = "cache";
124 122  cache-level = <2>;
123 +  cache-unified;
125 124  next-level-cache = <&L3_0>;
126 125  };
127 126  };
···
147 144  L2_200: l2-cache {
148 145  compatible = "cache";
149 146  cache-level = <2>;
147 +  cache-unified;
150 148  next-level-cache = <&L3_0>;
151 149  };
152 150  };
···
172 168  L2_300: l2-cache {
173 169  compatible = "cache";
174 170  cache-level = <2>;
171 +  cache-unified;
175 172  next-level-cache = <&L3_0>;
176 173  };
177 174  };
···
197 192  L2_400: l2-cache {
198 193  compatible = "cache";
199 194  cache-level = <2>;
195 +  cache-unified;
200 196  next-level-cache = <&L3_0>;
201 197  };
202 198  };
···
222 216  L2_500: l2-cache {
223 217  compatible = "cache";
224 218  cache-level = <2>;
219 +  cache-unified;
225 220  next-level-cache = <&L3_0>;
226 221  };
227 222  };
···
247 240  L2_600: l2-cache {
248 241  compatible = "cache";
249 242  cache-level = <2>;
243 +  cache-unified;
250 244  next-level-cache = <&L3_0>;
251 245  };
252 246  };
···
272 264  L2_700: l2-cache {
273 265  compatible = "cache";
274 266  cache-level = <2>;
267 +  cache-unified;
275 268  next-level-cache = <&L3_0>;
276 269  };
277 270  };
-2
arch/arm64/boot/dts/qcom/sc7280-idp.dtsi
···
480 480  wcd_rx: codec@0,4 {
481 481  compatible = "sdw20217010d00";
482 482  reg = <0 4>;
483 -  #sound-dai-cells = <1>;
484 483  qcom,rx-port-mapping = <1 2 3 4 5>;
485 484  };
486 485  };
···
490 491  wcd_tx: codec@0,3 {
491 492  compatible = "sdw20217010d00";
492 493  reg = <0 3>;
493 -  #sound-dai-cells = <1>;
494 494  qcom,tx-port-mapping = <1 2 3 4>;
495 495  };
496 496  };
-2
arch/arm64/boot/dts/qcom/sc7280-qcard.dtsi
···
414 414  wcd_rx: codec@0,4 {
415 415  compatible = "sdw20217010d00";
416 416  reg = <0 4>;
417 -  #sound-dai-cells = <1>;
418 417  qcom,rx-port-mapping = <1 2 3 4 5>;
419 418  };
420 419  };
···
422 423  wcd_tx: codec@0,3 {
423 424  compatible = "sdw20217010d00";
424 425  reg = <0 3>;
425 -  #sound-dai-cells = <1>;
426 426  qcom,tx-port-mapping = <1 2 3 4>;
427 427  };
428 428  };
+9
arch/arm64/boot/dts/qcom/sc7280.dtsi
···
182 182  L2_0: l2-cache {
183 183  compatible = "cache";
184 184  cache-level = <2>;
185 +  cache-unified;
185 186  next-level-cache = <&L3_0>;
186 187  L3_0: l3-cache {
187 188  compatible = "cache";
188 189  cache-level = <3>;
190 +  cache-unified;
189 191  };
190 192  };
191 193  };
···
210 208  L2_100: l2-cache {
211 209  compatible = "cache";
212 210  cache-level = <2>;
211 +  cache-unified;
213 212  next-level-cache = <&L3_0>;
214 213  };
215 214  };
···
233 230  L2_200: l2-cache {
234 231  compatible = "cache";
235 232  cache-level = <2>;
233 +  cache-unified;
236 234  next-level-cache = <&L3_0>;
237 235  };
238 236  };
···
256 252  L2_300: l2-cache {
257 253  compatible = "cache";
258 254  cache-level = <2>;
255 +  cache-unified;
259 256  next-level-cache = <&L3_0>;
260 257  };
261 258  };
···
279 274  L2_400: l2-cache {
280 275  compatible = "cache";
281 276  cache-level = <2>;
277 +  cache-unified;
282 278  next-level-cache = <&L3_0>;
283 279  };
284 280  };
···
302 296  L2_500: l2-cache {
303 297  compatible = "cache";
304 298  cache-level = <2>;
299 +  cache-unified;
305 300  next-level-cache = <&L3_0>;
306 301  };
307 302  };
···
325 318  L2_600: l2-cache {
326 319  compatible = "cache";
327 320  cache-level = <2>;
321 +  cache-unified;
328 322  next-level-cache = <&L3_0>;
329 323  };
330 324  };
···
348 340  L2_700: l2-cache {
349 341  compatible = "cache";
350 342  cache-level = <2>;
343 +  cache-unified;
351 344  next-level-cache = <&L3_0>;
352 345  };
353 346  };
+16 -2
arch/arm64/boot/dts/qcom/sc8280xp.dtsi
··· 58 58 L2_0: l2-cache { 59 59 compatible = "cache"; 60 60 cache-level = <2>; 61 + cache-unified; 61 62 next-level-cache = <&L3_0>; 62 63 L3_0: l3-cache { 63 - compatible = "cache"; 64 - cache-level = <3>; 64 + compatible = "cache"; 65 + cache-level = <3>; 66 + cache-unified; 65 67 }; 66 68 }; 67 69 }; ··· 85 83 L2_100: l2-cache { 86 84 compatible = "cache"; 87 85 cache-level = <2>; 86 + cache-unified; 88 87 next-level-cache = <&L3_0>; 89 88 }; 90 89 }; ··· 107 104 L2_200: l2-cache { 108 105 compatible = "cache"; 109 106 cache-level = <2>; 107 + cache-unified; 110 108 next-level-cache = <&L3_0>; 111 109 }; 112 110 }; ··· 129 125 L2_300: l2-cache { 130 126 compatible = "cache"; 131 127 cache-level = <2>; 128 + cache-unified; 132 129 next-level-cache = <&L3_0>; 133 130 }; 134 131 }; ··· 151 146 L2_400: l2-cache { 152 147 compatible = "cache"; 153 148 cache-level = <2>; 149 + cache-unified; 154 150 next-level-cache = <&L3_0>; 155 151 }; 156 152 }; ··· 173 167 L2_500: l2-cache { 174 168 compatible = "cache"; 175 169 cache-level = <2>; 170 + cache-unified; 176 171 next-level-cache = <&L3_0>; 177 172 }; 178 173 }; ··· 195 188 L2_600: l2-cache { 196 189 compatible = "cache"; 197 190 cache-level = <2>; 191 + cache-unified; 198 192 next-level-cache = <&L3_0>; 199 193 }; 200 194 }; ··· 217 209 L2_700: l2-cache { 218 210 compatible = "cache"; 219 211 cache-level = <2>; 212 + cache-unified; 220 213 next-level-cache = <&L3_0>; 221 214 }; 222 215 }; ··· 2735 2726 pins = "gpio7"; 2736 2727 function = "dmic1_data"; 2737 2728 drive-strength = <8>; 2729 + input-enable; 2738 2730 }; 2739 2731 }; 2740 2732 ··· 2753 2743 function = "dmic1_data"; 2754 2744 drive-strength = <2>; 2755 2745 bias-pull-down; 2746 + input-enable; 2756 2747 }; 2757 2748 }; 2758 2749 ··· 2769 2758 pins = "gpio9"; 2770 2759 function = "dmic2_data"; 2771 2760 drive-strength = <8>; 2761 + input-enable; 2772 2762 }; 2773 2763 }; 2774 2764 ··· 2787 2775 function = "dmic2_data"; 2788 2776 drive-strength = <2>; 2789 2777 bias-pull-down; 2778 + input-enable; 2790 2779 }; 2791 2780 }; 2792 2781 ··· 3995 3982 qcom,tcs-config = <ACTIVE_TCS 2>, <SLEEP_TCS 3>, 3996 3983 <WAKE_TCS 3>, <CONTROL_TCS 1>; 3997 3984 label = "apps_rsc"; 3985 + power-domains = <&CLUSTER_PD>; 3998 3986 3999 3987 apps_bcm_voter: bcm-voter { 4000 3988 compatible = "qcom,bcm-voter";
+2
arch/arm64/boot/dts/qcom/sdm630.dtsi
··· 63 63 L2_1: l2-cache { 64 64 compatible = "cache"; 65 65 cache-level = <2>; 66 + cache-unified; 66 67 }; 67 68 }; 68 69 ··· 128 127 L2_0: l2-cache { 129 128 compatible = "cache"; 130 129 cache-level = <2>; 130 + cache-unified; 131 131 }; 132 132 }; 133 133
+19 -1
arch/arm64/boot/dts/qcom/sdm670.dtsi
··· 41 41 L2_0: l2-cache { 42 42 compatible = "cache"; 43 43 next-level-cache = <&L3_0>; 44 + cache-level = <2>; 45 + cache-unified; 44 46 L3_0: l3-cache { 45 - compatible = "cache"; 47 + compatible = "cache"; 48 + cache-level = <3>; 49 + cache-unified; 46 50 }; 47 51 }; 48 52 }; ··· 61 57 next-level-cache = <&L2_100>; 62 58 L2_100: l2-cache { 63 59 compatible = "cache"; 60 + cache-level = <2>; 61 + cache-unified; 64 62 next-level-cache = <&L3_0>; 65 63 }; 66 64 }; ··· 77 71 next-level-cache = <&L2_200>; 78 72 L2_200: l2-cache { 79 73 compatible = "cache"; 74 + cache-level = <2>; 75 + cache-unified; 80 76 next-level-cache = <&L3_0>; 81 77 }; 82 78 }; ··· 93 85 next-level-cache = <&L2_300>; 94 86 L2_300: l2-cache { 95 87 compatible = "cache"; 88 + cache-level = <2>; 89 + cache-unified; 96 90 next-level-cache = <&L3_0>; 97 91 }; 98 92 }; ··· 109 99 next-level-cache = <&L2_400>; 110 100 L2_400: l2-cache { 111 101 compatible = "cache"; 102 + cache-level = <2>; 103 + cache-unified; 112 104 next-level-cache = <&L3_0>; 113 105 }; 114 106 }; ··· 125 113 next-level-cache = <&L2_500>; 126 114 L2_500: l2-cache { 127 115 compatible = "cache"; 116 + cache-level = <2>; 117 + cache-unified; 128 118 next-level-cache = <&L3_0>; 129 119 }; 130 120 }; ··· 141 127 next-level-cache = <&L2_600>; 142 128 L2_600: l2-cache { 143 129 compatible = "cache"; 130 + cache-level = <2>; 131 + cache-unified; 144 132 next-level-cache = <&L3_0>; 145 133 }; 146 134 }; ··· 157 141 next-level-cache = <&L2_700>; 158 142 L2_700: l2-cache { 159 143 compatible = "cache"; 144 + cache-level = <2>; 145 + cache-unified; 160 146 next-level-cache = <&L3_0>; 161 147 }; 162 148 };
+11 -2
arch/arm64/boot/dts/qcom/sdm845.dtsi
··· 108 108 L2_0: l2-cache { 109 109 compatible = "cache"; 110 110 cache-level = <2>; 111 + cache-unified; 111 112 next-level-cache = <&L3_0>; 112 113 L3_0: l3-cache { 113 - compatible = "cache"; 114 - cache-level = <3>; 114 + compatible = "cache"; 115 + cache-level = <3>; 116 + cache-unified; 115 117 }; 116 118 }; 117 119 }; ··· 137 135 L2_100: l2-cache { 138 136 compatible = "cache"; 139 137 cache-level = <2>; 138 + cache-unified; 140 139 next-level-cache = <&L3_0>; 141 140 }; 142 141 }; ··· 161 158 L2_200: l2-cache { 162 159 compatible = "cache"; 163 160 cache-level = <2>; 161 + cache-unified; 164 162 next-level-cache = <&L3_0>; 165 163 }; 166 164 }; ··· 185 181 L2_300: l2-cache { 186 182 compatible = "cache"; 187 183 cache-level = <2>; 184 + cache-unified; 188 185 next-level-cache = <&L3_0>; 189 186 }; 190 187 }; ··· 209 204 L2_400: l2-cache { 210 205 compatible = "cache"; 211 206 cache-level = <2>; 207 + cache-unified; 212 208 next-level-cache = <&L3_0>; 213 209 }; 214 210 }; ··· 233 227 L2_500: l2-cache { 234 228 compatible = "cache"; 235 229 cache-level = <2>; 230 + cache-unified; 236 231 next-level-cache = <&L3_0>; 237 232 }; 238 233 }; ··· 257 250 L2_600: l2-cache { 258 251 compatible = "cache"; 259 252 cache-level = <2>; 253 + cache-unified; 260 254 next-level-cache = <&L3_0>; 261 255 }; 262 256 }; ··· 281 273 L2_700: l2-cache { 282 274 compatible = "cache"; 283 275 cache-level = <2>; 276 + cache-unified; 284 277 next-level-cache = <&L3_0>; 285 278 }; 286 279 };
+2
arch/arm64/boot/dts/qcom/sm6115.dtsi
··· 50 50 L2_0: l2-cache { 51 51 compatible = "cache"; 52 52 cache-level = <2>; 53 + cache-unified; 53 54 }; 54 55 }; 55 56 ··· 103 102 L2_1: l2-cache { 104 103 compatible = "cache"; 105 104 cache-level = <2>; 105 + cache-unified; 106 106 }; 107 107 }; 108 108
+2
arch/arm64/boot/dts/qcom/sm6125.dtsi
··· 47 47 L2_0: l2-cache { 48 48 compatible = "cache"; 49 49 cache-level = <2>; 50 + cache-unified; 50 51 }; 51 52 }; 52 53 ··· 88 87 L2_1: l2-cache { 89 88 compatible = "cache"; 90 89 cache-level = <2>; 90 + cache-unified; 91 91 }; 92 92 }; 93 93
+9
arch/arm64/boot/dts/qcom/sm6350.dtsi
··· 60 60 L2_0: l2-cache { 61 61 compatible = "cache"; 62 62 cache-level = <2>; 63 + cache-unified; 63 64 next-level-cache = <&L3_0>; 64 65 L3_0: l3-cache { 65 66 compatible = "cache"; 66 67 cache-level = <3>; 68 + cache-unified; 67 69 }; 68 70 }; 69 71 }; ··· 88 86 L2_100: l2-cache { 89 87 compatible = "cache"; 90 88 cache-level = <2>; 89 + cache-unified; 91 90 next-level-cache = <&L3_0>; 92 91 }; 93 92 }; ··· 111 108 L2_200: l2-cache { 112 109 compatible = "cache"; 113 110 cache-level = <2>; 111 + cache-unified; 114 112 next-level-cache = <&L3_0>; 115 113 }; 116 114 }; ··· 134 130 L2_300: l2-cache { 135 131 compatible = "cache"; 136 132 cache-level = <2>; 133 + cache-unified; 137 134 next-level-cache = <&L3_0>; 138 135 }; 139 136 }; ··· 157 152 L2_400: l2-cache { 158 153 compatible = "cache"; 159 154 cache-level = <2>; 155 + cache-unified; 160 156 next-level-cache = <&L3_0>; 161 157 }; 162 158 }; ··· 180 174 L2_500: l2-cache { 181 175 compatible = "cache"; 182 176 cache-level = <2>; 177 + cache-unified; 183 178 next-level-cache = <&L3_0>; 184 179 }; 185 180 }; ··· 203 196 L2_600: l2-cache { 204 197 compatible = "cache"; 205 198 cache-level = <2>; 199 + cache-unified; 206 200 next-level-cache = <&L3_0>; 207 201 }; 208 202 }; ··· 226 218 L2_700: l2-cache { 227 219 compatible = "cache"; 228 220 cache-level = <2>; 221 + cache-unified; 229 222 next-level-cache = <&L3_0>; 230 223 }; 231 224 };
+2 -2
arch/arm64/boot/dts/qcom/sm6375-sony-xperia-murray-pdx225.dts
··· 178 178 }; 179 179 180 180 &remoteproc_adsp { 181 - firmware-name = "qcom/Sony/murray/adsp.mbn"; 181 + firmware-name = "qcom/sm6375/Sony/murray/adsp.mbn"; 182 182 status = "okay"; 183 183 }; 184 184 185 185 &remoteproc_cdsp { 186 - firmware-name = "qcom/Sony/murray/cdsp.mbn"; 186 + firmware-name = "qcom/sm6375/Sony/murray/cdsp.mbn"; 187 187 status = "okay"; 188 188 }; 189 189
+35 -17
arch/arm64/boot/dts/qcom/sm6375.dtsi
··· 48 48 power-domain-names = "psci"; 49 49 #cooling-cells = <2>; 50 50 L2_0: l2-cache { 51 - compatible = "cache"; 52 - next-level-cache = <&L3_0>; 51 + compatible = "cache"; 52 + cache-level = <2>; 53 + cache-unified; 54 + next-level-cache = <&L3_0>; 53 55 L3_0: l3-cache { 54 - compatible = "cache"; 56 + compatible = "cache"; 57 + cache-level = <3>; 58 + cache-unified; 55 59 }; 56 60 }; 57 61 }; ··· 72 68 power-domain-names = "psci"; 73 69 #cooling-cells = <2>; 74 70 L2_100: l2-cache { 75 - compatible = "cache"; 76 - next-level-cache = <&L3_0>; 71 + compatible = "cache"; 72 + cache-level = <2>; 73 + cache-unified; 74 + next-level-cache = <&L3_0>; 77 75 }; 78 76 }; ··· 91 85 power-domain-names = "psci"; 92 86 #cooling-cells = <2>; 93 87 L2_200: l2-cache { 94 - compatible = "cache"; 95 - next-level-cache = <&L3_0>; 88 + compatible = "cache"; 89 + cache-level = <2>; 90 + cache-unified; 91 + next-level-cache = <&L3_0>; 96 92 }; 97 93 }; ··· 110 102 power-domain-names = "psci"; 111 103 #cooling-cells = <2>; 112 104 L2_300: l2-cache { 113 - compatible = "cache"; 114 - next-level-cache = <&L3_0>; 105 + compatible = "cache"; 106 + cache-level = <2>; 107 + cache-unified; 108 + next-level-cache = <&L3_0>; 115 109 }; 116 110 }; ··· 129 119 power-domain-names = "psci"; 130 120 #cooling-cells = <2>; 131 121 L2_400: l2-cache { 132 - compatible = "cache"; 133 - next-level-cache = <&L3_0>; 122 + compatible = "cache"; 123 + cache-level = <2>; 124 + cache-unified; 125 + next-level-cache = <&L3_0>; 134 126 }; 135 127 }; ··· 148 136 power-domain-names = "psci"; 149 137 #cooling-cells = <2>; 150 138 L2_500: l2-cache { 151 - compatible = "cache"; 152 - next-level-cache = <&L3_0>; 139 + compatible = "cache"; 140 + cache-level = <2>; 141 + cache-unified; 142 + next-level-cache = <&L3_0>; 153 143 }; 154 144 }; ··· 167 153 power-domain-names = "psci"; 168 154 #cooling-cells = <2>; 169 155 L2_600: l2-cache { 170 - compatible = "cache"; 171 - next-level-cache = <&L3_0>; 156 + compatible = "cache"; 157 + cache-level = <2>; 158 + cache-unified; 159 + next-level-cache = <&L3_0>; 172 160 }; 173 161 }; ··· 186 170 power-domain-names = "psci"; 187 171 #cooling-cells = <2>; 188 172 L2_700: l2-cache { 189 - compatible = "cache"; 190 - next-level-cache = <&L3_0>; 173 + compatible = "cache"; 174 + cache-level = <2>; 175 + cache-unified; 176 + next-level-cache = <&L3_0>; 191 177 }; 192 178 }; 193 179
+11 -2
arch/arm64/boot/dts/qcom/sm8150.dtsi
··· 63 63 L2_0: l2-cache { 64 64 compatible = "cache"; 65 65 cache-level = <2>; 66 + cache-unified; 66 67 next-level-cache = <&L3_0>; 67 68 L3_0: l3-cache { 68 - compatible = "cache"; 69 - cache-level = <3>; 69 + compatible = "cache"; 70 + cache-level = <3>; 71 + cache-unified; 70 72 }; 71 73 }; 72 74 }; ··· 92 90 L2_100: l2-cache { 93 91 compatible = "cache"; 94 92 cache-level = <2>; 93 + cache-unified; 95 94 next-level-cache = <&L3_0>; 96 95 }; 97 96 }; ··· 116 113 L2_200: l2-cache { 117 114 compatible = "cache"; 118 115 cache-level = <2>; 116 + cache-unified; 119 117 next-level-cache = <&L3_0>; 120 118 }; 121 119 }; ··· 140 136 L2_300: l2-cache { 141 137 compatible = "cache"; 142 138 cache-level = <2>; 139 + cache-unified; 143 140 next-level-cache = <&L3_0>; 144 141 }; 145 142 }; ··· 164 159 L2_400: l2-cache { 165 160 compatible = "cache"; 166 161 cache-level = <2>; 162 + cache-unified; 167 163 next-level-cache = <&L3_0>; 168 164 }; 169 165 }; ··· 188 182 L2_500: l2-cache { 189 183 compatible = "cache"; 190 184 cache-level = <2>; 185 + cache-unified; 191 186 next-level-cache = <&L3_0>; 192 187 }; 193 188 }; ··· 212 205 L2_600: l2-cache { 213 206 compatible = "cache"; 214 207 cache-level = <2>; 208 + cache-unified; 215 209 next-level-cache = <&L3_0>; 216 210 }; 217 211 }; ··· 236 228 L2_700: l2-cache { 237 229 compatible = "cache"; 238 230 cache-level = <2>; 231 + cache-unified; 239 232 next-level-cache = <&L3_0>; 240 233 }; 241 234 };
+1 -1
arch/arm64/boot/dts/qcom/sm8250-xiaomi-elish-boe.dts
··· 13 13 }; 14 14 15 15 &display_panel { 16 - compatible = "xiaomi,elish-boe-nt36523"; 16 + compatible = "xiaomi,elish-boe-nt36523", "novatek,nt36523"; 17 17 status = "okay"; 18 18 };
+1 -1
arch/arm64/boot/dts/qcom/sm8250-xiaomi-elish-csot.dts
··· 13 13 }; 14 14 15 15 &display_panel { 16 - compatible = "xiaomi,elish-csot-nt36523"; 16 + compatible = "xiaomi,elish-csot-nt36523", "novatek,nt36523"; 17 17 status = "okay"; 18 18 };
+35 -26
arch/arm64/boot/dts/qcom/sm8350.dtsi
··· 58 58 power-domain-names = "psci"; 59 59 #cooling-cells = <2>; 60 60 L2_0: l2-cache { 61 - compatible = "cache"; 62 - cache-level = <2>; 63 - next-level-cache = <&L3_0>; 61 + compatible = "cache"; 62 + cache-level = <2>; 63 + cache-unified; 64 + next-level-cache = <&L3_0>; 64 65 L3_0: l3-cache { 65 - compatible = "cache"; 66 - cache-level = <3>; 66 + compatible = "cache"; 67 + cache-level = <3>; 68 + cache-unified; 67 69 }; 68 70 }; 69 71 }; ··· 82 80 power-domain-names = "psci"; 83 81 #cooling-cells = <2>; 84 82 L2_100: l2-cache { 85 - compatible = "cache"; 86 - cache-level = <2>; 87 - next-level-cache = <&L3_0>; 83 + compatible = "cache"; 84 + cache-level = <2>; 85 + cache-unified; 86 + next-level-cache = <&L3_0>; 88 87 }; 89 88 }; ··· 101 98 power-domain-names = "psci"; 102 99 #cooling-cells = <2>; 103 100 L2_200: l2-cache { 104 - compatible = "cache"; 105 - cache-level = <2>; 106 - next-level-cache = <&L3_0>; 101 + compatible = "cache"; 102 + cache-level = <2>; 103 + cache-unified; 104 + next-level-cache = <&L3_0>; 107 105 }; 108 106 }; ··· 120 116 power-domain-names = "psci"; 121 117 #cooling-cells = <2>; 122 118 L2_300: l2-cache { 123 - compatible = "cache"; 124 - cache-level = <2>; 125 - next-level-cache = <&L3_0>; 119 + compatible = "cache"; 120 + cache-level = <2>; 121 + cache-unified; 122 + next-level-cache = <&L3_0>; 126 123 }; 127 124 }; ··· 139 134 power-domain-names = "psci"; 140 135 #cooling-cells = <2>; 141 136 L2_400: l2-cache { 142 - compatible = "cache"; 143 - cache-level = <2>; 144 - next-level-cache = <&L3_0>; 137 + compatible = "cache"; 138 + cache-level = <2>; 139 + cache-unified; 140 + next-level-cache = <&L3_0>; 145 141 }; 146 142 }; ··· 158 152 power-domain-names = "psci"; 159 153 #cooling-cells = <2>; 160 154 L2_500: l2-cache { 161 - compatible = "cache"; 162 - cache-level = <2>; 163 - next-level-cache = <&L3_0>; 155 + compatible = "cache"; 156 + cache-level = <2>; 157 + cache-unified; 158 + next-level-cache = <&L3_0>; 164 159 }; 165 160 }; 166 161 ··· 177 170 power-domain-names = "psci"; 178 171 #cooling-cells = <2>; 179 172 L2_600: l2-cache { 180 - compatible = "cache"; 181 - cache-level = <2>; 182 - next-level-cache = <&L3_0>; 173 + compatible = "cache"; 174 + cache-level = <2>; 175 + cache-unified; 176 + next-level-cache = <&L3_0>; 183 177 }; 184 178 }; 185 179 ··· 196 188 power-domain-names = "psci"; 197 189 #cooling-cells = <2>; 198 190 L2_700: l2-cache { 199 - compatible = "cache"; 200 - cache-level = <2>; 201 - next-level-cache = <&L3_0>; 191 + compatible = "cache"; 192 + cache-level = <2>; 193 + cache-unified; 194 + next-level-cache = <&L3_0>; 202 195 }; 203 196 }; 204 197
+35 -26
arch/arm64/boot/dts/qcom/sm8450.dtsi
··· 57 57 #cooling-cells = <2>; 58 58 clocks = <&cpufreq_hw 0>; 59 59 L2_0: l2-cache { 60 - compatible = "cache"; 61 - cache-level = <2>; 62 - next-level-cache = <&L3_0>; 60 + compatible = "cache"; 61 + cache-level = <2>; 62 + cache-unified; 63 + next-level-cache = <&L3_0>; 63 64 L3_0: l3-cache { 64 - compatible = "cache"; 65 - cache-level = <3>; 65 + compatible = "cache"; 66 + cache-level = <3>; 67 + cache-unified; 66 68 }; 67 69 }; 68 70 }; ··· 81 79 #cooling-cells = <2>; 82 80 clocks = <&cpufreq_hw 0>; 83 81 L2_100: l2-cache { 84 - compatible = "cache"; 85 - cache-level = <2>; 86 - next-level-cache = <&L3_0>; 82 + compatible = "cache"; 83 + cache-level = <2>; 84 + cache-unified; 85 + next-level-cache = <&L3_0>; 87 86 }; 88 87 }; 89 88 ··· 100 97 #cooling-cells = <2>; 101 98 clocks = <&cpufreq_hw 0>; 102 99 L2_200: l2-cache { 103 - compatible = "cache"; 104 - cache-level = <2>; 105 - next-level-cache = <&L3_0>; 100 + compatible = "cache"; 101 + cache-level = <2>; 102 + cache-unified; 103 + next-level-cache = <&L3_0>; 106 104 }; 107 105 }; 108 106 ··· 119 115 #cooling-cells = <2>; 120 116 clocks = <&cpufreq_hw 0>; 121 117 L2_300: l2-cache { 122 - compatible = "cache"; 123 - cache-level = <2>; 124 - next-level-cache = <&L3_0>; 118 + compatible = "cache"; 119 + cache-level = <2>; 120 + cache-unified; 121 + next-level-cache = <&L3_0>; 125 122 }; 126 123 }; 127 124 ··· 138 133 #cooling-cells = <2>; 139 134 clocks = <&cpufreq_hw 1>; 140 135 L2_400: l2-cache { 141 - compatible = "cache"; 142 - cache-level = <2>; 143 - next-level-cache = <&L3_0>; 136 + compatible = "cache"; 137 + cache-level = <2>; 138 + cache-unified; 139 + next-level-cache = <&L3_0>; 144 140 }; 145 141 }; 146 142 ··· 157 151 #cooling-cells = <2>; 158 152 clocks = <&cpufreq_hw 1>; 159 153 L2_500: l2-cache { 160 - compatible = "cache"; 161 - cache-level = <2>; 162 - next-level-cache = <&L3_0>; 154 + compatible = "cache"; 155 + cache-level = <2>; 156 + cache-unified; 157 + next-level-cache = <&L3_0>; 163 158 }; 164 159 }; 165 160 ··· 176 169 #cooling-cells = <2>; 177 170 clocks = <&cpufreq_hw 1>; 178 171 L2_600: l2-cache { 179 - compatible = "cache"; 180 - cache-level = <2>; 181 - next-level-cache = <&L3_0>; 172 + compatible = "cache"; 173 + cache-level = <2>; 174 + cache-unified; 175 + next-level-cache = <&L3_0>; 182 176 }; 183 177 }; 184 178 ··· 195 187 #cooling-cells = <2>; 196 188 clocks = <&cpufreq_hw 2>; 197 189 L2_700: l2-cache { 198 - compatible = "cache"; 199 - cache-level = <2>; 200 - next-level-cache = <&L3_0>; 190 + compatible = "cache"; 191 + cache-level = <2>; 192 + cache-unified; 193 + next-level-cache = <&L3_0>; 201 194 }; 202 195 }; 203 196
+21 -5
arch/arm64/boot/dts/qcom/sm8550.dtsi
··· 80 80 L2_0: l2-cache { 81 81 compatible = "cache"; 82 82 cache-level = <2>; 83 + cache-unified; 83 84 next-level-cache = <&L3_0>; 84 85 L3_0: l3-cache { 85 86 compatible = "cache"; 86 87 cache-level = <3>; 88 + cache-unified; 87 89 }; 88 90 }; 89 91 }; ··· 106 104 L2_100: l2-cache { 107 105 compatible = "cache"; 108 106 cache-level = <2>; 107 + cache-unified; 109 108 next-level-cache = <&L3_0>; 110 109 }; 111 110 }; ··· 127 124 L2_200: l2-cache { 128 125 compatible = "cache"; 129 126 cache-level = <2>; 127 + cache-unified; 130 128 next-level-cache = <&L3_0>; 131 129 }; 132 130 }; ··· 148 144 L2_300: l2-cache { 149 145 compatible = "cache"; 150 146 cache-level = <2>; 147 + cache-unified; 151 148 next-level-cache = <&L3_0>; 152 149 }; 153 150 }; ··· 169 164 L2_400: l2-cache { 170 165 compatible = "cache"; 171 166 cache-level = <2>; 167 + cache-unified; 172 168 next-level-cache = <&L3_0>; 173 169 }; 174 170 }; ··· 190 184 L2_500: l2-cache { 191 185 compatible = "cache"; 192 186 cache-level = <2>; 187 + cache-unified; 193 188 next-level-cache = <&L3_0>; 194 189 }; 195 190 }; ··· 211 204 L2_600: l2-cache { 212 205 compatible = "cache"; 213 206 cache-level = <2>; 207 + cache-unified; 214 208 next-level-cache = <&L3_0>; 215 209 }; 216 210 }; ··· 232 224 L2_700: l2-cache { 233 225 compatible = "cache"; 234 226 cache-level = <2>; 227 + cache-unified; 235 228 next-level-cache = <&L3_0>; 236 229 }; 237 230 }; ··· 2031 2022 qcom,din-ports = <4>; 2032 2023 qcom,dout-ports = <9>; 2033 2024 2034 - qcom,ports-sinterval = <0x07 0x1f 0x3f 0x07 0x1f 0x3f 0x18f 0xff 0xff 0x0f 0x0f 0xff 0x31f>; 2025 + qcom,ports-sinterval = /bits/ 16 <0x07 0x1f 0x3f 0x07 0x1f 0x3f 0x18f 0xff 0xff 0x0f 0x0f 0xff 0x31f>; 2035 2026 qcom,ports-offset1 = /bits/ 8 <0x01 0x03 0x05 0x02 0x04 0x15 0x00 0xff 0xff 0x06 0x0d 0xff 0x00>; 2036 2027 qcom,ports-offset2 = /bits/ 8 <0xff 0x07 0x1f 0xff 0x07 0x1f 0xff 0xff 0xff 0xff 0xff 0xff 0xff>; 2037 2028 qcom,ports-hstart = /bits/ 8 <0xff 0xff 0xff 0xff 0xff 0xff 0x08 0xff 0xff 0xff 0xff 0xff 0x0f>; ··· 2077 2068 qcom,din-ports = <0>; 2078 2069 qcom,dout-ports = <10>; 2079 2070 2080 - qcom,ports-sinterval = <0x03 0x3f 0x1f 0x07 0x00 0x18f 0xff 0xff 0xff 0xff>; 2071 + qcom,ports-sinterval = /bits/ 16 <0x03 0x3f 0x1f 0x07 0x00 0x18f 0xff 0xff 0xff 0xff>; 2081 2072 qcom,ports-offset1 = /bits/ 8 <0x00 0x00 0x0b 0x01 0x00 0x00 0xff 0xff 0xff 0xff>; 2082 2073 qcom,ports-offset2 = /bits/ 8 <0x00 0x00 0x0b 0x00 0x00 0x00 0xff 0xff 0xff 0xff>; 2083 2074 qcom,ports-hstart = /bits/ 8 <0xff 0x03 0xff 0xff 0xff 0x08 0xff 0xff 0xff 0xff>; ··· 2142 2133 qcom,din-ports = <4>; 2143 2134 qcom,dout-ports = <9>; 2144 2135 2145 - qcom,ports-sinterval = <0x07 0x1f 0x3f 0x07 0x1f 0x3f 0x18f 0xff 0xff 0x0f 0x0f 0xff 0x31f>; 2136 + qcom,ports-sinterval = /bits/ 16 <0x07 0x1f 0x3f 0x07 0x1f 0x3f 0x18f 0xff 0xff 0x0f 0x0f 0xff 0x31f>; 2146 2137 qcom,ports-offset1 = /bits/ 8 <0x01 0x03 0x05 0x02 0x04 0x15 0x00 0xff 0xff 0x06 0x0d 0xff 0x00>; 2147 2138 qcom,ports-offset2 = /bits/ 8 <0xff 0x07 0x1f 0xff 0x07 0x1f 0xff 0xff 0xff 0xff 0xff 0xff 0xff>; 2148 2139 qcom,ports-hstart = /bits/ 8 <0xff 0xff 0xff 0xff 0xff 0xff 0x08 0xff 0xff 0xff 0xff 0xff 0x0f>; ··· 3771 3762 3772 3763 system-cache-controller@25000000 { 3773 3764 compatible = "qcom,sm8550-llcc"; 3774 - reg = <0 0x25000000 0 0x800000>, 3765 + reg = <0 0x25000000 0 0x200000>, 3766 + <0 0x25200000 0 0x200000>, 3767 + <0 0x25400000 0 0x200000>, 3768 + <0 0x25600000 0 0x200000>, 3775 3769 <0 0x25800000 0 0x200000>; 3770 + reg-names = "llcc0_base", 3771 + "llcc1_base", 3772 + "llcc2_base", 3773 + "llcc3_base", 3774 + "llcc_broadcast_base"; 3777 3775 interrupts = <GIC_SPI 266 IRQ_TYPE_LEVEL_HIGH>; 3778 3776 }; 3779 3777
+1 -2
arch/arm64/mm/fault.c
··· 600 600 vma_end_read(vma); 601 601 goto lock_mmap; 602 602 } 603 - fault = handle_mm_fault(vma, addr & PAGE_MASK, 604 - mm_flags | FAULT_FLAG_VMA_LOCK, regs); 603 + fault = handle_mm_fault(vma, addr, mm_flags | FAULT_FLAG_VMA_LOCK, regs); 605 604 vma_end_read(vma); 606 605 607 606 if (!(fault & VM_FAULT_RETRY)) {
+1 -1
arch/loongarch/include/asm/loongarch.h
··· 1496 1496 #define write_fcsr(dest, val) \ 1497 1497 do { \ 1498 1498 __asm__ __volatile__( \ 1499 - " movgr2fcsr %0, "__stringify(dest)" \n" \ 1499 + " movgr2fcsr "__stringify(dest)", %0 \n" \ 1500 1500 : : "r" (val)); \ 1501 1501 } while (0) 1502 1502
+2
arch/loongarch/include/asm/pgtable-bits.h
··· 22 22 #define _PAGE_PFN_SHIFT 12 23 23 #define _PAGE_SWP_EXCLUSIVE_SHIFT 23 24 24 #define _PAGE_PFN_END_SHIFT 48 25 + #define _PAGE_PRESENT_INVALID_SHIFT 60 25 26 #define _PAGE_NO_READ_SHIFT 61 26 27 #define _PAGE_NO_EXEC_SHIFT 62 27 28 #define _PAGE_RPLV_SHIFT 63 28 29 29 30 /* Used by software */ 30 31 #define _PAGE_PRESENT (_ULCAST_(1) << _PAGE_PRESENT_SHIFT) 32 + #define _PAGE_PRESENT_INVALID (_ULCAST_(1) << _PAGE_PRESENT_INVALID_SHIFT) 31 33 #define _PAGE_WRITE (_ULCAST_(1) << _PAGE_WRITE_SHIFT) 32 34 #define _PAGE_ACCESSED (_ULCAST_(1) << _PAGE_ACCESSED_SHIFT) 33 35 #define _PAGE_MODIFIED (_ULCAST_(1) << _PAGE_MODIFIED_SHIFT)
+2 -1
arch/loongarch/include/asm/pgtable.h
··· 213 213 static inline int pmd_present(pmd_t pmd) 214 214 { 215 215 if (unlikely(pmd_val(pmd) & _PAGE_HUGE)) 216 - return !!(pmd_val(pmd) & (_PAGE_PRESENT | _PAGE_PROTNONE)); 216 + return !!(pmd_val(pmd) & (_PAGE_PRESENT | _PAGE_PROTNONE | _PAGE_PRESENT_INVALID)); 217 217 218 218 return pmd_val(pmd) != (unsigned long)invalid_pte_table; 219 219 } ··· 558 558 559 559 static inline pmd_t pmd_mkinvalid(pmd_t pmd) 560 560 { 561 + pmd_val(pmd) |= _PAGE_PRESENT_INVALID; 561 562 pmd_val(pmd) &= ~(_PAGE_PRESENT | _PAGE_VALID | _PAGE_DIRTY | _PAGE_PROTNONE); 562 563 563 564 return pmd;
+2
arch/loongarch/kernel/hw_breakpoint.c
··· 396 396 397 397 if (hw->ctrl.type != LOONGARCH_BREAKPOINT_EXECUTE) 398 398 alignment_mask = 0x7; 399 + else 400 + alignment_mask = 0x3; 399 401 offset = hw->address & alignment_mask; 400 402 401 403 hw->address &= ~alignment_mask;
+3 -3
arch/loongarch/kernel/perf_event.c
··· 271 271 WARN_ON(idx < 0 || idx >= loongarch_pmu.num_counters); 272 272 273 273 /* Make sure interrupt enabled. */ 274 - cpuc->saved_ctrl[idx] = M_PERFCTL_EVENT(evt->event_base & 0xff) | 274 + cpuc->saved_ctrl[idx] = M_PERFCTL_EVENT(evt->event_base) | 275 275 (evt->config_base & M_PERFCTL_CONFIG_MASK) | CSR_PERFCTRL_IE; 276 276 277 277 cpu = (event->cpu >= 0) ? event->cpu : smp_processor_id(); ··· 594 594 595 595 static unsigned int loongarch_pmu_perf_event_encode(const struct loongarch_perf_event *pev) 596 596 { 597 - return (pev->event_id & 0xff); 597 + return M_PERFCTL_EVENT(pev->event_id); 598 598 } 599 599 600 600 static const struct loongarch_perf_event *loongarch_pmu_map_general_event(int idx) ··· 849 849 850 850 static const struct loongarch_perf_event *loongarch_pmu_map_raw_event(u64 config) 851 851 { 852 - raw_event.event_id = config & 0xff; 852 + raw_event.event_id = M_PERFCTL_EVENT(config); 853 853 854 854 return &raw_event; 855 855 }
+1 -1
arch/loongarch/kernel/unaligned.c
··· 485 485 struct dentry *d; 486 486 487 487 d = debugfs_create_dir("loongarch", NULL); 488 - if (!d) 488 + if (IS_ERR_OR_NULL(d)) 489 489 return -ENOMEM; 490 490 491 491 debugfs_create_u32("unaligned_instructions_user",
+1 -1
arch/nios2/boot/dts/10m50_devboard.dts
··· 97 97 rx-fifo-depth = <8192>; 98 98 tx-fifo-depth = <8192>; 99 99 address-bits = <48>; 100 - max-frame-size = <1518>; 100 + max-frame-size = <1500>; 101 101 local-mac-address = [00 00 00 00 00 00]; 102 102 altr,has-supplementary-unicast; 103 103 altr,enable-sup-addr = <1>;
+1 -1
arch/nios2/boot/dts/3c120_devboard.dts
··· 106 106 interrupt-names = "rx_irq", "tx_irq"; 107 107 rx-fifo-depth = <8192>; 108 108 tx-fifo-depth = <8192>; 109 - max-frame-size = <1518>; 109 + max-frame-size = <1500>; 110 110 local-mac-address = [ 00 00 00 00 00 00 ]; 111 111 phy-mode = "rgmii-id"; 112 112 phy-handle = <&phy0>;
+5
arch/powerpc/purgatory/Makefile
··· 5 5 6 6 targets += trampoline_$(BITS).o purgatory.ro 7 7 8 + # When profile-guided optimization is enabled, llvm emits two different 9 + # overlapping text sections, which is not supported by kexec. Remove profile 10 + # optimization flags. 11 + KBUILD_CFLAGS := $(filter-out -fprofile-sample-use=% -fprofile-use=%,$(KBUILD_CFLAGS)) 12 + 8 13 LDFLAGS_purgatory.ro := -e purgatory_start -r --no-undefined 9 14 10 15 $(obj)/purgatory.ro: $(obj)/trampoline_$(BITS).o FORCE
+1
arch/riscv/Kconfig
··· 26 26 select ARCH_HAS_GIGANTIC_PAGE 27 27 select ARCH_HAS_KCOV 28 28 select ARCH_HAS_MMIOWB 29 + select ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE 29 30 select ARCH_HAS_PMEM_API 30 31 select ARCH_HAS_PTE_SPECIAL 31 32 select ARCH_HAS_SET_DIRECT_MAP if MMU
-33
arch/riscv/include/asm/kfence.h
··· 8 8 #include <asm-generic/pgalloc.h> 9 9 #include <asm/pgtable.h> 10 10 11 - static inline int split_pmd_page(unsigned long addr) 12 - { 13 - int i; 14 - unsigned long pfn = PFN_DOWN(__pa((addr & PMD_MASK))); 15 - pmd_t *pmd = pmd_off_k(addr); 16 - pte_t *pte = pte_alloc_one_kernel(&init_mm); 17 - 18 - if (!pte) 19 - return -ENOMEM; 20 - 21 - for (i = 0; i < PTRS_PER_PTE; i++) 22 - set_pte(pte + i, pfn_pte(pfn + i, PAGE_KERNEL)); 23 - set_pmd(pmd, pfn_pmd(PFN_DOWN(__pa(pte)), PAGE_TABLE)); 24 - 25 - flush_tlb_kernel_range(addr, addr + PMD_SIZE); 26 - return 0; 27 - } 28 - 29 11 static inline bool arch_kfence_init_pool(void) 30 12 { 31 - int ret; 32 - unsigned long addr; 33 - pmd_t *pmd; 34 - 35 - for (addr = (unsigned long)__kfence_pool; is_kfence_address((void *)addr); 36 - addr += PAGE_SIZE) { 37 - pmd = pmd_off_k(addr); 38 - 39 - if (pmd_leaf(*pmd)) { 40 - ret = split_pmd_page(addr); 41 - if (ret) 42 - return false; 43 - } 44 - } 45 - 46 13 return true; 47 14 } 48 15
+1 -2
arch/riscv/include/asm/pgtable.h
··· 165 165 _PAGE_EXEC | _PAGE_WRITE) 166 166 167 167 #define PAGE_COPY PAGE_READ 168 - #define PAGE_COPY_EXEC PAGE_EXEC 169 - #define PAGE_COPY_READ_EXEC PAGE_READ_EXEC 168 + #define PAGE_COPY_EXEC PAGE_READ_EXEC 170 169 #define PAGE_SHARED PAGE_WRITE 171 170 #define PAGE_SHARED_EXEC PAGE_WRITE_EXEC 172 171
+37 -11
arch/riscv/mm/init.c
··· 23 23 #ifdef CONFIG_RELOCATABLE 24 24 #include <linux/elf.h> 25 25 #endif 26 + #include <linux/kfence.h> 26 27 27 28 #include <asm/fixmap.h> 28 29 #include <asm/tlbflush.h> ··· 294 293 [VM_EXEC] = PAGE_EXEC, 295 294 [VM_EXEC | VM_READ] = PAGE_READ_EXEC, 296 295 [VM_EXEC | VM_WRITE] = PAGE_COPY_EXEC, 297 - [VM_EXEC | VM_WRITE | VM_READ] = PAGE_COPY_READ_EXEC, 296 + [VM_EXEC | VM_WRITE | VM_READ] = PAGE_COPY_EXEC, 298 297 [VM_SHARED] = PAGE_NONE, 299 298 [VM_SHARED | VM_READ] = PAGE_READ, 300 299 [VM_SHARED | VM_WRITE] = PAGE_SHARED, ··· 660 659 create_pgd_next_mapping(nextp, va, pa, sz, prot); 661 660 } 662 661 663 - static uintptr_t __init best_map_size(phys_addr_t base, phys_addr_t size) 662 + static uintptr_t __init best_map_size(phys_addr_t pa, uintptr_t va, 663 + phys_addr_t size) 664 664 { 665 - if (!(base & (PGDIR_SIZE - 1)) && size >= PGDIR_SIZE) 665 + if (!(pa & (PGDIR_SIZE - 1)) && !(va & (PGDIR_SIZE - 1)) && size >= PGDIR_SIZE) 666 666 return PGDIR_SIZE; 667 667 668 - if (!(base & (P4D_SIZE - 1)) && size >= P4D_SIZE) 668 + if (!(pa & (P4D_SIZE - 1)) && !(va & (P4D_SIZE - 1)) && size >= P4D_SIZE) 669 669 return P4D_SIZE; 670 670 671 - if (!(base & (PUD_SIZE - 1)) && size >= PUD_SIZE) 671 + if (!(pa & (PUD_SIZE - 1)) && !(va & (PUD_SIZE - 1)) && size >= PUD_SIZE) 672 672 return PUD_SIZE; 673 673 674 - if (!(base & (PMD_SIZE - 1)) && size >= PMD_SIZE) 674 + if (!(pa & (PMD_SIZE - 1)) && !(va & (PMD_SIZE - 1)) && size >= PMD_SIZE) 675 675 return PMD_SIZE; 676 676 677 677 return PAGE_SIZE; ··· 1169 1167 } 1170 1168 1171 1169 static void __init create_linear_mapping_range(phys_addr_t start, 1172 - phys_addr_t end) 1170 + phys_addr_t end, 1171 + uintptr_t fixed_map_size) 1173 1172 { 1174 1173 phys_addr_t pa; 1175 1174 uintptr_t va, map_size; 1176 1175 1177 1176 for (pa = start; pa < end; pa += map_size) { 1178 1177 va = (uintptr_t)__va(pa); 1179 - map_size = best_map_size(pa, end - pa); 1178 + map_size = fixed_map_size ? 
fixed_map_size : 1179 + best_map_size(pa, va, end - pa); 1180 1180 1181 1181 create_pgd_mapping(swapper_pg_dir, va, pa, map_size, 1182 1182 pgprot_from_va(va)); ··· 1188 1184 static void __init create_linear_mapping_page_table(void) 1189 1185 { 1190 1186 phys_addr_t start, end; 1187 + phys_addr_t kfence_pool __maybe_unused; 1191 1188 u64 i; 1192 1189 1193 1190 #ifdef CONFIG_STRICT_KERNEL_RWX ··· 1202 1197 memblock_mark_nomap(krodata_start, krodata_size); 1203 1198 #endif 1204 1199 1200 + #ifdef CONFIG_KFENCE 1201 + /* 1202 + * kfence pool must be backed by PAGE_SIZE mappings, so allocate it 1203 + * before we setup the linear mapping so that we avoid using hugepages 1204 + * for this region. 1205 + */ 1206 + kfence_pool = memblock_phys_alloc(KFENCE_POOL_SIZE, PAGE_SIZE); 1207 + BUG_ON(!kfence_pool); 1208 + 1209 + memblock_mark_nomap(kfence_pool, KFENCE_POOL_SIZE); 1210 + __kfence_pool = __va(kfence_pool); 1211 + #endif 1212 + 1205 1213 /* Map all memory banks in the linear mapping */ 1206 1214 for_each_mem_range(i, &start, &end) { 1207 1215 if (start >= end) ··· 1225 1207 if (end >= __pa(PAGE_OFFSET) + memory_limit) 1226 1208 end = __pa(PAGE_OFFSET) + memory_limit; 1227 1209 1228 - create_linear_mapping_range(start, end); 1210 + create_linear_mapping_range(start, end, 0); 1229 1211 } 1230 1212 1231 1213 #ifdef CONFIG_STRICT_KERNEL_RWX 1232 - create_linear_mapping_range(ktext_start, ktext_start + ktext_size); 1214 + create_linear_mapping_range(ktext_start, ktext_start + ktext_size, 0); 1233 1215 create_linear_mapping_range(krodata_start, 1234 - krodata_start + krodata_size); 1216 + krodata_start + krodata_size, 0); 1235 1217 1236 1218 memblock_clear_nomap(ktext_start, ktext_size); 1237 1219 memblock_clear_nomap(krodata_start, krodata_size); 1220 + #endif 1221 + 1222 + #ifdef CONFIG_KFENCE 1223 + create_linear_mapping_range(kfence_pool, 1224 + kfence_pool + KFENCE_POOL_SIZE, 1225 + PAGE_SIZE); 1226 + 1227 + memblock_clear_nomap(kfence_pool, KFENCE_POOL_SIZE); 1238 
1228 #endif 1239 1229 } 1240 1230
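The riscv hunk above extends best_map_size() so that a hugepage-sized linear mapping is only chosen when the physical address, the virtual address, and the remaining size all permit it. A minimal userspace sketch of that idea (illustrative size table, not the real Sv39/Sv48 geometry and not the kernel function itself):

```c
#include <stdint.h>

#define PAGE_SIZE 0x1000UL      /* illustrative 4 KiB */
#define PMD_SIZE  0x200000UL    /* illustrative 2 MiB */
#define PUD_SIZE  0x40000000UL  /* illustrative 1 GiB */

/* A block mapping of a given size is usable only when BOTH the
 * physical and the virtual address are aligned to it and the
 * remaining size covers it; checking pa alone (the old code) is
 * not enough, since a misaligned va would misplace the mapping. */
static uint64_t best_map_size(uint64_t pa, uint64_t va, uint64_t size)
{
	if (!(pa & (PUD_SIZE - 1)) && !(va & (PUD_SIZE - 1)) && size >= PUD_SIZE)
		return PUD_SIZE;
	if (!(pa & (PMD_SIZE - 1)) && !(va & (PMD_SIZE - 1)) && size >= PMD_SIZE)
		return PMD_SIZE;
	return PAGE_SIZE;
}
```

The same gating is why the kfence pool in the hunk is mapped with a fixed PAGE_SIZE: kfence needs page-granular protection, so hugepage candidates must be bypassed for that region.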
+5
arch/riscv/purgatory/Makefile
··· 35 35 CFLAGS_string.o := -D__DISABLE_EXPORTS 36 36 CFLAGS_ctype.o := -D__DISABLE_EXPORTS 37 37 38 + # When profile-guided optimization is enabled, llvm emits two different 39 + # overlapping text sections, which is not supported by kexec. Remove profile 40 + # optimization flags. 41 + KBUILD_CFLAGS := $(filter-out -fprofile-sample-use=% -fprofile-use=%,$(KBUILD_CFLAGS)) 42 + 38 43 # When linking purgatory.ro with -r unresolved symbols are not checked, 39 44 # also link a purgatory.chk binary without -r to check for unresolved symbols. 40 45 PURGATORY_LDFLAGS := -e purgatory_start -z nodefaultlib
+1
arch/s390/purgatory/Makefile
··· 26 26 KBUILD_CFLAGS += -fno-zero-initialized-in-bss -fno-builtin -ffreestanding 27 27 KBUILD_CFLAGS += -Os -m64 -msoft-float -fno-common 28 28 KBUILD_CFLAGS += -fno-stack-protector 29 + KBUILD_CFLAGS += -DDISABLE_BRANCH_PROFILING 29 30 KBUILD_CFLAGS += $(CLANG_FLAGS) 30 31 KBUILD_CFLAGS += $(call cc-option,-fno-PIE) 31 32 KBUILD_AFLAGS := $(filter-out -DCC_USING_EXPOLINE,$(KBUILD_AFLAGS))
+9 -9
arch/x86/kernel/head_64.S
··· 77 77 call startup_64_setup_env 78 78 popq %rsi 79 79 80 + /* Now switch to __KERNEL_CS so IRET works reliably */ 81 + pushq $__KERNEL_CS 82 + leaq .Lon_kernel_cs(%rip), %rax 83 + pushq %rax 84 + lretq 85 + 86 + .Lon_kernel_cs: 87 + UNWIND_HINT_END_OF_STACK 88 + 80 89 #ifdef CONFIG_AMD_MEM_ENCRYPT 81 90 /* 82 91 * Activate SEV/SME memory encryption if supported/enabled. This needs to ··· 98 89 call sme_enable 99 90 popq %rsi 100 91 #endif 101 - 102 - /* Now switch to __KERNEL_CS so IRET works reliably */ 103 - pushq $__KERNEL_CS 104 - leaq .Lon_kernel_cs(%rip), %rax 105 - pushq %rax 106 - lretq 107 - 108 - .Lon_kernel_cs: 109 - UNWIND_HINT_END_OF_STACK 110 92 111 93 /* Sanitize CPU configuration */ 112 94 call verify_cpu
+5
arch/x86/purgatory/Makefile
··· 14 14 15 15 CFLAGS_sha256.o := -D__DISABLE_EXPORTS 16 16 17 + # When profile-guided optimization is enabled, llvm emits two different 18 + # overlapping text sections, which is not supported by kexec. Remove profile 19 + # optimization flags. 20 + KBUILD_CFLAGS := $(filter-out -fprofile-sample-use=% -fprofile-use=%,$(KBUILD_CFLAGS)) 21 + 17 22 # When linking purgatory.ro with -r unresolved symbols are not checked, 18 23 # also link a purgatory.chk binary without -r to check for unresolved symbols. 19 24 PURGATORY_LDFLAGS := -e purgatory_start -z nodefaultlib
+4 -4
block/blk-mq.c
··· 683 683 blk_crypto_free_request(rq); 684 684 blk_pm_mark_last_busy(rq); 685 685 rq->mq_hctx = NULL; 686 + 687 + if (rq->rq_flags & RQF_MQ_INFLIGHT) 688 + __blk_mq_dec_active_requests(hctx); 689 + 686 690 if (rq->tag != BLK_MQ_NO_TAG) 687 691 blk_mq_put_tag(hctx->tags, ctx, rq->tag); 688 692 if (sched_tag != BLK_MQ_NO_TAG) ··· 698 694 void blk_mq_free_request(struct request *rq) 699 695 { 700 696 struct request_queue *q = rq->q; 701 - struct blk_mq_hw_ctx *hctx = rq->mq_hctx; 702 697 703 698 if ((rq->rq_flags & RQF_ELVPRIV) && 704 699 q->elevator->type->ops.finish_request) 705 700 q->elevator->type->ops.finish_request(rq); 706 - 707 - if (rq->rq_flags & RQF_MQ_INFLIGHT) 708 - __blk_mq_dec_active_requests(hctx); 709 701 710 702 if (unlikely(laptop_mode && !blk_rq_is_passthrough(rq))) 711 703 laptop_io_completion(q->disk->bdi);
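The blk-mq hunk above moves the RQF_MQ_INFLIGHT decrement into __blk_mq_free_request(), where the hctx pointer is still cached locally before rq->mq_hctx is cleared. A toy sketch of the ordering this relies on (illustrative structs, not the real struct request):

```c
#include <stddef.h>

struct toy_hctx { int active; };
struct toy_rq { struct toy_hctx *mq_hctx; int inflight; };

/* Mirror of the moved hunk's ordering: cache the back-pointer,
 * clear it, then decrement through the cached copy; a reader that
 * re-fetched rq->mq_hctx afterwards would dereference NULL. */
static void toy_free_request(struct toy_rq *rq)
{
	struct toy_hctx *hctx = rq->mq_hctx;

	rq->mq_hctx = NULL;
	if (rq->inflight)
		hctx->active--; /* safe: hctx was cached above */
}

/* Returns the active count left after freeing one in-flight request. */
static int toy_demo(void)
{
	struct toy_hctx h = { 1 };
	struct toy_rq r = { &h, 1 };

	toy_free_request(&r);
	return h.active;
}
```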
+1
drivers/accel/ivpu/Kconfig
··· 7 7 depends on PCI && PCI_MSI 8 8 select FW_LOADER 9 9 select SHMEM 10 + select GENERIC_ALLOCATOR 10 11 help 11 12 Choose this option if you have a system that has an 14th generation Intel CPU 12 13 or newer. VPU stands for Versatile Processing Unit and it's a CPU-integrated
+17 -5
drivers/accel/ivpu/ivpu_hw_mtl.c
··· 197 197 hw->pll.pn_ratio = clamp_t(u8, fuse_pn_ratio, hw->pll.min_ratio, hw->pll.max_ratio); 198 198 } 199 199 200 + static int ivpu_hw_mtl_wait_for_vpuip_bar(struct ivpu_device *vdev) 201 + { 202 + return REGV_POLL_FLD(MTL_VPU_HOST_SS_CPR_RST_CLR, AON, 0, 100); 203 + } 204 + 200 205 static int ivpu_pll_drive(struct ivpu_device *vdev, bool enable) 201 206 { 202 207 struct ivpu_hw_info *hw = vdev->hw; ··· 244 239 ivpu_err(vdev, "Timed out waiting for PLL ready status\n"); 245 240 return ret; 246 241 } 242 + 243 + ret = ivpu_hw_mtl_wait_for_vpuip_bar(vdev); 244 + if (ret) { 245 + ivpu_err(vdev, "Timed out waiting for VPUIP bar\n"); 246 + return ret; 247 + } 247 248 } 248 249 249 250 return 0; ··· 267 256 268 257 static void ivpu_boot_host_ss_rst_clr_assert(struct ivpu_device *vdev) 269 258 { 270 - u32 val = REGV_RD32(MTL_VPU_HOST_SS_CPR_RST_CLR); 259 + u32 val = 0; 271 260 272 261 val = REG_SET_FLD(MTL_VPU_HOST_SS_CPR_RST_CLR, TOP_NOC, val); 273 262 val = REG_SET_FLD(MTL_VPU_HOST_SS_CPR_RST_CLR, DSS_MAS, val); ··· 765 754 { 766 755 int ret = 0; 767 756 768 - if (ivpu_hw_mtl_reset(vdev)) { 757 + if (!ivpu_hw_mtl_is_idle(vdev) && ivpu_hw_mtl_reset(vdev)) { 769 758 ivpu_err(vdev, "Failed to reset the VPU\n"); 770 - ret = -EIO; 771 759 } 772 760 773 761 if (ivpu_pll_disable(vdev)) { ··· 774 764 ret = -EIO; 775 765 } 776 766 777 - if (ivpu_hw_mtl_d0i3_enable(vdev)) 778 - ivpu_warn(vdev, "Failed to enable D0I3\n"); 767 + if (ivpu_hw_mtl_d0i3_enable(vdev)) { 768 + ivpu_err(vdev, "Failed to enter D0I3\n"); 769 + ret = -EIO; 770 + } 779 771 780 772 return ret; 781 773 }
+1
drivers/accel/ivpu/ivpu_hw_mtl_reg.h
··· 91 91 #define MTL_VPU_HOST_SS_CPR_RST_SET_MSS_MAS_MASK BIT_MASK(11) 92 92 93 93 #define MTL_VPU_HOST_SS_CPR_RST_CLR 0x00000098u 94 + #define MTL_VPU_HOST_SS_CPR_RST_CLR_AON_MASK BIT_MASK(0) 94 95 #define MTL_VPU_HOST_SS_CPR_RST_CLR_TOP_NOC_MASK BIT_MASK(1) 95 96 #define MTL_VPU_HOST_SS_CPR_RST_CLR_DSS_MAS_MASK BIT_MASK(10) 96 97 #define MTL_VPU_HOST_SS_CPR_RST_CLR_MSS_MAS_MASK BIT_MASK(11)
+1 -3
drivers/accel/ivpu/ivpu_ipc.c
··· 183 183 struct ivpu_ipc_info *ipc = vdev->ipc; 184 184 int ret; 185 185 186 - ret = mutex_lock_interruptible(&ipc->lock); 187 - if (ret) 188 - return ret; 186 + mutex_lock(&ipc->lock); 189 187 190 188 if (!ipc->on) { 191 189 ret = -EAGAIN;
+14 -7
drivers/accel/ivpu/ivpu_job.c
··· 431 431 struct ivpu_file_priv *file_priv = file->driver_priv; 432 432 struct ivpu_device *vdev = file_priv->vdev; 433 433 struct ww_acquire_ctx acquire_ctx; 434 + enum dma_resv_usage usage; 434 435 struct ivpu_bo *bo; 435 436 int ret; 436 437 u32 i; ··· 462 461 463 462 job->cmd_buf_vpu_addr = bo->vpu_addr + commands_offset; 464 463 465 - ret = drm_gem_lock_reservations((struct drm_gem_object **)job->bos, 1, &acquire_ctx); 464 + ret = drm_gem_lock_reservations((struct drm_gem_object **)job->bos, buf_count, 465 + &acquire_ctx); 466 466 if (ret) { 467 467 ivpu_warn(vdev, "Failed to lock reservations: %d\n", ret); 468 468 return ret; 469 469 } 470 470 471 - ret = dma_resv_reserve_fences(bo->base.resv, 1); 472 - if (ret) { 473 - ivpu_warn(vdev, "Failed to reserve fences: %d\n", ret); 474 - goto unlock_reservations; 471 + for (i = 0; i < buf_count; i++) { 472 + ret = dma_resv_reserve_fences(job->bos[i]->base.resv, 1); 473 + if (ret) { 474 + ivpu_warn(vdev, "Failed to reserve fences: %d\n", ret); 475 + goto unlock_reservations; 476 + } 475 477 } 476 478 477 - dma_resv_add_fence(bo->base.resv, job->done_fence, DMA_RESV_USAGE_WRITE); 479 + for (i = 0; i < buf_count; i++) { 480 + usage = (i == CMD_BUF_IDX) ? DMA_RESV_USAGE_WRITE : DMA_RESV_USAGE_BOOKKEEP; 481 + dma_resv_add_fence(job->bos[i]->base.resv, job->done_fence, usage); 482 + } 478 483 479 484 unlock_reservations: 480 - drm_gem_unlock_reservations((struct drm_gem_object **)job->bos, 1, &acquire_ctx); 485 + drm_gem_unlock_reservations((struct drm_gem_object **)job->bos, buf_count, &acquire_ctx); 481 486 482 487 wmb(); /* Flush write combining buffers */ 483 488
+6 -16
drivers/accel/ivpu/ivpu_mmu.c
··· 587 587 int ivpu_mmu_invalidate_tlb(struct ivpu_device *vdev, u16 ssid) 588 588 { 589 589 struct ivpu_mmu_info *mmu = vdev->mmu; 590 - int ret; 590 + int ret = 0; 591 591 592 - ret = mutex_lock_interruptible(&mmu->lock); 593 - if (ret) 594 - return ret; 595 - 596 - if (!mmu->on) { 597 - ret = 0; 592 + mutex_lock(&mmu->lock); 593 + if (!mmu->on) 598 594 goto unlock; 599 - } 600 595 601 596 ret = ivpu_mmu_cmdq_write_tlbi_nh_asid(vdev, ssid); 602 597 if (ret) ··· 609 614 struct ivpu_mmu_cdtab *cdtab = &mmu->cdtab; 610 615 u64 *entry; 611 616 u64 cd[4]; 612 - int ret; 617 + int ret = 0; 613 618 614 619 if (ssid > IVPU_MMU_CDTAB_ENT_COUNT) 615 620 return -EINVAL; ··· 650 655 ivpu_dbg(vdev, MMU, "CDTAB %s entry (SSID=%u, dma=%pad): 0x%llx, 0x%llx, 0x%llx, 0x%llx\n", 651 656 cd_dma ? "write" : "clear", ssid, &cd_dma, cd[0], cd[1], cd[2], cd[3]); 652 657 653 - ret = mutex_lock_interruptible(&mmu->lock); 654 - if (ret) 655 - return ret; 656 - 657 - if (!mmu->on) { 658 - ret = 0; 658 + mutex_lock(&mmu->lock); 659 + if (!mmu->on) 659 660 goto unlock; 660 - } 661 661 662 662 ret = ivpu_mmu_cmdq_write_cfgi_all(vdev); 663 663 if (ret)
+3
drivers/base/regmap/regcache.c
··· 284 284 { 285 285 int ret; 286 286 287 + if (!regmap_writeable(map, reg)) 288 + return false; 289 + 287 290 /* If we don't know the chip just got reset, then sync everything. */ 288 291 if (!map->no_sync_defaults) 289 292 return true;
+1
drivers/block/null_blk/main.c
··· 2244 2244 struct nullb_device *dev = nullb->dev; 2245 2245 2246 2246 null_del_dev(nullb); 2247 + null_free_device_storage(dev, false); 2247 2248 null_free_dev(dev); 2248 2249 } 2249 2250
+44 -18
drivers/block/rbd.c
··· 1334 1334 /* 1335 1335 * Must be called after rbd_obj_calc_img_extents(). 1336 1336 */ 1337 - static bool rbd_obj_copyup_enabled(struct rbd_obj_request *obj_req) 1337 + static void rbd_obj_set_copyup_enabled(struct rbd_obj_request *obj_req) 1338 1338 { 1339 - if (!obj_req->num_img_extents || 1340 - (rbd_obj_is_entire(obj_req) && 1341 - !obj_req->img_request->snapc->num_snaps)) 1342 - return false; 1339 + rbd_assert(obj_req->img_request->snapc); 1343 1340 1344 - return true; 1341 + if (obj_req->img_request->op_type == OBJ_OP_DISCARD) { 1342 + dout("%s %p objno %llu discard\n", __func__, obj_req, 1343 + obj_req->ex.oe_objno); 1344 + return; 1345 + } 1346 + 1347 + if (!obj_req->num_img_extents) { 1348 + dout("%s %p objno %llu not overlapping\n", __func__, obj_req, 1349 + obj_req->ex.oe_objno); 1350 + return; 1351 + } 1352 + 1353 + if (rbd_obj_is_entire(obj_req) && 1354 + !obj_req->img_request->snapc->num_snaps) { 1355 + dout("%s %p objno %llu entire\n", __func__, obj_req, 1356 + obj_req->ex.oe_objno); 1357 + return; 1358 + } 1359 + 1360 + obj_req->flags |= RBD_OBJ_FLAG_COPYUP_ENABLED; 1345 1361 } 1346 1362 1347 1363 static u64 rbd_obj_img_extents_bytes(struct rbd_obj_request *obj_req) ··· 1458 1442 static struct ceph_osd_request * 1459 1443 rbd_obj_add_osd_request(struct rbd_obj_request *obj_req, int num_ops) 1460 1444 { 1445 + rbd_assert(obj_req->img_request->snapc); 1461 1446 return __rbd_obj_add_osd_request(obj_req, obj_req->img_request->snapc, 1462 1447 num_ops); 1463 1448 } ··· 1595 1578 mutex_init(&img_request->state_mutex); 1596 1579 } 1597 1580 1581 + /* 1582 + * Only snap_id is captured here, for reads. For writes, snapshot 1583 + * context is captured in rbd_img_object_requests() after exclusive 1584 + * lock is ensured to be held. 
1585 + */ 1598 1586 static void rbd_img_capture_header(struct rbd_img_request *img_req) 1599 1587 { 1600 1588 struct rbd_device *rbd_dev = img_req->rbd_dev; 1601 1589 1602 1590 lockdep_assert_held(&rbd_dev->header_rwsem); 1603 1591 1604 - if (rbd_img_is_write(img_req)) 1605 - img_req->snapc = ceph_get_snap_context(rbd_dev->header.snapc); 1606 - else 1592 + if (!rbd_img_is_write(img_req)) 1607 1593 img_req->snap_id = rbd_dev->spec->snap_id; 1608 1594 1609 1595 if (rbd_dev_parent_get(rbd_dev)) ··· 2253 2233 if (ret) 2254 2234 return ret; 2255 2235 2256 - if (rbd_obj_copyup_enabled(obj_req)) 2257 - obj_req->flags |= RBD_OBJ_FLAG_COPYUP_ENABLED; 2258 - 2259 2236 obj_req->write_state = RBD_OBJ_WRITE_START; 2260 2237 return 0; 2261 2238 } ··· 2358 2341 if (ret) 2359 2342 return ret; 2360 2343 2361 - if (rbd_obj_copyup_enabled(obj_req)) 2362 - obj_req->flags |= RBD_OBJ_FLAG_COPYUP_ENABLED; 2363 2344 if (!obj_req->num_img_extents) { 2364 2345 obj_req->flags |= RBD_OBJ_FLAG_NOOP_FOR_NONEXISTENT; 2365 2346 if (rbd_obj_is_entire(obj_req)) ··· 3301 3286 case RBD_OBJ_WRITE_START: 3302 3287 rbd_assert(!*result); 3303 3288 3289 + rbd_obj_set_copyup_enabled(obj_req); 3304 3290 if (rbd_obj_write_is_noop(obj_req)) 3305 3291 return true; 3306 3292 ··· 3488 3472 3489 3473 static void rbd_img_object_requests(struct rbd_img_request *img_req) 3490 3474 { 3475 + struct rbd_device *rbd_dev = img_req->rbd_dev; 3491 3476 struct rbd_obj_request *obj_req; 3492 3477 3493 3478 rbd_assert(!img_req->pending.result && !img_req->pending.num_pending); 3479 + rbd_assert(!need_exclusive_lock(img_req) || 3480 + __rbd_is_lock_owner(rbd_dev)); 3481 + 3482 + if (rbd_img_is_write(img_req)) { 3483 + rbd_assert(!img_req->snapc); 3484 + down_read(&rbd_dev->header_rwsem); 3485 + img_req->snapc = ceph_get_snap_context(rbd_dev->header.snapc); 3486 + up_read(&rbd_dev->header_rwsem); 3487 + } 3494 3488 3495 3489 for_each_obj_request(img_req, obj_req) { 3496 3490 int result = 0; ··· 3518 3492 3519 3493 static bool 
rbd_img_advance(struct rbd_img_request *img_req, int *result) 3520 3494 { 3521 - struct rbd_device *rbd_dev = img_req->rbd_dev; 3522 3495 int ret; 3523 3496 3524 3497 again: ··· 3537 3512 case RBD_IMG_EXCLUSIVE_LOCK: 3538 3513 if (*result) 3539 3514 return true; 3540 - 3541 - rbd_assert(!need_exclusive_lock(img_req) || 3542 - __rbd_is_lock_owner(rbd_dev)); 3543 3515 3544 3516 rbd_img_object_requests(img_req); 3545 3517 if (!img_req->pending.num_pending) { ··· 3998 3976 static int rbd_post_acquire_action(struct rbd_device *rbd_dev) 3999 3977 { 4000 3978 int ret; 3979 + 3980 + ret = rbd_dev_refresh(rbd_dev); 3981 + if (ret) 3982 + return ret; 4001 3983 4002 3984 if (rbd_dev->header.features & RBD_FEATURE_OBJECT_MAP) { 4003 3985 ret = rbd_object_map_open(rbd_dev);
+59 -59
drivers/edac/qcom_edac.c
··· 21 21 #define TRP_SYN_REG_CNT 6 22 22 #define DRP_SYN_REG_CNT 8 23 23 24 - #define LLCC_COMMON_STATUS0 0x0003000c 25 24 #define LLCC_LB_CNT_MASK GENMASK(31, 28) 26 25 #define LLCC_LB_CNT_SHIFT 28 27 - 28 - /* Single & double bit syndrome register offsets */ 29 - #define TRP_ECC_SB_ERR_SYN0 0x0002304c 30 - #define TRP_ECC_DB_ERR_SYN0 0x00020370 31 - #define DRP_ECC_SB_ERR_SYN0 0x0004204c 32 - #define DRP_ECC_DB_ERR_SYN0 0x00042070 33 - 34 - /* Error register offsets */ 35 - #define TRP_ECC_ERROR_STATUS1 0x00020348 36 - #define TRP_ECC_ERROR_STATUS0 0x00020344 37 - #define DRP_ECC_ERROR_STATUS1 0x00042048 38 - #define DRP_ECC_ERROR_STATUS0 0x00042044 39 - 40 - /* TRP, DRP interrupt register offsets */ 41 - #define DRP_INTERRUPT_STATUS 0x00041000 42 - #define TRP_INTERRUPT_0_STATUS 0x00020480 43 - #define DRP_INTERRUPT_CLEAR 0x00041008 44 - #define DRP_ECC_ERROR_CNTR_CLEAR 0x00040004 45 - #define TRP_INTERRUPT_0_CLEAR 0x00020484 46 - #define TRP_ECC_ERROR_CNTR_CLEAR 0x00020440 47 26 48 27 /* Mask and shift macros */ 49 28 #define ECC_DB_ERR_COUNT_MASK GENMASK(4, 0) ··· 38 59 39 60 #define DRP_TRP_INT_CLEAR GENMASK(1, 0) 40 61 #define DRP_TRP_CNT_CLEAR GENMASK(1, 0) 41 - 42 - /* Config registers offsets*/ 43 - #define DRP_ECC_ERROR_CFG 0x00040000 44 - 45 - /* Tag RAM, Data RAM interrupt register offsets */ 46 - #define CMN_INTERRUPT_0_ENABLE 0x0003001c 47 - #define CMN_INTERRUPT_2_ENABLE 0x0003003c 48 - #define TRP_INTERRUPT_0_ENABLE 0x00020488 49 - #define DRP_INTERRUPT_ENABLE 0x0004100c 50 62 51 63 #define SB_ERROR_THRESHOLD 0x1 52 64 #define SB_ERROR_THRESHOLD_SHIFT 24 ··· 58 88 static const struct llcc_edac_reg_data edac_reg_data[] = { 59 89 [LLCC_DRAM_CE] = { 60 90 .name = "DRAM Single-bit", 61 - .synd_reg = DRP_ECC_SB_ERR_SYN0, 62 - .count_status_reg = DRP_ECC_ERROR_STATUS1, 63 - .ways_status_reg = DRP_ECC_ERROR_STATUS0, 64 91 .reg_cnt = DRP_SYN_REG_CNT, 65 92 .count_mask = ECC_SB_ERR_COUNT_MASK, 66 93 .ways_mask = ECC_SB_ERR_WAYS_MASK, ··· 65 98 }, 66 99 
[LLCC_DRAM_UE] = { 67 100 .name = "DRAM Double-bit", 68 - .synd_reg = DRP_ECC_DB_ERR_SYN0, 69 - .count_status_reg = DRP_ECC_ERROR_STATUS1, 70 - .ways_status_reg = DRP_ECC_ERROR_STATUS0, 71 101 .reg_cnt = DRP_SYN_REG_CNT, 72 102 .count_mask = ECC_DB_ERR_COUNT_MASK, 73 103 .ways_mask = ECC_DB_ERR_WAYS_MASK, ··· 72 108 }, 73 109 [LLCC_TRAM_CE] = { 74 110 .name = "TRAM Single-bit", 75 - .synd_reg = TRP_ECC_SB_ERR_SYN0, 76 - .count_status_reg = TRP_ECC_ERROR_STATUS1, 77 - .ways_status_reg = TRP_ECC_ERROR_STATUS0, 78 111 .reg_cnt = TRP_SYN_REG_CNT, 79 112 .count_mask = ECC_SB_ERR_COUNT_MASK, 80 113 .ways_mask = ECC_SB_ERR_WAYS_MASK, ··· 79 118 }, 80 119 [LLCC_TRAM_UE] = { 81 120 .name = "TRAM Double-bit", 82 - .synd_reg = TRP_ECC_DB_ERR_SYN0, 83 - .count_status_reg = TRP_ECC_ERROR_STATUS1, 84 - .ways_status_reg = TRP_ECC_ERROR_STATUS0, 85 121 .reg_cnt = TRP_SYN_REG_CNT, 86 122 .count_mask = ECC_DB_ERR_COUNT_MASK, 87 123 .ways_mask = ECC_DB_ERR_WAYS_MASK, ··· 86 128 }, 87 129 }; 88 130 89 - static int qcom_llcc_core_setup(struct regmap *llcc_bcast_regmap) 131 + static int qcom_llcc_core_setup(struct llcc_drv_data *drv, struct regmap *llcc_bcast_regmap) 90 132 { 91 133 u32 sb_err_threshold; 92 134 int ret; ··· 95 137 * Configure interrupt enable registers such that Tag, Data RAM related 96 138 * interrupts are propagated to interrupt controller for servicing 97 139 */ 98 - ret = regmap_update_bits(llcc_bcast_regmap, CMN_INTERRUPT_2_ENABLE, 140 + ret = regmap_update_bits(llcc_bcast_regmap, drv->edac_reg_offset->cmn_interrupt_2_enable, 99 141 TRP0_INTERRUPT_ENABLE, 100 142 TRP0_INTERRUPT_ENABLE); 101 143 if (ret) 102 144 return ret; 103 145 104 - ret = regmap_update_bits(llcc_bcast_regmap, TRP_INTERRUPT_0_ENABLE, 146 + ret = regmap_update_bits(llcc_bcast_regmap, drv->edac_reg_offset->trp_interrupt_0_enable, 105 147 SB_DB_TRP_INTERRUPT_ENABLE, 106 148 SB_DB_TRP_INTERRUPT_ENABLE); 107 149 if (ret) 108 150 return ret; 109 151 110 152 sb_err_threshold = (SB_ERROR_THRESHOLD << 
SB_ERROR_THRESHOLD_SHIFT); 111 - ret = regmap_write(llcc_bcast_regmap, DRP_ECC_ERROR_CFG, 153 + ret = regmap_write(llcc_bcast_regmap, drv->edac_reg_offset->drp_ecc_error_cfg, 112 154 sb_err_threshold); 113 155 if (ret) 114 156 return ret; 115 157 116 - ret = regmap_update_bits(llcc_bcast_regmap, CMN_INTERRUPT_2_ENABLE, 158 + ret = regmap_update_bits(llcc_bcast_regmap, drv->edac_reg_offset->cmn_interrupt_2_enable, 117 159 DRP0_INTERRUPT_ENABLE, 118 160 DRP0_INTERRUPT_ENABLE); 119 161 if (ret) 120 162 return ret; 121 163 122 - ret = regmap_write(llcc_bcast_regmap, DRP_INTERRUPT_ENABLE, 164 + ret = regmap_write(llcc_bcast_regmap, drv->edac_reg_offset->drp_interrupt_enable, 123 165 SB_DB_DRP_INTERRUPT_ENABLE); 124 166 return ret; 125 167 } ··· 128 170 static int 129 171 qcom_llcc_clear_error_status(int err_type, struct llcc_drv_data *drv) 130 172 { 131 - int ret = 0; 173 + int ret; 132 174 133 175 switch (err_type) { 134 176 case LLCC_DRAM_CE: 135 177 case LLCC_DRAM_UE: 136 - ret = regmap_write(drv->bcast_regmap, DRP_INTERRUPT_CLEAR, 178 + ret = regmap_write(drv->bcast_regmap, 179 + drv->edac_reg_offset->drp_interrupt_clear, 137 180 DRP_TRP_INT_CLEAR); 138 181 if (ret) 139 182 return ret; 140 183 141 - ret = regmap_write(drv->bcast_regmap, DRP_ECC_ERROR_CNTR_CLEAR, 184 + ret = regmap_write(drv->bcast_regmap, 185 + drv->edac_reg_offset->drp_ecc_error_cntr_clear, 142 186 DRP_TRP_CNT_CLEAR); 143 187 if (ret) 144 188 return ret; 145 189 break; 146 190 case LLCC_TRAM_CE: 147 191 case LLCC_TRAM_UE: 148 - ret = regmap_write(drv->bcast_regmap, TRP_INTERRUPT_0_CLEAR, 192 + ret = regmap_write(drv->bcast_regmap, 193 + drv->edac_reg_offset->trp_interrupt_0_clear, 149 194 DRP_TRP_INT_CLEAR); 150 195 if (ret) 151 196 return ret; 152 197 153 - ret = regmap_write(drv->bcast_regmap, TRP_ECC_ERROR_CNTR_CLEAR, 198 + ret = regmap_write(drv->bcast_regmap, 199 + drv->edac_reg_offset->trp_ecc_error_cntr_clear, 154 200 DRP_TRP_CNT_CLEAR); 155 201 if (ret) 156 202 return ret; ··· 167 205 
return ret; 168 206 } 169 207 208 + struct qcom_llcc_syn_regs { 209 + u32 synd_reg; 210 + u32 count_status_reg; 211 + u32 ways_status_reg; 212 + }; 213 + 214 + static void get_reg_offsets(struct llcc_drv_data *drv, int err_type, 215 + struct qcom_llcc_syn_regs *syn_regs) 216 + { 217 + const struct llcc_edac_reg_offset *edac_reg_offset = drv->edac_reg_offset; 218 + 219 + switch (err_type) { 220 + case LLCC_DRAM_CE: 221 + syn_regs->synd_reg = edac_reg_offset->drp_ecc_sb_err_syn0; 222 + syn_regs->count_status_reg = edac_reg_offset->drp_ecc_error_status1; 223 + syn_regs->ways_status_reg = edac_reg_offset->drp_ecc_error_status0; 224 + break; 225 + case LLCC_DRAM_UE: 226 + syn_regs->synd_reg = edac_reg_offset->drp_ecc_db_err_syn0; 227 + syn_regs->count_status_reg = edac_reg_offset->drp_ecc_error_status1; 228 + syn_regs->ways_status_reg = edac_reg_offset->drp_ecc_error_status0; 229 + break; 230 + case LLCC_TRAM_CE: 231 + syn_regs->synd_reg = edac_reg_offset->trp_ecc_sb_err_syn0; 232 + syn_regs->count_status_reg = edac_reg_offset->trp_ecc_error_status1; 233 + syn_regs->ways_status_reg = edac_reg_offset->trp_ecc_error_status0; 234 + break; 235 + case LLCC_TRAM_UE: 236 + syn_regs->synd_reg = edac_reg_offset->trp_ecc_db_err_syn0; 237 + syn_regs->count_status_reg = edac_reg_offset->trp_ecc_error_status1; 238 + syn_regs->ways_status_reg = edac_reg_offset->trp_ecc_error_status0; 239 + break; 240 + } 241 + } 242 + 170 243 /* Dump Syndrome registers data for Tag RAM, Data RAM bit errors*/ 171 244 static int 172 245 dump_syn_reg_values(struct llcc_drv_data *drv, u32 bank, int err_type) 173 246 { 174 247 struct llcc_edac_reg_data reg_data = edac_reg_data[err_type]; 248 + struct qcom_llcc_syn_regs regs = { }; 175 249 int err_cnt, err_ways, ret, i; 176 250 u32 synd_reg, synd_val; 177 251 252 + get_reg_offsets(drv, err_type, &regs); 253 + 178 254 for (i = 0; i < reg_data.reg_cnt; i++) { 179 - synd_reg = reg_data.synd_reg + (i * 4); 255 + synd_reg = regs.synd_reg + (i * 4); 180 256 ret 
= regmap_read(drv->regmaps[bank], synd_reg, 181 257 &synd_val); 182 258 if (ret) ··· 224 224 reg_data.name, i, synd_val); 225 225 } 226 226 227 - ret = regmap_read(drv->regmaps[bank], reg_data.count_status_reg, 227 + ret = regmap_read(drv->regmaps[bank], regs.count_status_reg, 228 228 &err_cnt); 229 229 if (ret) 230 230 goto clear; ··· 234 234 edac_printk(KERN_CRIT, EDAC_LLCC, "%s: Error count: 0x%4x\n", 235 235 reg_data.name, err_cnt); 236 236 237 - ret = regmap_read(drv->regmaps[bank], reg_data.ways_status_reg, 237 + ret = regmap_read(drv->regmaps[bank], regs.ways_status_reg, 238 238 &err_ways); 239 239 if (ret) 240 240 goto clear; ··· 295 295 296 296 /* Iterate over the banks and look for Tag RAM or Data RAM errors */ 297 297 for (i = 0; i < drv->num_banks; i++) { 298 - ret = regmap_read(drv->regmaps[i], DRP_INTERRUPT_STATUS, 298 + ret = regmap_read(drv->regmaps[i], drv->edac_reg_offset->drp_interrupt_status, 299 299 &drp_error); 300 300 301 301 if (!ret && (drp_error & SB_ECC_ERROR)) { ··· 310 310 if (!ret) 311 311 irq_rc = IRQ_HANDLED; 312 312 313 - ret = regmap_read(drv->regmaps[i], TRP_INTERRUPT_0_STATUS, 313 + ret = regmap_read(drv->regmaps[i], drv->edac_reg_offset->trp_interrupt_0_status, 314 314 &trp_error); 315 315 316 316 if (!ret && (trp_error & SB_ECC_ERROR)) { ··· 342 342 int ecc_irq; 343 343 int rc; 344 344 345 - rc = qcom_llcc_core_setup(llcc_driv_data->bcast_regmap); 345 + rc = qcom_llcc_core_setup(llcc_driv_data, llcc_driv_data->bcast_regmap); 346 346 if (rc) 347 347 return rc; 348 348
+1
drivers/firmware/arm_ffa/driver.c
··· 424 424 ep_mem_access->flag = 0; 425 425 ep_mem_access->reserved = 0; 426 426 } 427 + mem_region->handle = 0; 427 428 mem_region->reserved_0 = 0; 428 429 mem_region->reserved_1 = 0; 429 430 mem_region->ep_count = args->nattrs;
+16 -2
drivers/gpio/gpio-sim.c
··· 696 696 char **line_names; 697 697 698 698 list_for_each_entry(line, &bank->line_list, siblings) { 699 + if (line->offset >= bank->num_lines) 700 + continue; 701 + 699 702 if (line->name) { 700 703 if (line->offset > max_offset) 701 704 max_offset = line->offset; ··· 724 721 if (!line_names) 725 722 return ERR_PTR(-ENOMEM); 726 723 727 - list_for_each_entry(line, &bank->line_list, siblings) 728 - line_names[line->offset] = line->name; 724 + list_for_each_entry(line, &bank->line_list, siblings) { 725 + if (line->offset >= bank->num_lines) 726 + continue; 727 + 728 + if (line->name && (line->offset <= max_offset)) 729 + line_names[line->offset] = line->name; 730 + } 729 731 730 732 return line_names; 731 733 } ··· 762 754 763 755 list_for_each_entry(bank, &dev->bank_list, siblings) { 764 756 list_for_each_entry(line, &bank->line_list, siblings) { 757 + if (line->offset >= bank->num_lines) 758 + continue; 759 + 765 760 if (line->hog) 766 761 num_hogs++; 767 762 } ··· 780 769 781 770 list_for_each_entry(bank, &dev->bank_list, siblings) { 782 771 list_for_each_entry(line, &bank->line_list, siblings) { 772 + if (line->offset >= bank->num_lines) 773 + continue; 774 + 783 775 if (!line->hog) 784 776 continue; 785 777
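The gpio-sim hunk above skips any configured line whose offset is at or beyond the bank's line count, so a stray offset can no longer index past the names array. A standalone sketch of that bounds-checked fill (toy types and sizes, not the driver's structures):

```c
#include <stddef.h>
#include <string.h>

#define NUM_LINES 4

struct sim_line { unsigned int offset; const char *name; };

/* Fill names[NUM_LINES] from the configured lines; entries with an
 * out-of-range offset are ignored rather than written. Returns the
 * number of entries dropped. */
static int collect_names(const struct sim_line *lines, size_t n,
			 const char *names[NUM_LINES])
{
	int dropped = 0;
	size_t i;

	memset(names, 0, NUM_LINES * sizeof(names[0]));
	for (i = 0; i < n; i++) {
		if (lines[i].offset >= NUM_LINES) {
			dropped++; /* would index past the array: skip */
			continue;
		}
		names[lines[i].offset] = lines[i].name;
	}
	return dropped;
}
```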
+8 -4
drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
··· 1092 1092 * S0ix even though the system is suspending to idle, so return false 1093 1093 * in that case. 1094 1094 */ 1095 - if (!(acpi_gbl_FADT.flags & ACPI_FADT_LOW_POWER_S0)) 1096 - dev_warn_once(adev->dev, 1095 + if (!(acpi_gbl_FADT.flags & ACPI_FADT_LOW_POWER_S0)) { 1096 + dev_err_once(adev->dev, 1097 1097 "Power consumption will be higher as BIOS has not been configured for suspend-to-idle.\n" 1098 1098 "To use suspend-to-idle change the sleep mode in BIOS setup.\n"); 1099 + return false; 1100 + } 1099 1101 1100 1102 #if !IS_ENABLED(CONFIG_AMD_PMC) 1101 - dev_warn_once(adev->dev, 1103 + dev_err_once(adev->dev, 1102 1104 "Power consumption will be higher as the kernel has not been compiled with CONFIG_AMD_PMC.\n"); 1103 - #endif /* CONFIG_AMD_PMC */ 1105 + return false; 1106 + #else 1104 1107 return true; 1108 + #endif /* CONFIG_AMD_PMC */ 1105 1109 } 1106 1110 1107 1111 #endif /* CONFIG_SUSPEND */
+4 -6
drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
··· 79 79 static void amdgpu_bo_vm_destroy(struct ttm_buffer_object *tbo) 80 80 { 81 81 struct amdgpu_device *adev = amdgpu_ttm_adev(tbo->bdev); 82 - struct amdgpu_bo *bo = ttm_to_amdgpu_bo(tbo); 82 + struct amdgpu_bo *shadow_bo = ttm_to_amdgpu_bo(tbo), *bo; 83 83 struct amdgpu_bo_vm *vmbo; 84 84 85 + bo = shadow_bo->parent; 85 86 vmbo = to_amdgpu_bo_vm(bo); 86 87 /* in case amdgpu_device_recover_vram got NULL of bo->parent */ 87 88 if (!list_empty(&vmbo->shadow_list)) { ··· 695 694 return r; 696 695 697 696 *vmbo_ptr = to_amdgpu_bo_vm(bo_ptr); 698 - INIT_LIST_HEAD(&(*vmbo_ptr)->shadow_list); 699 - /* Set destroy callback to amdgpu_bo_vm_destroy after vmbo->shadow_list 700 - * is initialized. 701 - */ 702 - bo_ptr->tbo.destroy = &amdgpu_bo_vm_destroy; 703 697 return r; 704 698 } 705 699 ··· 711 715 712 716 mutex_lock(&adev->shadow_list_lock); 713 717 list_add_tail(&vmbo->shadow_list, &adev->shadow_list); 718 + vmbo->shadow->parent = amdgpu_bo_ref(&vmbo->bo); 719 + vmbo->shadow->tbo.destroy = &amdgpu_bo_vm_destroy; 714 720 mutex_unlock(&adev->shadow_list_lock); 715 721 } 716 722
-1
drivers/gpu/drm/amd/amdgpu/amdgpu_vm_pt.c
··· 564 564 return r; 565 565 } 566 566 567 - (*vmbo)->shadow->parent = amdgpu_bo_ref(bo); 568 567 amdgpu_bo_add_to_shadow_list(*vmbo); 569 568 570 569 return 0;
+4 -3
drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
··· 800 800 { 801 801 struct amdgpu_vram_mgr *mgr = to_vram_mgr(man); 802 802 struct drm_buddy *mm = &mgr->mm; 803 - struct drm_buddy_block *block; 803 + struct amdgpu_vram_reservation *rsv; 804 804 805 805 drm_printf(printer, " vis usage:%llu\n", 806 806 amdgpu_vram_mgr_vis_usage(mgr)); ··· 812 812 drm_buddy_print(mm, printer); 813 813 814 814 drm_printf(printer, "reserved:\n"); 815 - list_for_each_entry(block, &mgr->reserved_pages, link) 816 - drm_buddy_block_print(mm, block, printer); 815 + list_for_each_entry(rsv, &mgr->reserved_pages, blocks) 816 + drm_printf(printer, "%#018llx-%#018llx: %llu\n", 817 + rsv->start, rsv->start + rsv->size, rsv->size); 817 818 mutex_unlock(&mgr->lock); 818 819 } 819 820
-35
drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
··· 149 149 #define mmGOLDEN_TSC_COUNT_LOWER_Renoir 0x0026 150 150 #define mmGOLDEN_TSC_COUNT_LOWER_Renoir_BASE_IDX 1 151 151 152 - #define mmGOLDEN_TSC_COUNT_UPPER_Raven 0x007a 153 - #define mmGOLDEN_TSC_COUNT_UPPER_Raven_BASE_IDX 0 154 - #define mmGOLDEN_TSC_COUNT_LOWER_Raven 0x007b 155 - #define mmGOLDEN_TSC_COUNT_LOWER_Raven_BASE_IDX 0 156 - 157 - #define mmGOLDEN_TSC_COUNT_UPPER_Raven2 0x0068 158 - #define mmGOLDEN_TSC_COUNT_UPPER_Raven2_BASE_IDX 0 159 - #define mmGOLDEN_TSC_COUNT_LOWER_Raven2 0x0069 160 - #define mmGOLDEN_TSC_COUNT_LOWER_Raven2_BASE_IDX 0 161 - 162 152 enum ta_ras_gfx_subblock { 163 153 /*CPC*/ 164 154 TA_RAS_BLOCK__GFX_CPC_INDEX_START = 0, ··· 3989 3999 */ 3990 4000 if (hi_check != clock_hi) { 3991 4001 clock_lo = RREG32_SOC15_NO_KIQ(SMUIO, 0, mmGOLDEN_TSC_COUNT_LOWER_Renoir); 3992 - clock_hi = hi_check; 3993 - } 3994 - preempt_enable(); 3995 - clock = clock_lo | (clock_hi << 32ULL); 3996 - break; 3997 - case IP_VERSION(9, 1, 0): 3998 - case IP_VERSION(9, 2, 2): 3999 - preempt_disable(); 4000 - if (adev->rev_id >= 0x8) { 4001 - clock_hi = RREG32_SOC15_NO_KIQ(PWR, 0, mmGOLDEN_TSC_COUNT_UPPER_Raven2); 4002 - clock_lo = RREG32_SOC15_NO_KIQ(PWR, 0, mmGOLDEN_TSC_COUNT_LOWER_Raven2); 4003 - hi_check = RREG32_SOC15_NO_KIQ(PWR, 0, mmGOLDEN_TSC_COUNT_UPPER_Raven2); 4004 - } else { 4005 - clock_hi = RREG32_SOC15_NO_KIQ(PWR, 0, mmGOLDEN_TSC_COUNT_UPPER_Raven); 4006 - clock_lo = RREG32_SOC15_NO_KIQ(PWR, 0, mmGOLDEN_TSC_COUNT_LOWER_Raven); 4007 - hi_check = RREG32_SOC15_NO_KIQ(PWR, 0, mmGOLDEN_TSC_COUNT_UPPER_Raven); 4008 - } 4009 - /* The PWR TSC clock frequency is 100MHz, which sets 32-bit carry over 4010 - * roughly every 42 seconds. 4011 - */ 4012 - if (hi_check != clock_hi) { 4013 - if (adev->rev_id >= 0x8) 4014 - clock_lo = RREG32_SOC15_NO_KIQ(PWR, 0, mmGOLDEN_TSC_COUNT_LOWER_Raven2); 4015 - else 4016 - clock_lo = RREG32_SOC15_NO_KIQ(PWR, 0, mmGOLDEN_TSC_COUNT_LOWER_Raven); 4017 4002 clock_hi = hi_check; 4018 4003 } 4019 4004 preempt_enable();
+4 -3
drivers/gpu/drm/amd/amdgpu/soc15.c
··· 301 301 u32 reference_clock = adev->clock.spll.reference_freq; 302 302 303 303 if (adev->ip_versions[MP1_HWIP][0] == IP_VERSION(12, 0, 0) || 304 - adev->ip_versions[MP1_HWIP][0] == IP_VERSION(12, 0, 1) || 305 - adev->ip_versions[MP1_HWIP][0] == IP_VERSION(10, 0, 0) || 306 - adev->ip_versions[MP1_HWIP][0] == IP_VERSION(10, 0, 1)) 304 + adev->ip_versions[MP1_HWIP][0] == IP_VERSION(12, 0, 1)) 307 305 return 10000; 306 + if (adev->ip_versions[MP1_HWIP][0] == IP_VERSION(10, 0, 0) || 307 + adev->ip_versions[MP1_HWIP][0] == IP_VERSION(10, 0, 1)) 308 + return reference_clock / 4; 308 309 309 310 return reference_clock; 310 311 }
+9 -2
drivers/gpu/drm/amd/amdgpu/vi.c
··· 542 542 u32 reference_clock = adev->clock.spll.reference_freq; 543 543 u32 tmp; 544 544 545 - if (adev->flags & AMD_IS_APU) 546 - return reference_clock; 545 + if (adev->flags & AMD_IS_APU) { 546 + switch (adev->asic_type) { 547 + case CHIP_STONEY: 548 + /* vbios says 48Mhz, but the actual freq is 100Mhz */ 549 + return 10000; 550 + default: 551 + return reference_clock; 552 + } 553 + } 547 554 548 555 tmp = RREG32_SMC(ixCG_CLKPIN_CNTL_2); 549 556 if (REG_GET_FIELD(tmp, CG_CLKPIN_CNTL_2, MUX_TCLK_TO_XCLK))
+35 -1
drivers/gpu/drm/amd/display/dc/core/dc.c
··· 1981 1981 return result; 1982 1982 } 1983 1983 1984 + static bool commit_minimal_transition_state(struct dc *dc, 1985 + struct dc_state *transition_base_context); 1986 + 1984 1987 /** 1985 1988 * dc_commit_streams - Commit current stream state 1986 1989 * ··· 2005 2002 struct dc_state *context; 2006 2003 enum dc_status res = DC_OK; 2007 2004 struct dc_validation_set set[MAX_STREAMS] = {0}; 2005 + struct pipe_ctx *pipe; 2006 + bool handle_exit_odm2to1 = false; 2008 2007 2009 2008 if (dc->ctx->dce_environment == DCE_ENV_VIRTUAL_HW) 2010 2009 return res; ··· 2030 2025 set[i].plane_states[j] = status->plane_states[j]; 2031 2026 } 2032 2027 } 2028 + 2029 + /* Check for case where we are going from odm 2:1 to max 2030 + * pipe scenario. For these cases, we will call 2031 + * commit_minimal_transition_state() to exit out of odm 2:1 2032 + * first before processing new streams 2033 + */ 2034 + if (stream_count == dc->res_pool->pipe_count) { 2035 + for (i = 0; i < dc->res_pool->pipe_count; i++) { 2036 + pipe = &dc->current_state->res_ctx.pipe_ctx[i]; 2037 + if (pipe->next_odm_pipe) 2038 + handle_exit_odm2to1 = true; 2039 + } 2040 + } 2041 + 2042 + if (handle_exit_odm2to1) 2043 + res = commit_minimal_transition_state(dc, dc->current_state); 2033 2044 2034 2045 context = dc_create_state(dc); 2035 2046 if (!context) ··· 3893 3872 unsigned int i, j; 3894 3873 unsigned int pipe_in_use = 0; 3895 3874 bool subvp_in_use = false; 3875 + bool odm_in_use = false; 3896 3876 3897 3877 if (!transition_context) 3898 3878 return false; ··· 3922 3900 } 3923 3901 } 3924 3902 3903 + /* If ODM is enabled and we are adding or removing planes from any ODM 3904 + * pipe, we must use the minimal transition. 
3905 + */ 3906 + for (i = 0; i < dc->res_pool->pipe_count; i++) { 3907 + struct pipe_ctx *pipe = &dc->current_state->res_ctx.pipe_ctx[i]; 3908 + 3909 + if (pipe->stream && pipe->next_odm_pipe) { 3910 + odm_in_use = true; 3911 + break; 3912 + } 3913 + } 3914 + 3925 3915 /* When the OS add a new surface if we have been used all of pipes with odm combine 3926 3916 * and mpc split feature, it need use commit_minimal_transition_state to transition safely. 3927 3917 * After OS exit MPO, it will back to use odm and mpc split with all of pipes, we need ··· 3942 3908 * Reduce the scenarios to use dc_commit_state_no_check in the stage of flip. Especially 3943 3909 * enter/exit MPO when DCN still have enough resources. 3944 3910 */ 3945 - if (pipe_in_use != dc->res_pool->pipe_count && !subvp_in_use) { 3911 + if (pipe_in_use != dc->res_pool->pipe_count && !subvp_in_use && !odm_in_use) { 3946 3912 dc_release_state(transition_context); 3947 3913 return true; 3948 3914 }
+20
drivers/gpu/drm/amd/display/dc/core/dc_resource.c
··· 1446 1446 1447 1447 split_pipe->stream = stream; 1448 1448 return i; 1449 + } else if (split_pipe->prev_odm_pipe && 1450 + split_pipe->prev_odm_pipe->plane_state == split_pipe->plane_state) { 1451 + split_pipe->prev_odm_pipe->next_odm_pipe = split_pipe->next_odm_pipe; 1452 + if (split_pipe->next_odm_pipe) 1453 + split_pipe->next_odm_pipe->prev_odm_pipe = split_pipe->prev_odm_pipe; 1454 + 1455 + if (split_pipe->prev_odm_pipe->plane_state) 1456 + resource_build_scaling_params(split_pipe->prev_odm_pipe); 1457 + 1458 + memset(split_pipe, 0, sizeof(*split_pipe)); 1459 + split_pipe->stream_res.tg = pool->timing_generators[i]; 1460 + split_pipe->plane_res.hubp = pool->hubps[i]; 1461 + split_pipe->plane_res.ipp = pool->ipps[i]; 1462 + split_pipe->plane_res.dpp = pool->dpps[i]; 1463 + split_pipe->stream_res.opp = pool->opps[i]; 1464 + split_pipe->plane_res.mpcc_inst = pool->dpps[i]->inst; 1465 + split_pipe->pipe_idx = i; 1466 + 1467 + split_pipe->stream = stream; 1468 + return i; 1449 1469 } 1450 1470 } 1451 1471 return -1;
+1 -1
drivers/gpu/drm/amd/display/dc/dml/dcn32/dcn32_fpu.c
··· 138 138 .urgent_out_of_order_return_per_channel_pixel_only_bytes = 4096, 139 139 .urgent_out_of_order_return_per_channel_pixel_and_vm_bytes = 4096, 140 140 .urgent_out_of_order_return_per_channel_vm_only_bytes = 4096, 141 - .pct_ideal_sdp_bw_after_urgent = 100.0, 141 + .pct_ideal_sdp_bw_after_urgent = 90.0, 142 142 .pct_ideal_fabric_bw_after_urgent = 67.0, 143 143 .pct_ideal_dram_sdp_bw_after_urgent_pixel_only = 20.0, 144 144 .pct_ideal_dram_sdp_bw_after_urgent_pixel_and_vm = 60.0, // N/A, for now keep as is until DML implemented
+74 -18
drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
··· 2067 2067 return ret;
 2068 2068 }
 2069 2069 
 2070 + static void sienna_cichlid_get_override_pcie_settings(struct smu_context *smu,
 2071 + uint32_t *gen_speed_override,
 2072 + uint32_t *lane_width_override)
 2073 + {
 2074 + struct amdgpu_device *adev = smu->adev;
 2075 + 
 2076 + *gen_speed_override = 0xff;
 2077 + *lane_width_override = 0xff;
 2078 + 
 2079 + switch (adev->pdev->device) {
 2080 + case 0x73A0:
 2081 + case 0x73A1:
 2082 + case 0x73A2:
 2083 + case 0x73A3:
 2084 + case 0x73AB:
 2085 + case 0x73AE:
 2086 + /* Bit 7:0: PCIE lane width, 1 to 7 corresponds is x1 to x32 */
 2087 + *lane_width_override = 6;
 2088 + break;
 2089 + case 0x73E0:
 2090 + case 0x73E1:
 2091 + case 0x73E3:
 2092 + *lane_width_override = 4;
 2093 + break;
 2094 + case 0x7420:
 2095 + case 0x7421:
 2096 + case 0x7422:
 2097 + case 0x7423:
 2098 + case 0x7424:
 2099 + *lane_width_override = 3;
 2100 + break;
 2101 + default:
 2102 + break;
 2103 + }
 2104 + }
 2105 + 
 2106 + #define MAX(a, b) ((a) > (b) ? (a) : (b))
 2107 + 
 2070 2108 static int sienna_cichlid_update_pcie_parameters(struct smu_context *smu,
 2071 2109 uint32_t pcie_gen_cap,
 2072 2110 uint32_t pcie_width_cap)
 2073 2111 {
 2074 2112 struct smu_11_0_dpm_context *dpm_context = smu->smu_dpm.dpm_context;
 2075 - 
 2076 - uint32_t smu_pcie_arg;
 2113 + struct smu_11_0_pcie_table *pcie_table = &dpm_context->dpm_tables.pcie_table;
 2114 + uint32_t gen_speed_override, lane_width_override;
 2077 2115 uint8_t *table_member1, *table_member2;
 2116 + uint32_t min_gen_speed, max_gen_speed;
 2117 + uint32_t min_lane_width, max_lane_width;
 2118 + uint32_t smu_pcie_arg;
 2078 2119 int ret, i;
 2079 2120 
 2080 2121 GET_PPTABLE_MEMBER(PcieGenSpeed, &table_member1);
 2081 2122 GET_PPTABLE_MEMBER(PcieLaneCount, &table_member2);
 2082 2123 
 2083 - /* lclk dpm table setup */
 2084 - for (i = 0; i < MAX_PCIE_CONF; i++) {
 2085 - dpm_context->dpm_tables.pcie_table.pcie_gen[i] = table_member1[i];
 2086 - dpm_context->dpm_tables.pcie_table.pcie_lane[i] = table_member2[i];
 2124 + sienna_cichlid_get_override_pcie_settings(smu,
 2125 + &gen_speed_override,
 2126 + &lane_width_override);
 2127 + 
 2128 + /* PCIE gen speed override */
 2129 + if (gen_speed_override != 0xff) {
 2130 + min_gen_speed = MIN(pcie_gen_cap, gen_speed_override);
 2131 + max_gen_speed = MIN(pcie_gen_cap, gen_speed_override);
 2132 + } else {
 2133 + min_gen_speed = MAX(0, table_member1[0]);
 2134 + max_gen_speed = MIN(pcie_gen_cap, table_member1[1]);
 2135 + min_gen_speed = min_gen_speed > max_gen_speed ?
 2136 + max_gen_speed : min_gen_speed;
 2087 2137 }
 2138 + pcie_table->pcie_gen[0] = min_gen_speed;
 2139 + pcie_table->pcie_gen[1] = max_gen_speed;
 2140 + 
 2141 + /* PCIE lane width override */
 2142 + if (lane_width_override != 0xff) {
 2143 + min_lane_width = MIN(pcie_width_cap, lane_width_override);
 2144 + max_lane_width = MIN(pcie_width_cap, lane_width_override);
 2145 + } else {
 2146 + min_lane_width = MAX(1, table_member2[0]);
 2147 + max_lane_width = MIN(pcie_width_cap, table_member2[1]);
 2148 + min_lane_width = min_lane_width > max_lane_width ?
 2149 + max_lane_width : min_lane_width;
 2150 + }
 2151 + pcie_table->pcie_lane[0] = min_lane_width;
 2152 + pcie_table->pcie_lane[1] = max_lane_width;
 2088 2153 
 2089 2154 for (i = 0; i < NUM_LINK_LEVELS; i++) {
 2090 - smu_pcie_arg = (i << 16) |
 2091 - ((table_member1[i] <= pcie_gen_cap) ?
 2092 - (table_member1[i] << 8) :
 2093 - (pcie_gen_cap << 8)) |
 2094 - ((table_member2[i] <= pcie_width_cap) ?
2095 - table_member2[i] : 2096 - pcie_width_cap); 2155 + smu_pcie_arg = (i << 16 | 2156 + pcie_table->pcie_gen[i] << 8 | 2157 + pcie_table->pcie_lane[i]); 2097 2158 2098 2159 ret = smu_cmn_send_smc_msg_with_param(smu, 2099 2160 SMU_MSG_OverridePcieParameters, ··· 2162 2101 NULL); 2163 2102 if (ret) 2164 2103 return ret; 2165 - 2166 - if (table_member1[i] > pcie_gen_cap) 2167 - dpm_context->dpm_tables.pcie_table.pcie_gen[i] = pcie_gen_cap; 2168 - if (table_member2[i] > pcie_width_cap) 2169 - dpm_context->dpm_tables.pcie_table.pcie_lane[i] = pcie_width_cap; 2170 2104 } 2171 2105 2172 2106 return 0;
+2 -2
drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
··· 573 573 if (smu_power->power_context || smu_power->power_context_size != 0) 574 574 return -EINVAL; 575 575 576 - smu_power->power_context = kzalloc(sizeof(struct smu_13_0_dpm_context), 576 + smu_power->power_context = kzalloc(sizeof(struct smu_13_0_power_context), 577 577 GFP_KERNEL); 578 578 if (!smu_power->power_context) 579 579 return -ENOMEM; 580 - smu_power->power_context_size = sizeof(struct smu_13_0_dpm_context); 580 + smu_power->power_context_size = sizeof(struct smu_13_0_power_context); 581 581 582 582 return 0; 583 583 }
+18 -39
drivers/gpu/drm/ast/ast_dp.c
··· 119 119 /* 120 120 * Launch Aspeed DP 121 121 */ 122 - void ast_dp_launch(struct drm_device *dev, u8 bPower) 122 + void ast_dp_launch(struct drm_device *dev) 123 123 { 124 - u32 i = 0, j = 0, WaitCount = 1; 125 - u8 bDPTX = 0; 124 + u32 i = 0; 126 125 u8 bDPExecute = 1; 127 - 128 126 struct ast_device *ast = to_ast_device(dev); 129 - // S3 come back, need more time to wait BMC ready. 130 - if (bPower) 131 - WaitCount = 300; 132 127 133 - 134 - // Wait total count by different condition. 135 - for (j = 0; j < WaitCount; j++) { 136 - bDPTX = ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xD1, TX_TYPE_MASK); 137 - 138 - if (bDPTX) 139 - break; 140 - 128 + // Wait one second then timeout. 129 + while (ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xD1, ASTDP_MCU_FW_EXECUTING) != 130 + ASTDP_MCU_FW_EXECUTING) { 131 + i++; 132 + // wait 100 ms 141 133 msleep(100); 142 - } 143 134 144 - // 0xE : ASTDP with DPMCU FW handling 145 - if (bDPTX == ASTDP_DPMCU_TX) { 146 - // Wait one second then timeout. 147 - i = 0; 148 - 149 - while (ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xD1, COPROCESSOR_LAUNCH) != 150 - COPROCESSOR_LAUNCH) { 151 - i++; 152 - // wait 100 ms 153 - msleep(100); 154 - 155 - if (i >= 10) { 156 - // DP would not be ready. 157 - bDPExecute = 0; 158 - break; 159 - } 135 + if (i >= 10) { 136 + // DP would not be ready. 137 + bDPExecute = 0; 138 + break; 160 139 } 161 - 162 - if (bDPExecute) 163 - ast->tx_chip_types |= BIT(AST_TX_ASTDP); 164 - 165 - ast_set_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xE5, 166 - (u8) ~ASTDP_HOST_EDID_READ_DONE_MASK, 167 - ASTDP_HOST_EDID_READ_DONE); 168 140 } 141 + 142 + if (!bDPExecute) 143 + drm_err(dev, "Wait DPMCU executing timeout\n"); 144 + 145 + ast_set_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xE5, 146 + (u8) ~ASTDP_HOST_EDID_READ_DONE_MASK, 147 + ASTDP_HOST_EDID_READ_DONE); 169 148 } 170 149 171 150
+1 -4
drivers/gpu/drm/ast/ast_drv.h
··· 350 350 #define AST_DP501_LINKRATE 0xf014 351 351 #define AST_DP501_EDID_DATA 0xf020 352 352 353 - /* Define for Soc scratched reg */ 354 - #define COPROCESSOR_LAUNCH BIT(5) 355 - 356 353 /* 357 354 * Display Transmitter Type: 358 355 */ ··· 477 480 478 481 /* aspeed DP */ 479 482 int ast_astdp_read_edid(struct drm_device *dev, u8 *ediddata); 480 - void ast_dp_launch(struct drm_device *dev, u8 bPower); 483 + void ast_dp_launch(struct drm_device *dev); 481 484 void ast_dp_power_on_off(struct drm_device *dev, bool no); 482 485 void ast_dp_set_on_off(struct drm_device *dev, bool no); 483 486 void ast_dp_set_mode(struct drm_crtc *crtc, struct ast_vbios_mode_info *vbios_mode);
+9 -2
drivers/gpu/drm/ast/ast_main.c
··· 254 254 case 0x0c: 255 255 ast->tx_chip_types = AST_TX_DP501_BIT; 256 256 } 257 - } else if (ast->chip == AST2600) 258 - ast_dp_launch(&ast->base, 0); 257 + } else if (ast->chip == AST2600) { 258 + if (ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xD1, TX_TYPE_MASK) == 259 + ASTDP_DPMCU_TX) { 260 + ast->tx_chip_types = AST_TX_ASTDP_BIT; 261 + ast_dp_launch(&ast->base); 262 + } 263 + } 259 264 260 265 /* Print stuff for diagnostic purposes */ 261 266 if (ast->tx_chip_types & AST_TX_NONE_BIT) ··· 269 264 drm_info(dev, "Using Sil164 TMDS transmitter\n"); 270 265 if (ast->tx_chip_types & AST_TX_DP501_BIT) 271 266 drm_info(dev, "Using DP501 DisplayPort transmitter\n"); 267 + if (ast->tx_chip_types & AST_TX_ASTDP_BIT) 268 + drm_info(dev, "Using ASPEED DisplayPort transmitter\n"); 272 269 273 270 return 0; 274 271 }
+13 -2
drivers/gpu/drm/ast/ast_mode.c
··· 1647 1647 static int ast_astdp_connector_helper_get_modes(struct drm_connector *connector) 1648 1648 { 1649 1649 void *edid; 1650 + struct drm_device *dev = connector->dev; 1651 + struct ast_device *ast = to_ast_device(dev); 1650 1652 1651 1653 int succ; 1652 1654 int count; ··· 1657 1655 if (!edid) 1658 1656 goto err_drm_connector_update_edid_property; 1659 1657 1658 + /* 1659 + * Protect access to I/O registers from concurrent modesetting 1660 + * by acquiring the I/O-register lock. 1661 + */ 1662 + mutex_lock(&ast->ioregs_lock); 1663 + 1660 1664 succ = ast_astdp_read_edid(connector->dev, edid); 1661 1665 if (succ < 0) 1662 - goto err_kfree; 1666 + goto err_mutex_unlock; 1667 + 1668 + mutex_unlock(&ast->ioregs_lock); 1663 1669 1664 1670 drm_connector_update_edid_property(connector, edid); 1665 1671 count = drm_add_edid_modes(connector, edid); ··· 1675 1665 1676 1666 return count; 1677 1667 1678 - err_kfree: 1668 + err_mutex_unlock: 1669 + mutex_unlock(&ast->ioregs_lock); 1679 1670 kfree(edid); 1680 1671 err_drm_connector_update_edid_property: 1681 1672 drm_connector_update_edid_property(connector, NULL);
+2 -1
drivers/gpu/drm/ast/ast_post.c
··· 380 380 ast_set_def_ext_reg(dev); 381 381 382 382 if (ast->chip == AST2600) { 383 - ast_dp_launch(dev, 1); 383 + if (ast->tx_chip_types & AST_TX_ASTDP_BIT) 384 + ast_dp_launch(dev); 384 385 } else if (ast->config_mode == ast_use_p2a) { 385 386 if (ast->chip == AST2500) 386 387 ast_post_chip_2500(dev);
+7 -5
drivers/gpu/drm/drm_fb_helper.c
··· 1545 1545 } 1546 1546 } 1547 1547 1548 - static void __fill_var(struct fb_var_screeninfo *var, 1548 + static void __fill_var(struct fb_var_screeninfo *var, struct fb_info *info, 1549 1549 struct drm_framebuffer *fb) 1550 1550 { 1551 1551 int i; 1552 1552 1553 1553 var->xres_virtual = fb->width; 1554 1554 var->yres_virtual = fb->height; 1555 - var->accel_flags = FB_ACCELF_TEXT; 1555 + var->accel_flags = 0; 1556 1556 var->bits_per_pixel = drm_format_info_bpp(fb->format, 0); 1557 1557 1558 - var->height = var->width = 0; 1558 + var->height = info->var.height; 1559 + var->width = info->var.width; 1560 + 1559 1561 var->left_margin = var->right_margin = 0; 1560 1562 var->upper_margin = var->lower_margin = 0; 1561 1563 var->hsync_len = var->vsync_len = 0; ··· 1620 1618 return -EINVAL; 1621 1619 } 1622 1620 1623 - __fill_var(var, fb); 1621 + __fill_var(var, info, fb); 1624 1622 1625 1623 /* 1626 1624 * fb_pan_display() validates this, but fb_set_par() doesn't and just ··· 2076 2074 info->pseudo_palette = fb_helper->pseudo_palette; 2077 2075 info->var.xoffset = 0; 2078 2076 info->var.yoffset = 0; 2079 - __fill_var(&info->var, fb); 2077 + __fill_var(&info->var, info, fb); 2080 2078 info->var.activate = FB_ACTIVATE_NOW; 2081 2079 2082 2080 drm_fb_helper_fill_pixel_fmt(&info->var, format);
+1 -1
drivers/gpu/drm/exynos/exynos_drm_g2d.c
··· 1335 1335 /* Let the runqueue know that there is work to do. */ 1336 1336 queue_work(g2d->g2d_workq, &g2d->runqueue_work); 1337 1337 1338 - if (runqueue_node->async) 1338 + if (req->async) 1339 1339 goto out; 1340 1340 1341 1341 wait_for_completion(&runqueue_node->complete);
-2
drivers/gpu/drm/exynos/exynos_drm_vidi.c
··· 469 469 if (ctx->raw_edid != (struct edid *)fake_edid_info) { 470 470 kfree(ctx->raw_edid); 471 471 ctx->raw_edid = NULL; 472 - 473 - return -EINVAL; 474 472 } 475 473 476 474 component_del(&pdev->dev, &vidi_component_ops);
+26 -4
drivers/gpu/drm/i915/display/intel_cdclk.c
··· 1453 1453 return 0; 1454 1454 } 1455 1455 1456 + static u8 rplu_calc_voltage_level(int cdclk) 1457 + { 1458 + if (cdclk > 556800) 1459 + return 3; 1460 + else if (cdclk > 480000) 1461 + return 2; 1462 + else if (cdclk > 312000) 1463 + return 1; 1464 + else 1465 + return 0; 1466 + } 1467 + 1456 1468 static void icl_readout_refclk(struct drm_i915_private *dev_priv, 1457 1469 struct intel_cdclk_config *cdclk_config) 1458 1470 { ··· 3254 3242 .calc_voltage_level = tgl_calc_voltage_level, 3255 3243 }; 3256 3244 3245 + static const struct intel_cdclk_funcs rplu_cdclk_funcs = { 3246 + .get_cdclk = bxt_get_cdclk, 3247 + .set_cdclk = bxt_set_cdclk, 3248 + .modeset_calc_cdclk = bxt_modeset_calc_cdclk, 3249 + .calc_voltage_level = rplu_calc_voltage_level, 3250 + }; 3251 + 3257 3252 static const struct intel_cdclk_funcs tgl_cdclk_funcs = { 3258 3253 .get_cdclk = bxt_get_cdclk, 3259 3254 .set_cdclk = bxt_set_cdclk, ··· 3403 3384 dev_priv->display.funcs.cdclk = &tgl_cdclk_funcs; 3404 3385 dev_priv->display.cdclk.table = dg2_cdclk_table; 3405 3386 } else if (IS_ALDERLAKE_P(dev_priv)) { 3406 - dev_priv->display.funcs.cdclk = &tgl_cdclk_funcs; 3407 3387 /* Wa_22011320316:adl-p[a0] */ 3408 - if (IS_ADLP_DISPLAY_STEP(dev_priv, STEP_A0, STEP_B0)) 3388 + if (IS_ADLP_DISPLAY_STEP(dev_priv, STEP_A0, STEP_B0)) { 3409 3389 dev_priv->display.cdclk.table = adlp_a_step_cdclk_table; 3410 - else if (IS_ADLP_RPLU(dev_priv)) 3390 + dev_priv->display.funcs.cdclk = &tgl_cdclk_funcs; 3391 + } else if (IS_ADLP_RPLU(dev_priv)) { 3411 3392 dev_priv->display.cdclk.table = rplu_cdclk_table; 3412 - else 3393 + dev_priv->display.funcs.cdclk = &rplu_cdclk_funcs; 3394 + } else { 3413 3395 dev_priv->display.cdclk.table = adlp_cdclk_table; 3396 + dev_priv->display.funcs.cdclk = &tgl_cdclk_funcs; 3397 + } 3414 3398 } else if (IS_ROCKETLAKE(dev_priv)) { 3415 3399 dev_priv->display.funcs.cdclk = &tgl_cdclk_funcs; 3416 3400 dev_priv->display.cdclk.table = rkl_cdclk_table;
+1 -1
drivers/gpu/drm/i915/display/intel_dp_aux.c
··· 129 129 130 130 static int intel_dp_aux_fw_sync_len(void) 131 131 { 132 - int precharge = 16; /* 10-16 */ 132 + int precharge = 10; /* 10-16 */ 133 133 int preamble = 8; 134 134 135 135 return precharge + preamble;
+10 -4
drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c
··· 346 346 continue; 347 347 348 348 ce = intel_context_create(data[m].ce[0]->engine); 349 - if (IS_ERR(ce)) 349 + if (IS_ERR(ce)) { 350 + err = PTR_ERR(ce); 350 351 goto out; 352 + } 351 353 352 354 err = intel_context_pin(ce); 353 355 if (err) { ··· 369 367 370 368 worker = kthread_create_worker(0, "igt/parallel:%s", 371 369 data[n].ce[0]->engine->name); 372 - if (IS_ERR(worker)) 370 + if (IS_ERR(worker)) { 371 + err = PTR_ERR(worker); 373 372 goto out; 373 + } 374 374 375 375 data[n].worker = worker; 376 376 } ··· 401 397 } 402 398 } 403 399 404 - if (igt_live_test_end(&t)) 405 - err = -EIO; 400 + if (igt_live_test_end(&t)) { 401 + err = err ?: -EIO; 402 + break; 403 + } 406 404 } 407 405 408 406 out:
+8 -4
drivers/gpu/drm/i915/gt/selftest_execlists.c
··· 1530 1530 struct drm_i915_gem_object *obj; 1531 1531 struct i915_vma *vma; 1532 1532 enum intel_engine_id id; 1533 - int err = -ENOMEM; 1534 1533 u32 *map; 1534 + int err; 1535 1535 1536 1536 /* 1537 1537 * Verify that even without HAS_LOGICAL_RING_PREEMPTION, we can ··· 1539 1539 */ 1540 1540 1541 1541 ctx_hi = kernel_context(gt->i915, NULL); 1542 - if (!ctx_hi) 1543 - return -ENOMEM; 1542 + if (IS_ERR(ctx_hi)) 1543 + return PTR_ERR(ctx_hi); 1544 + 1544 1545 ctx_hi->sched.priority = I915_CONTEXT_MAX_USER_PRIORITY; 1545 1546 1546 1547 ctx_lo = kernel_context(gt->i915, NULL); 1547 - if (!ctx_lo) 1548 + if (IS_ERR(ctx_lo)) { 1549 + err = PTR_ERR(ctx_lo); 1548 1550 goto err_ctx_hi; 1551 + } 1552 + 1549 1553 ctx_lo->sched.priority = I915_CONTEXT_MIN_USER_PRIORITY; 1550 1554 1551 1555 obj = i915_gem_object_create_internal(gt->i915, PAGE_SIZE);
+1 -1
drivers/gpu/drm/lima/lima_sched.c
··· 165 165 void lima_sched_context_fini(struct lima_sched_pipe *pipe, 166 166 struct lima_sched_context *context) 167 167 { 168 - drm_sched_entity_fini(&context->base); 168 + drm_sched_entity_destroy(&context->base); 169 169 } 170 170 171 171 struct dma_fence *lima_sched_context_queue_task(struct lima_sched_task *task)
-2
drivers/gpu/drm/msm/adreno/a6xx_gmu.c
··· 1526 1526 if (!pdev) 1527 1527 return -ENODEV; 1528 1528 1529 - mutex_init(&gmu->lock); 1530 - 1531 1529 gmu->dev = &pdev->dev; 1532 1530 1533 1531 of_dma_configure(gmu->dev, node, true);
+2
drivers/gpu/drm/msm/adreno/a6xx_gpu.c
··· 1981 1981 adreno_gpu = &a6xx_gpu->base; 1982 1982 gpu = &adreno_gpu->base; 1983 1983 1984 + mutex_init(&a6xx_gpu->gmu.lock); 1985 + 1984 1986 adreno_gpu->registers = NULL; 1985 1987 1986 1988 /*
+14 -1
drivers/gpu/drm/msm/dp/dp_catalog.c
··· 620 620 config & DP_DP_HPD_INT_MASK); 621 621 } 622 622 623 - void dp_catalog_ctrl_hpd_config(struct dp_catalog *dp_catalog) 623 + void dp_catalog_ctrl_hpd_enable(struct dp_catalog *dp_catalog) 624 624 { 625 625 struct dp_catalog_private *catalog = container_of(dp_catalog, 626 626 struct dp_catalog_private, dp_catalog); ··· 633 633 634 634 /* Enable HPD */ 635 635 dp_write_aux(catalog, REG_DP_DP_HPD_CTRL, DP_DP_HPD_CTRL_HPD_EN); 636 + } 637 + 638 + void dp_catalog_ctrl_hpd_disable(struct dp_catalog *dp_catalog) 639 + { 640 + struct dp_catalog_private *catalog = container_of(dp_catalog, 641 + struct dp_catalog_private, dp_catalog); 642 + 643 + u32 reftimer = dp_read_aux(catalog, REG_DP_DP_HPD_REFTIMER); 644 + 645 + reftimer &= ~DP_DP_HPD_REFTIMER_ENABLE; 646 + dp_write_aux(catalog, REG_DP_DP_HPD_REFTIMER, reftimer); 647 + 648 + dp_write_aux(catalog, REG_DP_DP_HPD_CTRL, 0); 636 649 } 637 650 638 651 static void dp_catalog_enable_sdp(struct dp_catalog_private *catalog)
+2 -1
drivers/gpu/drm/msm/dp/dp_catalog.h
··· 104 104 void dp_catalog_ctrl_enable_irq(struct dp_catalog *dp_catalog, bool enable); 105 105 void dp_catalog_hpd_config_intr(struct dp_catalog *dp_catalog, 106 106 u32 intr_mask, bool en); 107 - void dp_catalog_ctrl_hpd_config(struct dp_catalog *dp_catalog); 107 + void dp_catalog_ctrl_hpd_enable(struct dp_catalog *dp_catalog); 108 + void dp_catalog_ctrl_hpd_disable(struct dp_catalog *dp_catalog); 108 109 void dp_catalog_ctrl_config_psr(struct dp_catalog *dp_catalog); 109 110 void dp_catalog_ctrl_set_psr(struct dp_catalog *dp_catalog, bool enter); 110 111 u32 dp_catalog_link_is_connected(struct dp_catalog *dp_catalog);
+24 -53
drivers/gpu/drm/msm/dp/dp_display.c
··· 28 28 #include "dp_audio.h" 29 29 #include "dp_debug.h" 30 30 31 + static bool psr_enabled = false; 32 + module_param(psr_enabled, bool, 0); 33 + MODULE_PARM_DESC(psr_enabled, "enable PSR for eDP and DP displays"); 34 + 31 35 #define HPD_STRING_SIZE 30 32 36 33 37 enum { ··· 411 407 412 408 edid = dp->panel->edid; 413 409 414 - dp->dp_display.psr_supported = dp->panel->psr_cap.version; 410 + dp->dp_display.psr_supported = dp->panel->psr_cap.version && psr_enabled; 415 411 416 412 dp->audio_supported = drm_detect_monitor_audio(edid); 417 413 dp_panel_handle_sink_request(dp->panel); ··· 620 616 dp->hpd_state = ST_MAINLINK_READY; 621 617 } 622 618 623 - /* enable HDP irq_hpd/replug interrupt */ 624 - if (dp->dp_display.internal_hpd) 625 - dp_catalog_hpd_config_intr(dp->catalog, 626 - DP_DP_IRQ_HPD_INT_MASK | DP_DP_HPD_REPLUG_INT_MASK, 627 - true); 628 - 629 619 drm_dbg_dp(dp->drm_dev, "After, type=%d hpd_state=%d\n", 630 620 dp->dp_display.connector_type, state); 631 621 mutex_unlock(&dp->event_mutex); ··· 657 659 drm_dbg_dp(dp->drm_dev, "Before, type=%d hpd_state=%d\n", 658 660 dp->dp_display.connector_type, state); 659 661 660 - /* disable irq_hpd/replug interrupts */ 661 - if (dp->dp_display.internal_hpd) 662 - dp_catalog_hpd_config_intr(dp->catalog, 663 - DP_DP_IRQ_HPD_INT_MASK | DP_DP_HPD_REPLUG_INT_MASK, 664 - false); 665 - 666 662 /* unplugged, no more irq_hpd handle */ 667 663 dp_del_event(dp, EV_IRQ_HPD_INT); 668 664 ··· 680 688 return 0; 681 689 } 682 690 683 - /* disable HPD plug interrupts */ 684 - if (dp->dp_display.internal_hpd) 685 - dp_catalog_hpd_config_intr(dp->catalog, DP_DP_HPD_PLUG_INT_MASK, false); 686 - 687 691 /* 688 692 * We don't need separate work for disconnect as 689 693 * connect/attention interrupts are disabled ··· 694 706 695 707 /* signal the disconnect event early to ensure proper teardown */ 696 708 dp_display_handle_plugged_change(&dp->dp_display, false); 697 - 698 - /* enable HDP plug interrupt to prepare for next plugin */ 
699 - if (dp->dp_display.internal_hpd) 700 - dp_catalog_hpd_config_intr(dp->catalog, DP_DP_HPD_PLUG_INT_MASK, true); 701 709 702 710 drm_dbg_dp(dp->drm_dev, "After, type=%d hpd_state=%d\n", 703 711 dp->dp_display.connector_type, state); ··· 1067 1083 mutex_unlock(&dp_display->event_mutex); 1068 1084 } 1069 1085 1070 - static void dp_display_config_hpd(struct dp_display_private *dp) 1071 - { 1072 - 1073 - dp_display_host_init(dp); 1074 - dp_catalog_ctrl_hpd_config(dp->catalog); 1075 - 1076 - /* Enable plug and unplug interrupts only if requested */ 1077 - if (dp->dp_display.internal_hpd) 1078 - dp_catalog_hpd_config_intr(dp->catalog, 1079 - DP_DP_HPD_PLUG_INT_MASK | 1080 - DP_DP_HPD_UNPLUG_INT_MASK, 1081 - true); 1082 - 1083 - /* Enable interrupt first time 1084 - * we are leaving dp clocks on during disconnect 1085 - * and never disable interrupt 1086 - */ 1087 - enable_irq(dp->irq); 1088 - } 1089 - 1090 1086 void dp_display_set_psr(struct msm_dp *dp_display, bool enter) 1091 1087 { 1092 1088 struct dp_display_private *dp; ··· 1141 1177 1142 1178 switch (todo->event_id) { 1143 1179 case EV_HPD_INIT_SETUP: 1144 - dp_display_config_hpd(dp_priv); 1180 + dp_display_host_init(dp_priv); 1145 1181 break; 1146 1182 case EV_HPD_PLUG_INT: 1147 1183 dp_hpd_plug_handle(dp_priv, todo->data); ··· 1247 1283 dp->irq, rc); 1248 1284 return rc; 1249 1285 } 1250 - disable_irq(dp->irq); 1251 1286 1252 1287 return 0; 1253 1288 } ··· 1358 1395 /* turn on dp ctrl/phy */ 1359 1396 dp_display_host_init(dp); 1360 1397 1361 - dp_catalog_ctrl_hpd_config(dp->catalog); 1362 - 1363 - if (dp->dp_display.internal_hpd) 1364 - dp_catalog_hpd_config_intr(dp->catalog, 1365 - DP_DP_HPD_PLUG_INT_MASK | 1366 - DP_DP_HPD_UNPLUG_INT_MASK, 1367 - true); 1398 + if (dp_display->is_edp) 1399 + dp_catalog_ctrl_hpd_enable(dp->catalog); 1368 1400 1369 1401 if (dp_catalog_link_is_connected(dp->catalog)) { 1370 1402 /* ··· 1527 1569 1528 1570 if (aux_bus && dp->is_edp) { 1529 1571 dp_display_host_init(dp_priv); 
1530 - dp_catalog_ctrl_hpd_config(dp_priv->catalog); 1572 + dp_catalog_ctrl_hpd_enable(dp_priv->catalog); 1531 1573 dp_display_host_phy_init(dp_priv); 1532 - enable_irq(dp_priv->irq); 1533 1574 1534 1575 /* 1535 1576 * The code below assumes that the panel will finish probing ··· 1570 1613 1571 1614 error: 1572 1615 if (dp->is_edp) { 1573 - disable_irq(dp_priv->irq); 1574 1616 dp_display_host_phy_exit(dp_priv); 1575 1617 dp_display_host_deinit(dp_priv); 1576 1618 } ··· 1758 1802 { 1759 1803 struct msm_dp_bridge *dp_bridge = to_dp_bridge(bridge); 1760 1804 struct msm_dp *dp_display = dp_bridge->dp_display; 1805 + struct dp_display_private *dp = container_of(dp_display, struct dp_display_private, dp_display); 1806 + 1807 + mutex_lock(&dp->event_mutex); 1808 + dp_catalog_ctrl_hpd_enable(dp->catalog); 1809 + 1810 + /* enable HDP interrupts */ 1811 + dp_catalog_hpd_config_intr(dp->catalog, DP_DP_HPD_INT_MASK, true); 1761 1812 1762 1813 dp_display->internal_hpd = true; 1814 + mutex_unlock(&dp->event_mutex); 1763 1815 } 1764 1816 1765 1817 void dp_bridge_hpd_disable(struct drm_bridge *bridge) 1766 1818 { 1767 1819 struct msm_dp_bridge *dp_bridge = to_dp_bridge(bridge); 1768 1820 struct msm_dp *dp_display = dp_bridge->dp_display; 1821 + struct dp_display_private *dp = container_of(dp_display, struct dp_display_private, dp_display); 1822 + 1823 + mutex_lock(&dp->event_mutex); 1824 + /* disable HDP interrupts */ 1825 + dp_catalog_hpd_config_intr(dp->catalog, DP_DP_HPD_INT_MASK, false); 1826 + dp_catalog_ctrl_hpd_disable(dp->catalog); 1769 1827 1770 1828 dp_display->internal_hpd = false; 1829 + mutex_unlock(&dp->event_mutex); 1771 1830 } 1772 1831 1773 1832 void dp_bridge_hpd_notify(struct drm_bridge *bridge,
+2 -2
drivers/gpu/drm/msm/msm_drv.c
··· 449 449 if (ret) 450 450 goto err_cleanup_mode_config; 451 451 452 + dma_set_max_seg_size(dev, UINT_MAX); 453 + 452 454 /* Bind all our sub-components: */ 453 455 ret = component_bind_all(dev, ddev); 454 456 if (ret) ··· 460 458 ret = drm_aperture_remove_framebuffers(false, drv); 461 459 if (ret) 462 460 goto err_msm_uninit; 463 - 464 - dma_set_max_seg_size(dev, UINT_MAX); 465 461 466 462 msm_gem_shrinker_init(ddev); 467 463
+1 -3
drivers/gpu/drm/radeon/radeon_gem.c
··· 459 459 struct radeon_device *rdev = dev->dev_private; 460 460 struct drm_radeon_gem_set_domain *args = data; 461 461 struct drm_gem_object *gobj; 462 - struct radeon_bo *robj; 463 462 int r; 464 463 465 464 /* for now if someone requests domain CPU - ··· 471 472 up_read(&rdev->exclusive_lock); 472 473 return -ENOENT; 473 474 } 474 - robj = gem_to_radeon_bo(gobj); 475 475 476 476 r = radeon_gem_set_domain(gobj, args->read_domains, args->write_domain); 477 477 478 478 drm_gem_object_put(gobj); 479 479 up_read(&rdev->exclusive_lock); 480 - r = radeon_gem_handle_lockup(robj->rdev, r); 480 + r = radeon_gem_handle_lockup(rdev, r); 481 481 return r; 482 482 } 483 483
+1
drivers/i2c/busses/i2c-designware-core.h
··· 40 40 #define DW_IC_CON_BUS_CLEAR_CTRL BIT(11) 41 41 42 42 #define DW_IC_DATA_CMD_DAT GENMASK(7, 0) 43 + #define DW_IC_DATA_CMD_FIRST_DATA_BYTE BIT(11) 43 44 44 45 /* 45 46 * Registers offset
+4
drivers/i2c/busses/i2c-designware-slave.c
··· 176 176 177 177 do { 178 178 regmap_read(dev->map, DW_IC_DATA_CMD, &tmp); 179 + if (tmp & DW_IC_DATA_CMD_FIRST_DATA_BYTE) 180 + i2c_slave_event(dev->slave, 181 + I2C_SLAVE_WRITE_REQUESTED, 182 + &val); 179 183 val = tmp; 180 184 i2c_slave_event(dev->slave, I2C_SLAVE_WRITE_RECEIVED, 181 185 &val);
+1 -1
drivers/i2c/busses/i2c-img-scb.c
··· 257 257 #define IMG_I2C_TIMEOUT (msecs_to_jiffies(1000)) 258 258 259 259 /* 260 - * Worst incs are 1 (innacurate) and 16*256 (irregular). 260 + * Worst incs are 1 (inaccurate) and 16*256 (irregular). 261 261 * So a sensible inc is the logarithmic mean: 64 (2^6), which is 262 262 * in the middle of the valid range (0-127). 263 263 */
+4 -2
drivers/i2c/busses/i2c-mchp-pci1xxxx.c
··· 1118 1118 static DEFINE_SIMPLE_DEV_PM_OPS(pci1xxxx_i2c_pm_ops, pci1xxxx_i2c_suspend, 1119 1119 pci1xxxx_i2c_resume); 1120 1120 1121 - static void pci1xxxx_i2c_shutdown(struct pci1xxxx_i2c *i2c) 1121 + static void pci1xxxx_i2c_shutdown(void *data) 1122 1122 { 1123 + struct pci1xxxx_i2c *i2c = data; 1124 + 1123 1125 pci1xxxx_i2c_config_padctrl(i2c, false); 1124 1126 pci1xxxx_i2c_configure_core_reg(i2c, false); 1125 1127 } ··· 1158 1156 init_completion(&i2c->i2c_xfer_done); 1159 1157 pci1xxxx_i2c_init(i2c); 1160 1158 1161 - ret = devm_add_action(dev, (void (*)(void *))pci1xxxx_i2c_shutdown, i2c); 1159 + ret = devm_add_action(dev, pci1xxxx_i2c_shutdown, i2c); 1162 1160 if (ret) 1163 1161 return ret; 1164 1162
+11
drivers/i2c/busses/i2c-mv64xxx.c
··· 520 520 521 521 while (readl(drv_data->reg_base + drv_data->reg_offsets.control) & 522 522 MV64XXX_I2C_REG_CONTROL_IFLG) { 523 + /* 524 + * It seems that sometime the controller updates the status 525 + * register only after it asserts IFLG in control register. 526 + * This may result in weird bugs when in atomic mode. A delay 527 + * of 100 ns before reading the status register solves this 528 + * issue. This bug does not seem to appear when using 529 + * interrupts. 530 + */ 531 + if (drv_data->atomic) 532 + ndelay(100); 533 + 523 534 status = readl(drv_data->reg_base + drv_data->reg_offsets.status); 524 535 mv64xxx_i2c_fsm(drv_data, status); 525 536 mv64xxx_i2c_do_action(drv_data);
+5 -3
drivers/i2c/busses/i2c-sprd.c
··· 576 576 struct sprd_i2c *i2c_dev = platform_get_drvdata(pdev); 577 577 int ret; 578 578 579 - ret = pm_runtime_resume_and_get(i2c_dev->dev); 579 + ret = pm_runtime_get_sync(i2c_dev->dev); 580 580 if (ret < 0) 581 - return ret; 581 + dev_err(&pdev->dev, "Failed to resume device (%pe)\n", ERR_PTR(ret)); 582 582 583 583 i2c_del_adapter(&i2c_dev->adap); 584 - clk_disable_unprepare(i2c_dev->clk); 584 + 585 + if (ret >= 0) 586 + clk_disable_unprepare(i2c_dev->clk); 585 587 586 588 pm_runtime_put_noidle(i2c_dev->dev); 587 589 pm_runtime_disable(i2c_dev->dev);
+2 -2
drivers/infiniband/core/cma.c
··· 3295 3295 route->path_rec->traffic_class = tos; 3296 3296 route->path_rec->mtu = iboe_get_mtu(ndev->mtu); 3297 3297 route->path_rec->rate_selector = IB_SA_EQ; 3298 - route->path_rec->rate = iboe_get_rate(ndev); 3298 + route->path_rec->rate = IB_RATE_PORT_CURRENT; 3299 3299 dev_put(ndev); 3300 3300 route->path_rec->packet_life_time_selector = IB_SA_EQ; 3301 3301 /* In case ACK timeout is set, use this value to calculate ··· 4964 4964 if (!ndev) 4965 4965 return -ENODEV; 4966 4966 4967 - ib.rec.rate = iboe_get_rate(ndev); 4967 + ib.rec.rate = IB_RATE_PORT_CURRENT; 4968 4968 ib.rec.hop_limit = 1; 4969 4969 ib.rec.mtu = iboe_get_mtu(ndev->mtu); 4970 4970
+6 -1
drivers/infiniband/core/uverbs_cmd.c
··· 1850 1850 attr->path_mtu = cmd->base.path_mtu; 1851 1851 if (cmd->base.attr_mask & IB_QP_PATH_MIG_STATE) 1852 1852 attr->path_mig_state = cmd->base.path_mig_state; 1853 - if (cmd->base.attr_mask & IB_QP_QKEY) 1853 + if (cmd->base.attr_mask & IB_QP_QKEY) { 1854 + if (cmd->base.qkey & IB_QP_SET_QKEY && !capable(CAP_NET_RAW)) { 1855 + ret = -EPERM; 1856 + goto release_qp; 1857 + } 1854 1858 attr->qkey = cmd->base.qkey; 1859 + } 1855 1860 if (cmd->base.attr_mask & IB_QP_RQ_PSN) 1856 1861 attr->rq_psn = cmd->base.rq_psn; 1857 1862 if (cmd->base.attr_mask & IB_QP_SQ_PSN)
+5 -7
drivers/infiniband/core/uverbs_main.c
··· 222 222 spin_lock_irq(&ev_queue->lock); 223 223 224 224 while (list_empty(&ev_queue->event_list)) { 225 - spin_unlock_irq(&ev_queue->lock); 225 + if (ev_queue->is_closed) { 226 + spin_unlock_irq(&ev_queue->lock); 227 + return -EIO; 228 + } 226 229 230 + spin_unlock_irq(&ev_queue->lock); 227 231 if (filp->f_flags & O_NONBLOCK) 228 232 return -EAGAIN; 229 233 ··· 237 233 return -ERESTARTSYS; 238 234 239 235 spin_lock_irq(&ev_queue->lock); 240 - 241 - /* If device was disassociated and no event exists set an error */ 242 - if (list_empty(&ev_queue->event_list) && ev_queue->is_closed) { 243 - spin_unlock_irq(&ev_queue->lock); 244 - return -EIO; 245 - } 246 236 } 247 237 248 238 event = list_entry(ev_queue->event_list.next, struct ib_uverbs_event, list);
-2
drivers/infiniband/hw/bnxt_re/bnxt_re.h
··· 135 135 136 136 struct delayed_work worker; 137 137 u8 cur_prio_map; 138 - u16 active_speed; 139 - u8 active_width; 140 138 141 139 /* FP Notification Queue (CQ & SRQ) */ 142 140 struct tasklet_struct nq_task;
+4 -3
drivers/infiniband/hw/bnxt_re/ib_verbs.c
··· 199 199 { 200 200 struct bnxt_re_dev *rdev = to_bnxt_re_dev(ibdev, ibdev); 201 201 struct bnxt_qplib_dev_attr *dev_attr = &rdev->dev_attr; 202 + int rc; 202 203 203 204 memset(port_attr, 0, sizeof(*port_attr)); 204 205 ··· 229 228 port_attr->sm_sl = 0; 230 229 port_attr->subnet_timeout = 0; 231 230 port_attr->init_type_reply = 0; 232 - port_attr->active_speed = rdev->active_speed; 233 - port_attr->active_width = rdev->active_width; 231 + rc = ib_get_eth_speed(&rdev->ibdev, port_num, &port_attr->active_speed, 232 + &port_attr->active_width); 234 233 235 - return 0; 234 + return rc; 236 235 } 237 236 238 237 int bnxt_re_get_port_immutable(struct ib_device *ibdev, u32 port_num,
-2
drivers/infiniband/hw/bnxt_re/main.c
··· 1077 1077 return rc; 1078 1078 } 1079 1079 dev_info(rdev_to_dev(rdev), "Device registered with IB successfully"); 1080 - ib_get_eth_speed(&rdev->ibdev, 1, &rdev->active_speed, 1081 - &rdev->active_width); 1082 1080 set_bit(BNXT_RE_FLAG_ISSUE_ROCE_STATS, &rdev->flags); 1083 1081 1084 1082 event = netif_running(rdev->netdev) && netif_carrier_ok(rdev->netdev) ?
+62 -27
drivers/infiniband/hw/mlx5/counters.c
··· 209 209 !vport_qcounters_supported(dev)) || !port_num) 210 210 return &dev->port[0].cnts; 211 211 212 - return &dev->port[port_num - 1].cnts; 212 + return is_mdev_switchdev_mode(dev->mdev) ? 213 + &dev->port[1].cnts : &dev->port[port_num - 1].cnts; 213 214 } 214 215 215 216 /** ··· 263 262 mlx5_ib_alloc_hw_port_stats(struct ib_device *ibdev, u32 port_num) 264 263 { 265 264 struct mlx5_ib_dev *dev = to_mdev(ibdev); 266 - const struct mlx5_ib_counters *cnts = &dev->port[port_num - 1].cnts; 265 + const struct mlx5_ib_counters *cnts = get_counters(dev, port_num); 267 266 268 267 return do_alloc_stats(cnts); 269 268 } ··· 330 329 { 331 330 u32 out[MLX5_ST_SZ_DW(query_q_counter_out)] = {}; 332 331 u32 in[MLX5_ST_SZ_DW(query_q_counter_in)] = {}; 332 + struct mlx5_core_dev *mdev; 333 333 __be32 val; 334 334 int ret, i; 335 335 ··· 338 336 dev->port[port_num].rep->vport == MLX5_VPORT_UPLINK) 339 337 return 0; 340 338 339 + mdev = mlx5_eswitch_get_core_dev(dev->port[port_num].rep->esw); 340 + if (!mdev) 341 + return -EOPNOTSUPP; 342 + 341 343 MLX5_SET(query_q_counter_in, in, opcode, MLX5_CMD_OP_QUERY_Q_COUNTER); 342 344 MLX5_SET(query_q_counter_in, in, other_vport, 1); 343 345 MLX5_SET(query_q_counter_in, in, vport_number, 344 346 dev->port[port_num].rep->vport); 345 347 MLX5_SET(query_q_counter_in, in, aggregate, 1); 346 - ret = mlx5_cmd_exec_inout(dev->mdev, query_q_counter, in, out); 348 + ret = mlx5_cmd_exec_inout(mdev, query_q_counter, in, out); 347 349 if (ret) 348 350 return ret; 349 351 ··· 581 575 bool is_vport = is_mdev_switchdev_mode(dev->mdev) && 582 576 port_num != MLX5_VPORT_PF; 583 577 const struct mlx5_ib_counter *names; 584 - int j = 0, i; 578 + int j = 0, i, size; 585 579 586 580 names = is_vport ? vport_basic_q_cnts : basic_q_cnts; 587 - for (i = 0; i < ARRAY_SIZE(basic_q_cnts); i++, j++) { 581 + size = is_vport ? 
ARRAY_SIZE(vport_basic_q_cnts) : 582 + ARRAY_SIZE(basic_q_cnts); 583 + for (i = 0; i < size; i++, j++) { 588 584 descs[j].name = names[i].name; 589 - offsets[j] = basic_q_cnts[i].offset; 585 + offsets[j] = names[i].offset; 590 586 } 591 587 592 588 names = is_vport ? vport_out_of_seq_q_cnts : out_of_seq_q_cnts; 589 + size = is_vport ? ARRAY_SIZE(vport_out_of_seq_q_cnts) : 590 + ARRAY_SIZE(out_of_seq_q_cnts); 593 591 if (MLX5_CAP_GEN(dev->mdev, out_of_seq_cnt)) { 594 - for (i = 0; i < ARRAY_SIZE(out_of_seq_q_cnts); i++, j++) { 592 + for (i = 0; i < size; i++, j++) { 595 593 descs[j].name = names[i].name; 596 - offsets[j] = out_of_seq_q_cnts[i].offset; 594 + offsets[j] = names[i].offset; 597 595 } 598 596 } 599 597 600 598 names = is_vport ? vport_retrans_q_cnts : retrans_q_cnts; 599 + size = is_vport ? ARRAY_SIZE(vport_retrans_q_cnts) : 600 + ARRAY_SIZE(retrans_q_cnts); 601 601 if (MLX5_CAP_GEN(dev->mdev, retransmission_q_counters)) { 602 - for (i = 0; i < ARRAY_SIZE(retrans_q_cnts); i++, j++) { 602 + for (i = 0; i < size; i++, j++) { 603 603 descs[j].name = names[i].name; 604 - offsets[j] = retrans_q_cnts[i].offset; 604 + offsets[j] = names[i].offset; 605 605 } 606 606 } 607 607 608 608 names = is_vport ? vport_extended_err_cnts : extended_err_cnts; 609 + size = is_vport ? ARRAY_SIZE(vport_extended_err_cnts) : 610 + ARRAY_SIZE(extended_err_cnts); 609 611 if (MLX5_CAP_GEN(dev->mdev, enhanced_error_q_counters)) { 610 - for (i = 0; i < ARRAY_SIZE(extended_err_cnts); i++, j++) { 612 + for (i = 0; i < size; i++, j++) { 611 613 descs[j].name = names[i].name; 612 - offsets[j] = extended_err_cnts[i].offset; 614 + offsets[j] = names[i].offset; 613 615 } 614 616 } 615 617 616 618 names = is_vport ? vport_roce_accl_cnts : roce_accl_cnts; 619 + size = is_vport ? 
ARRAY_SIZE(vport_roce_accl_cnts) : 620 + ARRAY_SIZE(roce_accl_cnts); 617 621 if (MLX5_CAP_GEN(dev->mdev, roce_accl)) { 618 - for (i = 0; i < ARRAY_SIZE(roce_accl_cnts); i++, j++) { 622 + for (i = 0; i < size; i++, j++) { 619 623 descs[j].name = names[i].name; 620 - offsets[j] = roce_accl_cnts[i].offset; 624 + offsets[j] = names[i].offset; 621 625 } 622 626 } 623 627 ··· 677 661 static int __mlx5_ib_alloc_counters(struct mlx5_ib_dev *dev, 678 662 struct mlx5_ib_counters *cnts, u32 port_num) 679 663 { 680 - u32 num_counters, num_op_counters = 0; 664 + bool is_vport = is_mdev_switchdev_mode(dev->mdev) && 665 + port_num != MLX5_VPORT_PF; 666 + u32 num_counters, num_op_counters = 0, size; 681 667 682 - num_counters = ARRAY_SIZE(basic_q_cnts); 668 + size = is_vport ? ARRAY_SIZE(vport_basic_q_cnts) : 669 + ARRAY_SIZE(basic_q_cnts); 670 + num_counters = size; 683 671 672 + size = is_vport ? ARRAY_SIZE(vport_out_of_seq_q_cnts) : 673 + ARRAY_SIZE(out_of_seq_q_cnts); 684 674 if (MLX5_CAP_GEN(dev->mdev, out_of_seq_cnt)) 685 - num_counters += ARRAY_SIZE(out_of_seq_q_cnts); 675 + num_counters += size; 686 676 677 + size = is_vport ? ARRAY_SIZE(vport_retrans_q_cnts) : 678 + ARRAY_SIZE(retrans_q_cnts); 687 679 if (MLX5_CAP_GEN(dev->mdev, retransmission_q_counters)) 688 - num_counters += ARRAY_SIZE(retrans_q_cnts); 680 + num_counters += size; 689 681 682 + size = is_vport ? ARRAY_SIZE(vport_extended_err_cnts) : 683 + ARRAY_SIZE(extended_err_cnts); 690 684 if (MLX5_CAP_GEN(dev->mdev, enhanced_error_q_counters)) 691 - num_counters += ARRAY_SIZE(extended_err_cnts); 685 + num_counters += size; 692 686 687 + size = is_vport ? 
ARRAY_SIZE(vport_roce_accl_cnts) : 688 + ARRAY_SIZE(roce_accl_cnts); 693 689 if (MLX5_CAP_GEN(dev->mdev, roce_accl)) 694 - num_counters += ARRAY_SIZE(roce_accl_cnts); 690 + num_counters += size; 695 691 696 692 cnts->num_q_counters = num_counters; 697 693 698 - if (is_mdev_switchdev_mode(dev->mdev) && port_num != MLX5_VPORT_PF) 694 + if (is_vport) 699 695 goto skip_non_qcounters; 700 696 701 697 if (MLX5_CAP_GEN(dev->mdev, cc_query_allowed)) { ··· 753 725 static void mlx5_ib_dealloc_counters(struct mlx5_ib_dev *dev) 754 726 { 755 727 u32 in[MLX5_ST_SZ_DW(dealloc_q_counter_in)] = {}; 756 - int num_cnt_ports; 728 + int num_cnt_ports = dev->num_ports; 757 729 int i, j; 758 730 759 - num_cnt_ports = (!is_mdev_switchdev_mode(dev->mdev) || 760 - vport_qcounters_supported(dev)) ? dev->num_ports : 1; 731 + if (is_mdev_switchdev_mode(dev->mdev)) 732 + num_cnt_ports = min(2, num_cnt_ports); 761 733 762 734 MLX5_SET(dealloc_q_counter_in, in, opcode, 763 735 MLX5_CMD_OP_DEALLOC_Q_COUNTER); ··· 789 761 { 790 762 u32 out[MLX5_ST_SZ_DW(alloc_q_counter_out)] = {}; 791 763 u32 in[MLX5_ST_SZ_DW(alloc_q_counter_in)] = {}; 792 - int num_cnt_ports; 764 + int num_cnt_ports = dev->num_ports; 793 765 int err = 0; 794 766 int i; 795 767 bool is_shared; 796 768 797 769 MLX5_SET(alloc_q_counter_in, in, opcode, MLX5_CMD_OP_ALLOC_Q_COUNTER); 798 770 is_shared = MLX5_CAP_GEN(dev->mdev, log_max_uctx) != 0; 799 - num_cnt_ports = (!is_mdev_switchdev_mode(dev->mdev) || 800 - vport_qcounters_supported(dev)) ? dev->num_ports : 1; 771 + 772 + /* 773 + * In switchdev we need to allocate two ports, one that is used for 774 + * the device Q_counters and it is essentially the real Q_counters of 775 + * this device, while the other is used as a helper for PF to be able to 776 + * query all other vports. 
777 + */ 778 + if (is_mdev_switchdev_mode(dev->mdev)) 779 + num_cnt_ports = min(2, num_cnt_ports); 801 780 802 781 for (i = 0; i < num_cnt_ports; i++) { 803 782 err = __mlx5_ib_alloc_counters(dev, &dev->port[i].cnts, i);
+269 -7
drivers/infiniband/hw/mlx5/fs.c
··· 695 695 struct mlx5_flow_table_attr ft_attr = {}; 696 696 struct mlx5_flow_table *ft; 697 697 698 - if (mlx5_ib_shared_ft_allowed(&dev->ib_dev)) 699 - ft_attr.uid = MLX5_SHARED_RESOURCE_UID; 700 698 ft_attr.prio = priority; 701 699 ft_attr.max_fte = num_entries; 702 700 ft_attr.flags = flags; ··· 2023 2025 return 0; 2024 2026 } 2025 2027 2028 + static int steering_anchor_create_ft(struct mlx5_ib_dev *dev, 2029 + struct mlx5_ib_flow_prio *ft_prio, 2030 + enum mlx5_flow_namespace_type ns_type) 2031 + { 2032 + struct mlx5_flow_table_attr ft_attr = {}; 2033 + struct mlx5_flow_namespace *ns; 2034 + struct mlx5_flow_table *ft; 2035 + 2036 + if (ft_prio->anchor.ft) 2037 + return 0; 2038 + 2039 + ns = mlx5_get_flow_namespace(dev->mdev, ns_type); 2040 + if (!ns) 2041 + return -EOPNOTSUPP; 2042 + 2043 + ft_attr.flags = MLX5_FLOW_TABLE_UNMANAGED; 2044 + ft_attr.uid = MLX5_SHARED_RESOURCE_UID; 2045 + ft_attr.prio = 0; 2046 + ft_attr.max_fte = 2; 2047 + ft_attr.level = 1; 2048 + 2049 + ft = mlx5_create_flow_table(ns, &ft_attr); 2050 + if (IS_ERR(ft)) 2051 + return PTR_ERR(ft); 2052 + 2053 + ft_prio->anchor.ft = ft; 2054 + 2055 + return 0; 2056 + } 2057 + 2058 + static void steering_anchor_destroy_ft(struct mlx5_ib_flow_prio *ft_prio) 2059 + { 2060 + if (ft_prio->anchor.ft) { 2061 + mlx5_destroy_flow_table(ft_prio->anchor.ft); 2062 + ft_prio->anchor.ft = NULL; 2063 + } 2064 + } 2065 + 2066 + static int 2067 + steering_anchor_create_fg_drop(struct mlx5_ib_flow_prio *ft_prio) 2068 + { 2069 + int inlen = MLX5_ST_SZ_BYTES(create_flow_group_in); 2070 + struct mlx5_flow_group *fg; 2071 + void *flow_group_in; 2072 + int err = 0; 2073 + 2074 + if (ft_prio->anchor.fg_drop) 2075 + return 0; 2076 + 2077 + flow_group_in = kvzalloc(inlen, GFP_KERNEL); 2078 + if (!flow_group_in) 2079 + return -ENOMEM; 2080 + 2081 + MLX5_SET(create_flow_group_in, flow_group_in, start_flow_index, 1); 2082 + MLX5_SET(create_flow_group_in, flow_group_in, end_flow_index, 1); 2083 + 2084 + fg = 
mlx5_create_flow_group(ft_prio->anchor.ft, flow_group_in); 2085 + if (IS_ERR(fg)) { 2086 + err = PTR_ERR(fg); 2087 + goto out; 2088 + } 2089 + 2090 + ft_prio->anchor.fg_drop = fg; 2091 + 2092 + out: 2093 + kvfree(flow_group_in); 2094 + 2095 + return err; 2096 + } 2097 + 2098 + static void 2099 + steering_anchor_destroy_fg_drop(struct mlx5_ib_flow_prio *ft_prio) 2100 + { 2101 + if (ft_prio->anchor.fg_drop) { 2102 + mlx5_destroy_flow_group(ft_prio->anchor.fg_drop); 2103 + ft_prio->anchor.fg_drop = NULL; 2104 + } 2105 + } 2106 + 2107 + static int 2108 + steering_anchor_create_fg_goto_table(struct mlx5_ib_flow_prio *ft_prio) 2109 + { 2110 + int inlen = MLX5_ST_SZ_BYTES(create_flow_group_in); 2111 + struct mlx5_flow_group *fg; 2112 + void *flow_group_in; 2113 + int err = 0; 2114 + 2115 + if (ft_prio->anchor.fg_goto_table) 2116 + return 0; 2117 + 2118 + flow_group_in = kvzalloc(inlen, GFP_KERNEL); 2119 + if (!flow_group_in) 2120 + return -ENOMEM; 2121 + 2122 + fg = mlx5_create_flow_group(ft_prio->anchor.ft, flow_group_in); 2123 + if (IS_ERR(fg)) { 2124 + err = PTR_ERR(fg); 2125 + goto out; 2126 + } 2127 + ft_prio->anchor.fg_goto_table = fg; 2128 + 2129 + out: 2130 + kvfree(flow_group_in); 2131 + 2132 + return err; 2133 + } 2134 + 2135 + static void 2136 + steering_anchor_destroy_fg_goto_table(struct mlx5_ib_flow_prio *ft_prio) 2137 + { 2138 + if (ft_prio->anchor.fg_goto_table) { 2139 + mlx5_destroy_flow_group(ft_prio->anchor.fg_goto_table); 2140 + ft_prio->anchor.fg_goto_table = NULL; 2141 + } 2142 + } 2143 + 2144 + static int 2145 + steering_anchor_create_rule_drop(struct mlx5_ib_flow_prio *ft_prio) 2146 + { 2147 + struct mlx5_flow_act flow_act = {}; 2148 + struct mlx5_flow_handle *handle; 2149 + 2150 + if (ft_prio->anchor.rule_drop) 2151 + return 0; 2152 + 2153 + flow_act.fg = ft_prio->anchor.fg_drop; 2154 + flow_act.action = MLX5_FLOW_CONTEXT_ACTION_DROP; 2155 + 2156 + handle = mlx5_add_flow_rules(ft_prio->anchor.ft, NULL, &flow_act, 2157 + NULL, 0); 2158 + if 
(IS_ERR(handle)) 2159 + return PTR_ERR(handle); 2160 + 2161 + ft_prio->anchor.rule_drop = handle; 2162 + 2163 + return 0; 2164 + } 2165 + 2166 + static void steering_anchor_destroy_rule_drop(struct mlx5_ib_flow_prio *ft_prio) 2167 + { 2168 + if (ft_prio->anchor.rule_drop) { 2169 + mlx5_del_flow_rules(ft_prio->anchor.rule_drop); 2170 + ft_prio->anchor.rule_drop = NULL; 2171 + } 2172 + } 2173 + 2174 + static int 2175 + steering_anchor_create_rule_goto_table(struct mlx5_ib_flow_prio *ft_prio) 2176 + { 2177 + struct mlx5_flow_destination dest = {}; 2178 + struct mlx5_flow_act flow_act = {}; 2179 + struct mlx5_flow_handle *handle; 2180 + 2181 + if (ft_prio->anchor.rule_goto_table) 2182 + return 0; 2183 + 2184 + flow_act.action = MLX5_FLOW_CONTEXT_ACTION_FWD_DEST; 2185 + flow_act.flags |= FLOW_ACT_IGNORE_FLOW_LEVEL; 2186 + flow_act.fg = ft_prio->anchor.fg_goto_table; 2187 + 2188 + dest.type = MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE; 2189 + dest.ft = ft_prio->flow_table; 2190 + 2191 + handle = mlx5_add_flow_rules(ft_prio->anchor.ft, NULL, &flow_act, 2192 + &dest, 1); 2193 + if (IS_ERR(handle)) 2194 + return PTR_ERR(handle); 2195 + 2196 + ft_prio->anchor.rule_goto_table = handle; 2197 + 2198 + return 0; 2199 + } 2200 + 2201 + static void 2202 + steering_anchor_destroy_rule_goto_table(struct mlx5_ib_flow_prio *ft_prio) 2203 + { 2204 + if (ft_prio->anchor.rule_goto_table) { 2205 + mlx5_del_flow_rules(ft_prio->anchor.rule_goto_table); 2206 + ft_prio->anchor.rule_goto_table = NULL; 2207 + } 2208 + } 2209 + 2210 + static int steering_anchor_create_res(struct mlx5_ib_dev *dev, 2211 + struct mlx5_ib_flow_prio *ft_prio, 2212 + enum mlx5_flow_namespace_type ns_type) 2213 + { 2214 + int err; 2215 + 2216 + err = steering_anchor_create_ft(dev, ft_prio, ns_type); 2217 + if (err) 2218 + return err; 2219 + 2220 + err = steering_anchor_create_fg_drop(ft_prio); 2221 + if (err) 2222 + goto destroy_ft; 2223 + 2224 + err = steering_anchor_create_fg_goto_table(ft_prio); 2225 + if (err) 2226 + 
goto destroy_fg_drop; 2227 + 2228 + err = steering_anchor_create_rule_drop(ft_prio); 2229 + if (err) 2230 + goto destroy_fg_goto_table; 2231 + 2232 + err = steering_anchor_create_rule_goto_table(ft_prio); 2233 + if (err) 2234 + goto destroy_rule_drop; 2235 + 2236 + return 0; 2237 + 2238 + destroy_rule_drop: 2239 + steering_anchor_destroy_rule_drop(ft_prio); 2240 + destroy_fg_goto_table: 2241 + steering_anchor_destroy_fg_goto_table(ft_prio); 2242 + destroy_fg_drop: 2243 + steering_anchor_destroy_fg_drop(ft_prio); 2244 + destroy_ft: 2245 + steering_anchor_destroy_ft(ft_prio); 2246 + 2247 + return err; 2248 + } 2249 + 2250 + static void mlx5_steering_anchor_destroy_res(struct mlx5_ib_flow_prio *ft_prio) 2251 + { 2252 + steering_anchor_destroy_rule_goto_table(ft_prio); 2253 + steering_anchor_destroy_rule_drop(ft_prio); 2254 + steering_anchor_destroy_fg_goto_table(ft_prio); 2255 + steering_anchor_destroy_fg_drop(ft_prio); 2256 + steering_anchor_destroy_ft(ft_prio); 2257 + } 2258 + 2026 2259 static int steering_anchor_cleanup(struct ib_uobject *uobject, 2027 2260 enum rdma_remove_reason why, 2028 2261 struct uverbs_attr_bundle *attrs) ··· 2264 2035 return -EBUSY; 2265 2036 2266 2037 mutex_lock(&obj->dev->flow_db->lock); 2038 + if (!--obj->ft_prio->anchor.rule_goto_table_ref) 2039 + steering_anchor_destroy_rule_goto_table(obj->ft_prio); 2040 + 2267 2041 put_flow_table(obj->dev, obj->ft_prio, true); 2268 2042 mutex_unlock(&obj->dev->flow_db->lock); 2269 2043 2270 2044 kfree(obj); 2271 2045 return 0; 2046 + } 2047 + 2048 + static void fs_cleanup_anchor(struct mlx5_ib_flow_prio *prio, 2049 + int count) 2050 + { 2051 + while (count--) 2052 + mlx5_steering_anchor_destroy_res(&prio[count]); 2053 + } 2054 + 2055 + void mlx5_ib_fs_cleanup_anchor(struct mlx5_ib_dev *dev) 2056 + { 2057 + fs_cleanup_anchor(dev->flow_db->prios, MLX5_IB_NUM_FLOW_FT); 2058 + fs_cleanup_anchor(dev->flow_db->egress_prios, MLX5_IB_NUM_FLOW_FT); 2059 + fs_cleanup_anchor(dev->flow_db->sniffer, 
MLX5_IB_NUM_SNIFFER_FTS); 2060 + fs_cleanup_anchor(dev->flow_db->egress, MLX5_IB_NUM_EGRESS_FTS); 2061 + fs_cleanup_anchor(dev->flow_db->fdb, MLX5_IB_NUM_FDB_FTS); 2062 + fs_cleanup_anchor(dev->flow_db->rdma_rx, MLX5_IB_NUM_FLOW_FT); 2063 + fs_cleanup_anchor(dev->flow_db->rdma_tx, MLX5_IB_NUM_FLOW_FT); 2272 2064 } 2273 2065 2274 2066 static int mlx5_ib_matcher_ns(struct uverbs_attr_bundle *attrs, ··· 2432 2182 return -ENOMEM; 2433 2183 2434 2184 mutex_lock(&dev->flow_db->lock); 2185 + 2435 2186 ft_prio = _get_flow_table(dev, priority, ns_type, 0); 2436 2187 if (IS_ERR(ft_prio)) { 2437 - mutex_unlock(&dev->flow_db->lock); 2438 2188 err = PTR_ERR(ft_prio); 2439 2189 goto free_obj; 2440 2190 } 2441 2191 2442 2192 ft_prio->refcount++; 2443 - ft_id = mlx5_flow_table_id(ft_prio->flow_table); 2444 - mutex_unlock(&dev->flow_db->lock); 2193 + 2194 + if (!ft_prio->anchor.rule_goto_table_ref) { 2195 + err = steering_anchor_create_res(dev, ft_prio, ns_type); 2196 + if (err) 2197 + goto put_flow_table; 2198 + } 2199 + 2200 + ft_prio->anchor.rule_goto_table_ref++; 2201 + 2202 + ft_id = mlx5_flow_table_id(ft_prio->anchor.ft); 2445 2203 2446 2204 err = uverbs_copy_to(attrs, MLX5_IB_ATTR_STEERING_ANCHOR_FT_ID, 2447 2205 &ft_id, sizeof(ft_id)); 2448 2206 if (err) 2449 - goto put_flow_table; 2207 + goto destroy_res; 2208 + 2209 + mutex_unlock(&dev->flow_db->lock); 2450 2210 2451 2211 uobj->object = obj; 2452 2212 obj->dev = dev; ··· 2465 2205 2466 2206 return 0; 2467 2207 2208 + destroy_res: 2209 + --ft_prio->anchor.rule_goto_table_ref; 2210 + mlx5_steering_anchor_destroy_res(ft_prio); 2468 2211 put_flow_table: 2469 - mutex_lock(&dev->flow_db->lock); 2470 2212 put_flow_table(dev, ft_prio, true); 2471 2213 mutex_unlock(&dev->flow_db->lock); 2472 2214 free_obj:
+16
drivers/infiniband/hw/mlx5/fs.h
··· 10 10 11 11 #if IS_ENABLED(CONFIG_INFINIBAND_USER_ACCESS) 12 12 int mlx5_ib_fs_init(struct mlx5_ib_dev *dev); 13 + void mlx5_ib_fs_cleanup_anchor(struct mlx5_ib_dev *dev); 13 14 #else 14 15 static inline int mlx5_ib_fs_init(struct mlx5_ib_dev *dev) 15 16 { ··· 22 21 mutex_init(&dev->flow_db->lock); 23 22 return 0; 24 23 } 24 + 25 + inline void mlx5_ib_fs_cleanup_anchor(struct mlx5_ib_dev *dev) {} 25 26 #endif 27 + 26 28 static inline void mlx5_ib_fs_cleanup(struct mlx5_ib_dev *dev) 27 29 { 30 + /* When a steering anchor is created, a special flow table is also 31 + * created for the user to reference. Since the user can reference it, 32 + * the kernel cannot trust that when the user destroys the steering 33 + * anchor, they no longer reference the flow table. 34 + * 35 + * To address this issue, when a user destroys a steering anchor, only 36 + * the flow steering rule in the table is destroyed, but the table 37 + * itself is kept to deal with the above scenario. The remaining 38 + * resources are only removed when the RDMA device is destroyed, which 39 + * is a safe assumption that all references are gone. 40 + */ 41 + mlx5_ib_fs_cleanup_anchor(dev); 28 42 kfree(dev->flow_db); 29 43 } 30 44 #endif /* _MLX5_IB_FS_H */
+3
drivers/infiniband/hw/mlx5/main.c
··· 4275 4275 STAGE_CREATE(MLX5_IB_STAGE_POST_IB_REG_UMR, 4276 4276 mlx5_ib_stage_post_ib_reg_umr_init, 4277 4277 NULL), 4278 + STAGE_CREATE(MLX5_IB_STAGE_DELAY_DROP, 4279 + mlx5_ib_stage_delay_drop_init, 4280 + mlx5_ib_stage_delay_drop_cleanup), 4278 4281 STAGE_CREATE(MLX5_IB_STAGE_RESTRACK, 4279 4282 mlx5_ib_restrack_init, 4280 4283 NULL),
+14
drivers/infiniband/hw/mlx5/mlx5_ib.h
··· 237 237 #define MLX5_IB_NUM_SNIFFER_FTS 2 238 238 #define MLX5_IB_NUM_EGRESS_FTS 1 239 239 #define MLX5_IB_NUM_FDB_FTS MLX5_BY_PASS_NUM_REGULAR_PRIOS 240 + 241 + struct mlx5_ib_anchor { 242 + struct mlx5_flow_table *ft; 243 + struct mlx5_flow_group *fg_goto_table; 244 + struct mlx5_flow_group *fg_drop; 245 + struct mlx5_flow_handle *rule_goto_table; 246 + struct mlx5_flow_handle *rule_drop; 247 + unsigned int rule_goto_table_ref; 248 + }; 249 + 240 250 struct mlx5_ib_flow_prio { 241 251 struct mlx5_flow_table *flow_table; 252 + struct mlx5_ib_anchor anchor; 242 253 unsigned int refcount; 243 254 }; 244 255 ··· 1596 1585 if (dev->lag_active && 1597 1586 mlx5_lag_mode_is_hash(dev->mdev) && 1598 1587 MLX5_CAP_PORT_SELECTION(dev->mdev, port_select_flow_table_bypass)) 1588 + return 0; 1589 + 1590 + if (mlx5_lag_is_lacp_owner(dev->mdev) && !dev->lag_active) 1599 1591 return 0; 1600 1592 1601 1593 return dev->lag_active ||
+3
drivers/infiniband/hw/mlx5/qp.c
··· 1237 1237 1238 1238 MLX5_SET(create_tis_in, in, uid, to_mpd(pd)->uid); 1239 1239 MLX5_SET(tisc, tisc, transport_domain, tdn); 1240 + if (!mlx5_ib_lag_should_assign_affinity(dev) && 1241 + mlx5_lag_is_lacp_owner(dev->mdev)) 1242 + MLX5_SET(tisc, tisc, strict_lag_tx_port_affinity, 1); 1240 1243 if (qp->flags & IB_QP_CREATE_SOURCE_QPN) 1241 1244 MLX5_SET(tisc, tisc, underlay_qpn, qp->underlay_qpn); 1242 1245
+2 -2
drivers/infiniband/sw/rxe/rxe_cq.c
··· 113 113 114 114 queue_advance_producer(cq->queue, QUEUE_TYPE_TO_CLIENT); 115 115 116 - spin_unlock_irqrestore(&cq->cq_lock, flags); 117 - 118 116 if ((cq->notify == IB_CQ_NEXT_COMP) || 119 117 (cq->notify == IB_CQ_SOLICITED && solicited)) { 120 118 cq->notify = 0; 121 119 122 120 cq->ibcq.comp_handler(&cq->ibcq, cq->ibcq.cq_context); 123 121 } 122 + 123 + spin_unlock_irqrestore(&cq->cq_lock, flags); 124 124 125 125 return 0; 126 126 }
+6
drivers/infiniband/sw/rxe/rxe_net.c
··· 159 159 pkt->mask = RXE_GRH_MASK; 160 160 pkt->paylen = be16_to_cpu(udph->len) - sizeof(*udph); 161 161 162 + /* remove udp header */ 163 + skb_pull(skb, sizeof(struct udphdr)); 164 + 162 165 rxe_rcv(skb); 163 166 164 167 return 0; ··· 403 400 kfree_skb(skb); 404 401 return -EIO; 405 402 } 403 + 404 + /* remove udp header */ 405 + skb_pull(skb, sizeof(struct udphdr)); 406 406 407 407 rxe_rcv(skb); 408 408
+3 -4
drivers/infiniband/sw/rxe/rxe_qp.c
··· 176 176 spin_lock_init(&qp->rq.producer_lock); 177 177 spin_lock_init(&qp->rq.consumer_lock); 178 178 179 + skb_queue_head_init(&qp->req_pkts); 180 + skb_queue_head_init(&qp->resp_pkts); 181 + 179 182 atomic_set(&qp->ssn, 0); 180 183 atomic_set(&qp->skb_out, 0); 181 184 } ··· 237 234 qp->req.opcode = -1; 238 235 qp->comp.opcode = -1; 239 236 240 - skb_queue_head_init(&qp->req_pkts); 241 - 242 237 rxe_init_task(&qp->req.task, qp, rxe_requester); 243 238 rxe_init_task(&qp->comp.task, qp, rxe_completer); 244 239 ··· 279 278 return err; 280 279 } 281 280 } 282 - 283 - skb_queue_head_init(&qp->resp_pkts); 284 281 285 282 rxe_init_task(&qp->resp.task, qp, rxe_responder); 286 283
+2 -1
drivers/infiniband/sw/rxe/rxe_resp.c
··· 489 489 if (mw->access & IB_ZERO_BASED) 490 490 qp->resp.offset = mw->addr; 491 491 492 - rxe_put(mw); 493 492 rxe_get(mr); 493 + rxe_put(mw); 494 + mw = NULL; 494 495 } else { 495 496 mr = lookup_mr(qp->pd, access, rkey, RXE_LOOKUP_REMOTE); 496 497 if (!mr) {
+12 -4
drivers/infiniband/ulp/isert/ib_isert.c
··· 657 657 isert_connect_error(struct rdma_cm_id *cma_id) 658 658 { 659 659 struct isert_conn *isert_conn = cma_id->qp->qp_context; 660 + struct isert_np *isert_np = cma_id->context; 660 661 661 662 ib_drain_qp(isert_conn->qp); 663 + 664 + mutex_lock(&isert_np->mutex); 662 665 list_del_init(&isert_conn->node); 666 + mutex_unlock(&isert_np->mutex); 663 667 isert_conn->cm_id = NULL; 664 668 isert_put_conn(isert_conn); 665 669 ··· 2435 2431 { 2436 2432 struct isert_np *isert_np = np->np_context; 2437 2433 struct isert_conn *isert_conn, *n; 2434 + LIST_HEAD(drop_conn_list); 2438 2435 2439 2436 if (isert_np->cm_id) 2440 2437 rdma_destroy_id(isert_np->cm_id); ··· 2455 2450 node) { 2456 2451 isert_info("cleaning isert_conn %p state (%d)\n", 2457 2452 isert_conn, isert_conn->state); 2458 - isert_connect_release(isert_conn); 2453 + list_move_tail(&isert_conn->node, &drop_conn_list); 2459 2454 } 2460 2455 } 2461 2456 ··· 2466 2461 node) { 2467 2462 isert_info("cleaning isert_conn %p state (%d)\n", 2468 2463 isert_conn, isert_conn->state); 2469 - isert_connect_release(isert_conn); 2464 + list_move_tail(&isert_conn->node, &drop_conn_list); 2470 2465 } 2471 2466 } 2472 2467 mutex_unlock(&isert_np->mutex); 2468 + 2469 + list_for_each_entry_safe(isert_conn, n, &drop_conn_list, node) { 2470 + list_del_init(&isert_conn->node); 2471 + isert_connect_release(isert_conn); 2472 + } 2473 2473 2474 2474 np->np_context = NULL; 2475 2475 kfree(isert_np); ··· 2570 2560 isert_put_unsol_pending_cmds(conn); 2571 2561 isert_wait4cmds(conn); 2572 2562 isert_wait4logout(isert_conn); 2573 - 2574 - queue_work(isert_release_wq, &isert_conn->release_work); 2575 2563 } 2576 2564 2577 2565 static void isert_free_conn(struct iscsit_conn *conn)
+23 -32
drivers/infiniband/ulp/rtrs/rtrs-clt.c
··· 2040 2040 return 0; 2041 2041 } 2042 2042 2043 + /* The caller should do the cleanup in case of error */ 2043 2044 static int create_cm(struct rtrs_clt_con *con) 2044 2045 { 2045 2046 struct rtrs_path *s = con->c.path; ··· 2063 2062 err = rdma_set_reuseaddr(cm_id, 1); 2064 2063 if (err != 0) { 2065 2064 rtrs_err(s, "Set address reuse failed, err: %d\n", err); 2066 - goto destroy_cm; 2065 + return err; 2067 2066 } 2068 2067 err = rdma_resolve_addr(cm_id, (struct sockaddr *)&clt_path->s.src_addr, 2069 2068 (struct sockaddr *)&clt_path->s.dst_addr, 2070 2069 RTRS_CONNECT_TIMEOUT_MS); 2071 2070 if (err) { 2072 2071 rtrs_err(s, "Failed to resolve address, err: %d\n", err); 2073 - goto destroy_cm; 2072 + return err; 2074 2073 } 2075 2074 /* 2076 2075 * Combine connection status and session events. This is needed ··· 2085 2084 if (err == 0) 2086 2085 err = -ETIMEDOUT; 2087 2086 /* Timedout or interrupted */ 2088 - goto errr; 2087 + return err; 2089 2088 } 2090 - if (con->cm_err < 0) { 2091 - err = con->cm_err; 2092 - goto errr; 2093 - } 2094 - if (READ_ONCE(clt_path->state) != RTRS_CLT_CONNECTING) { 2089 + if (con->cm_err < 0) 2090 + return con->cm_err; 2091 + if (READ_ONCE(clt_path->state) != RTRS_CLT_CONNECTING) 2095 2092 /* Device removal */ 2096 - err = -ECONNABORTED; 2097 - goto errr; 2098 - } 2093 + return -ECONNABORTED; 2099 2094 2100 2095 return 0; 2101 - 2102 - errr: 2103 - stop_cm(con); 2104 - mutex_lock(&con->con_mutex); 2105 - destroy_con_cq_qp(con); 2106 - mutex_unlock(&con->con_mutex); 2107 - destroy_cm: 2108 - destroy_cm(con); 2109 - 2110 - return err; 2111 2096 } 2112 2097 2113 2098 static void rtrs_clt_path_up(struct rtrs_clt_path *clt_path) ··· 2321 2334 static int init_conns(struct rtrs_clt_path *clt_path) 2322 2335 { 2323 2336 unsigned int cid; 2324 - int err; 2337 + int err, i; 2325 2338 2326 2339 /* 2327 2340 * On every new session connections increase reconnect counter ··· 2337 2350 goto destroy; 2338 2351 2339 2352 err = 
create_cm(to_clt_con(clt_path->s.con[cid])); 2340 - if (err) { 2341 - destroy_con(to_clt_con(clt_path->s.con[cid])); 2353 + if (err) 2342 2354 goto destroy; 2343 - } 2344 2355 } 2345 2356 err = alloc_path_reqs(clt_path); 2346 2357 if (err) ··· 2349 2364 return 0; 2350 2365 2351 2366 destroy: 2352 - while (cid--) { 2353 - struct rtrs_clt_con *con = to_clt_con(clt_path->s.con[cid]); 2367 + /* Make sure we do the cleanup in the order they are created */ 2368 + for (i = 0; i <= cid; i++) { 2369 + struct rtrs_clt_con *con; 2354 2370 2355 - stop_cm(con); 2371 + if (!clt_path->s.con[i]) 2372 + break; 2356 2373 2357 - mutex_lock(&con->con_mutex); 2358 - destroy_con_cq_qp(con); 2359 - mutex_unlock(&con->con_mutex); 2360 - destroy_cm(con); 2374 + con = to_clt_con(clt_path->s.con[i]); 2375 + if (con->c.cm_id) { 2376 + stop_cm(con); 2377 + mutex_lock(&con->con_mutex); 2378 + destroy_con_cq_qp(con); 2379 + mutex_unlock(&con->con_mutex); 2380 + destroy_cm(con); 2381 + } 2361 2382 destroy_con(con); 2362 2383 } 2363 2384 /*
+3 -1
drivers/infiniband/ulp/rtrs/rtrs.c
··· 37 37 goto err; 38 38 39 39 iu->dma_addr = ib_dma_map_single(dma_dev, iu->buf, size, dir); 40 - if (ib_dma_mapping_error(dma_dev, iu->dma_addr)) 40 + if (ib_dma_mapping_error(dma_dev, iu->dma_addr)) { 41 + kfree(iu->buf); 41 42 goto err; 43 + } 42 44 43 45 iu->cqe.done = done; 44 46 iu->size = size;
+1 -4
drivers/md/dm-ioctl.c
··· 1168 1168 /* Do we need to load a new map ? */ 1169 1169 if (new_map) { 1170 1170 sector_t old_size, new_size; 1171 - int srcu_idx; 1172 1171 1173 1172 /* Suspend if it isn't already suspended */ 1174 - old_map = dm_get_live_table(md, &srcu_idx); 1175 - if ((param->flags & DM_SKIP_LOCKFS_FLAG) || !old_map) 1173 + if (param->flags & DM_SKIP_LOCKFS_FLAG) 1176 1174 suspend_flags &= ~DM_SUSPEND_LOCKFS_FLAG; 1177 - dm_put_live_table(md, srcu_idx); 1178 1175 if (param->flags & DM_NOFLUSH_FLAG) 1179 1176 suspend_flags |= DM_SUSPEND_NOFLUSH_FLAG; 1180 1177 if (!dm_suspended_md(md))
+12 -8
drivers/md/dm-thin-metadata.c
··· 1756 1756 1757 1757 int dm_pool_block_is_shared(struct dm_pool_metadata *pmd, dm_block_t b, bool *result) 1758 1758 { 1759 - int r; 1759 + int r = -EINVAL; 1760 1760 uint32_t ref_count; 1761 1761 1762 1762 down_read(&pmd->root_lock); 1763 - r = dm_sm_get_count(pmd->data_sm, b, &ref_count); 1764 - if (!r) 1765 - *result = (ref_count > 1); 1763 + if (!pmd->fail_io) { 1764 + r = dm_sm_get_count(pmd->data_sm, b, &ref_count); 1765 + if (!r) 1766 + *result = (ref_count > 1); 1767 + } 1766 1768 up_read(&pmd->root_lock); 1767 1769 1768 1770 return r; ··· 1772 1770 1773 1771 int dm_pool_inc_data_range(struct dm_pool_metadata *pmd, dm_block_t b, dm_block_t e) 1774 1772 { 1775 - int r = 0; 1773 + int r = -EINVAL; 1776 1774 1777 1775 pmd_write_lock(pmd); 1778 - r = dm_sm_inc_blocks(pmd->data_sm, b, e); 1776 + if (!pmd->fail_io) 1777 + r = dm_sm_inc_blocks(pmd->data_sm, b, e); 1779 1778 pmd_write_unlock(pmd); 1780 1779 1781 1780 return r; ··· 1784 1781 1785 1782 int dm_pool_dec_data_range(struct dm_pool_metadata *pmd, dm_block_t b, dm_block_t e) 1786 1783 { 1787 - int r = 0; 1784 + int r = -EINVAL; 1788 1785 1789 1786 pmd_write_lock(pmd); 1790 - r = dm_sm_dec_blocks(pmd->data_sm, b, e); 1787 + if (!pmd->fail_io) 1788 + r = dm_sm_dec_blocks(pmd->data_sm, b, e); 1791 1789 pmd_write_unlock(pmd); 1792 1790 1793 1791 return r;
+1 -2
drivers/md/dm-thin.c
··· 401 401 sector_t s = block_to_sectors(tc->pool, data_b); 402 402 sector_t len = block_to_sectors(tc->pool, data_e - data_b); 403 403 404 - return __blkdev_issue_discard(tc->pool_dev->bdev, s, len, GFP_NOWAIT, 405 - &op->bio); 404 + return __blkdev_issue_discard(tc->pool_dev->bdev, s, len, GFP_NOIO, &op->bio); 406 405 } 407 406 408 407 static void end_discard(struct discard_op *op, int r)
+20 -9
drivers/md/dm.c
··· 1172 1172 } 1173 1173 1174 1174 static sector_t __max_io_len(struct dm_target *ti, sector_t sector, 1175 - unsigned int max_granularity) 1175 + unsigned int max_granularity, 1176 + unsigned int max_sectors) 1176 1177 { 1177 1178 sector_t target_offset = dm_target_offset(ti, sector); 1178 1179 sector_t len = max_io_len_target_boundary(ti, target_offset); ··· 1187 1186 if (!max_granularity) 1188 1187 return len; 1189 1188 return min_t(sector_t, len, 1190 - min(queue_max_sectors(ti->table->md->queue), 1189 + min(max_sectors ? : queue_max_sectors(ti->table->md->queue), 1191 1190 blk_chunk_sectors_left(target_offset, max_granularity))); 1192 1191 } 1193 1192 1194 1193 static inline sector_t max_io_len(struct dm_target *ti, sector_t sector) 1195 1194 { 1196 - return __max_io_len(ti, sector, ti->max_io_len); 1195 + return __max_io_len(ti, sector, ti->max_io_len, 0); 1197 1196 } 1198 1197 1199 1198 int dm_set_target_max_io_len(struct dm_target *ti, sector_t len) ··· 1582 1581 1583 1582 static void __send_changing_extent_only(struct clone_info *ci, struct dm_target *ti, 1584 1583 unsigned int num_bios, 1585 - unsigned int max_granularity) 1584 + unsigned int max_granularity, 1585 + unsigned int max_sectors) 1586 1586 { 1587 1587 unsigned int len, bios; 1588 1588 1589 1589 len = min_t(sector_t, ci->sector_count, 1590 - __max_io_len(ti, ci->sector, max_granularity)); 1590 + __max_io_len(ti, ci->sector, max_granularity, max_sectors)); 1591 1591 1592 1592 atomic_add(num_bios, &ci->io->io_count); 1593 1593 bios = __send_duplicate_bios(ci, ti, num_bios, &len); ··· 1625 1623 { 1626 1624 unsigned int num_bios = 0; 1627 1625 unsigned int max_granularity = 0; 1626 + unsigned int max_sectors = 0; 1628 1627 struct queue_limits *limits = dm_get_queue_limits(ti->table->md); 1629 1628 1630 1629 switch (bio_op(ci->bio)) { 1631 1630 case REQ_OP_DISCARD: 1632 1631 num_bios = ti->num_discard_bios; 1632 + max_sectors = limits->max_discard_sectors; 1633 1633 if (ti->max_discard_granularity) 
1634 - max_granularity = limits->max_discard_sectors; 1634 + max_granularity = max_sectors; 1635 1635 break; 1636 1636 case REQ_OP_SECURE_ERASE: 1637 1637 num_bios = ti->num_secure_erase_bios; 1638 + max_sectors = limits->max_secure_erase_sectors; 1638 1639 if (ti->max_secure_erase_granularity) 1639 - max_granularity = limits->max_secure_erase_sectors; 1640 + max_granularity = max_sectors; 1640 1641 break; 1641 1642 case REQ_OP_WRITE_ZEROES: 1642 1643 num_bios = ti->num_write_zeroes_bios; 1644 + max_sectors = limits->max_write_zeroes_sectors; 1643 1645 if (ti->max_write_zeroes_granularity) 1644 - max_granularity = limits->max_write_zeroes_sectors; 1646 + max_granularity = max_sectors; 1645 1647 break; 1646 1648 default: 1647 1649 break; ··· 1660 1654 if (unlikely(!num_bios)) 1661 1655 return BLK_STS_NOTSUPP; 1662 1656 1663 - __send_changing_extent_only(ci, ti, num_bios, max_granularity); 1657 + __send_changing_extent_only(ci, ti, num_bios, 1658 + max_granularity, max_sectors); 1664 1659 return BLK_STS_OK; 1665 1660 } 1666 1661 ··· 2815 2808 } 2816 2809 2817 2810 map = rcu_dereference_protected(md->map, lockdep_is_held(&md->suspend_lock)); 2811 + if (!map) { 2812 + /* avoid deadlock with fs/namespace.c:do_mount() */ 2813 + suspend_flags &= ~DM_SUSPEND_LOCKFS_FLAG; 2814 + } 2818 2815 2819 2816 r = __dm_suspend(md, map, suspend_flags, TASK_INTERRUPTIBLE, DMF_SUSPENDED); 2820 2817 if (r)
+9 -44
drivers/media/dvb-core/dvb_frontend.c
··· 817 817 818 818 dev_dbg(fe->dvb->device, "%s:\n", __func__); 819 819 820 - mutex_lock(&fe->remove_mutex); 821 - 822 820 if (fe->exit != DVB_FE_DEVICE_REMOVED) 823 821 fe->exit = DVB_FE_NORMAL_EXIT; 824 822 mb(); 825 823 826 - if (!fepriv->thread) { 827 - mutex_unlock(&fe->remove_mutex); 824 + if (!fepriv->thread) 828 825 return; 829 - } 830 826 831 827 kthread_stop(fepriv->thread); 832 - 833 - mutex_unlock(&fe->remove_mutex); 834 - 835 - if (fepriv->dvbdev->users < -1) { 836 - wait_event(fepriv->dvbdev->wait_queue, 837 - fepriv->dvbdev->users == -1); 838 - } 839 828 840 829 sema_init(&fepriv->sem, 1); 841 830 fepriv->state = FESTATE_IDLE; ··· 2769 2780 struct dvb_adapter *adapter = fe->dvb; 2770 2781 int ret; 2771 2782 2772 - mutex_lock(&fe->remove_mutex); 2773 - 2774 2783 dev_dbg(fe->dvb->device, "%s:\n", __func__); 2775 - if (fe->exit == DVB_FE_DEVICE_REMOVED) { 2776 - ret = -ENODEV; 2777 - goto err_remove_mutex; 2778 - } 2784 + if (fe->exit == DVB_FE_DEVICE_REMOVED) 2785 + return -ENODEV; 2779 2786 2780 2787 if (adapter->mfe_shared == 2) { 2781 2788 mutex_lock(&adapter->mfe_lock); ··· 2779 2794 if (adapter->mfe_dvbdev && 2780 2795 !adapter->mfe_dvbdev->writers) { 2781 2796 mutex_unlock(&adapter->mfe_lock); 2782 - ret = -EBUSY; 2783 - goto err_remove_mutex; 2797 + return -EBUSY; 2784 2798 } 2785 2799 adapter->mfe_dvbdev = dvbdev; 2786 2800 } ··· 2802 2818 while (mferetry-- && (mfedev->users != -1 || 2803 2819 mfepriv->thread)) { 2804 2820 if (msleep_interruptible(500)) { 2805 - if (signal_pending(current)) { 2806 - ret = -EINTR; 2807 - goto err_remove_mutex; 2808 - } 2821 + if (signal_pending(current)) 2822 + return -EINTR; 2809 2823 } 2810 2824 } 2811 2825 ··· 2815 2833 if (mfedev->users != -1 || 2816 2834 mfepriv->thread) { 2817 2835 mutex_unlock(&adapter->mfe_lock); 2818 - ret = -EBUSY; 2819 - goto err_remove_mutex; 2836 + return -EBUSY; 2820 2837 } 2821 2838 adapter->mfe_dvbdev = dvbdev; 2822 2839 } ··· 2874 2893 2875 2894 if (adapter->mfe_shared)
2876 2895 mutex_unlock(&adapter->mfe_lock); 2877 - 2878 - mutex_unlock(&fe->remove_mutex); 2879 2896 return ret; 2880 2897 2881 2898 err3: ··· 2895 2916 err0: 2896 2917 if (adapter->mfe_shared) 2897 2918 mutex_unlock(&adapter->mfe_lock); 2898 - 2899 - err_remove_mutex: 2900 - mutex_unlock(&fe->remove_mutex); 2901 2919 return ret; 2902 2920 } 2903 2921 ··· 2904 2928 struct dvb_frontend *fe = dvbdev->priv; 2905 2929 struct dvb_frontend_private *fepriv = fe->frontend_priv; 2906 2930 int ret; 2907 - 2908 - mutex_lock(&fe->remove_mutex); 2909 2931 2910 2932 dev_dbg(fe->dvb->device, "%s:\n", __func__); 2911 2933 ··· 2926 2952 } 2927 2953 mutex_unlock(&fe->dvb->mdev_lock); 2928 2954 #endif 2955 + if (fe->exit != DVB_FE_NO_EXIT) 2956 + wake_up(&dvbdev->wait_queue); 2929 2957 if (fe->ops.ts_bus_ctrl) 2930 2958 fe->ops.ts_bus_ctrl(fe, 0); 2931 - 2932 - if (fe->exit != DVB_FE_NO_EXIT) { 2933 - mutex_unlock(&fe->remove_mutex); 2934 - wake_up(&dvbdev->wait_queue); 2935 - } else { 2936 - mutex_unlock(&fe->remove_mutex); 2937 - } 2938 - 2939 - } else { 2940 - mutex_unlock(&fe->remove_mutex); 2941 2959 } 2942 2960 2943 2961 dvb_frontend_put(fe); ··· 3030 3064 fepriv = fe->frontend_priv; 3031 3065 3032 3066 kref_init(&fe->refcount); 3033 - mutex_init(&fe->remove_mutex); 3034 3067 3035 3068 /* 3036 3069 * After initialization, there need to be two references: one
+1
drivers/misc/eeprom/Kconfig
··· 6 6 depends on I2C && SYSFS 7 7 select NVMEM 8 8 select NVMEM_SYSFS 9 + select REGMAP 9 10 select REGMAP_I2C 10 11 help 11 12 Enable this driver to get read/write support to most I2C EEPROMs
+1 -1
drivers/net/dsa/ocelot/felix_vsc9959.c
··· 1251 1251 /* Consider the standard Ethernet overhead of 8 octets preamble+SFD, 1252 1252 * 4 octets FCS, 12 octets IFG. 1253 1253 */ 1254 - needed_bit_time_ps = (maxlen + 24) * picos_per_byte; 1254 + needed_bit_time_ps = (u64)(maxlen + 24) * picos_per_byte; 1255 1255 1256 1256 dev_dbg(ocelot->dev, 1257 1257 "port %d: max frame size %d needs %llu ps at speed %d\n",
+7 -2
drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
··· 14294 14294 bp->fw_seq = SHMEM_RD(bp, func_mb[BP_FW_MB_IDX(bp)].drv_mb_header) & 14295 14295 DRV_MSG_SEQ_NUMBER_MASK; 14296 14296 14297 - if (netif_running(dev)) 14298 - bnx2x_nic_load(bp, LOAD_NORMAL); 14297 + if (netif_running(dev)) { 14298 + if (bnx2x_nic_load(bp, LOAD_NORMAL)) { 14299 + netdev_err(bp->dev, "Error during driver initialization, try unloading/reloading the driver\n"); 14300 + goto done; 14301 + } 14302 + } 14299 14303 14300 14304 netif_device_attach(dev); 14301 14305 14306 + done: 14302 14307 rtnl_unlock(); 14303 14308 } 14304 14309
+2 -2
drivers/net/ethernet/freescale/enetc/enetc_qos.c
··· 246 246 int bw_sum = 0; 247 247 u8 bw; 248 248 249 - prio_top = netdev_get_prio_tc_map(ndev, tc_nums - 1); 250 - prio_next = netdev_get_prio_tc_map(ndev, tc_nums - 2); 249 + prio_top = tc_nums - 1; 250 + prio_next = tc_nums - 2; 251 251 252 252 /* Support highest prio and second prio tc in cbs mode */ 253 253 if (tc != prio_top && tc != prio_next)
+1 -1
drivers/net/ethernet/intel/iavf/iavf.h
··· 525 525 void iavf_update_stats(struct iavf_adapter *adapter); 526 526 void iavf_reset_interrupt_capability(struct iavf_adapter *adapter); 527 527 int iavf_init_interrupt_scheme(struct iavf_adapter *adapter); 528 - void iavf_irq_enable_queues(struct iavf_adapter *adapter, u32 mask); 528 + void iavf_irq_enable_queues(struct iavf_adapter *adapter); 529 529 void iavf_free_all_tx_resources(struct iavf_adapter *adapter); 530 530 void iavf_free_all_rx_resources(struct iavf_adapter *adapter); 531 531
+6 -9
drivers/net/ethernet/intel/iavf/iavf_main.c
··· 359 359 } 360 360 361 361 /** 362 - * iavf_irq_enable_queues - Enable interrupt for specified queues 362 + * iavf_irq_enable_queues - Enable interrupt for all queues 363 363 * @adapter: board private structure 364 - * @mask: bitmap of queues to enable 365 364 **/ 366 - void iavf_irq_enable_queues(struct iavf_adapter *adapter, u32 mask) 365 + void iavf_irq_enable_queues(struct iavf_adapter *adapter) 367 366 { 368 367 struct iavf_hw *hw = &adapter->hw; 369 368 int i; 370 369 371 370 for (i = 1; i < adapter->num_msix_vectors; i++) { 372 - if (mask & BIT(i - 1)) { 373 - wr32(hw, IAVF_VFINT_DYN_CTLN1(i - 1), 374 - IAVF_VFINT_DYN_CTLN1_INTENA_MASK | 375 - IAVF_VFINT_DYN_CTLN1_ITR_INDX_MASK); 376 - } 371 + wr32(hw, IAVF_VFINT_DYN_CTLN1(i - 1), 372 + IAVF_VFINT_DYN_CTLN1_INTENA_MASK | 373 + IAVF_VFINT_DYN_CTLN1_ITR_INDX_MASK); 377 374 } 378 375 } 379 376 ··· 384 387 struct iavf_hw *hw = &adapter->hw; 385 388 386 389 iavf_misc_irq_enable(adapter); 387 - iavf_irq_enable_queues(adapter, ~0); 390 + iavf_irq_enable_queues(adapter); 388 391 389 392 if (flush) 390 393 iavf_flush(hw);
+1 -1
drivers/net/ethernet/intel/iavf/iavf_register.h
··· 40 40 #define IAVF_VFINT_DYN_CTL01_INTENA_MASK IAVF_MASK(0x1, IAVF_VFINT_DYN_CTL01_INTENA_SHIFT) 41 41 #define IAVF_VFINT_DYN_CTL01_ITR_INDX_SHIFT 3 42 42 #define IAVF_VFINT_DYN_CTL01_ITR_INDX_MASK IAVF_MASK(0x3, IAVF_VFINT_DYN_CTL01_ITR_INDX_SHIFT) 43 - #define IAVF_VFINT_DYN_CTLN1(_INTVF) (0x00003800 + ((_INTVF) * 4)) /* _i=0...15 */ /* Reset: VFR */ 43 + #define IAVF_VFINT_DYN_CTLN1(_INTVF) (0x00003800 + ((_INTVF) * 4)) /* _i=0...63 */ /* Reset: VFR */ 44 44 #define IAVF_VFINT_DYN_CTLN1_INTENA_SHIFT 0 45 45 #define IAVF_VFINT_DYN_CTLN1_INTENA_MASK IAVF_MASK(0x1, IAVF_VFINT_DYN_CTLN1_INTENA_SHIFT) 46 46 #define IAVF_VFINT_DYN_CTLN1_SWINT_TRIG_SHIFT 2
+1 -7
drivers/net/ethernet/intel/ice/ice_gnss.c
··· 96 96 int err = 0; 97 97 98 98 pf = gnss->back; 99 - if (!pf) { 100 - err = -EFAULT; 101 - goto exit; 102 - } 103 - 104 - if (!test_bit(ICE_FLAG_GNSS, pf->flags)) 99 + if (!pf || !test_bit(ICE_FLAG_GNSS, pf->flags)) 105 100 return; 106 101 107 102 hw = &pf->hw; ··· 154 159 free_page((unsigned long)buf); 155 160 requeue: 156 161 kthread_queue_delayed_work(gnss->kworker, &gnss->read_work, delay); 157 - exit: 158 162 if (err) 159 163 dev_dbg(ice_pf_to_dev(pf), "GNSS failed to read err=%d\n", err); 160 164 }
+9 -11
drivers/net/ethernet/intel/ice/ice_main.c
··· 4591 4591 static void ice_deinit_dev(struct ice_pf *pf) 4592 4592 { 4593 4593 ice_free_irq_msix_misc(pf); 4594 - ice_clear_interrupt_scheme(pf); 4595 4594 ice_deinit_pf(pf); 4596 4595 ice_deinit_hw(&pf->hw); 4596 + 4597 + /* Service task is already stopped, so call reset directly. */ 4598 + ice_reset(&pf->hw, ICE_RESET_PFR); 4599 + pci_wait_for_pending_transaction(pf->pdev); 4600 + ice_clear_interrupt_scheme(pf); 4597 4601 } 4598 4602 4599 4603 static void ice_init_features(struct ice_pf *pf) ··· 4887 4883 struct ice_vsi *vsi; 4888 4884 int err; 4889 4885 4890 - err = ice_reset(&pf->hw, ICE_RESET_PFR); 4891 - if (err) 4892 - return err; 4893 - 4894 4886 err = ice_init_dev(pf); 4895 4887 if (err) 4896 4888 return err; ··· 5143 5143 ice_setup_mc_magic_wake(pf); 5144 5144 ice_set_wake(pf); 5145 5145 5146 - /* Issue a PFR as part of the prescribed driver unload flow. Do not 5147 - * do it via ice_schedule_reset() since there is no need to rebuild 5148 - * and the service task is already stopped. 5149 - */ 5150 - ice_reset(&pf->hw, ICE_RESET_PFR); 5151 - pci_wait_for_pending_transaction(pdev); 5152 5146 pci_disable_device(pdev); 5153 5147 } 5154 5148 ··· 6838 6844 6839 6845 ice_for_each_txq(vsi, i) 6840 6846 ice_clean_tx_ring(vsi->tx_rings[i]); 6847 + 6848 + if (ice_is_xdp_ena_vsi(vsi)) 6849 + ice_for_each_xdp_txq(vsi, i) 6850 + ice_clean_tx_ring(vsi->xdp_rings[i]); 6841 6851 6842 6852 ice_for_each_rxq(vsi, i) 6843 6853 ice_clean_rx_ring(vsi->rx_rings[i]);
+3
drivers/net/ethernet/intel/igb/igb_ethtool.c
··· 822 822 */ 823 823 ret_val = hw->nvm.ops.read(hw, last_word, 1, 824 824 &eeprom_buff[last_word - first_word]); 825 + if (ret_val) 826 + goto out; 825 827 } 826 828 827 829 /* Device's eeprom is always little-endian, word addressable */ ··· 843 841 hw->nvm.ops.update(hw); 844 842 845 843 igb_set_fw_version(adapter); 844 + out: 846 845 kfree(eeprom_buff); 847 846 return ret_val; 848 847 }
+6 -2
drivers/net/ethernet/intel/igb/igb_main.c
··· 6949 6949 struct e1000_hw *hw = &adapter->hw; 6950 6950 struct ptp_clock_event event; 6951 6951 struct timespec64 ts; 6952 + unsigned long flags; 6952 6953 6953 6954 if (pin < 0 || pin >= IGB_N_SDP) 6954 6955 return; ··· 6957 6956 if (hw->mac.type == e1000_82580 || 6958 6957 hw->mac.type == e1000_i354 || 6959 6958 hw->mac.type == e1000_i350) { 6960 - s64 ns = rd32(auxstmpl); 6959 + u64 ns = rd32(auxstmpl); 6961 6960 6962 - ns += ((s64)(rd32(auxstmph) & 0xFF)) << 32; 6961 + ns += ((u64)(rd32(auxstmph) & 0xFF)) << 32; 6962 + spin_lock_irqsave(&adapter->tmreg_lock, flags); 6963 + ns = timecounter_cyc2time(&adapter->tc, ns); 6964 + spin_unlock_irqrestore(&adapter->tmreg_lock, flags); 6963 6965 ts = ns_to_timespec64(ns); 6964 6966 } else { 6965 6967 ts.tv_nsec = rd32(auxstmpl);
+11 -1
drivers/net/ethernet/intel/igc/igc_main.c
··· 254 254 /* reset BQL for queue */ 255 255 netdev_tx_reset_queue(txring_txq(tx_ring)); 256 256 257 + /* Zero out the buffer ring */ 258 + memset(tx_ring->tx_buffer_info, 0, 259 + sizeof(*tx_ring->tx_buffer_info) * tx_ring->count); 260 + 261 + /* Zero out the descriptor ring */ 262 + memset(tx_ring->desc, 0, tx_ring->size); 263 + 257 264 /* reset next_to_use and next_to_clean */ 258 265 tx_ring->next_to_use = 0; 259 266 tx_ring->next_to_clean = 0; ··· 274 267 */ 275 268 void igc_free_tx_resources(struct igc_ring *tx_ring) 276 269 { 277 - igc_clean_tx_ring(tx_ring); 270 + igc_disable_tx_ring(tx_ring); 278 271 279 272 vfree(tx_ring->tx_buffer_info); 280 273 tx_ring->tx_buffer_info = NULL; ··· 6840 6833 igc_flush_nfc_rules(adapter); 6841 6834 6842 6835 igc_ptp_stop(adapter); 6836 + 6837 + pci_disable_ptm(pdev); 6838 + pci_clear_master(pdev); 6843 6839 6844 6840 set_bit(__IGC_DOWN, &adapter->state); 6845 6841
+6 -1
drivers/net/ethernet/marvell/octeon_ep/octep_main.c
··· 981 981 oct->mmio[i].hw_addr = 982 982 ioremap(pci_resource_start(oct->pdev, i * 2), 983 983 pci_resource_len(oct->pdev, i * 2)); 984 + if (!oct->mmio[i].hw_addr) 985 + goto unmap_prev; 986 + 984 987 oct->mmio[i].mapped = 1; 985 988 } 986 989 ··· 1018 1015 return 0; 1019 1016 1020 1017 unsupported_dev: 1021 - for (i = 0; i < OCTEP_MMIO_REGIONS; i++) 1018 + i = OCTEP_MMIO_REGIONS; 1019 + unmap_prev: 1020 + while (i--) 1022 1021 iounmap(oct->mmio[i].hw_addr); 1023 1022 1024 1023 kfree(oct->conf);
+2 -5
drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
··· 1926 1926 free_cnt = rvu_rsrc_free_count(&txsch->schq); 1927 1927 } 1928 1928 1929 - if (free_cnt < req_schq || req_schq > MAX_TXSCHQ_PER_FUNC) 1929 + if (free_cnt < req_schq || req->schq[lvl] > MAX_TXSCHQ_PER_FUNC || 1930 + req->schq_contig[lvl] > MAX_TXSCHQ_PER_FUNC) 1930 1931 return NIX_AF_ERR_TLX_ALLOC_FAIL; 1931 1932 1932 1933 /* If contiguous queues are needed, check for availability */ ··· 4340 4339 4341 4340 static u64 rvu_get_lbk_link_credits(struct rvu *rvu, u16 lbk_max_frs) 4342 4341 { 4343 - /* CN10k supports 72KB FIFO size and max packet size of 64k */ 4344 - if (rvu->hw->lbk_bufsize == 0x12000) 4345 - return (rvu->hw->lbk_bufsize - lbk_max_frs) / 16; 4346 - 4347 4342 return 1600; /* 16 * max LBK datarate = 16 * 100Gbps */ 4348 4343 } 4349 4344
+2 -27
drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.c
··· 1164 1164 { 1165 1165 struct npc_exact_table *table; 1166 1166 u16 *cnt, old_cnt; 1167 - bool promisc; 1168 1167 1169 1168 table = rvu->hw->table; 1170 - promisc = table->promisc_mode[drop_mcam_idx]; 1171 1169 1172 1170 cnt = &table->cnt_cmd_rules[drop_mcam_idx]; 1173 1171 old_cnt = *cnt; ··· 1177 1179 1178 1180 *enable_or_disable_cam = false; 1179 1181 1180 - if (promisc) 1181 - goto done; 1182 - 1183 - /* If all rules are deleted and not already in promisc mode; disable cam */ 1182 + /* If all rules are deleted, disable cam */ 1184 1183 if (!*cnt && val < 0) { 1185 1184 *enable_or_disable_cam = true; 1186 1185 goto done; 1187 1186 } 1188 1187 1189 - /* If rule got added and not already in promisc mode; enable cam */ 1188 + /* If rule got added, enable cam */ 1190 1189 if (!old_cnt && val > 0) { 1191 1190 *enable_or_disable_cam = true; 1192 1191 goto done; ··· 1438 1443 u32 drop_mcam_idx; 1439 1444 bool *promisc; 1440 1445 bool rc; 1441 - u32 cnt; 1442 1446 1443 1447 table = rvu->hw->table; 1444 1448 ··· 1460 1466 return LMAC_AF_ERR_INVALID_PARAM; 1461 1467 } 1462 1468 *promisc = false; 1463 - cnt = __rvu_npc_exact_cmd_rules_cnt_update(rvu, drop_mcam_idx, 0, NULL); 1464 1469 mutex_unlock(&table->lock); 1465 1470 1466 - /* If no dmac filter entries configured, disable drop rule */ 1467 - if (!cnt) 1468 - rvu_npc_enable_mcam_by_entry_index(rvu, drop_mcam_idx, NIX_INTF_RX, false); 1469 - else 1470 - rvu_npc_enable_mcam_by_entry_index(rvu, drop_mcam_idx, NIX_INTF_RX, !*promisc); 1471 - 1472 - dev_dbg(rvu->dev, "%s: disabled promisc mode (cgx=%d lmac=%d, cnt=%d)\n", 1473 - __func__, cgx_id, lmac_id, cnt); 1474 1471 return 0; 1475 1472 } 1476 1473 ··· 1479 1494 u32 drop_mcam_idx; 1480 1495 bool *promisc; 1481 1496 bool rc; 1482 - u32 cnt; 1483 1497 1484 1498 table = rvu->hw->table; 1485 1499 ··· 1501 1517 return LMAC_AF_ERR_INVALID_PARAM; 1502 1518 } 1503 1519 *promisc = true; 1504 - cnt = __rvu_npc_exact_cmd_rules_cnt_update(rvu, drop_mcam_idx, 0, NULL);
1505 1520 mutex_unlock(&table->lock); 1506 1521 1507 - /* If no dmac filter entries configured, disable drop rule */ 1508 - if (!cnt) 1509 - rvu_npc_enable_mcam_by_entry_index(rvu, drop_mcam_idx, NIX_INTF_RX, false); 1510 - else 1511 - rvu_npc_enable_mcam_by_entry_index(rvu, drop_mcam_idx, NIX_INTF_RX, !*promisc); 1512 - 1513 - dev_dbg(rvu->dev, "%s: Enabled promisc mode (cgx=%d lmac=%d cnt=%d)\n", 1514 - __func__, cgx_id, lmac_id, cnt); 1515 1522 return 0; 1516 1523 1517 1524
-12
drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h
··· 279 279 return pci_num_vf(dev->pdev) ? true : false; 280 280 } 281 281 282 - static inline int mlx5_lag_is_lacp_owner(struct mlx5_core_dev *dev) 283 - { 284 - /* LACP owner conditions: 285 - * 1) Function is physical. 286 - * 2) LAG is supported by FW. 287 - * 3) LAG is managed by driver (currently the only option). 288 - */ 289 - return MLX5_CAP_GEN(dev, vport_group_manager) && 290 - (MLX5_CAP_GEN(dev, num_lag_ports) > 1) && 291 - MLX5_CAP_GEN(dev, lag_master); 292 - } 293 - 294 282 int mlx5_rescan_drivers_locked(struct mlx5_core_dev *dev); 295 283 static inline int mlx5_rescan_drivers(struct mlx5_core_dev *dev) 296 284 {
+22 -14
drivers/net/ethernet/renesas/rswitch.c
··· 334 334 return -ENOMEM; 335 335 } 336 336 337 - static int rswitch_gwca_ts_queue_alloc(struct rswitch_private *priv) 338 - { 339 - struct rswitch_gwca_queue *gq = &priv->gwca.ts_queue; 340 - 341 - gq->ring_size = TS_RING_SIZE; 342 - gq->ts_ring = dma_alloc_coherent(&priv->pdev->dev, 343 - sizeof(struct rswitch_ts_desc) * 344 - (gq->ring_size + 1), &gq->ring_dma, GFP_KERNEL); 345 - return !gq->ts_ring ? -ENOMEM : 0; 346 - } 347 - 348 337 static void rswitch_desc_set_dptr(struct rswitch_desc *desc, dma_addr_t addr) 349 338 { 350 339 desc->dptrl = cpu_to_le32(lower_32_bits(addr)); ··· 508 519 dma_free_coherent(&priv->pdev->dev, gwca->linkfix_table_size, 509 520 gwca->linkfix_table, gwca->linkfix_table_dma); 510 521 gwca->linkfix_table = NULL; 522 + } 523 + 524 + static int rswitch_gwca_ts_queue_alloc(struct rswitch_private *priv) 525 + { 526 + struct rswitch_gwca_queue *gq = &priv->gwca.ts_queue; 527 + struct rswitch_ts_desc *desc; 528 + 529 + gq->ring_size = TS_RING_SIZE; 530 + gq->ts_ring = dma_alloc_coherent(&priv->pdev->dev, 531 + sizeof(struct rswitch_ts_desc) * 532 + (gq->ring_size + 1), &gq->ring_dma, GFP_KERNEL); 533 + 534 + if (!gq->ts_ring) 535 + return -ENOMEM; 536 + 537 + rswitch_gwca_ts_queue_fill(priv, 0, TS_RING_SIZE); 538 + desc = &gq->ts_ring[gq->ring_size]; 539 + desc->desc.die_dt = DT_LINKFIX; 540 + rswitch_desc_set_dptr(&desc->desc, gq->ring_dma); 541 + INIT_LIST_HEAD(&priv->gwca.ts_info_list); 542 + 543 + return 0; 511 544 } 512 545 513 546 static struct rswitch_gwca_queue *rswitch_gwca_get(struct rswitch_private *priv) ··· 1782 1771 err = rswitch_gwca_ts_queue_alloc(priv); 1783 1772 if (err < 0) 1784 1773 goto err_ts_queue_alloc; 1785 - 1786 - rswitch_gwca_ts_queue_fill(priv, 0, TS_RING_SIZE); 1787 - INIT_LIST_HEAD(&priv->gwca.ts_info_list); 1788 1774 1789 1775 for (i = 0; i < RSWITCH_NUM_PORTS; i++) { 1790 1776 err = rswitch_device_alloc(priv, i);
+2
drivers/net/ethernet/sfc/efx_channels.c
··· 301 301 efx->tx_channel_offset = 0; 302 302 efx->n_xdp_channels = 0; 303 303 efx->xdp_channel_offset = efx->n_channels; 304 + efx->xdp_txq_queues_mode = EFX_XDP_TX_QUEUES_BORROWED; 304 305 rc = pci_enable_msi(efx->pci_dev); 305 306 if (rc == 0) { 306 307 efx_get_channel(efx, 0)->irq = efx->pci_dev->irq; ··· 323 322 efx->tx_channel_offset = efx_separate_tx_channels ? 1 : 0; 324 323 efx->n_xdp_channels = 0; 325 324 efx->xdp_channel_offset = efx->n_channels; 325 + efx->xdp_txq_queues_mode = EFX_XDP_TX_QUEUES_BORROWED; 326 326 efx->legacy_irq = efx->pci_dev->irq; 327 327 } 328 328
+2
drivers/net/ethernet/sfc/siena/efx_channels.c
··· 302 302 efx->tx_channel_offset = 0; 303 303 efx->n_xdp_channels = 0; 304 304 efx->xdp_channel_offset = efx->n_channels; 305 + efx->xdp_txq_queues_mode = EFX_XDP_TX_QUEUES_BORROWED; 305 306 rc = pci_enable_msi(efx->pci_dev); 306 307 if (rc == 0) { 307 308 efx_get_channel(efx, 0)->irq = efx->pci_dev->irq; ··· 324 323 efx->tx_channel_offset = efx_siena_separate_tx_channels ? 1 : 0; 325 324 efx->n_xdp_channels = 0; 326 325 efx->xdp_channel_offset = efx->n_channels; 326 + efx->xdp_txq_queues_mode = EFX_XDP_TX_QUEUES_BORROWED; 327 327 efx->legacy_irq = efx->pci_dev->irq; 328 328 } 329 329
+7 -2
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
··· 3877 3877 3878 3878 stmmac_hw_teardown(dev); 3879 3879 init_error: 3880 - free_dma_desc_resources(priv, &priv->dma_conf); 3881 3880 phylink_disconnect_phy(priv->phylink); 3882 3881 init_phy_error: 3883 3882 pm_runtime_put(priv->device); ··· 3894 3895 return PTR_ERR(dma_conf); 3895 3896 3896 3897 ret = __stmmac_open(dev, dma_conf); 3898 + if (ret) 3899 + free_dma_desc_resources(priv, dma_conf); 3900 + 3897 3901 kfree(dma_conf); 3898 3902 return ret; 3899 3903 } ··· 5639 5637 stmmac_release(dev); 5640 5638 5641 5639 ret = __stmmac_open(dev, dma_conf); 5642 - kfree(dma_conf); 5643 5640 if (ret) { 5641 + free_dma_desc_resources(priv, dma_conf); 5642 + kfree(dma_conf); 5644 5643 netdev_err(priv->dev, "failed reopening the interface after MTU change\n"); 5645 5644 return ret; 5646 5645 } 5646 + 5647 + kfree(dma_conf); 5647 5648 5648 5649 stmmac_set_rx_mode(dev); 5649 5650 }
+1 -1
drivers/net/ethernet/ti/am65-cpsw-nuss.c
··· 2068 2068 /* Initialize the Serdes PHY for the port */ 2069 2069 ret = am65_cpsw_init_serdes_phy(dev, port_np, port); 2070 2070 if (ret) 2071 - return ret; 2071 + goto of_node_put; 2072 2072 2073 2073 port->slave.mac_only = 2074 2074 of_property_read_bool(port_np, "ti,mac-only");
+4
drivers/net/ipvlan/ipvlan_l3s.c
··· 102 102 103 103 skb->dev = addr->master->dev; 104 104 skb->skb_iif = skb->dev->ifindex; 105 + #if IS_ENABLED(CONFIG_IPV6) 106 + if (addr->atype == IPVL_IPV6) 107 + IP6CB(skb)->iif = skb->dev->ifindex; 108 + #endif 105 109 len = skb->len + ETH_HLEN; 106 110 ipvlan_count_rx(addr->master, len, true, false); 107 111 out:
+5 -7
drivers/net/macsec.c
··· 3997 3997 return -ENOMEM; 3998 3998 3999 3999 secy->tx_sc.stats = netdev_alloc_pcpu_stats(struct pcpu_tx_sc_stats); 4000 - if (!secy->tx_sc.stats) { 4001 - free_percpu(macsec->stats); 4000 + if (!secy->tx_sc.stats) 4002 4001 return -ENOMEM; 4003 - } 4004 4002 4005 4003 secy->tx_sc.md_dst = metadata_dst_alloc(0, METADATA_MACSEC, GFP_KERNEL); 4006 - if (!secy->tx_sc.md_dst) { 4007 - free_percpu(secy->tx_sc.stats); 4008 - free_percpu(macsec->stats); 4004 + if (!secy->tx_sc.md_dst) 4005 + /* macsec and secy percpu stats will be freed when unregistering 4006 + * net_device in macsec_free_netdev() 4007 + */ 4009 4008 return -ENOMEM; 4010 - } 4011 4009 4012 4010 if (sci == MACSEC_UNDEF_SCI) 4013 4011 sci = dev_to_sci(dev, MACSEC_PORT_ES);
+39 -2
drivers/net/phy/phylink.c
··· 205 205 case PHY_INTERFACE_MODE_RGMII_ID: 206 206 case PHY_INTERFACE_MODE_RGMII: 207 207 case PHY_INTERFACE_MODE_QSGMII: 208 + case PHY_INTERFACE_MODE_QUSGMII: 208 209 case PHY_INTERFACE_MODE_SGMII: 209 210 case PHY_INTERFACE_MODE_GMII: 210 211 return SPEED_1000; ··· 222 221 case PHY_INTERFACE_MODE_10GBASER: 223 222 case PHY_INTERFACE_MODE_10GKR: 224 223 case PHY_INTERFACE_MODE_USXGMII: 225 - case PHY_INTERFACE_MODE_QUSGMII: 226 224 return SPEED_10000; 227 225 228 226 case PHY_INTERFACE_MODE_25GBASER: ··· 3365 3365 EXPORT_SYMBOL_GPL(phylink_decode_usxgmii_word); 3366 3366 3367 3367 /** 3368 + * phylink_decode_usgmii_word() - decode the USGMII word from a MAC PCS 3369 + * @state: a pointer to a struct phylink_link_state. 3370 + * @lpa: a 16 bit value which stores the USGMII auto-negotiation word 3371 + * 3372 + * Helper for MAC PCS supporting the USGMII protocol and the auto-negotiation 3373 + * code word. Decode the USGMII code word and populate the corresponding fields 3374 + * (speed, duplex) into the phylink_link_state structure. The structure for this 3375 + * word is the same as the USXGMII word, except it only supports speeds up to 3376 + * 1Gbps. 3377 + */ 3378 + static void phylink_decode_usgmii_word(struct phylink_link_state *state, 3379 + uint16_t lpa) 3380 + { 3381 + switch (lpa & MDIO_USXGMII_SPD_MASK) { 3382 + case MDIO_USXGMII_10: 3383 + state->speed = SPEED_10; 3384 + break; 3385 + case MDIO_USXGMII_100: 3386 + state->speed = SPEED_100; 3387 + break; 3388 + case MDIO_USXGMII_1000: 3389 + state->speed = SPEED_1000; 3390 + break; 3391 + default: 3392 + state->link = false; 3393 + return; 3394 + } 3395 + 3396 + if (lpa & MDIO_USXGMII_FULL_DUPLEX) 3397 + state->duplex = DUPLEX_FULL; 3398 + else 3399 + state->duplex = DUPLEX_HALF; 3400 + } 3401 + 3402 + /** 3368 3403 * phylink_mii_c22_pcs_decode_state() - Decode MAC PCS state from MII registers 3369 3404 * @state: a pointer to a &struct phylink_link_state. 
3370 3405 * @bmsr: The value of the %MII_BMSR register ··· 3436 3401 3437 3402 case PHY_INTERFACE_MODE_SGMII: 3438 3403 case PHY_INTERFACE_MODE_QSGMII: 3439 - case PHY_INTERFACE_MODE_QUSGMII: 3440 3404 phylink_decode_sgmii_word(state, lpa); 3405 + break; 3406 + case PHY_INTERFACE_MODE_QUSGMII: 3407 + phylink_decode_usgmii_word(state, lpa); 3441 3408 break; 3442 3409 3443 3410 default:
+2
drivers/net/usb/qmi_wwan.c
··· 1220 1220 {QMI_FIXED_INTF(0x05c6, 0x9080, 8)}, 1221 1221 {QMI_FIXED_INTF(0x05c6, 0x9083, 3)}, 1222 1222 {QMI_FIXED_INTF(0x05c6, 0x9084, 4)}, 1223 + {QMI_QUIRK_SET_DTR(0x05c6, 0x9091, 2)}, /* Compal RXM-G1 */ 1223 1224 {QMI_FIXED_INTF(0x05c6, 0x90b2, 3)}, /* ublox R410M */ 1225 + {QMI_QUIRK_SET_DTR(0x05c6, 0x90db, 2)}, /* Compal RXM-G1 */ 1224 1226 {QMI_FIXED_INTF(0x05c6, 0x920d, 0)}, 1225 1227 {QMI_FIXED_INTF(0x05c6, 0x920d, 5)}, 1226 1228 {QMI_QUIRK_SET_DTR(0x05c6, 0x9625, 4)}, /* YUGA CLM920-NC5 */
+3
drivers/net/wan/lapbether.c
··· 384 384 385 385 ASSERT_RTNL(); 386 386 387 + if (dev->type != ARPHRD_ETHER) 388 + return -EINVAL; 389 + 387 390 ndev = alloc_netdev(sizeof(*lapbeth), "lapb%d", NET_NAME_UNKNOWN, 388 391 lapbeth_setup); 389 392 if (!ndev)
+6 -6
drivers/net/wireless/intel/iwlwifi/mvm/rs.c
··· 2692 2692 2693 2693 lq_sta = mvm_sta; 2694 2694 2695 - spin_lock(&lq_sta->pers.lock); 2695 + spin_lock_bh(&lq_sta->pers.lock); 2696 2696 iwl_mvm_hwrate_to_tx_rate_v1(lq_sta->last_rate_n_flags, 2697 2697 info->band, &info->control.rates[0]); 2698 2698 info->control.rates[0].count = 1; ··· 2707 2707 iwl_mvm_hwrate_to_tx_rate_v1(last_ucode_rate, info->band, 2708 2708 &txrc->reported_rate); 2709 2709 } 2710 - spin_unlock(&lq_sta->pers.lock); 2710 + spin_unlock_bh(&lq_sta->pers.lock); 2711 2711 } 2712 2712 2713 2713 static void *rs_drv_alloc_sta(void *mvm_rate, struct ieee80211_sta *sta, ··· 3264 3264 /* If it's locked we are in middle of init flow 3265 3265 * just wait for next tx status to update the lq_sta data 3266 3266 */ 3267 - if (!spin_trylock(&mvmsta->deflink.lq_sta.rs_drv.pers.lock)) 3267 + if (!spin_trylock_bh(&mvmsta->deflink.lq_sta.rs_drv.pers.lock)) 3268 3268 return; 3269 3269 3270 3270 __iwl_mvm_rs_tx_status(mvm, sta, tid, info, ndp); 3271 - spin_unlock(&mvmsta->deflink.lq_sta.rs_drv.pers.lock); 3271 + spin_unlock_bh(&mvmsta->deflink.lq_sta.rs_drv.pers.lock); 3272 3272 } 3273 3273 3274 3274 #ifdef CONFIG_MAC80211_DEBUGFS ··· 4117 4117 } else { 4118 4118 struct iwl_mvm_sta *mvmsta = iwl_mvm_sta_from_mac80211(sta); 4119 4119 4120 - spin_lock(&mvmsta->deflink.lq_sta.rs_drv.pers.lock); 4120 + spin_lock_bh(&mvmsta->deflink.lq_sta.rs_drv.pers.lock); 4121 4121 rs_drv_rate_init(mvm, sta, band); 4122 - spin_unlock(&mvmsta->deflink.lq_sta.rs_drv.pers.lock); 4122 + spin_unlock_bh(&mvmsta->deflink.lq_sta.rs_drv.pers.lock); 4123 4123 } 4124 4124 } 4125 4125
+1
drivers/of/overlay.c
··· 811 811 if (!fragment->target) { 812 812 pr_err("symbols in overlay, but not in live tree\n"); 813 813 ret = -EINVAL; 814 + of_node_put(node); 814 815 goto err_out; 815 816 } 816 817
+1
drivers/pinctrl/meson/pinctrl-meson-axg.c
··· 400 400 GPIO_GROUP(GPIOA_15), 401 401 GPIO_GROUP(GPIOA_16), 402 402 GPIO_GROUP(GPIOA_17), 403 + GPIO_GROUP(GPIOA_18), 403 404 GPIO_GROUP(GPIOA_19), 404 405 GPIO_GROUP(GPIOA_20), 405 406
+15 -15
drivers/regulator/qcom-rpmh-regulator.c
··· 1057 1057 }; 1058 1058 1059 1059 static const struct rpmh_vreg_init_data pm8550_vreg_data[] = { 1060 - RPMH_VREG("ldo1", "ldo%s1", &pmic5_pldo, "vdd-l1-l4-l10"), 1060 + RPMH_VREG("ldo1", "ldo%s1", &pmic5_nldo515, "vdd-l1-l4-l10"), 1061 1061 RPMH_VREG("ldo2", "ldo%s2", &pmic5_pldo, "vdd-l2-l13-l14"), 1062 - RPMH_VREG("ldo3", "ldo%s3", &pmic5_nldo, "vdd-l3"), 1063 - RPMH_VREG("ldo4", "ldo%s4", &pmic5_nldo, "vdd-l1-l4-l10"), 1062 + RPMH_VREG("ldo3", "ldo%s3", &pmic5_nldo515, "vdd-l3"), 1063 + RPMH_VREG("ldo4", "ldo%s4", &pmic5_nldo515, "vdd-l1-l4-l10"), 1064 1064 RPMH_VREG("ldo5", "ldo%s5", &pmic5_pldo, "vdd-l5-l16"), 1065 - RPMH_VREG("ldo6", "ldo%s6", &pmic5_pldo_lv, "vdd-l6-l7"), 1066 - RPMH_VREG("ldo7", "ldo%s7", &pmic5_pldo_lv, "vdd-l6-l7"), 1067 - RPMH_VREG("ldo8", "ldo%s8", &pmic5_pldo_lv, "vdd-l8-l9"), 1065 + RPMH_VREG("ldo6", "ldo%s6", &pmic5_pldo, "vdd-l6-l7"), 1066 + RPMH_VREG("ldo7", "ldo%s7", &pmic5_pldo, "vdd-l6-l7"), 1067 + RPMH_VREG("ldo8", "ldo%s8", &pmic5_pldo, "vdd-l8-l9"), 1068 1068 RPMH_VREG("ldo9", "ldo%s9", &pmic5_pldo, "vdd-l8-l9"), 1069 - RPMH_VREG("ldo10", "ldo%s10", &pmic5_nldo, "vdd-l1-l4-l10"), 1070 - RPMH_VREG("ldo11", "ldo%s11", &pmic5_nldo, "vdd-l11"), 1069 + RPMH_VREG("ldo10", "ldo%s10", &pmic5_nldo515, "vdd-l1-l4-l10"), 1070 + RPMH_VREG("ldo11", "ldo%s11", &pmic5_nldo515, "vdd-l11"), 1071 1071 RPMH_VREG("ldo12", "ldo%s12", &pmic5_pldo, "vdd-l12"), 1072 1072 RPMH_VREG("ldo13", "ldo%s13", &pmic5_pldo, "vdd-l2-l13-l14"), 1073 1073 RPMH_VREG("ldo14", "ldo%s14", &pmic5_pldo, "vdd-l2-l13-l14"), 1074 - RPMH_VREG("ldo15", "ldo%s15", &pmic5_pldo, "vdd-l15"), 1074 + RPMH_VREG("ldo15", "ldo%s15", &pmic5_nldo515, "vdd-l15"), 1075 1075 RPMH_VREG("ldo16", "ldo%s16", &pmic5_pldo, "vdd-l5-l16"), 1076 1076 RPMH_VREG("ldo17", "ldo%s17", &pmic5_pldo, "vdd-l17"), 1077 1077 RPMH_VREG("bob1", "bob%s1", &pmic5_bob, "vdd-bob1"), ··· 1086 1086 RPMH_VREG("smps4", "smp%s4", &pmic5_ftsmps525_lv, "vdd-s4"),
1087 1087 RPMH_VREG("smps5", "smp%s5", &pmic5_ftsmps525_lv, "vdd-s5"), 1088 1088 RPMH_VREG("smps6", "smp%s6", &pmic5_ftsmps525_mv, "vdd-s6"), 1089 - RPMH_VREG("ldo1", "ldo%s1", &pmic5_nldo, "vdd-l1"), 1090 - RPMH_VREG("ldo2", "ldo%s2", &pmic5_nldo, "vdd-l2"), 1091 - RPMH_VREG("ldo3", "ldo%s3", &pmic5_nldo, "vdd-l3"), 1089 + RPMH_VREG("ldo1", "ldo%s1", &pmic5_nldo515, "vdd-l1"), 1090 + RPMH_VREG("ldo2", "ldo%s2", &pmic5_nldo515, "vdd-l2"), 1091 + RPMH_VREG("ldo3", "ldo%s3", &pmic5_nldo515, "vdd-l3"), 1092 1092 {} 1093 1093 }; 1094 1094 ··· 1101 1101 RPMH_VREG("smps6", "smp%s6", &pmic5_ftsmps525_lv, "vdd-s6"), 1102 1102 RPMH_VREG("smps7", "smp%s7", &pmic5_ftsmps525_lv, "vdd-s7"), 1103 1103 RPMH_VREG("smps8", "smp%s8", &pmic5_ftsmps525_lv, "vdd-s8"), 1104 - RPMH_VREG("ldo1", "ldo%s1", &pmic5_nldo, "vdd-l1"), 1105 - RPMH_VREG("ldo2", "ldo%s2", &pmic5_nldo, "vdd-l2"), 1106 - RPMH_VREG("ldo3", "ldo%s3", &pmic5_nldo, "vdd-l3"), 1104 + RPMH_VREG("ldo1", "ldo%s1", &pmic5_nldo515, "vdd-l1"), 1105 + RPMH_VREG("ldo2", "ldo%s2", &pmic5_nldo515, "vdd-l2"), 1106 + RPMH_VREG("ldo3", "ldo%s3", &pmic5_nldo515, "vdd-l3"), 1107 1107 {} 1108 1108 }; 1109 1109
+2 -2
drivers/s390/block/dasd_ioctl.c
··· 552 552 553 553 memcpy(dasd_info->type, base->discipline->name, 4); 554 554 555 - spin_lock_irqsave(&block->queue_lock, flags); 555 + spin_lock_irqsave(get_ccwdev_lock(base->cdev), flags); 556 556 list_for_each(l, &base->ccw_queue) 557 557 dasd_info->chanq_len++; 558 - spin_unlock_irqrestore(&block->queue_lock, flags); 558 + spin_unlock_irqrestore(get_ccwdev_lock(base->cdev), flags); 559 559 return 0; 560 560 } 561 561
+4 -1
drivers/s390/cio/device.c
··· 1376 1376 enum io_sch_action { 1377 1377 IO_SCH_UNREG, 1378 1378 IO_SCH_ORPH_UNREG, 1379 + IO_SCH_UNREG_CDEV, 1379 1380 IO_SCH_ATTACH, 1380 1381 IO_SCH_UNREG_ATTACH, 1381 1382 IO_SCH_ORPH_ATTACH, ··· 1409 1408 } 1410 1409 if ((sch->schib.pmcw.pam & sch->opm) == 0) { 1411 1410 if (ccw_device_notify(cdev, CIO_NO_PATH) != NOTIFY_OK) 1412 - return IO_SCH_UNREG; 1411 + return IO_SCH_UNREG_CDEV; 1413 1412 return IO_SCH_DISC; 1414 1413 } 1415 1414 if (device_is_disconnected(cdev)) ··· 1471 1470 case IO_SCH_ORPH_ATTACH: 1472 1471 ccw_device_set_disconnected(cdev); 1473 1472 break; 1473 + case IO_SCH_UNREG_CDEV: 1474 1474 case IO_SCH_UNREG_ATTACH: 1475 1475 case IO_SCH_UNREG: 1476 1476 if (!cdev) ··· 1505 1503 if (rc) 1506 1504 goto out; 1507 1505 break; 1506 + case IO_SCH_UNREG_CDEV: 1508 1507 case IO_SCH_UNREG_ATTACH: 1509 1508 spin_lock_irqsave(sch->lock, flags); 1510 1509 sch_set_cdev(sch, NULL);
-8
drivers/s390/net/ism_drv.c
··· 771 771 772 772 static void __exit ism_exit(void) 773 773 { 774 - struct ism_dev *ism; 775 - 776 - mutex_lock(&ism_dev_list.mutex); 777 - list_for_each_entry(ism, &ism_dev_list.list, list) { 778 - ism_dev_exit(ism); 779 - } 780 - mutex_unlock(&ism_dev_list.mutex); 781 - 782 774 pci_unregister_driver(&ism_driver); 783 775 debug_unregister(ism_debug_info); 784 776 }
+2 -1
drivers/soc/qcom/Makefile
··· 32 32 obj-$(CONFIG_QCOM_RPMPD) += rpmpd.o 33 33 obj-$(CONFIG_QCOM_KRYO_L2_ACCESSORS) += kryo-l2-accessors.o 34 34 obj-$(CONFIG_QCOM_ICC_BWMON) += icc-bwmon.o 35 - obj-$(CONFIG_QCOM_INLINE_CRYPTO_ENGINE) += ice.o 35 + qcom_ice-objs += ice.o 36 + obj-$(CONFIG_QCOM_INLINE_CRYPTO_ENGINE) += qcom_ice.o
+2 -2
drivers/soc/qcom/icc-bwmon.c
··· 773 773 bwmon->max_bw_kbps = UINT_MAX; 774 774 opp = dev_pm_opp_find_bw_floor(dev, &bwmon->max_bw_kbps, 0); 775 775 if (IS_ERR(opp)) 776 - return dev_err_probe(dev, ret, "failed to find max peak bandwidth\n"); 776 + return dev_err_probe(dev, PTR_ERR(opp), "failed to find max peak bandwidth\n"); 777 777 778 778 bwmon->min_bw_kbps = 0; 779 779 opp = dev_pm_opp_find_bw_ceil(dev, &bwmon->min_bw_kbps, 0); 780 780 if (IS_ERR(opp)) 781 - return dev_err_probe(dev, ret, "failed to find min peak bandwidth\n"); 781 + return dev_err_probe(dev, PTR_ERR(opp), "failed to find min peak bandwidth\n"); 782 782 783 783 bwmon->dev = dev; 784 784
+1 -1
drivers/soc/qcom/ramp_controller.c
··· 296 296 return -ENOMEM; 297 297 298 298 qrc->desc = device_get_match_data(&pdev->dev); 299 - if (!qrc) 299 + if (!qrc->desc) 300 300 return -EINVAL; 301 301 302 302 qrc->regmap = devm_regmap_init_mmio(&pdev->dev, base, &qrc_regmap_config);
+1
drivers/soc/qcom/rmtfs_mem.c
··· 233 233 num_vmids = 0; 234 234 } else if (num_vmids < 0) { 235 235 dev_err(&pdev->dev, "failed to count qcom,vmid elements: %d\n", num_vmids); 236 + ret = num_vmids; 236 237 goto remove_cdev; 237 238 } else if (num_vmids > NUM_MAX_VMIDS) { 238 239 dev_warn(&pdev->dev,
+1 -1
drivers/soc/qcom/rpmh-rsc.c
··· 1073 1073 drv->ver.minor = rsc_id & (MINOR_VER_MASK << MINOR_VER_SHIFT); 1074 1074 drv->ver.minor >>= MINOR_VER_SHIFT; 1075 1075 1076 - if (drv->ver.major == 3 && drv->ver.minor >= 0) 1076 + if (drv->ver.major == 3) 1077 1077 drv->regs = rpmh_rsc_reg_offset_ver_3_0; 1078 1078 else 1079 1079 drv->regs = rpmh_rsc_reg_offset_ver_2_7;
+16
drivers/soc/qcom/rpmhpd.c
··· 342 342 .num_pds = ARRAY_SIZE(sm8150_rpmhpds), 343 343 }; 344 344 345 + static struct rpmhpd *sa8155p_rpmhpds[] = { 346 + [SA8155P_CX] = &cx_w_mx_parent, 347 + [SA8155P_CX_AO] = &cx_ao_w_mx_parent, 348 + [SA8155P_EBI] = &ebi, 349 + [SA8155P_GFX] = &gfx, 350 + [SA8155P_MSS] = &mss, 351 + [SA8155P_MX] = &mx, 352 + [SA8155P_MX_AO] = &mx_ao, 353 + }; 354 + 355 + static const struct rpmhpd_desc sa8155p_desc = { 356 + .rpmhpds = sa8155p_rpmhpds, 357 + .num_pds = ARRAY_SIZE(sa8155p_rpmhpds), 358 + }; 359 + 345 360 /* SM8250 RPMH powerdomains */ 346 361 static struct rpmhpd *sm8250_rpmhpds[] = { 347 362 [SM8250_CX] = &cx_w_mx_parent, ··· 534 519 535 520 static const struct of_device_id rpmhpd_match_table[] = { 536 521 { .compatible = "qcom,qdu1000-rpmhpd", .data = &qdu1000_desc }, 522 + { .compatible = "qcom,sa8155p-rpmhpd", .data = &sa8155p_desc }, 537 523 { .compatible = "qcom,sa8540p-rpmhpd", .data = &sa8540p_desc }, 538 524 { .compatible = "qcom,sa8775p-rpmhpd", .data = &sa8775p_desc }, 539 525 { .compatible = "qcom,sc7180-rpmhpd", .data = &sc7180_desc },
+7
drivers/soundwire/dmi-quirks.c
··· 100 100 .driver_data = (void *)intel_tgl_bios, 101 101 }, 102 102 { 103 + .matches = { 104 + DMI_MATCH(DMI_SYS_VENDOR, "HP"), 105 + DMI_MATCH(DMI_BOARD_NAME, "8709"), 106 + }, 107 + .driver_data = (void *)intel_tgl_bios, 108 + }, 109 + { 103 110 /* quirk used for NUC15 'Bishop County' LAPBC510 and LAPBC710 skews */ 104 111 .matches = { 105 112 DMI_MATCH(DMI_SYS_VENDOR, "Intel(R) Client Systems"),
+13 -4
drivers/soundwire/qcom.c
··· 1099 1099 } 1100 1100 1101 1101 sruntime = sdw_alloc_stream(dai->name); 1102 - if (!sruntime) 1103 - return -ENOMEM; 1102 + if (!sruntime) { 1103 + ret = -ENOMEM; 1104 + goto err_alloc; 1105 + } 1104 1106 1105 1107 ctrl->sruntime[dai->id] = sruntime; 1106 1108 ··· 1112 1110 if (ret < 0 && ret != -ENOTSUPP) { 1113 1111 dev_err(dai->dev, "Failed to set sdw stream on %s\n", 1114 1112 codec_dai->name); 1115 - sdw_release_stream(sruntime); 1116 - return ret; 1113 + goto err_set_stream; 1117 1114 } 1118 1115 } 1119 1116 1120 1117 return 0; 1118 + 1119 + err_set_stream: 1120 + sdw_release_stream(sruntime); 1121 + err_alloc: 1122 + pm_runtime_mark_last_busy(ctrl->dev); 1123 + pm_runtime_put_autosuspend(ctrl->dev); 1124 + 1125 + return ret; 1121 1126 } 1122 1127 1123 1128 static void qcom_swrm_shutdown(struct snd_pcm_substream *substream,
+3 -1
drivers/soundwire/stream.c
··· 2021 2021 2022 2022 skip_alloc_master_rt: 2023 2023 s_rt = sdw_slave_rt_find(slave, stream); 2024 - if (s_rt) 2024 + if (s_rt) { 2025 + alloc_slave_rt = false; 2025 2026 goto skip_alloc_slave_rt; 2027 + } 2026 2028 2027 2029 s_rt = sdw_slave_rt_alloc(slave, m_rt); 2028 2030 if (!s_rt) {
+5 -2
drivers/spi/spi-cadence-quadspi.c
··· 1756 1756 cqspi->slow_sram = true; 1757 1757 1758 1758 if (of_device_is_compatible(pdev->dev.of_node, 1759 - "xlnx,versal-ospi-1.0")) 1760 - dma_set_mask(&pdev->dev, DMA_BIT_MASK(64)); 1759 + "xlnx,versal-ospi-1.0")) { 1760 + ret = dma_set_mask(&pdev->dev, DMA_BIT_MASK(64)); 1761 + if (ret) 1762 + goto probe_reset_failed; 1763 + } 1761 1764 } 1762 1765 1763 1766 ret = devm_request_irq(dev, irq, cqspi_irq_handler, 0,
+1 -1
drivers/spi/spi-dw-mmio.c
··· 274 274 */ 275 275 spi_set_chipselect(spi, 0, 0); 276 276 dw_spi_set_cs(spi, enable); 277 - spi_get_chipselect(spi, cs); 277 + spi_set_chipselect(spi, 0, cs); 278 278 } 279 279 280 280 static int dw_spi_elba_init(struct platform_device *pdev,
+15
drivers/spi/spi-fsl-dspi.c
··· 1002 1002 static int dspi_setup(struct spi_device *spi) 1003 1003 { 1004 1004 struct fsl_dspi *dspi = spi_controller_get_devdata(spi->controller); 1005 + u32 period_ns = DIV_ROUND_UP(NSEC_PER_SEC, spi->max_speed_hz); 1005 1006 unsigned char br = 0, pbr = 0, pcssck = 0, cssck = 0; 1007 + u32 quarter_period_ns = DIV_ROUND_UP(period_ns, 4); 1006 1008 u32 cs_sck_delay = 0, sck_cs_delay = 0; 1007 1009 struct fsl_dspi_platform_data *pdata; 1008 1010 unsigned char pasc = 0, asc = 0; ··· 1032 1030 cs_sck_delay = pdata->cs_sck_delay; 1033 1031 sck_cs_delay = pdata->sck_cs_delay; 1034 1032 } 1033 + 1034 + /* Since tCSC and tASC apply to continuous transfers too, avoid SCK 1035 + * glitches of half a cycle by never allowing tCSC + tASC to go below 1036 + * half a SCK period. 1037 + */ 1038 + if (cs_sck_delay < quarter_period_ns) 1039 + cs_sck_delay = quarter_period_ns; 1040 + if (sck_cs_delay < quarter_period_ns) 1041 + sck_cs_delay = quarter_period_ns; 1042 + 1043 + dev_dbg(&spi->dev, 1044 + "DSPI controller timing params: CS-to-SCK delay %u ns, SCK-to-CS delay %u ns\n", 1045 + cs_sck_delay, sck_cs_delay); 1035 1046 1036 1047 clkrate = clk_get_rate(dspi->clk); 1037 1048 hz_to_spi_baud(&pbr, &br, spi->max_speed_hz, clkrate);
+6 -4
drivers/tee/amdtee/amdtee_if.h
··· 118 118 119 119 /** 120 120 * struct tee_cmd_load_ta - load Trusted Application (TA) binary into TEE 121 - * @low_addr: [in] bits [31:0] of the physical address of the TA binary 122 - * @hi_addr: [in] bits [63:32] of the physical address of the TA binary 123 - * @size: [in] size of TA binary in bytes 124 - * @ta_handle: [out] return handle of the loaded TA 121 + * @low_addr: [in] bits [31:0] of the physical address of the TA binary 122 + * @hi_addr: [in] bits [63:32] of the physical address of the TA binary 123 + * @size: [in] size of TA binary in bytes 124 + * @ta_handle: [out] return handle of the loaded TA 125 + * @return_origin: [out] origin of return code after TEE processing 125 126 */ 126 127 struct tee_cmd_load_ta { 127 128 u32 low_addr; 128 129 u32 hi_addr; 129 130 u32 size; 130 131 u32 ta_handle; 132 + u32 return_origin; 131 133 }; 132 134 133 135 /**
+16 -12
drivers/tee/amdtee/call.c
··· 423 423 if (ret) { 424 424 arg->ret_origin = TEEC_ORIGIN_COMMS; 425 425 arg->ret = TEEC_ERROR_COMMUNICATION; 426 - } else if (arg->ret == TEEC_SUCCESS) { 427 - ret = get_ta_refcount(load_cmd.ta_handle); 428 - if (!ret) { 429 - arg->ret_origin = TEEC_ORIGIN_COMMS; 430 - arg->ret = TEEC_ERROR_OUT_OF_MEMORY; 426 + } else { 427 + arg->ret_origin = load_cmd.return_origin; 431 428 432 - /* Unload the TA on error */ 433 - unload_cmd.ta_handle = load_cmd.ta_handle; 434 - psp_tee_process_cmd(TEE_CMD_ID_UNLOAD_TA, 435 - (void *)&unload_cmd, 436 - sizeof(unload_cmd), &ret); 437 - } else { 438 - set_session_id(load_cmd.ta_handle, 0, &arg->session); 429 + if (arg->ret == TEEC_SUCCESS) { 430 + ret = get_ta_refcount(load_cmd.ta_handle); 431 + if (!ret) { 432 + arg->ret_origin = TEEC_ORIGIN_COMMS; 433 + arg->ret = TEEC_ERROR_OUT_OF_MEMORY; 434 + 435 + /* Unload the TA on error */ 436 + unload_cmd.ta_handle = load_cmd.ta_handle; 437 + psp_tee_process_cmd(TEE_CMD_ID_UNLOAD_TA, 438 + (void *)&unload_cmd, 439 + sizeof(unload_cmd), &ret); 440 + } else { 441 + set_session_id(load_cmd.ta_handle, 0, &arg->session); 442 + } 439 443 } 440 444 } 441 445 mutex_unlock(&ta_refcount_mutex);
+1 -1
drivers/vdpa/mlx5/net/mlx5_vnet.c
··· 3349 3349 mlx5_vdpa_remove_debugfs(ndev->debugfs); 3350 3350 ndev->debugfs = NULL; 3351 3351 unregister_link_notifier(ndev); 3352 + _vdpa_unregister_device(dev); 3352 3353 wq = mvdev->wq; 3353 3354 mvdev->wq = NULL; 3354 3355 destroy_workqueue(wq); 3355 - _vdpa_unregister_device(dev); 3356 3356 mgtdev->ndev = NULL; 3357 3357 } 3358 3358
+3
drivers/vdpa/vdpa_user/vduse_dev.c
··· 1685 1685 if (config->vq_num > 0xffff) 1686 1686 return false; 1687 1687 1688 + if (!config->name[0]) 1689 + return false; 1690 + 1688 1691 if (!device_is_allowed(config->device_id)) 1689 1692 return false; 1690 1693
+8 -3
drivers/vhost/net.c
··· 935 935 936 936 err = sock->ops->sendmsg(sock, &msg, len); 937 937 if (unlikely(err < 0)) { 938 + bool retry = err == -EAGAIN || err == -ENOMEM || err == -ENOBUFS; 939 + 938 940 if (zcopy_used) { 939 941 if (vq->heads[ubuf->desc].len == VHOST_DMA_IN_PROGRESS) 940 942 vhost_net_ubuf_put(ubufs); 941 - nvq->upend_idx = ((unsigned)nvq->upend_idx - 1) 942 - % UIO_MAXIOV; 943 + if (retry) 944 + nvq->upend_idx = ((unsigned)nvq->upend_idx - 1) 945 + % UIO_MAXIOV; 946 + else 947 + vq->heads[ubuf->desc].len = VHOST_DMA_DONE_LEN; 943 948 } 944 - if (err == -EAGAIN || err == -ENOMEM || err == -ENOBUFS) { 949 + if (retry) { 945 950 vhost_discard_vq_desc(vq, 1); 946 951 vhost_net_enable_vq(net, vq); 947 952 break;
+30 -4
drivers/vhost/vdpa.c
··· 407 407 { 408 408 struct vdpa_device *vdpa = v->vdpa; 409 409 const struct vdpa_config_ops *ops = vdpa->config; 410 + struct vhost_dev *d = &v->vdev; 411 + u64 actual_features; 410 412 u64 features; 413 + int i; 411 414 412 415 /* 413 416 * It's not allowed to change the features after they have ··· 424 421 425 422 if (vdpa_set_features(vdpa, features)) 426 423 return -EINVAL; 424 + 425 + /* let the vqs know what has been configured */ 426 + actual_features = ops->get_driver_features(vdpa); 427 + for (i = 0; i < d->nvqs; ++i) { 428 + struct vhost_virtqueue *vq = d->vqs[i]; 429 + 430 + mutex_lock(&vq->mutex); 431 + vq->acked_features = actual_features; 432 + mutex_unlock(&vq->mutex); 433 + } 427 434 428 435 return 0; 429 436 } ··· 607 594 if (r) 608 595 return r; 609 596 610 - vq->last_avail_idx = vq_state.split.avail_index; 597 + if (vhost_has_feature(vq, VIRTIO_F_RING_PACKED)) { 598 + vq->last_avail_idx = vq_state.packed.last_avail_idx | 599 + (vq_state.packed.last_avail_counter << 15); 600 + vq->last_used_idx = vq_state.packed.last_used_idx | 601 + (vq_state.packed.last_used_counter << 15); 602 + } else { 603 + vq->last_avail_idx = vq_state.split.avail_index; 604 + } 611 605 break; 612 606 } 613 607 ··· 632 612 break; 633 613 634 614 case VHOST_SET_VRING_BASE: 635 - vq_state.split.avail_index = vq->last_avail_idx; 636 - if (ops->set_vq_state(vdpa, idx, &vq_state)) 637 - r = -EINVAL; 615 + if (vhost_has_feature(vq, VIRTIO_F_RING_PACKED)) { 616 + vq_state.packed.last_avail_idx = vq->last_avail_idx & 0x7fff; 617 + vq_state.packed.last_avail_counter = !!(vq->last_avail_idx & 0x8000); 618 + vq_state.packed.last_used_idx = vq->last_used_idx & 0x7fff; 619 + vq_state.packed.last_used_counter = !!(vq->last_used_idx & 0x8000); 620 + } else { 621 + vq_state.split.avail_index = vq->last_avail_idx; 622 + } 623 + r = ops->set_vq_state(vdpa, idx, &vq_state); 638 624 break; 639 625 640 626 case VHOST_SET_VRING_CALL:
+34 -41
drivers/vhost/vhost.c
··· 235 235 { 236 236 struct vhost_flush_struct flush; 237 237 238 - if (dev->worker) { 238 + if (dev->worker.vtsk) { 239 239 init_completion(&flush.wait_event); 240 240 vhost_work_init(&flush.work, vhost_flush_work); 241 241 ··· 247 247 248 248 void vhost_work_queue(struct vhost_dev *dev, struct vhost_work *work) 249 249 { 250 - if (!dev->worker) 250 + if (!dev->worker.vtsk) 251 251 return; 252 252 253 253 if (!test_and_set_bit(VHOST_WORK_QUEUED, &work->flags)) { ··· 255 255 * sure it was not in the list. 256 256 * test_and_set_bit() implies a memory barrier. 257 257 */ 258 - llist_add(&work->node, &dev->worker->work_list); 259 - vhost_task_wake(dev->worker->vtsk); 258 + llist_add(&work->node, &dev->worker.work_list); 259 + vhost_task_wake(dev->worker.vtsk); 260 260 } 261 261 } 262 262 EXPORT_SYMBOL_GPL(vhost_work_queue); ··· 264 264 /* A lockless hint for busy polling code to exit the loop */ 265 265 bool vhost_has_work(struct vhost_dev *dev) 266 266 { 267 - return dev->worker && !llist_empty(&dev->worker->work_list); 267 + return !llist_empty(&dev->worker.work_list); 268 268 } 269 269 EXPORT_SYMBOL_GPL(vhost_has_work); 270 270 ··· 341 341 342 342 node = llist_del_all(&worker->work_list); 343 343 if (node) { 344 + __set_current_state(TASK_RUNNING); 345 + 344 346 node = llist_reverse_order(node); 345 347 /* make sure flag is seen after deletion */ 346 348 smp_wmb(); ··· 458 456 dev->umem = NULL; 459 457 dev->iotlb = NULL; 460 458 dev->mm = NULL; 461 - dev->worker = NULL; 459 + memset(&dev->worker, 0, sizeof(dev->worker)); 460 + init_llist_head(&dev->worker.work_list); 462 461 dev->iov_limit = iov_limit; 463 462 dev->weight = weight; 464 463 dev->byte_weight = byte_weight; ··· 533 530 534 531 static void vhost_worker_free(struct vhost_dev *dev) 535 532 { 536 - struct vhost_worker *worker = dev->worker; 537 - 538 - if (!worker) 533 + if (!dev->worker.vtsk) 539 534 return; 540 535 541 - dev->worker = NULL; 542 - WARN_ON(!llist_empty(&worker->work_list)); 543 - vhost_task_stop(worker->vtsk); 544 - kfree(worker); 536 + WARN_ON(!llist_empty(&dev->worker.work_list)); 537 + vhost_task_stop(dev->worker.vtsk); 538 + dev->worker.kcov_handle = 0; 539 + dev->worker.vtsk = NULL; 545 540 } 546 541 547 542 static int vhost_worker_create(struct vhost_dev *dev) 548 543 { 549 - struct vhost_worker *worker; 550 544 struct vhost_task *vtsk; 551 545 char name[TASK_COMM_LEN]; 552 - int ret; 553 546 554 - worker = kzalloc(sizeof(*worker), GFP_KERNEL_ACCOUNT); 555 - if (!worker) 556 - return -ENOMEM; 557 - 558 - dev->worker = worker; 559 - worker->kcov_handle = kcov_common_handle(); 560 - init_llist_head(&worker->work_list); 561 547 snprintf(name, sizeof(name), "vhost-%d", current->pid); 562 548 563 - vtsk = vhost_task_create(vhost_worker, worker, name); 564 - if (!vtsk) { 565 - ret = -ENOMEM; 566 - goto free_worker; 567 - } 549 + vtsk = vhost_task_create(vhost_worker, &dev->worker, name); 550 + if (!vtsk) 551 + return -ENOMEM; 568 552 569 - worker->vtsk = vtsk; 553 + dev->worker.kcov_handle = kcov_common_handle(); 554 + dev->worker.vtsk = vtsk; 570 555 vhost_task_start(vtsk); 571 556 return 0; 572 - 573 - free_worker: 574 - kfree(worker); 575 - dev->worker = NULL; 576 - return ret; 577 557 } 578 558 579 559 /* Caller should have device mutex */ ··· 1600 1614 r = -EFAULT; 1601 1615 break; 1602 1616 } 1603 - if (s.num > 0xffff) { 1604 - r = -EINVAL; 1605 - break; 1617 + if (vhost_has_feature(vq, VIRTIO_F_RING_PACKED)) { 1618 + vq->last_avail_idx = s.num & 0xffff; 1619 + vq->last_used_idx = (s.num >> 16) & 0xffff; 1620 + } else { 1621 + if (s.num > 0xffff) { 1622 + r = -EINVAL; 1623 + break; 1624 + } 1625 + vq->last_avail_idx = s.num; 1606 1626 } 1607 1627 /* Forget the cached index value. */ 1609 1628 vq->avail_idx = vq->last_avail_idx; 1610 1629 break; 1611 1630 case VHOST_GET_VRING_BASE: 1612 1631 s.index = idx; 1613 - s.num = vq->last_avail_idx; 1632 + if (vhost_has_feature(vq, VIRTIO_F_RING_PACKED)) 1633 + s.num = (u32)vq->last_avail_idx | ((u32)vq->last_used_idx << 16); 1634 + else 1635 + s.num = vq->last_avail_idx; 1614 1636 if (copy_to_user(argp, &s, sizeof s)) 1615 1637 r = -EFAULT; 1616 1638 break; ··· 2557 2563 /* Create a new message. */ 2558 2564 struct vhost_msg_node *vhost_new_msg(struct vhost_virtqueue *vq, int type) 2559 2565 { 2560 - struct vhost_msg_node *node = kmalloc(sizeof *node, GFP_KERNEL); 2566 + /* Make sure all padding within the structure is initialized. */ 2567 + struct vhost_msg_node *node = kzalloc(sizeof(*node), GFP_KERNEL); 2561 2568 if (!node) 2562 2569 return NULL; 2563 2570 2564 - /* Make sure all padding within the structure is initialized. */ 2565 - memset(&node->msg, 0, sizeof node->msg); 2566 2571 node->vq = vq; 2567 2572 node->msg.type = type; 2568 2573 return node;
+7 -3
drivers/vhost/vhost.h
··· 92 92 /* The routine to call when the Guest pings us, or timeout. */ 93 93 vhost_work_fn_t handle_kick; 94 94 95 - /* Last available index we saw. */ 95 + /* Last available index we saw. 96 + * Values are limited to 0x7fff, and the high bit is used as 97 + * a wrap counter when using VIRTIO_F_RING_PACKED. */ 96 98 u16 last_avail_idx; 97 99 98 100 /* Caches available index value from user. */ 99 101 u16 avail_idx; 100 102 101 - /* Last index we used. */ 103 + /* Last index we used. 104 + * Values are limited to 0x7fff, and the high bit is used as 105 + * a wrap counter when using VIRTIO_F_RING_PACKED. */ 102 106 u16 last_used_idx; 103 107 104 108 /* Used flags */ ··· 158 154 struct vhost_virtqueue **vqs; 159 155 int nvqs; 160 156 struct eventfd_ctx *log_ctx; 161 - struct vhost_worker *worker; 157 + struct vhost_worker worker; 162 158 struct vhost_iotlb *umem; 163 159 struct vhost_iotlb *iotlb; 164 160 spinlock_t iotlb_lock;
+5 -4
fs/btrfs/disk-io.c
··· 242 242 int mirror_num) 243 243 { 244 244 struct btrfs_fs_info *fs_info = eb->fs_info; 245 - u64 start = eb->start; 246 245 int i, num_pages = num_extent_pages(eb); 247 246 int ret = 0; 248 247 ··· 250 251 251 252 for (i = 0; i < num_pages; i++) { 252 253 struct page *p = eb->pages[i]; 254 + u64 start = max_t(u64, eb->start, page_offset(p)); 255 + u64 end = min_t(u64, eb->start + eb->len, page_offset(p) + PAGE_SIZE); 256 + u32 len = end - start; 253 257 254 - ret = btrfs_repair_io_failure(fs_info, 0, start, PAGE_SIZE, 255 - start, p, start - page_offset(p), mirror_num); 258 + ret = btrfs_repair_io_failure(fs_info, 0, start, len, 259 + start, p, offset_in_page(start), mirror_num); 256 260 if (ret) 257 261 break; 258 - start += PAGE_SIZE; 259 262 } 260 263 261 264 return ret;
+19 -7
fs/btrfs/scrub.c
··· 134 134 * The errors hit during the initial read of the stripe. 135 135 * 136 136 * Would be utilized for error reporting and repair. 137 + * 138 + * The remaining init_nr_* records the number of errors hit, only used 139 + * by error reporting. 137 140 */ 138 141 unsigned long init_error_bitmap; 142 + unsigned int init_nr_io_errors; 143 + unsigned int init_nr_csum_errors; 144 + unsigned int init_nr_meta_errors; 139 145 140 146 /* 141 147 * The following error bitmaps are all for the current status. ··· 1009 1003 sctx->stat.data_bytes_scrubbed += nr_data_sectors << fs_info->sectorsize_bits; 1010 1004 sctx->stat.tree_bytes_scrubbed += nr_meta_sectors << fs_info->sectorsize_bits; 1011 1005 sctx->stat.no_csum += nr_nodatacsum_sectors; 1012 - sctx->stat.read_errors += 1013 - bitmap_weight(&stripe->io_error_bitmap, stripe->nr_sectors); 1014 - sctx->stat.csum_errors += 1015 - bitmap_weight(&stripe->csum_error_bitmap, stripe->nr_sectors); 1016 - sctx->stat.verify_errors += 1017 - bitmap_weight(&stripe->meta_error_bitmap, stripe->nr_sectors); 1006 + sctx->stat.read_errors += stripe->init_nr_io_errors; 1007 + sctx->stat.csum_errors += stripe->init_nr_csum_errors; 1008 + sctx->stat.verify_errors += stripe->init_nr_meta_errors; 1018 1009 sctx->stat.uncorrectable_errors += 1019 1010 bitmap_weight(&stripe->error_bitmap, stripe->nr_sectors); 1020 1011 sctx->stat.corrected_errors += nr_repaired_sectors; ··· 1044 1041 scrub_verify_one_stripe(stripe, stripe->extent_sector_bitmap); 1045 1042 /* Save the initial failed bitmap for later repair and report usage. */ 1046 1043 stripe->init_error_bitmap = stripe->error_bitmap; 1044 + stripe->init_nr_io_errors = bitmap_weight(&stripe->io_error_bitmap, 1045 + stripe->nr_sectors); 1046 + stripe->init_nr_csum_errors = bitmap_weight(&stripe->csum_error_bitmap, 1047 + stripe->nr_sectors); 1048 + stripe->init_nr_meta_errors = bitmap_weight(&stripe->meta_error_bitmap, 1049 + stripe->nr_sectors); 1047 1050 1048 1051 if (bitmap_empty(&stripe->init_error_bitmap, stripe->nr_sectors)) 1049 1052 goto out; ··· 1499 1490 { 1500 1491 stripe->extent_sector_bitmap = 0; 1501 1492 stripe->init_error_bitmap = 0; 1493 + stripe->init_nr_io_errors = 0; 1494 + stripe->init_nr_csum_errors = 0; 1495 + stripe->init_nr_meta_errors = 0; 1502 1496 stripe->error_bitmap = 0; 1503 1497 stripe->io_error_bitmap = 0; 1504 1498 stripe->csum_error_bitmap = 0; ··· 1742 1730 break; 1743 1731 } 1744 1732 } 1745 - } else { 1733 + } else if (!sctx->readonly) { 1746 1734 for (int i = 0; i < nr_stripes; i++) { 1747 1735 unsigned long repaired; 1748 1736
+6
fs/btrfs/super.c
··· 1841 1841 btrfs_clear_sb_rdonly(sb); 1842 1842 1843 1843 set_bit(BTRFS_FS_OPEN, &fs_info->flags); 1844 + 1845 + /* 1846 + * If we've gone from readonly -> read/write, we need to get 1847 + * our sync/async discard lists in the right state. 1848 + */ 1849 + btrfs_discard_resume(fs_info); 1844 1850 } 1845 1851 out: 1846 1852 /*
+6
fs/ceph/caps.c
··· 1627 1627 struct inode *inode = &ci->netfs.inode; 1628 1628 struct ceph_mds_client *mdsc = ceph_inode_to_client(inode)->mdsc; 1629 1629 struct ceph_mds_session *session = NULL; 1630 + bool need_put = false; 1630 1631 int mds; 1631 1632 1632 1633 dout("ceph_flush_snaps %p\n", inode); ··· 1672 1671 ceph_put_mds_session(session); 1673 1672 /* we flushed them all; remove this inode from the queue */ 1674 1673 spin_lock(&mdsc->snap_flush_lock); 1674 + if (!list_empty(&ci->i_snap_flush_item)) 1675 + need_put = true; 1675 1676 list_del_init(&ci->i_snap_flush_item); 1676 1677 spin_unlock(&mdsc->snap_flush_lock); 1678 + 1679 + if (need_put) 1680 + iput(inode); 1677 1681 } 1678 1682 1679 1683 /*
+3 -1
fs/ceph/snap.c
··· 693 693 capsnap->size); 694 694 695 695 spin_lock(&mdsc->snap_flush_lock); 696 - if (list_empty(&ci->i_snap_flush_item)) 696 + if (list_empty(&ci->i_snap_flush_item)) { 697 + ihold(inode); 697 698 list_add_tail(&ci->i_snap_flush_item, &mdsc->snap_flush_list); 699 + } 698 700 spin_unlock(&mdsc->snap_flush_lock); 699 701 return 1; /* caller may want to ceph_flush_snaps */ 700 702 }
+5 -1
fs/eventpoll.c
··· 1805 1805 { 1806 1806 int ret = default_wake_function(wq_entry, mode, sync, key); 1807 1807 1808 - list_del_init(&wq_entry->entry); 1808 + /* 1809 + * Pairs with list_empty_careful in ep_poll, and ensures future loop 1810 + * iterations see the cause of this wakeup. 1811 + */ 1812 + list_del_init_careful(&wq_entry->entry); 1809 1813 return ret; 1810 1814 } 1811 1815
+12 -11
fs/ext4/balloc.c
··· 324 324 struct ext4_group_info *ext4_get_group_info(struct super_block *sb, 325 325 ext4_group_t group) 326 326 { 327 - struct ext4_group_info **grp_info; 328 - long indexv, indexh; 327 + struct ext4_group_info **grp_info; 328 + long indexv, indexh; 329 329 330 - if (unlikely(group >= EXT4_SB(sb)->s_groups_count)) { 331 - ext4_error(sb, "invalid group %u", group); 332 - return NULL; 333 - } 334 - indexv = group >> (EXT4_DESC_PER_BLOCK_BITS(sb)); 335 - indexh = group & ((EXT4_DESC_PER_BLOCK(sb)) - 1); 336 - grp_info = sbi_array_rcu_deref(EXT4_SB(sb), s_group_info, indexv); 337 - return grp_info[indexh]; 330 + if (unlikely(group >= EXT4_SB(sb)->s_groups_count)) 331 + return NULL; 332 + indexv = group >> (EXT4_DESC_PER_BLOCK_BITS(sb)); 333 + indexh = group & ((EXT4_DESC_PER_BLOCK(sb)) - 1); 334 + grp_info = sbi_array_rcu_deref(EXT4_SB(sb), s_group_info, indexv); 335 + return grp_info[indexh]; 338 336 } 339 337 340 338 /* ··· 884 886 if (!ext4_bg_has_super(sb, group)) 885 887 return 0; 886 888 887 - return EXT4_SB(sb)->s_gdb_count; 889 + if (ext4_has_feature_meta_bg(sb)) 890 + return le32_to_cpu(EXT4_SB(sb)->s_es->s_first_meta_bg); 891 + else 892 + return EXT4_SB(sb)->s_gdb_count; 888 893 } 889 894 890 895 /**
+1 -5
fs/ext4/super.c
··· 6388 6388 struct ext4_mount_options old_opts; 6389 6389 ext4_group_t g; 6390 6390 int err = 0; 6391 - int enable_rw = 0; 6392 6391 #ifdef CONFIG_QUOTA 6393 6392 int enable_quota = 0; 6394 6393 int i, j; ··· 6574 6575 if (err) 6575 6576 goto restore_opts; 6576 6577 6577 - enable_rw = 1; 6578 + sb->s_flags &= ~SB_RDONLY; 6578 6579 if (ext4_has_feature_mmp(sb)) { 6579 6580 err = ext4_multi_mount_protect(sb, 6580 6581 le64_to_cpu(es->s_mmp_block)); ··· 6620 6621 #endif 6621 6622 if (!test_opt(sb, BLOCK_VALIDITY) && sbi->s_system_blks) 6622 6623 ext4_release_system_zone(sb); 6623 - 6624 - if (enable_rw) 6625 - sb->s_flags &= ~SB_RDONLY; 6626 6624 6627 6625 /* 6628 6626 * Reinitialize lazy itable initialization thread based on
+4 -2
fs/ext4/xattr.c
··· 2056 2056 else { 2057 2057 u32 ref; 2058 2058 2059 + #ifdef EXT4_XATTR_DEBUG 2059 2060 WARN_ON_ONCE(dquot_initialize_needed(inode)); 2060 - 2061 + #endif 2061 2062 /* The old block is released after updating 2062 2063 the inode. */ 2063 2064 error = dquot_alloc_block(inode, ··· 2121 2120 /* We need to allocate a new block */ 2122 2121 ext4_fsblk_t goal, block; 2123 2122 2123 + #ifdef EXT4_XATTR_DEBUG 2124 2124 WARN_ON_ONCE(dquot_initialize_needed(inode)); 2125 - 2125 + #endif 2126 2126 goal = ext4_group_first_block_no(sb, 2127 2127 EXT4_I(inode)->i_block_group); 2128 2128 block = ext4_new_meta_blocks(handle, inode, goal, 0,
+10 -2
fs/nilfs2/btnode.c
··· 285 285 if (nbh == NULL) { /* blocksize == pagesize */ 286 286 xa_erase_irq(&btnc->i_pages, newkey); 287 287 unlock_page(ctxt->bh->b_page); 288 - } else 289 - brelse(nbh); 288 + } else { 289 + /* 290 + * When canceling a buffer that a prepare operation has 291 + * allocated to copy a node block to another location, use 292 + * nilfs_btnode_delete() to initialize and release the buffer 293 + * so that the buffer flags will not be in an inconsistent 294 + * state when it is reallocated. 295 + */ 296 + nilfs_btnode_delete(nbh); 297 + } 290 298 }
+9
fs/nilfs2/sufile.c
··· 779 779 goto out_header; 780 780 781 781 sui->ncleansegs -= nsegs - newnsegs; 782 + 783 + /* 784 + * If the sufile is successfully truncated, immediately adjust 785 + * the segment allocation space while locking the semaphore 786 + * "mi_sem" so that nilfs_sufile_alloc() never allocates 787 + * segments in the truncated space. 788 + */ 789 + sui->allocmax = newnsegs - 1; 790 + sui->allocmin = 0; 782 791 } 783 792 784 793 kaddr = kmap_atomic(header_bh->b_page);
+42 -1
fs/nilfs2/the_nilfs.c
··· 405 405 100)); 406 406 } 407 407 408 + /** 409 + * nilfs_max_segment_count - calculate the maximum number of segments 410 + * @nilfs: nilfs object 411 + */ 412 + static u64 nilfs_max_segment_count(struct the_nilfs *nilfs) 413 + { 414 + u64 max_count = U64_MAX; 415 + 416 + do_div(max_count, nilfs->ns_blocks_per_segment); 417 + return min_t(u64, max_count, ULONG_MAX); 418 + } 419 + 408 420 void nilfs_set_nsegments(struct the_nilfs *nilfs, unsigned long nsegs) 409 421 { 410 422 nilfs->ns_nsegments = nsegs; ··· 426 414 static int nilfs_store_disk_layout(struct the_nilfs *nilfs, 427 415 struct nilfs_super_block *sbp) 428 416 { 417 + u64 nsegments, nblocks; 418 + 429 419 if (le32_to_cpu(sbp->s_rev_level) < NILFS_MIN_SUPP_REV) { 430 420 nilfs_err(nilfs->ns_sb, 431 421 "unsupported revision (superblock rev.=%d.%d, current rev.=%d.%d). Please check the version of mkfs.nilfs(2).", ··· 471 457 return -EINVAL; 472 458 } 473 459 474 - nilfs_set_nsegments(nilfs, le64_to_cpu(sbp->s_nsegments)); 460 + nsegments = le64_to_cpu(sbp->s_nsegments); 461 + if (nsegments > nilfs_max_segment_count(nilfs)) { 462 + nilfs_err(nilfs->ns_sb, 463 + "segment count %llu exceeds upper limit (%llu segments)", 464 + (unsigned long long)nsegments, 465 + (unsigned long long)nilfs_max_segment_count(nilfs)); 466 + return -EINVAL; 467 + } 468 + 469 + nblocks = sb_bdev_nr_blocks(nilfs->ns_sb); 470 + if (nblocks) { 471 + u64 min_block_count = nsegments * nilfs->ns_blocks_per_segment; 472 + /* 473 + * To avoid failing to mount early device images without a 474 + * second superblock, exclude that block count from the 475 + * "min_block_count" calculation. 476 + */ 477 + 478 + if (nblocks < min_block_count) { 479 + nilfs_err(nilfs->ns_sb, 480 + "total number of segment blocks %llu exceeds device size (%llu blocks)", 481 + (unsigned long long)min_block_count, 482 + (unsigned long long)nblocks); 483 + return -EINVAL; 484 + } 485 + } 486 + 487 + nilfs_set_nsegments(nilfs, nsegments); 475 488 nilfs->ns_crc_seed = le32_to_cpu(sbp->s_crc_seed); 476 489 return 0; 477 490 }
+7 -1
fs/ocfs2/file.c
··· 2100 2100 struct ocfs2_space_resv sr; 2101 2101 int change_size = 1; 2102 2102 int cmd = OCFS2_IOC_RESVSP64; 2103 + int ret = 0; 2103 2104 2104 2105 if (mode & ~(FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE)) 2105 2106 return -EOPNOTSUPP; 2106 2107 if (!ocfs2_writes_unwritten_extents(osb)) 2107 2108 return -EOPNOTSUPP; 2108 2109 2109 - if (mode & FALLOC_FL_KEEP_SIZE) 2110 + if (mode & FALLOC_FL_KEEP_SIZE) { 2110 2111 change_size = 0; 2112 + } else { 2113 + ret = inode_newsize_ok(inode, offset + len); 2114 + if (ret) 2115 + return ret; 2116 + } 2111 2117 2112 2118 if (mode & FALLOC_FL_PUNCH_HOLE) 2113 2119 cmd = OCFS2_IOC_UNRESVSP64;
+4 -2
fs/ocfs2/super.c
··· 952 952 for (type = 0; type < OCFS2_MAXQUOTAS; type++) { 953 953 if (!sb_has_quota_loaded(sb, type)) 954 954 continue; 955 - oinfo = sb_dqinfo(sb, type)->dqi_priv; 956 - cancel_delayed_work_sync(&oinfo->dqi_sync_work); 955 + if (!sb_has_quota_suspended(sb, type)) { 956 + oinfo = sb_dqinfo(sb, type)->dqi_priv; 957 + cancel_delayed_work_sync(&oinfo->dqi_sync_work); 958 + } 957 959 inode = igrab(sb->s_dquot.files[type]); 958 960 /* Turn off quotas. This will remove all dquot structures from 959 961 * memory and so they will be automatically synced to global
+54 -4
fs/smb/client/cifs_debug.c
··· 12 12 #include <linux/module.h> 13 13 #include <linux/proc_fs.h> 14 14 #include <linux/uaccess.h> 15 + #include <uapi/linux/ethtool.h> 15 16 #include "cifspdu.h" 16 17 #include "cifsglob.h" 17 18 #include "cifsproto.h" ··· 131 130 struct TCP_Server_Info *server = chan->server; 132 131 133 132 seq_printf(m, "\n\n\t\tChannel: %d ConnectionId: 0x%llx" 134 - "\n\t\tNumber of credits: %d Dialect 0x%x" 133 + "\n\t\tNumber of credits: %d,%d,%d Dialect 0x%x" 135 134 "\n\t\tTCP status: %d Instance: %d" 136 135 "\n\t\tLocal Users To Server: %d SecMode: 0x%x Req On Wire: %d" 137 136 "\n\t\tIn Send: %d In MaxReq Wait: %d", 138 137 i+1, server->conn_id, 139 138 server->credits, 139 + server->echo_credits, 140 + server->oplock_credits, 140 141 server->dialect, 141 142 server->tcpStatus, 142 143 server->reconnect_instance, ··· 149 146 atomic_read(&server->num_waiters)); 150 147 } 151 148 149 + static inline const char *smb_speed_to_str(size_t bps) 150 + { 151 + size_t mbps = bps / 1000 / 1000; 152 + 153 + switch (mbps) { 154 + case SPEED_10: 155 + return "10Mbps"; 156 + case SPEED_100: 157 + return "100Mbps"; 158 + case SPEED_1000: 159 + return "1Gbps"; 160 + case SPEED_2500: 161 + return "2.5Gbps"; 162 + case SPEED_5000: 163 + return "5Gbps"; 164 + case SPEED_10000: 165 + return "10Gbps"; 166 + case SPEED_14000: 167 + return "14Gbps"; 168 + case SPEED_20000: 169 + return "20Gbps"; 170 + case SPEED_25000: 171 + return "25Gbps"; 172 + case SPEED_40000: 173 + return "40Gbps"; 174 + case SPEED_50000: 175 + return "50Gbps"; 176 + case SPEED_56000: 177 + return "56Gbps"; 178 + case SPEED_100000: 179 + return "100Gbps"; 180 + case SPEED_200000: 181 + return "200Gbps"; 182 + case SPEED_400000: 183 + return "400Gbps"; 184 + case SPEED_800000: 185 + return "800Gbps"; 186 + default: 187 + return "Unknown"; 188 + } 189 + } 190 + 152 191 static void 153 192 cifs_dump_iface(struct seq_file *m, struct cifs_server_iface *iface) 154 193 { 155 194 struct sockaddr_in *ipv4 = (struct sockaddr_in *)&iface->sockaddr; 156 195 struct sockaddr_in6 *ipv6 = (struct sockaddr_in6 *)&iface->sockaddr; 157 196 158 - seq_printf(m, "\tSpeed: %zu bps\n", iface->speed); 197 + seq_printf(m, "\tSpeed: %s\n", smb_speed_to_str(iface->speed)); 159 198 seq_puts(m, "\t\tCapabilities: "); 160 199 if (iface->rdma_capable) 161 200 seq_puts(m, "rdma "); 162 201 if (iface->rss_capable) 163 202 seq_puts(m, "rss "); 203 + if (!iface->rdma_capable && !iface->rss_capable) 204 + seq_puts(m, "None"); 164 205 seq_putc(m, '\n'); 165 206 if (iface->sockaddr.ss_family == AF_INET) 166 207 seq_printf(m, "\t\tIPv4: %pI4\n", &ipv4->sin_addr); ··· 397 350 atomic_read(&server->smbd_conn->mr_used_count)); 398 351 skip_rdma: 399 352 #endif 400 - seq_printf(m, "\nNumber of credits: %d Dialect 0x%x", 401 - server->credits, server->dialect); 353 + seq_printf(m, "\nNumber of credits: %d,%d,%d Dialect 0x%x", 354 + server->credits, 355 + server->echo_credits, 356 + server->oplock_credits, 357 + server->dialect); 402 358 if (server->compress_algorithm == SMB3_COMPRESS_LZNT1) 403 359 seq_printf(m, " COMPRESS_LZNT1"); 404 360 else if (server->compress_algorithm == SMB3_COMPRESS_LZ77)
-37
fs/smb/client/cifsglob.h
··· 970 970 kfree(iface); 971 971 } 972 972 973 - /* 974 - * compare two interfaces a and b 975 - * return 0 if everything matches. 976 - * return 1 if a has higher link speed, or rdma capable, or rss capable 977 - * return -1 otherwise. 978 - */ 979 - static inline int 980 - iface_cmp(struct cifs_server_iface *a, struct cifs_server_iface *b) 981 - { 982 - int cmp_ret = 0; 983 - 984 - WARN_ON(!a || !b); 985 - if (a->speed == b->speed) { 986 - if (a->rdma_capable == b->rdma_capable) { 987 - if (a->rss_capable == b->rss_capable) { 988 - cmp_ret = memcmp(&a->sockaddr, &b->sockaddr, 989 - sizeof(a->sockaddr)); 990 - if (!cmp_ret) 991 - return 0; 992 - else if (cmp_ret > 0) 993 - return 1; 994 - else 995 - return -1; 996 - } else if (a->rss_capable > b->rss_capable) 997 - return 1; 998 - else 999 - return -1; 1000 - } else if (a->rdma_capable > b->rdma_capable) 1001 - return 1; 1002 - else 1003 - return -1; 1004 - } else if (a->speed > b->speed) 1005 - return 1; 1006 - else 1007 - return -1; 1008 - } 1009 - 1010 973 struct cifs_chan { 1011 974 unsigned int in_reconnect : 1; /* if session setup in progress for this channel */ 1012 975 struct TCP_Server_Info *server;
+1
fs/smb/client/cifsproto.h
··· 87 87 struct mid_q_entry *mid); 88 88 extern int smb3_parse_devname(const char *devname, struct smb3_fs_context *ctx); 89 89 extern int smb3_parse_opt(const char *options, const char *key, char **val); 90 + extern int cifs_ipaddr_cmp(struct sockaddr *srcaddr, struct sockaddr *rhs); 90 91 extern bool cifs_match_ipaddr(struct sockaddr *srcaddr, struct sockaddr *rhs); 91 92 extern int cifs_discard_remaining_data(struct TCP_Server_Info *server); 92 93 extern int cifs_call_async(struct TCP_Server_Info *server,
+55 -4
fs/smb/client/connect.c
··· 1288 1288 module_put_and_kthread_exit(0); 1289 1289 } 1290 1290 1291 + int 1292 + cifs_ipaddr_cmp(struct sockaddr *srcaddr, struct sockaddr *rhs) 1293 + { 1294 + struct sockaddr_in *saddr4 = (struct sockaddr_in *)srcaddr; 1295 + struct sockaddr_in *vaddr4 = (struct sockaddr_in *)rhs; 1296 + struct sockaddr_in6 *saddr6 = (struct sockaddr_in6 *)srcaddr; 1297 + struct sockaddr_in6 *vaddr6 = (struct sockaddr_in6 *)rhs; 1298 + 1299 + switch (srcaddr->sa_family) { 1300 + case AF_UNSPEC: 1301 + switch (rhs->sa_family) { 1302 + case AF_UNSPEC: 1303 + return 0; 1304 + case AF_INET: 1305 + case AF_INET6: 1306 + return 1; 1307 + default: 1308 + return -1; 1309 + } 1310 + case AF_INET: { 1311 + switch (rhs->sa_family) { 1312 + case AF_UNSPEC: 1313 + return -1; 1314 + case AF_INET: 1315 + return memcmp(saddr4, vaddr4, 1316 + sizeof(struct sockaddr_in)); 1317 + case AF_INET6: 1318 + return 1; 1319 + default: 1320 + return -1; 1321 + } 1322 + } 1323 + case AF_INET6: { 1324 + switch (rhs->sa_family) { 1325 + case AF_UNSPEC: 1326 + case AF_INET: 1327 + return -1; 1328 + case AF_INET6: 1329 + return memcmp(saddr6, 1330 + vaddr6, 1331 + sizeof(struct sockaddr_in6)); 1332 + default: 1333 + return -1; 1334 + } 1335 + } 1336 + default: 1337 + return -1; /* don't expect to be here */ 1338 + } 1339 + } 1340 + 1291 1341 /* 1292 1342 * Returns true if srcaddr isn't specified and rhs isn't specified, or 1293 1343 * if srcaddr is specified and matches the IP address of the rhs argument ··· 4136 4086 4137 4087 /* only send once per connect */ 4138 4088 spin_lock(&tcon->tc_lock); 4089 + if (tcon->status == TID_GOOD) { 4090 + spin_unlock(&tcon->tc_lock); 4091 + return 0; 4092 + } 4093 + 4139 4094 if (tcon->status != TID_NEW && 4140 4095 tcon->status != TID_NEED_TCON) { 4141 4096 spin_unlock(&tcon->tc_lock); 4142 4097 return -EHOSTDOWN; 4143 4098 } 4144 4099 4145 - if (tcon->status == TID_GOOD) { 4146 - spin_unlock(&tcon->tc_lock); 4147 - return 0; 4148 - } 4149 4100 tcon->status = 
TID_IN_TCON; 4150 4101 spin_unlock(&tcon->tc_lock); 4151 4102
+5 -4
fs/smb/client/dfs.c
··· 575 575 576 576 /* only send once per connect */ 577 577 spin_lock(&tcon->tc_lock); 578 + if (tcon->status == TID_GOOD) { 579 + spin_unlock(&tcon->tc_lock); 580 + return 0; 581 + } 582 + 578 583 if (tcon->status != TID_NEW && 579 584 tcon->status != TID_NEED_TCON) { 580 585 spin_unlock(&tcon->tc_lock); 581 586 return -EHOSTDOWN; 582 587 } 583 588 584 - if (tcon->status == TID_GOOD) { 585 - spin_unlock(&tcon->tc_lock); 586 - return 0; 587 - } 588 589 tcon->status = TID_IN_TCON; 589 590 spin_unlock(&tcon->tc_lock); 590 591
+6 -2
fs/smb/client/file.c
··· 4942 4942 * disconnected since oplock already released by the server 4943 4943 */ 4944 4944 if (!oplock_break_cancelled) { 4945 - rc = tcon->ses->server->ops->oplock_response(tcon, persistent_fid, 4945 + /* check for server null since can race with kill_sb calling tree disconnect */ 4946 + if (tcon->ses && tcon->ses->server) { 4947 + rc = tcon->ses->server->ops->oplock_response(tcon, persistent_fid, 4946 4948 volatile_fid, net_fid, cinode); 4947 - cifs_dbg(FYI, "Oplock release rc = %d\n", rc); 4949 + cifs_dbg(FYI, "Oplock release rc = %d\n", rc); 4950 + } else 4951 + pr_warn_once("lease break not sent for unmounted share\n"); 4948 4952 } 4949 4953 4950 4954 cifs_done_oplock_break(cinode);
+40
fs/smb/client/smb2ops.c
··· 34 34 change_conf(struct TCP_Server_Info *server) 35 35 { 36 36 server->credits += server->echo_credits + server->oplock_credits; 37 + if (server->credits > server->max_credits) 38 + server->credits = server->max_credits; 37 39 server->oplock_credits = server->echo_credits = 0; 38 40 switch (server->credits) { 39 41 case 0: ··· 93 91 server->conn_id, server->hostname, *val, 94 92 add, server->in_flight); 95 93 } 94 + WARN_ON_ONCE(server->in_flight == 0); 96 95 server->in_flight--; 97 96 if (server->in_flight == 0 && 98 97 ((optype & CIFS_OP_MASK) != CIFS_NEG_OP) && ··· 511 508 rsize = min_t(unsigned int, rsize, SMB2_MAX_BUFFER_SIZE); 512 509 513 510 return rsize; 511 + } 512 + 513 + /* 514 + * compare two interfaces a and b 515 + * return 0 if everything matches. 516 + * return 1 if a is rdma capable, or rss capable, or has higher link speed 517 + * return -1 otherwise. 518 + */ 519 + static int 520 + iface_cmp(struct cifs_server_iface *a, struct cifs_server_iface *b) 521 + { 522 + int cmp_ret = 0; 523 + 524 + WARN_ON(!a || !b); 525 + if (a->rdma_capable == b->rdma_capable) { 526 + if (a->rss_capable == b->rss_capable) { 527 + if (a->speed == b->speed) { 528 + cmp_ret = cifs_ipaddr_cmp((struct sockaddr *) &a->sockaddr, 529 + (struct sockaddr *) &b->sockaddr); 530 + if (!cmp_ret) 531 + return 0; 532 + else if (cmp_ret > 0) 533 + return 1; 534 + else 535 + return -1; 536 + } else if (a->speed > b->speed) 537 + return 1; 538 + else 539 + return -1; 540 + } else if (a->rss_capable > b->rss_capable) 541 + return 1; 542 + else 543 + return -1; 544 + } else if (a->rdma_capable > b->rdma_capable) 545 + return 1; 546 + else 547 + return -1; 514 548 } 515 549 516 550 static int
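The relocated `iface_cmp()` above ranks server interfaces by RDMA capability first, then RSS capability, then link speed, falling back to the new `cifs_ipaddr_cmp()` as a tiebreaker. The nested if/else chain is a standard multi-key comparator; a minimal standalone sketch of the same ordering (the struct and field names here are illustrative stand-ins, not the kernel types):

```c
#include <assert.h>
#include <string.h>

struct iface {
	int rdma_capable;   /* 0 or 1 */
	int rss_capable;    /* 0 or 1 */
	unsigned long speed;
	char addr[16];      /* stand-in for the sockaddr tiebreaker */
};

/* Compare like iface_cmp(): rdma > rss > speed > address. */
static int iface_rank_cmp(const struct iface *a, const struct iface *b)
{
	int c;

	if (a->rdma_capable != b->rdma_capable)
		return a->rdma_capable > b->rdma_capable ? 1 : -1;
	if (a->rss_capable != b->rss_capable)
		return a->rss_capable > b->rss_capable ? 1 : -1;
	if (a->speed != b->speed)
		return a->speed > b->speed ? 1 : -1;
	c = memcmp(a->addr, b->addr, sizeof(a->addr));
	return c > 0 ? 1 : (c < 0 ? -1 : 0);
}
```

Note the key inversion versus the deleted cifsglob.h version: capability now outranks raw speed.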
+28 -4
fs/smb/client/smb2pdu.c
··· 1305 1305 } 1306 1306 1307 1307 /* enough to enable echos and oplocks and one max size write */ 1308 - req->hdr.CreditRequest = cpu_to_le16(130); 1308 + if (server->credits >= server->max_credits) 1309 + req->hdr.CreditRequest = cpu_to_le16(0); 1310 + else 1311 + req->hdr.CreditRequest = cpu_to_le16( 1312 + min_t(int, server->max_credits - 1313 + server->credits, 130)); 1309 1314 1310 1315 /* only one of SMB2 signing flags may be set in SMB2 request */ 1311 1316 if (server->sign) ··· 1904 1899 rqst.rq_nvec = 2; 1905 1900 1906 1901 /* Need 64 for max size write so ask for more in case not there yet */ 1907 - req->hdr.CreditRequest = cpu_to_le16(64); 1902 + if (server->credits >= server->max_credits) 1903 + req->hdr.CreditRequest = cpu_to_le16(0); 1904 + else 1905 + req->hdr.CreditRequest = cpu_to_le16( 1906 + min_t(int, server->max_credits - 1907 + server->credits, 64)); 1908 1908 1909 1909 rc = cifs_send_recv(xid, ses, server, 1910 1910 &rqst, &resp_buftype, flags, &rsp_iov); ··· 4237 4227 struct TCP_Server_Info *server; 4238 4228 struct cifs_tcon *tcon = tlink_tcon(rdata->cfile->tlink); 4239 4229 unsigned int total_len; 4230 + int credit_request; 4240 4231 4241 4232 cifs_dbg(FYI, "%s: offset=%llu bytes=%u\n", 4242 4233 __func__, rdata->offset, rdata->bytes); ··· 4269 4258 if (rdata->credits.value > 0) { 4270 4259 shdr->CreditCharge = cpu_to_le16(DIV_ROUND_UP(rdata->bytes, 4271 4260 SMB2_MAX_BUFFER_SIZE)); 4272 - shdr->CreditRequest = cpu_to_le16(le16_to_cpu(shdr->CreditCharge) + 8); 4261 + credit_request = le16_to_cpu(shdr->CreditCharge) + 8; 4262 + if (server->credits >= server->max_credits) 4263 + shdr->CreditRequest = cpu_to_le16(0); 4264 + else 4265 + shdr->CreditRequest = cpu_to_le16( 4266 + min_t(int, server->max_credits - 4267 + server->credits, credit_request)); 4273 4268 4274 4269 rc = adjust_credits(server, &rdata->credits, rdata->bytes); 4275 4270 if (rc) ··· 4485 4468 unsigned int total_len; 4486 4469 struct cifs_io_parms _io_parms; 4487 4470 
struct cifs_io_parms *io_parms = NULL; 4471 + int credit_request; 4488 4472 4489 4473 if (!wdata->server) 4490 4474 server = wdata->server = cifs_pick_channel(tcon->ses); ··· 4590 4572 if (wdata->credits.value > 0) { 4591 4573 shdr->CreditCharge = cpu_to_le16(DIV_ROUND_UP(wdata->bytes, 4592 4574 SMB2_MAX_BUFFER_SIZE)); 4593 - shdr->CreditRequest = cpu_to_le16(le16_to_cpu(shdr->CreditCharge) + 8); 4575 + credit_request = le16_to_cpu(shdr->CreditCharge) + 8; 4576 + if (server->credits >= server->max_credits) 4577 + shdr->CreditRequest = cpu_to_le16(0); 4578 + else 4579 + shdr->CreditRequest = cpu_to_le16( 4580 + min_t(int, server->max_credits - 4581 + server->credits, credit_request)); 4594 4582 4595 4583 rc = adjust_credits(server, &wdata->credits, io_parms->length); 4596 4584 if (rc)
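Several hunks above replace a fixed `CreditRequest` value with the same clamp: never request more credits than `max_credits` minus what the server connection already holds, and request none at all once the cap is reached. The repeated pattern factors out to a small helper (name and standalone form are illustrative, not kernel code):

```c
#include <assert.h>

/* Clamp a credit request so total credits never exceed max_credits,
 * mirroring the pattern repeated in the smb2pdu.c hunks above. */
static int clamp_credit_request(int credits, int max_credits, int wanted)
{
	if (credits >= max_credits)
		return 0;
	return wanted < max_credits - credits ? wanted
					      : max_credits - credits;
}
```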
+1 -1
fs/smb/client/transport.c
··· 55 55 temp->pid = current->pid; 56 56 temp->command = cpu_to_le16(smb_buffer->Command); 57 57 cifs_dbg(FYI, "For smb_command %d\n", smb_buffer->Command); 58 - /* do_gettimeofday(&temp->when_sent);*/ /* easier to use jiffies */ 58 + /* easier to use jiffies */ 59 59 /* when mid allocated can be before when sent */ 60 60 temp->when_alloc = jiffies; 61 61 temp->server = server;
+15 -2
fs/smb/server/connection.c
··· 294 294 return true; 295 295 } 296 296 297 + #define SMB1_MIN_SUPPORTED_HEADER_SIZE (sizeof(struct smb_hdr)) 298 + #define SMB2_MIN_SUPPORTED_HEADER_SIZE (sizeof(struct smb2_hdr) + 4) 299 + 297 300 /** 298 301 * ksmbd_conn_handler_loop() - session thread to listen on new smb requests 299 302 * @p: connection instance ··· 353 350 if (pdu_size > MAX_STREAM_PROT_LEN) 354 351 break; 355 352 353 + if (pdu_size < SMB1_MIN_SUPPORTED_HEADER_SIZE) 354 + break; 355 + 356 356 /* 4 for rfc1002 length field */ 357 357 /* 1 for implied bcc[0] */ 358 358 size = pdu_size + 4 + 1; ··· 364 358 break; 365 359 366 360 memcpy(conn->request_buf, hdr_buf, sizeof(hdr_buf)); 367 - if (!ksmbd_smb_request(conn)) 368 - break; 369 361 370 362 /* 371 363 * We already read 4 bytes to find out PDU size, now ··· 379 375 pr_err("PDU error. Read: %d, Expected: %d\n", 380 376 size, pdu_size); 381 377 continue; 378 + } 379 + 380 + if (!ksmbd_smb_request(conn)) 381 + break; 382 + 383 + if (((struct smb2_hdr *)smb2_get_msg(conn->request_buf))->ProtocolId == 384 + SMB2_PROTO_NUMBER) { 385 + if (pdu_size < SMB2_MIN_SUPPORTED_HEADER_SIZE) 386 + break; 382 387 } 383 388 384 389 if (!default_conn_ops.process_fn) {
+24 -42
fs/smb/server/oplock.c
··· 1415 1415 */ 1416 1416 struct lease_ctx_info *parse_lease_state(void *open_req) 1417 1417 { 1418 - char *data_offset; 1419 1418 struct create_context *cc; 1420 - unsigned int next = 0; 1421 - char *name; 1422 - bool found = false; 1423 1419 struct smb2_create_req *req = (struct smb2_create_req *)open_req; 1424 - struct lease_ctx_info *lreq = kzalloc(sizeof(struct lease_ctx_info), 1425 - GFP_KERNEL); 1420 + struct lease_ctx_info *lreq; 1421 + 1422 + cc = smb2_find_context_vals(req, SMB2_CREATE_REQUEST_LEASE, 4); 1423 + if (IS_ERR_OR_NULL(cc)) 1424 + return NULL; 1425 + 1426 + lreq = kzalloc(sizeof(struct lease_ctx_info), GFP_KERNEL); 1426 1427 if (!lreq) 1427 1428 return NULL; 1428 1429 1429 - data_offset = (char *)req + le32_to_cpu(req->CreateContextsOffset); 1430 - cc = (struct create_context *)data_offset; 1431 - do { 1432 - cc = (struct create_context *)((char *)cc + next); 1433 - name = le16_to_cpu(cc->NameOffset) + (char *)cc; 1434 - if (le16_to_cpu(cc->NameLength) != 4 || 1435 - strncmp(name, SMB2_CREATE_REQUEST_LEASE, 4)) { 1436 - next = le32_to_cpu(cc->Next); 1437 - continue; 1438 - } 1439 - found = true; 1440 - break; 1441 - } while (next != 0); 1430 + if (sizeof(struct lease_context_v2) == le32_to_cpu(cc->DataLength)) { 1431 + struct create_lease_v2 *lc = (struct create_lease_v2 *)cc; 1442 1432 1443 - if (found) { 1444 - if (sizeof(struct lease_context_v2) == le32_to_cpu(cc->DataLength)) { 1445 - struct create_lease_v2 *lc = (struct create_lease_v2 *)cc; 1433 + memcpy(lreq->lease_key, lc->lcontext.LeaseKey, SMB2_LEASE_KEY_SIZE); 1434 + lreq->req_state = lc->lcontext.LeaseState; 1435 + lreq->flags = lc->lcontext.LeaseFlags; 1436 + lreq->duration = lc->lcontext.LeaseDuration; 1437 + memcpy(lreq->parent_lease_key, lc->lcontext.ParentLeaseKey, 1438 + SMB2_LEASE_KEY_SIZE); 1439 + lreq->version = 2; 1440 + } else { 1441 + struct create_lease *lc = (struct create_lease *)cc; 1446 1442 1447 - memcpy(lreq->lease_key, lc->lcontext.LeaseKey, 
SMB2_LEASE_KEY_SIZE); 1448 - lreq->req_state = lc->lcontext.LeaseState; 1449 - lreq->flags = lc->lcontext.LeaseFlags; 1450 - lreq->duration = lc->lcontext.LeaseDuration; 1451 - memcpy(lreq->parent_lease_key, lc->lcontext.ParentLeaseKey, 1452 - SMB2_LEASE_KEY_SIZE); 1453 - lreq->version = 2; 1454 - } else { 1455 - struct create_lease *lc = (struct create_lease *)cc; 1456 - 1457 - memcpy(lreq->lease_key, lc->lcontext.LeaseKey, SMB2_LEASE_KEY_SIZE); 1458 - lreq->req_state = lc->lcontext.LeaseState; 1459 - lreq->flags = lc->lcontext.LeaseFlags; 1460 - lreq->duration = lc->lcontext.LeaseDuration; 1461 - lreq->version = 1; 1462 - } 1463 - return lreq; 1443 + memcpy(lreq->lease_key, lc->lcontext.LeaseKey, SMB2_LEASE_KEY_SIZE); 1444 + lreq->req_state = lc->lcontext.LeaseState; 1445 + lreq->flags = lc->lcontext.LeaseFlags; 1446 + lreq->duration = lc->lcontext.LeaseDuration; 1447 + lreq->version = 1; 1464 1448 } 1465 - 1466 - kfree(lreq); 1467 - return NULL; 1449 + return lreq; 1468 1450 } 1469 1451 1470 1452 /**
+6 -7
fs/smb/server/smb2pdu.c
··· 963 963 964 964 static __le32 deassemble_neg_contexts(struct ksmbd_conn *conn, 965 965 struct smb2_negotiate_req *req, 966 - int len_of_smb) 966 + unsigned int len_of_smb) 967 967 { 968 968 /* +4 is to account for the RFC1001 len field */ 969 969 struct smb2_neg_context *pctx = (struct smb2_neg_context *)req; 970 970 int i = 0, len_of_ctxts; 971 - int offset = le32_to_cpu(req->NegotiateContextOffset); 972 - int neg_ctxt_cnt = le16_to_cpu(req->NegotiateContextCount); 971 + unsigned int offset = le32_to_cpu(req->NegotiateContextOffset); 972 + unsigned int neg_ctxt_cnt = le16_to_cpu(req->NegotiateContextCount); 973 973 __le32 status = STATUS_INVALID_PARAMETER; 974 974 975 975 ksmbd_debug(SMB, "decoding %d negotiate contexts\n", neg_ctxt_cnt); ··· 983 983 while (i++ < neg_ctxt_cnt) { 984 984 int clen, ctxt_len; 985 985 986 - if (len_of_ctxts < sizeof(struct smb2_neg_context)) 986 + if (len_of_ctxts < (int)sizeof(struct smb2_neg_context)) 987 987 break; 988 988 989 989 pctx = (struct smb2_neg_context *)((char *)pctx + offset); ··· 1038 1038 } 1039 1039 1040 1040 /* offsets must be 8 byte aligned */ 1041 - clen = (clen + 7) & ~0x7; 1042 - offset = clen + sizeof(struct smb2_neg_context); 1043 - len_of_ctxts -= clen + sizeof(struct smb2_neg_context); 1041 + offset = (ctxt_len + 7) & ~0x7; 1042 + len_of_ctxts -= offset; 1044 1043 } 1045 1044 return status; 1046 1045 }
+13 -1
fs/smb/server/smb_common.c
··· 158 158 */ 159 159 bool ksmbd_smb_request(struct ksmbd_conn *conn) 160 160 { 161 - return conn->request_buf[0] == 0; 161 + __le32 *proto = (__le32 *)smb2_get_msg(conn->request_buf); 162 + 163 + if (*proto == SMB2_COMPRESSION_TRANSFORM_ID) { 164 + pr_err_ratelimited("smb2 compression not support yet"); 165 + return false; 166 + } 167 + 168 + if (*proto != SMB1_PROTO_NUMBER && 169 + *proto != SMB2_PROTO_NUMBER && 170 + *proto != SMB2_TRANSFORM_PROTO_NUM) 171 + return false; 172 + 173 + return true; 162 174 } 163 175 164 176 static bool supported_protocol(int idx)
+2 -2
fs/smb/server/smbacl.c
··· 1290 1290 1291 1291 if (IS_ENABLED(CONFIG_FS_POSIX_ACL)) { 1292 1292 posix_acls = get_inode_acl(d_inode(path->dentry), ACL_TYPE_ACCESS); 1293 - if (posix_acls && !found) { 1293 + if (!IS_ERR_OR_NULL(posix_acls) && !found) { 1294 1294 unsigned int id = -1; 1295 1295 1296 1296 pa_entry = posix_acls->a_entries; ··· 1314 1314 } 1315 1315 } 1316 1316 } 1317 - if (posix_acls) 1317 + if (!IS_ERR_OR_NULL(posix_acls)) 1318 1318 posix_acl_release(posix_acls); 1319 1319 } 1320 1320
+2 -2
fs/smb/server/vfs.c
··· 1321 1321 return NULL; 1322 1322 1323 1323 posix_acls = get_inode_acl(inode, acl_type); 1324 - if (!posix_acls) 1324 + if (IS_ERR_OR_NULL(posix_acls)) 1325 1325 return NULL; 1326 1326 1327 1327 smb_acl = kzalloc(sizeof(struct xattr_smb_acl) + ··· 1830 1830 return -EOPNOTSUPP; 1831 1831 1832 1832 acls = get_inode_acl(parent_inode, ACL_TYPE_DEFAULT); 1833 - if (!acls) 1833 + if (IS_ERR_OR_NULL(acls)) 1834 1834 return -ENOENT; 1835 1835 pace = acls->a_entries; 1836 1836
+11 -2
fs/userfaultfd.c
··· 1332 1332 bool basic_ioctls; 1333 1333 unsigned long start, end, vma_end; 1334 1334 struct vma_iterator vmi; 1335 + pgoff_t pgoff; 1335 1336 1336 1337 user_uffdio_register = (struct uffdio_register __user *) arg; 1337 1338 ··· 1460 1459 1461 1460 vma_iter_set(&vmi, start); 1462 1461 prev = vma_prev(&vmi); 1462 + if (vma->vm_start < start) 1463 + prev = vma; 1463 1464 1464 1465 ret = 0; 1465 1466 for_each_vma_range(vmi, vma, end) { ··· 1485 1482 vma_end = min(end, vma->vm_end); 1486 1483 1487 1484 new_flags = (vma->vm_flags & ~__VM_UFFD_FLAGS) | vm_flags; 1485 + pgoff = vma->vm_pgoff + ((start - vma->vm_start) >> PAGE_SHIFT); 1488 1486 prev = vma_merge(&vmi, mm, prev, start, vma_end, new_flags, 1489 - vma->anon_vma, vma->vm_file, vma->vm_pgoff, 1487 + vma->anon_vma, vma->vm_file, pgoff, 1490 1488 vma_policy(vma), 1491 1489 ((struct vm_userfaultfd_ctx){ ctx }), 1492 1490 anon_vma_name(vma)); ··· 1567 1563 unsigned long start, end, vma_end; 1568 1564 const void __user *buf = (void __user *)arg; 1569 1565 struct vma_iterator vmi; 1566 + pgoff_t pgoff; 1570 1567 1571 1568 ret = -EFAULT; 1572 1569 if (copy_from_user(&uffdio_unregister, buf, sizeof(uffdio_unregister))) ··· 1630 1625 1631 1626 vma_iter_set(&vmi, start); 1632 1627 prev = vma_prev(&vmi); 1628 + if (vma->vm_start < start) 1629 + prev = vma; 1630 + 1633 1631 ret = 0; 1634 1632 for_each_vma_range(vmi, vma, end) { 1635 1633 cond_resched(); ··· 1670 1662 uffd_wp_range(vma, start, vma_end - start, false); 1671 1663 1672 1664 new_flags = vma->vm_flags & ~__VM_UFFD_FLAGS; 1665 + pgoff = vma->vm_pgoff + ((start - vma->vm_start) >> PAGE_SHIFT); 1673 1666 prev = vma_merge(&vmi, mm, prev, start, vma_end, new_flags, 1674 - vma->anon_vma, vma->vm_file, vma->vm_pgoff, 1667 + vma->anon_vma, vma->vm_file, pgoff, 1675 1668 vma_policy(vma), 1676 1669 NULL_VM_UFFD_CTX, anon_vma_name(vma)); 1677 1670 if (prev) {
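Both userfaultfd hunks fix the same bug: when the registered range starts inside a VMA, `vma_merge()` must be given a `pgoff` recomputed for the split point, not the VMA's original `vm_pgoff`. The arithmetic is the file page offset of `start` within the mapping (helper name is illustrative):

```c
#include <assert.h>

#define PAGE_SHIFT 12UL

/* File page offset for an address inside a mapping, as in the
 * userfaultfd fix: vm_pgoff plus whole pages into the VMA. */
static unsigned long pgoff_at(unsigned long vm_pgoff,
			      unsigned long vm_start,
			      unsigned long addr)
{
	return vm_pgoff + ((addr - vm_start) >> PAGE_SHIFT);
}
```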
+9
include/dt-bindings/power/qcom-rpmpd.h
··· 90 90 #define SM8150_MMCX 9 91 91 #define SM8150_MMCX_AO 10 92 92 93 + /* SA8155P is a special case, kept for backwards compatibility */ 94 + #define SA8155P_CX SM8150_CX 95 + #define SA8155P_CX_AO SM8150_CX_AO 96 + #define SA8155P_EBI SM8150_EBI 97 + #define SA8155P_GFX SM8150_GFX 98 + #define SA8155P_MSS SM8150_MSS 99 + #define SA8155P_MX SM8150_MX 100 + #define SA8155P_MX_AO SM8150_MX_AO 101 + 93 102 /* SM8250 Power Domain Indexes */ 94 103 #define SM8250_CX 0 95 104 #define SM8250_CX_AO 1
+12
include/linux/mlx5/driver.h
··· 1246 1246 return dev->priv.sriov.max_vfs; 1247 1247 } 1248 1248 1249 + static inline int mlx5_lag_is_lacp_owner(struct mlx5_core_dev *dev) 1250 + { 1251 + /* LACP owner conditions: 1252 + * 1) Function is physical. 1253 + * 2) LAG is supported by FW. 1254 + * 3) LAG is managed by driver (currently the only option). 1255 + */ 1256 + return MLX5_CAP_GEN(dev, vport_group_manager) && 1257 + (MLX5_CAP_GEN(dev, num_lag_ports) > 1) && 1258 + MLX5_CAP_GEN(dev, lag_master); 1259 + } 1260 + 1249 1261 static inline u16 mlx5_core_max_ec_vfs(const struct mlx5_core_dev *dev) 1250 1262 { 1251 1263 return dev->priv.sriov.max_ec_vfs;
-6
include/linux/soc/qcom/llcc-qcom.h
··· 69 69 /** 70 70 * struct llcc_edac_reg_data - llcc edac registers data for each error type 71 71 * @name: Name of the error 72 - * @synd_reg: Syndrome register address 73 - * @count_status_reg: Status register address to read the error count 74 - * @ways_status_reg: Status register address to read the error ways 75 72 * @reg_cnt: Number of registers 76 73 * @count_mask: Mask value to get the error count 77 74 * @ways_mask: Mask value to get the error ways ··· 77 80 */ 78 81 struct llcc_edac_reg_data { 79 82 char *name; 80 - u64 synd_reg; 81 - u64 count_status_reg; 82 - u64 ways_status_reg; 83 83 u32 reg_cnt; 84 84 u32 count_mask; 85 85 u32 ways_mask;
+1 -5
include/media/dvb_frontend.h
··· 686 686 * @id: Frontend ID 687 687 * @exit: Used to inform the DVB core that the frontend 688 688 * thread should exit (usually, means that the hardware 689 - * got disconnected). 690 - * @remove_mutex: mutex that avoids a race condition between a callback 691 - * called when the hardware is disconnected and the 692 - * file_operations of dvb_frontend. 689 + * got disconnected. 693 690 */ 694 691 695 692 struct dvb_frontend { ··· 704 707 int (*callback)(void *adapter_priv, int component, int cmd, int arg); 705 708 int id; 706 709 unsigned int exit; 707 - struct mutex remove_mutex; 708 710 }; 709 711 710 712 /**
+1 -1
include/net/netfilter/nf_flow_table.h
··· 268 268 269 269 int flow_offload_add(struct nf_flowtable *flow_table, struct flow_offload *flow); 270 270 void flow_offload_refresh(struct nf_flowtable *flow_table, 271 - struct flow_offload *flow); 271 + struct flow_offload *flow, bool force); 272 272 273 273 struct flow_offload_tuple_rhash *flow_offload_lookup(struct nf_flowtable *flow_table, 274 274 struct flow_offload_tuple *tuple);
+3 -1
include/net/netfilter/nf_tables.h
··· 462 462 const struct nft_set *set, 463 463 const struct nft_set_elem *elem, 464 464 unsigned int flags); 465 - 465 + void (*commit)(const struct nft_set *set); 466 + void (*abort)(const struct nft_set *set); 466 467 u64 (*privsize)(const struct nlattr * const nla[], 467 468 const struct nft_set_desc *desc); 468 469 bool (*estimate)(const struct nft_set_desc *desc, ··· 558 557 u16 policy; 559 558 u16 udlen; 560 559 unsigned char *udata; 560 + struct list_head pending_update; 561 561 /* runtime data below here */ 562 562 const struct nft_set_ops *ops ____cacheline_aligned; 563 563 u16 flags:14,
+8
include/net/sch_generic.h
··· 137 137 refcount_inc(&qdisc->refcnt); 138 138 } 139 139 140 + static inline bool qdisc_refcount_dec_if_one(struct Qdisc *qdisc) 141 + { 142 + if (qdisc->flags & TCQ_F_BUILTIN) 143 + return true; 144 + return refcount_dec_if_one(&qdisc->refcnt); 145 + } 146 + 140 147 /* Intended to be used by unlocked users, when concurrent qdisc release is 141 148 * possible. 142 149 */ ··· 659 652 struct Qdisc *dev_graft_qdisc(struct netdev_queue *dev_queue, 660 653 struct Qdisc *qdisc); 661 654 void qdisc_reset(struct Qdisc *qdisc); 655 + void qdisc_destroy(struct Qdisc *qdisc); 662 656 void qdisc_put(struct Qdisc *qdisc); 663 657 void qdisc_put_unlocked(struct Qdisc *qdisc); 664 658 void qdisc_tree_reduce_backlog(struct Qdisc *qdisc, int n, int len);
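`qdisc_refcount_dec_if_one()` succeeds only when the caller holds the last reference, which lets the new `qdisc_destroy()` path free the qdisc immediately instead of deferring. A dec-if-one primitive can be sketched as a single compare-and-swap (simplified userspace model, not the kernel's `refcount_t` implementation):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Drop the reference only if it is the last one; otherwise leave
 * the count untouched and report failure. */
static bool refcount_dec_if_one(atomic_int *r)
{
	int expected = 1;

	return atomic_compare_exchange_strong(r, &expected, 0);
}
```

The point of the primitive is that a failed attempt has no side effect, so concurrent holders are never left with a stale count.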
-23
include/rdma/ib_addr.h
··· 194 194 return 0; 195 195 } 196 196 197 - static inline int iboe_get_rate(struct net_device *dev) 198 - { 199 - struct ethtool_link_ksettings cmd; 200 - int err; 201 - 202 - rtnl_lock(); 203 - err = __ethtool_get_link_ksettings(dev, &cmd); 204 - rtnl_unlock(); 205 - if (err) 206 - return IB_RATE_PORT_CURRENT; 207 - 208 - if (cmd.base.speed >= 40000) 209 - return IB_RATE_40_GBPS; 210 - else if (cmd.base.speed >= 30000) 211 - return IB_RATE_30_GBPS; 212 - else if (cmd.base.speed >= 20000) 213 - return IB_RATE_20_GBPS; 214 - else if (cmd.base.speed >= 10000) 215 - return IB_RATE_10_GBPS; 216 - else 217 - return IB_RATE_PORT_CURRENT; 218 - } 219 - 220 197 static inline int rdma_link_local_addr(struct in6_addr *addr) 221 198 { 222 199 if (addr->s6_addr32[0] == htonl(0xfe800000) &&
+1 -1
include/uapi/linux/ethtool_netlink.h
··· 783 783 784 784 /* add new constants above here */ 785 785 __ETHTOOL_A_STATS_GRP_CNT, 786 - ETHTOOL_A_STATS_GRP_MAX = (__ETHTOOL_A_STATS_CNT - 1) 786 + ETHTOOL_A_STATS_GRP_MAX = (__ETHTOOL_A_STATS_GRP_CNT - 1) 787 787 }; 788 788 789 789 enum {
-3
io_uring/io-wq.c
··· 221 221 raw_spin_unlock(&wq->lock); 222 222 io_wq_dec_running(worker); 223 223 worker->flags = 0; 224 - preempt_disable(); 225 - current->flags &= ~PF_IO_WORKER; 226 - preempt_enable(); 227 224 228 225 kfree_rcu(worker, rcu); 229 226 io_worker_ref_put(wq);
+2 -2
kernel/cgroup/cgroup-v1.c
··· 108 108 109 109 cgroup_lock(); 110 110 111 - percpu_down_write(&cgroup_threadgroup_rwsem); 111 + cgroup_attach_lock(true); 112 112 113 113 /* all tasks in @from are being moved, all csets are source */ 114 114 spin_lock_irq(&css_set_lock); ··· 144 144 } while (task && !ret); 145 145 out_err: 146 146 cgroup_migrate_finish(&mgctx); 147 - percpu_up_write(&cgroup_threadgroup_rwsem); 147 + cgroup_attach_unlock(true); 148 148 cgroup_unlock(); 149 149 return ret; 150 150 }
+8 -9
kernel/cgroup/cgroup.c
··· 6486 6486 static void cgroup_css_set_put_fork(struct kernel_clone_args *kargs) 6487 6487 __releases(&cgroup_threadgroup_rwsem) __releases(&cgroup_mutex) 6488 6488 { 6489 + struct cgroup *cgrp = kargs->cgrp; 6490 + struct css_set *cset = kargs->cset; 6491 + 6489 6492 cgroup_threadgroup_change_end(current); 6490 6493 6494 + if (cset) { 6495 + put_css_set(cset); 6496 + kargs->cset = NULL; 6497 + } 6498 + 6491 6499 if (kargs->flags & CLONE_INTO_CGROUP) { 6492 - struct cgroup *cgrp = kargs->cgrp; 6493 - struct css_set *cset = kargs->cset; 6494 - 6495 6500 cgroup_unlock(); 6496 - 6497 - if (cset) { 6498 - put_css_set(cset); 6499 - kargs->cset = NULL; 6500 - } 6501 - 6502 6501 if (cgrp) { 6503 6502 cgroup_put(cgrp); 6504 6503 kargs->cgrp = NULL;
+13 -1
kernel/kexec_file.c
··· 901 901 } 902 902 903 903 offset = ALIGN(offset, align); 904 + 905 + /* 906 + * Check if the segment contains the entry point, if so, 907 + * calculate the value of image->start based on it. 908 + * If the compiler has produced more than one .text section 909 + * (Eg: .text.hot), they are generally after the main .text 910 + * section, and they shall not be used to calculate 911 + * image->start. So do not re-calculate image->start if it 912 + * is not set to the initial value, and warn the user so they 913 + * have a chance to fix their purgatory's linker script. 914 + */ 904 915 if (sechdrs[i].sh_flags & SHF_EXECINSTR && 905 916 pi->ehdr->e_entry >= sechdrs[i].sh_addr && 906 917 pi->ehdr->e_entry < (sechdrs[i].sh_addr 907 - + sechdrs[i].sh_size)) { 918 + + sechdrs[i].sh_size) && 919 + !WARN_ON(kbuf->image->start != pi->ehdr->e_entry)) { 908 920 kbuf->image->start -= sechdrs[i].sh_addr; 909 921 kbuf->image->start += kbuf->mem + offset; 910 922 }
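The kexec_file hunk narrows when `image->start` is relocated: only the executable section that actually contains `e_entry` may adjust it, and only once (the new `WARN_ON` guards against extra `.text.*` sections re-triggering the adjustment). The containment test itself is a half-open range check:

```c
#include <assert.h>
#include <stdbool.h>

/* True if entry lies within [addr, addr + size), the half-open
 * range test applied to purgatory sections in kexec_file.c. */
static bool entry_in_section(unsigned long entry,
			     unsigned long addr,
			     unsigned long size)
{
	return entry >= addr && entry < addr + size;
}
```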
+10 -8
kernel/vhost_task.c
··· 28 28 for (;;) { 29 29 bool did_work; 30 30 31 - /* mb paired w/ vhost_task_stop */ 32 - if (test_bit(VHOST_TASK_FLAGS_STOP, &vtsk->flags)) 33 - break; 34 - 35 31 if (!dead && signal_pending(current)) { 36 32 struct ksignal ksig; 37 33 /* ··· 44 48 clear_thread_flag(TIF_SIGPENDING); 45 49 } 46 50 47 - did_work = vtsk->fn(vtsk->data); 48 - if (!did_work) { 49 - set_current_state(TASK_INTERRUPTIBLE); 50 - schedule(); 51 + /* mb paired w/ vhost_task_stop */ 52 + set_current_state(TASK_INTERRUPTIBLE); 53 + 54 + if (test_bit(VHOST_TASK_FLAGS_STOP, &vtsk->flags)) { 55 + __set_current_state(TASK_RUNNING); 56 + break; 51 57 } 58 + 59 + did_work = vtsk->fn(vtsk->data); 60 + if (!did_work) 61 + schedule(); 52 62 } 53 63 54 64 complete(&vtsk->exited);
+2
lib/radix-tree.c
··· 27 27 #include <linux/string.h> 28 28 #include <linux/xarray.h> 29 29 30 + #include "radix-tree.h" 31 + 30 32 /* 31 33 * Radix tree node cache. 32 34 */
+8
lib/radix-tree.h
··· 1 + // SPDX-License-Identifier: GPL-2.0+ 2 + /* radix-tree helpers that are only shared with xarray */ 3 + 4 + struct kmem_cache; 5 + struct rcu_head; 6 + 7 + extern struct kmem_cache *radix_tree_node_cachep; 8 + extern void radix_tree_node_rcu_free(struct rcu_head *head);
+1 -1
lib/test_vmalloc.c
··· 369 369 int i; 370 370 371 371 map_nr_pages = nr_pages > 0 ? nr_pages:1; 372 - pages = kmalloc(map_nr_pages * sizeof(struct page), GFP_KERNEL); 372 + pages = kcalloc(map_nr_pages, sizeof(struct page *), GFP_KERNEL); 373 373 if (!pages) 374 374 return -1; 375 375
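The test_vmalloc.c fix swaps `kmalloc(n * size)` for `kcalloc(n, size, ...)` and corrects the element type to `struct page *`. `kcalloc` matters because the open-coded multiplication can overflow; the overflow check it performs reduces to the following (userspace sketch using `calloc`):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* calloc-style overflow check before multiplying count by size,
 * the reason the hunk above switches kmalloc() to kcalloc(). */
static void *checked_alloc_array(size_t n, size_t size)
{
	if (size != 0 && n > SIZE_MAX / size)
		return NULL;	/* n * size would overflow */
	return calloc(n, size);
}
```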
+2 -4
lib/xarray.c
··· 12 12 #include <linux/slab.h> 13 13 #include <linux/xarray.h> 14 14 15 + #include "radix-tree.h" 16 + 15 17 /* 16 18 * Coding conventions in this file: 17 19 * ··· 248 246 return entry; 249 247 } 250 248 EXPORT_SYMBOL_GPL(xas_load); 251 - 252 - /* Move the radix tree node cache here */ 253 - extern struct kmem_cache *radix_tree_node_cachep; 254 - extern void radix_tree_node_rcu_free(struct rcu_head *head); 255 249 256 250 #define XA_RCU_FREE ((struct xarray *)1) 257 251
+2
mm/damon/core.c
··· 551 551 return -EINVAL; 552 552 if (attrs->min_nr_regions > attrs->max_nr_regions) 553 553 return -EINVAL; 554 + if (attrs->sample_interval > attrs->aggr_interval) 555 + return -EINVAL; 554 556 555 557 damon_update_monitoring_results(ctx, attrs); 556 558 ctx->attrs = *attrs;
+16 -10
mm/filemap.c
··· 1760 1760 * 1761 1761 * Return: The index of the gap if found, otherwise an index outside the 1762 1762 * range specified (in which case 'return - index >= max_scan' will be true). 1763 - * In the rare case of index wrap-around, 0 will be returned. 1763 + * In the rare case of index wrap-around, 0 will be returned. 0 will also 1764 + * be returned if index == 0 and there is a gap at the index. We can not 1765 + * wrap-around if passed index == 0. 1764 1766 */ 1765 1767 pgoff_t page_cache_next_miss(struct address_space *mapping, 1766 1768 pgoff_t index, unsigned long max_scan) ··· 1772 1770 while (max_scan--) { 1773 1771 void *entry = xas_next(&xas); 1774 1772 if (!entry || xa_is_value(entry)) 1775 - break; 1776 - if (xas.xa_index == 0) 1777 - break; 1773 + return xas.xa_index; 1774 + if (xas.xa_index == 0 && index != 0) 1775 + return xas.xa_index; 1778 1776 } 1779 1777 1780 - return xas.xa_index; 1778 + /* No gaps in range and no wrap-around, return index beyond range */ 1779 + return xas.xa_index + 1; 1781 1780 } 1782 1781 EXPORT_SYMBOL(page_cache_next_miss); 1783 1782 ··· 1799 1796 * 1800 1797 * Return: The index of the gap if found, otherwise an index outside the 1801 1798 * range specified (in which case 'index - return >= max_scan' will be true). 1802 - * In the rare case of wrap-around, ULONG_MAX will be returned. 1799 + * In the rare case of wrap-around, ULONG_MAX will be returned. ULONG_MAX 1800 + * will also be returned if index == ULONG_MAX and there is a gap at the 1801 + * index. We can not wrap-around if passed index == ULONG_MAX. 
1803 1802 */ 1804 1803 pgoff_t page_cache_prev_miss(struct address_space *mapping, 1805 1804 pgoff_t index, unsigned long max_scan) ··· 1811 1806 while (max_scan--) { 1812 1807 void *entry = xas_prev(&xas); 1813 1808 if (!entry || xa_is_value(entry)) 1814 - break; 1815 - if (xas.xa_index == ULONG_MAX) 1816 - break; 1809 + return xas.xa_index; 1810 + if (xas.xa_index == ULONG_MAX && index != ULONG_MAX) 1811 + return xas.xa_index; 1817 1812 } 1818 1813 1819 - return xas.xa_index; 1814 + /* No gaps in range and no wrap-around, return index beyond range */ 1815 + return xas.xa_index - 1; 1820 1816 } 1821 1817 EXPORT_SYMBOL(page_cache_prev_miss); 1822 1818
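The mm/filemap.c hunks above change `page_cache_next_miss()`/`page_cache_prev_miss()` to return the gap index as soon as it is found, and to return an index just beyond the scanned range when no gap exists, instead of unconditionally returning `xas.xa_index` after the loop. A minimal sketch of the new forward-search contract, using a plain array as a stand-in for the XArray (`next_miss()` and its parameters are hypothetical, not kernel API):

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of the gap-search contract after this patch (not the kernel
 * implementation): scan entries[] forward from index for up to max_scan
 * slots; return the index of the first empty slot ("gap"), or an index
 * beyond the scanned range when every slot is occupied. Nonzero means
 * "entry present". */
static size_t next_miss(const int *entries, size_t nslots,
                        size_t index, size_t max_scan)
{
        size_t i = index;

        while (max_scan--) {
                i++;                    /* like xas_next(): step, then test */
                if (i >= nslots || !entries[i])
                        return i;       /* found a gap: return its index */
        }
        return i + 1;                   /* no gap: index outside the range,
                                         * so 'return - index >= max_scan' */
}
```

The key difference from the old code is that a caller can now distinguish "gap found at this index" from "no gap in range" without the ambiguity the wrap-around cases used to introduce.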
+1
mm/gup_test.c
··· 381 381 static const struct file_operations gup_test_fops = { 382 382 .open = nonseekable_open, 383 383 .unlocked_ioctl = gup_test_ioctl, 384 + .compat_ioctl = compat_ptr_ioctl, 384 385 .release = gup_test_release, 385 386 }; 386 387
+9 -2
mm/zswap.c
··· 1174 1174 goto reject; 1175 1175 } 1176 1176 1177 + /* 1178 + * XXX: zswap reclaim does not work with cgroups yet. Without a 1179 + * cgroup-aware entry LRU, we will push out entries system-wide based on 1180 + * local cgroup limits. 1181 + */ 1177 1182 objcg = get_obj_cgroup_from_page(page); 1178 - if (objcg && !obj_cgroup_may_zswap(objcg)) 1179 - goto shrink; 1183 + if (objcg && !obj_cgroup_may_zswap(objcg)) { 1184 + ret = -ENOMEM; 1185 + goto reject; 1186 + } 1180 1187 1181 1188 /* reclaim space if needed */ 1182 1189 if (zswap_is_full()) {
+3
net/dccp/proto.c
··· 191 191 struct dccp_sock *dp = dccp_sk(sk); 192 192 struct inet_connection_sock *icsk = inet_csk(sk); 193 193 194 + pr_warn_once("DCCP is deprecated and scheduled to be removed in 2025, " 195 + "please contact the netdev mailing list\n"); 196 + 194 197 icsk->icsk_rto = DCCP_TIMEOUT_INIT; 195 198 icsk->icsk_syn_retries = sysctl_dccp_request_retries; 196 199 sk->sk_state = DCCP_CLOSED;
-1
net/handshake/handshake.h
··· 31 31 struct list_head hr_list; 32 32 struct rhash_head hr_rhash; 33 33 unsigned long hr_flags; 34 - struct file *hr_file; 35 34 const struct handshake_proto *hr_proto; 36 35 struct sock *hr_sk; 37 36 void (*hr_odestruct)(struct sock *sk);
-4
net/handshake/request.c
··· 239 239 } 240 240 req->hr_odestruct = req->hr_sk->sk_destruct; 241 241 req->hr_sk->sk_destruct = handshake_sk_destruct; 242 - req->hr_file = sock->file; 243 242 244 243 ret = -EOPNOTSUPP; 245 244 net = sock_net(req->hr_sk); ··· 333 334 trace_handshake_cancel_busy(net, req, sk); 334 335 return false; 335 336 } 336 - 337 - /* Request accepted and waiting for DONE */ 338 - fput(req->hr_file); 339 337 340 338 out_true: 341 339 trace_handshake_cancel(net, req, sk);
+2
net/ipv4/udplite.c
··· 22 22 { 23 23 udp_init_sock(sk); 24 24 udp_sk(sk)->pcflag = UDPLITE_BIT; 25 + pr_warn_once("UDP-Lite is deprecated and scheduled to be removed in 2025, " 26 + "please contact the netdev mailing list\n"); 25 27 return 0; 26 28 } 27 29
+2 -1
net/ipv6/ping.c
··· 114 114 addr_type = ipv6_addr_type(daddr); 115 115 if ((__ipv6_addr_needs_scope_id(addr_type) && !oif) || 116 116 (addr_type & IPV6_ADDR_MAPPED) || 117 - (oif && sk->sk_bound_dev_if && oif != sk->sk_bound_dev_if)) 117 + (oif && sk->sk_bound_dev_if && oif != sk->sk_bound_dev_if && 118 + l3mdev_master_ifindex_by_index(sock_net(sk), oif) != sk->sk_bound_dev_if)) 118 119 return -EINVAL; 119 120 120 121 ipcm6_init_sk(&ipc6, np);
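The net/ipv6/ping.c hunk relaxes the bound-device check: an explicit `oif` that differs from `sk->sk_bound_dev_if` is now accepted when the bound device is the L3 master (e.g. a VRF) of that interface. A sketch of just the predicate, where `master_of_oif` stands in for the result of `l3mdev_master_ifindex_by_index()` (names here are illustrative, not kernel API):

```c
#include <assert.h>
#include <stdbool.h>

/* Returns true when the (oif, bound device) combination should be
 * rejected with -EINVAL, per the relaxed check in the hunk above:
 * mismatch is only fatal if the bound device is also not the L3
 * master of oif. */
static bool oif_invalid(int oif, int bound, int master_of_oif)
{
        return oif && bound && oif != bound && master_of_oif != bound;
}
```

So a socket bound to a VRF can now ping out of an interface enslaved to that VRF, while unrelated interface mismatches are still rejected.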
+4
net/ipv6/udplite.c
··· 8 8 * Changes: 9 9 * Fixes: 10 10 */ 11 + #define pr_fmt(fmt) "UDPLite6: " fmt 12 + 11 13 #include <linux/export.h> 12 14 #include <linux/proc_fs.h> 13 15 #include "udp_impl.h" ··· 18 16 { 19 17 udpv6_init_sock(sk); 20 18 udp_sk(sk)->pcflag = UDPLITE_BIT; 19 + pr_warn_once("UDP-Lite is deprecated and scheduled to be removed in 2025, " 20 + "please contact the netdev mailing list\n"); 21 21 return 0; 22 22 } 23 23
+8 -1
net/mac80211/cfg.c
··· 4862 4862 unsigned int link_id) 4863 4863 { 4864 4864 struct ieee80211_sub_if_data *sdata = IEEE80211_WDEV_TO_SUB_IF(wdev); 4865 + int res; 4865 4866 4866 4867 if (wdev->use_4addr) 4867 4868 return -EOPNOTSUPP; 4868 4869 4869 - return ieee80211_vif_set_links(sdata, wdev->valid_links); 4870 + mutex_lock(&sdata->local->mtx); 4871 + res = ieee80211_vif_set_links(sdata, wdev->valid_links); 4872 + mutex_unlock(&sdata->local->mtx); 4873 + 4874 + return res; 4870 4875 } 4871 4876 4872 4877 static void ieee80211_del_intf_link(struct wiphy *wiphy, ··· 4880 4875 { 4881 4876 struct ieee80211_sub_if_data *sdata = IEEE80211_WDEV_TO_SUB_IF(wdev); 4882 4877 4878 + mutex_lock(&sdata->local->mtx); 4883 4879 ieee80211_vif_set_links(sdata, wdev->valid_links); 4880 + mutex_unlock(&sdata->local->mtx); 4884 4881 } 4885 4882 4886 4883 static int sta_add_link_station(struct ieee80211_local *local,
+1 -1
net/mac80211/ieee80211_i.h
··· 2315 2315 return ieee802_11_parse_elems_crc(start, len, action, 0, 0, bss); 2316 2316 } 2317 2317 2318 - void ieee80211_fragment_element(struct sk_buff *skb, u8 *len_pos); 2318 + void ieee80211_fragment_element(struct sk_buff *skb, u8 *len_pos, u8 frag_id); 2319 2319 2320 2320 extern const int ieee802_1d_to_ac[8]; 2321 2321
+2 -2
net/mac80211/link.c
··· 2 2 /* 3 3 * MLO link handling 4 4 * 5 - * Copyright (C) 2022 Intel Corporation 5 + * Copyright (C) 2022-2023 Intel Corporation 6 6 */ 7 7 #include <linux/slab.h> 8 8 #include <linux/kernel.h> ··· 409 409 IEEE80211_CHANCTX_SHARED); 410 410 WARN_ON_ONCE(ret); 411 411 412 + ieee80211_mgd_set_link_qos_params(link); 412 413 ieee80211_link_info_change_notify(sdata, link, 413 414 BSS_CHANGED_ERP_CTS_PROT | 414 415 BSS_CHANGED_ERP_PREAMBLE | ··· 424 423 BSS_CHANGED_TWT | 425 424 BSS_CHANGED_HE_OBSS_PD | 426 425 BSS_CHANGED_HE_BSS_COLOR); 427 - ieee80211_mgd_set_link_qos_params(link); 428 426 } 429 427 430 428 old_active = sdata->vif.active_links;
+3 -2
net/mac80211/mlme.c
··· 1372 1372 ieee80211_add_non_inheritance_elem(skb, outer_present_elems, 1373 1373 link_present_elems); 1374 1374 1375 - ieee80211_fragment_element(skb, subelem_len); 1375 + ieee80211_fragment_element(skb, subelem_len, 1376 + IEEE80211_MLE_SUBELEM_FRAGMENT); 1376 1377 } 1377 1378 1378 - ieee80211_fragment_element(skb, ml_elem_len); 1379 + ieee80211_fragment_element(skb, ml_elem_len, WLAN_EID_FRAGMENT); 1379 1380 } 1380 1381 1381 1382 static int ieee80211_send_assoc(struct ieee80211_sub_if_data *sdata)
+3 -3
net/mac80211/tx.c
··· 4430 4430 struct sk_buff *skb) 4431 4431 { 4432 4432 struct ieee80211_sub_if_data *sdata = IEEE80211_DEV_TO_SUB_IF(dev); 4433 - unsigned long links = sdata->vif.valid_links; 4433 + unsigned long links = sdata->vif.active_links; 4434 4434 unsigned int link; 4435 4435 u32 ctrl_flags = IEEE80211_TX_CTRL_MCAST_MLO_FIRST_TX; 4436 4436 ··· 6025 6025 rcu_read_unlock(); 6026 6026 6027 6027 if (WARN_ON_ONCE(link == ARRAY_SIZE(sdata->vif.link_conf))) 6028 - link = ffs(sdata->vif.valid_links) - 1; 6028 + link = ffs(sdata->vif.active_links) - 1; 6029 6029 } 6030 6030 6031 6031 IEEE80211_SKB_CB(skb)->control.flags |= ··· 6061 6061 band = chanctx_conf->def.chan->band; 6062 6062 } else { 6063 6063 WARN_ON(link_id >= 0 && 6064 - !(sdata->vif.valid_links & BIT(link_id))); 6064 + !(sdata->vif.active_links & BIT(link_id))); 6065 6065 /* MLD transmissions must not rely on the band */ 6066 6066 band = 0; 6067 6067 }
+2 -2
net/mac80211/util.c
··· 5131 5131 return pos; 5132 5132 } 5133 5133 5134 - void ieee80211_fragment_element(struct sk_buff *skb, u8 *len_pos) 5134 + void ieee80211_fragment_element(struct sk_buff *skb, u8 *len_pos, u8 frag_id) 5135 5135 { 5136 5136 unsigned int elem_len; 5137 5137 ··· 5151 5151 memmove(len_pos + 255 + 3, len_pos + 255 + 1, elem_len); 5152 5152 /* place the fragment ID */ 5153 5153 len_pos += 255 + 1; 5154 - *len_pos = WLAN_EID_FRAGMENT; 5154 + *len_pos = frag_id; 5155 5155 /* and point to fragment length to update later */ 5156 5156 len_pos++; 5157 5157 }
+10 -3
net/netfilter/nf_flow_table_core.c
··· 299 299 EXPORT_SYMBOL_GPL(flow_offload_add); 300 300 301 301 void flow_offload_refresh(struct nf_flowtable *flow_table, 302 - struct flow_offload *flow) 302 + struct flow_offload *flow, bool force) 303 303 { 304 304 u32 timeout; 305 305 306 306 timeout = nf_flowtable_time_stamp + flow_offload_get_timeout(flow); 307 - if (timeout - READ_ONCE(flow->timeout) > HZ) 307 + if (force || timeout - READ_ONCE(flow->timeout) > HZ) 308 308 WRITE_ONCE(flow->timeout, timeout); 309 309 else 310 310 return; ··· 315 315 nf_flow_offload_add(flow_table, flow); 316 316 } 317 317 EXPORT_SYMBOL_GPL(flow_offload_refresh); 318 + 319 + static bool nf_flow_is_outdated(const struct flow_offload *flow) 320 + { 321 + return test_bit(IPS_SEEN_REPLY_BIT, &flow->ct->status) && 322 + !test_bit(NF_FLOW_HW_ESTABLISHED, &flow->flags); 323 + } 318 324 319 325 static inline bool nf_flow_has_expired(const struct flow_offload *flow) 320 326 { ··· 411 405 struct flow_offload *flow, void *data) 412 406 { 413 407 if (nf_flow_has_expired(flow) || 414 - nf_ct_is_dying(flow->ct)) 408 + nf_ct_is_dying(flow->ct) || 409 + nf_flow_is_outdated(flow)) 415 410 flow_offload_teardown(flow); 416 411 417 412 if (test_bit(NF_FLOW_TEARDOWN, &flow->flags)) {
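The new `force` argument to `flow_offload_refresh()` above makes the timeout update unconditional for callers that need it (act_ct uses it when promoting a flow), while the normal path keeps rate-limiting writes to once the timeout has drifted by more than HZ. A simplified, value-returning sketch of that decision (plain integers; `SKETCH_HZ` is an assumed stand-in for HZ):

```c
#include <assert.h>
#include <stdbool.h>

#define SKETCH_HZ 100   /* stand-in for the kernel's HZ */

/* Returns the flow timeout after a refresh attempt: 'candidate' is the
 * recomputed deadline (time stamp + flow timeout). Without 'force', the
 * stored value is only replaced once it is more than SKETCH_HZ stale,
 * mirroring the WRITE_ONCE() rate limit in the hunk above. */
static unsigned int refresh_timeout(unsigned int timeout,
                                    unsigned int candidate, bool force)
{
        if (force || candidate - timeout > SKETCH_HZ)
                return candidate;       /* timeout written */
        return timeout;                 /* skipped: fresh enough, not forced */
}
```

The unsigned subtraction deliberately mirrors the kernel's wrap-tolerant comparison of jiffies-based values.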
+2 -2
net/netfilter/nf_flow_table_ip.c
··· 388 388 if (skb_try_make_writable(skb, thoff + ctx->hdrsize)) 389 389 return -1; 390 390 391 - flow_offload_refresh(flow_table, flow); 391 + flow_offload_refresh(flow_table, flow, false); 392 392 393 393 nf_flow_encap_pop(skb, tuplehash); 394 394 thoff -= ctx->offset; ··· 667 667 if (skb_try_make_writable(skb, thoff + ctx->hdrsize)) 668 668 return -1; 669 669 670 - flow_offload_refresh(flow_table, flow); 670 + flow_offload_refresh(flow_table, flow, false); 671 671 672 672 nf_flow_encap_pop(skb, tuplehash); 673 673
+58 -1
net/netfilter/nf_tables_api.c
··· 3844 3844 if (flow) 3845 3845 nft_flow_rule_destroy(flow); 3846 3846 err_release_rule: 3847 - nf_tables_rule_release(&ctx, rule); 3847 + nft_rule_expr_deactivate(&ctx, rule, NFT_TRANS_PREPARE); 3848 + nf_tables_rule_destroy(&ctx, rule); 3848 3849 err_release_expr: 3849 3850 for (i = 0; i < n; i++) { 3850 3851 if (expr_info[i].ops) { ··· 4920 4919 4921 4920 set->num_exprs = num_exprs; 4922 4921 set->handle = nf_tables_alloc_handle(table); 4922 + INIT_LIST_HEAD(&set->pending_update); 4923 4923 4924 4924 err = nft_trans_set_add(&ctx, NFT_MSG_NEWSET, set); 4925 4925 if (err < 0) ··· 9280 9278 } 9281 9279 } 9282 9280 9281 + static void nft_set_commit_update(struct list_head *set_update_list) 9282 + { 9283 + struct nft_set *set, *next; 9284 + 9285 + list_for_each_entry_safe(set, next, set_update_list, pending_update) { 9286 + list_del_init(&set->pending_update); 9287 + 9288 + if (!set->ops->commit) 9289 + continue; 9290 + 9291 + set->ops->commit(set); 9292 + } 9293 + } 9294 + 9283 9295 static int nf_tables_commit(struct net *net, struct sk_buff *skb) 9284 9296 { 9285 9297 struct nftables_pernet *nft_net = nft_pernet(net); 9286 9298 struct nft_trans *trans, *next; 9299 + LIST_HEAD(set_update_list); 9287 9300 struct nft_trans_elem *te; 9288 9301 struct nft_chain *chain; 9289 9302 struct nft_table *table; ··· 9473 9456 nf_tables_setelem_notify(&trans->ctx, te->set, 9474 9457 &te->elem, 9475 9458 NFT_MSG_NEWSETELEM); 9459 + if (te->set->ops->commit && 9460 + list_empty(&te->set->pending_update)) { 9461 + list_add_tail(&te->set->pending_update, 9462 + &set_update_list); 9463 + } 9476 9464 nft_trans_destroy(trans); 9477 9465 break; 9478 9466 case NFT_MSG_DELSETELEM: ··· 9491 9469 if (!nft_setelem_is_catchall(te->set, &te->elem)) { 9492 9470 atomic_dec(&te->set->nelems); 9493 9471 te->set->ndeact--; 9472 + } 9473 + if (te->set->ops->commit && 9474 + list_empty(&te->set->pending_update)) { 9475 + list_add_tail(&te->set->pending_update, 9476 + &set_update_list); 9494 9477 } 
9495 9478 break; 9496 9479 case NFT_MSG_NEWOBJ: ··· 9559 9532 } 9560 9533 } 9561 9534 9535 + nft_set_commit_update(&set_update_list); 9536 + 9562 9537 nft_commit_notify(net, NETLINK_CB(skb).portid); 9563 9538 nf_tables_gen_notify(net, skb, NFT_MSG_NEWGEN); 9564 9539 nf_tables_commit_audit_log(&adl, nft_net->base_seq); ··· 9620 9591 kfree(trans); 9621 9592 } 9622 9593 9594 + static void nft_set_abort_update(struct list_head *set_update_list) 9595 + { 9596 + struct nft_set *set, *next; 9597 + 9598 + list_for_each_entry_safe(set, next, set_update_list, pending_update) { 9599 + list_del_init(&set->pending_update); 9600 + 9601 + if (!set->ops->abort) 9602 + continue; 9603 + 9604 + set->ops->abort(set); 9605 + } 9606 + } 9607 + 9623 9608 static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action) 9624 9609 { 9625 9610 struct nftables_pernet *nft_net = nft_pernet(net); 9626 9611 struct nft_trans *trans, *next; 9612 + LIST_HEAD(set_update_list); 9627 9613 struct nft_trans_elem *te; 9628 9614 9629 9615 if (action == NFNL_ABORT_VALIDATE && ··· 9748 9704 nft_setelem_remove(net, te->set, &te->elem); 9749 9705 if (!nft_setelem_is_catchall(te->set, &te->elem)) 9750 9706 atomic_dec(&te->set->nelems); 9707 + 9708 + if (te->set->ops->abort && 9709 + list_empty(&te->set->pending_update)) { 9710 + list_add_tail(&te->set->pending_update, 9711 + &set_update_list); 9712 + } 9751 9713 break; 9752 9714 case NFT_MSG_DELSETELEM: 9753 9715 case NFT_MSG_DESTROYSETELEM: ··· 9764 9714 if (!nft_setelem_is_catchall(te->set, &te->elem)) 9765 9715 te->set->ndeact--; 9766 9716 9717 + if (te->set->ops->abort && 9718 + list_empty(&te->set->pending_update)) { 9719 + list_add_tail(&te->set->pending_update, 9720 + &set_update_list); 9721 + } 9767 9722 nft_trans_destroy(trans); 9768 9723 break; 9769 9724 case NFT_MSG_NEWOBJ: ··· 9810 9755 break; 9811 9756 } 9812 9757 } 9758 + 9759 + nft_set_abort_update(&set_update_list); 9813 9760 9814 9761 synchronize_rcu(); 9815 9762
+2 -1
net/netfilter/nfnetlink.c
··· 533 533 * processed, this avoids that the same error is 534 534 * reported several times when replaying the batch. 535 535 */ 536 - if (nfnl_err_add(&err_list, nlh, err, &extack) < 0) { 536 + if (err == -ENOMEM || 537 + nfnl_err_add(&err_list, nlh, err, &extack) < 0) { 537 538 /* We failed to enqueue an error, reset the 538 539 * list of errors and send OOM to userspace 539 540 * pointing to the batch header.
+40 -15
net/netfilter/nft_set_pipapo.c
··· 1599 1599 } 1600 1600 } 1601 1601 1602 - /** 1603 - * pipapo_reclaim_match - RCU callback to free fields from old matching data 1604 - * @rcu: RCU head 1605 - */ 1606 - static void pipapo_reclaim_match(struct rcu_head *rcu) 1602 + static void pipapo_free_match(struct nft_pipapo_match *m) 1607 1603 { 1608 - struct nft_pipapo_match *m; 1609 1604 int i; 1610 - 1611 - m = container_of(rcu, struct nft_pipapo_match, rcu); 1612 1605 1613 1606 for_each_possible_cpu(i) 1614 1607 kfree(*per_cpu_ptr(m->scratch, i)); ··· 1617 1624 } 1618 1625 1619 1626 /** 1620 - * pipapo_commit() - Replace lookup data with current working copy 1627 + * pipapo_reclaim_match - RCU callback to free fields from old matching data 1628 + * @rcu: RCU head 1629 + */ 1630 + static void pipapo_reclaim_match(struct rcu_head *rcu) 1631 + { 1632 + struct nft_pipapo_match *m; 1633 + 1634 + m = container_of(rcu, struct nft_pipapo_match, rcu); 1635 + pipapo_free_match(m); 1636 + } 1637 + 1638 + /** 1639 + * nft_pipapo_commit() - Replace lookup data with current working copy 1621 1640 * @set: nftables API set representation 1622 1641 * 1623 1642 * While at it, check if we should perform garbage collection on the working ··· 1639 1634 * We also need to create a new working copy for subsequent insertions and 1640 1635 * deletions. 
1641 1636 */ 1642 - static void pipapo_commit(const struct nft_set *set) 1637 + static void nft_pipapo_commit(const struct nft_set *set) 1643 1638 { 1644 1639 struct nft_pipapo *priv = nft_set_priv(set); 1645 1640 struct nft_pipapo_match *new_clone, *old; ··· 1664 1659 priv->clone = new_clone; 1665 1660 } 1666 1661 1662 + static void nft_pipapo_abort(const struct nft_set *set) 1663 + { 1664 + struct nft_pipapo *priv = nft_set_priv(set); 1665 + struct nft_pipapo_match *new_clone, *m; 1666 + 1667 + if (!priv->dirty) 1668 + return; 1669 + 1670 + m = rcu_dereference(priv->match); 1671 + 1672 + new_clone = pipapo_clone(m); 1673 + if (IS_ERR(new_clone)) 1674 + return; 1675 + 1676 + priv->dirty = false; 1677 + 1678 + pipapo_free_match(priv->clone); 1679 + priv->clone = new_clone; 1680 + } 1681 + 1667 1682 /** 1668 1683 * nft_pipapo_activate() - Mark element reference as active given key, commit 1669 1684 * @net: Network namespace ··· 1691 1666 * @elem: nftables API element representation containing key data 1692 1667 * 1693 1668 * On insertion, elements are added to a copy of the matching data currently 1694 - * in use for lookups, and not directly inserted into current lookup data, so 1695 - * we'll take care of that by calling pipapo_commit() here. Both 1669 + * in use for lookups, and not directly inserted into current lookup data. Both 1696 1670 * nft_pipapo_insert() and nft_pipapo_activate() are called once for each 1697 1671 * element, hence we can't purpose either one as a real commit operation. 
1698 1672 */ ··· 1707 1683 1708 1684 nft_set_elem_change_active(net, set, &e->ext); 1709 1685 nft_set_elem_clear_busy(&e->ext); 1710 - 1711 - pipapo_commit(set); 1712 1686 } 1713 1687 1714 1688 /** ··· 1952 1930 if (i == m->field_count) { 1953 1931 priv->dirty = true; 1954 1932 pipapo_drop(m, rulemap); 1955 - pipapo_commit(set); 1956 1933 return; 1957 1934 } 1958 1935 ··· 2249 2228 .init = nft_pipapo_init, 2250 2229 .destroy = nft_pipapo_destroy, 2251 2230 .gc_init = nft_pipapo_gc_init, 2231 + .commit = nft_pipapo_commit, 2232 + .abort = nft_pipapo_abort, 2252 2233 .elemsize = offsetof(struct nft_pipapo_elem, ext), 2253 2234 }, 2254 2235 }; ··· 2273 2250 .init = nft_pipapo_init, 2274 2251 .destroy = nft_pipapo_destroy, 2275 2252 .gc_init = nft_pipapo_gc_init, 2253 + .commit = nft_pipapo_commit, 2254 + .abort = nft_pipapo_abort, 2276 2255 .elemsize = offsetof(struct nft_pipapo_elem, ext), 2277 2256 }, 2278 2257 };
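The pipapo changes above split publishing of the working copy out of insert/remove into explicit `.commit`/`.abort` set ops, driven by the new `pending_update` lists in nf_tables_api.c. The underlying pattern is a private clone that absorbs all mutations until the transaction either publishes or discards it. A single-threaded sketch of that pattern (no RCU; the "match data" is just an int, and all names are illustrative):

```c
#include <assert.h>
#include <stdbool.h>

struct set_sketch {
        int live;       /* data that lookups see */
        int clone;      /* private working copy that mutations touch */
        bool dirty;     /* clone has diverged from live */
};

static void set_insert(struct set_sketch *s, int v)
{
        s->clone = v;           /* mutate only the working copy */
        s->dirty = true;
}

static void set_commit(struct set_sketch *s)
{
        if (!s->dirty)
                return;
        s->live = s->clone;     /* publish the working copy */
        s->dirty = false;
}

static void set_abort(struct set_sketch *s)
{
        if (!s->dirty)
                return;
        s->clone = s->live;     /* discard uncommitted changes */
        s->dirty = false;
}

/* Helper: run one transaction and report what lookups see afterwards. */
static int run_txn(int initial, int v, bool do_commit)
{
        struct set_sketch s = { initial, initial, false };

        set_insert(&s, v);
        if (do_commit)
                set_commit(&s);
        else
                set_abort(&s);
        return s.live;
}
```

In the real backend, commit additionally re-clones for the next transaction and frees the old match data via RCU; the sketch only shows why abort must exist once commit is no longer implicit in every insert.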
+2 -1
net/netlabel/netlabel_kapi.c
··· 857 857 858 858 offset -= iter->startbit; 859 859 idx = offset / NETLBL_CATMAP_MAPSIZE; 860 - iter->bitmap[idx] |= bitmap << (offset % NETLBL_CATMAP_MAPSIZE); 860 + iter->bitmap[idx] |= (NETLBL_CATMAP_MAPTYPE)bitmap 861 + << (offset % NETLBL_CATMAP_MAPSIZE); 861 862 862 863 return 0; 863 864 }
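The netlabel hunk above fixes a classic integer-width bug: when the left operand of a shift is narrower than the destination, the shift is performed in the narrow type and the high bits are lost before the OR ever happens. Casting to the destination type first (the kernel casts to `NETLBL_CATMAP_MAPTYPE`; `uint64_t` stands in for it here) preserves them:

```c
#include <assert.h>
#include <stdint.h>

/* Shift performed in 32 bits: high bits are truncated before widening. */
static uint64_t shift_buggy(uint32_t bitmap, unsigned int off)
{
        return (uint64_t)(bitmap << off);
}

/* Widen first, then shift: all source bits land in the 64-bit result. */
static uint64_t shift_fixed(uint32_t bitmap, unsigned int off)
{
        return (uint64_t)bitmap << off;
}
```

With `bitmap = 0xFFFF0000` and a shift of 16, the buggy form yields 0 while the fixed form keeps the bits in the upper half of the 64-bit word, which is exactly the category-bitmap corruption the patch addresses.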
+8 -1
net/sched/act_ct.c
··· 610 610 struct flow_offload_tuple tuple = {}; 611 611 enum ip_conntrack_info ctinfo; 612 612 struct tcphdr *tcph = NULL; 613 + bool force_refresh = false; 613 614 struct flow_offload *flow; 614 615 struct nf_conn *ct; 615 616 u8 dir; ··· 648 647 * established state, then don't refresh. 649 648 */ 650 649 return false; 650 + force_refresh = true; 651 651 } 652 652 653 653 if (tcph && (unlikely(tcph->fin || tcph->rst))) { ··· 662 660 else 663 661 ctinfo = IP_CT_ESTABLISHED_REPLY; 664 662 665 - flow_offload_refresh(nf_ft, flow); 663 + flow_offload_refresh(nf_ft, flow, force_refresh); 664 + if (!test_bit(IPS_ASSURED_BIT, &ct->status)) { 665 + /* Process this flow in SW to allow promoting to ASSURED */ 666 + return false; 667 + } 668 + 666 669 nf_conntrack_get(&ct->ct_general); 667 670 nf_ct_set(skb, ct, ctinfo); 668 671 if (nf_ft->flags & NF_FLOWTABLE_COUNTER)
+43 -5
net/sched/act_pedit.c
··· 13 13 #include <linux/rtnetlink.h> 14 14 #include <linux/module.h> 15 15 #include <linux/init.h> 16 + #include <linux/ip.h> 17 + #include <linux/ipv6.h> 16 18 #include <linux/slab.h> 19 + #include <net/ipv6.h> 17 20 #include <net/netlink.h> 18 21 #include <net/pkt_sched.h> 19 22 #include <linux/tc_act/tc_pedit.h> ··· 328 325 return true; 329 326 } 330 327 331 - static void pedit_skb_hdr_offset(struct sk_buff *skb, 328 + static int pedit_l4_skb_offset(struct sk_buff *skb, int *hoffset, const int header_type) 329 + { 330 + const int noff = skb_network_offset(skb); 331 + int ret = -EINVAL; 332 + struct iphdr _iph; 333 + 334 + switch (skb->protocol) { 335 + case htons(ETH_P_IP): { 336 + const struct iphdr *iph = skb_header_pointer(skb, noff, sizeof(_iph), &_iph); 337 + 338 + if (!iph) 339 + goto out; 340 + *hoffset = noff + iph->ihl * 4; 341 + ret = 0; 342 + break; 343 + } 344 + case htons(ETH_P_IPV6): 345 + ret = ipv6_find_hdr(skb, hoffset, header_type, NULL, NULL) == header_type ? 0 : -EINVAL; 346 + break; 347 + } 348 + out: 349 + return ret; 350 + } 351 + 352 + static int pedit_skb_hdr_offset(struct sk_buff *skb, 332 353 enum pedit_header_type htype, int *hoffset) 333 354 { 355 + int ret = -EINVAL; 334 356 /* 'htype' is validated in the netlink parsing */ 335 357 switch (htype) { 336 358 case TCA_PEDIT_KEY_EX_HDR_TYPE_ETH: 337 - if (skb_mac_header_was_set(skb)) 359 + if (skb_mac_header_was_set(skb)) { 338 360 *hoffset = skb_mac_offset(skb); 361 + ret = 0; 362 + } 339 363 break; 340 364 case TCA_PEDIT_KEY_EX_HDR_TYPE_NETWORK: 341 365 case TCA_PEDIT_KEY_EX_HDR_TYPE_IP4: 342 366 case TCA_PEDIT_KEY_EX_HDR_TYPE_IP6: 343 367 *hoffset = skb_network_offset(skb); 368 + ret = 0; 344 369 break; 345 370 case TCA_PEDIT_KEY_EX_HDR_TYPE_TCP: 371 + ret = pedit_l4_skb_offset(skb, hoffset, IPPROTO_TCP); 372 + break; 346 373 case TCA_PEDIT_KEY_EX_HDR_TYPE_UDP: 347 - if (skb_transport_header_was_set(skb)) 348 - *hoffset = skb_transport_offset(skb); 374 + ret = 
pedit_l4_skb_offset(skb, hoffset, IPPROTO_UDP); 349 375 break; 350 376 default: 351 377 break; 352 378 } 379 + return ret; 353 380 } 354 381 355 382 TC_INDIRECT_SCOPE int tcf_pedit_act(struct sk_buff *skb, ··· 415 382 int hoffset = 0; 416 383 u32 *ptr, hdata; 417 384 u32 val; 385 + int rc; 418 386 419 387 if (tkey_ex) { 420 388 htype = tkey_ex->htype; ··· 424 390 tkey_ex++; 425 391 } 426 392 427 - pedit_skb_hdr_offset(skb, htype, &hoffset); 393 + rc = pedit_skb_hdr_offset(skb, htype, &hoffset); 394 + if (rc) { 395 + pr_info_ratelimited("tc action pedit unable to extract header offset for header type (0x%x)\n", htype); 396 + goto bad; 397 + } 428 398 429 399 if (tkey->offmask) { 430 400 u8 *d, _d;
+7 -5
net/sched/cls_api.c
··· 657 657 { 658 658 struct tcf_block *block = chain->block; 659 659 const struct tcf_proto_ops *tmplt_ops; 660 + unsigned int refcnt, non_act_refcnt; 660 661 bool free_block = false; 661 - unsigned int refcnt; 662 662 void *tmplt_priv; 663 663 664 664 mutex_lock(&block->lock); ··· 678 678 * save these to temporary variables. 679 679 */ 680 680 refcnt = --chain->refcnt; 681 + non_act_refcnt = refcnt - chain->action_refcnt; 681 682 tmplt_ops = chain->tmplt_ops; 682 683 tmplt_priv = chain->tmplt_priv; 683 684 684 - /* The last dropped non-action reference will trigger notification. */ 685 - if (refcnt - chain->action_refcnt == 0 && !by_act) { 686 - tc_chain_notify_delete(tmplt_ops, tmplt_priv, chain->index, 687 - block, NULL, 0, 0, false); 685 + if (non_act_refcnt == chain->explicitly_created && !by_act) { 686 + if (non_act_refcnt == 0) 687 + tc_chain_notify_delete(tmplt_ops, tmplt_priv, 688 + chain->index, block, NULL, 0, 0, 689 + false); 688 690 /* Last reference to chain, no need to lock. */ 689 691 chain->flushing = false; 690 692 }
+10 -8
net/sched/cls_u32.c
··· 718 718 struct nlattr *est, u32 flags, u32 fl_flags, 719 719 struct netlink_ext_ack *extack) 720 720 { 721 - int err; 721 + int err, ifindex = -1; 722 722 723 723 err = tcf_exts_validate_ex(net, tp, tb, est, &n->exts, flags, 724 724 fl_flags, extack); 725 725 if (err < 0) 726 726 return err; 727 + 728 + if (tb[TCA_U32_INDEV]) { 729 + ifindex = tcf_change_indev(net, tb[TCA_U32_INDEV], extack); 730 + if (ifindex < 0) 731 + return -EINVAL; 732 + } 727 733 728 734 if (tb[TCA_U32_LINK]) { 729 735 u32 handle = nla_get_u32(tb[TCA_U32_LINK]); ··· 765 759 tcf_bind_filter(tp, &n->res, base); 766 760 } 767 761 768 - if (tb[TCA_U32_INDEV]) { 769 - int ret; 770 - ret = tcf_change_indev(net, tb[TCA_U32_INDEV], extack); 771 - if (ret < 0) 772 - return -EINVAL; 773 - n->ifindex = ret; 774 - } 762 + if (ifindex >= 0) 763 + n->ifindex = ifindex; 764 + 775 765 return 0; 776 766 } 777 767
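The cls_u32 change above hoists the `tcf_change_indev()` lookup to the top of `u32_set_parms()`, so that a bad `TCA_U32_INDEV` fails before any filter state (link, class binding) has been mutated. A sketch of that validate-everything-first shape, with `lookup_ifindex()` as a hypothetical stand-in for the indev resolution:

```c
#include <assert.h>

struct filter_sketch {
        int ifindex;
        int bound;      /* nonzero once the filter is bound to a class */
};

/* Pretend resolver: positive ids exist, anything else fails. */
static int lookup_ifindex(int request)
{
        return request > 0 ? request : -1;
}

/* All inputs are resolved before any field of *f changes, so a failure
 * cannot leave the filter half-updated (the bug the hunk above fixes,
 * where the lookup ran after tcf_bind_filter()). */
static int set_parms_fixed(struct filter_sketch *f, int want_ifindex)
{
        int ifindex = -1;

        if (want_ifindex) {
                ifindex = lookup_ifindex(want_ifindex);
                if (ifindex < 0)
                        return -1;      /* fail before touching *f */
        }

        f->bound = 1;                   /* mutations only after validation */
        if (ifindex >= 0)
                f->ifindex = ifindex;
        return 0;
}

/* Helper for quick checks: on success, the resulting ifindex; on failure,
 * -2 if state leaked (bound was set) or -1 if the object is untouched. */
static int sketch_result(int want_ifindex)
{
        struct filter_sketch f = { 0, 0 };

        if (set_parms_fixed(&f, want_ifindex) < 0)
                return f.bound ? -2 : -1;
        return f.ifindex;
}
```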
+31 -13
net/sched/sch_api.c
··· 1079 1079 1080 1080 if (parent == NULL) { 1081 1081 unsigned int i, num_q, ingress; 1082 + struct netdev_queue *dev_queue; 1082 1083 1083 1084 ingress = 0; 1084 1085 num_q = dev->num_tx_queues; 1085 1086 if ((q && q->flags & TCQ_F_INGRESS) || 1086 1087 (new && new->flags & TCQ_F_INGRESS)) { 1087 - num_q = 1; 1088 1088 ingress = 1; 1089 - if (!dev_ingress_queue(dev)) { 1089 + dev_queue = dev_ingress_queue(dev); 1090 + if (!dev_queue) { 1090 1091 NL_SET_ERR_MSG(extack, "Device does not have an ingress queue"); 1091 1092 return -ENOENT; 1093 + } 1094 + 1095 + q = rtnl_dereference(dev_queue->qdisc_sleeping); 1096 + 1097 + /* This is the counterpart of that qdisc_refcount_inc_nz() call in 1098 + * __tcf_qdisc_find() for filter requests. 1099 + */ 1100 + if (!qdisc_refcount_dec_if_one(q)) { 1101 + NL_SET_ERR_MSG(extack, 1102 + "Current ingress or clsact Qdisc has ongoing filter requests"); 1103 + return -EBUSY; 1092 1104 } 1093 1105 } 1094 1106 ··· 1112 1100 if (new && new->ops->attach && !ingress) 1113 1101 goto skip; 1114 1102 1115 - for (i = 0; i < num_q; i++) { 1116 - struct netdev_queue *dev_queue = dev_ingress_queue(dev); 1117 - 1118 - if (!ingress) 1103 + if (!ingress) { 1104 + for (i = 0; i < num_q; i++) { 1119 1105 dev_queue = netdev_get_tx_queue(dev, i); 1106 + old = dev_graft_qdisc(dev_queue, new); 1120 1107 1121 - old = dev_graft_qdisc(dev_queue, new); 1122 - if (new && i > 0) 1123 - qdisc_refcount_inc(new); 1124 - 1125 - if (!ingress) 1108 + if (new && i > 0) 1109 + qdisc_refcount_inc(new); 1126 1110 qdisc_put(old); 1111 + } 1112 + } else { 1113 + old = dev_graft_qdisc(dev_queue, NULL); 1114 + 1115 + /* {ingress,clsact}_destroy() @old before grafting @new to avoid 1116 + * unprotected concurrent accesses to net_device::miniq_{in,e}gress 1117 + * pointer(s) in mini_qdisc_pair_swap(). 
1118 + */ 1119 + qdisc_notify(net, skb, n, classid, old, new, extack); 1120 + qdisc_destroy(old); 1121 + 1122 + dev_graft_qdisc(dev_queue, new); 1127 1123 } 1128 1124 1129 1125 skip: ··· 1145 1125 1146 1126 if (new && new->ops->attach) 1147 1127 new->ops->attach(new); 1148 - } else { 1149 - notify_and_destroy(net, skb, n, classid, old, new, extack); 1150 1128 } 1151 1129 1152 1130 if (dev->flags & IFF_UP)
+11 -3
net/sched/sch_generic.c
··· 1046 1046 qdisc_free(q); 1047 1047 } 1048 1048 1049 - static void qdisc_destroy(struct Qdisc *qdisc) 1049 + static void __qdisc_destroy(struct Qdisc *qdisc) 1050 1050 { 1051 1051 const struct Qdisc_ops *ops = qdisc->ops; 1052 1052 ··· 1070 1070 call_rcu(&qdisc->rcu, qdisc_free_cb); 1071 1071 } 1072 1072 1073 + void qdisc_destroy(struct Qdisc *qdisc) 1074 + { 1075 + if (qdisc->flags & TCQ_F_BUILTIN) 1076 + return; 1077 + 1078 + __qdisc_destroy(qdisc); 1079 + } 1080 + 1073 1081 void qdisc_put(struct Qdisc *qdisc) 1074 1082 { 1075 1083 if (!qdisc) ··· 1087 1079 !refcount_dec_and_test(&qdisc->refcnt)) 1088 1080 return; 1089 1081 1090 - qdisc_destroy(qdisc); 1082 + __qdisc_destroy(qdisc); 1091 1083 } 1092 1084 EXPORT_SYMBOL(qdisc_put); 1093 1085 ··· 1102 1094 !refcount_dec_and_rtnl_lock(&qdisc->refcnt)) 1103 1095 return; 1104 1096 1105 - qdisc_destroy(qdisc); 1097 + __qdisc_destroy(qdisc); 1106 1098 rtnl_unlock(); 1107 1099 } 1108 1100 EXPORT_SYMBOL(qdisc_put_unlocked);
+3
net/sched/sch_taprio.c
··· 800 800 801 801 taprio_next_tc_txq(dev, tc, &q->cur_txq[tc]); 802 802 803 + if (q->cur_txq[tc] >= dev->num_tx_queues) 804 + q->cur_txq[tc] = first_txq; 805 + 803 806 if (skb) 804 807 return skb; 805 808 } while (q->cur_txq[tc] != first_txq);
+4 -1
net/sctp/sm_sideeffect.c
··· 1250 1250 default: 1251 1251 pr_err("impossible disposition %d in state %d, event_type %d, event_id %d\n", 1252 1252 status, state, event_type, subtype.chunk); 1253 - BUG(); 1253 + error = status; 1254 + if (error >= 0) 1255 + error = -EINVAL; 1256 + WARN_ON_ONCE(1); 1254 1257 break; 1255 1258 } 1256 1259
+1 -1
net/sctp/sm_statefuns.c
··· 4482 4482 SCTP_AUTH_NEW_KEY, GFP_ATOMIC); 4483 4483 4484 4484 if (!ev) 4485 - return -ENOMEM; 4485 + return SCTP_DISPOSITION_NOMEM; 4486 4486 4487 4487 sctp_add_cmd_sf(commands, SCTP_CMD_EVENT_ULP, 4488 4488 SCTP_ULPEVENT(ev));
+2 -2
net/tipc/bearer.c
··· 1256 1256 struct tipc_nl_msg msg; 1257 1257 struct tipc_media *media; 1258 1258 struct sk_buff *rep; 1259 - struct nlattr *attrs[TIPC_NLA_BEARER_MAX + 1]; 1259 + struct nlattr *attrs[TIPC_NLA_MEDIA_MAX + 1]; 1260 1260 1261 1261 if (!info->attrs[TIPC_NLA_MEDIA]) 1262 1262 return -EINVAL; ··· 1305 1305 int err; 1306 1306 char *name; 1307 1307 struct tipc_media *m; 1308 - struct nlattr *attrs[TIPC_NLA_BEARER_MAX + 1]; 1308 + struct nlattr *attrs[TIPC_NLA_MEDIA_MAX + 1]; 1309 1309 1310 1310 if (!info->attrs[TIPC_NLA_MEDIA]) 1311 1311 return -EINVAL;
+3 -3
net/wireless/rdev-ops.h
··· 2 2 /* 3 3 * Portions of this file 4 4 * Copyright(c) 2016-2017 Intel Deutschland GmbH 5 - * Copyright (C) 2018, 2021-2022 Intel Corporation 5 + * Copyright (C) 2018, 2021-2023 Intel Corporation 6 6 */ 7 7 #ifndef __CFG80211_RDEV_OPS 8 8 #define __CFG80211_RDEV_OPS ··· 1441 1441 unsigned int link_id) 1442 1442 { 1443 1443 trace_rdev_del_intf_link(&rdev->wiphy, wdev, link_id); 1444 - if (rdev->ops->add_intf_link) 1445 - rdev->ops->add_intf_link(&rdev->wiphy, wdev, link_id); 1444 + if (rdev->ops->del_intf_link) 1445 + rdev->ops->del_intf_link(&rdev->wiphy, wdev, link_id); 1446 1446 trace_rdev_return_void(&rdev->wiphy); 1447 1447 } 1448 1448
-3
net/wireless/reg.c
··· 2404 2404 case NL80211_IFTYPE_P2P_GO: 2405 2405 case NL80211_IFTYPE_ADHOC: 2406 2406 case NL80211_IFTYPE_MESH_POINT: 2407 - wiphy_lock(wiphy); 2408 2407 ret = cfg80211_reg_can_beacon_relax(wiphy, &chandef, 2409 2408 iftype); 2410 - wiphy_unlock(wiphy); 2411 - 2412 2409 if (!ret) 2413 2410 return ret; 2414 2411 break;
+8 -1
net/wireless/util.c
··· 5 5 * Copyright 2007-2009 Johannes Berg <johannes@sipsolutions.net> 6 6 * Copyright 2013-2014 Intel Mobile Communications GmbH 7 7 * Copyright 2017 Intel Deutschland GmbH 8 - * Copyright (C) 2018-2022 Intel Corporation 8 + * Copyright (C) 2018-2023 Intel Corporation 9 9 */ 10 10 #include <linux/export.h> 11 11 #include <linux/bitops.h> ··· 2557 2557 void cfg80211_remove_links(struct wireless_dev *wdev) 2558 2558 { 2559 2559 unsigned int link_id; 2560 + 2561 + /* 2562 + * links are controlled by upper layers (userspace/cfg) 2563 + * only for AP mode, so only remove them here for AP 2564 + */ 2565 + if (wdev->iftype != NL80211_IFTYPE_AP) 2566 + return; 2560 2567 2561 2568 wdev_lock(wdev); 2562 2569 if (wdev->valid_links) {
+1 -1
sound/isa/gus/gus_pcm.c
··· 892 892 kctl = snd_ctl_new1(&snd_gf1_pcm_volume_control1, gus); 893 893 else 894 894 kctl = snd_ctl_new1(&snd_gf1_pcm_volume_control, gus); 895 + kctl->id.index = control_index; 895 896 err = snd_ctl_add(card, kctl); 896 897 if (err < 0) 897 898 return err; 898 - kctl->id.index = control_index; 899 899 900 900 return 0; 901 901 }
+3 -3
sound/pci/cmipci.c
··· 2688 2688 } 2689 2689 if (cm->can_ac3_hw) { 2690 2690 kctl = snd_ctl_new1(&snd_cmipci_spdif_default, cm); 2691 + kctl->id.device = pcm_spdif_device; 2691 2692 err = snd_ctl_add(card, kctl); 2692 2693 if (err < 0) 2693 2694 return err; 2694 - kctl->id.device = pcm_spdif_device; 2695 2695 kctl = snd_ctl_new1(&snd_cmipci_spdif_mask, cm); 2696 + kctl->id.device = pcm_spdif_device; 2696 2697 err = snd_ctl_add(card, kctl); 2697 2698 if (err < 0) 2698 2699 return err; 2699 - kctl->id.device = pcm_spdif_device; 2700 2700 kctl = snd_ctl_new1(&snd_cmipci_spdif_stream, cm); 2701 + kctl->id.device = pcm_spdif_device; 2701 2702 err = snd_ctl_add(card, kctl); 2702 2703 if (err < 0) 2703 2704 return err; 2704 - kctl->id.device = pcm_spdif_device; 2705 2705 } 2706 2706 if (cm->chip_version <= 37) { 2707 2707 sw = snd_cmipci_old_mixer_switches;
+5 -1
sound/pci/hda/hda_codec.c
··· 2458 2458 type == HDA_PCM_TYPE_HDMI) { 2459 2459 /* suppose a single SPDIF device */ 2460 2460 for (dig_mix = dig_mixes; dig_mix->name; dig_mix++) { 2461 + struct snd_ctl_elem_id id; 2462 + 2461 2463 kctl = find_mixer_ctl(codec, dig_mix->name, 0, 0); 2462 2464 if (!kctl) 2463 2465 break; 2464 - kctl->id.index = spdif_index; 2466 + id = kctl->id; 2467 + id.index = spdif_index; 2468 + snd_ctl_rename_id(codec->card, &kctl->id, &id); 2465 2469 } 2466 2470 bus->primary_dig_out_type = HDA_PCM_TYPE_HDMI; 2467 2471 }
+12 -1
sound/pci/hda/patch_realtek.c
··· 9500 9500 SND_PCI_QUIRK(0x103c, 0x8b8a, "HP", ALC236_FIXUP_HP_GPIO_LED), 9501 9501 SND_PCI_QUIRK(0x103c, 0x8b8b, "HP", ALC236_FIXUP_HP_GPIO_LED), 9502 9502 SND_PCI_QUIRK(0x103c, 0x8b8d, "HP", ALC236_FIXUP_HP_GPIO_LED), 9503 - SND_PCI_QUIRK(0x103c, 0x8b8f, "HP", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED), 9503 + SND_PCI_QUIRK(0x103c, 0x8b8f, "HP", ALC245_FIXUP_CS35L41_SPI_4_HP_GPIO_LED), 9504 9504 SND_PCI_QUIRK(0x103c, 0x8b92, "HP", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED), 9505 9505 SND_PCI_QUIRK(0x103c, 0x8b96, "HP", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF), 9506 9506 SND_PCI_QUIRK(0x103c, 0x8b97, "HP", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF), ··· 9547 9547 SND_PCI_QUIRK(0x1043, 0x1a8f, "ASUS UX582ZS", ALC245_FIXUP_CS35L41_SPI_2), 9548 9548 SND_PCI_QUIRK(0x1043, 0x1b11, "ASUS UX431DA", ALC294_FIXUP_ASUS_COEF_1B), 9549 9549 SND_PCI_QUIRK(0x1043, 0x1b13, "Asus U41SV", ALC269_FIXUP_INV_DMIC), 9550 + SND_PCI_QUIRK(0x1043, 0x1b93, "ASUS G614JVR/JIR", ALC245_FIXUP_CS35L41_SPI_2), 9550 9551 SND_PCI_QUIRK(0x1043, 0x1bbd, "ASUS Z550MA", ALC255_FIXUP_ASUS_MIC_NO_PRESENCE), 9551 9552 SND_PCI_QUIRK(0x1043, 0x1c23, "Asus X55U", ALC269_FIXUP_LIMIT_INT_MIC_BOOST), 9552 9553 SND_PCI_QUIRK(0x1043, 0x1c62, "ASUS GU603", ALC289_FIXUP_ASUS_GA401), ··· 9566 9565 SND_PCI_QUIRK(0x1043, 0x1f12, "ASUS UM5302", ALC287_FIXUP_CS35L41_I2C_2), 9567 9566 SND_PCI_QUIRK(0x1043, 0x1f92, "ASUS ROG Flow X16", ALC289_FIXUP_ASUS_GA401), 9568 9567 SND_PCI_QUIRK(0x1043, 0x3030, "ASUS ZN270IE", ALC256_FIXUP_ASUS_AIO_GPIO2), 9568 + SND_PCI_QUIRK(0x1043, 0x3a20, "ASUS G614JZR", ALC245_FIXUP_CS35L41_SPI_2), 9569 + SND_PCI_QUIRK(0x1043, 0x3a30, "ASUS G814JVR/JIR", ALC245_FIXUP_CS35L41_SPI_2), 9570 + SND_PCI_QUIRK(0x1043, 0x3a40, "ASUS G814JZR", ALC245_FIXUP_CS35L41_SPI_2), 9571 + SND_PCI_QUIRK(0x1043, 0x3a50, "ASUS G834JYR/JZR", ALC245_FIXUP_CS35L41_SPI_2), 9572 + SND_PCI_QUIRK(0x1043, 0x3a60, "ASUS G634JYR/JZR", ALC245_FIXUP_CS35L41_SPI_2), 9569 9573 SND_PCI_QUIRK(0x1043, 0x831a, "ASUS P901", ALC269_FIXUP_STEREO_DMIC),
9570 9574 SND_PCI_QUIRK(0x1043, 0x834a, "ASUS S101", ALC269_FIXUP_STEREO_DMIC), 9571 9575 SND_PCI_QUIRK(0x1043, 0x8398, "ASUS P1005", ALC269_FIXUP_STEREO_DMIC), ··· 9594 9588 SND_PCI_QUIRK(0x10ec, 0x124c, "Intel Reference board", ALC295_FIXUP_CHROME_BOOK), 9595 9589 SND_PCI_QUIRK(0x10ec, 0x1252, "Intel Reference board", ALC295_FIXUP_CHROME_BOOK), 9596 9590 SND_PCI_QUIRK(0x10ec, 0x1254, "Intel Reference board", ALC295_FIXUP_CHROME_BOOK), 9591 + SND_PCI_QUIRK(0x10ec, 0x12cc, "Intel Reference board", ALC225_FIXUP_HEADSET_JACK), 9597 9592 SND_PCI_QUIRK(0x10f7, 0x8338, "Panasonic CF-SZ6", ALC269_FIXUP_HEADSET_MODE), 9598 9593 SND_PCI_QUIRK(0x144d, 0xc109, "Samsung Ativ book 9 (NP900X3G)", ALC269_FIXUP_INV_DMIC), 9599 9594 SND_PCI_QUIRK(0x144d, 0xc169, "Samsung Notebook 9 Pen (NP930SBE-K01US)", ALC298_FIXUP_SAMSUNG_AMP), ··· 9643 9636 SND_PCI_QUIRK(0x1558, 0x5101, "Clevo S510WU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 9644 9637 SND_PCI_QUIRK(0x1558, 0x5157, "Clevo W517GU1", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 9645 9638 SND_PCI_QUIRK(0x1558, 0x51a1, "Clevo NS50MU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 9639 + SND_PCI_QUIRK(0x1558, 0x51b1, "Clevo NS50AU", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 9646 9640 SND_PCI_QUIRK(0x1558, 0x5630, "Clevo NP50RNJS", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 9647 9641 SND_PCI_QUIRK(0x1558, 0x70a1, "Clevo NB70T[HJK]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 9648 9642 SND_PCI_QUIRK(0x1558, 0x70b3, "Clevo NK70SB", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), ··· 9815 9807 SND_PCI_QUIRK(0x8086, 0x2074, "Intel NUC 8", ALC233_FIXUP_INTEL_NUC8_DMIC), 9816 9808 SND_PCI_QUIRK(0x8086, 0x2080, "Intel NUC 8 Rugged", ALC256_FIXUP_INTEL_NUC8_RUGGED), 9817 9809 SND_PCI_QUIRK(0x8086, 0x2081, "Intel NUC 10", ALC256_FIXUP_INTEL_NUC10), 9810 + SND_PCI_QUIRK(0x8086, 0x3038, "Intel NUC 13", ALC225_FIXUP_HEADSET_JACK), 9818 9811 SND_PCI_QUIRK(0xf111, 0x0001, "Framework Laptop", ALC295_FIXUP_FRAMEWORK_LAPTOP_MIC_NO_PRESENCE), 9819
9812 9820 9813 #if 0 ··· 11703 11694 SND_PCI_QUIRK(0x103c, 0x8719, "HP", ALC897_FIXUP_HP_HSMIC_VERB), 11704 11695 SND_PCI_QUIRK(0x103c, 0x872b, "HP", ALC897_FIXUP_HP_HSMIC_VERB), 11705 11696 SND_PCI_QUIRK(0x103c, 0x873e, "HP", ALC671_FIXUP_HP_HEADSET_MIC2), 11697 + SND_PCI_QUIRK(0x103c, 0x8768, "HP Slim Desktop S01", ALC671_FIXUP_HP_HEADSET_MIC2), 11706 11698 SND_PCI_QUIRK(0x103c, 0x877e, "HP 288 Pro G6", ALC671_FIXUP_HP_HEADSET_MIC2), 11707 11699 SND_PCI_QUIRK(0x103c, 0x885f, "HP 288 Pro G8", ALC671_FIXUP_HP_HEADSET_MIC2), 11708 11700 SND_PCI_QUIRK(0x1043, 0x1080, "Asus UX501VW", ALC668_FIXUP_HEADSET_MODE), ··· 11725 11715 SND_PCI_QUIRK(0x14cd, 0x5003, "USI", ALC662_FIXUP_USI_HEADSET_MODE), 11726 11716 SND_PCI_QUIRK(0x17aa, 0x1036, "Lenovo P520", ALC662_FIXUP_LENOVO_MULTI_CODECS), 11727 11717 SND_PCI_QUIRK(0x17aa, 0x1057, "Lenovo P360", ALC897_FIXUP_HEADSET_MIC_PIN), 11718 + SND_PCI_QUIRK(0x17aa, 0x1064, "Lenovo P3 Tower", ALC897_FIXUP_HEADSET_MIC_PIN), 11728 11719 SND_PCI_QUIRK(0x17aa, 0x32ca, "Lenovo ThinkCentre M80", ALC897_FIXUP_HEADSET_MIC_PIN), 11729 11720 SND_PCI_QUIRK(0x17aa, 0x32cb, "Lenovo ThinkCentre M70", ALC897_FIXUP_HEADSET_MIC_PIN), 11730 11721 SND_PCI_QUIRK(0x17aa, 0x32cf, "Lenovo ThinkCentre M950", ALC897_FIXUP_HEADSET_MIC_PIN),
+4 -3
sound/pci/ice1712/aureon.c
··· 1899 1899 else { 1900 1900 for (i = 0; i < ARRAY_SIZE(cs8415_controls); i++) { 1901 1901 struct snd_kcontrol *kctl; 1902 - err = snd_ctl_add(ice->card, (kctl = snd_ctl_new1(&cs8415_controls[i], ice))); 1903 - if (err < 0) 1904 - return err; 1902 + kctl = snd_ctl_new1(&cs8415_controls[i], ice); 1905 1903 if (i > 1) 1906 1904 kctl->id.device = ice->pcm->device; 1905 + err = snd_ctl_add(ice->card, kctl); 1906 + if (err < 0) 1907 + return err; 1907 1908 } 1908 1909 } 1909 1910 }
+9 -5
sound/pci/ice1712/ice1712.c
··· 2371 2371 2372 2372 if (snd_BUG_ON(!ice->pcm_pro)) 2373 2373 return -EIO; 2374 - err = snd_ctl_add(ice->card, kctl = snd_ctl_new1(&snd_ice1712_spdif_default, ice)); 2374 + kctl = snd_ctl_new1(&snd_ice1712_spdif_default, ice); 2375 + kctl->id.device = ice->pcm_pro->device; 2376 + err = snd_ctl_add(ice->card, kctl); 2375 2377 if (err < 0) 2376 2378 return err; 2379 + kctl = snd_ctl_new1(&snd_ice1712_spdif_maskc, ice); 2377 2380 kctl->id.device = ice->pcm_pro->device; 2378 - err = snd_ctl_add(ice->card, kctl = snd_ctl_new1(&snd_ice1712_spdif_maskc, ice)); 2381 + err = snd_ctl_add(ice->card, kctl); 2379 2382 if (err < 0) 2380 2383 return err; 2384 + kctl = snd_ctl_new1(&snd_ice1712_spdif_maskp, ice); 2381 2385 kctl->id.device = ice->pcm_pro->device; 2382 - err = snd_ctl_add(ice->card, kctl = snd_ctl_new1(&snd_ice1712_spdif_maskp, ice)); 2386 + err = snd_ctl_add(ice->card, kctl); 2383 2387 if (err < 0) 2384 2388 return err; 2389 + kctl = snd_ctl_new1(&snd_ice1712_spdif_stream, ice); 2385 2390 kctl->id.device = ice->pcm_pro->device; 2386 - err = snd_ctl_add(ice->card, kctl = snd_ctl_new1(&snd_ice1712_spdif_stream, ice)); 2391 + err = snd_ctl_add(ice->card, kctl); 2387 2392 if (err < 0) 2388 2393 return err; 2389 - kctl->id.device = ice->pcm_pro->device; 2390 2394 ice->spdif.stream_ctl = kctl; 2391 2395 return 0; 2392 2396 }
+10 -6
sound/pci/ice1712/ice1724.c
··· 2392 2392 if (err < 0) 2393 2393 return err; 2394 2394 2395 - err = snd_ctl_add(ice->card, kctl = snd_ctl_new1(&snd_vt1724_spdif_default, ice)); 2395 + kctl = snd_ctl_new1(&snd_vt1724_spdif_default, ice); 2396 + kctl->id.device = ice->pcm->device; 2397 + err = snd_ctl_add(ice->card, kctl); 2396 2398 if (err < 0) 2397 2399 return err; 2400 + kctl = snd_ctl_new1(&snd_vt1724_spdif_maskc, ice); 2398 2401 kctl->id.device = ice->pcm->device; 2399 - err = snd_ctl_add(ice->card, kctl = snd_ctl_new1(&snd_vt1724_spdif_maskc, ice)); 2402 + err = snd_ctl_add(ice->card, kctl); 2400 2403 if (err < 0) 2401 2404 return err; 2405 + kctl = snd_ctl_new1(&snd_vt1724_spdif_maskp, ice); 2402 2406 kctl->id.device = ice->pcm->device; 2403 - err = snd_ctl_add(ice->card, kctl = snd_ctl_new1(&snd_vt1724_spdif_maskp, ice)); 2407 + err = snd_ctl_add(ice->card, kctl); 2404 2408 if (err < 0) 2405 2409 return err; 2406 - kctl->id.device = ice->pcm->device; 2407 2410 #if 0 /* use default only */ 2408 - err = snd_ctl_add(ice->card, kctl = snd_ctl_new1(&snd_vt1724_spdif_stream, ice)); 2411 + kctl = snd_ctl_new1(&snd_vt1724_spdif_stream, ice); 2412 + kctl->id.device = ice->pcm->device; 2413 + err = snd_ctl_add(ice->card, kctl); 2409 2414 if (err < 0) 2410 2415 return err; 2411 - kctl->id.device = ice->pcm->device; 2412 2416 ice->spdif.stream_ctl = kctl; 2413 2417 #endif 2414 2418 return 0;
+3 -3
sound/pci/ymfpci/ymfpci_main.c
··· 1822 1822 if (snd_BUG_ON(!chip->pcm_spdif)) 1823 1823 return -ENXIO; 1824 1824 kctl = snd_ctl_new1(&snd_ymfpci_spdif_default, chip); 1825 + kctl->id.device = chip->pcm_spdif->device; 1825 1826 err = snd_ctl_add(chip->card, kctl); 1826 1827 if (err < 0) 1827 1828 return err; 1828 - kctl->id.device = chip->pcm_spdif->device; 1829 1829 kctl = snd_ctl_new1(&snd_ymfpci_spdif_mask, chip); 1830 + kctl->id.device = chip->pcm_spdif->device; 1830 1831 err = snd_ctl_add(chip->card, kctl); 1831 1832 if (err < 0) 1832 1833 return err; 1833 - kctl->id.device = chip->pcm_spdif->device; 1834 1834 kctl = snd_ctl_new1(&snd_ymfpci_spdif_stream, chip); 1835 + kctl->id.device = chip->pcm_spdif->device; 1835 1836 err = snd_ctl_add(chip->card, kctl); 1836 1837 if (err < 0) 1837 1838 return err; 1838 - kctl->id.device = chip->pcm_spdif->device; 1839 1839 chip->spdif_pcm_ctl = kctl; 1840 1840 1841 1841 /* direct recording source */
+1 -2
sound/soc/amd/ps/pci-ps.c
··· 211 211 case ACP63_PDM_DEV_MASK: 212 212 adata->pdm_dev_index = 0; 213 213 acp63_fill_platform_dev_info(&pdevinfo[0], parent, NULL, "acp_ps_pdm_dma", 214 - 0, adata->res, 1, &adata->acp_lock, 215 - sizeof(adata->acp_lock)); 214 + 0, adata->res, 1, NULL, 0); 216 215 acp63_fill_platform_dev_info(&pdevinfo[1], parent, NULL, "dmic-codec", 217 216 0, NULL, 0, NULL, 0); 218 217 acp63_fill_platform_dev_info(&pdevinfo[2], parent, NULL, "acp_ps_mach",
+5 -5
sound/soc/amd/ps/ps-pdm-dma.c
··· 361 361 { 362 362 struct resource *res; 363 363 struct pdm_dev_data *adata; 364 + struct acp63_dev_data *acp_data; 365 + struct device *parent; 364 366 int status; 365 367 366 - if (!pdev->dev.platform_data) { 367 - dev_err(&pdev->dev, "platform_data not retrieved\n"); 368 - return -ENODEV; 369 - } 368 + parent = pdev->dev.parent; 369 + acp_data = dev_get_drvdata(parent); 370 370 res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 371 371 if (!res) { 372 372 dev_err(&pdev->dev, "IORESOURCE_MEM FAILED\n"); ··· 382 382 return -ENOMEM; 383 383 384 384 adata->capture_stream = NULL; 385 - adata->acp_lock = pdev->dev.platform_data; 385 + adata->acp_lock = &acp_data->acp_lock; 386 386 dev_set_drvdata(&pdev->dev, adata); 387 387 status = devm_snd_soc_register_component(&pdev->dev, 388 388 &acp63_pdm_component,
+7
sound/soc/amd/yc/acp6x-mach.c
··· 175 175 .driver_data = &acp6x_card, 176 176 .matches = { 177 177 DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 178 + DMI_MATCH(DMI_PRODUCT_NAME, "21EF"), 179 + } 180 + }, 181 + { 182 + .driver_data = &acp6x_card, 183 + .matches = { 184 + DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 178 185 DMI_MATCH(DMI_PRODUCT_NAME, "21EM"), 179 186 } 180 187 },
-3
sound/soc/codecs/cs35l56.c
··· 704 704 static int cs35l56_sdw_dai_set_stream(struct snd_soc_dai *dai, 705 705 void *sdw_stream, int direction) 706 706 { 707 - if (!sdw_stream) 708 - return 0; 709 - 710 707 snd_soc_dai_dma_data_set(dai, direction, sdw_stream); 711 708 712 709 return 0;
+2 -2
sound/soc/codecs/max98363.c
··· 211 211 } 212 212 213 213 #define MAX98363_RATES SNDRV_PCM_RATE_8000_192000 214 - #define MAX98363_FORMATS (SNDRV_PCM_FMTBIT_S32_LE) 214 + #define MAX98363_FORMATS (SNDRV_PCM_FMTBIT_S16_LE | SNDRV_PCM_FMTBIT_S24_LE) 215 215 216 216 static int max98363_sdw_dai_hw_params(struct snd_pcm_substream *substream, 217 217 struct snd_pcm_hw_params *params, ··· 246 246 stream_config.frame_rate = params_rate(params); 247 247 stream_config.bps = snd_pcm_format_width(params_format(params)); 248 248 stream_config.direction = direction; 249 - stream_config.ch_count = params_channels(params); 249 + stream_config.ch_count = 1; 250 250 251 251 if (stream_config.ch_count > runtime->hw.channels_max) { 252 252 stream_config.ch_count = runtime->hw.channels_max;
+24
sound/soc/codecs/nau8824.c
··· 1903 1903 }, 1904 1904 .driver_data = (void *)(NAU8824_MONO_SPEAKER), 1905 1905 }, 1906 + { 1907 + /* Positivo CW14Q01P */ 1908 + .matches = { 1909 + DMI_MATCH(DMI_SYS_VENDOR, "Positivo Tecnologia SA"), 1910 + DMI_MATCH(DMI_BOARD_NAME, "CW14Q01P"), 1911 + }, 1912 + .driver_data = (void *)(NAU8824_JD_ACTIVE_HIGH), 1913 + }, 1914 + { 1915 + /* Positivo K1424G */ 1916 + .matches = { 1917 + DMI_MATCH(DMI_SYS_VENDOR, "Positivo Tecnologia SA"), 1918 + DMI_MATCH(DMI_BOARD_NAME, "K1424G"), 1919 + }, 1920 + .driver_data = (void *)(NAU8824_JD_ACTIVE_HIGH), 1921 + }, 1922 + { 1923 + /* Positivo N14ZP74G */ 1924 + .matches = { 1925 + DMI_MATCH(DMI_SYS_VENDOR, "Positivo Tecnologia SA"), 1926 + DMI_MATCH(DMI_BOARD_NAME, "N14ZP74G"), 1927 + }, 1928 + .driver_data = (void *)(NAU8824_JD_ACTIVE_HIGH), 1929 + }, 1906 1930 {} 1907 1931 }; 1908 1932
-1
sound/soc/codecs/wcd938x-sdw.c
··· 1190 1190 .readable_reg = wcd938x_readable_register, 1191 1191 .writeable_reg = wcd938x_writeable_register, 1192 1192 .volatile_reg = wcd938x_volatile_register, 1193 - .can_multi_write = true, 1194 1193 }; 1195 1194 1196 1195 static const struct sdw_slave_ops wcd9380_slave_ops = {
-1
sound/soc/codecs/wsa881x.c
··· 645 645 .readable_reg = wsa881x_readable_register, 646 646 .reg_format_endian = REGMAP_ENDIAN_NATIVE, 647 647 .val_format_endian = REGMAP_ENDIAN_NATIVE, 648 - .can_multi_write = true, 649 648 }; 650 649 651 650 enum {
-1
sound/soc/codecs/wsa883x.c
··· 946 946 .writeable_reg = wsa883x_writeable_register, 947 947 .reg_format_endian = REGMAP_ENDIAN_NATIVE, 948 948 .val_format_endian = REGMAP_ENDIAN_NATIVE, 949 - .can_multi_write = true, 950 949 .use_single_read = true, 951 950 }; 952 951
+9 -2
sound/soc/fsl/fsl_sai.c
··· 491 491 regmap_update_bits(sai->regmap, reg, FSL_SAI_CR2_MSEL_MASK, 492 492 FSL_SAI_CR2_MSEL(sai->mclk_id[tx])); 493 493 494 - if (savediv == 1) 494 + if (savediv == 1) { 495 495 regmap_update_bits(sai->regmap, reg, 496 496 FSL_SAI_CR2_DIV_MASK | FSL_SAI_CR2_BYP, 497 497 FSL_SAI_CR2_BYP); 498 - else 498 + if (fsl_sai_dir_is_synced(sai, adir)) 499 + regmap_update_bits(sai->regmap, FSL_SAI_xCR2(tx, ofs), 500 + FSL_SAI_CR2_BCI, FSL_SAI_CR2_BCI); 501 + else 502 + regmap_update_bits(sai->regmap, FSL_SAI_xCR2(tx, ofs), 503 + FSL_SAI_CR2_BCI, 0); 504 + } else { 499 505 regmap_update_bits(sai->regmap, reg, 500 506 FSL_SAI_CR2_DIV_MASK | FSL_SAI_CR2_BYP, 501 507 savediv / 2 - 1); 508 + } 502 509 503 510 if (sai->soc_data->max_register >= FSL_SAI_MCTL) { 504 511 /* SAI is in master mode at this point, so enable MCLK */
+1
sound/soc/fsl/fsl_sai.h
··· 116 116 117 117 /* SAI Transmit and Receive Configuration 2 Register */ 118 118 #define FSL_SAI_CR2_SYNC BIT(30) 119 + #define FSL_SAI_CR2_BCI BIT(28) 119 120 #define FSL_SAI_CR2_MSEL_MASK (0x3 << 26) 120 121 #define FSL_SAI_CR2_MSEL_BUS 0 121 122 #define FSL_SAI_CR2_MSEL_MCLK1 BIT(26)
+1 -1
sound/soc/generic/simple-card-utils.c
··· 314 314 } 315 315 ret = snd_pcm_hw_constraint_minmax(substream->runtime, SNDRV_PCM_HW_PARAM_RATE, 316 316 fixed_rate, fixed_rate); 317 - if (ret) 317 + if (ret < 0) 318 318 goto codec_err; 319 319 } 320 320
+1
sound/soc/generic/simple-card.c
··· 416 416 417 417 if (ret < 0) { 418 418 of_node_put(codec); 419 + of_node_put(plat); 419 420 of_node_put(np); 420 421 goto error; 421 422 }
-7
sound/soc/mediatek/mt8188/mt8188-afe-clk.c
··· 418 418 return 0; 419 419 } 420 420 421 - void mt8188_afe_deinit_clock(void *priv) 422 - { 423 - struct mtk_base_afe *afe = priv; 424 - 425 - mt8188_audsys_clk_unregister(afe); 426 - } 427 - 428 421 int mt8188_afe_enable_clk(struct mtk_base_afe *afe, struct clk *clk) 429 422 { 430 423 int ret;
-1
sound/soc/mediatek/mt8188/mt8188-afe-clk.h
··· 100 100 int mt8188_afe_get_mclk_source_rate(struct mtk_base_afe *afe, int apll); 101 101 int mt8188_afe_get_default_mclk_source_by_rate(int rate); 102 102 int mt8188_afe_init_clock(struct mtk_base_afe *afe); 103 - void mt8188_afe_deinit_clock(void *priv); 104 103 int mt8188_afe_enable_clk(struct mtk_base_afe *afe, struct clk *clk); 105 104 void mt8188_afe_disable_clk(struct mtk_base_afe *afe, struct clk *clk); 106 105 int mt8188_afe_set_clk_rate(struct mtk_base_afe *afe, struct clk *clk,
-4
sound/soc/mediatek/mt8188/mt8188-afe-pcm.c
··· 3185 3185 if (ret) 3186 3186 return dev_err_probe(dev, ret, "init clock error"); 3187 3187 3188 - ret = devm_add_action_or_reset(dev, mt8188_afe_deinit_clock, (void *)afe); 3189 - if (ret) 3190 - return ret; 3191 - 3192 3188 spin_lock_init(&afe_priv->afe_ctrl_lock); 3193 3189 3194 3190 mutex_init(&afe->irq_alloc_lock);
+24 -23
sound/soc/mediatek/mt8188/mt8188-audsys-clk.c
··· 138 138 GATE_AUD6(CLK_AUD_GASRC11, "aud_gasrc11", "top_asm_h", 11), 139 139 }; 140 140 141 + static void mt8188_audsys_clk_unregister(void *data) 142 + { 143 + struct mtk_base_afe *afe = data; 144 + struct mt8188_afe_private *afe_priv = afe->platform_priv; 145 + struct clk *clk; 146 + struct clk_lookup *cl; 147 + int i; 148 + 149 + if (!afe_priv) 150 + return; 151 + 152 + for (i = 0; i < CLK_AUD_NR_CLK; i++) { 153 + cl = afe_priv->lookup[i]; 154 + if (!cl) 155 + continue; 156 + 157 + clk = cl->clk; 158 + clk_unregister_gate(clk); 159 + 160 + clkdev_drop(cl); 161 + } 162 + } 163 + 141 164 int mt8188_audsys_clk_register(struct mtk_base_afe *afe) 142 165 { 143 166 struct mt8188_afe_private *afe_priv = afe->platform_priv; ··· 202 179 afe_priv->lookup[i] = cl; 203 180 } 204 181 205 - return 0; 206 - } 207 - 208 - void mt8188_audsys_clk_unregister(struct mtk_base_afe *afe) 209 - { 210 - struct mt8188_afe_private *afe_priv = afe->platform_priv; 211 - struct clk *clk; 212 - struct clk_lookup *cl; 213 - int i; 214 - 215 - if (!afe_priv) 216 - return; 217 - 218 - for (i = 0; i < CLK_AUD_NR_CLK; i++) { 219 - cl = afe_priv->lookup[i]; 220 - if (!cl) 221 - continue; 222 - 223 - clk = cl->clk; 224 - clk_unregister_gate(clk); 225 - 226 - clkdev_drop(cl); 227 - } 182 + return devm_add_action_or_reset(afe->dev, mt8188_audsys_clk_unregister, afe); 228 183 }
-1
sound/soc/mediatek/mt8188/mt8188-audsys-clk.h
··· 10 10 #define _MT8188_AUDSYS_CLK_H_ 11 11 12 12 int mt8188_audsys_clk_register(struct mtk_base_afe *afe); 13 - void mt8188_audsys_clk_unregister(struct mtk_base_afe *afe); 14 13 15 14 #endif
-5
sound/soc/mediatek/mt8195/mt8195-afe-clk.c
··· 410 410 return 0; 411 411 } 412 412 413 - void mt8195_afe_deinit_clock(struct mtk_base_afe *afe) 414 - { 415 - mt8195_audsys_clk_unregister(afe); 416 - } 417 - 418 413 int mt8195_afe_enable_clk(struct mtk_base_afe *afe, struct clk *clk) 419 414 { 420 415 int ret;
-1
sound/soc/mediatek/mt8195/mt8195-afe-clk.h
··· 101 101 int mt8195_afe_get_mclk_source_rate(struct mtk_base_afe *afe, int apll); 102 102 int mt8195_afe_get_default_mclk_source_by_rate(int rate); 103 103 int mt8195_afe_init_clock(struct mtk_base_afe *afe); 104 - void mt8195_afe_deinit_clock(struct mtk_base_afe *afe); 105 104 int mt8195_afe_enable_clk(struct mtk_base_afe *afe, struct clk *clk); 106 105 void mt8195_afe_disable_clk(struct mtk_base_afe *afe, struct clk *clk); 107 106 int mt8195_afe_prepare_clk(struct mtk_base_afe *afe, struct clk *clk);
-4
sound/soc/mediatek/mt8195/mt8195-afe-pcm.c
··· 3255 3255 3256 3256 static void mt8195_afe_pcm_dev_remove(struct platform_device *pdev) 3257 3257 { 3258 - struct mtk_base_afe *afe = platform_get_drvdata(pdev); 3259 - 3260 3258 snd_soc_unregister_component(&pdev->dev); 3261 3259 3262 3260 pm_runtime_disable(&pdev->dev); 3263 3261 if (!pm_runtime_status_suspended(&pdev->dev)) 3264 3262 mt8195_afe_runtime_suspend(&pdev->dev); 3265 - 3266 - mt8195_afe_deinit_clock(afe); 3267 3263 } 3268 3264 3269 3265 static const struct of_device_id mt8195_afe_pcm_dt_match[] = {
+24 -23
sound/soc/mediatek/mt8195/mt8195-audsys-clk.c
··· 148 148 GATE_AUD6(CLK_AUD_GASRC19, "aud_gasrc19", "top_asm_h", 19), 149 149 }; 150 150 151 + static void mt8195_audsys_clk_unregister(void *data) 152 + { 153 + struct mtk_base_afe *afe = data; 154 + struct mt8195_afe_private *afe_priv = afe->platform_priv; 155 + struct clk *clk; 156 + struct clk_lookup *cl; 157 + int i; 158 + 159 + if (!afe_priv) 160 + return; 161 + 162 + for (i = 0; i < CLK_AUD_NR_CLK; i++) { 163 + cl = afe_priv->lookup[i]; 164 + if (!cl) 165 + continue; 166 + 167 + clk = cl->clk; 168 + clk_unregister_gate(clk); 169 + 170 + clkdev_drop(cl); 171 + } 172 + } 173 + 151 174 int mt8195_audsys_clk_register(struct mtk_base_afe *afe) 152 175 { 153 176 struct mt8195_afe_private *afe_priv = afe->platform_priv; ··· 211 188 afe_priv->lookup[i] = cl; 212 189 } 213 190 214 - return 0; 215 - } 216 - 217 - void mt8195_audsys_clk_unregister(struct mtk_base_afe *afe) 218 - { 219 - struct mt8195_afe_private *afe_priv = afe->platform_priv; 220 - struct clk *clk; 221 - struct clk_lookup *cl; 222 - int i; 223 - 224 - if (!afe_priv) 225 - return; 226 - 227 - for (i = 0; i < CLK_AUD_NR_CLK; i++) { 228 - cl = afe_priv->lookup[i]; 229 - if (!cl) 230 - continue; 231 - 232 - clk = cl->clk; 233 - clk_unregister_gate(clk); 234 - 235 - clkdev_drop(cl); 236 - } 191 + return devm_add_action_or_reset(afe->dev, mt8195_audsys_clk_unregister, afe); 237 192 }
-1
sound/soc/mediatek/mt8195/mt8195-audsys-clk.h
··· 10 10 #define _MT8195_AUDSYS_CLK_H_ 11 11 12 12 int mt8195_audsys_clk_register(struct mtk_base_afe *afe); 13 - void mt8195_audsys_clk_unregister(struct mtk_base_afe *afe); 14 13 15 14 #endif
+3 -2
tools/testing/radix-tree/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0 2 2 3 - CFLAGS += -I. -I../../include -g -Og -Wall -D_LGPL_SOURCE -fsanitize=address \ 4 - -fsanitize=undefined 3 + CFLAGS += -I. -I../../include -I../../../lib -g -Og -Wall \ 4 + -D_LGPL_SOURCE -fsanitize=address -fsanitize=undefined 5 5 LDFLAGS += -fsanitize=address -fsanitize=undefined 6 6 LDLIBS+= -lpthread -lurcu 7 7 TARGETS = main idr-test multiorder xarray maple ··· 49 49 ../../../include/linux/xarray.h \ 50 50 ../../../include/linux/maple_tree.h \ 51 51 ../../../include/linux/radix-tree.h \ 52 + ../../../lib/radix-tree.h \ 52 53 ../../../include/linux/idr.h 53 54 54 55 radix-tree.c: ../../../lib/radix-tree.c
+5 -5
tools/testing/selftests/alsa/pcm-test.c
··· 381 381 goto __close; 382 382 } 383 383 if (rrate != rate) { 384 - snprintf(msg, sizeof(msg), "rate mismatch %ld != %ld", rate, rrate); 384 + snprintf(msg, sizeof(msg), "rate mismatch %ld != %d", rate, rrate); 385 385 goto __close; 386 386 } 387 387 rperiod_size = period_size; ··· 447 447 frames = snd_pcm_writei(handle, samples, rate); 448 448 if (frames < 0) { 449 449 snprintf(msg, sizeof(msg), 450 - "Write failed: expected %d, wrote %li", rate, frames); 450 + "Write failed: expected %ld, wrote %li", rate, frames); 451 451 goto __close; 452 452 } 453 453 if (frames < rate) { 454 454 snprintf(msg, sizeof(msg), 455 - "expected %d, wrote %li", rate, frames); 455 + "expected %ld, wrote %li", rate, frames); 456 456 goto __close; 457 457 } 458 458 } else { 459 459 frames = snd_pcm_readi(handle, samples, rate); 460 460 if (frames < 0) { 461 461 snprintf(msg, sizeof(msg), 462 - "expected %d, wrote %li", rate, frames); 462 + "expected %ld, wrote %li", rate, frames); 463 463 goto __close; 464 464 } 465 465 if (frames < rate) { 466 466 snprintf(msg, sizeof(msg), 467 - "expected %d, wrote %li", rate, frames); 467 + "expected %ld, wrote %li", rate, frames); 468 468 goto __close; 469 469 } 470 470 }
+7 -4
tools/testing/selftests/net/forwarding/hw_stats_l3.sh
··· 84 84 85 85 router_rp1_200_create() 86 86 { 87 - ip link add name $rp1.200 up \ 88 - link $rp1 addrgenmode eui64 type vlan id 200 87 + ip link add name $rp1.200 link $rp1 type vlan id 200 88 + ip link set dev $rp1.200 addrgenmode eui64 89 + ip link set dev $rp1.200 up 89 90 ip address add dev $rp1.200 192.0.2.2/28 90 91 ip address add dev $rp1.200 2001:db8:1::2/64 91 92 ip stats set dev $rp1.200 l3_stats on ··· 257 256 258 257 router_rp1_200_destroy 259 258 260 - ip link add name $rp1.200 link $rp1 addrgenmode none type vlan id 200 259 + ip link add name $rp1.200 link $rp1 type vlan id 200 260 + ip link set dev $rp1.200 addrgenmode none 261 261 ip stats set dev $rp1.200 l3_stats on 262 - ip link set dev $rp1.200 up addrgenmode eui64 262 + ip link set dev $rp1.200 addrgenmode eui64 263 + ip link set dev $rp1.200 up 263 264 ip address add dev $rp1.200 192.0.2.2/28 264 265 ip address add dev $rp1.200 2001:db8:1::2/64 265 266 }
+1
tools/testing/selftests/net/mptcp/config
··· 1 + CONFIG_KALLSYMS=y 1 2 CONFIG_MPTCP=y 2 3 CONFIG_IPV6=y 3 4 CONFIG_MPTCP_IPV6=y
+17 -25
tools/testing/selftests/net/mptcp/diag.sh
··· 55 55 { 56 56 local command="$1" 57 57 local expected=$2 58 - local msg nr 58 + local msg="$3" 59 + local skip="${4:-SKIP}" 60 + local nr 59 61 60 - shift 2 61 - msg=$* 62 62 nr=$(eval $command) 63 63 64 64 printf "%-50s" "$msg" 65 65 if [ $nr != $expected ]; then 66 - echo "[ fail ] expected $expected found $nr" 67 - ret=$test_cnt 66 + if [ $nr = "$skip" ] && ! mptcp_lib_expect_all_features; then 67 + echo "[ skip ] Feature probably not supported" 68 + else 69 + echo "[ fail ] expected $expected found $nr" 70 + ret=$test_cnt 71 + fi 68 72 else 69 73 echo "[ ok ]" 70 74 fi ··· 80 76 local condition=$1 81 77 shift 1 82 78 83 - __chk_nr "ss -inmHMN $ns | $condition" $* 79 + __chk_nr "ss -inmHMN $ns | $condition" "$@" 84 80 } 85 81 86 82 chk_msk_nr() 87 83 { 88 - __chk_msk_nr "grep -c token:" $* 84 + __chk_msk_nr "grep -c token:" "$@" 89 85 } 90 86 91 87 wait_msk_nr() ··· 123 119 124 120 chk_msk_fallback_nr() 125 121 { 126 - __chk_msk_nr "grep -c fallback" $* 122 + __chk_msk_nr "grep -c fallback" "$@" 127 123 } 128 124 129 125 chk_msk_remote_key_nr() 130 126 { 131 - __chk_msk_nr "grep -c remote_key" $* 127 + __chk_msk_nr "grep -c remote_key" "$@" 132 128 } 133 129 134 130 __chk_listen() 135 131 { 136 132 local filter="$1" 137 133 local expected=$2 134 + local msg="$3" 138 135 139 - shift 2 140 - msg=$* 141 - 142 - nr=$(ss -N $ns -Ml "$filter" | grep -c LISTEN) 143 - printf "%-50s" "$msg" 144 - 145 - if [ $nr != $expected ]; then 146 - echo "[ fail ] expected $expected found $nr" 147 - ret=$test_cnt 148 - else 149 - echo "[ ok ]" 150 - fi 136 + __chk_nr "ss -N $ns -Ml '$filter' | grep -c LISTEN" "$expected" "$msg" 0 151 137 } 152 138 153 139 chk_msk_listen() 154 140 { 155 141 lport=$1 156 - local msg="check for listen socket" 157 142 158 143 # destination port search should always return empty list 159 144 __chk_listen "dport $lport" 0 "listen match for dport $lport" ··· 160 167 chk_msk_inuse() 161 168 { 162 169 local expected=$1 170 + local msg="$2" 163 171 local listen_nr
164 - 165 - shift 1 166 172 167 173 listen_nr=$(ss -N "${ns}" -Ml | grep -c LISTEN) 168 174 expected=$((expected + listen_nr)) ··· 173 181 sleep 0.1 174 182 done 175 183 176 - __chk_nr get_msk_inuse $expected $* 184 + __chk_nr get_msk_inuse $expected "$msg" 0 177 185 } 178 186 179 187 # $1: ns, $2: port
+20
tools/testing/selftests/net/mptcp/mptcp_connect.sh
··· 144 144 } 145 145 146 146 mptcp_lib_check_mptcp 147 + mptcp_lib_check_kallsyms 147 148 148 149 ip -Version > /dev/null 2>&1 149 150 if [ $? -ne 0 ];then ··· 696 695 return 0 697 696 fi 698 697 698 + # IP(V6)_TRANSPARENT has been added after TOS support which came with 699 + # the required infrastructure in MPTCP sockopt code. To support TOS, the 700 + # following function has been exported (T). Not great but better than 701 + # checking for a specific kernel version. 702 + if ! mptcp_lib_kallsyms_has "T __ip_sock_set_tos$"; then 703 + echo "INFO: ${msg} not supported by the kernel: SKIP" 704 + return 705 + fi 706 + 699 707 ip netns exec "$listener_ns" nft -f /dev/stdin <<"EOF" 700 708 flush ruleset 701 709 table inet mangle { ··· 777 767 778 768 run_tests_mptfo() 779 769 { 770 + if ! mptcp_lib_kallsyms_has "mptcp_fastopen_"; then 771 + echo "INFO: TFO not supported by the kernel: SKIP" 772 + return 773 + fi 774 + 780 775 echo "INFO: with MPTFO start" 781 776 ip netns exec "$ns1" sysctl -q net.ipv4.tcp_fastopen=2 782 777 ip netns exec "$ns2" sysctl -q net.ipv4.tcp_fastopen=1 ··· 801 786 { 802 787 local old_cin=$cin 803 788 local old_sin=$sin 789 + 790 + if ! mptcp_lib_kallsyms_has "mptcp_pm_data_reset$"; then 791 + echo "INFO: Full disconnect not supported: SKIP" 792 + return 793 + fi 804 794 805 795 cat $cin $cin $cin > "$cin".disconnect 806 796
+325 -183
tools/testing/selftests/net/mptcp/mptcp_join.sh
··· 25 25 ns1="" 26 26 ns2="" 27 27 ksft_skip=4 28 + iptables="iptables" 29 + ip6tables="ip6tables" 28 30 timeout_poll=30 29 31 timeout_test=$((timeout_poll * 2 + 1)) 30 32 capture=0 ··· 85 83 ip netns add $netns || exit $ksft_skip 86 84 ip -net $netns link set lo up 87 85 ip netns exec $netns sysctl -q net.mptcp.enabled=1 88 - ip netns exec $netns sysctl -q net.mptcp.pm_type=0 86 + ip netns exec $netns sysctl -q net.mptcp.pm_type=0 2>/dev/null || true 89 87 ip netns exec $netns sysctl -q net.ipv4.conf.all.rp_filter=0 90 88 ip netns exec $netns sysctl -q net.ipv4.conf.default.rp_filter=0 91 89 if [ $checksum -eq 1 ]; then ··· 144 142 check_tools() 145 143 { 146 144 mptcp_lib_check_mptcp 145 + mptcp_lib_check_kallsyms 147 146 148 147 if ! ip -Version &> /dev/null; then 149 148 echo "SKIP: Could not run test without ip tool" 150 149 exit $ksft_skip 151 150 fi 152 151 153 - if ! iptables -V &> /dev/null; then 152 + # Use the legacy version if available to support old kernel versions 153 + if iptables-legacy -V &> /dev/null; then 154 + iptables="iptables-legacy" 155 + ip6tables="ip6tables-legacy" 156 + elif ! iptables -V &> /dev/null; then 154 157 echo "SKIP: Could not run all tests without iptables tool" 155 158 exit $ksft_skip 156 159 fi ··· 192 185 rm -f "$tmpfile" 193 186 rm -rf $evts_ns1 $evts_ns2 194 187 cleanup_partial 188 + } 189 + 190 + # $1: msg 191 + print_title() 192 + { 193 + printf "%03u %-36s %s" "${TEST_COUNT}" "${TEST_NAME}" "${1}" 194 + } 195 + 196 + # [ $1: fail msg ] 197 + mark_as_skipped() 198 + { 199 + local msg="${1:-"Feature not supported"}" 200 + 201 + mptcp_lib_fail_if_expected_feature "${msg}" 202 + 203 + print_title "[ skip ] ${msg}" 204 + printf "\n" 205 + } 206 + 207 + # $@: condition 208 + continue_if() 209 + { 210 + if ! "${@}"; then
211 + mark_as_skipped 212 + return 1 213 + fi 195 214 } 196 215 197 216 skip_test() ··· 263 230 return 0 264 231 } 265 232 233 + # $1: test name ; $2: counter to check 234 + reset_check_counter() 235 + { 236 + reset "${1}" || return 1 237 + 238 + local counter="${2}" 239 + 240 + if ! nstat -asz "${counter}" | grep -wq "${counter}"; then 241 + mark_as_skipped "counter '${counter}' is not available" 242 + return 1 243 + fi 244 + } 245 + 266 246 # $1: test name 267 247 reset_with_cookies() 268 248 { ··· 295 249 296 250 reset "${1}" || return 1 297 251 298 - tables="iptables" 252 + tables="${iptables}" 299 253 if [ $ip -eq 6 ]; then 300 - tables="ip6tables" 254 + tables="${ip6tables}" 301 255 fi 302 256 303 257 ip netns exec $ns1 sysctl -q net.mptcp.add_addr_timeout=1 304 - ip netns exec $ns2 $tables -A OUTPUT -p tcp \ 305 - -m tcp --tcp-option 30 \ 306 - -m bpf --bytecode \ 307 - "$CBPF_MPTCP_SUBOPTION_ADD_ADDR" \ 308 - -j DROP 258 + 259 + if ! ip netns exec $ns2 $tables -A OUTPUT -p tcp \ 260 + -m tcp --tcp-option 30 \ 261 + -m bpf --bytecode \ 262 + "$CBPF_MPTCP_SUBOPTION_ADD_ADDR" \ 263 + -j DROP; then 264 + mark_as_skipped "unable to set the 'add addr' rule" 265 + return 1 266 + fi 309 267 } 310 268 311 269 # $1: test name ··· 353 303 # tc action pedit offset 162 out of bounds 354 304 # 355 305 # Netfilter is used to mark packets with enough data.
356 - reset_with_fail() 306 + setup_fail_rules() 357 307 { 358 - reset "${1}" || return 1 359 - 360 - ip netns exec $ns1 sysctl -q net.mptcp.checksum_enabled=1 361 - ip netns exec $ns2 sysctl -q net.mptcp.checksum_enabled=1 362 - 363 308 check_invert=1 364 309 validate_checksum=1 365 - local i="$2" 366 - local ip="${3:-4}" 310 + local i="$1" 311 + local ip="${2:-4}" 367 312 local tables 368 313 369 - tables="iptables" 314 + tables="${iptables}" 370 315 if [ $ip -eq 6 ]; then 371 - tables="ip6tables" 316 + tables="${ip6tables}" 372 317 fi 373 318 374 319 ip netns exec $ns2 $tables \ ··· 373 328 -p tcp \ 374 329 -m length --length 150:9999 \ 375 330 -m statistic --mode nth --packet 1 --every 99999 \ 376 - -j MARK --set-mark 42 || exit 1 331 + -j MARK --set-mark 42 || return ${ksft_skip} 377 332 378 - tc -n $ns2 qdisc add dev ns2eth$i clsact || exit 1 333 + tc -n $ns2 qdisc add dev ns2eth$i clsact || return ${ksft_skip} 379 334 tc -n $ns2 filter add dev ns2eth$i egress \ 380 335 protocol ip prio 1000 \ 381 336 handle 42 fw \ 382 337 action pedit munge offset 148 u8 invert \ 383 338 pipe csum tcp \ 384 - index 100 || exit 1 339 + index 100 || return ${ksft_skip} 340 + } 341 + 342 + reset_with_fail() 343 + { 344 + reset_check_counter "${1}" "MPTcpExtInfiniteMapTx" || return 1 345 + shift 346 + 347 + ip netns exec $ns1 sysctl -q net.mptcp.checksum_enabled=1 348 + ip netns exec $ns2 sysctl -q net.mptcp.checksum_enabled=1 349 + 350 + local rc=0 351 + setup_fail_rules "${@}" || rc=$? 352 + 353 + if [ ${rc} -eq ${ksft_skip} ]; then 354 + mark_as_skipped "unable to set the 'fail' rules" 355 + return 1 356 + fi 385 357 } 386 358 387 359 reset_with_events() ··· 411 349 evts_ns1_pid=$! 412 350 ip netns exec $ns2 ./pm_nl_ctl events >> "$evts_ns2" 2>&1 & 413 351 evts_ns2_pid=$! 352 + } 353 + 354 + reset_with_tcp_filter() 355 + { 356 + reset "${1}" || return 1 357 + shift 358 + 359 + local ns="${!1}" 360 + local src="${2}" 361 + local target="${3}" 362 + 363 + if ! ip netns exec "${ns}" ${iptables} \
364 + -A INPUT \ 365 + -s "${src}" \ 366 + -p tcp \ 367 + -j "${target}"; then 368 + mark_as_skipped "unable to set the filter rules" 369 + return 1 370 + fi 414 371 } 415 372 416 373 fail_test() ··· 553 472 done 554 473 } 555 474 475 + # $1: ns ; $2: counter 476 + get_counter() 477 + { 478 + local ns="${1}" 479 + local counter="${2}" 480 + local count 481 + 482 + count=$(ip netns exec ${ns} nstat -asz "${counter}" | awk 'NR==1 {next} {print $2}') 483 + if [ -z "${count}" ]; then 484 + mptcp_lib_fail_if_expected_feature "${counter} counter" 485 + return 1 486 + fi 487 + 488 + echo "${count}" 489 + } 490 + 556 491 rm_addr_count() 557 492 { 558 - local ns=${1} 559 - 560 - ip netns exec ${ns} nstat -as | grep MPTcpExtRmAddr | awk '{print $2}' 493 + get_counter "${1}" "MPTcpExtRmAddr" 561 494 } 562 495 563 496 # $1: ns, $2: old rm_addr counter in $ns ··· 594 499 local ns="${1}" 595 500 local cnt old_cnt 596 501 597 - old_cnt=$(ip netns exec ${ns} nstat -as | grep MPJoinAckRx | awk '{print $2}') 502 + old_cnt=$(get_counter ${ns} "MPTcpExtMPJoinAckRx") 598 503 599 504 local i 600 505 for i in $(seq 10); do 601 - cnt=$(ip netns exec ${ns} nstat -as | grep MPJoinAckRx | awk '{print $2}') 506 + cnt=$(get_counter ${ns} "MPTcpExtMPJoinAckRx") 602 507 [ "$cnt" = "${old_cnt}" ] || break 603 508 sleep 0.1 604 509 done ··· 796 701 echo "[fail] expected '$expected_line' found '$line'" 797 702 fail_test 798 703 fi 799 - } 800 - 801 - filter_tcp_from() 802 - { 803 - local ns="${1}" 804 - local src="${2}" 805 - local target="${3}" 806 - 807 - ip netns exec "${ns}" iptables -A INPUT -s "${src}" -p tcp -j "${target}" 808 704 } 809 705 810 706 do_transfer() ··· 1247 1161 fi 1248 1162 1249 1163 printf "%-${nr_blank}s %s" " " "sum" 1250 - count=$(ip netns exec $ns1 nstat -as | grep MPTcpExtDataCsumErr | awk '{print $2}') 1251 - [ -z "$count" ] && count=0 1164 + count=$(get_counter ${ns1} "MPTcpExtDataCsumErr") 1252 1165 if [ "$count" != "$csum_ns1" ]; then
1253 1166 extra_msg="$extra_msg ns1=$count" 1254 1167 fi 1255 - if { [ "$count" != $csum_ns1 ] && [ $allow_multi_errors_ns1 -eq 0 ]; } || 1168 + if [ -z "$count" ]; then 1169 + echo -n "[skip]" 1170 + elif { [ "$count" != $csum_ns1 ] && [ $allow_multi_errors_ns1 -eq 0 ]; } || 1256 1171 { [ "$count" -lt $csum_ns1 ] && [ $allow_multi_errors_ns1 -eq 1 ]; }; then 1257 1172 echo "[fail] got $count data checksum error[s] expected $csum_ns1" 1258 1173 fail_test ··· 1261 1174 echo -n "[ ok ]" 1262 1175 fi 1263 1176 echo -n " - csum " 1264 - count=$(ip netns exec $ns2 nstat -as | grep MPTcpExtDataCsumErr | awk '{print $2}') 1265 - [ -z "$count" ] && count=0 1177 + count=$(get_counter ${ns2} "MPTcpExtDataCsumErr") 1266 1178 if [ "$count" != "$csum_ns2" ]; then 1267 1179 extra_msg="$extra_msg ns2=$count" 1268 1180 fi 1269 - if { [ "$count" != $csum_ns2 ] && [ $allow_multi_errors_ns2 -eq 0 ]; } || 1181 + if [ -z "$count" ]; then 1182 + echo -n "[skip]" 1183 + elif { [ "$count" != $csum_ns2 ] && [ $allow_multi_errors_ns2 -eq 0 ]; } || 1270 1184 { [ "$count" -lt $csum_ns2 ] && [ $allow_multi_errors_ns2 -eq 1 ]; }; then 1271 1185 echo "[fail] got $count data checksum error[s] expected $csum_ns2" 1272 1186 fail_test ··· 1306 1218 fi 1307 1219 1308 1220 printf "%-${nr_blank}s %s" " " "ftx" 1309 - count=$(ip netns exec $ns_tx nstat -as | grep MPTcpExtMPFailTx | awk '{print $2}') 1310 - [ -z "$count" ] && count=0 1221 + count=$(get_counter ${ns_tx} "MPTcpExtMPFailTx") 1311 1222 if [ "$count" != "$fail_tx" ]; then 1312 1223 extra_msg="$extra_msg,tx=$count" 1313 1224 fi 1314 - if { [ "$count" != "$fail_tx" ] && [ $allow_tx_lost -eq 0 ]; } || 1225 + if [ -z "$count" ]; then 1226 + echo -n "[skip]" 1227 + elif { [ "$count" != "$fail_tx" ] && [ $allow_tx_lost -eq 0 ]; } || 1315 1228 { [ "$count" -gt "$fail_tx" ] && [ $allow_tx_lost -eq 1 ]; }; then 1316 1229 echo "[fail] got $count MP_FAIL[s] TX expected $fail_tx" 1317 1230 fail_test ··· 1321 1232 fi 1322 1233 1323 1234 echo -n " - 
failrx" 1324 - count=$(ip netns exec $ns_rx nstat -as | grep MPTcpExtMPFailRx | awk '{print $2}') 1325 - [ -z "$count" ] && count=0 1235 + count=$(get_counter ${ns_rx} "MPTcpExtMPFailRx") 1326 1236 if [ "$count" != "$fail_rx" ]; then 1327 1237 extra_msg="$extra_msg,rx=$count" 1328 1238 fi 1329 - if { [ "$count" != "$fail_rx" ] && [ $allow_rx_lost -eq 0 ]; } || 1239 + if [ -z "$count" ]; then 1240 + echo -n "[skip]" 1241 + elif { [ "$count" != "$fail_rx" ] && [ $allow_rx_lost -eq 0 ]; } || 1330 1242 { [ "$count" -gt "$fail_rx" ] && [ $allow_rx_lost -eq 1 ]; }; then 1331 1243 echo "[fail] got $count MP_FAIL[s] RX expected $fail_rx" 1332 1244 fail_test ··· 1355 1265 fi 1356 1266 1357 1267 printf "%-${nr_blank}s %s" " " "ctx" 1358 - count=$(ip netns exec $ns_tx nstat -as | grep MPTcpExtMPFastcloseTx | awk '{print $2}') 1359 - [ -z "$count" ] && count=0 1360 - [ "$count" != "$fclose_tx" ] && extra_msg="$extra_msg,tx=$count" 1361 - if [ "$count" != "$fclose_tx" ]; then 1268 + count=$(get_counter ${ns_tx} "MPTcpExtMPFastcloseTx") 1269 + if [ -z "$count" ]; then 1270 + echo -n "[skip]" 1271 + elif [ "$count" != "$fclose_tx" ]; then 1272 + extra_msg="$extra_msg,tx=$count" 1362 1273 echo "[fail] got $count MP_FASTCLOSE[s] TX expected $fclose_tx" 1363 1274 fail_test 1364 1275 else ··· 1367 1276 fi 1368 1277 1369 1278 echo -n " - fclzrx" 1370 - count=$(ip netns exec $ns_rx nstat -as | grep MPTcpExtMPFastcloseRx | awk '{print $2}') 1371 - [ -z "$count" ] && count=0 1372 - [ "$count" != "$fclose_rx" ] && extra_msg="$extra_msg,rx=$count" 1373 - if [ "$count" != "$fclose_rx" ]; then 1279 + count=$(get_counter ${ns_rx} "MPTcpExtMPFastcloseRx") 1280 + if [ -z "$count" ]; then 1281 + echo -n "[skip]" 1282 + elif [ "$count" != "$fclose_rx" ]; then 1283 + extra_msg="$extra_msg,rx=$count" 1374 1284 echo "[fail] got $count MP_FASTCLOSE[s] RX expected $fclose_rx" 1375 1285 fail_test 1376 1286 else ··· 1398 1306 fi 1399 1307 1400 1308 printf "%-${nr_blank}s %s" " " "rtx" 1401 - count=$(ip 
netns exec $ns_tx nstat -as | grep MPTcpExtMPRstTx | awk '{print $2}') 1402 - [ -z "$count" ] && count=0 1403 - if [ $count -lt $rst_tx ]; then 1309 + count=$(get_counter ${ns_tx} "MPTcpExtMPRstTx") 1310 + if [ -z "$count" ]; then 1311 + echo -n "[skip]" 1312 + elif [ $count -lt $rst_tx ]; then 1404 1313 echo "[fail] got $count MP_RST[s] TX expected $rst_tx" 1405 1314 fail_test 1406 1315 else ··· 1409 1316 fi 1410 1317 1411 1318 echo -n " - rstrx " 1412 - count=$(ip netns exec $ns_rx nstat -as | grep MPTcpExtMPRstRx | awk '{print $2}') 1413 - [ -z "$count" ] && count=0 1414 - if [ "$count" -lt "$rst_rx" ]; then 1319 + count=$(get_counter ${ns_rx} "MPTcpExtMPRstRx") 1320 + if [ -z "$count" ]; then 1321 + echo -n "[skip]" 1322 + elif [ "$count" -lt "$rst_rx" ]; then 1415 1323 echo "[fail] got $count MP_RST[s] RX expected $rst_rx" 1416 1324 fail_test 1417 1325 else ··· 1429 1335 local count 1430 1336 1431 1337 printf "%-${nr_blank}s %s" " " "itx" 1432 - count=$(ip netns exec $ns2 nstat -as | grep InfiniteMapTx | awk '{print $2}') 1433 - [ -z "$count" ] && count=0 1434 - if [ "$count" != "$infi_tx" ]; then 1338 + count=$(get_counter ${ns2} "MPTcpExtInfiniteMapTx") 1339 + if [ -z "$count" ]; then 1340 + echo -n "[skip]" 1341 + elif [ "$count" != "$infi_tx" ]; then 1435 1342 echo "[fail] got $count infinite map[s] TX expected $infi_tx" 1436 1343 fail_test 1437 1344 else ··· 1440 1345 fi 1441 1346 1442 1347 echo -n " - infirx" 1443 - count=$(ip netns exec $ns1 nstat -as | grep InfiniteMapRx | awk '{print $2}') 1444 - [ -z "$count" ] && count=0 1445 - if [ "$count" != "$infi_rx" ]; then 1348 + count=$(get_counter ${ns1} "MPTcpExtInfiniteMapRx") 1349 + if [ -z "$count" ]; then 1350 + echo "[skip]" 1351 + elif [ "$count" != "$infi_rx" ]; then 1446 1352 echo "[fail] got $count infinite map[s] RX expected $infi_rx" 1447 1353 fail_test 1448 1354 else ··· 1471 1375 fi 1472 1376 1473 1377 printf "%03u %-36s %s" "${TEST_COUNT}" "${title}" "syn" 1474 - count=$(ip netns exec $ns1 
nstat -as | grep MPTcpExtMPJoinSynRx | awk '{print $2}') 1475 - [ -z "$count" ] && count=0 1476 - if [ "$count" != "$syn_nr" ]; then 1378 + count=$(get_counter ${ns1} "MPTcpExtMPJoinSynRx") 1379 + if [ -z "$count" ]; then 1380 + echo -n "[skip]" 1381 + elif [ "$count" != "$syn_nr" ]; then 1477 1382 echo "[fail] got $count JOIN[s] syn expected $syn_nr" 1478 1383 fail_test 1479 1384 else ··· 1483 1386 1484 1387 echo -n " - synack" 1485 1388 with_cookie=$(ip netns exec $ns2 sysctl -n net.ipv4.tcp_syncookies) 1486 - count=$(ip netns exec $ns2 nstat -as | grep MPTcpExtMPJoinSynAckRx | awk '{print $2}') 1487 - [ -z "$count" ] && count=0 1488 - if [ "$count" != "$syn_ack_nr" ]; then 1389 + count=$(get_counter ${ns2} "MPTcpExtMPJoinSynAckRx") 1390 + if [ -z "$count" ]; then 1391 + echo -n "[skip]" 1392 + elif [ "$count" != "$syn_ack_nr" ]; then 1489 1393 # simult connections exceeding the limit with cookie enabled could go up to 1490 1394 # synack validation as the conn limit can be enforced reliably only after 1491 1395 # the subflow creation ··· 1501 1403 fi 1502 1404 1503 1405 echo -n " - ack" 1504 - count=$(ip netns exec $ns1 nstat -as | grep MPTcpExtMPJoinAckRx | awk '{print $2}') 1505 - [ -z "$count" ] && count=0 1506 - if [ "$count" != "$ack_nr" ]; then 1406 + count=$(get_counter ${ns1} "MPTcpExtMPJoinAckRx") 1407 + if [ -z "$count" ]; then 1408 + echo "[skip]" 1409 + elif [ "$count" != "$ack_nr" ]; then 1507 1410 echo "[fail] got $count JOIN[s] ack expected $ack_nr" 1508 1411 fail_test 1509 1412 else ··· 1534 1435 local recover_nr 1535 1436 1536 1437 printf "%-${nr_blank}s %-18s" " " "stale" 1537 - stale_nr=$(ip netns exec $ns nstat -as | grep MPTcpExtSubflowStale | awk '{print $2}') 1538 - [ -z "$stale_nr" ] && stale_nr=0 1539 - recover_nr=$(ip netns exec $ns nstat -as | grep MPTcpExtSubflowRecover | awk '{print $2}') 1540 - [ -z "$recover_nr" ] && recover_nr=0 1541 1438 1542 - if [ $stale_nr -lt $stale_min ] || 1439 + stale_nr=$(get_counter ${ns} 
"MPTcpExtSubflowStale") 1440 + recover_nr=$(get_counter ${ns} "MPTcpExtSubflowRecover") 1441 + if [ -z "$stale_nr" ] || [ -z "$recover_nr" ]; then 1442 + echo "[skip]" 1443 + elif [ $stale_nr -lt $stale_min ] || 1543 1444 { [ $stale_max -gt 0 ] && [ $stale_nr -gt $stale_max ]; } || 1544 1445 [ $((stale_nr - recover_nr)) -ne $stale_delta ]; then 1545 1446 echo "[fail] got $stale_nr stale[s] $recover_nr recover[s], " \ ··· 1574 1475 timeout=$(ip netns exec $ns1 sysctl -n net.mptcp.add_addr_timeout) 1575 1476 1576 1477 printf "%-${nr_blank}s %s" " " "add" 1577 - count=$(ip netns exec $ns2 nstat -as MPTcpExtAddAddr | grep MPTcpExtAddAddr | awk '{print $2}') 1578 - [ -z "$count" ] && count=0 1579 - 1478 + count=$(get_counter ${ns2} "MPTcpExtAddAddr") 1479 + if [ -z "$count" ]; then 1480 + echo -n "[skip]" 1580 1481 # if the test configured a short timeout tolerate greater then expected 1581 1482 # add addrs options, due to retransmissions 1582 - if [ "$count" != "$add_nr" ] && { [ "$timeout" -gt 1 ] || [ "$count" -lt "$add_nr" ]; }; then 1483 + elif [ "$count" != "$add_nr" ] && { [ "$timeout" -gt 1 ] || [ "$count" -lt "$add_nr" ]; }; then 1583 1484 echo "[fail] got $count ADD_ADDR[s] expected $add_nr" 1584 1485 fail_test 1585 1486 else ··· 1587 1488 fi 1588 1489 1589 1490 echo -n " - echo " 1590 - count=$(ip netns exec $ns1 nstat -as MPTcpExtEchoAdd | grep MPTcpExtEchoAdd | awk '{print $2}') 1591 - [ -z "$count" ] && count=0 1592 - if [ "$count" != "$echo_nr" ]; then 1491 + count=$(get_counter ${ns1} "MPTcpExtEchoAdd") 1492 + if [ -z "$count" ]; then 1493 + echo -n "[skip]" 1494 + elif [ "$count" != "$echo_nr" ]; then 1593 1495 echo "[fail] got $count ADD_ADDR echo[s] expected $echo_nr" 1594 1496 fail_test 1595 1497 else ··· 1599 1499 1600 1500 if [ $port_nr -gt 0 ]; then 1601 1501 echo -n " - pt " 1602 - count=$(ip netns exec $ns2 nstat -as | grep MPTcpExtPortAdd | awk '{print $2}') 1603 - [ -z "$count" ] && count=0 1604 - if [ "$count" != "$port_nr" ]; then 1502 + 
count=$(get_counter ${ns2} "MPTcpExtPortAdd") 1503 + if [ -z "$count" ]; then 1504 + echo "[skip]" 1505 + elif [ "$count" != "$port_nr" ]; then 1605 1506 echo "[fail] got $count ADD_ADDR[s] with a port-number expected $port_nr" 1606 1507 fail_test 1607 1508 else ··· 1610 1509 fi 1611 1510 1612 1511 printf "%-${nr_blank}s %s" " " "syn" 1613 - count=$(ip netns exec $ns1 nstat -as | grep MPTcpExtMPJoinPortSynRx | 1614 - awk '{print $2}') 1615 - [ -z "$count" ] && count=0 1616 - if [ "$count" != "$syn_nr" ]; then 1512 + count=$(get_counter ${ns1} "MPTcpExtMPJoinPortSynRx") 1513 + if [ -z "$count" ]; then 1514 + echo -n "[skip]" 1515 + elif [ "$count" != "$syn_nr" ]; then 1617 1516 echo "[fail] got $count JOIN[s] syn with a different \ 1618 1517 port-number expected $syn_nr" 1619 1518 fail_test ··· 1622 1521 fi 1623 1522 1624 1523 echo -n " - synack" 1625 - count=$(ip netns exec $ns2 nstat -as | grep MPTcpExtMPJoinPortSynAckRx | 1626 - awk '{print $2}') 1627 - [ -z "$count" ] && count=0 1628 - if [ "$count" != "$syn_ack_nr" ]; then 1524 + count=$(get_counter ${ns2} "MPTcpExtMPJoinPortSynAckRx") 1525 + if [ -z "$count" ]; then 1526 + echo -n "[skip]" 1527 + elif [ "$count" != "$syn_ack_nr" ]; then 1629 1528 echo "[fail] got $count JOIN[s] synack with a different \ 1630 1529 port-number expected $syn_ack_nr" 1631 1530 fail_test ··· 1634 1533 fi 1635 1534 1636 1535 echo -n " - ack" 1637 - count=$(ip netns exec $ns1 nstat -as | grep MPTcpExtMPJoinPortAckRx | 1638 - awk '{print $2}') 1639 - [ -z "$count" ] && count=0 1640 - if [ "$count" != "$ack_nr" ]; then 1536 + count=$(get_counter ${ns1} "MPTcpExtMPJoinPortAckRx") 1537 + if [ -z "$count" ]; then 1538 + echo "[skip]" 1539 + elif [ "$count" != "$ack_nr" ]; then 1641 1540 echo "[fail] got $count JOIN[s] ack with a different \ 1642 1541 port-number expected $ack_nr" 1643 1542 fail_test ··· 1646 1545 fi 1647 1546 1648 1547 printf "%-${nr_blank}s %s" " " "syn" 1649 - count=$(ip netns exec $ns1 nstat -as | grep 
MPTcpExtMismatchPortSynRx | 1650 - awk '{print $2}') 1651 - [ -z "$count" ] && count=0 1652 - if [ "$count" != "$mis_syn_nr" ]; then 1548 + count=$(get_counter ${ns1} "MPTcpExtMismatchPortSynRx") 1549 + if [ -z "$count" ]; then 1550 + echo -n "[skip]" 1551 + elif [ "$count" != "$mis_syn_nr" ]; then 1653 1552 echo "[fail] got $count JOIN[s] syn with a mismatched \ 1654 1553 port-number expected $mis_syn_nr" 1655 1554 fail_test ··· 1658 1557 fi 1659 1558 1660 1559 echo -n " - ack " 1661 - count=$(ip netns exec $ns1 nstat -as | grep MPTcpExtMismatchPortAckRx | 1662 - awk '{print $2}') 1663 - [ -z "$count" ] && count=0 1664 - if [ "$count" != "$mis_ack_nr" ]; then 1560 + count=$(get_counter ${ns1} "MPTcpExtMismatchPortAckRx") 1561 + if [ -z "$count" ]; then 1562 + echo "[skip]" 1563 + elif [ "$count" != "$mis_ack_nr" ]; then 1665 1564 echo "[fail] got $count JOIN[s] ack with a mismatched \ 1666 1565 port-number expected $mis_ack_nr" 1667 1566 fail_test ··· 1744 1643 fi 1745 1644 1746 1645 echo -n " - rmsf " 1747 - count=$(ip netns exec $subflow_ns nstat -as | grep MPTcpExtRmSubflow | awk '{print $2}') 1748 - [ -z "$count" ] && count=0 1749 - if [ -n "$simult" ]; then 1646 + count=$(get_counter ${subflow_ns} "MPTcpExtRmSubflow") 1647 + if [ -z "$count" ]; then 1648 + echo -n "[skip]" 1649 + elif [ -n "$simult" ]; then 1750 1650 local cnt suffix 1751 1651 1752 - cnt=$(ip netns exec $addr_ns nstat -as | grep MPTcpExtRmSubflow | awk '{print $2}') 1652 + cnt=$(get_counter ${addr_ns} "MPTcpExtRmSubflow") 1753 1653 1754 1654 # in case of simult flush, the subflow removal count on each side is 1755 1655 # unreliable 1756 - [ -z "$cnt" ] && cnt=0 1757 1656 count=$((count + cnt)) 1758 1657 [ "$count" != "$rm_subflow_nr" ] && suffix="$count in [$rm_subflow_nr:$((rm_subflow_nr*2))]" 1759 1658 if [ $count -ge "$rm_subflow_nr" ] && \ 1760 1659 [ "$count" -le "$((rm_subflow_nr *2 ))" ]; then 1761 - echo "[ ok ] $suffix" 1660 + echo -n "[ ok ] $suffix" 1762 1661 else 1763 1662 echo 
"[fail] got $count RM_SUBFLOW[s] expected in range [$rm_subflow_nr:$((rm_subflow_nr*2))]" 1764 1663 fail_test 1765 1664 fi 1766 - return 1767 - fi 1768 - if [ "$count" != "$rm_subflow_nr" ]; then 1665 + elif [ "$count" != "$rm_subflow_nr" ]; then 1769 1666 echo "[fail] got $count RM_SUBFLOW[s] expected $rm_subflow_nr" 1770 1667 fail_test 1771 1668 else ··· 1797 1698 local count 1798 1699 1799 1700 printf "%-${nr_blank}s %s" " " "ptx" 1800 - count=$(ip netns exec $ns1 nstat -as | grep MPTcpExtMPPrioTx | awk '{print $2}') 1801 - [ -z "$count" ] && count=0 1802 - if [ "$count" != "$mp_prio_nr_tx" ]; then 1701 + count=$(get_counter ${ns1} "MPTcpExtMPPrioTx") 1702 + if [ -z "$count" ]; then 1703 + echo -n "[skip]" 1704 + elif [ "$count" != "$mp_prio_nr_tx" ]; then 1803 1705 echo "[fail] got $count MP_PRIO[s] TX expected $mp_prio_nr_tx" 1804 1706 fail_test 1805 1707 else ··· 1808 1708 fi 1809 1709 1810 1710 echo -n " - prx " 1811 - count=$(ip netns exec $ns1 nstat -as | grep MPTcpExtMPPrioRx | awk '{print $2}') 1812 - [ -z "$count" ] && count=0 1813 - if [ "$count" != "$mp_prio_nr_rx" ]; then 1711 + count=$(get_counter ${ns1} "MPTcpExtMPPrioRx") 1712 + if [ -z "$count" ]; then 1713 + echo "[skip]" 1714 + elif [ "$count" != "$mp_prio_nr_rx" ]; then 1814 1715 echo "[fail] got $count MP_PRIO[s] RX expected $mp_prio_nr_rx" 1815 1716 fail_test 1816 1717 else ··· 1922 1821 while [ $time -lt $timeout_ms ]; do 1923 1822 local cnt 1924 1823 1925 - cnt=$(ip netns exec $ns nstat -as TcpAttemptFails | grep TcpAttemptFails | awk '{print $2}') 1824 + cnt=$(get_counter ${ns} "TcpAttemptFails") 1926 1825 1927 1826 [ "$cnt" = 1 ] && return 1 1928 1827 time=$((time + 100)) ··· 2015 1914 fi 2016 1915 2017 1916 # multiple subflows, with subflow creation error 2018 - if reset "multi subflows, with failing subflow"; then 1917 + if reset_with_tcp_filter "multi subflows, with failing subflow" ns1 10.0.3.2 REJECT && 1918 + continue_if mptcp_lib_kallsyms_has "mptcp_pm_subflow_check_next$"; then 
2019 1919 pm_nl_set_limits $ns1 0 2 2020 1920 pm_nl_set_limits $ns2 0 2 2021 1921 pm_nl_add_endpoint $ns2 10.0.3.2 flags subflow 2022 1922 pm_nl_add_endpoint $ns2 10.0.2.2 flags subflow 2023 - filter_tcp_from $ns1 10.0.3.2 REJECT 2024 1923 run_tests $ns1 $ns2 10.0.1.1 0 0 0 slow 2025 1924 chk_join_nr 1 1 1 2026 1925 fi 2027 1926 2028 1927 # multiple subflows, with subflow timeout on MPJ 2029 - if reset "multi subflows, with subflow timeout"; then 1928 + if reset_with_tcp_filter "multi subflows, with subflow timeout" ns1 10.0.3.2 DROP && 1929 + continue_if mptcp_lib_kallsyms_has "mptcp_pm_subflow_check_next$"; then 2030 1930 pm_nl_set_limits $ns1 0 2 2031 1931 pm_nl_set_limits $ns2 0 2 2032 1932 pm_nl_add_endpoint $ns2 10.0.3.2 flags subflow 2033 1933 pm_nl_add_endpoint $ns2 10.0.2.2 flags subflow 2034 - filter_tcp_from $ns1 10.0.3.2 DROP 2035 1934 run_tests $ns1 $ns2 10.0.1.1 0 0 0 slow 2036 1935 chk_join_nr 1 1 1 2037 1936 fi ··· 2039 1938 # multiple subflows, check that the endpoint corresponding to 2040 1939 # closed subflow (due to reset) is not reused if additional 2041 1940 # subflows are added later 2042 - if reset "multi subflows, fair usage on close"; then 1941 + if reset_with_tcp_filter "multi subflows, fair usage on close" ns1 10.0.3.2 REJECT && 1942 + continue_if mptcp_lib_kallsyms_has "mptcp_pm_subflow_check_next$"; then 2043 1943 pm_nl_set_limits $ns1 0 1 2044 1944 pm_nl_set_limits $ns2 0 1 2045 1945 pm_nl_add_endpoint $ns2 10.0.3.2 flags subflow 2046 - filter_tcp_from $ns1 10.0.3.2 REJECT 2047 1946 run_tests $ns1 $ns2 10.0.1.1 0 0 0 slow & 2048 1947 2049 1948 # mpj subflow will be in TW after the reset ··· 2144 2043 # the peer could possibly miss some addr notification, allow retransmission 2145 2044 ip netns exec $ns1 sysctl -q net.mptcp.add_addr_timeout=1 2146 2045 run_tests $ns1 $ns2 10.0.1.1 0 0 0 slow 2147 - chk_join_nr 3 3 3 2148 2046 2149 - # the server will not signal the address terminating 2150 - # the MPC subflow 2151 - chk_add_nr 3 3 2047 
+ # It is not directly linked to the commit introducing this 2048 + # symbol but for the parent one which is linked anyway. 2049 + if ! mptcp_lib_kallsyms_has "mptcp_pm_subflow_check_next$"; then 2050 + chk_join_nr 3 3 2 2051 + chk_add_nr 4 4 2052 + else 2053 + chk_join_nr 3 3 3 2054 + # the server will not signal the address terminating 2055 + # the MPC subflow 2056 + chk_add_nr 3 3 2057 + fi 2152 2058 fi 2153 2059 } 2154 2060 ··· 2398 2290 pm_nl_add_endpoint $ns2 10.0.4.2 flags subflow 2399 2291 run_tests $ns1 $ns2 10.0.1.1 0 -8 -8 slow 2400 2292 chk_join_nr 3 3 3 2401 - chk_rm_tx_nr 0 2402 - chk_rm_nr 0 3 simult 2293 + 2294 + if mptcp_lib_kversion_ge 5.18; then 2295 + chk_rm_tx_nr 0 2296 + chk_rm_nr 0 3 simult 2297 + else 2298 + chk_rm_nr 3 3 2299 + fi 2403 2300 fi 2404 2301 2405 2302 # addresses flush ··· 2642 2529 2643 2530 mixed_tests() 2644 2531 { 2645 - if reset "IPv4 sockets do not use IPv6 addresses"; then 2532 + if reset "IPv4 sockets do not use IPv6 addresses" && 2533 + continue_if mptcp_lib_kversion_ge 6.3; then 2646 2534 pm_nl_set_limits $ns1 0 1 2647 2535 pm_nl_set_limits $ns2 1 1 2648 2536 pm_nl_add_endpoint $ns1 dead:beef:2::1 flags signal ··· 2652 2538 fi 2653 2539 2654 2540 # Need an IPv6 mptcp socket to allow subflows of both families 2655 - if reset "simult IPv4 and IPv6 subflows"; then 2541 + if reset "simult IPv4 and IPv6 subflows" && 2542 + continue_if mptcp_lib_kversion_ge 6.3; then 2656 2543 pm_nl_set_limits $ns1 0 1 2657 2544 pm_nl_set_limits $ns2 1 1 2658 2545 pm_nl_add_endpoint $ns1 10.0.1.1 flags signal ··· 2662 2547 fi 2663 2548 2664 2549 # cross families subflows will not be created even in fullmesh mode 2665 - if reset "simult IPv4 and IPv6 subflows, fullmesh 1x1"; then 2550 + if reset "simult IPv4 and IPv6 subflows, fullmesh 1x1" && 2551 + continue_if mptcp_lib_kversion_ge 6.3; then 2666 2552 pm_nl_set_limits $ns1 0 4 2667 2553 pm_nl_set_limits $ns2 1 4 2668 2554 pm_nl_add_endpoint $ns2 dead:beef:2::2 flags subflow,fullmesh ··· 
2674 2558 2675 2559 # fullmesh still tries to create all the possibly subflows with 2676 2560 # matching family 2677 - if reset "simult IPv4 and IPv6 subflows, fullmesh 2x2"; then 2561 + if reset "simult IPv4 and IPv6 subflows, fullmesh 2x2" && 2562 + continue_if mptcp_lib_kversion_ge 6.3; then 2678 2563 pm_nl_set_limits $ns1 0 4 2679 2564 pm_nl_set_limits $ns2 2 4 2680 2565 pm_nl_add_endpoint $ns1 10.0.2.1 flags signal ··· 2688 2571 backup_tests() 2689 2572 { 2690 2573 # single subflow, backup 2691 - if reset "single subflow, backup"; then 2574 + if reset "single subflow, backup" && 2575 + continue_if mptcp_lib_kallsyms_has "subflow_rebuild_header$"; then 2692 2576 pm_nl_set_limits $ns1 0 1 2693 2577 pm_nl_set_limits $ns2 0 1 2694 2578 pm_nl_add_endpoint $ns2 10.0.3.2 flags subflow,backup ··· 2699 2581 fi 2700 2582 2701 2583 # single address, backup 2702 - if reset "single address, backup"; then 2584 + if reset "single address, backup" && 2585 + continue_if mptcp_lib_kallsyms_has "subflow_rebuild_header$"; then 2703 2586 pm_nl_set_limits $ns1 0 1 2704 2587 pm_nl_add_endpoint $ns1 10.0.2.1 flags signal 2705 2588 pm_nl_set_limits $ns2 1 1 ··· 2711 2592 fi 2712 2593 2713 2594 # single address with port, backup 2714 - if reset "single address with port, backup"; then 2595 + if reset "single address with port, backup" && 2596 + continue_if mptcp_lib_kallsyms_has "subflow_rebuild_header$"; then 2715 2597 pm_nl_set_limits $ns1 0 1 2716 2598 pm_nl_add_endpoint $ns1 10.0.2.1 flags signal port 10100 2717 2599 pm_nl_set_limits $ns2 1 1 ··· 2722 2602 chk_prio_nr 1 1 2723 2603 fi 2724 2604 2725 - if reset "mpc backup"; then 2605 + if reset "mpc backup" && 2606 + continue_if mptcp_lib_kallsyms_doesnt_have "mptcp_subflow_send_ack$"; then 2726 2607 pm_nl_add_endpoint $ns2 10.0.1.2 flags subflow,backup 2727 2608 run_tests $ns1 $ns2 10.0.1.1 0 0 0 slow 2728 2609 chk_join_nr 0 0 0 2729 2610 chk_prio_nr 0 1 2730 2611 fi 2731 2612 2732 - if reset "mpc backup both sides"; then 2613 + 
if reset "mpc backup both sides" && 2614 + continue_if mptcp_lib_kallsyms_doesnt_have "mptcp_subflow_send_ack$"; then 2733 2615 pm_nl_add_endpoint $ns1 10.0.1.1 flags subflow,backup 2734 2616 pm_nl_add_endpoint $ns2 10.0.1.2 flags subflow,backup 2735 2617 run_tests $ns1 $ns2 10.0.1.1 0 0 0 slow ··· 2739 2617 chk_prio_nr 1 1 2740 2618 fi 2741 2619 2742 - if reset "mpc switch to backup"; then 2620 + if reset "mpc switch to backup" && 2621 + continue_if mptcp_lib_kallsyms_doesnt_have "mptcp_subflow_send_ack$"; then 2743 2622 pm_nl_add_endpoint $ns2 10.0.1.2 flags subflow 2744 2623 run_tests $ns1 $ns2 10.0.1.1 0 0 0 slow backup 2745 2624 chk_join_nr 0 0 0 2746 2625 chk_prio_nr 0 1 2747 2626 fi 2748 2627 2749 - if reset "mpc switch to backup both sides"; then 2628 + if reset "mpc switch to backup both sides" && 2629 + continue_if mptcp_lib_kallsyms_doesnt_have "mptcp_subflow_send_ack$"; then 2750 2630 pm_nl_add_endpoint $ns1 10.0.1.1 flags subflow 2751 2631 pm_nl_add_endpoint $ns2 10.0.1.2 flags subflow 2752 2632 run_tests $ns1 $ns2 10.0.1.1 0 0 0 slow backup ··· 2774 2650 local family 2775 2651 local saddr 2776 2652 local sport 2653 + local name 2777 2654 2778 2655 if [ $e_type = $LISTENER_CREATED ]; then 2779 - stdbuf -o0 -e0 printf "\t\t\t\t\t CREATE_LISTENER %s:%s"\ 2780 - $e_saddr $e_sport 2656 + name="LISTENER_CREATED" 2781 2657 elif [ $e_type = $LISTENER_CLOSED ]; then 2782 - stdbuf -o0 -e0 printf "\t\t\t\t\t CLOSE_LISTENER %s:%s "\ 2783 - $e_saddr $e_sport 2658 + name="LISTENER_CLOSED" 2659 + else 2660 + name="$e_type" 2784 2661 fi 2785 2662 2786 - type=$(grep "type:$e_type," $evt | 2787 - sed --unbuffered -n 's/.*\(type:\)\([[:digit:]]*\).*$/\2/p;q') 2788 - family=$(grep "type:$e_type," $evt | 2789 - sed --unbuffered -n 's/.*\(family:\)\([[:digit:]]*\).*$/\2/p;q') 2790 - sport=$(grep "type:$e_type," $evt | 2791 - sed --unbuffered -n 's/.*\(sport:\)\([[:digit:]]*\).*$/\2/p;q') 2663 + printf "%-${nr_blank}s %s %s:%s " " " "$name" "$e_saddr" "$e_sport" 2664 + 2665 
+ if ! mptcp_lib_kallsyms_has "mptcp_event_pm_listener$"; then 2666 + printf "[skip]: event not supported\n" 2667 + return 2668 + fi 2669 + 2670 + type=$(grep "type:$e_type," $evt | sed -n 's/.*\(type:\)\([[:digit:]]*\).*$/\2/p;q') 2671 + family=$(grep "type:$e_type," $evt | sed -n 's/.*\(family:\)\([[:digit:]]*\).*$/\2/p;q') 2672 + sport=$(grep "type:$e_type," $evt | sed -n 's/.*\(sport:\)\([[:digit:]]*\).*$/\2/p;q') 2792 2673 if [ $family ] && [ $family = $AF_INET6 ]; then 2793 - saddr=$(grep "type:$e_type," $evt | 2794 - sed --unbuffered -n 's/.*\(saddr6:\)\([0-9a-f:.]*\).*$/\2/p;q') 2674 + saddr=$(grep "type:$e_type," $evt | sed -n 's/.*\(saddr6:\)\([0-9a-f:.]*\).*$/\2/p;q') 2795 2675 else 2796 - saddr=$(grep "type:$e_type," $evt | 2797 - sed --unbuffered -n 's/.*\(saddr4:\)\([0-9.]*\).*$/\2/p;q') 2676 + saddr=$(grep "type:$e_type," $evt | sed -n 's/.*\(saddr4:\)\([0-9.]*\).*$/\2/p;q') 2798 2677 fi 2799 2678 2800 2679 if [ $type ] && [ $type = $e_type ] && 2801 2680 [ $family ] && [ $family = $e_family ] && 2802 2681 [ $saddr ] && [ $saddr = $e_saddr ] && 2803 2682 [ $sport ] && [ $sport = $e_sport ]; then 2804 - stdbuf -o0 -e0 printf "[ ok ]\n" 2683 + echo "[ ok ]" 2805 2684 return 0 2806 2685 fi 2807 2686 fail_test 2808 - stdbuf -o0 -e0 printf "[fail]\n" 2687 + echo "[fail]" 2809 2688 } 2810 2689 2811 2690 add_addr_ports_tests() ··· 3114 2987 fi 3115 2988 3116 2989 # set fullmesh flag 3117 - if reset "set fullmesh flag test"; then 2990 + if reset "set fullmesh flag test" && 2991 + continue_if mptcp_lib_kversion_ge 5.18; then 3118 2992 pm_nl_set_limits $ns1 4 4 3119 2993 pm_nl_add_endpoint $ns1 10.0.2.1 flags subflow 3120 2994 pm_nl_set_limits $ns2 4 4 ··· 3125 2997 fi 3126 2998 3127 2999 # set nofullmesh flag 3128 - if reset "set nofullmesh flag test"; then 3000 + if reset "set nofullmesh flag test" && 3001 + continue_if mptcp_lib_kversion_ge 5.18; then 3129 3002 pm_nl_set_limits $ns1 4 4 3130 3003 pm_nl_add_endpoint $ns1 10.0.2.1 flags subflow,fullmesh 3131 
3004 pm_nl_set_limits $ns2 4 4 ··· 3136 3007 fi 3137 3008 3138 3009 # set backup,fullmesh flags 3139 - if reset "set backup,fullmesh flags test"; then 3010 + if reset "set backup,fullmesh flags test" && 3011 + continue_if mptcp_lib_kversion_ge 5.18; then 3140 3012 pm_nl_set_limits $ns1 4 4 3141 3013 pm_nl_add_endpoint $ns1 10.0.2.1 flags subflow 3142 3014 pm_nl_set_limits $ns2 4 4 ··· 3148 3018 fi 3149 3019 3150 3020 # set nobackup,nofullmesh flags 3151 - if reset "set nobackup,nofullmesh flags test"; then 3021 + if reset "set nobackup,nofullmesh flags test" && 3022 + continue_if mptcp_lib_kversion_ge 5.18; then 3152 3023 pm_nl_set_limits $ns1 4 4 3153 3024 pm_nl_set_limits $ns2 4 4 3154 3025 pm_nl_add_endpoint $ns2 10.0.2.2 flags subflow,backup,fullmesh ··· 3162 3031 3163 3032 fastclose_tests() 3164 3033 { 3165 - if reset "fastclose test"; then 3034 + if reset_check_counter "fastclose test" "MPTcpExtMPFastcloseTx"; then 3166 3035 run_tests $ns1 $ns2 10.0.1.1 1024 0 fastclose_client 3167 3036 chk_join_nr 0 0 0 3168 3037 chk_fclose_nr 1 1 3169 3038 chk_rst_nr 1 1 invert 3170 3039 fi 3171 3040 3172 - if reset "fastclose server test"; then 3041 + if reset_check_counter "fastclose server test" "MPTcpExtMPFastcloseRx"; then 3173 3042 run_tests $ns1 $ns2 10.0.1.1 1024 0 fastclose_server 3174 3043 chk_join_nr 0 0 0 3175 3044 chk_fclose_nr 1 1 invert ··· 3207 3076 userspace_tests() 3208 3077 { 3209 3078 # userspace pm type prevents add_addr 3210 - if reset "userspace pm type prevents add_addr"; then 3079 + if reset "userspace pm type prevents add_addr" && 3080 + continue_if mptcp_lib_has_file '/proc/sys/net/mptcp/pm_type'; then 3211 3081 set_userspace_pm $ns1 3212 3082 pm_nl_set_limits $ns1 0 2 3213 3083 pm_nl_set_limits $ns2 0 2 ··· 3219 3087 fi 3220 3088 3221 3089 # userspace pm type does not echo add_addr without daemon 3222 - if reset "userspace pm no echo w/o daemon"; then 3090 + if reset "userspace pm no echo w/o daemon" && 3091 + continue_if mptcp_lib_has_file 
'/proc/sys/net/mptcp/pm_type'; then 3223 3092 set_userspace_pm $ns2 3224 3093 pm_nl_set_limits $ns1 0 2 3225 3094 pm_nl_set_limits $ns2 0 2 ··· 3231 3098 fi 3232 3099 3233 3100 # userspace pm type rejects join 3234 - if reset "userspace pm type rejects join"; then 3101 + if reset "userspace pm type rejects join" && 3102 + continue_if mptcp_lib_has_file '/proc/sys/net/mptcp/pm_type'; then 3235 3103 set_userspace_pm $ns1 3236 3104 pm_nl_set_limits $ns1 1 1 3237 3105 pm_nl_set_limits $ns2 1 1 ··· 3242 3108 fi 3243 3109 3244 3110 # userspace pm type does not send join 3245 - if reset "userspace pm type does not send join"; then 3111 + if reset "userspace pm type does not send join" && 3112 + continue_if mptcp_lib_has_file '/proc/sys/net/mptcp/pm_type'; then 3246 3113 set_userspace_pm $ns2 3247 3114 pm_nl_set_limits $ns1 1 1 3248 3115 pm_nl_set_limits $ns2 1 1 ··· 3253 3118 fi 3254 3119 3255 3120 # userspace pm type prevents mp_prio 3256 - if reset "userspace pm type prevents mp_prio"; then 3121 + if reset "userspace pm type prevents mp_prio" && 3122 + continue_if mptcp_lib_has_file '/proc/sys/net/mptcp/pm_type'; then 3257 3123 set_userspace_pm $ns1 3258 3124 pm_nl_set_limits $ns1 1 1 3259 3125 pm_nl_set_limits $ns2 1 1 ··· 3265 3129 fi 3266 3130 3267 3131 # userspace pm type prevents rm_addr 3268 - if reset "userspace pm type prevents rm_addr"; then 3132 + if reset "userspace pm type prevents rm_addr" && 3133 + continue_if mptcp_lib_has_file '/proc/sys/net/mptcp/pm_type'; then 3269 3134 set_userspace_pm $ns1 3270 3135 set_userspace_pm $ns2 3271 3136 pm_nl_set_limits $ns1 0 1 ··· 3278 3141 fi 3279 3142 3280 3143 # userspace pm add & remove address 3281 - if reset_with_events "userspace pm add & remove address"; then 3144 + if reset_with_events "userspace pm add & remove address" && 3145 + continue_if mptcp_lib_has_file '/proc/sys/net/mptcp/pm_type'; then 3282 3146 set_userspace_pm $ns1 3283 3147 pm_nl_set_limits $ns2 1 1 3284 3148 run_tests $ns1 $ns2 10.0.1.1 0 
userspace_1 0 slow ··· 3290 3152 fi 3291 3153 3292 3154 # userspace pm create destroy subflow 3293 - if reset_with_events "userspace pm create destroy subflow"; then 3155 + if reset_with_events "userspace pm create destroy subflow" && 3156 + continue_if mptcp_lib_has_file '/proc/sys/net/mptcp/pm_type'; then 3294 3157 set_userspace_pm $ns2 3295 3158 pm_nl_set_limits $ns1 0 1 3296 3159 run_tests $ns1 $ns2 10.0.1.1 0 0 userspace_1 slow ··· 3303 3164 3304 3165 endpoint_tests() 3305 3166 { 3167 + # subflow_rebuild_header is needed to support the implicit flag 3306 3168 # userspace pm type prevents add_addr 3307 - if reset "implicit EP"; then 3169 + if reset "implicit EP" && 3170 + mptcp_lib_kallsyms_has "subflow_rebuild_header$"; then 3308 3171 pm_nl_set_limits $ns1 2 2 3309 3172 pm_nl_set_limits $ns2 2 2 3310 3173 pm_nl_add_endpoint $ns1 10.0.2.1 flags signal ··· 3326 3185 kill_tests_wait 3327 3186 fi 3328 3187 3329 - if reset "delete and re-add"; then 3188 + if reset "delete and re-add" && 3189 + mptcp_lib_kallsyms_has "subflow_rebuild_header$"; then 3330 3190 pm_nl_set_limits $ns1 1 1 3331 3191 pm_nl_set_limits $ns2 1 1 3332 3192 pm_nl_add_endpoint $ns2 10.0.2.2 id 2 dev ns2eth2 flags subflow
+64
tools/testing/selftests/net/mptcp/mptcp_lib.sh
··· 38 38 exit ${KSFT_SKIP} 39 39 fi 40 40 } 41 + 42 + mptcp_lib_check_kallsyms() { 43 + if ! mptcp_lib_has_file "/proc/kallsyms"; then 44 + echo "SKIP: CONFIG_KALLSYMS is missing" 45 + exit ${KSFT_SKIP} 46 + fi 47 + } 48 + 49 + # Internal: use mptcp_lib_kallsyms_has() instead 50 + __mptcp_lib_kallsyms_has() { 51 + local sym="${1}" 52 + 53 + mptcp_lib_check_kallsyms 54 + 55 + grep -q " ${sym}" /proc/kallsyms 56 + } 57 + 58 + # $1: part of a symbol to look at, add '$' at the end for full name 59 + mptcp_lib_kallsyms_has() { 60 + local sym="${1}" 61 + 62 + if __mptcp_lib_kallsyms_has "${sym}"; then 63 + return 0 64 + fi 65 + 66 + mptcp_lib_fail_if_expected_feature "${sym} symbol not found" 67 + } 68 + 69 + # $1: part of a symbol to look at, add '$' at the end for full name 70 + mptcp_lib_kallsyms_doesnt_have() { 71 + local sym="${1}" 72 + 73 + if ! __mptcp_lib_kallsyms_has "${sym}"; then 74 + return 0 75 + fi 76 + 77 + mptcp_lib_fail_if_expected_feature "${sym} symbol has been found" 78 + } 79 + 80 + # !!!AVOID USING THIS!!! 81 + # Features might not land in the expected version and features can be backported 82 + # 83 + # $1: kernel version, e.g. 6.3 84 + mptcp_lib_kversion_ge() { 85 + local exp_maj="${1%.*}" 86 + local exp_min="${1#*.}" 87 + local v maj min 88 + 89 + # If the kernel has backported features, set this env var to 1: 90 + if [ "${SELFTESTS_MPTCP_LIB_NO_KVERSION_CHECK:-}" = "1" ]; then 91 + return 0 92 + fi 93 + 94 + v=$(uname -r | cut -d'.' -f1,2) 95 + maj=${v%.*} 96 + min=${v#*.} 97 + 98 + if [ "${maj}" -gt "${exp_maj}" ] || 99 + { [ "${maj}" -eq "${exp_maj}" ] && [ "${min}" -ge "${exp_min}" ]; }; then 100 + return 0 101 + fi 102 + 103 + mptcp_lib_fail_if_expected_feature "kernel version ${1} lower than ${v}" 104 + }
+12 -6
tools/testing/selftests/net/mptcp/mptcp_sockopt.c
··· 87 87 uint64_t tcpi_rcv_delta; 88 88 }; 89 89 90 + #ifndef MIN 91 + #define MIN(a, b) ((a) < (b) ? (a) : (b)) 92 + #endif 93 + 90 94 static void die_perror(const char *msg) 91 95 { 92 96 perror(msg); ··· 353 349 xerror("getsockopt MPTCP_TCPINFO (tries %d, %m)"); 354 350 355 351 assert(olen <= sizeof(ti)); 356 - assert(ti.d.size_user == ti.d.size_kernel); 357 - assert(ti.d.size_user == sizeof(struct tcp_info)); 352 + assert(ti.d.size_kernel > 0); 353 + assert(ti.d.size_user == 354 + MIN(ti.d.size_kernel, sizeof(struct tcp_info))); 358 355 assert(ti.d.num_subflows == 1); 359 356 360 357 assert(olen > (socklen_t)sizeof(struct mptcp_subflow_data)); 361 358 olen -= sizeof(struct mptcp_subflow_data); 362 - assert(olen == sizeof(struct tcp_info)); 359 + assert(olen == ti.d.size_user); 363 360 364 361 if (ti.ti[0].tcpi_bytes_sent == w && 365 362 ti.ti[0].tcpi_bytes_received == r) ··· 406 401 die_perror("getsockopt MPTCP_SUBFLOW_ADDRS"); 407 402 408 403 assert(olen <= sizeof(addrs)); 409 - assert(addrs.d.size_user == addrs.d.size_kernel); 410 - assert(addrs.d.size_user == sizeof(struct mptcp_subflow_addrs)); 404 + assert(addrs.d.size_kernel > 0); 405 + assert(addrs.d.size_user == 406 + MIN(addrs.d.size_kernel, sizeof(struct mptcp_subflow_addrs))); 411 407 assert(addrs.d.num_subflows == 1); 412 408 413 409 assert(olen > (socklen_t)sizeof(struct mptcp_subflow_data)); 414 410 olen -= sizeof(struct mptcp_subflow_data); 415 - assert(olen == sizeof(struct mptcp_subflow_addrs)); 411 + assert(olen == addrs.d.size_user); 416 412 417 413 llen = sizeof(local); 418 414 ret = getsockname(fd, (struct sockaddr *)&local, &llen);
+18 -2
tools/testing/selftests/net/mptcp/mptcp_sockopt.sh
··· 87 87 } 88 88 89 89 mptcp_lib_check_mptcp 90 + mptcp_lib_check_kallsyms 90 91 91 92 ip -Version > /dev/null 2>&1 92 93 if [ $? -ne 0 ];then ··· 187 186 local_addr="0.0.0.0" 188 187 fi 189 188 189 + cmsg="TIMESTAMPNS" 190 + if mptcp_lib_kallsyms_has "mptcp_ioctl$"; then 191 + cmsg+=",TCPINQ" 192 + fi 193 + 190 194 timeout ${timeout_test} \ 191 195 ip netns exec ${listener_ns} \ 192 - $mptcp_connect -t ${timeout_poll} -l -M 1 -p $port -s ${srv_proto} -c TIMESTAMPNS,TCPINQ \ 196 + $mptcp_connect -t ${timeout_poll} -l -M 1 -p $port -s ${srv_proto} -c "${cmsg}" \ 193 197 ${local_addr} < "$sin" > "$sout" & 194 198 local spid=$! 195 199 ··· 202 196 203 197 timeout ${timeout_test} \ 204 198 ip netns exec ${connector_ns} \ 205 - $mptcp_connect -t ${timeout_poll} -M 2 -p $port -s ${cl_proto} -c TIMESTAMPNS,TCPINQ \ 199 + $mptcp_connect -t ${timeout_poll} -M 2 -p $port -s ${cl_proto} -c "${cmsg}" \ 206 200 $connect_addr < "$cin" > "$cout" & 207 201 208 202 local cpid=$! ··· 259 253 { 260 254 local lret=0 261 255 256 + if ! mptcp_lib_kallsyms_has "mptcp_diag_fill_info$"; then 257 + echo "INFO: MPTCP sockopt not supported: SKIP" 258 + return 259 + fi 260 + 262 261 ip netns exec "$ns_sbox" ./mptcp_sockopt 263 262 lret=$? 264 263 ··· 317 306 do_tcpinq_tests() 318 307 { 319 308 local lret=0 309 + 310 + if ! mptcp_lib_kallsyms_has "mptcp_ioctl$"; then 311 + echo "INFO: TCP_INQ not supported: SKIP" 312 + return 313 + fi 320 314 321 315 local args 322 316 for args in "-t tcp" "-r tcp"; do
+17 -10
tools/testing/selftests/net/mptcp/pm_netlink.sh
··· 73 73 } 74 74 75 75 check "ip netns exec $ns1 ./pm_nl_ctl dump" "" "defaults addr list" 76 - check "ip netns exec $ns1 ./pm_nl_ctl limits" "accept 0 76 + 77 + default_limits="$(ip netns exec $ns1 ./pm_nl_ctl limits)" 78 + if mptcp_lib_expect_all_features; then 79 + check "ip netns exec $ns1 ./pm_nl_ctl limits" "accept 0 77 80 subflows 2" "defaults limits" 81 + fi 78 82 79 83 ip netns exec $ns1 ./pm_nl_ctl add 10.0.1.1 80 84 ip netns exec $ns1 ./pm_nl_ctl add 10.0.1.2 flags subflow dev lo ··· 125 121 check "ip netns exec $ns1 ./pm_nl_ctl dump" "" "flush addrs" 126 122 127 123 ip netns exec $ns1 ./pm_nl_ctl limits 9 1 128 - check "ip netns exec $ns1 ./pm_nl_ctl limits" "accept 0 129 - subflows 2" "rcv addrs above hard limit" 124 + check "ip netns exec $ns1 ./pm_nl_ctl limits" "$default_limits" "rcv addrs above hard limit" 130 125 131 126 ip netns exec $ns1 ./pm_nl_ctl limits 1 9 132 - check "ip netns exec $ns1 ./pm_nl_ctl limits" "accept 0 133 - subflows 2" "subflows above hard limit" 127 + check "ip netns exec $ns1 ./pm_nl_ctl limits" "$default_limits" "subflows above hard limit" 134 128 135 129 ip netns exec $ns1 ./pm_nl_ctl limits 8 8 136 130 check "ip netns exec $ns1 ./pm_nl_ctl limits" "accept 8 ··· 178 176 ip netns exec $ns1 ./pm_nl_ctl set 10.0.1.1 flags nobackup 179 177 check "ip netns exec $ns1 ./pm_nl_ctl dump" "id 1 flags \ 180 178 subflow 10.0.1.1" " (nobackup)" 179 + 180 + # fullmesh support has been added later 181 181 ip netns exec $ns1 ./pm_nl_ctl set id 1 flags fullmesh 182 - check "ip netns exec $ns1 ./pm_nl_ctl dump" "id 1 flags \ 182 + if ip netns exec $ns1 ./pm_nl_ctl dump | grep -q "fullmesh" || 183 + mptcp_lib_expect_all_features; then 184 + check "ip netns exec $ns1 ./pm_nl_ctl dump" "id 1 flags \ 183 185 subflow,fullmesh 10.0.1.1" " (fullmesh)" 184 - ip netns exec $ns1 ./pm_nl_ctl set id 1 flags nofullmesh 185 - check "ip netns exec $ns1 ./pm_nl_ctl dump" "id 1 flags \ 186 + ip netns exec $ns1 ./pm_nl_ctl set id 1 flags nofullmesh 187 + 
check "ip netns exec $ns1 ./pm_nl_ctl dump" "id 1 flags \ 186 188 subflow 10.0.1.1" " (nofullmesh)" 187 - ip netns exec $ns1 ./pm_nl_ctl set id 1 flags backup,fullmesh 188 - check "ip netns exec $ns1 ./pm_nl_ctl dump" "id 1 flags \ 189 + ip netns exec $ns1 ./pm_nl_ctl set id 1 flags backup,fullmesh 190 + check "ip netns exec $ns1 ./pm_nl_ctl dump" "id 1 flags \ 189 191 subflow,backup,fullmesh 10.0.1.1" " (backup,fullmesh)" 192 + fi 190 193 191 194 exit $ret
+12 -1
tools/testing/selftests/net/mptcp/userspace_pm.sh
··· 4 4 . "$(dirname "${0}")/mptcp_lib.sh" 5 5 6 6 mptcp_lib_check_mptcp 7 + mptcp_lib_check_kallsyms 8 + 9 + if ! mptcp_lib_has_file '/proc/sys/net/mptcp/pm_type'; then 10 + echo "userspace pm tests are not supported by the kernel: SKIP" 11 + exit ${KSFT_SKIP} 12 + fi 7 13 8 14 ip -Version > /dev/null 2>&1 9 15 if [ $? -ne 0 ];then 10 16 echo "SKIP: Cannot not run test without ip tool" 11 - exit 1 17 + exit ${KSFT_SKIP} 12 18 fi 13 19 14 20 ANNOUNCED=6 # MPTCP_EVENT_ANNOUNCED ··· 914 908 test_listener() 915 909 { 916 910 print_title "Listener tests" 911 + 912 + if ! mptcp_lib_kallsyms_has "mptcp_event_pm_listener$"; then 913 + stdbuf -o0 -e0 printf "LISTENER events \t[SKIP] Not supported\n" 914 + return 915 + fi 917 916 918 917 # Capture events on the network namespace running the client 919 918 :>$client_evts
+3 -3
tools/testing/selftests/ptp/testptp.c
··· 502 502 interval = t2 - t1; 503 503 offset = (t2 + t1) / 2 - tp; 504 504 505 - printf("system time: %lld.%u\n", 505 + printf("system time: %lld.%09u\n", 506 506 (pct+2*i)->sec, (pct+2*i)->nsec); 507 - printf("phc time: %lld.%u\n", 507 + printf("phc time: %lld.%09u\n", 508 508 (pct+2*i+1)->sec, (pct+2*i+1)->nsec); 509 - printf("system time: %lld.%u\n", 509 + printf("system time: %lld.%09u\n", 510 510 (pct+2*i+2)->sec, (pct+2*i+2)->nsec); 511 511 printf("system/phc clock time offset is %" PRId64 " ns\n" 512 512 "system clock time delay is %" PRId64 " ns\n",
+1 -5
tools/testing/selftests/tc-testing/config
··· 6 6 CONFIG_NF_CONNTRACK_ZONES=y 7 7 CONFIG_NF_CONNTRACK_LABELS=y 8 8 CONFIG_NF_NAT=m 9 + CONFIG_NETFILTER_XT_TARGET_LOG=m 9 10 10 11 CONFIG_NET_SCHED=y 11 12 12 13 # 13 14 # Queueing/Scheduling 14 15 # 15 - CONFIG_NET_SCH_ATM=m 16 16 CONFIG_NET_SCH_CAKE=m 17 - CONFIG_NET_SCH_CBQ=m 18 17 CONFIG_NET_SCH_CBS=m 19 18 CONFIG_NET_SCH_CHOKE=m 20 19 CONFIG_NET_SCH_CODEL=m 21 20 CONFIG_NET_SCH_DRR=m 22 - CONFIG_NET_SCH_DSMARK=m 23 21 CONFIG_NET_SCH_ETF=m 24 22 CONFIG_NET_SCH_FQ=m 25 23 CONFIG_NET_SCH_FQ_CODEL=m ··· 55 57 CONFIG_NET_CLS_FLOWER=m 56 58 CONFIG_NET_CLS_MATCHALL=m 57 59 CONFIG_NET_CLS_ROUTE4=m 58 - CONFIG_NET_CLS_RSVP=m 59 - CONFIG_NET_CLS_TCINDEX=m 60 60 CONFIG_NET_EMATCH=y 61 61 CONFIG_NET_EMATCH_STACK=32 62 62 CONFIG_NET_EMATCH_CMP=m
+2 -2
tools/testing/selftests/tc-testing/tc-tests/qdiscs/sfb.json
··· 58 58 "setup": [ 59 59 "$IP link add dev $DUMMY type dummy || /bin/true" 60 60 ], 61 - "cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root sfb db 10", 61 + "cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root sfb db 100", 62 62 "expExitCode": "0", 63 63 "verifyCmd": "$TC qdisc show dev $DUMMY", 64 - "matchPattern": "qdisc sfb 1: root refcnt [0-9]+ rehash 600s db 10ms", 64 + "matchPattern": "qdisc sfb 1: root refcnt [0-9]+ rehash 600s db 100ms", 65 65 "matchCount": "1", 66 66 "teardown": [ 67 67 "$TC qdisc del dev $DUMMY handle 1: root",
+1
tools/testing/selftests/tc-testing/tdc.sh
··· 2 2 # SPDX-License-Identifier: GPL-2.0 3 3 4 4 modprobe netdevsim 5 + modprobe sch_teql 5 6 ./tdc.py -c actions --nobuildebpf 6 7 ./tdc.py -c qdisc
+7
tools/virtio/ringtest/.gitignore
··· 1 + # SPDX-License-Identifier: GPL-2.0-only 2 + /noring 3 + /ptr_ring 4 + /ring 5 + /virtio_ring_0_9 6 + /virtio_ring_inorder 7 + /virtio_ring_poll
+11
tools/virtio/ringtest/main.h
··· 8 8 #ifndef MAIN_H 9 9 #define MAIN_H 10 10 11 + #include <assert.h> 11 12 #include <stdbool.h> 12 13 13 14 extern int param; ··· 96 95 #define cpu_relax() asm ("rep; nop" ::: "memory") 97 96 #elif defined(__s390x__) 98 97 #define cpu_relax() barrier() 98 + #elif defined(__aarch64__) 99 + #define cpu_relax() asm ("yield" ::: "memory") 99 100 #else 100 101 #define cpu_relax() assert(0) 101 102 #endif ··· 115 112 116 113 #if defined(__x86_64__) || defined(__i386__) 117 114 #define smp_mb() asm volatile("lock; addl $0,-132(%%rsp)" ::: "memory", "cc") 115 + #elif defined(__aarch64__) 116 + #define smp_mb() asm volatile("dmb ish" ::: "memory") 118 117 #else 119 118 /* 120 119 * Not using __ATOMIC_SEQ_CST since gcc docs say they are only synchronized ··· 141 136 142 137 #if defined(__i386__) || defined(__x86_64__) || defined(__s390x__) 143 138 #define smp_wmb() barrier() 139 + #elif defined(__aarch64__) 140 + #define smp_wmb() asm volatile("dmb ishst" ::: "memory") 144 141 #else 145 142 #define smp_wmb() smp_release() 143 + #endif 144 + 145 + #ifndef __always_inline 146 + #define __always_inline inline __attribute__((always_inline)) 146 147 #endif 147 148 148 149 static __always_inline
+1 -1
tools/virtio/virtio-trace/README
··· 95 95 96 96 1) Enable ftrace in the guest 97 97 <Example> 98 - # echo 1 > /sys/kernel/debug/tracing/events/sched/enable 98 + # echo 1 > /sys/kernel/tracing/events/sched/enable 99 99 100 100 2) Run trace agent in the guest 101 101 This agent must be operated as root.
+8 -4
tools/virtio/virtio-trace/trace-agent.c
··· 18 18 #define PIPE_DEF_BUFS 16 19 19 #define PIPE_MIN_SIZE (PAGE_SIZE*PIPE_DEF_BUFS) 20 20 #define PIPE_MAX_SIZE (1024*1024) 21 - #define READ_PATH_FMT \ 22 - "/sys/kernel/debug/tracing/per_cpu/cpu%d/trace_pipe_raw" 21 + #define TRACEFS "/sys/kernel/tracing" 22 + #define DEBUGFS "/sys/kernel/debug/tracing" 23 + #define READ_PATH_FMT "%s/per_cpu/cpu%d/trace_pipe_raw" 23 24 #define WRITE_PATH_FMT "/dev/virtio-ports/trace-path-cpu%d" 24 25 #define CTL_PATH "/dev/virtio-ports/agent-ctl-path" 25 26 ··· 121 120 if (this_is_write_path) 122 121 /* write(output) path */ 123 122 ret = snprintf(buf, PATH_MAX, WRITE_PATH_FMT, cpu_num); 124 - else 123 + else { 125 124 /* read(input) path */ 126 - ret = snprintf(buf, PATH_MAX, READ_PATH_FMT, cpu_num); 125 + ret = snprintf(buf, PATH_MAX, READ_PATH_FMT, TRACEFS, cpu_num); 126 + if (ret > 0 && access(buf, F_OK) != 0) 127 + ret = snprintf(buf, PATH_MAX, READ_PATH_FMT, DEBUGFS, cpu_num); 128 + } 127 129 128 130 if (ret <= 0) { 129 131 pr_err("Failed to generate %s path(CPU#%d):%d\n",