Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Add Renesas PMIC RAA215300 and built-in RTC

Merge series from Biju Das <biju.das.jz@bp.renesas.com>:

This patch series adds support for the Renesas PMIC RAA215300 and the
RTC built into this PMIC device.

The details of the PMIC can be found here[1].

The Renesas PMIC RAA215300 exposes two separate I2C devices: one for the main
device and another for the RTC.
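
For reference, the two-client split can be sketched as a devicetree fragment
(node name and I2C addresses follow the binding example added by this series):

```
raa215300: pmic@12 {
	compatible = "renesas,raa215300";
	reg = <0x12>, <0x6f>;		/* main device, RTC */
	reg-names = "main", "rtc";
};
```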

+5630 -2482
+1
.mailmap
··· 233 233 Johan Hovold <johan@kernel.org> <jhovold@gmail.com> 234 234 Johan Hovold <johan@kernel.org> <johan@hovoldconsulting.com> 235 235 John Crispin <john@phrozen.org> <blogic@openwrt.org> 236 + John Keeping <john@keeping.me.uk> <john@metanate.com> 236 237 John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de> 237 238 John Stultz <johnstul@us.ibm.com> 238 239 <jon.toppins+linux@gmail.com> <jtoppins@cumulusnetworks.com>
+10 -12
Documentation/admin-guide/cgroup-v2.rst
··· 1213 1213 A read-write single value file which exists on non-root 1214 1214 cgroups. The default is "max". 1215 1215 1216 - Memory usage throttle limit. This is the main mechanism to 1217 - control memory usage of a cgroup. If a cgroup's usage goes 1216 + Memory usage throttle limit. If a cgroup's usage goes 1218 1217 over the high boundary, the processes of the cgroup are 1219 1218 throttled and put under heavy reclaim pressure. 1220 1219 1221 1220 Going over the high limit never invokes the OOM killer and 1222 - under extreme conditions the limit may be breached. 1221 + under extreme conditions the limit may be breached. The high 1222 + limit should be used in scenarios where an external process 1223 + monitors the limited cgroup to alleviate heavy reclaim 1224 + pressure. 1223 1225 1224 1226 memory.max 1225 1227 A read-write single value file which exists on non-root 1226 1228 cgroups. The default is "max". 1227 1229 1228 - Memory usage hard limit. This is the final protection 1229 - mechanism. If a cgroup's memory usage reaches this limit and 1230 - can't be reduced, the OOM killer is invoked in the cgroup. 1231 - Under certain circumstances, the usage may go over the limit 1232 - temporarily. 1230 + Memory usage hard limit. This is the main mechanism to limit 1231 + memory usage of a cgroup. If a cgroup's memory usage reaches 1232 + this limit and can't be reduced, the OOM killer is invoked in 1233 + the cgroup. Under certain circumstances, the usage may go 1234 + over the limit temporarily. 1233 1235 1234 1236 In default configuration regular 0-order allocations always 1235 1237 succeed unless OOM killer chooses current task as a victim. ··· 1239 1237 Some kinds of allocations don't invoke the OOM killer. 1240 1238 Caller could retry them differently, return into userspace 1241 1239 as -ENOMEM or silently ignore in cases like disk readahead. 1242 - 1243 - This is the ultimate protection mechanism. As long as the 1244 - high limit is used and monitored properly, this limit's 1245 - utility is limited to providing the final safety net. 1246 1240 1247 1241 memory.reclaim 1248 1242 A write-only nested-keyed file which exists for all cgroups.
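
The memory.high vs. memory.max behavior described in the documentation text above
can be exercised with a short cgroup v2 sketch (assumes a mounted v2 hierarchy
and root privileges; the cgroup name "demo" and the limits are illustrative):

```
# Soft throttle limit: heavy reclaim pressure above 512M, no OOM kill.
mkdir /sys/fs/cgroup/demo
echo "512M" > /sys/fs/cgroup/demo/memory.high
# Hard limit: the OOM killer is invoked if usage can't be kept under 1G.
echo "1G" > /sys/fs/cgroup/demo/memory.max
# Move the current shell into the new cgroup.
echo $$ > /sys/fs/cgroup/demo/cgroup.procs
```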
+1 -1
Documentation/devicetree/bindings/ata/ahci-common.yaml
··· 8 8 9 9 maintainers: 10 10 - Hans de Goede <hdegoede@redhat.com> 11 - - Damien Le Moal <damien.lemoal@opensource.wdc.com> 11 + - Damien Le Moal <dlemoal@kernel.org> 12 12 13 13 description: 14 14 This document defines device tree properties for a common AHCI SATA
+1
Documentation/devicetree/bindings/cache/qcom,llcc.yaml
··· 129 129 - qcom,sm8250-llcc 130 130 - qcom,sm8350-llcc 131 131 - qcom,sm8450-llcc 132 + - qcom,sm8550-llcc 132 133 then: 133 134 properties: 134 135 reg:
+1 -1
Documentation/devicetree/bindings/clock/canaan,k210-clk.yaml
··· 7 7 title: Canaan Kendryte K210 Clock 8 8 9 9 maintainers: 10 - - Damien Le Moal <damien.lemoal@wdc.com> 10 + - Damien Le Moal <dlemoal@kernel.org> 11 11 12 12 description: | 13 13 Canaan Kendryte K210 SoC clocks driver bindings. The clock
+1 -1
Documentation/devicetree/bindings/i3c/silvaco,i3c-master.yaml
··· 44 44 - clock-names 45 45 - clocks 46 46 47 - additionalProperties: true 47 + unevaluatedProperties: false 48 48 49 49 examples: 50 50 - |
+1 -1
Documentation/devicetree/bindings/mfd/canaan,k210-sysctl.yaml
··· 7 7 title: Canaan Kendryte K210 System Controller 8 8 9 9 maintainers: 10 - - Damien Le Moal <damien.lemoal@wdc.com> 10 + - Damien Le Moal <dlemoal@kernel.org> 11 11 12 12 description: 13 13 Canaan Inc. Kendryte K210 SoC system controller which provides a
+2 -2
Documentation/devicetree/bindings/net/realtek-bluetooth.yaml
··· 11 11 - Alistair Francis <alistair@alistair23.me> 12 12 13 13 description: 14 - RTL8723CS/RTL8723CS/RTL8821CS/RTL8822CS is a WiFi + BT chip. WiFi part 14 + RTL8723BS/RTL8723CS/RTL8821CS/RTL8822CS is a WiFi + BT chip. WiFi part 15 15 is connected over SDIO, while BT is connected over serial. It speaks 16 16 H5 protocol with few extra commands to upload firmware and change 17 17 module speed. ··· 27 27 - items: 28 28 - enum: 29 29 - realtek,rtl8821cs-bt 30 - - const: realtek,rtl8822cs-bt 30 + - const: realtek,rtl8723bs-bt 31 31 32 32 device-wake-gpios: 33 33 maxItems: 1
+1 -1
Documentation/devicetree/bindings/pinctrl/canaan,k210-fpioa.yaml
··· 7 7 title: Canaan Kendryte K210 FPIOA 8 8 9 9 maintainers: 10 - - Damien Le Moal <damien.lemoal@wdc.com> 10 + - Damien Le Moal <dlemoal@kernel.org> 11 11 12 12 description: 13 13 The Canaan Kendryte K210 SoC Fully Programmable IO Array (FPIOA)
+3 -2
Documentation/devicetree/bindings/pinctrl/qcom,pmic-mpp.yaml
··· 144 144 enum: [0, 1, 2, 3, 4, 5, 6, 7] 145 145 146 146 qcom,paired: 147 - - description: 148 - Indicates that the pin should be operating in paired mode. 147 + type: boolean 148 + description: 149 + Indicates that the pin should be operating in paired mode. 149 150 150 151 required: 151 152 - pins
+1
Documentation/devicetree/bindings/power/qcom,rpmpd.yaml
··· 29 29 - qcom,qcm2290-rpmpd 30 30 - qcom,qcs404-rpmpd 31 31 - qcom,qdu1000-rpmhpd 32 + - qcom,sa8155p-rpmhpd 32 33 - qcom,sa8540p-rpmhpd 33 34 - qcom,sa8775p-rpmhpd 34 35 - qcom,sdm660-rpmpd
+85
Documentation/devicetree/bindings/regulator/renesas,raa215300.yaml
··· 1 + # SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/regulator/renesas,raa215300.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: Renesas RAA215300 Power Management Integrated Circuit (PMIC) 8 + 9 + maintainers: 10 + - Biju Das <biju.das.jz@bp.renesas.com> 11 + 12 + description: | 13 + The RAA215300 is a high-performance, low-cost 9-channel PMIC designed for 14 + 32-bit and 64-bit MCU and MPU applications. It supports DDR3, DDR3L, DDR4, 15 + and LPDDR4 memory power requirements. The internally compensated regulators, 16 + built-in Real-Time Clock (RTC), 32kHz crystal oscillator, and coin cell 17 + battery charger provide a highly integrated, small footprint power solution 18 + ideal for System-On-Module (SOM) applications. A spread spectrum feature 19 + provides an ease-of-use solution for noise-sensitive audio or RF applications. 20 + 21 + This device exposes two devices via I2C. One for the integrated RTC IP, and 22 + one for everything else. 23 + 24 + Link to datasheet: 25 + https://www.renesas.com/in/en/products/power-power-management/multi-channel-power-management-ics-pmics/ssdsoc-power-management-ics-pmic-and-pmus/raa215300-high-performance-9-channel-pmic-supporting-ddr-memory-built-charger-and-rtc 26 + 27 + properties: 28 + compatible: 29 + enum: 30 + - renesas,raa215300 31 + 32 + reg: 33 + maxItems: 2 34 + 35 + reg-names: 36 + items: 37 + - const: main 38 + - const: rtc 39 + 40 + interrupts: 41 + maxItems: 1 42 + 43 + clocks: 44 + description: | 45 + The clocks are optional. The RTC is disabled, if no clocks are 46 + provided(either xin or clkin). 47 + maxItems: 1 48 + 49 + clock-names: 50 + description: | 51 + Use xin, if connected to an external crystal. 52 + Use clkin, if connected to an external clock signal. 53 + enum: 54 + - xin 55 + - clkin 56 + 57 + required: 58 + - compatible 59 + - reg 60 + - reg-names 61 + 62 + additionalProperties: false 63 + 64 + examples: 65 + - | 66 + /* 32.768kHz crystal */ 67 + x2: x2-clock { 68 + compatible = "fixed-clock"; 69 + #clock-cells = <0>; 70 + clock-frequency = <32768>; 71 + }; 72 + 73 + i2c { 74 + #address-cells = <1>; 75 + #size-cells = <0>; 76 + 77 + raa215300: pmic@12 { 78 + compatible = "renesas,raa215300"; 79 + reg = <0x12>, <0x6f>; 80 + reg-names = "main", "rtc"; 81 + 82 + clocks = <&x2>; 83 + clock-names = "xin"; 84 + }; 85 + };
+1 -1
Documentation/devicetree/bindings/reset/canaan,k210-rst.yaml
··· 7 7 title: Canaan Kendryte K210 Reset Controller 8 8 9 9 maintainers: 10 - - Damien Le Moal <damien.lemoal@wdc.com> 10 + - Damien Le Moal <dlemoal@kernel.org> 11 11 12 12 description: | 13 13 Canaan Kendryte K210 reset controller driver which supports the SoC
+1 -1
Documentation/devicetree/bindings/riscv/canaan.yaml
··· 7 7 title: Canaan SoC-based boards 8 8 9 9 maintainers: 10 - - Damien Le Moal <damien.lemoal@wdc.com> 10 + - Damien Le Moal <dlemoal@kernel.org> 11 11 12 12 description: 13 13 Canaan Kendryte K210 SoC-based boards
+1 -1
Documentation/devicetree/usage-model.rst
··· 415 415 because it must decide whether to register each node as either a 416 416 platform_device or an amba_device. This unfortunately complicates the 417 417 device creation model a little bit, but the solution turns out not to 418 - be too invasive. If a node is compatible with "arm,amba-primecell", then 418 + be too invasive. If a node is compatible with "arm,primecell", then 419 419 of_platform_populate() will register it as an amba_device instead of a 420 420 platform_device.
+16 -16
Documentation/netlink/specs/ethtool.yaml
··· 223 223 name: tx-min-frag-size 224 224 type: u32 225 225 - 226 - name: tx-min-frag-size 226 + name: rx-min-frag-size 227 227 type: u32 228 228 - 229 229 name: verify-enabled ··· 294 294 name: master-slave-state 295 295 type: u8 296 296 - 297 - name: master-slave-lanes 297 + name: lanes 298 298 type: u32 299 299 - 300 300 name: rate-matching ··· 322 322 name: ext-substate 323 323 type: u8 324 324 - 325 - name: down-cnt 325 + name: ext-down-cnt 326 326 type: u32 327 327 - 328 328 name: debug ··· 577 577 name: phc-index 578 578 type: u32 579 579 - 580 - name: cable-test-nft-nest-result 580 + name: cable-test-ntf-nest-result 581 581 attributes: 582 582 - 583 583 name: pair ··· 586 586 name: code 587 587 type: u8 588 588 - 589 - name: cable-test-nft-nest-fault-length 589 + name: cable-test-ntf-nest-fault-length 590 590 attributes: 591 591 - 592 592 name: pair ··· 595 595 name: cm 596 596 type: u32 597 597 - 598 - name: cable-test-nft-nest 598 + name: cable-test-ntf-nest 599 599 attributes: 600 600 - 601 601 name: result 602 602 type: nest 603 - nested-attributes: cable-test-nft-nest-result 603 + nested-attributes: cable-test-ntf-nest-result 604 604 - 605 605 name: fault-length 606 606 type: nest 607 - nested-attributes: cable-test-nft-nest-fault-length 607 + nested-attributes: cable-test-ntf-nest-fault-length 608 608 - 609 609 name: cable-test 610 610 attributes: ··· 618 618 - 619 619 name: nest 620 620 type: nest 621 - nested-attributes: cable-test-nft-nest 621 + nested-attributes: cable-test-ntf-nest 622 622 - 623 623 name: cable-test-tdr-cfg 624 624 attributes: ··· 776 776 name: hist-bkt-hi 777 777 type: u32 778 778 - 779 - name: hist-bkt-val 779 + name: hist-val 780 780 type: u64 781 781 - 782 782 name: stats ··· 965 965 - duplex 966 966 - master-slave-cfg 967 967 - master-slave-state 968 - - master-slave-lanes 968 + - lanes 969 969 - rate-matching 970 970 dump: *linkmodes-get-op 971 971 - ··· 999 999 - sqi-max 1000 1000 - ext-state 1001 1001 - ext-substate 1002 - - down-cnt 1002 + - ext-down-cnt 1003 1003 dump: *linkstate-get-op 1004 1004 - 1005 1005 name: debug-get ··· 1351 1351 reply: 1352 1352 attributes: 1353 1353 - header 1354 - - cable-test-nft-nest 1354 + - cable-test-ntf-nest 1355 1355 - 1356 1356 name: cable-test-tdr-act 1357 1357 doc: Cable test TDR. ··· 1539 1539 - hkey 1540 1540 dump: *rss-get-op 1541 1541 - 1542 - name: plca-get 1542 + name: plca-get-cfg 1543 1543 doc: Get PLCA params. 1544 1544 1545 1545 attribute-set: plca ··· 1561 1561 - burst-tmr 1562 1562 dump: *plca-get-op 1563 1563 - 1564 - name: plca-set 1564 + name: plca-set-cfg 1565 1565 doc: Set PLCA params. 1566 1566 1567 1567 attribute-set: plca ··· 1585 1585 - 1586 1586 name: plca-ntf 1587 1587 doc: Notification for change in PLCA params. 1588 - notify: plca-get 1588 + notify: plca-get-cfg 1589 1589 - 1590 1590 name: mm-get 1591 1591 doc: Get MAC Merge configuration and state
+2 -2
Documentation/networking/ip-sysctl.rst
··· 1352 1352 Restrict ICMP_PROTO datagram sockets to users in the group range. 1353 1353 The default is "1 0", meaning, that nobody (not even root) may 1354 1354 create ping sockets. Setting it to "100 100" would grant permissions 1355 - to the single group. "0 4294967295" would enable it for the world, "100 1356 - 4294967295" would enable it for the users, but not daemons. 1355 + to the single group. "0 4294967294" would enable it for the world, "100 1356 + 4294967294" would enable it for the users, but not daemons. 1357 1357 1358 1358 tcp_early_demux - BOOLEAN 1359 1359 Enable early demux for established TCP sockets.
+18
Documentation/riscv/patch-acceptance.rst
··· 16 16 principles to the RISC-V-related code that will be accepted for 17 17 inclusion in the kernel. 18 18 19 + Patchwork 20 + --------- 21 + 22 + RISC-V has a patchwork instance, where the status of patches can be checked: 23 + 24 + https://patchwork.kernel.org/project/linux-riscv/list/ 25 + 26 + If your patch does not appear in the default view, the RISC-V maintainers have 27 + likely either requested changes, or expect it to be applied to another tree. 28 + 29 + Automation runs against this patchwork instance, building/testing patches as 30 + they arrive. The automation applies patches against the current HEAD of the 31 + RISC-V `for-next` and `fixes` branches, depending on whether the patch has been 32 + detected as a fix. Failing those, it will use the RISC-V `master` branch. 33 + The exact commit to which a series has been applied will be noted on patchwork. 34 + Patches for which any of the checks fail are unlikely to be applied and in most 35 + cases will need to be resubmitted. 36 + 19 37 Submit Checklist Addendum 20 38 ------------------------- 21 39 We'll only accept patches for new modules or extensions if the
+1 -1
Documentation/translations/zh_CN/devicetree/usage-model.rst
··· 325 325 326 326 当使用DT时,这给of_platform_populate()带来了问题,因为它必须决定是否将 327 327 每个节点注册为platform_device或amba_device。不幸的是,这使设备创建模型 328 - 变得有点复杂,但解决方案原来并不是太具有侵略性。如果一个节点与“arm,amba-primecell” 328 + 变得有点复杂,但解决方案原来并不是太具有侵略性。如果一个节点与“arm,primecell” 329 329 兼容,那么of_platform_populate()将把它注册为amba_device而不是 330 330 platform_device。
+22 -1
MAINTAINERS
··· 5728 5728 F: include/uapi/linux/dccp.h 5729 5729 F: net/dccp/ 5730 5730 5731 + DEBUGOBJECTS: 5732 + M: Thomas Gleixner <tglx@linutronix.de> 5733 + L: linux-kernel@vger.kernel.org 5734 + S: Maintained 5735 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git core/debugobjects 5736 + F: lib/debugobjects.c 5737 + F: include/linux/debugobjects.h 5738 + 5731 5739 DECSTATION PLATFORM SUPPORT 5732 5740 M: "Maciej W. Rozycki" <macro@orcam.me.uk> 5733 5741 L: linux-mips@vger.kernel.org ··· 8799 8791 GPIO SUBSYSTEM 8800 8792 M: Linus Walleij <linus.walleij@linaro.org> 8801 8793 M: Bartosz Golaszewski <brgl@bgdev.pl> 8794 + R: Andy Shevchenko <andy@kernel.org> 8802 8795 L: linux-gpio@vger.kernel.org 8803 8796 S: Maintained 8804 8797 T: git git://git.kernel.org/pub/scm/linux/kernel/git/brgl/linux.git ··· 9696 9687 F: include/uapi/linux/i2c.h 9697 9688 9698 9689 I2C SUBSYSTEM HOST DRIVERS 9690 + M: Andi Shyti <andi.shyti@kernel.org> 9699 9691 L: linux-i2c@vger.kernel.org 9700 - S: Odd Fixes 9692 + S: Maintained 9701 9693 W: https://i2c.wiki.kernel.org/ 9702 9694 Q: https://patchwork.ozlabs.org/project/linux-i2c/list/ 9703 9695 T: git git://git.kernel.org/pub/scm/linux/kernel/git/wsa/linux.git ··· 18048 18038 F: Documentation/devicetree/bindings/usb/renesas,rzn1-usbf.yaml 18049 18039 F: drivers/usb/gadget/udc/renesas_usbf.c 18050 18040 18041 + RENESAS RZ/V2M I2C DRIVER 18042 + M: Fabrizio Castro <fabrizio.castro.jz@renesas.com> 18043 + L: linux-i2c@vger.kernel.org 18044 + L: linux-renesas-soc@vger.kernel.org 18045 + S: Supported 18046 + F: Documentation/devicetree/bindings/i2c/renesas,rzv2m.yaml 18047 + F: drivers/i2c/busses/i2c-rzv2m.c 18048 + 18051 18049 RENESAS USB PHY DRIVER 18052 18050 M: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com> 18053 18051 L: linux-renesas-soc@vger.kernel.org ··· 19140 19122 M: Karsten Graul <kgraul@linux.ibm.com> 19141 19123 M: Wenjia Zhang <wenjia@linux.ibm.com> 19142 19124 M: Jan Karcher <jaka@linux.ibm.com> 19125 + R: D. Wythe <alibuda@linux.alibaba.com> 19126 + R: Tony Lu <tonylu@linux.alibaba.com> 19127 + R: Wen Gu <guwen@linux.alibaba.com> 19143 19128 L: linux-s390@vger.kernel.org 19144 19129 S: Supported 19145 19130 F: net/smc/
+1 -1
Makefile
··· 2 2 VERSION = 6 3 3 PATCHLEVEL = 4 4 4 SUBLEVEL = 0 5 - EXTRAVERSION = -rc5 5 + EXTRAVERSION = -rc7 6 6 NAME = Hurr durr I'ma ninja sloth 7 7 8 8 # *DOCUMENTATION*
+1 -1
arch/arm/boot/dts/am57xx-cl-som-am57x.dts
··· 527 527 528 528 interrupt-parent = <&gpio1>; 529 529 interrupts = <31 0>; 530 - pendown-gpio = <&gpio1 31 0>; 530 + pendown-gpio = <&gpio1 31 GPIO_ACTIVE_LOW>; 531 531 532 532 533 533 ti,x-min = /bits/ 16 <0x0>;
+1 -1
arch/arm/boot/dts/at91-sama7g5ek.dts
··· 792 792 }; 793 793 794 794 &shdwc { 795 - atmel,shdwc-debouncer = <976>; 795 + debounce-delay-us = <976>; 796 796 status = "okay"; 797 797 798 798 input@0 {
+1 -1
arch/arm/boot/dts/at91sam9261ek.dts
··· 156 156 compatible = "ti,ads7843"; 157 157 interrupts-extended = <&pioC 2 IRQ_TYPE_EDGE_BOTH>; 158 158 spi-max-frequency = <3000000>; 159 - pendown-gpio = <&pioC 2 GPIO_ACTIVE_HIGH>; 159 + pendown-gpio = <&pioC 2 GPIO_ACTIVE_LOW>; 160 160 161 161 ti,x-min = /bits/ 16 <150>; 162 162 ti,x-max = /bits/ 16 <3830>;
+1 -1
arch/arm/boot/dts/imx7d-pico-hobbit.dts
··· 64 64 interrupt-parent = <&gpio2>; 65 65 interrupts = <7 0>; 66 66 spi-max-frequency = <1000000>; 67 - pendown-gpio = <&gpio2 7 0>; 67 + pendown-gpio = <&gpio2 7 GPIO_ACTIVE_LOW>; 68 68 vcc-supply = <&reg_3p3v>; 69 69 ti,x-min = /bits/ 16 <0>; 70 70 ti,x-max = /bits/ 16 <4095>;
+1 -1
arch/arm/boot/dts/imx7d-sdb.dts
··· 205 205 pinctrl-0 = <&pinctrl_tsc2046_pendown>; 206 206 interrupt-parent = <&gpio2>; 207 207 interrupts = <29 0>; 208 - pendown-gpio = <&gpio2 29 GPIO_ACTIVE_HIGH>; 208 + pendown-gpio = <&gpio2 29 GPIO_ACTIVE_LOW>; 209 209 touchscreen-max-pressure = <255>; 210 210 wakeup-source; 211 211 };
+1 -1
arch/arm/boot/dts/omap3-cm-t3x.dtsi
··· 227 227 228 228 interrupt-parent = <&gpio2>; 229 229 interrupts = <25 0>; /* gpio_57 */ 230 - pendown-gpio = <&gpio2 25 GPIO_ACTIVE_HIGH>; 230 + pendown-gpio = <&gpio2 25 GPIO_ACTIVE_LOW>; 231 231 232 232 ti,x-min = /bits/ 16 <0x0>; 233 233 ti,x-max = /bits/ 16 <0x0fff>;
+1 -1
arch/arm/boot/dts/omap3-devkit8000-lcd-common.dtsi
··· 54 54 55 55 interrupt-parent = <&gpio1>; 56 56 interrupts = <27 0>; /* gpio_27 */ 57 - pendown-gpio = <&gpio1 27 GPIO_ACTIVE_HIGH>; 57 + pendown-gpio = <&gpio1 27 GPIO_ACTIVE_LOW>; 58 58 59 59 ti,x-min = /bits/ 16 <0x0>; 60 60 ti,x-max = /bits/ 16 <0x0fff>;
+1 -1
arch/arm/boot/dts/omap3-lilly-a83x.dtsi
··· 311 311 interrupt-parent = <&gpio1>; 312 312 interrupts = <8 0>; /* boot6 / gpio_8 */ 313 313 spi-max-frequency = <1000000>; 314 - pendown-gpio = <&gpio1 8 GPIO_ACTIVE_HIGH>; 314 + pendown-gpio = <&gpio1 8 GPIO_ACTIVE_LOW>; 315 315 vcc-supply = <&reg_vcc3>; 316 316 pinctrl-names = "default"; 317 317 pinctrl-0 = <&tsc2048_pins>;
+1 -1
arch/arm/boot/dts/omap3-overo-common-lcd35.dtsi
··· 149 149 150 150 interrupt-parent = <&gpio4>; 151 151 interrupts = <18 0>; /* gpio_114 */ 152 - pendown-gpio = <&gpio4 18 GPIO_ACTIVE_HIGH>; 152 + pendown-gpio = <&gpio4 18 GPIO_ACTIVE_LOW>; 153 153 154 154 ti,x-min = /bits/ 16 <0x0>; 155 155 ti,x-max = /bits/ 16 <0x0fff>;
+1 -1
arch/arm/boot/dts/omap3-overo-common-lcd43.dtsi
··· 160 160 161 161 interrupt-parent = <&gpio4>; 162 162 interrupts = <18 0>; /* gpio_114 */ 163 - pendown-gpio = <&gpio4 18 GPIO_ACTIVE_HIGH>; 163 + pendown-gpio = <&gpio4 18 GPIO_ACTIVE_LOW>; 164 164 165 165 ti,x-min = /bits/ 16 <0x0>; 166 166 ti,x-max = /bits/ 16 <0x0fff>;
+1 -1
arch/arm/boot/dts/omap3-pandora-common.dtsi
··· 651 651 pinctrl-0 = <&penirq_pins>; 652 652 interrupt-parent = <&gpio3>; 653 653 interrupts = <30 IRQ_TYPE_NONE>; /* GPIO_94 */ 654 - pendown-gpio = <&gpio3 30 GPIO_ACTIVE_HIGH>; 654 + pendown-gpio = <&gpio3 30 GPIO_ACTIVE_LOW>; 655 655 vcc-supply = <&vaux4>; 656 656 657 657 ti,x-min = /bits/ 16 <0>;
+1 -1
arch/arm/boot/dts/omap5-cm-t54.dts
··· 354 354 355 355 interrupt-parent = <&gpio1>; 356 356 interrupts = <15 0>; /* gpio1_wk15 */ 357 - pendown-gpio = <&gpio1 15 GPIO_ACTIVE_HIGH>; 357 + pendown-gpio = <&gpio1 15 GPIO_ACTIVE_LOW>; 358 358 359 359 360 360 ti,x-min = /bits/ 16 <0x0>;
-2
arch/arm/boot/dts/qcom-apq8026-asus-sparrow.dts
··· 268 268 function = "gpio"; 269 269 drive-strength = <8>; 270 270 bias-disable; 271 - input-enable; 272 271 }; 273 272 274 273 wlan_hostwake_default_state: wlan-hostwake-default-state { ··· 275 276 function = "gpio"; 276 277 drive-strength = <2>; 277 278 bias-disable; 278 - input-enable; 279 279 }; 280 280 281 281 wlan_regulator_default_state: wlan-regulator-default-state {
-1
arch/arm/boot/dts/qcom-apq8026-huawei-sturgeon.dts
··· 352 352 function = "gpio"; 353 353 drive-strength = <2>; 354 354 bias-disable; 355 - input-enable; 356 355 }; 357 356 358 357 wlan_regulator_default_state: wlan-regulator-default-state {
-3
arch/arm/boot/dts/qcom-apq8026-lg-lenok.dts
··· 307 307 function = "gpio"; 308 308 drive-strength = <2>; 309 309 bias-disable; 310 - input-enable; 311 310 }; 312 311 313 312 touch_pins: touch-state { ··· 316 317 317 318 drive-strength = <8>; 318 319 bias-pull-down; 319 - input-enable; 320 320 }; 321 321 322 322 reset-pins { ··· 333 335 function = "gpio"; 334 336 drive-strength = <2>; 335 337 bias-disable; 336 - input-enable; 337 338 }; 338 339 339 340 wlan_regulator_default_state: wlan-regulator-default-state {
+1
arch/arm/boot/dts/qcom-apq8064.dtsi
··· 83 83 L2: l2-cache { 84 84 compatible = "cache"; 85 85 cache-level = <2>; 86 + cache-unified; 86 87 }; 87 88 88 89 idle-states {
+1
arch/arm/boot/dts/qcom-apq8084.dtsi
··· 74 74 L2: l2-cache { 75 75 compatible = "cache"; 76 76 cache-level = <2>; 77 + cache-unified; 77 78 qcom,saw = <&saw_l2>; 78 79 }; 79 80
+1
arch/arm/boot/dts/qcom-ipq4019.dtsi
··· 102 102 L2: l2-cache { 103 103 compatible = "cache"; 104 104 cache-level = <2>; 105 + cache-unified; 105 106 qcom,saw = <&saw_l2>; 106 107 }; 107 108 };
+1
arch/arm/boot/dts/qcom-ipq8064.dtsi
··· 45 45 L2: l2-cache { 46 46 compatible = "cache"; 47 47 cache-level = <2>; 48 + cache-unified; 48 49 }; 49 50 }; 50 51
-1
arch/arm/boot/dts/qcom-mdm9615-wp8548-mangoh-green.dts
··· 49 49 gpioext1-pins { 50 50 pins = "gpio2"; 51 51 function = "gpio"; 52 - input-enable; 53 52 bias-disable; 54 53 }; 55 54 };
+1
arch/arm/boot/dts/qcom-msm8660.dtsi
··· 36 36 L2: l2-cache { 37 37 compatible = "cache"; 38 38 cache-level = <2>; 39 + cache-unified; 39 40 }; 40 41 }; 41 42
+1
arch/arm/boot/dts/qcom-msm8960.dtsi
··· 42 42 L2: l2-cache { 43 43 compatible = "cache"; 44 44 cache-level = <2>; 45 + cache-unified; 45 46 }; 46 47 }; 47 48
-2
arch/arm/boot/dts/qcom-msm8974-lge-nexus5-hammerhead.dts
··· 592 592 pins = "gpio73"; 593 593 function = "gpio"; 594 594 bias-disable; 595 - input-enable; 596 595 }; 597 596 598 597 touch_pin: touch-state { ··· 601 602 602 603 drive-strength = <2>; 603 604 bias-disable; 604 - input-enable; 605 605 }; 606 606 607 607 reset-pins {
-1
arch/arm/boot/dts/qcom-msm8974-sony-xperia-rhine.dtsi
··· 433 433 function = "gpio"; 434 434 drive-strength = <2>; 435 435 bias-disable; 436 - input-enable; 437 436 }; 438 437 439 438 sdc1_on: sdc1-on-state {
+1
arch/arm/boot/dts/qcom-msm8974.dtsi
··· 80 80 L2: l2-cache { 81 81 compatible = "cache"; 82 82 cache-level = <2>; 83 + cache-unified; 83 84 qcom,saw = <&saw_l2>; 84 85 }; 85 86
-1
arch/arm/boot/dts/qcom-msm8974pro-oneplus-bacon.dts
··· 461 461 function = "gpio"; 462 462 drive-strength = <2>; 463 463 bias-disable; 464 - input-enable; 465 464 }; 466 465 467 466 reset-pins {
-4
arch/arm/boot/dts/qcom-msm8974pro-samsung-klte.dts
··· 704 704 pins = "gpio75"; 705 705 function = "gpio"; 706 706 drive-strength = <16>; 707 - input-enable; 708 707 }; 709 708 710 709 devwake-pins { ··· 759 760 i2c_touchkey_pins: i2c-touchkey-state { 760 761 pins = "gpio95", "gpio96"; 761 762 function = "gpio"; 762 - input-enable; 763 763 bias-pull-up; 764 764 }; 765 765 766 766 i2c_led_gpioex_pins: i2c-led-gpioex-state { 767 767 pins = "gpio120", "gpio121"; 768 768 function = "gpio"; 769 - input-enable; 770 769 bias-pull-down; 771 770 }; 772 771 ··· 778 781 wifi_pin: wifi-state { 779 782 pins = "gpio92"; 780 783 function = "gpio"; 781 - input-enable; 782 784 bias-pull-down; 783 785 }; 784 786
-1
arch/arm/boot/dts/qcom-msm8974pro-sony-xperia-shinano-castor.dts
··· 631 631 function = "gpio"; 632 632 drive-strength = <2>; 633 633 bias-disable; 634 - input-enable; 635 634 }; 636 635 637 636 bt_host_wake_pin: bt-host-wake-state {
+9 -11
arch/arm/mach-at91/pm.c
··· 334 334 pdev = of_find_device_by_node(eth->np); 335 335 if (!pdev) 336 336 return false; 337 + /* put_device(eth->dev) is called at the end of suspend. */ 337 338 eth->dev = &pdev->dev; 338 339 } 339 340 340 341 /* No quirks if device isn't a wakeup source. */ 341 - if (!device_may_wakeup(eth->dev)) { 342 - put_device(eth->dev); 342 + if (!device_may_wakeup(eth->dev)) 343 343 return false; 344 - } 345 344 346 - /* put_device(eth->dev) is called at the end of suspend. */ 347 345 return true; 348 346 } 349 347 ··· 437 439 pr_err("AT91: PM: failed to enable %s clocks\n", 438 440 j == AT91_PM_G_ETH ? "geth" : "eth"); 439 441 } 440 - } else { 441 - /* 442 - * Release the reference to eth->dev taken in 443 - * at91_pm_eth_quirk_is_valid(). 444 - */ 445 - put_device(eth->dev); 446 - eth->dev = NULL; 447 442 } 443 + 444 + /* 445 + * Release the reference to eth->dev taken in 446 + * at91_pm_eth_quirk_is_valid(). 447 + */ 448 + put_device(eth->dev); 449 + eth->dev = NULL; 448 450 } 449 451 450 452 return ret;
+1 -1
arch/arm64/Kconfig
··· 1516 1516 # 16K | 27 | 14 | 13 | 11 | 1517 1517 # 64K | 29 | 16 | 13 | 13 | 1518 1518 config ARCH_FORCE_MAX_ORDER 1519 - int "Order of maximal physically contiguous allocations" if EXPERT && (ARM64_4K_PAGES || ARM64_16K_PAGES) 1519 + int 1520 1520 default "13" if ARM64_64K_PAGES 1521 1521 default "11" if ARM64_16K_PAGES 1522 1522 default "10"
+8
arch/arm64/boot/dts/freescale/imx8-ss-dma.dtsi
··· 90 90 clocks = <&uart0_lpcg IMX_LPCG_CLK_4>, 91 91 <&uart0_lpcg IMX_LPCG_CLK_0>; 92 92 clock-names = "ipg", "baud"; 93 + assigned-clocks = <&clk IMX_SC_R_UART_0 IMX_SC_PM_CLK_PER>; 94 + assigned-clock-rates = <80000000>; 93 95 power-domains = <&pd IMX_SC_R_UART_0>; 94 96 status = "disabled"; 95 97 }; ··· 102 100 clocks = <&uart1_lpcg IMX_LPCG_CLK_4>, 103 101 <&uart1_lpcg IMX_LPCG_CLK_0>; 104 102 clock-names = "ipg", "baud"; 103 + assigned-clocks = <&clk IMX_SC_R_UART_1 IMX_SC_PM_CLK_PER>; 104 + assigned-clock-rates = <80000000>; 105 105 power-domains = <&pd IMX_SC_R_UART_1>; 106 106 status = "disabled"; 107 107 }; ··· 114 110 clocks = <&uart2_lpcg IMX_LPCG_CLK_4>, 115 111 <&uart2_lpcg IMX_LPCG_CLK_0>; 116 112 clock-names = "ipg", "baud"; 113 + assigned-clocks = <&clk IMX_SC_R_UART_2 IMX_SC_PM_CLK_PER>; 114 + assigned-clock-rates = <80000000>; 117 115 power-domains = <&pd IMX_SC_R_UART_2>; 118 116 status = "disabled"; 119 117 }; ··· 126 120 clocks = <&uart3_lpcg IMX_LPCG_CLK_4>, 127 121 <&uart3_lpcg IMX_LPCG_CLK_0>; 128 122 clock-names = "ipg", "baud"; 123 + assigned-clocks = <&clk IMX_SC_R_UART_3 IMX_SC_PM_CLK_PER>; 124 + assigned-clock-rates = <80000000>; 129 125 power-domains = <&pd IMX_SC_R_UART_3>; 130 126 status = "disabled"; 131 127 };
+2 -2
arch/arm64/boot/dts/freescale/imx8mn-beacon-baseboard.dtsi
··· 81 81 &ecspi2 { 82 82 pinctrl-names = "default"; 83 83 pinctrl-0 = <&pinctrl_espi2>; 84 - cs-gpios = <&gpio5 9 GPIO_ACTIVE_LOW>; 84 + cs-gpios = <&gpio5 13 GPIO_ACTIVE_LOW>; 85 85 status = "okay"; 86 86 87 87 eeprom@0 { ··· 202 202 MX8MN_IOMUXC_ECSPI2_SCLK_ECSPI2_SCLK 0x82 203 203 MX8MN_IOMUXC_ECSPI2_MOSI_ECSPI2_MOSI 0x82 204 204 MX8MN_IOMUXC_ECSPI2_MISO_ECSPI2_MISO 0x82 205 - MX8MN_IOMUXC_ECSPI1_SS0_GPIO5_IO9 0x41 205 + MX8MN_IOMUXC_ECSPI2_SS0_GPIO5_IO13 0x41 206 206 >; 207 207 }; 208 208
+2 -2
arch/arm64/boot/dts/freescale/imx8qm-mek.dts
··· 82 82 pinctrl-0 = <&pinctrl_usdhc2>; 83 83 bus-width = <4>; 84 84 vmmc-supply = <&reg_usdhc2_vmmc>; 85 - cd-gpios = <&lsio_gpio4 22 GPIO_ACTIVE_LOW>; 86 - wp-gpios = <&lsio_gpio4 21 GPIO_ACTIVE_HIGH>; 85 + cd-gpios = <&lsio_gpio5 22 GPIO_ACTIVE_LOW>; 86 + wp-gpios = <&lsio_gpio5 21 GPIO_ACTIVE_HIGH>; 87 87 status = "okay"; 88 88 }; 89 89
+1
arch/arm64/boot/dts/qcom/ipq5332.dtsi
··· 73 73 L2_0: l2-cache { 74 74 compatible = "cache"; 75 75 cache-level = <2>; 76 + cache-unified; 76 77 }; 77 78 }; 78 79
+2 -1
arch/arm64/boot/dts/qcom/ipq6018.dtsi
··· 83 83 84 84 L2_0: l2-cache { 85 85 compatible = "cache"; 86 - cache-level = <0x2>; 86 + cache-level = <2>; 87 + cache-unified; 87 88 }; 88 89 }; 89 90
+2 -1
arch/arm64/boot/dts/qcom/ipq8074.dtsi
··· 66 66 67 67 L2_0: l2-cache { 68 68 compatible = "cache"; 69 - cache-level = <0x2>; 69 + cache-level = <2>; 70 + cache-unified; 70 71 }; 71 72 }; 72 73
+1
arch/arm64/boot/dts/qcom/ipq9574.dtsi
··· 72 72 L2_0: l2-cache { 73 73 compatible = "cache"; 74 74 cache-level = <2>; 75 + cache-unified; 75 76 }; 76 77 }; 77 78
+1
arch/arm64/boot/dts/qcom/msm8916.dtsi
··· 180 180 L2_0: l2-cache { 181 181 compatible = "cache"; 182 182 cache-level = <2>; 183 + cache-unified; 183 184 }; 184 185 185 186 idle-states {
+2
arch/arm64/boot/dts/qcom/msm8953.dtsi
··· 153 153 L2_0: l2-cache-0 { 154 154 compatible = "cache"; 155 155 cache-level = <2>; 156 + cache-unified; 156 157 }; 157 158 158 159 L2_1: l2-cache-1 { 159 160 compatible = "cache"; 160 161 cache-level = <2>; 162 + cache-unified; 161 163 }; 162 164 }; 163 165
+2
arch/arm64/boot/dts/qcom/msm8976.dtsi
··· 193 193 l2_0: l2-cache0 { 194 194 compatible = "cache"; 195 195 cache-level = <2>; 196 + cache-unified; 196 197 }; 197 198 198 199 l2_1: l2-cache1 { 199 200 compatible = "cache"; 200 201 cache-level = <2>; 202 + cache-unified; 201 203 }; 202 204 }; 203 205
+2
arch/arm64/boot/dts/qcom/msm8994.dtsi
··· 52 52 L2_0: l2-cache { 53 53 compatible = "cache"; 54 54 cache-level = <2>; 55 + cache-unified; 55 56 }; 56 57 }; 57 58 ··· 89 88 L2_1: l2-cache { 90 89 compatible = "cache"; 91 90 cache-level = <2>; 91 + cache-unified; 92 92 }; 93 93 }; 94 94
+6 -4
arch/arm64/boot/dts/qcom/msm8996.dtsi
··· 53 53 #cooling-cells = <2>; 54 54 next-level-cache = <&L2_0>; 55 55 L2_0: l2-cache { 56 - compatible = "cache"; 57 - cache-level = <2>; 56 + compatible = "cache"; 57 + cache-level = <2>; 58 + cache-unified; 58 59 }; 59 60 }; 60 61 ··· 84 83 #cooling-cells = <2>; 85 84 next-level-cache = <&L2_1>; 86 85 L2_1: l2-cache { 87 - compatible = "cache"; 88 - cache-level = <2>; 86 + compatible = "cache"; 87 + cache-level = <2>; 88 + cache-unified; 89 89 }; 90 90 }; 91 91
+2
arch/arm64/boot/dts/qcom/msm8998.dtsi
··· 146 146 L2_0: l2-cache { 147 147 compatible = "cache"; 148 148 cache-level = <2>; 149 + cache-unified; 149 150 }; 150 151 }; 151 152 ··· 191 190 L2_1: l2-cache { 192 191 compatible = "cache"; 193 192 cache-level = <2>; 193 + cache-unified; 194 194 }; 195 195 }; 196 196
+1
arch/arm64/boot/dts/qcom/qcm2290.dtsi
··· 51 51 L2_0: l2-cache { 52 52 compatible = "cache"; 53 53 cache-level = <2>; 54 + cache-unified; 54 55 }; 55 56 }; 56 57
+1
arch/arm64/boot/dts/qcom/qcs404.dtsi
··· 95 95 L2_0: l2-cache { 96 96 compatible = "cache"; 97 97 cache-level = <2>; 98 + cache-unified; 98 99 }; 99 100 100 101 idle-states {
+10
arch/arm64/boot/dts/qcom/qdu1000.dtsi
··· 35 35 next-level-cache = <&L2_0>; 36 36 L2_0: l2-cache { 37 37 compatible = "cache"; 38 + cache-level = <2>; 39 + cache-unified; 38 40 next-level-cache = <&L3_0>; 39 41 L3_0: l3-cache { 40 42 compatible = "cache"; 43 + cache-level = <3>; 44 + cache-unified; 41 45 }; 42 46 }; 43 47 }; ··· 58 54 next-level-cache = <&L2_100>; 59 55 L2_100: l2-cache { 60 56 compatible = "cache"; 57 + cache-level = <2>; 58 + cache-unified; 61 59 next-level-cache = <&L3_0>; 62 60 }; 63 61 }; ··· 76 70 next-level-cache = <&L2_200>; 77 71 L2_200: l2-cache { 78 72 compatible = "cache"; 73 + cache-level = <2>; 74 + cache-unified; 79 75 next-level-cache = <&L3_0>; 80 76 }; 81 77 }; ··· 94 86 next-level-cache = <&L2_300>; 95 87 L2_300: l2-cache { 96 88 compatible = "cache"; 89 + cache-level = <2>; 90 + cache-unified; 97 91 next-level-cache = <&L3_0>; 98 92 }; 99 93 };
+1 -1
arch/arm64/boot/dts/qcom/sa8155p-adp.dts
··· 7 7 8 8 #include <dt-bindings/regulator/qcom,rpmh-regulator.h> 9 9 #include <dt-bindings/gpio/gpio.h> 10 - #include "sm8150.dtsi" 10 + #include "sa8155p.dtsi" 11 11 #include "pmm8155au_1.dtsi" 12 12 #include "pmm8155au_2.dtsi" 13 13
+40
arch/arm64/boot/dts/qcom/sa8155p.dtsi
··· 1 + // SPDX-License-Identifier: BSD-3-Clause 2 + /* 3 + * Copyright (c) 2023, Linaro Limited 4 + * 5 + * SA8155P is an automotive variant of SM8150, with some minor changes. 6 + * Most notably, the RPMhPD setup differs: MMCX and LCX/LMX rails are gone, 7 + * though the cmd-db doesn't reflect that and access attemps result in a bite. 8 + */ 9 + 10 + #include "sm8150.dtsi" 11 + 12 + &dispcc { 13 + power-domains = <&rpmhpd SA8155P_CX>; 14 + }; 15 + 16 + &mdss_dsi0 { 17 + power-domains = <&rpmhpd SA8155P_CX>; 18 + }; 19 + 20 + &mdss_dsi1 { 21 + power-domains = <&rpmhpd SA8155P_CX>; 22 + }; 23 + 24 + &mdss_mdp { 25 + power-domains = <&rpmhpd SA8155P_CX>; 26 + }; 27 + 28 + &remoteproc_slpi { 29 + power-domains = <&rpmhpd SA8155P_CX>, 30 + <&rpmhpd SA8155P_MX>; 31 + }; 32 + 33 + &rpmhpd { 34 + /* 35 + * The bindings were crafted such that SA8155P PDs match their 36 + * SM8150 counterparts to make it more maintainable and only 37 + * necessitate adjusting entries that actually differ 38 + */ 39 + compatible = "qcom,sa8155p-rpmhpd"; 40 + };
+20
arch/arm64/boot/dts/qcom/sa8775p.dtsi
··· 42 42 next-level-cache = <&L2_0>; 43 43 L2_0: l2-cache { 44 44 compatible = "cache"; 45 + cache-level = <2>; 46 + cache-unified; 45 47 next-level-cache = <&L3_0>; 46 48 L3_0: l3-cache { 47 49 compatible = "cache"; 50 + cache-level = <3>; 51 + cache-unified; 48 52 }; 49 53 }; 50 54 }; ··· 62 58 next-level-cache = <&L2_1>; 63 59 L2_1: l2-cache { 64 60 compatible = "cache"; 61 + cache-level = <2>; 62 + cache-unified; 65 63 next-level-cache = <&L3_0>; 66 64 }; 67 65 }; ··· 77 71 next-level-cache = <&L2_2>; 78 72 L2_2: l2-cache { 79 73 compatible = "cache"; 74 + cache-level = <2>; 75 + cache-unified; 80 76 next-level-cache = <&L3_0>; 81 77 }; 82 78 }; ··· 92 84 next-level-cache = <&L2_3>; 93 85 L2_3: l2-cache { 94 86 compatible = "cache"; 87 + cache-level = <2>; 88 + cache-unified; 95 89 next-level-cache = <&L3_0>; 96 90 }; 97 91 }; ··· 107 97 next-level-cache = <&L2_4>; 108 98 L2_4: l2-cache { 109 99 compatible = "cache"; 100 + cache-level = <2>; 101 + cache-unified; 110 102 next-level-cache = <&L3_1>; 111 103 L3_1: l3-cache { 112 104 compatible = "cache"; 105 + cache-level = <3>; 106 + cache-unified; 113 107 }; 114 108 115 109 }; ··· 128 114 next-level-cache = <&L2_5>; 129 115 L2_5: l2-cache { 130 116 compatible = "cache"; 117 + cache-level = <2>; 118 + cache-unified; 131 119 next-level-cache = <&L3_1>; 132 120 }; 133 121 }; ··· 143 127 next-level-cache = <&L2_6>; 144 128 L2_6: l2-cache { 145 129 compatible = "cache"; 130 + cache-level = <2>; 131 + cache-unified; 146 132 next-level-cache = <&L3_1>; 147 133 }; 148 134 }; ··· 158 140 next-level-cache = <&L2_7>; 159 141 L2_7: l2-cache { 160 142 compatible = "cache"; 143 + cache-level = <2>; 144 + cache-unified; 161 145 next-level-cache = <&L3_1>; 162 146 }; 163 147 };
+8
arch/arm64/boot/dts/qcom/sc7180-lite.dtsi
··· 16 16 &cpu6_opp12 { 17 17 opp-peak-kBps = <8532000 23347200>; 18 18 }; 19 + 20 + &cpu6_opp13 { 21 + opp-peak-kBps = <8532000 23347200>; 22 + }; 23 + 24 + &cpu6_opp14 { 25 + opp-peak-kBps = <8532000 23347200>; 26 + };
+9
arch/arm64/boot/dts/qcom/sc7180.dtsi
··· 92 92 L2_0: l2-cache { 93 93 compatible = "cache"; 94 94 cache-level = <2>; 95 + cache-unified; 95 96 next-level-cache = <&L3_0>; 96 97 L3_0: l3-cache { 97 98 compatible = "cache"; 98 99 cache-level = <3>; 100 + cache-unified; 99 101 }; 100 102 }; 101 103 }; ··· 122 120 L2_100: l2-cache { 123 121 compatible = "cache"; 124 122 cache-level = <2>; 123 + cache-unified; 125 124 next-level-cache = <&L3_0>; 126 125 }; 127 126 }; ··· 147 144 L2_200: l2-cache { 148 145 compatible = "cache"; 149 146 cache-level = <2>; 147 + cache-unified; 150 148 next-level-cache = <&L3_0>; 151 149 }; 152 150 }; ··· 172 168 L2_300: l2-cache { 173 169 compatible = "cache"; 174 170 cache-level = <2>; 171 + cache-unified; 175 172 next-level-cache = <&L3_0>; 176 173 }; 177 174 }; ··· 197 192 L2_400: l2-cache { 198 193 compatible = "cache"; 199 194 cache-level = <2>; 195 + cache-unified; 200 196 next-level-cache = <&L3_0>; 201 197 }; 202 198 }; ··· 222 216 L2_500: l2-cache { 223 217 compatible = "cache"; 224 218 cache-level = <2>; 219 + cache-unified; 225 220 next-level-cache = <&L3_0>; 226 221 }; 227 222 }; ··· 247 240 L2_600: l2-cache { 248 241 compatible = "cache"; 249 242 cache-level = <2>; 243 + cache-unified; 250 244 next-level-cache = <&L3_0>; 251 245 }; 252 246 }; ··· 272 264 L2_700: l2-cache { 273 265 compatible = "cache"; 274 266 cache-level = <2>; 267 + cache-unified; 275 268 next-level-cache = <&L3_0>; 276 269 }; 277 270 };
-2
arch/arm64/boot/dts/qcom/sc7280-idp.dtsi
··· 480 480 wcd_rx: codec@0,4 { 481 481 compatible = "sdw20217010d00"; 482 482 reg = <0 4>; 483 - #sound-dai-cells = <1>; 484 483 qcom,rx-port-mapping = <1 2 3 4 5>; 485 484 }; 486 485 }; ··· 490 491 wcd_tx: codec@0,3 { 491 492 compatible = "sdw20217010d00"; 492 493 reg = <0 3>; 493 - #sound-dai-cells = <1>; 494 494 qcom,tx-port-mapping = <1 2 3 4>; 495 495 }; 496 496 };
-2
arch/arm64/boot/dts/qcom/sc7280-qcard.dtsi
··· 414 414 wcd_rx: codec@0,4 { 415 415 compatible = "sdw20217010d00"; 416 416 reg = <0 4>; 417 - #sound-dai-cells = <1>; 418 417 qcom,rx-port-mapping = <1 2 3 4 5>; 419 418 }; 420 419 }; ··· 422 423 wcd_tx: codec@0,3 { 423 424 compatible = "sdw20217010d00"; 424 425 reg = <0 3>; 425 - #sound-dai-cells = <1>; 426 426 qcom,tx-port-mapping = <1 2 3 4>; 427 427 }; 428 428 };
+9
arch/arm64/boot/dts/qcom/sc7280.dtsi
··· 182 182 L2_0: l2-cache { 183 183 compatible = "cache"; 184 184 cache-level = <2>; 185 + cache-unified; 185 186 next-level-cache = <&L3_0>; 186 187 L3_0: l3-cache { 187 188 compatible = "cache"; 188 189 cache-level = <3>; 190 + cache-unified; 189 191 }; 190 192 }; 191 193 }; ··· 210 208 L2_100: l2-cache { 211 209 compatible = "cache"; 212 210 cache-level = <2>; 211 + cache-unified; 213 212 next-level-cache = <&L3_0>; 214 213 }; 215 214 }; ··· 233 230 L2_200: l2-cache { 234 231 compatible = "cache"; 235 232 cache-level = <2>; 233 + cache-unified; 236 234 next-level-cache = <&L3_0>; 237 235 }; 238 236 }; ··· 256 252 L2_300: l2-cache { 257 253 compatible = "cache"; 258 254 cache-level = <2>; 255 + cache-unified; 259 256 next-level-cache = <&L3_0>; 260 257 }; 261 258 }; ··· 279 274 L2_400: l2-cache { 280 275 compatible = "cache"; 281 276 cache-level = <2>; 277 + cache-unified; 282 278 next-level-cache = <&L3_0>; 283 279 }; 284 280 }; ··· 302 296 L2_500: l2-cache { 303 297 compatible = "cache"; 304 298 cache-level = <2>; 299 + cache-unified; 305 300 next-level-cache = <&L3_0>; 306 301 }; 307 302 }; ··· 325 318 L2_600: l2-cache { 326 319 compatible = "cache"; 327 320 cache-level = <2>; 321 + cache-unified; 328 322 next-level-cache = <&L3_0>; 329 323 }; 330 324 }; ··· 348 340 L2_700: l2-cache { 349 341 compatible = "cache"; 350 342 cache-level = <2>; 343 + cache-unified; 351 344 next-level-cache = <&L3_0>; 352 345 }; 353 346 };
+16 -2
arch/arm64/boot/dts/qcom/sc8280xp.dtsi
··· 58 58 L2_0: l2-cache { 59 59 compatible = "cache"; 60 60 cache-level = <2>; 61 + cache-unified; 61 62 next-level-cache = <&L3_0>; 62 63 L3_0: l3-cache { 63 - compatible = "cache"; 64 - cache-level = <3>; 64 + compatible = "cache"; 65 + cache-level = <3>; 66 + cache-unified; 65 67 }; 66 68 }; 67 69 }; ··· 85 83 L2_100: l2-cache { 86 84 compatible = "cache"; 87 85 cache-level = <2>; 86 + cache-unified; 88 87 next-level-cache = <&L3_0>; 89 88 }; 90 89 }; ··· 107 104 L2_200: l2-cache { 108 105 compatible = "cache"; 109 106 cache-level = <2>; 107 + cache-unified; 110 108 next-level-cache = <&L3_0>; 111 109 }; 112 110 }; ··· 129 125 L2_300: l2-cache { 130 126 compatible = "cache"; 131 127 cache-level = <2>; 128 + cache-unified; 132 129 next-level-cache = <&L3_0>; 133 130 }; 134 131 }; ··· 151 146 L2_400: l2-cache { 152 147 compatible = "cache"; 153 148 cache-level = <2>; 149 + cache-unified; 154 150 next-level-cache = <&L3_0>; 155 151 }; 156 152 }; ··· 173 167 L2_500: l2-cache { 174 168 compatible = "cache"; 175 169 cache-level = <2>; 170 + cache-unified; 176 171 next-level-cache = <&L3_0>; 177 172 }; 178 173 }; ··· 195 188 L2_600: l2-cache { 196 189 compatible = "cache"; 197 190 cache-level = <2>; 191 + cache-unified; 198 192 next-level-cache = <&L3_0>; 199 193 }; 200 194 }; ··· 217 209 L2_700: l2-cache { 218 210 compatible = "cache"; 219 211 cache-level = <2>; 212 + cache-unified; 220 213 next-level-cache = <&L3_0>; 221 214 }; 222 215 }; ··· 2735 2726 pins = "gpio7"; 2736 2727 function = "dmic1_data"; 2737 2728 drive-strength = <8>; 2729 + input-enable; 2738 2730 }; 2739 2731 }; ··· 2753 2743 function = "dmic1_data"; 2754 2744 drive-strength = <2>; 2755 2745 bias-pull-down; 2746 + input-enable; 2756 2747 }; 2757 2748 }; ··· 2769 2758 pins = "gpio9"; 2770 2759 function = "dmic2_data"; 2771 2760 drive-strength = <8>; 2761 + input-enable; 2772 2762 }; 2773 2763 }; ··· 2787 2775 function = "dmic2_data"; 2788 2776 drive-strength = <2>; 2789 2777 bias-pull-down; 2778 + input-enable; 2790 2779 }; 2791 2780 }; ··· 3995 3982 qcom,tcs-config = <ACTIVE_TCS 2>, <SLEEP_TCS 3>, 3996 3983 <WAKE_TCS 3>, <CONTROL_TCS 1>; 3997 3984 label = "apps_rsc"; 3985 + power-domains = <&CLUSTER_PD>; 3998 3986 3999 3987 apps_bcm_voter: bcm-voter { 4000 3988 compatible = "qcom,bcm-voter";
+2
arch/arm64/boot/dts/qcom/sdm630.dtsi
··· 63 63 L2_1: l2-cache { 64 64 compatible = "cache"; 65 65 cache-level = <2>; 66 + cache-unified; 66 67 }; 67 68 }; 68 69 ··· 128 127 L2_0: l2-cache { 129 128 compatible = "cache"; 130 129 cache-level = <2>; 130 + cache-unified; 131 131 }; 132 132 }; 133 133
+19 -1
arch/arm64/boot/dts/qcom/sdm670.dtsi
··· 41 41 L2_0: l2-cache { 42 42 compatible = "cache"; 43 43 next-level-cache = <&L3_0>; 44 + cache-level = <2>; 45 + cache-unified; 44 46 L3_0: l3-cache { 45 - compatible = "cache"; 47 + compatible = "cache"; 48 + cache-level = <3>; 49 + cache-unified; 46 50 }; 47 51 }; 48 52 }; ··· 61 57 next-level-cache = <&L2_100>; 62 58 L2_100: l2-cache { 63 59 compatible = "cache"; 60 + cache-level = <2>; 61 + cache-unified; 64 62 next-level-cache = <&L3_0>; 65 63 }; 66 64 }; ··· 77 71 next-level-cache = <&L2_200>; 78 72 L2_200: l2-cache { 79 73 compatible = "cache"; 74 + cache-level = <2>; 75 + cache-unified; 80 76 next-level-cache = <&L3_0>; 81 77 }; 82 78 }; ··· 93 85 next-level-cache = <&L2_300>; 94 86 L2_300: l2-cache { 95 87 compatible = "cache"; 88 + cache-level = <2>; 89 + cache-unified; 96 90 next-level-cache = <&L3_0>; 97 91 }; 98 92 }; ··· 109 99 next-level-cache = <&L2_400>; 110 100 L2_400: l2-cache { 111 101 compatible = "cache"; 102 + cache-level = <2>; 103 + cache-unified; 112 104 next-level-cache = <&L3_0>; 113 105 }; 114 106 }; ··· 125 113 next-level-cache = <&L2_500>; 126 114 L2_500: l2-cache { 127 115 compatible = "cache"; 116 + cache-level = <2>; 117 + cache-unified; 128 118 next-level-cache = <&L3_0>; 129 119 }; 130 120 }; ··· 141 127 next-level-cache = <&L2_600>; 142 128 L2_600: l2-cache { 143 129 compatible = "cache"; 130 + cache-level = <2>; 131 + cache-unified; 144 132 next-level-cache = <&L3_0>; 145 133 }; 146 134 }; ··· 157 141 next-level-cache = <&L2_700>; 158 142 L2_700: l2-cache { 159 143 compatible = "cache"; 144 + cache-level = <2>; 145 + cache-unified; 160 146 next-level-cache = <&L3_0>; 161 147 }; 162 148 };
+11 -2
arch/arm64/boot/dts/qcom/sdm845.dtsi
··· 108 108 L2_0: l2-cache { 109 109 compatible = "cache"; 110 110 cache-level = <2>; 111 + cache-unified; 111 112 next-level-cache = <&L3_0>; 112 113 L3_0: l3-cache { 113 - compatible = "cache"; 114 - cache-level = <3>; 114 + compatible = "cache"; 115 + cache-level = <3>; 116 + cache-unified; 115 117 }; 116 118 }; 117 119 }; ··· 137 135 L2_100: l2-cache { 138 136 compatible = "cache"; 139 137 cache-level = <2>; 138 + cache-unified; 140 139 next-level-cache = <&L3_0>; 141 140 }; 142 141 }; ··· 161 158 L2_200: l2-cache { 162 159 compatible = "cache"; 163 160 cache-level = <2>; 161 + cache-unified; 164 162 next-level-cache = <&L3_0>; 165 163 }; 166 164 }; ··· 185 181 L2_300: l2-cache { 186 182 compatible = "cache"; 187 183 cache-level = <2>; 184 + cache-unified; 188 185 next-level-cache = <&L3_0>; 189 186 }; 190 187 }; ··· 209 204 L2_400: l2-cache { 210 205 compatible = "cache"; 211 206 cache-level = <2>; 207 + cache-unified; 212 208 next-level-cache = <&L3_0>; 213 209 }; 214 210 }; ··· 233 227 L2_500: l2-cache { 234 228 compatible = "cache"; 235 229 cache-level = <2>; 230 + cache-unified; 236 231 next-level-cache = <&L3_0>; 237 232 }; 238 233 }; ··· 257 250 L2_600: l2-cache { 258 251 compatible = "cache"; 259 252 cache-level = <2>; 253 + cache-unified; 260 254 next-level-cache = <&L3_0>; 261 255 }; 262 256 }; ··· 281 273 L2_700: l2-cache { 282 274 compatible = "cache"; 283 275 cache-level = <2>; 276 + cache-unified; 284 277 next-level-cache = <&L3_0>; 285 278 }; 286 279 };
+2
arch/arm64/boot/dts/qcom/sm6115.dtsi
··· 50 50 L2_0: l2-cache { 51 51 compatible = "cache"; 52 52 cache-level = <2>; 53 + cache-unified; 53 54 }; 54 55 }; 55 56 ··· 103 102 L2_1: l2-cache { 104 103 compatible = "cache"; 105 104 cache-level = <2>; 105 + cache-unified; 106 106 }; 107 107 }; 108 108
+2
arch/arm64/boot/dts/qcom/sm6125.dtsi
··· 47 47 L2_0: l2-cache { 48 48 compatible = "cache"; 49 49 cache-level = <2>; 50 + cache-unified; 50 51 }; 51 52 }; 52 53 ··· 88 87 L2_1: l2-cache { 89 88 compatible = "cache"; 90 89 cache-level = <2>; 90 + cache-unified; 91 91 }; 92 92 }; 93 93
+9
arch/arm64/boot/dts/qcom/sm6350.dtsi
··· 60 60 L2_0: l2-cache { 61 61 compatible = "cache"; 62 62 cache-level = <2>; 63 + cache-unified; 63 64 next-level-cache = <&L3_0>; 64 65 L3_0: l3-cache { 65 66 compatible = "cache"; 66 67 cache-level = <3>; 68 + cache-unified; 67 69 }; 68 70 }; 69 71 }; ··· 88 86 L2_100: l2-cache { 89 87 compatible = "cache"; 90 88 cache-level = <2>; 89 + cache-unified; 91 90 next-level-cache = <&L3_0>; 92 91 }; 93 92 }; ··· 111 108 L2_200: l2-cache { 112 109 compatible = "cache"; 113 110 cache-level = <2>; 111 + cache-unified; 114 112 next-level-cache = <&L3_0>; 115 113 }; 116 114 }; ··· 134 130 L2_300: l2-cache { 135 131 compatible = "cache"; 136 132 cache-level = <2>; 133 + cache-unified; 137 134 next-level-cache = <&L3_0>; 138 135 }; 139 136 }; ··· 157 152 L2_400: l2-cache { 158 153 compatible = "cache"; 159 154 cache-level = <2>; 155 + cache-unified; 160 156 next-level-cache = <&L3_0>; 161 157 }; 162 158 }; ··· 180 174 L2_500: l2-cache { 181 175 compatible = "cache"; 182 176 cache-level = <2>; 177 + cache-unified; 183 178 next-level-cache = <&L3_0>; 184 179 }; 185 180 }; ··· 203 196 L2_600: l2-cache { 204 197 compatible = "cache"; 205 198 cache-level = <2>; 199 + cache-unified; 206 200 next-level-cache = <&L3_0>; 207 201 }; 208 202 }; ··· 226 218 L2_700: l2-cache { 227 219 compatible = "cache"; 228 220 cache-level = <2>; 221 + cache-unified; 229 222 next-level-cache = <&L3_0>; 230 223 }; 231 224 };
+2 -2
arch/arm64/boot/dts/qcom/sm6375-sony-xperia-murray-pdx225.dts
··· 178 178 }; 179 179 180 180 &remoteproc_adsp { 181 - firmware-name = "qcom/Sony/murray/adsp.mbn"; 181 + firmware-name = "qcom/sm6375/Sony/murray/adsp.mbn"; 182 182 status = "okay"; 183 183 }; 184 184 185 185 &remoteproc_cdsp { 186 - firmware-name = "qcom/Sony/murray/cdsp.mbn"; 186 + firmware-name = "qcom/sm6375/Sony/murray/cdsp.mbn"; 187 187 status = "okay"; 188 188 }; 189 189
+35 -17
arch/arm64/boot/dts/qcom/sm6375.dtsi
··· 48 48 power-domain-names = "psci"; 49 49 #cooling-cells = <2>; 50 50 L2_0: l2-cache { 51 - compatible = "cache"; 52 - next-level-cache = <&L3_0>; 51 + compatible = "cache"; 52 + cache-level = <2>; 53 + cache-unified; 54 + next-level-cache = <&L3_0>; 53 55 L3_0: l3-cache { 54 - compatible = "cache"; 56 + compatible = "cache"; 57 + cache-level = <3>; 58 + cache-unified; 55 59 }; 56 60 }; 57 61 }; ··· 72 68 power-domain-names = "psci"; 73 69 #cooling-cells = <2>; 74 70 L2_100: l2-cache { 75 - compatible = "cache"; 76 - next-level-cache = <&L3_0>; 71 + compatible = "cache"; 72 + cache-level = <2>; 73 + cache-unified; 74 + next-level-cache = <&L3_0>; 77 75 }; 78 76 }; ··· 91 85 power-domain-names = "psci"; 92 86 #cooling-cells = <2>; 93 87 L2_200: l2-cache { 94 - compatible = "cache"; 95 - next-level-cache = <&L3_0>; 88 + compatible = "cache"; 89 + cache-level = <2>; 90 + cache-unified; 91 + next-level-cache = <&L3_0>; 96 92 }; 97 93 }; ··· 110 102 power-domain-names = "psci"; 111 103 #cooling-cells = <2>; 112 104 L2_300: l2-cache { 113 - compatible = "cache"; 114 - next-level-cache = <&L3_0>; 105 + compatible = "cache"; 106 + cache-level = <2>; 107 + cache-unified; 108 + next-level-cache = <&L3_0>; 115 109 }; 116 110 }; ··· 129 119 power-domain-names = "psci"; 130 120 #cooling-cells = <2>; 131 121 L2_400: l2-cache { 132 - compatible = "cache"; 133 - next-level-cache = <&L3_0>; 122 + compatible = "cache"; 123 + cache-level = <2>; 124 + cache-unified; 125 + next-level-cache = <&L3_0>; 134 126 }; 135 127 }; ··· 148 136 power-domain-names = "psci"; 149 137 #cooling-cells = <2>; 150 138 L2_500: l2-cache { 151 - compatible = "cache"; 152 - next-level-cache = <&L3_0>; 139 + compatible = "cache"; 140 + cache-level = <2>; 141 + cache-unified; 142 + next-level-cache = <&L3_0>; 153 143 }; 154 144 }; ··· 167 153 power-domain-names = "psci"; 168 154 #cooling-cells = <2>; 169 155 L2_600: l2-cache { 170 - compatible = "cache"; 171 - next-level-cache = <&L3_0>; 156 + compatible = "cache"; 157 + cache-level = <2>; 158 + cache-unified; 159 + next-level-cache = <&L3_0>; 172 160 }; 173 161 }; ··· 186 170 power-domain-names = "psci"; 187 171 #cooling-cells = <2>; 188 172 L2_700: l2-cache { 189 - compatible = "cache"; 190 - next-level-cache = <&L3_0>; 173 + compatible = "cache"; 174 + cache-level = <2>; 175 + cache-unified; 176 + next-level-cache = <&L3_0>; 191 177 }; 192 178 }; 193 179
+11 -2
arch/arm64/boot/dts/qcom/sm8150.dtsi
··· 63 63 L2_0: l2-cache { 64 64 compatible = "cache"; 65 65 cache-level = <2>; 66 + cache-unified; 66 67 next-level-cache = <&L3_0>; 67 68 L3_0: l3-cache { 68 - compatible = "cache"; 69 - cache-level = <3>; 69 + compatible = "cache"; 70 + cache-level = <3>; 71 + cache-unified; 70 72 }; 71 73 }; 72 74 }; ··· 92 90 L2_100: l2-cache { 93 91 compatible = "cache"; 94 92 cache-level = <2>; 93 + cache-unified; 95 94 next-level-cache = <&L3_0>; 96 95 }; 97 96 }; ··· 116 113 L2_200: l2-cache { 117 114 compatible = "cache"; 118 115 cache-level = <2>; 116 + cache-unified; 119 117 next-level-cache = <&L3_0>; 120 118 }; 121 119 }; ··· 140 136 L2_300: l2-cache { 141 137 compatible = "cache"; 142 138 cache-level = <2>; 139 + cache-unified; 143 140 next-level-cache = <&L3_0>; 144 141 }; 145 142 }; ··· 164 159 L2_400: l2-cache { 165 160 compatible = "cache"; 166 161 cache-level = <2>; 162 + cache-unified; 167 163 next-level-cache = <&L3_0>; 168 164 }; 169 165 }; ··· 188 182 L2_500: l2-cache { 189 183 compatible = "cache"; 190 184 cache-level = <2>; 185 + cache-unified; 191 186 next-level-cache = <&L3_0>; 192 187 }; 193 188 }; ··· 212 205 L2_600: l2-cache { 213 206 compatible = "cache"; 214 207 cache-level = <2>; 208 + cache-unified; 215 209 next-level-cache = <&L3_0>; 216 210 }; 217 211 }; ··· 236 228 L2_700: l2-cache { 237 229 compatible = "cache"; 238 230 cache-level = <2>; 231 + cache-unified; 239 232 next-level-cache = <&L3_0>; 240 233 }; 241 234 };
+1 -1
arch/arm64/boot/dts/qcom/sm8250-xiaomi-elish-boe.dts
··· 13 13 }; 14 14 15 15 &display_panel { 16 - compatible = "xiaomi,elish-boe-nt36523"; 16 + compatible = "xiaomi,elish-boe-nt36523", "novatek,nt36523"; 17 17 status = "okay"; 18 18 };
+1 -1
arch/arm64/boot/dts/qcom/sm8250-xiaomi-elish-csot.dts
··· 13 13 }; 14 14 15 15 &display_panel { 16 - compatible = "xiaomi,elish-csot-nt36523"; 16 + compatible = "xiaomi,elish-csot-nt36523", "novatek,nt36523"; 17 17 status = "okay"; 18 18 };
+35 -26
arch/arm64/boot/dts/qcom/sm8350.dtsi
··· 58 58 power-domain-names = "psci"; 59 59 #cooling-cells = <2>; 60 60 L2_0: l2-cache { 61 - compatible = "cache"; 62 - cache-level = <2>; 63 - next-level-cache = <&L3_0>; 61 + compatible = "cache"; 62 + cache-level = <2>; 63 + cache-unified; 64 + next-level-cache = <&L3_0>; 64 65 L3_0: l3-cache { 65 - compatible = "cache"; 66 - cache-level = <3>; 66 + compatible = "cache"; 67 + cache-level = <3>; 68 + cache-unified; 67 69 }; 68 70 }; 69 71 }; ··· 82 80 power-domain-names = "psci"; 83 81 #cooling-cells = <2>; 84 82 L2_100: l2-cache { 85 - compatible = "cache"; 86 - cache-level = <2>; 87 - next-level-cache = <&L3_0>; 83 + compatible = "cache"; 84 + cache-level = <2>; 85 + cache-unified; 86 + next-level-cache = <&L3_0>; 88 87 }; 89 88 }; ··· 101 98 power-domain-names = "psci"; 102 99 #cooling-cells = <2>; 103 100 L2_200: l2-cache { 104 - compatible = "cache"; 105 - cache-level = <2>; 106 - next-level-cache = <&L3_0>; 101 + compatible = "cache"; 102 + cache-level = <2>; 103 + cache-unified; 104 + next-level-cache = <&L3_0>; 107 105 }; 108 106 }; ··· 120 116 power-domain-names = "psci"; 121 117 #cooling-cells = <2>; 122 118 L2_300: l2-cache { 123 - compatible = "cache"; 124 - cache-level = <2>; 125 - next-level-cache = <&L3_0>; 119 + compatible = "cache"; 120 + cache-level = <2>; 121 + cache-unified; 122 + next-level-cache = <&L3_0>; 126 123 }; 127 124 }; ··· 139 134 power-domain-names = "psci"; 140 135 #cooling-cells = <2>; 141 136 L2_400: l2-cache { 142 - compatible = "cache"; 143 - cache-level = <2>; 144 - next-level-cache = <&L3_0>; 137 + compatible = "cache"; 138 + cache-level = <2>; 139 + cache-unified; 140 + next-level-cache = <&L3_0>; 145 141 }; 146 142 }; ··· 158 152 power-domain-names = "psci"; 159 153 #cooling-cells = <2>; 160 154 L2_500: l2-cache { 161 - compatible = "cache"; 162 - cache-level = <2>; 163 - next-level-cache = <&L3_0>; 155 + compatible = "cache"; 156 + cache-level = <2>; 157 + cache-unified; 158 + next-level-cache = <&L3_0>; 164 159 }; 165 160 }; ··· 177 170 power-domain-names = "psci"; 178 171 #cooling-cells = <2>; 179 172 L2_600: l2-cache { 180 - compatible = "cache"; 181 - cache-level = <2>; 182 - next-level-cache = <&L3_0>; 173 + compatible = "cache"; 174 + cache-level = <2>; 175 + cache-unified; 176 + next-level-cache = <&L3_0>; 183 177 }; 184 178 }; ··· 196 188 power-domain-names = "psci"; 197 189 #cooling-cells = <2>; 198 190 L2_700: l2-cache { 199 - compatible = "cache"; 200 - cache-level = <2>; 201 - next-level-cache = <&L3_0>; 191 + compatible = "cache"; 192 + cache-level = <2>; 193 + cache-unified; 194 + next-level-cache = <&L3_0>; 202 195 }; 203 196 }; 204 197
+35 -26
arch/arm64/boot/dts/qcom/sm8450.dtsi
··· 57 57 #cooling-cells = <2>; 58 58 clocks = <&cpufreq_hw 0>; 59 59 L2_0: l2-cache { 60 - compatible = "cache"; 61 - cache-level = <2>; 62 - next-level-cache = <&L3_0>; 60 + compatible = "cache"; 61 + cache-level = <2>; 62 + cache-unified; 63 + next-level-cache = <&L3_0>; 63 64 L3_0: l3-cache { 64 - compatible = "cache"; 65 - cache-level = <3>; 65 + compatible = "cache"; 66 + cache-level = <3>; 67 + cache-unified; 66 68 }; 67 69 }; 68 70 }; ··· 81 79 #cooling-cells = <2>; 82 80 clocks = <&cpufreq_hw 0>; 83 81 L2_100: l2-cache { 84 - compatible = "cache"; 85 - cache-level = <2>; 86 - next-level-cache = <&L3_0>; 82 + compatible = "cache"; 83 + cache-level = <2>; 84 + cache-unified; 85 + next-level-cache = <&L3_0>; 87 86 }; 88 87 }; ··· 100 97 #cooling-cells = <2>; 101 98 clocks = <&cpufreq_hw 0>; 102 99 L2_200: l2-cache { 103 - compatible = "cache"; 104 - cache-level = <2>; 105 - next-level-cache = <&L3_0>; 100 + compatible = "cache"; 101 + cache-level = <2>; 102 + cache-unified; 103 + next-level-cache = <&L3_0>; 106 104 }; 107 105 }; ··· 119 115 #cooling-cells = <2>; 120 116 clocks = <&cpufreq_hw 0>; 121 117 L2_300: l2-cache { 122 - compatible = "cache"; 123 - cache-level = <2>; 124 - next-level-cache = <&L3_0>; 118 + compatible = "cache"; 119 + cache-level = <2>; 120 + cache-unified; 121 + next-level-cache = <&L3_0>; 125 122 }; 126 123 }; ··· 138 133 #cooling-cells = <2>; 139 134 clocks = <&cpufreq_hw 1>; 140 135 L2_400: l2-cache { 141 - compatible = "cache"; 142 - cache-level = <2>; 143 - next-level-cache = <&L3_0>; 136 + compatible = "cache"; 137 + cache-level = <2>; 138 + cache-unified; 139 + next-level-cache = <&L3_0>; 144 140 }; 145 141 }; ··· 157 151 #cooling-cells = <2>; 158 152 clocks = <&cpufreq_hw 1>; 159 153 L2_500: l2-cache { 160 - compatible = "cache"; 161 - cache-level = <2>; 162 - next-level-cache = <&L3_0>; 154 + compatible = "cache"; 155 + cache-level = <2>; 156 + cache-unified; 157 + next-level-cache = <&L3_0>; 163 158 }; 164 159 }; ··· 176 169 #cooling-cells = <2>; 177 170 clocks = <&cpufreq_hw 1>; 178 171 L2_600: l2-cache { 179 - compatible = "cache"; 180 - cache-level = <2>; 181 - next-level-cache = <&L3_0>; 172 + compatible = "cache"; 173 + cache-level = <2>; 174 + cache-unified; 175 + next-level-cache = <&L3_0>; 182 176 }; 183 177 }; ··· 195 187 #cooling-cells = <2>; 196 188 clocks = <&cpufreq_hw 2>; 197 189 L2_700: l2-cache { 198 - compatible = "cache"; 199 - cache-level = <2>; 200 - next-level-cache = <&L3_0>; 190 + compatible = "cache"; 191 + cache-level = <2>; 192 + cache-unified; 193 + next-level-cache = <&L3_0>; 201 194 }; 202 195 }; 203 196
+21 -5
arch/arm64/boot/dts/qcom/sm8550.dtsi
··· 80 80 L2_0: l2-cache { 81 81 compatible = "cache"; 82 82 cache-level = <2>; 83 + cache-unified; 83 84 next-level-cache = <&L3_0>; 84 85 L3_0: l3-cache { 85 86 compatible = "cache"; 86 87 cache-level = <3>; 88 + cache-unified; 87 89 }; 88 90 }; 89 91 }; ··· 106 104 L2_100: l2-cache { 107 105 compatible = "cache"; 108 106 cache-level = <2>; 107 + cache-unified; 109 108 next-level-cache = <&L3_0>; 110 109 }; 111 110 }; ··· 127 124 L2_200: l2-cache { 128 125 compatible = "cache"; 129 126 cache-level = <2>; 127 + cache-unified; 130 128 next-level-cache = <&L3_0>; 131 129 }; 132 130 }; ··· 148 144 L2_300: l2-cache { 149 145 compatible = "cache"; 150 146 cache-level = <2>; 147 + cache-unified; 151 148 next-level-cache = <&L3_0>; 152 149 }; 153 150 }; ··· 169 164 L2_400: l2-cache { 170 165 compatible = "cache"; 171 166 cache-level = <2>; 167 + cache-unified; 172 168 next-level-cache = <&L3_0>; 173 169 }; 174 170 }; ··· 190 184 L2_500: l2-cache { 191 185 compatible = "cache"; 192 186 cache-level = <2>; 187 + cache-unified; 193 188 next-level-cache = <&L3_0>; 194 189 }; 195 190 }; ··· 211 204 L2_600: l2-cache { 212 205 compatible = "cache"; 213 206 cache-level = <2>; 207 + cache-unified; 214 208 next-level-cache = <&L3_0>; 215 209 }; 216 210 }; ··· 232 224 L2_700: l2-cache { 233 225 compatible = "cache"; 234 226 cache-level = <2>; 227 + cache-unified; 235 228 next-level-cache = <&L3_0>; 236 229 }; 237 230 }; ··· 2031 2022 qcom,din-ports = <4>; 2032 2023 qcom,dout-ports = <9>; 2033 2024 2034 - qcom,ports-sinterval = <0x07 0x1f 0x3f 0x07 0x1f 0x3f 0x18f 0xff 0xff 0x0f 0x0f 0xff 0x31f>; 2025 + qcom,ports-sinterval = /bits/ 16 <0x07 0x1f 0x3f 0x07 0x1f 0x3f 0x18f 0xff 0xff 0x0f 0x0f 0xff 0x31f>; 2035 2026 qcom,ports-offset1 = /bits/ 8 <0x01 0x03 0x05 0x02 0x04 0x15 0x00 0xff 0xff 0x06 0x0d 0xff 0x00>; 2036 2027 qcom,ports-offset2 = /bits/ 8 <0xff 0x07 0x1f 0xff 0x07 0x1f 0xff 0xff 0xff 0xff 0xff 0xff 0xff>; 2037 2028 qcom,ports-hstart = /bits/ 8 <0xff 0xff 0xff 0xff 0xff 0xff 0x08 0xff 0xff 0xff 0xff 0xff 0x0f>; ··· 2077 2068 qcom,din-ports = <0>; 2078 2069 qcom,dout-ports = <10>; 2079 2070 2080 - qcom,ports-sinterval = <0x03 0x3f 0x1f 0x07 0x00 0x18f 0xff 0xff 0xff 0xff>; 2071 + qcom,ports-sinterval = /bits/ 16 <0x03 0x3f 0x1f 0x07 0x00 0x18f 0xff 0xff 0xff 0xff>; 2081 2072 qcom,ports-offset1 = /bits/ 8 <0x00 0x00 0x0b 0x01 0x00 0x00 0xff 0xff 0xff 0xff>; 2082 2073 qcom,ports-offset2 = /bits/ 8 <0x00 0x00 0x0b 0x00 0x00 0x00 0xff 0xff 0xff 0xff>; 2083 2074 qcom,ports-hstart = /bits/ 8 <0xff 0x03 0xff 0xff 0xff 0x08 0xff 0xff 0xff 0xff>; ··· 2142 2133 qcom,din-ports = <4>; 2143 2134 qcom,dout-ports = <9>; 2144 2135 2145 - qcom,ports-sinterval = <0x07 0x1f 0x3f 0x07 0x1f 0x3f 0x18f 0xff 0xff 0x0f 0x0f 0xff 0x31f>; 2136 + qcom,ports-sinterval = /bits/ 16 <0x07 0x1f 0x3f 0x07 0x1f 0x3f 0x18f 0xff 0xff 0x0f 0x0f 0xff 0x31f>; 2146 2137 qcom,ports-offset1 = /bits/ 8 <0x01 0x03 0x05 0x02 0x04 0x15 0x00 0xff 0xff 0x06 0x0d 0xff 0x00>; 2147 2138 qcom,ports-offset2 = /bits/ 8 <0xff 0x07 0x1f 0xff 0x07 0x1f 0xff 0xff 0xff 0xff 0xff 0xff 0xff>; 2148 2139 qcom,ports-hstart = /bits/ 8 <0xff 0xff 0xff 0xff 0xff 0xff 0x08 0xff 0xff 0xff 0xff 0xff 0x0f>; ··· 3771 3762 3772 3763 system-cache-controller@25000000 { 3773 3764 compatible = "qcom,sm8550-llcc"; 3774 - reg = <0 0x25000000 0 0x800000>, 3765 + reg = <0 0x25000000 0 0x200000>, 3766 + <0 0x25200000 0 0x200000>, 3767 + <0 0x25400000 0 0x200000>, 3768 + <0 0x25600000 0 0x200000>, 3775 3769 <0 0x25800000 0 0x200000>; 3776 - reg-names = "llcc_base", "llcc_broadcast_base"; 3770 + reg-names = "llcc0_base", 3771 + "llcc1_base", 3772 + "llcc2_base", 3773 + "llcc3_base", 3774 + "llcc_broadcast_base"; 3777 3775 interrupts = <GIC_SPI 266 IRQ_TYPE_LEVEL_HIGH>; 3778 3776 }; 3779 3777
+1 -2
arch/arm64/mm/fault.c
··· 600 600 vma_end_read(vma); 601 601 goto lock_mmap; 602 602 } 603 - fault = handle_mm_fault(vma, addr & PAGE_MASK, 604 - mm_flags | FAULT_FLAG_VMA_LOCK, regs); 603 + fault = handle_mm_fault(vma, addr, mm_flags | FAULT_FLAG_VMA_LOCK, regs); 605 604 vma_end_read(vma); 606 605 607 606 if (!(fault & VM_FAULT_RETRY)) {
+1 -1
arch/loongarch/include/asm/loongarch.h
··· 1496 1496 #define write_fcsr(dest, val) \ 1497 1497 do { \ 1498 1498 __asm__ __volatile__( \ 1499 - " movgr2fcsr %0, "__stringify(dest)" \n" \ 1499 + " movgr2fcsr "__stringify(dest)", %0 \n" \ 1500 1500 : : "r" (val)); \ 1501 1501 } while (0) 1502 1502
+2
arch/loongarch/include/asm/pgtable-bits.h
··· 22 22 #define _PAGE_PFN_SHIFT 12 23 23 #define _PAGE_SWP_EXCLUSIVE_SHIFT 23 24 24 #define _PAGE_PFN_END_SHIFT 48 25 + #define _PAGE_PRESENT_INVALID_SHIFT 60 25 26 #define _PAGE_NO_READ_SHIFT 61 26 27 #define _PAGE_NO_EXEC_SHIFT 62 27 28 #define _PAGE_RPLV_SHIFT 63 28 29 29 30 /* Used by software */ 30 31 #define _PAGE_PRESENT (_ULCAST_(1) << _PAGE_PRESENT_SHIFT) 32 + #define _PAGE_PRESENT_INVALID (_ULCAST_(1) << _PAGE_PRESENT_INVALID_SHIFT) 31 33 #define _PAGE_WRITE (_ULCAST_(1) << _PAGE_WRITE_SHIFT) 32 34 #define _PAGE_ACCESSED (_ULCAST_(1) << _PAGE_ACCESSED_SHIFT) 33 35 #define _PAGE_MODIFIED (_ULCAST_(1) << _PAGE_MODIFIED_SHIFT)
+2 -1
arch/loongarch/include/asm/pgtable.h
··· 213 213 static inline int pmd_present(pmd_t pmd) 214 214 { 215 215 if (unlikely(pmd_val(pmd) & _PAGE_HUGE)) 216 - return !!(pmd_val(pmd) & (_PAGE_PRESENT | _PAGE_PROTNONE)); 216 + return !!(pmd_val(pmd) & (_PAGE_PRESENT | _PAGE_PROTNONE | _PAGE_PRESENT_INVALID)); 217 217 218 218 return pmd_val(pmd) != (unsigned long)invalid_pte_table; 219 219 } ··· 558 558 559 559 static inline pmd_t pmd_mkinvalid(pmd_t pmd) 560 560 { 561 + pmd_val(pmd) |= _PAGE_PRESENT_INVALID; 561 562 pmd_val(pmd) &= ~(_PAGE_PRESENT | _PAGE_VALID | _PAGE_DIRTY | _PAGE_PROTNONE); 562 563 563 564 return pmd;
+2
arch/loongarch/kernel/hw_breakpoint.c
··· 396 396 397 397 if (hw->ctrl.type != LOONGARCH_BREAKPOINT_EXECUTE) 398 398 alignment_mask = 0x7; 399 + else 400 + alignment_mask = 0x3; 399 401 offset = hw->address & alignment_mask; 400 402 401 403 hw->address &= ~alignment_mask;
+3 -3
arch/loongarch/kernel/perf_event.c
··· 271 271 WARN_ON(idx < 0 || idx >= loongarch_pmu.num_counters); 272 272 273 273 /* Make sure interrupt enabled. */ 274 - cpuc->saved_ctrl[idx] = M_PERFCTL_EVENT(evt->event_base & 0xff) | 274 + cpuc->saved_ctrl[idx] = M_PERFCTL_EVENT(evt->event_base) | 275 275 (evt->config_base & M_PERFCTL_CONFIG_MASK) | CSR_PERFCTRL_IE; 276 276 277 277 cpu = (event->cpu >= 0) ? event->cpu : smp_processor_id(); ··· 594 594 595 595 static unsigned int loongarch_pmu_perf_event_encode(const struct loongarch_perf_event *pev) 596 596 { 597 - return (pev->event_id & 0xff); 597 + return M_PERFCTL_EVENT(pev->event_id); 598 598 } 599 599 600 600 static const struct loongarch_perf_event *loongarch_pmu_map_general_event(int idx) ··· 849 849 850 850 static const struct loongarch_perf_event *loongarch_pmu_map_raw_event(u64 config) 851 851 { 852 - raw_event.event_id = config & 0xff; 852 + raw_event.event_id = M_PERFCTL_EVENT(config); 853 853 854 854 return &raw_event; 855 855 }
+1 -1
arch/loongarch/kernel/unaligned.c
··· 485 485 struct dentry *d; 486 486 487 487 d = debugfs_create_dir("loongarch", NULL); 488 - if (!d) 488 + if (IS_ERR_OR_NULL(d)) 489 489 return -ENOMEM; 490 490 491 491 debugfs_create_u32("unaligned_instructions_user",
+1 -1
arch/nios2/boot/dts/10m50_devboard.dts
··· 97 97 rx-fifo-depth = <8192>; 98 98 tx-fifo-depth = <8192>; 99 99 address-bits = <48>; 100 - max-frame-size = <1518>; 100 + max-frame-size = <1500>; 101 101 local-mac-address = [00 00 00 00 00 00]; 102 102 altr,has-supplementary-unicast; 103 103 altr,enable-sup-addr = <1>;
+1 -1
arch/nios2/boot/dts/3c120_devboard.dts
··· 106 106 interrupt-names = "rx_irq", "tx_irq"; 107 107 rx-fifo-depth = <8192>; 108 108 tx-fifo-depth = <8192>; 109 - max-frame-size = <1518>; 109 + max-frame-size = <1500>; 110 110 local-mac-address = [ 00 00 00 00 00 00 ]; 111 111 phy-mode = "rgmii-id"; 112 112 phy-handle = <&phy0>;
-4
arch/parisc/include/asm/assembly.h
··· 90 90 #include <asm/asmregs.h> 91 91 #include <asm/psw.h> 92 92 93 - sp = 30 94 - gp = 27 95 - ipsw = 22 96 - 97 93 /* 98 94 * We provide two versions of each macro to convert from physical 99 95 * to virtual and vice versa. The "_r1" versions take one argument
+5
arch/powerpc/purgatory/Makefile
··· 5 5 6 6 targets += trampoline_$(BITS).o purgatory.ro 7 7 8 + # When profile-guided optimization is enabled, llvm emits two different 9 + # overlapping text sections, which is not supported by kexec. Remove profile 10 + # optimization flags. 11 + KBUILD_CFLAGS := $(filter-out -fprofile-sample-use=% -fprofile-use=%,$(KBUILD_CFLAGS)) 12 + 8 13 LDFLAGS_purgatory.ro := -e purgatory_start -r --no-undefined 9 14 10 15 $(obj)/purgatory.ro: $(obj)/trampoline_$(BITS).o FORCE
+1
arch/riscv/Kconfig
··· 26 26 select ARCH_HAS_GIGANTIC_PAGE 27 27 select ARCH_HAS_KCOV 28 28 select ARCH_HAS_MMIOWB 29 + select ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE 29 30 select ARCH_HAS_PMEM_API 30 31 select ARCH_HAS_PTE_SPECIAL 31 32 select ARCH_HAS_SET_DIRECT_MAP if MMU
-33
arch/riscv/include/asm/kfence.h
··· 8 8 #include <asm-generic/pgalloc.h> 9 9 #include <asm/pgtable.h> 10 10 11 - static inline int split_pmd_page(unsigned long addr) 12 - { 13 - int i; 14 - unsigned long pfn = PFN_DOWN(__pa((addr & PMD_MASK))); 15 - pmd_t *pmd = pmd_off_k(addr); 16 - pte_t *pte = pte_alloc_one_kernel(&init_mm); 17 - 18 - if (!pte) 19 - return -ENOMEM; 20 - 21 - for (i = 0; i < PTRS_PER_PTE; i++) 22 - set_pte(pte + i, pfn_pte(pfn + i, PAGE_KERNEL)); 23 - set_pmd(pmd, pfn_pmd(PFN_DOWN(__pa(pte)), PAGE_TABLE)); 24 - 25 - flush_tlb_kernel_range(addr, addr + PMD_SIZE); 26 - return 0; 27 - } 28 - 29 11 static inline bool arch_kfence_init_pool(void) 30 12 { 31 - int ret; 32 - unsigned long addr; 33 - pmd_t *pmd; 34 - 35 - for (addr = (unsigned long)__kfence_pool; is_kfence_address((void *)addr); 36 - addr += PAGE_SIZE) { 37 - pmd = pmd_off_k(addr); 38 - 39 - if (pmd_leaf(*pmd)) { 40 - ret = split_pmd_page(addr); 41 - if (ret) 42 - return false; 43 - } 44 - } 45 - 46 13 return true; 47 14 } 48 15
+1 -2
arch/riscv/include/asm/pgtable.h
··· 165 165 _PAGE_EXEC | _PAGE_WRITE) 166 166 167 167 #define PAGE_COPY PAGE_READ 168 - #define PAGE_COPY_EXEC PAGE_EXEC 169 - #define PAGE_COPY_READ_EXEC PAGE_READ_EXEC 168 + #define PAGE_COPY_EXEC PAGE_READ_EXEC 170 169 #define PAGE_SHARED PAGE_WRITE 171 170 #define PAGE_SHARED_EXEC PAGE_WRITE_EXEC 172 171
+37 -11
arch/riscv/mm/init.c
··· 23 23 #ifdef CONFIG_RELOCATABLE 24 24 #include <linux/elf.h> 25 25 #endif 26 + #include <linux/kfence.h> 26 27 27 28 #include <asm/fixmap.h> 28 29 #include <asm/tlbflush.h> ··· 294 293 [VM_EXEC] = PAGE_EXEC, 295 294 [VM_EXEC | VM_READ] = PAGE_READ_EXEC, 296 295 [VM_EXEC | VM_WRITE] = PAGE_COPY_EXEC, 297 - [VM_EXEC | VM_WRITE | VM_READ] = PAGE_COPY_READ_EXEC, 296 + [VM_EXEC | VM_WRITE | VM_READ] = PAGE_COPY_EXEC, 298 297 [VM_SHARED] = PAGE_NONE, 299 298 [VM_SHARED | VM_READ] = PAGE_READ, 300 299 [VM_SHARED | VM_WRITE] = PAGE_SHARED, ··· 660 659 create_pgd_next_mapping(nextp, va, pa, sz, prot); 661 660 } 662 661 663 - static uintptr_t __init best_map_size(phys_addr_t base, phys_addr_t size) 662 + static uintptr_t __init best_map_size(phys_addr_t pa, uintptr_t va, 663 + phys_addr_t size) 664 664 { 665 - if (!(base & (PGDIR_SIZE - 1)) && size >= PGDIR_SIZE) 665 + if (!(pa & (PGDIR_SIZE - 1)) && !(va & (PGDIR_SIZE - 1)) && size >= PGDIR_SIZE) 666 666 return PGDIR_SIZE; 667 667 668 - if (!(base & (P4D_SIZE - 1)) && size >= P4D_SIZE) 668 + if (!(pa & (P4D_SIZE - 1)) && !(va & (P4D_SIZE - 1)) && size >= P4D_SIZE) 669 669 return P4D_SIZE; 670 670 671 - if (!(base & (PUD_SIZE - 1)) && size >= PUD_SIZE) 671 + if (!(pa & (PUD_SIZE - 1)) && !(va & (PUD_SIZE - 1)) && size >= PUD_SIZE) 672 672 return PUD_SIZE; 673 673 674 - if (!(base & (PMD_SIZE - 1)) && size >= PMD_SIZE) 674 + if (!(pa & (PMD_SIZE - 1)) && !(va & (PMD_SIZE - 1)) && size >= PMD_SIZE) 675 675 return PMD_SIZE; 676 676 677 677 return PAGE_SIZE; ··· 1169 1167 } 1170 1168 1171 1169 static void __init create_linear_mapping_range(phys_addr_t start, 1172 - phys_addr_t end) 1170 + phys_addr_t end, 1171 + uintptr_t fixed_map_size) 1173 1172 { 1174 1173 phys_addr_t pa; 1175 1174 uintptr_t va, map_size; 1176 1175 1177 1176 for (pa = start; pa < end; pa += map_size) { 1178 1177 va = (uintptr_t)__va(pa); 1179 - map_size = best_map_size(pa, end - pa); 1178 + map_size = fixed_map_size ? 
fixed_map_size : 1179 + best_map_size(pa, va, end - pa); 1180 1180 1181 1181 create_pgd_mapping(swapper_pg_dir, va, pa, map_size, 1182 1182 pgprot_from_va(va)); ··· 1188 1184 static void __init create_linear_mapping_page_table(void) 1189 1185 { 1190 1186 phys_addr_t start, end; 1187 + phys_addr_t kfence_pool __maybe_unused; 1191 1188 u64 i; 1192 1189 1193 1190 #ifdef CONFIG_STRICT_KERNEL_RWX ··· 1202 1197 memblock_mark_nomap(krodata_start, krodata_size); 1203 1198 #endif 1204 1199 1200 + #ifdef CONFIG_KFENCE 1201 + /* 1202 + * kfence pool must be backed by PAGE_SIZE mappings, so allocate it 1203 + * before we setup the linear mapping so that we avoid using hugepages 1204 + * for this region. 1205 + */ 1206 + kfence_pool = memblock_phys_alloc(KFENCE_POOL_SIZE, PAGE_SIZE); 1207 + BUG_ON(!kfence_pool); 1208 + 1209 + memblock_mark_nomap(kfence_pool, KFENCE_POOL_SIZE); 1210 + __kfence_pool = __va(kfence_pool); 1211 + #endif 1212 + 1205 1213 /* Map all memory banks in the linear mapping */ 1206 1214 for_each_mem_range(i, &start, &end) { 1207 1215 if (start >= end) ··· 1225 1207 if (end >= __pa(PAGE_OFFSET) + memory_limit) 1226 1208 end = __pa(PAGE_OFFSET) + memory_limit; 1227 1209 1228 - create_linear_mapping_range(start, end); 1210 + create_linear_mapping_range(start, end, 0); 1229 1211 } 1230 1212 1231 1213 #ifdef CONFIG_STRICT_KERNEL_RWX 1232 - create_linear_mapping_range(ktext_start, ktext_start + ktext_size); 1214 + create_linear_mapping_range(ktext_start, ktext_start + ktext_size, 0); 1233 1215 create_linear_mapping_range(krodata_start, 1234 - krodata_start + krodata_size); 1216 + krodata_start + krodata_size, 0); 1235 1217 1236 1218 memblock_clear_nomap(ktext_start, ktext_size); 1237 1219 memblock_clear_nomap(krodata_start, krodata_size); 1220 + #endif 1221 + 1222 + #ifdef CONFIG_KFENCE 1223 + create_linear_mapping_range(kfence_pool, 1224 + kfence_pool + KFENCE_POOL_SIZE, 1225 + PAGE_SIZE); 1226 + 1227 + memblock_clear_nomap(kfence_pool, KFENCE_POOL_SIZE); 1238 
1228 #endif 1239 1229 } 1240 1230
+5
arch/riscv/purgatory/Makefile
··· 35 35 CFLAGS_string.o := -D__DISABLE_EXPORTS 36 36 CFLAGS_ctype.o := -D__DISABLE_EXPORTS 37 37 38 + # When profile-guided optimization is enabled, llvm emits two different 39 + # overlapping text sections, which is not supported by kexec. Remove profile 40 + # optimization flags. 41 + KBUILD_CFLAGS := $(filter-out -fprofile-sample-use=% -fprofile-use=%,$(KBUILD_CFLAGS)) 42 + 38 43 # When linking purgatory.ro with -r unresolved symbols are not checked, 39 44 # also link a purgatory.chk binary without -r to check for unresolved symbols. 40 45 PURGATORY_LDFLAGS := -e purgatory_start -z nodefaultlib
+1
arch/s390/purgatory/Makefile
··· 26 26 KBUILD_CFLAGS += -fno-zero-initialized-in-bss -fno-builtin -ffreestanding 27 27 KBUILD_CFLAGS += -Os -m64 -msoft-float -fno-common 28 28 KBUILD_CFLAGS += -fno-stack-protector 29 + KBUILD_CFLAGS += -DDISABLE_BRANCH_PROFILING 29 30 KBUILD_CFLAGS += $(CLANG_FLAGS) 30 31 KBUILD_CFLAGS += $(call cc-option,-fno-PIE) 31 32 KBUILD_AFLAGS := $(filter-out -DCC_USING_EXPOLINE,$(KBUILD_AFLAGS))
+9 -9
arch/x86/kernel/head_64.S
··· 77 77 call startup_64_setup_env 78 78 popq %rsi 79 79 80 + /* Now switch to __KERNEL_CS so IRET works reliably */ 81 + pushq $__KERNEL_CS 82 + leaq .Lon_kernel_cs(%rip), %rax 83 + pushq %rax 84 + lretq 85 + 86 + .Lon_kernel_cs: 87 + UNWIND_HINT_END_OF_STACK 88 + 80 89 #ifdef CONFIG_AMD_MEM_ENCRYPT 81 90 /* 82 91 * Activate SEV/SME memory encryption if supported/enabled. This needs to ··· 98 89 call sme_enable 99 90 popq %rsi 100 91 #endif 101 - 102 - /* Now switch to __KERNEL_CS so IRET works reliably */ 103 - pushq $__KERNEL_CS 104 - leaq .Lon_kernel_cs(%rip), %rax 105 - pushq %rax 106 - lretq 107 - 108 - .Lon_kernel_cs: 109 - UNWIND_HINT_END_OF_STACK 110 92 111 93 /* Sanitize CPU configuration */ 112 94 call verify_cpu
+5
arch/x86/purgatory/Makefile
··· 14 14 15 15 CFLAGS_sha256.o := -D__DISABLE_EXPORTS 16 16 17 + # When profile-guided optimization is enabled, llvm emits two different 18 + # overlapping text sections, which is not supported by kexec. Remove profile 19 + # optimization flags. 20 + KBUILD_CFLAGS := $(filter-out -fprofile-sample-use=% -fprofile-use=%,$(KBUILD_CFLAGS)) 21 + 17 22 # When linking purgatory.ro with -r unresolved symbols are not checked, 18 23 # also link a purgatory.chk binary without -r to check for unresolved symbols. 19 24 PURGATORY_LDFLAGS := -e purgatory_start -z nodefaultlib
+31 -9
block/blk-cgroup.c
··· 34 34 #include "blk-ioprio.h" 35 35 #include "blk-throttle.h" 36 36 37 + static void __blkcg_rstat_flush(struct blkcg *blkcg, int cpu); 38 + 37 39 /* 38 40 * blkcg_pol_mutex protects blkcg_policy[] and policy [de]activation. 39 41 * blkcg_pol_register_mutex nests outside of it and synchronizes entire ··· 57 55 static LIST_HEAD(all_blkcgs); /* protected by blkcg_pol_mutex */ 58 56 59 57 bool blkcg_debug_stats = false; 58 + 59 + static DEFINE_RAW_SPINLOCK(blkg_stat_lock); 60 60 61 61 #define BLKG_DESTROY_BATCH_SIZE 64 62 62 ··· 167 163 static void __blkg_release(struct rcu_head *rcu) 168 164 { 169 165 struct blkcg_gq *blkg = container_of(rcu, struct blkcg_gq, rcu_head); 166 + struct blkcg *blkcg = blkg->blkcg; 167 + int cpu; 170 168 171 169 #ifdef CONFIG_BLK_CGROUP_PUNT_BIO 172 170 WARN_ON(!bio_list_empty(&blkg->async_bios)); 173 171 #endif 172 + /* 173 + * Flush all the non-empty percpu lockless lists before releasing 174 + * us, given these stat belongs to us. 175 + * 176 + * blkg_stat_lock is for serializing blkg stat update 177 + */ 178 + for_each_possible_cpu(cpu) 179 + __blkcg_rstat_flush(blkcg, cpu); 174 180 175 181 /* release the blkcg and parent blkg refs this blkg has been holding */ 176 182 css_put(&blkg->blkcg->css); ··· 965 951 u64_stats_update_end_irqrestore(&blkg->iostat.sync, flags); 966 952 } 967 953 968 - static void blkcg_rstat_flush(struct cgroup_subsys_state *css, int cpu) 954 + static void __blkcg_rstat_flush(struct blkcg *blkcg, int cpu) 969 955 { 970 - struct blkcg *blkcg = css_to_blkcg(css); 971 956 struct llist_head *lhead = per_cpu_ptr(blkcg->lhead, cpu); 972 957 struct llist_node *lnode; 973 958 struct blkg_iostat_set *bisc, *next_bisc; 974 - 975 - /* Root-level stats are sourced from system-wide IO stats */ 976 - if (!cgroup_parent(css->cgroup)) 977 - return; 978 959 979 960 rcu_read_lock(); 980 961 981 962 lnode = llist_del_all(lhead); 982 963 if (!lnode) 983 964 goto out; 965 + 966 + /* 967 + * For covering concurrent parent blkg 
update from blkg_release(). 968 + * 969 + * When flushing from cgroup, cgroup_rstat_lock is always held, so 970 + * this lock won't cause contention most of time. 971 + */ 972 + raw_spin_lock(&blkg_stat_lock); 984 973 985 974 /* 986 975 * Iterate only the iostat_cpu's queued in the lockless list. ··· 1008 991 if (parent && parent->parent) 1009 992 blkcg_iostat_update(parent, &blkg->iostat.cur, 1010 993 &blkg->iostat.last); 1011 - percpu_ref_put(&blkg->refcnt); 1012 994 } 1013 - 995 + raw_spin_unlock(&blkg_stat_lock); 1014 996 out: 1015 997 rcu_read_unlock(); 998 + } 999 + 1000 + static void blkcg_rstat_flush(struct cgroup_subsys_state *css, int cpu) 1001 + { 1002 + /* Root-level stats are sourced from system-wide IO stats */ 1003 + if (cgroup_parent(css->cgroup)) 1004 + __blkcg_rstat_flush(css_to_blkcg(css), cpu); 1016 1005 } 1017 1006 1018 1007 /* ··· 2098 2075 2099 2076 llist_add(&bis->lnode, lhead); 2100 2077 WRITE_ONCE(bis->lqueued, true); 2101 - percpu_ref_get(&bis->blkg->refcnt); 2102 2078 } 2103 2079 2104 2080 u64_stats_update_end_irqrestore(&bis->sync, flags);
+4 -4
block/blk-mq.c
··· 683 683 blk_crypto_free_request(rq); 684 684 blk_pm_mark_last_busy(rq); 685 685 rq->mq_hctx = NULL; 686 + 687 + if (rq->rq_flags & RQF_MQ_INFLIGHT) 688 + __blk_mq_dec_active_requests(hctx); 689 + 686 690 if (rq->tag != BLK_MQ_NO_TAG) 687 691 blk_mq_put_tag(hctx->tags, ctx, rq->tag); 688 692 if (sched_tag != BLK_MQ_NO_TAG) ··· 698 694 void blk_mq_free_request(struct request *rq) 699 695 { 700 696 struct request_queue *q = rq->q; 701 - struct blk_mq_hw_ctx *hctx = rq->mq_hctx; 702 697 703 698 if ((rq->rq_flags & RQF_ELVPRIV) && 704 699 q->elevator->type->ops.finish_request) 705 700 q->elevator->type->ops.finish_request(rq); 706 - 707 - if (rq->rq_flags & RQF_MQ_INFLIGHT) 708 - __blk_mq_dec_active_requests(hctx); 709 701 710 702 if (unlikely(laptop_mode && !blk_rq_is_passthrough(rq))) 711 703 laptop_io_completion(q->disk->bdi);
+21 -17
crypto/asymmetric_keys/public_key.c
··· 380 380 struct crypto_wait cwait; 381 381 struct crypto_akcipher *tfm; 382 382 struct akcipher_request *req; 383 - struct scatterlist src_sg[2]; 383 + struct scatterlist src_sg; 384 384 char alg_name[CRYPTO_MAX_ALG_NAME]; 385 - char *key, *ptr; 385 + char *buf, *ptr; 386 + size_t buf_len; 386 387 int ret; 387 388 388 389 pr_devel("==>%s()\n", __func__); ··· 421 420 if (!req) 422 421 goto error_free_tfm; 423 422 424 - key = kmalloc(pkey->keylen + sizeof(u32) * 2 + pkey->paramlen, 425 - GFP_KERNEL); 426 - if (!key) 423 + buf_len = max_t(size_t, pkey->keylen + sizeof(u32) * 2 + pkey->paramlen, 424 + sig->s_size + sig->digest_size); 425 + 426 + buf = kmalloc(buf_len, GFP_KERNEL); 427 + if (!buf) 427 428 goto error_free_req; 428 429 429 - memcpy(key, pkey->key, pkey->keylen); 430 - ptr = key + pkey->keylen; 430 + memcpy(buf, pkey->key, pkey->keylen); 431 + ptr = buf + pkey->keylen; 431 432 ptr = pkey_pack_u32(ptr, pkey->algo); 432 433 ptr = pkey_pack_u32(ptr, pkey->paramlen); 433 434 memcpy(ptr, pkey->params, pkey->paramlen); 434 435 435 436 if (pkey->key_is_private) 436 - ret = crypto_akcipher_set_priv_key(tfm, key, pkey->keylen); 437 + ret = crypto_akcipher_set_priv_key(tfm, buf, pkey->keylen); 437 438 else 438 - ret = crypto_akcipher_set_pub_key(tfm, key, pkey->keylen); 439 + ret = crypto_akcipher_set_pub_key(tfm, buf, pkey->keylen); 439 440 if (ret) 440 - goto error_free_key; 441 + goto error_free_buf; 441 442 442 443 if (strcmp(pkey->pkey_algo, "sm2") == 0 && sig->data_size) { 443 444 ret = cert_sig_digest_update(sig, tfm); 444 445 if (ret) 445 - goto error_free_key; 446 + goto error_free_buf; 446 447 } 447 448 448 - sg_init_table(src_sg, 2); 449 - sg_set_buf(&src_sg[0], sig->s, sig->s_size); 450 - sg_set_buf(&src_sg[1], sig->digest, sig->digest_size); 451 - akcipher_request_set_crypt(req, src_sg, NULL, sig->s_size, 449 + memcpy(buf, sig->s, sig->s_size); 450 + memcpy(buf + sig->s_size, sig->digest, sig->digest_size); 451 + 452 + sg_init_one(&src_sg, buf, 
sig->s_size + sig->digest_size); 453 + akcipher_request_set_crypt(req, &src_sg, NULL, sig->s_size, 452 454 sig->digest_size); 453 455 crypto_init_wait(&cwait); 454 456 akcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG | ··· 459 455 crypto_req_done, &cwait); 460 456 ret = crypto_wait_req(crypto_akcipher_verify(req), &cwait); 461 457 462 - error_free_key: 463 - kfree(key); 458 + error_free_buf: 459 + kfree(buf); 464 460 error_free_req: 465 461 akcipher_request_free(req); 466 462 error_free_tfm:
+1
drivers/accel/ivpu/Kconfig
··· 7 7 depends on PCI && PCI_MSI 8 8 select FW_LOADER 9 9 select SHMEM 10 + select GENERIC_ALLOCATOR 10 11 help 11 12 Choose this option if you have a system that has an 14th generation Intel CPU 12 13 or newer. VPU stands for Versatile Processing Unit and it's a CPU-integrated
+17 -5
drivers/accel/ivpu/ivpu_hw_mtl.c
··· 197 197 hw->pll.pn_ratio = clamp_t(u8, fuse_pn_ratio, hw->pll.min_ratio, hw->pll.max_ratio); 198 198 } 199 199 200 + static int ivpu_hw_mtl_wait_for_vpuip_bar(struct ivpu_device *vdev) 201 + { 202 + return REGV_POLL_FLD(MTL_VPU_HOST_SS_CPR_RST_CLR, AON, 0, 100); 203 + } 204 + 200 205 static int ivpu_pll_drive(struct ivpu_device *vdev, bool enable) 201 206 { 202 207 struct ivpu_hw_info *hw = vdev->hw; ··· 244 239 ivpu_err(vdev, "Timed out waiting for PLL ready status\n"); 245 240 return ret; 246 241 } 242 + 243 + ret = ivpu_hw_mtl_wait_for_vpuip_bar(vdev); 244 + if (ret) { 245 + ivpu_err(vdev, "Timed out waiting for VPUIP bar\n"); 246 + return ret; 247 + } 247 248 } 248 249 249 250 return 0; ··· 267 256 268 257 static void ivpu_boot_host_ss_rst_clr_assert(struct ivpu_device *vdev) 269 258 { 270 - u32 val = REGV_RD32(MTL_VPU_HOST_SS_CPR_RST_CLR); 259 + u32 val = 0; 271 260 272 261 val = REG_SET_FLD(MTL_VPU_HOST_SS_CPR_RST_CLR, TOP_NOC, val); 273 262 val = REG_SET_FLD(MTL_VPU_HOST_SS_CPR_RST_CLR, DSS_MAS, val); ··· 765 754 { 766 755 int ret = 0; 767 756 768 - if (ivpu_hw_mtl_reset(vdev)) { 757 + if (!ivpu_hw_mtl_is_idle(vdev) && ivpu_hw_mtl_reset(vdev)) { 769 758 ivpu_err(vdev, "Failed to reset the VPU\n"); 770 - ret = -EIO; 771 759 } 772 760 773 761 if (ivpu_pll_disable(vdev)) { ··· 774 764 ret = -EIO; 775 765 } 776 766 777 - if (ivpu_hw_mtl_d0i3_enable(vdev)) 778 - ivpu_warn(vdev, "Failed to enable D0I3\n"); 767 + if (ivpu_hw_mtl_d0i3_enable(vdev)) { 768 + ivpu_err(vdev, "Failed to enter D0I3\n"); 769 + ret = -EIO; 770 + } 779 771 780 772 return ret; 781 773 }
+1
drivers/accel/ivpu/ivpu_hw_mtl_reg.h
··· 91 91 #define MTL_VPU_HOST_SS_CPR_RST_SET_MSS_MAS_MASK BIT_MASK(11) 92 92 93 93 #define MTL_VPU_HOST_SS_CPR_RST_CLR 0x00000098u 94 + #define MTL_VPU_HOST_SS_CPR_RST_CLR_AON_MASK BIT_MASK(0) 94 95 #define MTL_VPU_HOST_SS_CPR_RST_CLR_TOP_NOC_MASK BIT_MASK(1) 95 96 #define MTL_VPU_HOST_SS_CPR_RST_CLR_DSS_MAS_MASK BIT_MASK(10) 96 97 #define MTL_VPU_HOST_SS_CPR_RST_CLR_MSS_MAS_MASK BIT_MASK(11)
+1 -3
drivers/accel/ivpu/ivpu_ipc.c
··· 183 183 struct ivpu_ipc_info *ipc = vdev->ipc; 184 184 int ret; 185 185 186 - ret = mutex_lock_interruptible(&ipc->lock); 187 - if (ret) 188 - return ret; 186 + mutex_lock(&ipc->lock); 189 187 190 188 if (!ipc->on) { 191 189 ret = -EAGAIN;
+14 -7
drivers/accel/ivpu/ivpu_job.c
··· 431 431 struct ivpu_file_priv *file_priv = file->driver_priv; 432 432 struct ivpu_device *vdev = file_priv->vdev; 433 433 struct ww_acquire_ctx acquire_ctx; 434 + enum dma_resv_usage usage; 434 435 struct ivpu_bo *bo; 435 436 int ret; 436 437 u32 i; ··· 462 461 463 462 job->cmd_buf_vpu_addr = bo->vpu_addr + commands_offset; 464 463 465 - ret = drm_gem_lock_reservations((struct drm_gem_object **)job->bos, 1, &acquire_ctx); 464 + ret = drm_gem_lock_reservations((struct drm_gem_object **)job->bos, buf_count, 465 + &acquire_ctx); 466 466 if (ret) { 467 467 ivpu_warn(vdev, "Failed to lock reservations: %d\n", ret); 468 468 return ret; 469 469 } 470 470 471 - ret = dma_resv_reserve_fences(bo->base.resv, 1); 472 - if (ret) { 473 - ivpu_warn(vdev, "Failed to reserve fences: %d\n", ret); 474 - goto unlock_reservations; 471 + for (i = 0; i < buf_count; i++) { 472 + ret = dma_resv_reserve_fences(job->bos[i]->base.resv, 1); 473 + if (ret) { 474 + ivpu_warn(vdev, "Failed to reserve fences: %d\n", ret); 475 + goto unlock_reservations; 476 + } 475 477 } 476 478 477 - dma_resv_add_fence(bo->base.resv, job->done_fence, DMA_RESV_USAGE_WRITE); 479 + for (i = 0; i < buf_count; i++) { 480 + usage = (i == CMD_BUF_IDX) ? DMA_RESV_USAGE_WRITE : DMA_RESV_USAGE_BOOKKEEP; 481 + dma_resv_add_fence(job->bos[i]->base.resv, job->done_fence, usage); 482 + } 478 483 479 484 unlock_reservations: 480 - drm_gem_unlock_reservations((struct drm_gem_object **)job->bos, 1, &acquire_ctx); 485 + drm_gem_unlock_reservations((struct drm_gem_object **)job->bos, buf_count, &acquire_ctx); 481 486 482 487 wmb(); /* Flush write combining buffers */ 483 488
+6 -16
drivers/accel/ivpu/ivpu_mmu.c
··· 587 587 int ivpu_mmu_invalidate_tlb(struct ivpu_device *vdev, u16 ssid) 588 588 { 589 589 struct ivpu_mmu_info *mmu = vdev->mmu; 590 - int ret; 590 + int ret = 0; 591 591 592 - ret = mutex_lock_interruptible(&mmu->lock); 593 - if (ret) 594 - return ret; 595 - 596 - if (!mmu->on) { 597 - ret = 0; 592 + mutex_lock(&mmu->lock); 593 + if (!mmu->on) 598 594 goto unlock; 599 - } 600 595 601 596 ret = ivpu_mmu_cmdq_write_tlbi_nh_asid(vdev, ssid); 602 597 if (ret) ··· 609 614 struct ivpu_mmu_cdtab *cdtab = &mmu->cdtab; 610 615 u64 *entry; 611 616 u64 cd[4]; 612 - int ret; 617 + int ret = 0; 613 618 614 619 if (ssid > IVPU_MMU_CDTAB_ENT_COUNT) 615 620 return -EINVAL; ··· 650 655 ivpu_dbg(vdev, MMU, "CDTAB %s entry (SSID=%u, dma=%pad): 0x%llx, 0x%llx, 0x%llx, 0x%llx\n", 651 656 cd_dma ? "write" : "clear", ssid, &cd_dma, cd[0], cd[1], cd[2], cd[3]); 652 657 653 - ret = mutex_lock_interruptible(&mmu->lock); 654 - if (ret) 655 - return ret; 656 - 657 - if (!mmu->on) { 658 - ret = 0; 658 + mutex_lock(&mmu->lock); 659 + if (!mmu->on) 659 660 goto unlock; 660 - } 661 661 662 662 ret = ivpu_mmu_cmdq_write_cfgi_all(vdev); 663 663 if (ret)
+4
drivers/accel/qaic/qaic_drv.c
··· 97 97 98 98 cleanup_usr: 99 99 cleanup_srcu_struct(&usr->qddev_lock); 100 + ida_free(&qaic_usrs, usr->handle); 100 101 free_usr: 101 102 kfree(usr); 102 103 dev_unlock: ··· 225 224 struct qaic_user *usr; 226 225 227 226 qddev = qdev->qddev; 227 + qdev->qddev = NULL; 228 + if (!qddev) 229 + return; 228 230 229 231 /* 230 232 * Existing users get unresolvable errors till they close FDs.
+2 -1
drivers/ata/libata-core.c
··· 5348 5348 5349 5349 mutex_init(&ap->scsi_scan_mutex); 5350 5350 INIT_DELAYED_WORK(&ap->hotplug_task, ata_scsi_hotplug); 5351 - INIT_WORK(&ap->scsi_rescan_task, ata_scsi_dev_rescan); 5351 + INIT_DELAYED_WORK(&ap->scsi_rescan_task, ata_scsi_dev_rescan); 5352 5352 INIT_LIST_HEAD(&ap->eh_done_q); 5353 5353 init_waitqueue_head(&ap->eh_wait_q); 5354 5354 init_completion(&ap->park_req_pending); ··· 5954 5954 WARN_ON(!(ap->pflags & ATA_PFLAG_UNLOADED)); 5955 5955 5956 5956 cancel_delayed_work_sync(&ap->hotplug_task); 5957 + cancel_delayed_work_sync(&ap->scsi_rescan_task); 5957 5958 5958 5959 skip_eh: 5959 5960 /* clean up zpodd on port removal */
+1 -1
drivers/ata/libata-eh.c
··· 2984 2984 ehc->i.flags |= ATA_EHI_SETMODE; 2985 2985 2986 2986 /* schedule the scsi_rescan_device() here */ 2987 - schedule_work(&(ap->scsi_rescan_task)); 2987 + schedule_delayed_work(&ap->scsi_rescan_task, 0); 2988 2988 } else if (dev->class == ATA_DEV_UNKNOWN && 2989 2989 ehc->tries[dev->devno] && 2990 2990 ata_class_enabled(ehc->classes[dev->devno])) {
+21 -1
drivers/ata/libata-scsi.c
··· 4597 4597 void ata_scsi_dev_rescan(struct work_struct *work) 4598 4598 { 4599 4599 struct ata_port *ap = 4600 - container_of(work, struct ata_port, scsi_rescan_task); 4600 + container_of(work, struct ata_port, scsi_rescan_task.work); 4601 4601 struct ata_link *link; 4602 4602 struct ata_device *dev; 4603 4603 unsigned long flags; 4604 + bool delay_rescan = false; 4604 4605 4605 4606 mutex_lock(&ap->scsi_scan_mutex); 4606 4607 spin_lock_irqsave(ap->lock, flags); ··· 4615 4614 if (scsi_device_get(sdev)) 4616 4615 continue; 4617 4616 4617 + /* 4618 + * If the rescan work was scheduled because of a resume 4619 + * event, the port is already fully resumed, but the 4620 + * SCSI device may not yet be fully resumed. In such 4621 + * case, executing scsi_rescan_device() may cause a 4622 + * deadlock with the PM code on device_lock(). Prevent 4623 + * this by giving up and retrying rescan after a short 4624 + * delay. 4625 + */ 4626 + delay_rescan = sdev->sdev_gendev.power.is_suspended; 4627 + if (delay_rescan) { 4628 + scsi_device_put(sdev); 4629 + break; 4630 + } 4631 + 4618 4632 spin_unlock_irqrestore(ap->lock, flags); 4619 4633 scsi_rescan_device(&(sdev->sdev_gendev)); 4620 4634 scsi_device_put(sdev); ··· 4639 4623 4640 4624 spin_unlock_irqrestore(ap->lock, flags); 4641 4625 mutex_unlock(&ap->scsi_scan_mutex); 4626 + 4627 + if (delay_rescan) 4628 + schedule_delayed_work(&ap->scsi_rescan_task, 4629 + msecs_to_jiffies(5)); 4642 4630 }
+3
drivers/base/regmap/regcache.c
··· 284 284 { 285 285 int ret; 286 286 287 + if (!regmap_writeable(map, reg)) 288 + return false; 289 + 287 290 /* If we don't know the chip just got reset, then sync everything. */ 288 291 if (!map->no_sync_defaults) 289 292 return true;
+1
drivers/block/null_blk/main.c
··· 2244 2244 struct nullb_device *dev = nullb->dev; 2245 2245 2246 2246 null_del_dev(nullb); 2247 + null_free_device_storage(dev, false); 2247 2248 null_free_dev(dev); 2248 2249 } 2249 2250
+44 -18
drivers/block/rbd.c
··· 1334 1334 /* 1335 1335 * Must be called after rbd_obj_calc_img_extents(). 1336 1336 */ 1337 - static bool rbd_obj_copyup_enabled(struct rbd_obj_request *obj_req) 1337 + static void rbd_obj_set_copyup_enabled(struct rbd_obj_request *obj_req) 1338 1338 { 1339 - if (!obj_req->num_img_extents || 1340 - (rbd_obj_is_entire(obj_req) && 1341 - !obj_req->img_request->snapc->num_snaps)) 1342 - return false; 1339 + rbd_assert(obj_req->img_request->snapc); 1343 1340 1344 - return true; 1341 + if (obj_req->img_request->op_type == OBJ_OP_DISCARD) { 1342 + dout("%s %p objno %llu discard\n", __func__, obj_req, 1343 + obj_req->ex.oe_objno); 1344 + return; 1345 + } 1346 + 1347 + if (!obj_req->num_img_extents) { 1348 + dout("%s %p objno %llu not overlapping\n", __func__, obj_req, 1349 + obj_req->ex.oe_objno); 1350 + return; 1351 + } 1352 + 1353 + if (rbd_obj_is_entire(obj_req) && 1354 + !obj_req->img_request->snapc->num_snaps) { 1355 + dout("%s %p objno %llu entire\n", __func__, obj_req, 1356 + obj_req->ex.oe_objno); 1357 + return; 1358 + } 1359 + 1360 + obj_req->flags |= RBD_OBJ_FLAG_COPYUP_ENABLED; 1345 1361 } 1346 1362 1347 1363 static u64 rbd_obj_img_extents_bytes(struct rbd_obj_request *obj_req) ··· 1458 1442 static struct ceph_osd_request * 1459 1443 rbd_obj_add_osd_request(struct rbd_obj_request *obj_req, int num_ops) 1460 1444 { 1445 + rbd_assert(obj_req->img_request->snapc); 1461 1446 return __rbd_obj_add_osd_request(obj_req, obj_req->img_request->snapc, 1462 1447 num_ops); 1463 1448 } ··· 1595 1578 mutex_init(&img_request->state_mutex); 1596 1579 } 1597 1580 1581 + /* 1582 + * Only snap_id is captured here, for reads. For writes, snapshot 1583 + * context is captured in rbd_img_object_requests() after exclusive 1584 + * lock is ensured to be held. 
1585 + */ 1598 1586 static void rbd_img_capture_header(struct rbd_img_request *img_req) 1599 1587 { 1600 1588 struct rbd_device *rbd_dev = img_req->rbd_dev; 1601 1589 1602 1590 lockdep_assert_held(&rbd_dev->header_rwsem); 1603 1591 1604 - if (rbd_img_is_write(img_req)) 1605 - img_req->snapc = ceph_get_snap_context(rbd_dev->header.snapc); 1606 - else 1592 + if (!rbd_img_is_write(img_req)) 1607 1593 img_req->snap_id = rbd_dev->spec->snap_id; 1608 1594 1609 1595 if (rbd_dev_parent_get(rbd_dev)) ··· 2253 2233 if (ret) 2254 2234 return ret; 2255 2235 2256 - if (rbd_obj_copyup_enabled(obj_req)) 2257 - obj_req->flags |= RBD_OBJ_FLAG_COPYUP_ENABLED; 2258 - 2259 2236 obj_req->write_state = RBD_OBJ_WRITE_START; 2260 2237 return 0; 2261 2238 } ··· 2358 2341 if (ret) 2359 2342 return ret; 2360 2343 2361 - if (rbd_obj_copyup_enabled(obj_req)) 2362 - obj_req->flags |= RBD_OBJ_FLAG_COPYUP_ENABLED; 2363 2344 if (!obj_req->num_img_extents) { 2364 2345 obj_req->flags |= RBD_OBJ_FLAG_NOOP_FOR_NONEXISTENT; 2365 2346 if (rbd_obj_is_entire(obj_req)) ··· 3301 3286 case RBD_OBJ_WRITE_START: 3302 3287 rbd_assert(!*result); 3303 3288 3289 + rbd_obj_set_copyup_enabled(obj_req); 3304 3290 if (rbd_obj_write_is_noop(obj_req)) 3305 3291 return true; 3306 3292 ··· 3488 3472 3489 3473 static void rbd_img_object_requests(struct rbd_img_request *img_req) 3490 3474 { 3475 + struct rbd_device *rbd_dev = img_req->rbd_dev; 3491 3476 struct rbd_obj_request *obj_req; 3492 3477 3493 3478 rbd_assert(!img_req->pending.result && !img_req->pending.num_pending); 3479 + rbd_assert(!need_exclusive_lock(img_req) || 3480 + __rbd_is_lock_owner(rbd_dev)); 3481 + 3482 + if (rbd_img_is_write(img_req)) { 3483 + rbd_assert(!img_req->snapc); 3484 + down_read(&rbd_dev->header_rwsem); 3485 + img_req->snapc = ceph_get_snap_context(rbd_dev->header.snapc); 3486 + up_read(&rbd_dev->header_rwsem); 3487 + } 3494 3488 3495 3489 for_each_obj_request(img_req, obj_req) { 3496 3490 int result = 0; ··· 3518 3492 3519 3493 static bool 
rbd_img_advance(struct rbd_img_request *img_req, int *result) 3520 3494 { 3521 - struct rbd_device *rbd_dev = img_req->rbd_dev; 3522 3495 int ret; 3523 3496 3524 3497 again: ··· 3537 3512 case RBD_IMG_EXCLUSIVE_LOCK: 3538 3513 if (*result) 3539 3514 return true; 3540 - 3541 - rbd_assert(!need_exclusive_lock(img_req) || 3542 - __rbd_is_lock_owner(rbd_dev)); 3543 3515 3544 3516 rbd_img_object_requests(img_req); 3545 3517 if (!img_req->pending.num_pending) { ··· 3998 3976 static int rbd_post_acquire_action(struct rbd_device *rbd_dev) 3999 3977 { 4000 3978 int ret; 3979 + 3980 + ret = rbd_dev_refresh(rbd_dev); 3981 + if (ret) 3982 + return ret; 4001 3983 4002 3984 if (rbd_dev->header.features & RBD_FEATURE_OBJECT_MAP) { 4003 3985 ret = rbd_object_map_open(rbd_dev);
+5 -1
drivers/bluetooth/hci_qca.c
··· 78 78 QCA_HW_ERROR_EVENT, 79 79 QCA_SSR_TRIGGERED, 80 80 QCA_BT_OFF, 81 - QCA_ROM_FW 81 + QCA_ROM_FW, 82 + QCA_DEBUGFS_CREATED, 82 83 }; 83 84 84 85 enum qca_capabilities { ··· 634 633 umode_t mode; 635 634 636 635 if (!hdev->debugfs) 636 + return; 637 + 638 + if (test_and_set_bit(QCA_DEBUGFS_CREATED, &qca->flags)) 637 639 return; 638 640 639 641 ibs_dir = debugfs_create_dir("ibs", hdev->debugfs);
+4 -1
drivers/clk/clk-composite.c
··· 119 119 if (ret) 120 120 continue; 121 121 122 - rate_diff = abs(req->rate - tmp_req.rate); 122 + if (req->rate >= tmp_req.rate) 123 + rate_diff = req->rate - tmp_req.rate; 124 + else 125 + rate_diff = tmp_req.rate - req->rate; 123 126 124 127 if (!rate_diff || !req->best_parent_hw 125 128 || best_rate_diff > rate_diff) {
+1 -1
drivers/clk/clk-loongson2.c
··· 40 40 { 41 41 int ret; 42 42 struct clk_hw *hw; 43 - struct clk_init_data init; 43 + struct clk_init_data init = { }; 44 44 45 45 hw = devm_kzalloc(dev, sizeof(*hw), GFP_KERNEL); 46 46 if (!hw)
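The clk-loongson2 one-liner zero-initializes the on-stack `struct clk_init_data`, so any field the probe path does not set explicitly reads as zero/NULL rather than stack garbage when the struct is handed to the clk core. The general pattern, with an illustrative stand-in struct (the kernel patch uses `= { }`, the empty-initializer spelling GCC accepts; `= { 0 }` below is the strictly portable equivalent):

```c
#include <assert.h>
#include <string.h>

/* Illustrative stand-in for clk_init_data: the caller sets only some
 * fields, and the rest must be reliably zero. */
struct init_data {
    const char *name;
    unsigned long flags;
    int num_parents;
};

static struct init_data make_init(const char *name)
{
    struct init_data init = { 0 };  /* every member zeroed, like the fix */

    init.name = name;               /* only the fields we actually need */
    return init;
}
```

Without the initializer, `flags` and `num_parents` would hold whatever happened to be on the stack, which is exactly the class of intermittent bug the patch removes.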
+15 -3
drivers/clk/mediatek/clk-mt8365.c
··· 23 23 static DEFINE_SPINLOCK(mt8365_clk_lock); 24 24 25 25 static const struct mtk_fixed_clk top_fixed_clks[] = { 26 + FIXED_CLK(CLK_TOP_CLK_NULL, "clk_null", NULL, 0), 26 27 FIXED_CLK(CLK_TOP_I2S0_BCK, "i2s0_bck", NULL, 26000000), 27 28 FIXED_CLK(CLK_TOP_DSI0_LNTC_DSICK, "dsi0_lntc_dsick", "clk26m", 28 29 75000000), ··· 560 559 0x324, 16, 8, CLK_DIVIDER_ROUND_CLOSEST), 561 560 DIV_ADJ_F(CLK_TOP_APLL12_CK_DIV3, "apll12_ck_div3", "apll_i2s3_sel", 562 561 0x324, 24, 8, CLK_DIVIDER_ROUND_CLOSEST), 562 + DIV_ADJ_F(CLK_TOP_APLL12_CK_DIV4, "apll12_ck_div4", "apll_tdmout_sel", 563 + 0x328, 0, 8, CLK_DIVIDER_ROUND_CLOSEST), 564 + DIV_ADJ_F(CLK_TOP_APLL12_CK_DIV4B, "apll12_ck_div4b", "apll_tdmout_sel", 565 + 0x328, 8, 8, CLK_DIVIDER_ROUND_CLOSEST), 566 + DIV_ADJ_F(CLK_TOP_APLL12_CK_DIV5, "apll12_ck_div5", "apll_tdmin_sel", 567 + 0x328, 16, 8, CLK_DIVIDER_ROUND_CLOSEST), 568 + DIV_ADJ_F(CLK_TOP_APLL12_CK_DIV5B, "apll12_ck_div5b", "apll_tdmin_sel", 569 + 0x328, 24, 8, CLK_DIVIDER_ROUND_CLOSEST), 563 570 DIV_ADJ_F(CLK_TOP_APLL12_CK_DIV6, "apll12_ck_div6", "apll_spdif_sel", 564 571 0x32c, 0, 8, CLK_DIVIDER_ROUND_CLOSEST), 565 572 }; ··· 592 583 593 584 #define GATE_TOP0(_id, _name, _parent, _shift) \ 594 585 GATE_MTK(_id, _name, _parent, &top0_cg_regs, \ 595 - _shift, &mtk_clk_gate_ops_no_setclr_inv) 586 + _shift, &mtk_clk_gate_ops_no_setclr) 596 587 597 588 #define GATE_TOP1(_id, _name, _parent, _shift) \ 598 589 GATE_MTK(_id, _name, _parent, &top1_cg_regs, \ 599 - _shift, &mtk_clk_gate_ops_no_setclr) 590 + _shift, &mtk_clk_gate_ops_no_setclr_inv) 600 591 601 592 #define GATE_TOP2(_id, _name, _parent, _shift) \ 602 593 GATE_MTK(_id, _name, _parent, &top2_cg_regs, \ 603 - _shift, &mtk_clk_gate_ops_no_setclr) 594 + _shift, &mtk_clk_gate_ops_no_setclr_inv) 604 595 605 596 static const struct mtk_gate top_clk_gates[] = { 606 597 GATE_TOP0(CLK_TOP_CONN_32K, "conn_32k", "clk32k", 10), ··· 705 696 GATE_IFR3(CLK_IFR_GCPU, "ifr_gcpu", "axi_sel", 8), 706 697 GATE_IFR3(CLK_IFR_TRNG, 
"ifr_trng", "axi_sel", 9), 707 698 GATE_IFR3(CLK_IFR_AUXADC, "ifr_auxadc", "clk26m", 10), 699 + GATE_IFR3(CLK_IFR_CPUM, "ifr_cpum", "clk26m", 11), 708 700 GATE_IFR3(CLK_IFR_AUXADC_MD, "ifr_auxadc_md", "clk26m", 14), 709 701 GATE_IFR3(CLK_IFR_AP_DMA, "ifr_ap_dma", "axi_sel", 18), 710 702 GATE_IFR3(CLK_IFR_DEBUGSYS, "ifr_debugsys", "axi_sel", 24), ··· 727 717 GATE_IFR5(CLK_IFR_PWRAP_TMR, "ifr_pwrap_tmr", "clk26m", 12), 728 718 GATE_IFR5(CLK_IFR_PWRAP_SPI, "ifr_pwrap_spi", "clk26m", 13), 729 719 GATE_IFR5(CLK_IFR_PWRAP_SYS, "ifr_pwrap_sys", "clk26m", 14), 720 + GATE_MTK_FLAGS(CLK_IFR_MCU_PM_BK, "ifr_mcu_pm_bk", NULL, &ifr5_cg_regs, 721 + 17, &mtk_clk_gate_ops_setclr, CLK_IGNORE_UNUSED), 730 722 GATE_IFR5(CLK_IFR_IRRX_26M, "ifr_irrx_26m", "clk26m", 22), 731 723 GATE_IFR5(CLK_IFR_IRRX_32K, "ifr_irrx_32k", "clk32k", 23), 732 724 GATE_IFR5(CLK_IFR_I2C0_AXI, "ifr_i2c0_axi", "i2c_sel", 24),
+1 -1
drivers/clk/pxa/clk-pxa3xx.c
··· 164 164 accr &= ~disable; 165 165 accr |= enable; 166 166 167 - writel(accr, ACCR); 167 + writel(accr, clk_regs + ACCR); 168 168 if (xclkcfg) 169 169 __asm__("mcr p14, 0, %0, c6, c0, 0\n" : : "r"(xclkcfg)); 170 170
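The clk-pxa3xx fix writes ACCR relative to the mapped `clk_regs` base instead of treating the register offset as an absolute address. A sketch of the base-plus-offset addressing the fix restores, with a plain buffer standing in for the `ioremap()`ed register block (the `ACCR` offset value here is illustrative):

```c
#include <assert.h>
#include <stdint.h>

#define ACCR 0x00  /* register offset within the block (illustrative) */

/* Stand-in for writel(val, addr): real MMIO goes through the pointer
 * returned by ioremap(), never through the bare offset constant. */
static void writel_emul(uint32_t val, volatile uint32_t *addr)
{
    *addr = val;
}

/* The fixed form: compute the register address from base + offset. */
static void update_accr(volatile uint8_t *clk_regs, uint32_t accr)
{
    writel_emul(accr, (volatile uint32_t *)(clk_regs + ACCR));
}
```

The buggy form, `writel(accr, ACCR)`, dereferences the raw offset as if it were a virtual address, which faults or silently corrupts memory depending on the platform.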
+59 -59
drivers/edac/qcom_edac.c
··· 21 21 #define TRP_SYN_REG_CNT 6 22 22 #define DRP_SYN_REG_CNT 8 23 23 24 - #define LLCC_COMMON_STATUS0 0x0003000c 25 24 #define LLCC_LB_CNT_MASK GENMASK(31, 28) 26 25 #define LLCC_LB_CNT_SHIFT 28 27 - 28 - /* Single & double bit syndrome register offsets */ 29 - #define TRP_ECC_SB_ERR_SYN0 0x0002304c 30 - #define TRP_ECC_DB_ERR_SYN0 0x00020370 31 - #define DRP_ECC_SB_ERR_SYN0 0x0004204c 32 - #define DRP_ECC_DB_ERR_SYN0 0x00042070 33 - 34 - /* Error register offsets */ 35 - #define TRP_ECC_ERROR_STATUS1 0x00020348 36 - #define TRP_ECC_ERROR_STATUS0 0x00020344 37 - #define DRP_ECC_ERROR_STATUS1 0x00042048 38 - #define DRP_ECC_ERROR_STATUS0 0x00042044 39 - 40 - /* TRP, DRP interrupt register offsets */ 41 - #define DRP_INTERRUPT_STATUS 0x00041000 42 - #define TRP_INTERRUPT_0_STATUS 0x00020480 43 - #define DRP_INTERRUPT_CLEAR 0x00041008 44 - #define DRP_ECC_ERROR_CNTR_CLEAR 0x00040004 45 - #define TRP_INTERRUPT_0_CLEAR 0x00020484 46 - #define TRP_ECC_ERROR_CNTR_CLEAR 0x00020440 47 26 48 27 /* Mask and shift macros */ 49 28 #define ECC_DB_ERR_COUNT_MASK GENMASK(4, 0) ··· 38 59 39 60 #define DRP_TRP_INT_CLEAR GENMASK(1, 0) 40 61 #define DRP_TRP_CNT_CLEAR GENMASK(1, 0) 41 - 42 - /* Config registers offsets*/ 43 - #define DRP_ECC_ERROR_CFG 0x00040000 44 - 45 - /* Tag RAM, Data RAM interrupt register offsets */ 46 - #define CMN_INTERRUPT_0_ENABLE 0x0003001c 47 - #define CMN_INTERRUPT_2_ENABLE 0x0003003c 48 - #define TRP_INTERRUPT_0_ENABLE 0x00020488 49 - #define DRP_INTERRUPT_ENABLE 0x0004100c 50 62 51 63 #define SB_ERROR_THRESHOLD 0x1 52 64 #define SB_ERROR_THRESHOLD_SHIFT 24 ··· 58 88 static const struct llcc_edac_reg_data edac_reg_data[] = { 59 89 [LLCC_DRAM_CE] = { 60 90 .name = "DRAM Single-bit", 61 - .synd_reg = DRP_ECC_SB_ERR_SYN0, 62 - .count_status_reg = DRP_ECC_ERROR_STATUS1, 63 - .ways_status_reg = DRP_ECC_ERROR_STATUS0, 64 91 .reg_cnt = DRP_SYN_REG_CNT, 65 92 .count_mask = ECC_SB_ERR_COUNT_MASK, 66 93 .ways_mask = ECC_SB_ERR_WAYS_MASK, ··· 65 98 }, 66 99 
[LLCC_DRAM_UE] = { 67 100 .name = "DRAM Double-bit", 68 - .synd_reg = DRP_ECC_DB_ERR_SYN0, 69 - .count_status_reg = DRP_ECC_ERROR_STATUS1, 70 - .ways_status_reg = DRP_ECC_ERROR_STATUS0, 71 101 .reg_cnt = DRP_SYN_REG_CNT, 72 102 .count_mask = ECC_DB_ERR_COUNT_MASK, 73 103 .ways_mask = ECC_DB_ERR_WAYS_MASK, ··· 72 108 }, 73 109 [LLCC_TRAM_CE] = { 74 110 .name = "TRAM Single-bit", 75 - .synd_reg = TRP_ECC_SB_ERR_SYN0, 76 - .count_status_reg = TRP_ECC_ERROR_STATUS1, 77 - .ways_status_reg = TRP_ECC_ERROR_STATUS0, 78 111 .reg_cnt = TRP_SYN_REG_CNT, 79 112 .count_mask = ECC_SB_ERR_COUNT_MASK, 80 113 .ways_mask = ECC_SB_ERR_WAYS_MASK, ··· 79 118 }, 80 119 [LLCC_TRAM_UE] = { 81 120 .name = "TRAM Double-bit", 82 - .synd_reg = TRP_ECC_DB_ERR_SYN0, 83 - .count_status_reg = TRP_ECC_ERROR_STATUS1, 84 - .ways_status_reg = TRP_ECC_ERROR_STATUS0, 85 121 .reg_cnt = TRP_SYN_REG_CNT, 86 122 .count_mask = ECC_DB_ERR_COUNT_MASK, 87 123 .ways_mask = ECC_DB_ERR_WAYS_MASK, ··· 86 128 }, 87 129 }; 88 130 89 - static int qcom_llcc_core_setup(struct regmap *llcc_bcast_regmap) 131 + static int qcom_llcc_core_setup(struct llcc_drv_data *drv, struct regmap *llcc_bcast_regmap) 90 132 { 91 133 u32 sb_err_threshold; 92 134 int ret; ··· 95 137 * Configure interrupt enable registers such that Tag, Data RAM related 96 138 * interrupts are propagated to interrupt controller for servicing 97 139 */ 98 - ret = regmap_update_bits(llcc_bcast_regmap, CMN_INTERRUPT_2_ENABLE, 140 + ret = regmap_update_bits(llcc_bcast_regmap, drv->edac_reg_offset->cmn_interrupt_2_enable, 99 141 TRP0_INTERRUPT_ENABLE, 100 142 TRP0_INTERRUPT_ENABLE); 101 143 if (ret) 102 144 return ret; 103 145 104 - ret = regmap_update_bits(llcc_bcast_regmap, TRP_INTERRUPT_0_ENABLE, 146 + ret = regmap_update_bits(llcc_bcast_regmap, drv->edac_reg_offset->trp_interrupt_0_enable, 105 147 SB_DB_TRP_INTERRUPT_ENABLE, 106 148 SB_DB_TRP_INTERRUPT_ENABLE); 107 149 if (ret) 108 150 return ret; 109 151 110 152 sb_err_threshold = (SB_ERROR_THRESHOLD << 
SB_ERROR_THRESHOLD_SHIFT); 111 - ret = regmap_write(llcc_bcast_regmap, DRP_ECC_ERROR_CFG, 153 + ret = regmap_write(llcc_bcast_regmap, drv->edac_reg_offset->drp_ecc_error_cfg, 112 154 sb_err_threshold); 113 155 if (ret) 114 156 return ret; 115 157 116 - ret = regmap_update_bits(llcc_bcast_regmap, CMN_INTERRUPT_2_ENABLE, 158 + ret = regmap_update_bits(llcc_bcast_regmap, drv->edac_reg_offset->cmn_interrupt_2_enable, 117 159 DRP0_INTERRUPT_ENABLE, 118 160 DRP0_INTERRUPT_ENABLE); 119 161 if (ret) 120 162 return ret; 121 163 122 - ret = regmap_write(llcc_bcast_regmap, DRP_INTERRUPT_ENABLE, 164 + ret = regmap_write(llcc_bcast_regmap, drv->edac_reg_offset->drp_interrupt_enable, 123 165 SB_DB_DRP_INTERRUPT_ENABLE); 124 166 return ret; 125 167 } ··· 128 170 static int 129 171 qcom_llcc_clear_error_status(int err_type, struct llcc_drv_data *drv) 130 172 { 131 - int ret = 0; 173 + int ret; 132 174 133 175 switch (err_type) { 134 176 case LLCC_DRAM_CE: 135 177 case LLCC_DRAM_UE: 136 - ret = regmap_write(drv->bcast_regmap, DRP_INTERRUPT_CLEAR, 178 + ret = regmap_write(drv->bcast_regmap, 179 + drv->edac_reg_offset->drp_interrupt_clear, 137 180 DRP_TRP_INT_CLEAR); 138 181 if (ret) 139 182 return ret; 140 183 141 - ret = regmap_write(drv->bcast_regmap, DRP_ECC_ERROR_CNTR_CLEAR, 184 + ret = regmap_write(drv->bcast_regmap, 185 + drv->edac_reg_offset->drp_ecc_error_cntr_clear, 142 186 DRP_TRP_CNT_CLEAR); 143 187 if (ret) 144 188 return ret; 145 189 break; 146 190 case LLCC_TRAM_CE: 147 191 case LLCC_TRAM_UE: 148 - ret = regmap_write(drv->bcast_regmap, TRP_INTERRUPT_0_CLEAR, 192 + ret = regmap_write(drv->bcast_regmap, 193 + drv->edac_reg_offset->trp_interrupt_0_clear, 149 194 DRP_TRP_INT_CLEAR); 150 195 if (ret) 151 196 return ret; 152 197 153 - ret = regmap_write(drv->bcast_regmap, TRP_ECC_ERROR_CNTR_CLEAR, 198 + ret = regmap_write(drv->bcast_regmap, 199 + drv->edac_reg_offset->trp_ecc_error_cntr_clear, 154 200 DRP_TRP_CNT_CLEAR); 155 201 if (ret) 156 202 return ret; ··· 167 205 
return ret; 168 206 } 169 207 208 + struct qcom_llcc_syn_regs { 209 + u32 synd_reg; 210 + u32 count_status_reg; 211 + u32 ways_status_reg; 212 + }; 213 + 214 + static void get_reg_offsets(struct llcc_drv_data *drv, int err_type, 215 + struct qcom_llcc_syn_regs *syn_regs) 216 + { 217 + const struct llcc_edac_reg_offset *edac_reg_offset = drv->edac_reg_offset; 218 + 219 + switch (err_type) { 220 + case LLCC_DRAM_CE: 221 + syn_regs->synd_reg = edac_reg_offset->drp_ecc_sb_err_syn0; 222 + syn_regs->count_status_reg = edac_reg_offset->drp_ecc_error_status1; 223 + syn_regs->ways_status_reg = edac_reg_offset->drp_ecc_error_status0; 224 + break; 225 + case LLCC_DRAM_UE: 226 + syn_regs->synd_reg = edac_reg_offset->drp_ecc_db_err_syn0; 227 + syn_regs->count_status_reg = edac_reg_offset->drp_ecc_error_status1; 228 + syn_regs->ways_status_reg = edac_reg_offset->drp_ecc_error_status0; 229 + break; 230 + case LLCC_TRAM_CE: 231 + syn_regs->synd_reg = edac_reg_offset->trp_ecc_sb_err_syn0; 232 + syn_regs->count_status_reg = edac_reg_offset->trp_ecc_error_status1; 233 + syn_regs->ways_status_reg = edac_reg_offset->trp_ecc_error_status0; 234 + break; 235 + case LLCC_TRAM_UE: 236 + syn_regs->synd_reg = edac_reg_offset->trp_ecc_db_err_syn0; 237 + syn_regs->count_status_reg = edac_reg_offset->trp_ecc_error_status1; 238 + syn_regs->ways_status_reg = edac_reg_offset->trp_ecc_error_status0; 239 + break; 240 + } 241 + } 242 + 170 243 /* Dump Syndrome registers data for Tag RAM, Data RAM bit errors*/ 171 244 static int 172 245 dump_syn_reg_values(struct llcc_drv_data *drv, u32 bank, int err_type) 173 246 { 174 247 struct llcc_edac_reg_data reg_data = edac_reg_data[err_type]; 248 + struct qcom_llcc_syn_regs regs = { }; 175 249 int err_cnt, err_ways, ret, i; 176 250 u32 synd_reg, synd_val; 177 251 252 + get_reg_offsets(drv, err_type, &regs); 253 + 178 254 for (i = 0; i < reg_data.reg_cnt; i++) { 179 - synd_reg = reg_data.synd_reg + (i * 4); 255 + synd_reg = regs.synd_reg + (i * 4); 180 256 ret 
= regmap_read(drv->regmaps[bank], synd_reg, 181 257 &synd_val); 182 258 if (ret) ··· 224 224 reg_data.name, i, synd_val); 225 225 } 226 226 227 - ret = regmap_read(drv->regmaps[bank], reg_data.count_status_reg, 227 + ret = regmap_read(drv->regmaps[bank], regs.count_status_reg, 228 228 &err_cnt); 229 229 if (ret) 230 230 goto clear; ··· 234 234 edac_printk(KERN_CRIT, EDAC_LLCC, "%s: Error count: 0x%4x\n", 235 235 reg_data.name, err_cnt); 236 236 237 - ret = regmap_read(drv->regmaps[bank], reg_data.ways_status_reg, 237 + ret = regmap_read(drv->regmaps[bank], regs.ways_status_reg, 238 238 &err_ways); 239 239 if (ret) 240 240 goto clear; ··· 295 295 296 296 /* Iterate over the banks and look for Tag RAM or Data RAM errors */ 297 297 for (i = 0; i < drv->num_banks; i++) { 298 - ret = regmap_read(drv->regmaps[i], DRP_INTERRUPT_STATUS, 298 + ret = regmap_read(drv->regmaps[i], drv->edac_reg_offset->drp_interrupt_status, 299 299 &drp_error); 300 300 301 301 if (!ret && (drp_error & SB_ECC_ERROR)) { ··· 310 310 if (!ret) 311 311 irq_rc = IRQ_HANDLED; 312 312 313 - ret = regmap_read(drv->regmaps[i], TRP_INTERRUPT_0_STATUS, 313 + ret = regmap_read(drv->regmaps[i], drv->edac_reg_offset->trp_interrupt_0_status, 314 314 &trp_error); 315 315 316 316 if (!ret && (trp_error & SB_ECC_ERROR)) { ··· 342 342 int ecc_irq; 343 343 int rc; 344 344 345 - rc = qcom_llcc_core_setup(llcc_driv_data->bcast_regmap); 345 + rc = qcom_llcc_core_setup(llcc_driv_data, llcc_driv_data->bcast_regmap); 346 346 if (rc) 347 347 return rc; 348 348
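The qcom_edac rework above drops the hard-coded LLCC register `#define`s in favour of an offset table supplied at runtime by the llcc driver (`drv->edac_reg_offset`), so one EDAC driver can serve SoCs whose register maps differ. The shape of that pattern in isolation (struct name, field names, and offset values below are illustrative, not the exact kernel ones):

```c
#include <assert.h>
#include <stdint.h>

/* Per-SoC register layout, provided by the platform driver instead of
 * being baked into the consumer as compile-time constants. */
struct edac_reg_offset {
    uint32_t trp_interrupt_0_enable;
    uint32_t drp_interrupt_enable;
};

static const struct edac_reg_offset soc_a = {
    .trp_interrupt_0_enable = 0x20488,
    .drp_interrupt_enable   = 0x4100c,
};

static const struct edac_reg_offset soc_b = {
    .trp_interrupt_0_enable = 0x21488,  /* same register, moved */
    .drp_interrupt_enable   = 0x4200c,
};

/* The consumer dereferences the table it was handed, so adding a new
 * SoC means adding a table, not patching every register access. */
static uint32_t trp_enable_reg(const struct edac_reg_offset *off)
{
    return off->trp_interrupt_0_enable;
}
```

This is why `qcom_llcc_core_setup()` gains a `drv` parameter in the diff: every access that used a `#define` now needs a handle on the per-SoC table.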
+1
drivers/firmware/arm_ffa/driver.c
··· 424 424 ep_mem_access->flag = 0; 425 425 ep_mem_access->reserved = 0; 426 426 } 427 + mem_region->handle = 0; 427 428 mem_region->reserved_0 = 0; 428 429 mem_region->reserved_1 = 0; 429 430 mem_region->ep_count = args->nattrs;
+3 -2
drivers/firmware/cirrus/cs_dsp.c
··· 2124 2124 file, blocks, le32_to_cpu(blk->len), 2125 2125 type, le32_to_cpu(blk->id)); 2126 2126 2127 + region_name = cs_dsp_mem_region_name(type); 2127 2128 mem = cs_dsp_find_region(dsp, type); 2128 2129 if (!mem) { 2129 2130 cs_dsp_err(dsp, "No base for region %x\n", type); ··· 2148 2147 reg = dsp->ops->region_to_reg(mem, reg); 2149 2148 reg += offset; 2150 2149 } else { 2151 - cs_dsp_err(dsp, "No %x for algorithm %x\n", 2152 - type, le32_to_cpu(blk->id)); 2150 + cs_dsp_err(dsp, "No %s for algorithm %x\n", 2151 + region_name, le32_to_cpu(blk->id)); 2153 2152 } 2154 2153 break; 2155 2154
+16 -2
drivers/gpio/gpio-sim.c
··· 696 696 char **line_names; 697 697 698 698 list_for_each_entry(line, &bank->line_list, siblings) { 699 + if (line->offset >= bank->num_lines) 700 + continue; 701 + 699 702 if (line->name) { 700 703 if (line->offset > max_offset) 701 704 max_offset = line->offset; ··· 724 721 if (!line_names) 725 722 return ERR_PTR(-ENOMEM); 726 723 727 - list_for_each_entry(line, &bank->line_list, siblings) 728 - line_names[line->offset] = line->name; 724 + list_for_each_entry(line, &bank->line_list, siblings) { 725 + if (line->offset >= bank->num_lines) 726 + continue; 727 + 728 + if (line->name && (line->offset <= max_offset)) 729 + line_names[line->offset] = line->name; 730 + } 729 731 730 732 return line_names; 731 733 } ··· 762 754 763 755 list_for_each_entry(bank, &dev->bank_list, siblings) { 764 756 list_for_each_entry(line, &bank->line_list, siblings) { 757 + if (line->offset >= bank->num_lines) 758 + continue; 759 + 765 760 if (line->hog) 766 761 num_hogs++; 767 762 } ··· 780 769 781 770 list_for_each_entry(bank, &dev->bank_list, siblings) { 782 771 list_for_each_entry(line, &bank->line_list, siblings) { 772 + if (line->offset >= bank->num_lines) 773 + continue; 774 + 783 775 if (!line->hog) 784 776 continue; 785 777
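The gpio-sim change skips any configfs line entry whose `offset` is at or beyond the bank's `num_lines`, so an out-of-range offset can no longer index past the allocated `line_names` array. A minimal sketch of the filtering, array-based rather than the kernel's linked list (names illustrative):

```c
#include <assert.h>
#include <stddef.h>

struct line {
    unsigned int offset;
    const char *name;
};

/* Copy names into their slots, ignoring any entry whose offset is out
 * of range for this bank -- the check the fix adds before indexing. */
static size_t collect_names(const struct line *lines, size_t n_lines,
                            const char **names, size_t num_lines)
{
    size_t used = 0;

    for (size_t i = 0; i < n_lines; i++) {
        if (lines[i].offset >= num_lines)
            continue;  /* out of range: skip, never index */
        names[lines[i].offset] = lines[i].name;
        used++;
    }
    return used;
}
```

Without the bound check, a user-supplied offset writes past the end of `names[]`, which in the kernel case is a heap overflow reachable from configfs.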
+8 -4
drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
··· 1092 1092 * S0ix even though the system is suspending to idle, so return false 1093 1093 * in that case. 1094 1094 */ 1095 - if (!(acpi_gbl_FADT.flags & ACPI_FADT_LOW_POWER_S0)) 1096 - dev_warn_once(adev->dev, 1095 + if (!(acpi_gbl_FADT.flags & ACPI_FADT_LOW_POWER_S0)) { 1096 + dev_err_once(adev->dev, 1097 1097 "Power consumption will be higher as BIOS has not been configured for suspend-to-idle.\n" 1098 1098 "To use suspend-to-idle change the sleep mode in BIOS setup.\n"); 1099 + return false; 1100 + } 1099 1101 1100 1102 #if !IS_ENABLED(CONFIG_AMD_PMC) 1101 - dev_warn_once(adev->dev, 1103 + dev_err_once(adev->dev, 1102 1104 "Power consumption will be higher as the kernel has not been compiled with CONFIG_AMD_PMC.\n"); 1103 - #endif /* CONFIG_AMD_PMC */ 1105 + return false; 1106 + #else 1104 1107 return true; 1108 + #endif /* CONFIG_AMD_PMC */ 1105 1109 } 1106 1110 1107 1111 #endif /* CONFIG_SUSPEND */
+1
drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
··· 1615 1615 0x5874, 1616 1616 0x5940, 1617 1617 0x5941, 1618 + 0x5b70, 1618 1619 0x5b72, 1619 1620 0x5b73, 1620 1621 0x5b74,
+5 -7
drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
··· 79 79 static void amdgpu_bo_vm_destroy(struct ttm_buffer_object *tbo) 80 80 { 81 81 struct amdgpu_device *adev = amdgpu_ttm_adev(tbo->bdev); 82 - struct amdgpu_bo *bo = ttm_to_amdgpu_bo(tbo); 82 + struct amdgpu_bo *shadow_bo = ttm_to_amdgpu_bo(tbo), *bo; 83 83 struct amdgpu_bo_vm *vmbo; 84 84 85 + bo = shadow_bo->parent; 85 86 vmbo = to_amdgpu_bo_vm(bo); 86 87 /* in case amdgpu_device_recover_vram got NULL of bo->parent */ 87 88 if (!list_empty(&vmbo->shadow_list)) { ··· 140 139 141 140 if (flags & AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED) 142 141 places[c].lpfn = visible_pfn; 143 - else if (adev->gmc.real_vram_size != adev->gmc.visible_vram_size) 142 + else 144 143 places[c].flags |= TTM_PL_FLAG_TOPDOWN; 145 144 146 145 if (flags & AMDGPU_GEM_CREATE_VRAM_CONTIGUOUS) ··· 695 694 return r; 696 695 697 696 *vmbo_ptr = to_amdgpu_bo_vm(bo_ptr); 698 - INIT_LIST_HEAD(&(*vmbo_ptr)->shadow_list); 699 - /* Set destroy callback to amdgpu_bo_vm_destroy after vmbo->shadow_list 700 - * is initialized. 701 - */ 702 - bo_ptr->tbo.destroy = &amdgpu_bo_vm_destroy; 703 697 return r; 704 698 } 705 699 ··· 711 715 712 716 mutex_lock(&adev->shadow_list_lock); 713 717 list_add_tail(&vmbo->shadow_list, &adev->shadow_list); 718 + vmbo->shadow->parent = amdgpu_bo_ref(&vmbo->bo); 719 + vmbo->shadow->tbo.destroy = &amdgpu_bo_vm_destroy; 714 720 mutex_unlock(&adev->shadow_list_lock); 715 721 } 716 722
+5 -2
drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
··· 3548 3548 void *fw_pri_cpu_addr; 3549 3549 int ret; 3550 3550 3551 + if (adev->psp.vbflash_image_size == 0) 3552 + return -EINVAL; 3553 + 3551 3554 dev_info(adev->dev, "VBIOS flash to PSP started"); 3552 3555 3553 3556 ret = amdgpu_bo_create_kernel(adev, adev->psp.vbflash_image_size, ··· 3602 3599 } 3603 3600 3604 3601 static const struct bin_attribute psp_vbflash_bin_attr = { 3605 - .attr = {.name = "psp_vbflash", .mode = 0664}, 3602 + .attr = {.name = "psp_vbflash", .mode = 0660}, 3606 3603 .size = 0, 3607 3604 .write = amdgpu_psp_vbflash_write, 3608 3605 .read = amdgpu_psp_vbflash_read, 3609 3606 }; 3610 3607 3611 - static DEVICE_ATTR(psp_vbflash_status, 0444, amdgpu_psp_vbflash_status, NULL); 3608 + static DEVICE_ATTR(psp_vbflash_status, 0440, amdgpu_psp_vbflash_status, NULL); 3612 3609 3613 3610 int amdgpu_psp_sysfs_init(struct amdgpu_device *adev) 3614 3611 {
+18
drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
··· 581 581 if (ring->is_sw_ring) 582 582 amdgpu_sw_ring_ib_end(ring); 583 583 } 584 + 585 + void amdgpu_ring_ib_on_emit_cntl(struct amdgpu_ring *ring) 586 + { 587 + if (ring->is_sw_ring) 588 + amdgpu_sw_ring_ib_mark_offset(ring, AMDGPU_MUX_OFFSET_TYPE_CONTROL); 589 + } 590 + 591 + void amdgpu_ring_ib_on_emit_ce(struct amdgpu_ring *ring) 592 + { 593 + if (ring->is_sw_ring) 594 + amdgpu_sw_ring_ib_mark_offset(ring, AMDGPU_MUX_OFFSET_TYPE_CE); 595 + } 596 + 597 + void amdgpu_ring_ib_on_emit_de(struct amdgpu_ring *ring) 598 + { 599 + if (ring->is_sw_ring) 600 + amdgpu_sw_ring_ib_mark_offset(ring, AMDGPU_MUX_OFFSET_TYPE_DE); 601 + }
+9
drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
··· 227 227 int (*preempt_ib)(struct amdgpu_ring *ring); 228 228 void (*emit_mem_sync)(struct amdgpu_ring *ring); 229 229 void (*emit_wave_limit)(struct amdgpu_ring *ring, bool enable); 230 + void (*patch_cntl)(struct amdgpu_ring *ring, unsigned offset); 231 + void (*patch_ce)(struct amdgpu_ring *ring, unsigned offset); 232 + void (*patch_de)(struct amdgpu_ring *ring, unsigned offset); 230 233 }; 231 234 232 235 struct amdgpu_ring { ··· 321 318 #define amdgpu_ring_init_cond_exec(r) (r)->funcs->init_cond_exec((r)) 322 319 #define amdgpu_ring_patch_cond_exec(r,o) (r)->funcs->patch_cond_exec((r),(o)) 323 320 #define amdgpu_ring_preempt_ib(r) (r)->funcs->preempt_ib(r) 321 + #define amdgpu_ring_patch_cntl(r, o) ((r)->funcs->patch_cntl((r), (o))) 322 + #define amdgpu_ring_patch_ce(r, o) ((r)->funcs->patch_ce((r), (o))) 323 + #define amdgpu_ring_patch_de(r, o) ((r)->funcs->patch_de((r), (o))) 324 324 325 325 int amdgpu_ring_alloc(struct amdgpu_ring *ring, unsigned ndw); 326 326 void amdgpu_ring_ib_begin(struct amdgpu_ring *ring); 327 327 void amdgpu_ring_ib_end(struct amdgpu_ring *ring); 328 + void amdgpu_ring_ib_on_emit_cntl(struct amdgpu_ring *ring); 329 + void amdgpu_ring_ib_on_emit_ce(struct amdgpu_ring *ring); 330 + void amdgpu_ring_ib_on_emit_de(struct amdgpu_ring *ring); 328 331 329 332 void amdgpu_ring_insert_nop(struct amdgpu_ring *ring, uint32_t count); 330 333 void amdgpu_ring_generic_pad_ib(struct amdgpu_ring *ring, struct amdgpu_ib *ib);
+60
drivers/gpu/drm/amd/amdgpu/amdgpu_ring_mux.c
··· 105 105 amdgpu_fence_update_start_timestamp(e->ring, 106 106 chunk->sync_seq, 107 107 ktime_get()); 108 + if (chunk->sync_seq == 109 + le32_to_cpu(*(e->ring->fence_drv.cpu_addr + 2))) { 110 + if (chunk->cntl_offset <= e->ring->buf_mask) 111 + amdgpu_ring_patch_cntl(e->ring, 112 + chunk->cntl_offset); 113 + if (chunk->ce_offset <= e->ring->buf_mask) 114 + amdgpu_ring_patch_ce(e->ring, chunk->ce_offset); 115 + if (chunk->de_offset <= e->ring->buf_mask) 116 + amdgpu_ring_patch_de(e->ring, chunk->de_offset); 117 + } 108 118 amdgpu_ring_mux_copy_pkt_from_sw_ring(mux, e->ring, 109 119 chunk->start, 110 120 chunk->end); ··· 417 407 amdgpu_ring_mux_end_ib(mux, ring); 418 408 } 419 409 410 + void amdgpu_sw_ring_ib_mark_offset(struct amdgpu_ring *ring, enum amdgpu_ring_mux_offset_type type) 411 + { 412 + struct amdgpu_device *adev = ring->adev; 413 + struct amdgpu_ring_mux *mux = &adev->gfx.muxer; 414 + unsigned offset; 415 + 416 + offset = ring->wptr & ring->buf_mask; 417 + 418 + amdgpu_ring_mux_ib_mark_offset(mux, ring, offset, type); 419 + } 420 + 420 421 void amdgpu_ring_mux_start_ib(struct amdgpu_ring_mux *mux, struct amdgpu_ring *ring) 421 422 { 422 423 struct amdgpu_mux_entry *e; ··· 450 429 } 451 430 452 431 chunk->start = ring->wptr; 432 + /* the initialized value used to check if they are set by the ib submission*/ 433 + chunk->cntl_offset = ring->buf_mask + 1; 434 + chunk->de_offset = ring->buf_mask + 1; 435 + chunk->ce_offset = ring->buf_mask + 1; 453 436 list_add_tail(&chunk->entry, &e->list); 454 437 } 455 438 ··· 476 451 list_del(&chunk->entry); 477 452 kmem_cache_free(amdgpu_mux_chunk_slab, chunk); 478 453 } 454 + } 455 + } 456 + 457 + void amdgpu_ring_mux_ib_mark_offset(struct amdgpu_ring_mux *mux, 458 + struct amdgpu_ring *ring, u64 offset, 459 + enum amdgpu_ring_mux_offset_type type) 460 + { 461 + struct amdgpu_mux_entry *e; 462 + struct amdgpu_mux_chunk *chunk; 463 + 464 + e = amdgpu_ring_mux_sw_entry(mux, ring); 465 + if (!e) { 466 + 
DRM_ERROR("cannot find entry!\n"); 467 + return; 468 + } 469 + 470 + chunk = list_last_entry(&e->list, struct amdgpu_mux_chunk, entry); 471 + if (!chunk) { 472 + DRM_ERROR("cannot find chunk!\n"); 473 + return; 474 + } 475 + 476 + switch (type) { 477 + case AMDGPU_MUX_OFFSET_TYPE_CONTROL: 478 + chunk->cntl_offset = offset; 479 + break; 480 + case AMDGPU_MUX_OFFSET_TYPE_DE: 481 + chunk->de_offset = offset; 482 + break; 483 + case AMDGPU_MUX_OFFSET_TYPE_CE: 484 + chunk->ce_offset = offset; 485 + break; 486 + default: 487 + DRM_ERROR("invalid type (%d)\n", type); 488 + break; 479 489 } 480 490 } 481 491
+15
drivers/gpu/drm/amd/amdgpu/amdgpu_ring_mux.h
··· 50 50 struct list_head list; 51 51 }; 52 52 53 + enum amdgpu_ring_mux_offset_type { 54 + AMDGPU_MUX_OFFSET_TYPE_CONTROL, 55 + AMDGPU_MUX_OFFSET_TYPE_DE, 56 + AMDGPU_MUX_OFFSET_TYPE_CE, 57 + }; 58 + 53 59 struct amdgpu_ring_mux { 54 60 struct amdgpu_ring *real_ring; 55 61 ··· 78 72 * @sync_seq: the fence seqno related with the saved IB. 79 73 * @start:- start location on the software ring. 80 74 * @end:- end location on the software ring. 75 + * @control_offset:- the PRE_RESUME bit position used for resubmission. 76 + * @de_offset:- the anchor in write_data for de meta of resubmission. 77 + * @ce_offset:- the anchor in write_data for ce meta of resubmission. 81 78 */ 82 79 struct amdgpu_mux_chunk { 83 80 struct list_head entry; 84 81 uint32_t sync_seq; 85 82 u64 start; 86 83 u64 end; 84 + u64 cntl_offset; 85 + u64 de_offset; 86 + u64 ce_offset; 87 87 }; 88 88 89 89 int amdgpu_ring_mux_init(struct amdgpu_ring_mux *mux, struct amdgpu_ring *ring, ··· 101 89 u64 amdgpu_ring_mux_get_rptr(struct amdgpu_ring_mux *mux, struct amdgpu_ring *ring); 102 90 void amdgpu_ring_mux_start_ib(struct amdgpu_ring_mux *mux, struct amdgpu_ring *ring); 103 91 void amdgpu_ring_mux_end_ib(struct amdgpu_ring_mux *mux, struct amdgpu_ring *ring); 92 + void amdgpu_ring_mux_ib_mark_offset(struct amdgpu_ring_mux *mux, struct amdgpu_ring *ring, 93 + u64 offset, enum amdgpu_ring_mux_offset_type type); 104 94 bool amdgpu_mcbp_handle_trailing_fence_irq(struct amdgpu_ring_mux *mux); 105 95 106 96 u64 amdgpu_sw_ring_get_rptr_gfx(struct amdgpu_ring *ring); ··· 111 97 void amdgpu_sw_ring_insert_nop(struct amdgpu_ring *ring, uint32_t count); 112 98 void amdgpu_sw_ring_ib_begin(struct amdgpu_ring *ring); 113 99 void amdgpu_sw_ring_ib_end(struct amdgpu_ring *ring); 100 + void amdgpu_sw_ring_ib_mark_offset(struct amdgpu_ring *ring, enum amdgpu_ring_mux_offset_type type); 114 101 const char *amdgpu_sw_ring_name(int idx); 115 102 unsigned int amdgpu_sw_ring_priority(int idx); 116 103
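In the ring-mux patches above, each chunk's `cntl_offset`/`ce_offset`/`de_offset` starts at `ring->buf_mask + 1`, one past any valid ring index, so the resubmission path can distinguish "never recorded" from a legitimate offset of 0 with a single `<= buf_mask` comparison. The sentinel pattern on its own (names and mask value illustrative):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define BUF_MASK 0x3ffu  /* valid ring indices are 0..BUF_MASK */

struct chunk {
    uint64_t cntl_offset;
};

static void chunk_init(struct chunk *c)
{
    /* One past the largest valid index: an unambiguous "unset" value,
     * since 0 is itself a valid offset and can't serve as the flag. */
    c->cntl_offset = BUF_MASK + 1;
}

/* Any recorded offset (including 0) passes; the sentinel does not. */
static bool offset_recorded(const struct chunk *c)
{
    return c->cntl_offset <= BUF_MASK;
}
```

This is the comparison `amdgpu_ring_mux_copy_pkt_from_sw_ring`'s caller performs (`chunk->cntl_offset <= e->ring->buf_mask`) before patching each packet.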
-1
drivers/gpu/drm/amd/amdgpu/amdgpu_vm_pt.c
··· 564 564 return r; 565 565 } 566 566 567 - (*vmbo)->shadow->parent = amdgpu_bo_ref(bo); 568 567 amdgpu_bo_add_to_shadow_list(*vmbo); 569 568 570 569 return 0;
+4 -3
drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
··· 800 800 { 801 801 struct amdgpu_vram_mgr *mgr = to_vram_mgr(man); 802 802 struct drm_buddy *mm = &mgr->mm; 803 - struct drm_buddy_block *block; 803 + struct amdgpu_vram_reservation *rsv; 804 804 805 805 drm_printf(printer, " vis usage:%llu\n", 806 806 amdgpu_vram_mgr_vis_usage(mgr)); ··· 812 812 drm_buddy_print(mm, printer); 813 813 814 814 drm_printf(printer, "reserved:\n"); 815 - list_for_each_entry(block, &mgr->reserved_pages, link) 816 - drm_buddy_block_print(mm, block, printer); 815 + list_for_each_entry(rsv, &mgr->reserved_pages, blocks) 816 + drm_printf(printer, "%#018llx-%#018llx: %llu\n", 817 + rsv->start, rsv->start + rsv->size, rsv->size); 817 818 mutex_unlock(&mgr->lock); 818 819 } 819 820
+92 -44
drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
··· 149 149 #define mmGOLDEN_TSC_COUNT_LOWER_Renoir 0x0026 150 150 #define mmGOLDEN_TSC_COUNT_LOWER_Renoir_BASE_IDX 1 151 151 152 - #define mmGOLDEN_TSC_COUNT_UPPER_Raven 0x007a 153 - #define mmGOLDEN_TSC_COUNT_UPPER_Raven_BASE_IDX 0 154 - #define mmGOLDEN_TSC_COUNT_LOWER_Raven 0x007b 155 - #define mmGOLDEN_TSC_COUNT_LOWER_Raven_BASE_IDX 0 156 - 157 - #define mmGOLDEN_TSC_COUNT_UPPER_Raven2 0x0068 158 - #define mmGOLDEN_TSC_COUNT_UPPER_Raven2_BASE_IDX 0 159 - #define mmGOLDEN_TSC_COUNT_LOWER_Raven2 0x0069 160 - #define mmGOLDEN_TSC_COUNT_LOWER_Raven2_BASE_IDX 0 161 - 162 152 enum ta_ras_gfx_subblock { 163 153 /*CPC*/ 164 154 TA_RAS_BLOCK__GFX_CPC_INDEX_START = 0, ··· 755 765 static int gfx_v9_0_get_cu_info(struct amdgpu_device *adev, 756 766 struct amdgpu_cu_info *cu_info); 757 767 static uint64_t gfx_v9_0_get_gpu_clock_counter(struct amdgpu_device *adev); 758 - static void gfx_v9_0_ring_emit_de_meta(struct amdgpu_ring *ring, bool resume); 768 + static void gfx_v9_0_ring_emit_de_meta(struct amdgpu_ring *ring, bool resume, bool usegds); 759 769 static u64 gfx_v9_0_ring_get_rptr_compute(struct amdgpu_ring *ring); 760 770 static void gfx_v9_0_query_ras_error_count(struct amdgpu_device *adev, 761 771 void *ras_error_status); ··· 3994 4004 preempt_enable(); 3995 4005 clock = clock_lo | (clock_hi << 32ULL); 3996 4006 break; 3997 - case IP_VERSION(9, 1, 0): 3998 - case IP_VERSION(9, 2, 2): 3999 - preempt_disable(); 4000 - if (adev->rev_id >= 0x8) { 4001 - clock_hi = RREG32_SOC15_NO_KIQ(PWR, 0, mmGOLDEN_TSC_COUNT_UPPER_Raven2); 4002 - clock_lo = RREG32_SOC15_NO_KIQ(PWR, 0, mmGOLDEN_TSC_COUNT_LOWER_Raven2); 4003 - hi_check = RREG32_SOC15_NO_KIQ(PWR, 0, mmGOLDEN_TSC_COUNT_UPPER_Raven2); 4004 - } else { 4005 - clock_hi = RREG32_SOC15_NO_KIQ(PWR, 0, mmGOLDEN_TSC_COUNT_UPPER_Raven); 4006 - clock_lo = RREG32_SOC15_NO_KIQ(PWR, 0, mmGOLDEN_TSC_COUNT_LOWER_Raven); 4007 - hi_check = RREG32_SOC15_NO_KIQ(PWR, 0, mmGOLDEN_TSC_COUNT_UPPER_Raven); 4008 - } 4009 - /* The PWR TSC clock 
frequency is 100MHz, which sets 32-bit carry over 4010 - * roughly every 42 seconds. 4011 - */ 4012 - if (hi_check != clock_hi) { 4013 - if (adev->rev_id >= 0x8) 4014 - clock_lo = RREG32_SOC15_NO_KIQ(PWR, 0, mmGOLDEN_TSC_COUNT_LOWER_Raven2); 4015 - else 4016 - clock_lo = RREG32_SOC15_NO_KIQ(PWR, 0, mmGOLDEN_TSC_COUNT_LOWER_Raven); 4017 - clock_hi = hi_check; 4018 - } 4019 - preempt_enable(); 4020 - clock = clock_lo | (clock_hi << 32ULL); 4021 - break; 4022 4007 default: 4023 4008 amdgpu_gfx_off_ctrl(adev, false); 4024 4009 mutex_lock(&adev->gfx.gpu_clock_mutex); ··· 5127 5162 gfx_v9_0_ring_emit_de_meta(ring, 5128 5163 (!amdgpu_sriov_vf(ring->adev) && 5129 5164 flags & AMDGPU_IB_PREEMPTED) ? 5130 - true : false); 5165 + true : false, 5166 + job->gds_size > 0 && job->gds_base != 0); 5131 5167 } 5132 5168 5133 5169 amdgpu_ring_write(ring, header); ··· 5139 5173 #endif 5140 5174 lower_32_bits(ib->gpu_addr)); 5141 5175 amdgpu_ring_write(ring, upper_32_bits(ib->gpu_addr)); 5176 + amdgpu_ring_ib_on_emit_cntl(ring); 5142 5177 amdgpu_ring_write(ring, control); 5178 + } 5179 + 5180 + static void gfx_v9_0_ring_patch_cntl(struct amdgpu_ring *ring, 5181 + unsigned offset) 5182 + { 5183 + u32 control = ring->ring[offset]; 5184 + 5185 + control |= INDIRECT_BUFFER_PRE_RESUME(1); 5186 + ring->ring[offset] = control; 5187 + } 5188 + 5189 + static void gfx_v9_0_ring_patch_ce_meta(struct amdgpu_ring *ring, 5190 + unsigned offset) 5191 + { 5192 + struct amdgpu_device *adev = ring->adev; 5193 + void *ce_payload_cpu_addr; 5194 + uint64_t payload_offset, payload_size; 5195 + 5196 + payload_size = sizeof(struct v9_ce_ib_state); 5197 + 5198 + if (ring->is_mes_queue) { 5199 + payload_offset = offsetof(struct amdgpu_mes_ctx_meta_data, 5200 + gfx[0].gfx_meta_data) + 5201 + offsetof(struct v9_gfx_meta_data, ce_payload); 5202 + ce_payload_cpu_addr = 5203 + amdgpu_mes_ctx_get_offs_cpu_addr(ring, payload_offset); 5204 + } else { 5205 + payload_offset = offsetof(struct v9_gfx_meta_data, 
ce_payload); 5206 + ce_payload_cpu_addr = adev->virt.csa_cpu_addr + payload_offset; 5207 + } 5208 + 5209 + if (offset + (payload_size >> 2) <= ring->buf_mask + 1) { 5210 + memcpy((void *)&ring->ring[offset], ce_payload_cpu_addr, payload_size); 5211 + } else { 5212 + memcpy((void *)&ring->ring[offset], ce_payload_cpu_addr, 5213 + (ring->buf_mask + 1 - offset) << 2); 5214 + payload_size -= (ring->buf_mask + 1 - offset) << 2; 5215 + memcpy((void *)&ring->ring[0], 5216 + ce_payload_cpu_addr + ((ring->buf_mask + 1 - offset) << 2), 5217 + payload_size); 5218 + } 5219 + } 5220 + 5221 + static void gfx_v9_0_ring_patch_de_meta(struct amdgpu_ring *ring, 5222 + unsigned offset) 5223 + { 5224 + struct amdgpu_device *adev = ring->adev; 5225 + void *de_payload_cpu_addr; 5226 + uint64_t payload_offset, payload_size; 5227 + 5228 + payload_size = sizeof(struct v9_de_ib_state); 5229 + 5230 + if (ring->is_mes_queue) { 5231 + payload_offset = offsetof(struct amdgpu_mes_ctx_meta_data, 5232 + gfx[0].gfx_meta_data) + 5233 + offsetof(struct v9_gfx_meta_data, de_payload); 5234 + de_payload_cpu_addr = 5235 + amdgpu_mes_ctx_get_offs_cpu_addr(ring, payload_offset); 5236 + } else { 5237 + payload_offset = offsetof(struct v9_gfx_meta_data, de_payload); 5238 + de_payload_cpu_addr = adev->virt.csa_cpu_addr + payload_offset; 5239 + } 5240 + 5241 + if (offset + (payload_size >> 2) <= ring->buf_mask + 1) { 5242 + memcpy((void *)&ring->ring[offset], de_payload_cpu_addr, payload_size); 5243 + } else { 5244 + memcpy((void *)&ring->ring[offset], de_payload_cpu_addr, 5245 + (ring->buf_mask + 1 - offset) << 2); 5246 + payload_size -= (ring->buf_mask + 1 - offset) << 2; 5247 + memcpy((void *)&ring->ring[0], 5248 + de_payload_cpu_addr + ((ring->buf_mask + 1 - offset) << 2), 5249 + payload_size); 5250 + } 5143 5251 } 5144 5252 5145 5253 static void gfx_v9_0_ring_emit_ib_compute(struct amdgpu_ring *ring, ··· 5411 5371 amdgpu_ring_write(ring, lower_32_bits(ce_payload_gpu_addr)); 5412 5372 
amdgpu_ring_write(ring, upper_32_bits(ce_payload_gpu_addr)); 5413 5373 5374 + amdgpu_ring_ib_on_emit_ce(ring); 5375 + 5414 5376 if (resume) 5415 5377 amdgpu_ring_write_multiple(ring, ce_payload_cpu_addr, 5416 5378 sizeof(ce_payload) >> 2); ··· 5446 5404 amdgpu_ring_alloc(ring, 13); 5447 5405 gfx_v9_0_ring_emit_fence(ring, ring->trail_fence_gpu_addr, 5448 5406 ring->trail_seq, AMDGPU_FENCE_FLAG_EXEC | AMDGPU_FENCE_FLAG_INT); 5449 - /*reset the CP_VMID_PREEMPT after trailing fence*/ 5450 - amdgpu_ring_emit_wreg(ring, 5451 - SOC15_REG_OFFSET(GC, 0, mmCP_VMID_PREEMPT), 5452 - 0x0); 5453 5407 5454 5408 /* assert IB preemption, emit the trailing fence */ 5455 5409 kiq->pmf->kiq_unmap_queues(kiq_ring, ring, PREEMPT_QUEUES_NO_UNMAP, ··· 5468 5430 DRM_WARN("ring %d timeout to preempt ib\n", ring->idx); 5469 5431 } 5470 5432 5433 + /*reset the CP_VMID_PREEMPT after trailing fence*/ 5434 + amdgpu_ring_emit_wreg(ring, 5435 + SOC15_REG_OFFSET(GC, 0, mmCP_VMID_PREEMPT), 5436 + 0x0); 5471 5437 amdgpu_ring_commit(ring); 5472 5438 5473 5439 /* deassert preemption condition */ ··· 5479 5437 return r; 5480 5438 } 5481 5439 5482 - static void gfx_v9_0_ring_emit_de_meta(struct amdgpu_ring *ring, bool resume) 5440 + static void gfx_v9_0_ring_emit_de_meta(struct amdgpu_ring *ring, bool resume, bool usegds) 5483 5441 { 5484 5442 struct amdgpu_device *adev = ring->adev; 5485 5443 struct v9_de_ib_state de_payload = {0}; ··· 5510 5468 PAGE_SIZE); 5511 5469 } 5512 5470 5513 - de_payload.gds_backup_addrlo = lower_32_bits(gds_addr); 5514 - de_payload.gds_backup_addrhi = upper_32_bits(gds_addr); 5471 + if (usegds) { 5472 + de_payload.gds_backup_addrlo = lower_32_bits(gds_addr); 5473 + de_payload.gds_backup_addrhi = upper_32_bits(gds_addr); 5474 + } 5515 5475 5516 5476 cnt = (sizeof(de_payload) >> 2) + 4 - 2; 5517 5477 amdgpu_ring_write(ring, PACKET3(PACKET3_WRITE_DATA, cnt)); ··· 5524 5480 amdgpu_ring_write(ring, lower_32_bits(de_payload_gpu_addr)); 5525 5481 amdgpu_ring_write(ring, 
upper_32_bits(de_payload_gpu_addr)); 5526 5482 5483 + amdgpu_ring_ib_on_emit_de(ring); 5527 5484 if (resume) 5528 5485 amdgpu_ring_write_multiple(ring, de_payload_cpu_addr, 5529 5486 sizeof(de_payload) >> 2); ··· 6935 6890 .emit_reg_write_reg_wait = gfx_v9_0_ring_emit_reg_write_reg_wait, 6936 6891 .soft_recovery = gfx_v9_0_ring_soft_recovery, 6937 6892 .emit_mem_sync = gfx_v9_0_emit_mem_sync, 6893 + .patch_cntl = gfx_v9_0_ring_patch_cntl, 6894 + .patch_de = gfx_v9_0_ring_patch_de_meta, 6895 + .patch_ce = gfx_v9_0_ring_patch_ce_meta, 6938 6896 }; 6939 6897 6940 6898 static const struct amdgpu_ring_funcs gfx_v9_0_ring_funcs_compute = {
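The new `gfx_v9_0_ring_patch_ce_meta()`/`..._de_meta()` helpers above copy a payload into a ring buffer that may wrap: `buf_mask` is size-1 for a power-of-two ring of 32-bit dwords, and a payload that would run past the end is split into a tail copy plus a head copy. A minimal standalone sketch of that wrap-aware copy (simplified types and a made-up function name, not the kernel code):

```c
#include <stdint.h>
#include <string.h>

/*
 * Illustrative model of the split memcpy in the patch helpers: if the
 * payload fits before the end of the ring it is copied in one go,
 * otherwise the copy is split at the ring boundary and continues at
 * index 0. Sizes are in bytes, the ring is indexed in dwords.
 */
static void ring_patch_copy(uint32_t *ring, uint32_t buf_mask,
			    uint32_t offset, const void *src, size_t size)
{
	size_t ring_dwords = (size_t)buf_mask + 1;

	if (offset + (size >> 2) <= ring_dwords) {
		/* payload fits without wrapping */
		memcpy(&ring[offset], src, size);
	} else {
		/* copy up to the end of the ring, then wrap to index 0 */
		size_t tail = (ring_dwords - offset) << 2;

		memcpy(&ring[offset], src, tail);
		memcpy(&ring[0], (const uint8_t *)src + tail, size - tail);
	}
}
```

With an 8-dword ring, a 4-dword payload written at offset 6 lands in slots 6, 7, 0, 1 — the same wrap behaviour the patch helpers rely on when the CE/DE metadata straddles the end of the ring.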
+4 -3
drivers/gpu/drm/amd/amdgpu/soc15.c
··· 301 301 u32 reference_clock = adev->clock.spll.reference_freq; 302 302 303 303 if (adev->ip_versions[MP1_HWIP][0] == IP_VERSION(12, 0, 0) || 304 - adev->ip_versions[MP1_HWIP][0] == IP_VERSION(12, 0, 1) || 305 - adev->ip_versions[MP1_HWIP][0] == IP_VERSION(10, 0, 0) || 306 - adev->ip_versions[MP1_HWIP][0] == IP_VERSION(10, 0, 1)) 304 + adev->ip_versions[MP1_HWIP][0] == IP_VERSION(12, 0, 1)) 307 305 return 10000; 306 + if (adev->ip_versions[MP1_HWIP][0] == IP_VERSION(10, 0, 0) || 307 + adev->ip_versions[MP1_HWIP][0] == IP_VERSION(10, 0, 1)) 308 + return reference_clock / 4; 308 309 309 310 return reference_clock; 310 311 }
+5 -1
drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c
··· 129 129 if (adev->vcn.harvest_config & (1 << i)) 130 130 continue; 131 131 132 - atomic_set(&adev->vcn.inst[i].sched_score, 0); 132 + /* Init instance 0 sched_score to 1, so it's scheduled after other instances */ 133 + if (i == 0) 134 + atomic_set(&adev->vcn.inst[i].sched_score, 1); 135 + else 136 + atomic_set(&adev->vcn.inst[i].sched_score, 0); 133 137 134 138 /* VCN UNIFIED TRAP */ 135 139 r = amdgpu_irq_add_id(adev, amdgpu_ih_clientid_vcns[i],
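The vcn_v4_0.c hunk seeds instance 0's `sched_score` with 1 instead of 0 so the DRM scheduler, which prefers the entity with the lowest score, schedules jobs on the other VCN instances first. An illustrative model of that selection (hypothetical helper, not the scheduler's actual code):

```c
/*
 * Sketch of lowest-score instance selection, assuming the scheduler
 * picks the first instance with the minimal score. Starting instance 0
 * at score 1 (as in the hunk above) deprioritises it until the other
 * instances have accumulated at least as much work.
 */
static int pick_instance(const int *score, int n)
{
	int i, best = 0;

	for (i = 1; i < n; i++)
		if (score[i] < score[best])
			best = i;
	return best;
}
```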
+9 -2
drivers/gpu/drm/amd/amdgpu/vi.c
··· 542 542 u32 reference_clock = adev->clock.spll.reference_freq; 543 543 u32 tmp; 544 544 545 - if (adev->flags & AMD_IS_APU) 546 - return reference_clock; 545 + if (adev->flags & AMD_IS_APU) { 546 + switch (adev->asic_type) { 547 + case CHIP_STONEY: 548 + /* vbios says 48Mhz, but the actual freq is 100Mhz */ 549 + return 10000; 550 + default: 551 + return reference_clock; 552 + } 553 + } 547 554 548 555 tmp = RREG32_SMC(ixCG_CLKPIN_CNTL_2); 549 556 if (REG_GET_FIELD(tmp, CG_CLKPIN_CNTL_2, MUX_TCLK_TO_XCLK))
+13 -5
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
··· 7196 7196 drm_add_modes_noedid(connector, 1920, 1080); 7197 7197 } else { 7198 7198 amdgpu_dm_connector_ddc_get_modes(connector, edid); 7199 - amdgpu_dm_connector_add_common_modes(encoder, connector); 7199 + /* most eDP supports only timings from its edid, 7200 + * usually only detailed timings are available 7201 + * from eDP edid. timings which are not from edid 7202 + * may damage eDP 7203 + */ 7204 + if (connector->connector_type != DRM_MODE_CONNECTOR_eDP) 7205 + amdgpu_dm_connector_add_common_modes(encoder, connector); 7200 7206 amdgpu_dm_connector_add_freesync_modes(connector, edid); 7201 7207 } 7202 7208 amdgpu_dm_fbc_init(connector); ··· 8204 8198 if (acrtc_state->abm_level != dm_old_crtc_state->abm_level) 8205 8199 bundle->stream_update.abm_level = &acrtc_state->abm_level; 8206 8200 8201 + mutex_lock(&dm->dc_lock); 8202 + if ((acrtc_state->update_type > UPDATE_TYPE_FAST) && 8203 + acrtc_state->stream->link->psr_settings.psr_allow_active) 8204 + amdgpu_dm_psr_disable(acrtc_state->stream); 8205 + mutex_unlock(&dm->dc_lock); 8206 + 8207 8207 /* 8208 8208 * If FreeSync state on the stream has changed then we need to 8209 8209 * re-adjust the min/max bounds now that DC doesn't handle this ··· 8223 8211 spin_unlock_irqrestore(&pcrtc->dev->event_lock, flags); 8224 8212 } 8225 8213 mutex_lock(&dm->dc_lock); 8226 - if ((acrtc_state->update_type > UPDATE_TYPE_FAST) && 8227 - acrtc_state->stream->link->psr_settings.psr_allow_active) 8228 - amdgpu_dm_psr_disable(acrtc_state->stream); 8229 - 8230 8214 update_planes_and_stream_adapter(dm->dc, 8231 8215 acrtc_state->update_type, 8232 8216 planes_count,
+35 -1
drivers/gpu/drm/amd/display/dc/core/dc.c
··· 1981 1981 return result; 1982 1982 } 1983 1983 1984 + static bool commit_minimal_transition_state(struct dc *dc, 1985 + struct dc_state *transition_base_context); 1986 + 1984 1987 /** 1985 1988 * dc_commit_streams - Commit current stream state 1986 1989 * ··· 2005 2002 struct dc_state *context; 2006 2003 enum dc_status res = DC_OK; 2007 2004 struct dc_validation_set set[MAX_STREAMS] = {0}; 2005 + struct pipe_ctx *pipe; 2006 + bool handle_exit_odm2to1 = false; 2008 2007 2009 2008 if (dc->ctx->dce_environment == DCE_ENV_VIRTUAL_HW) 2010 2009 return res; ··· 2030 2025 set[i].plane_states[j] = status->plane_states[j]; 2031 2026 } 2032 2027 } 2028 + 2029 + /* Check for case where we are going from odm 2:1 to max 2030 + * pipe scenario. For these cases, we will call 2031 + * commit_minimal_transition_state() to exit out of odm 2:1 2032 + * first before processing new streams 2033 + */ 2034 + if (stream_count == dc->res_pool->pipe_count) { 2035 + for (i = 0; i < dc->res_pool->pipe_count; i++) { 2036 + pipe = &dc->current_state->res_ctx.pipe_ctx[i]; 2037 + if (pipe->next_odm_pipe) 2038 + handle_exit_odm2to1 = true; 2039 + } 2040 + } 2041 + 2042 + if (handle_exit_odm2to1) 2043 + res = commit_minimal_transition_state(dc, dc->current_state); 2033 2044 2034 2045 context = dc_create_state(dc); 2035 2046 if (!context) ··· 3893 3872 unsigned int i, j; 3894 3873 unsigned int pipe_in_use = 0; 3895 3874 bool subvp_in_use = false; 3875 + bool odm_in_use = false; 3896 3876 3897 3877 if (!transition_context) 3898 3878 return false; ··· 3922 3900 } 3923 3901 } 3924 3902 3903 + /* If ODM is enabled and we are adding or removing planes from any ODM 3904 + * pipe, we must use the minimal transition. 
3905 + */ 3906 + for (i = 0; i < dc->res_pool->pipe_count; i++) { 3907 + struct pipe_ctx *pipe = &dc->current_state->res_ctx.pipe_ctx[i]; 3908 + 3909 + if (pipe->stream && pipe->next_odm_pipe) { 3910 + odm_in_use = true; 3911 + break; 3912 + } 3913 + } 3914 + 3925 3915 /* When the OS add a new surface if we have been used all of pipes with odm combine 3926 3916 * and mpc split feature, it need use commit_minimal_transition_state to transition safely. 3927 3917 * After OS exit MPO, it will back to use odm and mpc split with all of pipes, we need ··· 3942 3908 * Reduce the scenarios to use dc_commit_state_no_check in the stage of flip. Especially 3943 3909 * enter/exit MPO when DCN still have enough resources. 3944 3910 */ 3945 - if (pipe_in_use != dc->res_pool->pipe_count && !subvp_in_use) { 3911 + if (pipe_in_use != dc->res_pool->pipe_count && !subvp_in_use && !odm_in_use) { 3946 3912 dc_release_state(transition_context); 3947 3913 return true; 3948 3914 }
+20
drivers/gpu/drm/amd/display/dc/core/dc_resource.c
··· 1446 1446 1447 1447 split_pipe->stream = stream; 1448 1448 return i; 1449 + } else if (split_pipe->prev_odm_pipe && 1450 + split_pipe->prev_odm_pipe->plane_state == split_pipe->plane_state) { 1451 + split_pipe->prev_odm_pipe->next_odm_pipe = split_pipe->next_odm_pipe; 1452 + if (split_pipe->next_odm_pipe) 1453 + split_pipe->next_odm_pipe->prev_odm_pipe = split_pipe->prev_odm_pipe; 1454 + 1455 + if (split_pipe->prev_odm_pipe->plane_state) 1456 + resource_build_scaling_params(split_pipe->prev_odm_pipe); 1457 + 1458 + memset(split_pipe, 0, sizeof(*split_pipe)); 1459 + split_pipe->stream_res.tg = pool->timing_generators[i]; 1460 + split_pipe->plane_res.hubp = pool->hubps[i]; 1461 + split_pipe->plane_res.ipp = pool->ipps[i]; 1462 + split_pipe->plane_res.dpp = pool->dpps[i]; 1463 + split_pipe->stream_res.opp = pool->opps[i]; 1464 + split_pipe->plane_res.mpcc_inst = pool->dpps[i]->inst; 1465 + split_pipe->pipe_idx = i; 1466 + 1467 + split_pipe->stream = stream; 1468 + return i; 1449 1469 } 1450 1470 } 1451 1471 return -1;
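The dc_resource.c addition reclaims a secondary ODM pipe by unlinking it from the doubly-linked ODM chain before resetting it: the previous pipe's `next_odm_pipe` takes over the unlinked pipe's successor, and the successor's back-pointer is repaired. The list surgery in isolation (a minimal stand-in struct, not DC's `pipe_ctx`):

```c
#include <stddef.h>

/* Minimal model of a pipe in the ODM chain. */
struct pipe {
	struct pipe *prev_odm_pipe;
	struct pipe *next_odm_pipe;
};

/*
 * Unlink p from the chain, as the hunk above does before memset()ing
 * the pipe and reassigning its hardware resources: fix the neighbours'
 * pointers first, then detach p.
 */
static void odm_unlink(struct pipe *p)
{
	if (p->prev_odm_pipe)
		p->prev_odm_pipe->next_odm_pipe = p->next_odm_pipe;
	if (p->next_odm_pipe)
		p->next_odm_pipe->prev_odm_pipe = p->prev_odm_pipe;
	p->prev_odm_pipe = p->next_odm_pipe = NULL;
}
```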
+1 -1
drivers/gpu/drm/amd/display/dc/dml/dcn32/dcn32_fpu.c
··· 138 138 .urgent_out_of_order_return_per_channel_pixel_only_bytes = 4096, 139 139 .urgent_out_of_order_return_per_channel_pixel_and_vm_bytes = 4096, 140 140 .urgent_out_of_order_return_per_channel_vm_only_bytes = 4096, 141 - .pct_ideal_sdp_bw_after_urgent = 100.0, 141 + .pct_ideal_sdp_bw_after_urgent = 90.0, 142 142 .pct_ideal_fabric_bw_after_urgent = 67.0, 143 143 .pct_ideal_dram_sdp_bw_after_urgent_pixel_only = 20.0, 144 144 .pct_ideal_dram_sdp_bw_after_urgent_pixel_and_vm = 60.0, // N/A, for now keep as is until DML implemented
+74 -18
drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
··· 2067 2067 return ret; 2068 2068 } 2069 2069 2070 + static void sienna_cichlid_get_override_pcie_settings(struct smu_context *smu, 2071 + uint32_t *gen_speed_override, 2072 + uint32_t *lane_width_override) 2073 + { 2074 + struct amdgpu_device *adev = smu->adev; 2075 + 2076 + *gen_speed_override = 0xff; 2077 + *lane_width_override = 0xff; 2078 + 2079 + switch (adev->pdev->device) { 2080 + case 0x73A0: 2081 + case 0x73A1: 2082 + case 0x73A2: 2083 + case 0x73A3: 2084 + case 0x73AB: 2085 + case 0x73AE: 2086 + /* Bit 7:0: PCIE lane width, 1 to 7 corresponds is x1 to x32 */ 2087 + *lane_width_override = 6; 2088 + break; 2089 + case 0x73E0: 2090 + case 0x73E1: 2091 + case 0x73E3: 2092 + *lane_width_override = 4; 2093 + break; 2094 + case 0x7420: 2095 + case 0x7421: 2096 + case 0x7422: 2097 + case 0x7423: 2098 + case 0x7424: 2099 + *lane_width_override = 3; 2100 + break; 2101 + default: 2102 + break; 2103 + } 2104 + } 2105 + 2106 + #define MAX(a, b) ((a) > (b) ? (a) : (b)) 2107 + 2070 2108 static int sienna_cichlid_update_pcie_parameters(struct smu_context *smu, 2071 2109 uint32_t pcie_gen_cap, 2072 2110 uint32_t pcie_width_cap) 2073 2111 { 2074 2112 struct smu_11_0_dpm_context *dpm_context = smu->smu_dpm.dpm_context; 2075 - 2076 - uint32_t smu_pcie_arg; 2113 + struct smu_11_0_pcie_table *pcie_table = &dpm_context->dpm_tables.pcie_table; 2114 + uint32_t gen_speed_override, lane_width_override; 2077 2115 uint8_t *table_member1, *table_member2; 2116 + uint32_t min_gen_speed, max_gen_speed; 2117 + uint32_t min_lane_width, max_lane_width; 2118 + uint32_t smu_pcie_arg; 2078 2119 int ret, i; 2079 2120 2080 2121 GET_PPTABLE_MEMBER(PcieGenSpeed, &table_member1); 2081 2122 GET_PPTABLE_MEMBER(PcieLaneCount, &table_member2); 2082 2123 2083 - /* lclk dpm table setup */ 2084 - for (i = 0; i < MAX_PCIE_CONF; i++) { 2085 - dpm_context->dpm_tables.pcie_table.pcie_gen[i] = table_member1[i]; 2086 - dpm_context->dpm_tables.pcie_table.pcie_lane[i] = table_member2[i]; 2124 + 
sienna_cichlid_get_override_pcie_settings(smu, 2125 + &gen_speed_override, 2126 + &lane_width_override); 2127 + 2128 + /* PCIE gen speed override */ 2129 + if (gen_speed_override != 0xff) { 2130 + min_gen_speed = MIN(pcie_gen_cap, gen_speed_override); 2131 + max_gen_speed = MIN(pcie_gen_cap, gen_speed_override); 2132 + } else { 2133 + min_gen_speed = MAX(0, table_member1[0]); 2134 + max_gen_speed = MIN(pcie_gen_cap, table_member1[1]); 2135 + min_gen_speed = min_gen_speed > max_gen_speed ? 2136 + max_gen_speed : min_gen_speed; 2087 2137 } 2138 + pcie_table->pcie_gen[0] = min_gen_speed; 2139 + pcie_table->pcie_gen[1] = max_gen_speed; 2140 + 2141 + /* PCIE lane width override */ 2142 + if (lane_width_override != 0xff) { 2143 + min_lane_width = MIN(pcie_width_cap, lane_width_override); 2144 + max_lane_width = MIN(pcie_width_cap, lane_width_override); 2145 + } else { 2146 + min_lane_width = MAX(1, table_member2[0]); 2147 + max_lane_width = MIN(pcie_width_cap, table_member2[1]); 2148 + min_lane_width = min_lane_width > max_lane_width ? 2149 + max_lane_width : min_lane_width; 2150 + } 2151 + pcie_table->pcie_lane[0] = min_lane_width; 2152 + pcie_table->pcie_lane[1] = max_lane_width; 2088 2153 2089 2154 for (i = 0; i < NUM_LINK_LEVELS; i++) { 2090 - smu_pcie_arg = (i << 16) | 2091 - ((table_member1[i] <= pcie_gen_cap) ? 2092 - (table_member1[i] << 8) : 2093 - (pcie_gen_cap << 8)) | 2094 - ((table_member2[i] <= pcie_width_cap) ? 
2095 - table_member2[i] : 2096 - pcie_width_cap); 2155 + smu_pcie_arg = (i << 16 | 2156 + pcie_table->pcie_gen[i] << 8 | 2157 + pcie_table->pcie_lane[i]); 2097 2158 2098 2159 ret = smu_cmn_send_smc_msg_with_param(smu, 2099 2160 SMU_MSG_OverridePcieParameters, ··· 2162 2101 NULL); 2163 2102 if (ret) 2164 2103 return ret; 2165 - 2166 - if (table_member1[i] > pcie_gen_cap) 2167 - dpm_context->dpm_tables.pcie_table.pcie_gen[i] = pcie_gen_cap; 2168 - if (table_member2[i] > pcie_width_cap) 2169 - dpm_context->dpm_tables.pcie_table.pcie_lane[i] = pcie_width_cap; 2170 2104 } 2171 2105 2172 2106 return 0;
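The sienna_cichlid hunk above replaces the per-level cap check with explicit min/max clamping: a device-specific override of `0xff` means "no override", otherwise both DPM levels are pinned to `min(cap, override)`; without an override, the pptable range is clamped to the platform cap. The clamp logic as a standalone sketch (hypothetical helper name, same semantics):

```c
#include <stdint.h>

#define MIN(a, b) ((a) < (b) ? (a) : (b))

/*
 * Illustrative model of the gen-speed / lane-width clamping in
 * sienna_cichlid_update_pcie_parameters(): override == 0xff selects the
 * pptable range clamped to the cap; any other override pins both the
 * min and max level to min(cap, override).
 */
static void clamp_link_level(uint32_t cap, uint32_t override,
			     uint32_t tbl_min, uint32_t tbl_max,
			     uint32_t *out_min, uint32_t *out_max)
{
	if (override != 0xff) {
		*out_min = *out_max = MIN(cap, override);
	} else {
		*out_min = tbl_min;
		*out_max = MIN(cap, tbl_max);
		/* never report an inverted range */
		if (*out_min > *out_max)
			*out_min = *out_max;
	}
}
```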
+2 -2
drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
··· 573 573 if (smu_power->power_context || smu_power->power_context_size != 0) 574 574 return -EINVAL; 575 575 576 - smu_power->power_context = kzalloc(sizeof(struct smu_13_0_dpm_context), 576 + smu_power->power_context = kzalloc(sizeof(struct smu_13_0_power_context), 577 577 GFP_KERNEL); 578 578 if (!smu_power->power_context) 579 579 return -ENOMEM; 580 - smu_power->power_context_size = sizeof(struct smu_13_0_dpm_context); 580 + smu_power->power_context_size = sizeof(struct smu_13_0_power_context); 581 581 582 582 return 0; 583 583 }
+31 -2
drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
··· 1696 1696 } 1697 1697 } 1698 1698 1699 - /* conv PP_SMC_POWER_PROFILE* to WORKLOAD_PPLIB_*_BIT */ 1700 - workload_type = smu_cmn_to_asic_specific_index(smu, 1699 + if (smu->power_profile_mode == PP_SMC_POWER_PROFILE_COMPUTE && 1700 + (((smu->adev->pdev->device == 0x744C) && (smu->adev->pdev->revision == 0xC8)) || 1701 + ((smu->adev->pdev->device == 0x744C) && (smu->adev->pdev->revision == 0xCC)))) { 1702 + ret = smu_cmn_update_table(smu, 1703 + SMU_TABLE_ACTIVITY_MONITOR_COEFF, 1704 + WORKLOAD_PPLIB_COMPUTE_BIT, 1705 + (void *)(&activity_monitor_external), 1706 + false); 1707 + if (ret) { 1708 + dev_err(smu->adev->dev, "[%s] Failed to get activity monitor!", __func__); 1709 + return ret; 1710 + } 1711 + 1712 + ret = smu_cmn_update_table(smu, 1713 + SMU_TABLE_ACTIVITY_MONITOR_COEFF, 1714 + WORKLOAD_PPLIB_CUSTOM_BIT, 1715 + (void *)(&activity_monitor_external), 1716 + true); 1717 + if (ret) { 1718 + dev_err(smu->adev->dev, "[%s] Failed to set activity monitor!", __func__); 1719 + return ret; 1720 + } 1721 + 1722 + workload_type = smu_cmn_to_asic_specific_index(smu, 1723 + CMN2ASIC_MAPPING_WORKLOAD, 1724 + PP_SMC_POWER_PROFILE_CUSTOM); 1725 + } else { 1726 + /* conv PP_SMC_POWER_PROFILE* to WORKLOAD_PPLIB_*_BIT */ 1727 + workload_type = smu_cmn_to_asic_specific_index(smu, 1701 1728 CMN2ASIC_MAPPING_WORKLOAD, 1702 1729 smu->power_profile_mode); 1730 + } 1731 + 1703 1732 if (workload_type < 0) 1704 1733 return -EINVAL; 1705 1734
+18 -39
drivers/gpu/drm/ast/ast_dp.c
··· 119 119 /* 120 120 * Launch Aspeed DP 121 121 */ 122 - void ast_dp_launch(struct drm_device *dev, u8 bPower) 122 + void ast_dp_launch(struct drm_device *dev) 123 123 { 124 - u32 i = 0, j = 0, WaitCount = 1; 125 - u8 bDPTX = 0; 124 + u32 i = 0; 126 125 u8 bDPExecute = 1; 127 - 128 126 struct ast_device *ast = to_ast_device(dev); 129 - // S3 come back, need more time to wait BMC ready. 130 - if (bPower) 131 - WaitCount = 300; 132 127 133 - 134 - // Wait total count by different condition. 135 - for (j = 0; j < WaitCount; j++) { 136 - bDPTX = ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xD1, TX_TYPE_MASK); 137 - 138 - if (bDPTX) 139 - break; 140 - 128 + // Wait one second then timeout. 129 + while (ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xD1, ASTDP_MCU_FW_EXECUTING) != 130 + ASTDP_MCU_FW_EXECUTING) { 131 + i++; 132 + // wait 100 ms 141 133 msleep(100); 142 - } 143 134 144 - // 0xE : ASTDP with DPMCU FW handling 145 - if (bDPTX == ASTDP_DPMCU_TX) { 146 - // Wait one second then timeout. 147 - i = 0; 148 - 149 - while (ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xD1, COPROCESSOR_LAUNCH) != 150 - COPROCESSOR_LAUNCH) { 151 - i++; 152 - // wait 100 ms 153 - msleep(100); 154 - 155 - if (i >= 10) { 156 - // DP would not be ready. 157 - bDPExecute = 0; 158 - break; 159 - } 135 + if (i >= 10) { 136 + // DP would not be ready. 137 + bDPExecute = 0; 138 + break; 160 139 } 161 - 162 - if (bDPExecute) 163 - ast->tx_chip_types |= BIT(AST_TX_ASTDP); 164 - 165 - ast_set_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xE5, 166 - (u8) ~ASTDP_HOST_EDID_READ_DONE_MASK, 167 - ASTDP_HOST_EDID_READ_DONE); 168 140 } 141 + 142 + if (!bDPExecute) 143 + drm_err(dev, "Wait DPMCU executing timeout\n"); 144 + 145 + ast_set_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xE5, 146 + (u8) ~ASTDP_HOST_EDID_READ_DONE_MASK, 147 + ASTDP_HOST_EDID_READ_DONE); 169 148 } 170 149 171 150
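The rewritten `ast_dp_launch()` collapses the nested wait loops into one bounded poll: re-check a readiness register every 100 ms, give up after ten tries, and report a timeout instead of hanging. The pattern in the abstract (callback names and the countdown check are invented for illustration; the driver reads a scratch register and calls `msleep()`):

```c
#include <stdbool.h>

/*
 * Bounded-polling sketch: retry a readiness check up to max_tries
 * times with a delay between attempts, returning false on timeout.
 */
static bool poll_ready(bool (*check)(void *), void *ctx,
		       void (*delay_ms)(unsigned), unsigned max_tries)
{
	unsigned i;

	for (i = 0; i < max_tries; i++) {
		if (check(ctx))
			return true;
		delay_ms(100);
	}
	return false;
}

/* Sample check for demonstration: ctx counts down to readiness. */
static bool countdown_ready(void *ctx)
{
	int *remaining = ctx;

	return --(*remaining) <= 0;
}

/* No-op delay so the demo doesn't actually sleep. */
static void no_delay(unsigned ms)
{
	(void)ms;
}
```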
+1 -4
drivers/gpu/drm/ast/ast_drv.h
··· 350 350 #define AST_DP501_LINKRATE 0xf014 351 351 #define AST_DP501_EDID_DATA 0xf020 352 352 353 - /* Define for Soc scratched reg */ 354 - #define COPROCESSOR_LAUNCH BIT(5) 355 - 356 353 /* 357 354 * Display Transmitter Type: 358 355 */ ··· 477 480 478 481 /* aspeed DP */ 479 482 int ast_astdp_read_edid(struct drm_device *dev, u8 *ediddata); 480 - void ast_dp_launch(struct drm_device *dev, u8 bPower); 483 + void ast_dp_launch(struct drm_device *dev); 481 484 void ast_dp_power_on_off(struct drm_device *dev, bool no); 482 485 void ast_dp_set_on_off(struct drm_device *dev, bool no); 483 486 void ast_dp_set_mode(struct drm_crtc *crtc, struct ast_vbios_mode_info *vbios_mode);
+9 -2
drivers/gpu/drm/ast/ast_main.c
··· 254 254 case 0x0c: 255 255 ast->tx_chip_types = AST_TX_DP501_BIT; 256 256 } 257 - } else if (ast->chip == AST2600) 258 - ast_dp_launch(&ast->base, 0); 257 + } else if (ast->chip == AST2600) { 258 + if (ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xD1, TX_TYPE_MASK) == 259 + ASTDP_DPMCU_TX) { 260 + ast->tx_chip_types = AST_TX_ASTDP_BIT; 261 + ast_dp_launch(&ast->base); 262 + } 263 + } 259 264 260 265 /* Print stuff for diagnostic purposes */ 261 266 if (ast->tx_chip_types & AST_TX_NONE_BIT) ··· 269 264 drm_info(dev, "Using Sil164 TMDS transmitter\n"); 270 265 if (ast->tx_chip_types & AST_TX_DP501_BIT) 271 266 drm_info(dev, "Using DP501 DisplayPort transmitter\n"); 267 + if (ast->tx_chip_types & AST_TX_ASTDP_BIT) 268 + drm_info(dev, "Using ASPEED DisplayPort transmitter\n"); 272 269 273 270 return 0; 274 271 }
+13 -2
drivers/gpu/drm/ast/ast_mode.c
··· 1647 1647 static int ast_astdp_connector_helper_get_modes(struct drm_connector *connector) 1648 1648 { 1649 1649 void *edid; 1650 + struct drm_device *dev = connector->dev; 1651 + struct ast_device *ast = to_ast_device(dev); 1650 1652 1651 1653 int succ; 1652 1654 int count; ··· 1657 1655 if (!edid) 1658 1656 goto err_drm_connector_update_edid_property; 1659 1657 1658 + /* 1659 + * Protect access to I/O registers from concurrent modesetting 1660 + * by acquiring the I/O-register lock. 1661 + */ 1662 + mutex_lock(&ast->ioregs_lock); 1663 + 1660 1664 succ = ast_astdp_read_edid(connector->dev, edid); 1661 1665 if (succ < 0) 1662 - goto err_kfree; 1666 + goto err_mutex_unlock; 1667 + 1668 + mutex_unlock(&ast->ioregs_lock); 1663 1669 1664 1670 drm_connector_update_edid_property(connector, edid); 1665 1671 count = drm_add_edid_modes(connector, edid); ··· 1675 1665 1676 1666 return count; 1677 1667 1678 - err_kfree: 1668 + err_mutex_unlock: 1669 + mutex_unlock(&ast->ioregs_lock); 1679 1670 kfree(edid); 1680 1671 err_drm_connector_update_edid_property: 1681 1672 drm_connector_update_edid_property(connector, NULL);
+2 -1
drivers/gpu/drm/ast/ast_post.c
··· 380 380 ast_set_def_ext_reg(dev); 381 381 382 382 if (ast->chip == AST2600) { 383 - ast_dp_launch(dev, 1); 383 + if (ast->tx_chip_types & AST_TX_ASTDP_BIT) 384 + ast_dp_launch(dev); 384 385 } else if (ast->config_mode == ast_use_p2a) { 385 386 if (ast->chip == AST2500) 386 387 ast_post_chip_2500(dev);
+4
drivers/gpu/drm/bridge/ti-sn65dsi86.c
··· 298 298 if (refclk_lut[i] == refclk_rate) 299 299 break; 300 300 301 + /* avoid buffer overflow and "1" is the default rate in the datasheet. */ 302 + if (i >= refclk_lut_size) 303 + i = 1; 304 + 301 305 regmap_update_bits(pdata->regmap, SN_DPPLL_SRC_REG, REFCLK_FREQ_MASK, 302 306 REFCLK_FREQ(i)); 303 307
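The ti-sn65dsi86.c fix guards the lookup-table scan: if no entry matches the requested refclk rate, the loop index walks off the end of the table, so the patch falls back to index 1 (the datasheet's default rate) instead of indexing out of bounds. The search-with-fallback pattern in isolation (illustrative function, not the driver's):

```c
/*
 * LUT lookup mirroring the hunk above: scan for an exact match, and if
 * the scan falls off the end, return the safe default index rather
 * than an out-of-range one.
 */
static unsigned refclk_index(const unsigned long *lut, unsigned n,
			     unsigned long rate)
{
	unsigned i;

	for (i = 0; i < n; i++)
		if (lut[i] == rate)
			break;

	/* avoid buffer overflow; 1 is the datasheet default rate */
	if (i >= n)
		i = 1;
	return i;
}
```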
+7 -5
drivers/gpu/drm/drm_fb_helper.c
··· 1545 1545 } 1546 1546 } 1547 1547 1548 - static void __fill_var(struct fb_var_screeninfo *var, 1548 + static void __fill_var(struct fb_var_screeninfo *var, struct fb_info *info, 1549 1549 struct drm_framebuffer *fb) 1550 1550 { 1551 1551 int i; 1552 1552 1553 1553 var->xres_virtual = fb->width; 1554 1554 var->yres_virtual = fb->height; 1555 - var->accel_flags = FB_ACCELF_TEXT; 1555 + var->accel_flags = 0; 1556 1556 var->bits_per_pixel = drm_format_info_bpp(fb->format, 0); 1557 1557 1558 - var->height = var->width = 0; 1558 + var->height = info->var.height; 1559 + var->width = info->var.width; 1560 + 1559 1561 var->left_margin = var->right_margin = 0; 1560 1562 var->upper_margin = var->lower_margin = 0; 1561 1563 var->hsync_len = var->vsync_len = 0; ··· 1620 1618 return -EINVAL; 1621 1619 } 1622 1620 1623 - __fill_var(var, fb); 1621 + __fill_var(var, info, fb); 1624 1622 1625 1623 /* 1626 1624 * fb_pan_display() validates this, but fb_set_par() doesn't and just ··· 2076 2074 info->pseudo_palette = fb_helper->pseudo_palette; 2077 2075 info->var.xoffset = 0; 2078 2076 info->var.yoffset = 0; 2079 - __fill_var(&info->var, fb); 2077 + __fill_var(&info->var, info, fb); 2080 2078 info->var.activate = FB_ACTIVATE_NOW; 2081 2079 2082 2080 drm_fb_helper_fill_pixel_fmt(&info->var, format);
+1 -1
drivers/gpu/drm/exynos/exynos_drm_g2d.c
··· 1335 1335 /* Let the runqueue know that there is work to do. */ 1336 1336 queue_work(g2d->g2d_workq, &g2d->runqueue_work); 1337 1337 1338 - if (runqueue_node->async) 1338 + if (req->async) 1339 1339 goto out; 1340 1340 1341 1341 wait_for_completion(&runqueue_node->complete);
-2
drivers/gpu/drm/exynos/exynos_drm_vidi.c
··· 469 469 if (ctx->raw_edid != (struct edid *)fake_edid_info) { 470 470 kfree(ctx->raw_edid); 471 471 ctx->raw_edid = NULL; 472 - 473 - return -EINVAL; 474 472 } 475 473 476 474 component_del(&pdev->dev, &vidi_component_ops);
+26 -4
drivers/gpu/drm/i915/display/intel_cdclk.c
··· 1453 1453 return 0; 1454 1454 } 1455 1455 1456 + static u8 rplu_calc_voltage_level(int cdclk) 1457 + { 1458 + if (cdclk > 556800) 1459 + return 3; 1460 + else if (cdclk > 480000) 1461 + return 2; 1462 + else if (cdclk > 312000) 1463 + return 1; 1464 + else 1465 + return 0; 1466 + } 1467 + 1456 1468 static void icl_readout_refclk(struct drm_i915_private *dev_priv, 1457 1469 struct intel_cdclk_config *cdclk_config) 1458 1470 { ··· 3254 3242 .calc_voltage_level = tgl_calc_voltage_level, 3255 3243 }; 3256 3244 3245 + static const struct intel_cdclk_funcs rplu_cdclk_funcs = { 3246 + .get_cdclk = bxt_get_cdclk, 3247 + .set_cdclk = bxt_set_cdclk, 3248 + .modeset_calc_cdclk = bxt_modeset_calc_cdclk, 3249 + .calc_voltage_level = rplu_calc_voltage_level, 3250 + }; 3251 + 3257 3252 static const struct intel_cdclk_funcs tgl_cdclk_funcs = { 3258 3253 .get_cdclk = bxt_get_cdclk, 3259 3254 .set_cdclk = bxt_set_cdclk, ··· 3403 3384 dev_priv->display.funcs.cdclk = &tgl_cdclk_funcs; 3404 3385 dev_priv->display.cdclk.table = dg2_cdclk_table; 3405 3386 } else if (IS_ALDERLAKE_P(dev_priv)) { 3406 - dev_priv->display.funcs.cdclk = &tgl_cdclk_funcs; 3407 3387 /* Wa_22011320316:adl-p[a0] */ 3408 - if (IS_ADLP_DISPLAY_STEP(dev_priv, STEP_A0, STEP_B0)) 3388 + if (IS_ADLP_DISPLAY_STEP(dev_priv, STEP_A0, STEP_B0)) { 3409 3389 dev_priv->display.cdclk.table = adlp_a_step_cdclk_table; 3410 - else if (IS_ADLP_RPLU(dev_priv)) 3390 + dev_priv->display.funcs.cdclk = &tgl_cdclk_funcs; 3391 + } else if (IS_ADLP_RPLU(dev_priv)) { 3411 3392 dev_priv->display.cdclk.table = rplu_cdclk_table; 3412 - else 3393 + dev_priv->display.funcs.cdclk = &rplu_cdclk_funcs; 3394 + } else { 3413 3395 dev_priv->display.cdclk.table = adlp_cdclk_table; 3396 + dev_priv->display.funcs.cdclk = &tgl_cdclk_funcs; 3397 + } 3414 3398 } else if (IS_ROCKETLAKE(dev_priv)) { 3415 3399 dev_priv->display.funcs.cdclk = &tgl_cdclk_funcs; 3416 3400 dev_priv->display.cdclk.table = rkl_cdclk_table;
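The new `rplu_calc_voltage_level()` above is a plain threshold ladder: each cdclk breakpoint maps to the next voltage level. Reproduced standalone so the breakpoints can be checked directly (same logic as the hunk, renamed for illustration):

```c
/*
 * Threshold ladder from the RPL-U cdclk hunk: cdclk in kHz maps to a
 * voltage level of 0-3 at the 312000 / 480000 / 556800 breakpoints.
 */
static unsigned char rplu_voltage_level(int cdclk)
{
	if (cdclk > 556800)
		return 3;
	else if (cdclk > 480000)
		return 2;
	else if (cdclk > 312000)
		return 1;
	else
		return 0;
}
```

Note the comparisons are strict: a cdclk sitting exactly on a breakpoint takes the lower level, which is why RPL-U needs its own `intel_cdclk_funcs` table rather than reusing `tgl_calc_voltage_level`.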
+1 -1
drivers/gpu/drm/i915/display/intel_dp_aux.c
··· 129 129 130 130 static int intel_dp_aux_fw_sync_len(void) 131 131 { 132 - int precharge = 16; /* 10-16 */ 132 + int precharge = 10; /* 10-16 */ 133 133 int preamble = 8; 134 134 135 135 return precharge + preamble;
+10 -4
drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c
··· 346 346 continue; 347 347 348 348 ce = intel_context_create(data[m].ce[0]->engine); 349 - if (IS_ERR(ce)) 349 + if (IS_ERR(ce)) { 350 + err = PTR_ERR(ce); 350 351 goto out; 352 + } 351 353 352 354 err = intel_context_pin(ce); 353 355 if (err) { ··· 369 367 370 368 worker = kthread_create_worker(0, "igt/parallel:%s", 371 369 data[n].ce[0]->engine->name); 372 - if (IS_ERR(worker)) 370 + if (IS_ERR(worker)) { 371 + err = PTR_ERR(worker); 373 372 goto out; 373 + } 374 374 375 375 data[n].worker = worker; 376 376 } ··· 401 397 } 402 398 } 403 399 404 - if (igt_live_test_end(&t)) 405 - err = -EIO; 400 + if (igt_live_test_end(&t)) { 401 + err = err ?: -EIO; 402 + break; 403 + } 406 404 } 407 405 408 406 out:
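The selftest hunk above switches to the kernel's `err = err ?: -EIO` idiom (a GNU C shorthand for `err ? err : -EIO`) so a generic live-test failure does not overwrite an earlier, more specific error code. The idiom in standard C, as a hypothetical helper:

```c
/*
 * Keep the first (most specific) error: return err if it is already
 * set, otherwise fall back to the generic code. Equivalent to the
 * kernel's `err = err ?: fallback`.
 */
static int first_error(int err, int fallback)
{
	return err ? err : fallback;
}
```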
+8 -4
drivers/gpu/drm/i915/gt/selftest_execlists.c
··· 1530 1530 struct drm_i915_gem_object *obj; 1531 1531 struct i915_vma *vma; 1532 1532 enum intel_engine_id id; 1533 - int err = -ENOMEM; 1534 1533 u32 *map; 1534 + int err; 1535 1535 1536 1536 /* 1537 1537 * Verify that even without HAS_LOGICAL_RING_PREEMPTION, we can ··· 1539 1539 */ 1540 1540 1541 1541 ctx_hi = kernel_context(gt->i915, NULL); 1542 - if (!ctx_hi) 1543 - return -ENOMEM; 1542 + if (IS_ERR(ctx_hi)) 1543 + return PTR_ERR(ctx_hi); 1544 + 1544 1545 ctx_hi->sched.priority = I915_CONTEXT_MAX_USER_PRIORITY; 1545 1546 1546 1547 ctx_lo = kernel_context(gt->i915, NULL); 1547 - if (!ctx_lo) 1548 + if (IS_ERR(ctx_lo)) { 1549 + err = PTR_ERR(ctx_lo); 1548 1550 goto err_ctx_hi; 1551 + } 1552 + 1549 1553 ctx_lo->sched.priority = I915_CONTEXT_MIN_USER_PRIORITY; 1550 1554 1551 1555 obj = i915_gem_object_create_internal(gt->i915, PAGE_SIZE);
+1 -1
drivers/gpu/drm/lima/lima_sched.c
··· 165 165 void lima_sched_context_fini(struct lima_sched_pipe *pipe, 166 166 struct lima_sched_context *context) 167 167 { 168 - drm_sched_entity_fini(&context->base); 168 + drm_sched_entity_destroy(&context->base); 169 169 } 170 170 171 171 struct dma_fence *lima_sched_context_queue_task(struct lima_sched_task *task)
-2
drivers/gpu/drm/msm/adreno/a6xx_gmu.c
··· 1526 1526 if (!pdev) 1527 1527 return -ENODEV; 1528 1528 1529 - mutex_init(&gmu->lock); 1530 - 1531 1529 gmu->dev = &pdev->dev; 1532 1530 1533 1531 of_dma_configure(gmu->dev, node, true);
+2
drivers/gpu/drm/msm/adreno/a6xx_gpu.c
··· 1981 1981 adreno_gpu = &a6xx_gpu->base; 1982 1982 gpu = &adreno_gpu->base; 1983 1983 1984 + mutex_init(&a6xx_gpu->gmu.lock); 1985 + 1984 1986 adreno_gpu->registers = NULL; 1985 1987 1986 1988 /*
+14 -1
drivers/gpu/drm/msm/dp/dp_catalog.c
··· 620 620 config & DP_DP_HPD_INT_MASK); 621 621 } 622 622 623 - void dp_catalog_ctrl_hpd_config(struct dp_catalog *dp_catalog) 623 + void dp_catalog_ctrl_hpd_enable(struct dp_catalog *dp_catalog) 624 624 { 625 625 struct dp_catalog_private *catalog = container_of(dp_catalog, 626 626 struct dp_catalog_private, dp_catalog); ··· 633 633 634 634 /* Enable HPD */ 635 635 dp_write_aux(catalog, REG_DP_DP_HPD_CTRL, DP_DP_HPD_CTRL_HPD_EN); 636 + } 637 + 638 + void dp_catalog_ctrl_hpd_disable(struct dp_catalog *dp_catalog) 639 + { 640 + struct dp_catalog_private *catalog = container_of(dp_catalog, 641 + struct dp_catalog_private, dp_catalog); 642 + 643 + u32 reftimer = dp_read_aux(catalog, REG_DP_DP_HPD_REFTIMER); 644 + 645 + reftimer &= ~DP_DP_HPD_REFTIMER_ENABLE; 646 + dp_write_aux(catalog, REG_DP_DP_HPD_REFTIMER, reftimer); 647 + 648 + dp_write_aux(catalog, REG_DP_DP_HPD_CTRL, 0); 636 649 } 637 650 638 651 static void dp_catalog_enable_sdp(struct dp_catalog_private *catalog)
+2 -1
drivers/gpu/drm/msm/dp/dp_catalog.h
··· 104 104 void dp_catalog_ctrl_enable_irq(struct dp_catalog *dp_catalog, bool enable); 105 105 void dp_catalog_hpd_config_intr(struct dp_catalog *dp_catalog, 106 106 u32 intr_mask, bool en); 107 - void dp_catalog_ctrl_hpd_config(struct dp_catalog *dp_catalog); 107 + void dp_catalog_ctrl_hpd_enable(struct dp_catalog *dp_catalog); 108 + void dp_catalog_ctrl_hpd_disable(struct dp_catalog *dp_catalog); 108 109 void dp_catalog_ctrl_config_psr(struct dp_catalog *dp_catalog); 109 110 void dp_catalog_ctrl_set_psr(struct dp_catalog *dp_catalog, bool enter); 110 111 u32 dp_catalog_link_is_connected(struct dp_catalog *dp_catalog);
+24 -53
drivers/gpu/drm/msm/dp/dp_display.c
··· 28 28 #include "dp_audio.h" 29 29 #include "dp_debug.h" 30 30 31 + static bool psr_enabled = false; 32 + module_param(psr_enabled, bool, 0); 33 + MODULE_PARM_DESC(psr_enabled, "enable PSR for eDP and DP displays"); 34 + 31 35 #define HPD_STRING_SIZE 30 32 36 33 37 enum { ··· 411 407 412 408 edid = dp->panel->edid; 413 409 414 - dp->dp_display.psr_supported = dp->panel->psr_cap.version; 410 + dp->dp_display.psr_supported = dp->panel->psr_cap.version && psr_enabled; 415 411 416 412 dp->audio_supported = drm_detect_monitor_audio(edid); 417 413 dp_panel_handle_sink_request(dp->panel); ··· 620 616 dp->hpd_state = ST_MAINLINK_READY; 621 617 } 622 618 623 - /* enable HDP irq_hpd/replug interrupt */ 624 - if (dp->dp_display.internal_hpd) 625 - dp_catalog_hpd_config_intr(dp->catalog, 626 - DP_DP_IRQ_HPD_INT_MASK | DP_DP_HPD_REPLUG_INT_MASK, 627 - true); 628 - 629 619 drm_dbg_dp(dp->drm_dev, "After, type=%d hpd_state=%d\n", 630 620 dp->dp_display.connector_type, state); 631 621 mutex_unlock(&dp->event_mutex); ··· 657 659 drm_dbg_dp(dp->drm_dev, "Before, type=%d hpd_state=%d\n", 658 660 dp->dp_display.connector_type, state); 659 661 660 - /* disable irq_hpd/replug interrupts */ 661 - if (dp->dp_display.internal_hpd) 662 - dp_catalog_hpd_config_intr(dp->catalog, 663 - DP_DP_IRQ_HPD_INT_MASK | DP_DP_HPD_REPLUG_INT_MASK, 664 - false); 665 - 666 662 /* unplugged, no more irq_hpd handle */ 667 663 dp_del_event(dp, EV_IRQ_HPD_INT); 668 664 ··· 680 688 return 0; 681 689 } 682 690 683 - /* disable HPD plug interrupts */ 684 - if (dp->dp_display.internal_hpd) 685 - dp_catalog_hpd_config_intr(dp->catalog, DP_DP_HPD_PLUG_INT_MASK, false); 686 - 687 691 /* 688 692 * We don't need separate work for disconnect as 689 693 * connect/attention interrupts are disabled ··· 694 706 695 707 /* signal the disconnect event early to ensure proper teardown */ 696 708 dp_display_handle_plugged_change(&dp->dp_display, false); 697 - 698 - /* enable HDP plug interrupt to prepare for next plugin */ 
699 - if (dp->dp_display.internal_hpd) 700 - dp_catalog_hpd_config_intr(dp->catalog, DP_DP_HPD_PLUG_INT_MASK, true); 701 709 702 710 drm_dbg_dp(dp->drm_dev, "After, type=%d hpd_state=%d\n", 703 711 dp->dp_display.connector_type, state); ··· 1067 1083 mutex_unlock(&dp_display->event_mutex); 1068 1084 } 1069 1085 1070 - static void dp_display_config_hpd(struct dp_display_private *dp) 1071 - { 1072 - 1073 - dp_display_host_init(dp); 1074 - dp_catalog_ctrl_hpd_config(dp->catalog); 1075 - 1076 - /* Enable plug and unplug interrupts only if requested */ 1077 - if (dp->dp_display.internal_hpd) 1078 - dp_catalog_hpd_config_intr(dp->catalog, 1079 - DP_DP_HPD_PLUG_INT_MASK | 1080 - DP_DP_HPD_UNPLUG_INT_MASK, 1081 - true); 1082 - 1083 - /* Enable interrupt first time 1084 - * we are leaving dp clocks on during disconnect 1085 - * and never disable interrupt 1086 - */ 1087 - enable_irq(dp->irq); 1088 - } 1089 - 1090 1086 void dp_display_set_psr(struct msm_dp *dp_display, bool enter) 1091 1087 { 1092 1088 struct dp_display_private *dp; ··· 1141 1177 1142 1178 switch (todo->event_id) { 1143 1179 case EV_HPD_INIT_SETUP: 1144 - dp_display_config_hpd(dp_priv); 1180 + dp_display_host_init(dp_priv); 1145 1181 break; 1146 1182 case EV_HPD_PLUG_INT: 1147 1183 dp_hpd_plug_handle(dp_priv, todo->data); ··· 1247 1283 dp->irq, rc); 1248 1284 return rc; 1249 1285 } 1250 - disable_irq(dp->irq); 1251 1286 1252 1287 return 0; 1253 1288 } ··· 1358 1395 /* turn on dp ctrl/phy */ 1359 1396 dp_display_host_init(dp); 1360 1397 1361 - dp_catalog_ctrl_hpd_config(dp->catalog); 1362 - 1363 - if (dp->dp_display.internal_hpd) 1364 - dp_catalog_hpd_config_intr(dp->catalog, 1365 - DP_DP_HPD_PLUG_INT_MASK | 1366 - DP_DP_HPD_UNPLUG_INT_MASK, 1367 - true); 1398 + if (dp_display->is_edp) 1399 + dp_catalog_ctrl_hpd_enable(dp->catalog); 1368 1400 1369 1401 if (dp_catalog_link_is_connected(dp->catalog)) { 1370 1402 /* ··· 1527 1569 1528 1570 if (aux_bus && dp->is_edp) { 1529 1571 dp_display_host_init(dp_priv); 
1530 - dp_catalog_ctrl_hpd_config(dp_priv->catalog); 1572 + dp_catalog_ctrl_hpd_enable(dp_priv->catalog); 1531 1573 dp_display_host_phy_init(dp_priv); 1532 - enable_irq(dp_priv->irq); 1533 1574 1534 1575 /* 1535 1576 * The code below assumes that the panel will finish probing ··· 1570 1613 1571 1614 error: 1572 1615 if (dp->is_edp) { 1573 - disable_irq(dp_priv->irq); 1574 1616 dp_display_host_phy_exit(dp_priv); 1575 1617 dp_display_host_deinit(dp_priv); 1576 1618 } ··· 1758 1802 { 1759 1803 struct msm_dp_bridge *dp_bridge = to_dp_bridge(bridge); 1760 1804 struct msm_dp *dp_display = dp_bridge->dp_display; 1805 + struct dp_display_private *dp = container_of(dp_display, struct dp_display_private, dp_display); 1806 + 1807 + mutex_lock(&dp->event_mutex); 1808 + dp_catalog_ctrl_hpd_enable(dp->catalog); 1809 + 1810 + /* enable HDP interrupts */ 1811 + dp_catalog_hpd_config_intr(dp->catalog, DP_DP_HPD_INT_MASK, true); 1761 1812 1762 1813 dp_display->internal_hpd = true; 1814 + mutex_unlock(&dp->event_mutex); 1763 1815 } 1764 1816 1765 1817 void dp_bridge_hpd_disable(struct drm_bridge *bridge) 1766 1818 { 1767 1819 struct msm_dp_bridge *dp_bridge = to_dp_bridge(bridge); 1768 1820 struct msm_dp *dp_display = dp_bridge->dp_display; 1821 + struct dp_display_private *dp = container_of(dp_display, struct dp_display_private, dp_display); 1822 + 1823 + mutex_lock(&dp->event_mutex); 1824 + /* disable HDP interrupts */ 1825 + dp_catalog_hpd_config_intr(dp->catalog, DP_DP_HPD_INT_MASK, false); 1826 + dp_catalog_ctrl_hpd_disable(dp->catalog); 1769 1827 1770 1828 dp_display->internal_hpd = false; 1829 + mutex_unlock(&dp->event_mutex); 1771 1830 } 1772 1831 1773 1832 void dp_bridge_hpd_notify(struct drm_bridge *bridge,
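The new `dp_bridge_hpd_enable()`/`dp_bridge_hpd_disable()` callbacks above recover the driver-private state from the embedded public `struct msm_dp` with `container_of()`. A minimal userspace sketch of that idiom, with stand-in struct names (not the real kernel types):

```c
#include <assert.h>
#include <stddef.h>

/* Userspace sketch of the kernel's container_of() idiom: recover a
 * pointer to the wrapping struct from a pointer to one of its embedded
 * members. The *_stub types below are illustrative only. */
#define container_of(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))

struct msm_dp_stub {                 /* stands in for struct msm_dp */
    int connector_type;
};

struct dp_display_private_stub {     /* stands in for dp_display_private */
    int hpd_state;
    struct msm_dp_stub dp_display;   /* embedded public struct */
};

/* Given only the embedded member, read a private field. */
static int hpd_state_of(struct msm_dp_stub *dp_display)
{
    struct dp_display_private_stub *dp =
        container_of(dp_display, struct dp_display_private_stub, dp_display);
    return dp->hpd_state;
}

/* Demonstration helper: build a private struct, read it back through
 * the embedded member. */
static int container_of_roundtrip(int state)
{
    struct dp_display_private_stub priv = { .hpd_state = state };
    return hpd_state_of(&priv.dp_display);
}
```

The macro works because the member sits at a fixed offset inside the outer struct, so subtracting `offsetof()` from the member pointer yields the outer pointer.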
+2 -2
drivers/gpu/drm/msm/msm_drv.c
··· 449 449 if (ret) 450 450 goto err_cleanup_mode_config; 451 451 452 + dma_set_max_seg_size(dev, UINT_MAX); 453 + 452 454 /* Bind all our sub-components: */ 453 455 ret = component_bind_all(dev, ddev); 454 456 if (ret) ··· 460 458 ret = drm_aperture_remove_framebuffers(false, drv); 461 459 if (ret) 462 460 goto err_msm_uninit; 463 - 464 - dma_set_max_seg_size(dev, UINT_MAX); 465 461 466 462 msm_gem_shrinker_init(ddev); 467 463
+3
drivers/gpu/drm/nouveau/nouveau_acpi.c
··· 220 220 int optimus_funcs; 221 221 struct pci_dev *parent_pdev; 222 222 223 + if (pdev->vendor != PCI_VENDOR_ID_NVIDIA) 224 + return; 225 + 223 226 *has_pr3 = false; 224 227 parent_pdev = pci_upstream_bridge(pdev); 225 228 if (parent_pdev) {
+4 -3
drivers/gpu/drm/nouveau/nouveau_connector.c
··· 730 730 #endif 731 731 732 732 nouveau_connector_set_edid(nv_connector, edid); 733 - nouveau_connector_set_encoder(connector, nv_encoder); 733 + if (nv_encoder) 734 + nouveau_connector_set_encoder(connector, nv_encoder); 734 735 return status; 735 736 } 736 737 ··· 967 966 /* Determine display colour depth for everything except LVDS now, 968 967 * DP requires this before mode_valid() is called. 969 968 */ 970 - if (connector->connector_type != DRM_MODE_CONNECTOR_LVDS) 969 + if (connector->connector_type != DRM_MODE_CONNECTOR_LVDS && nv_connector->native_mode) 971 970 nouveau_connector_detect_depth(connector); 972 971 973 972 /* Find the native mode if this is a digital panel, if we didn't ··· 988 987 * "native" mode as some VBIOS tables require us to use the 989 988 * pixel clock as part of the lookup... 990 989 */ 991 - if (connector->connector_type == DRM_MODE_CONNECTOR_LVDS) 990 + if (connector->connector_type == DRM_MODE_CONNECTOR_LVDS && nv_connector->native_mode) 992 991 nouveau_connector_detect_depth(connector); 993 992 994 993 if (nv_encoder->dcb->type == DCB_OUTPUT_TV)
+10 -4
drivers/gpu/drm/nouveau/nouveau_drm.c
··· 137 137 static inline bool 138 138 nouveau_cli_work_ready(struct dma_fence *fence) 139 139 { 140 - if (!dma_fence_is_signaled(fence)) 141 - return false; 142 - dma_fence_put(fence); 143 - return true; 140 + bool ret = true; 141 + 142 + spin_lock_irq(fence->lock); 143 + if (!dma_fence_is_signaled_locked(fence)) 144 + ret = false; 145 + spin_unlock_irq(fence->lock); 146 + 147 + if (ret == true) 148 + dma_fence_put(fence); 149 + return ret; 144 150 } 145 151 146 152 static void
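The `nouveau_cli_work_ready()` fix above moves the signaled test under `fence->lock` and only drops the reference when the fence has actually completed. A single-threaded model of that shape, with the lock elided (the kernel takes `spin_lock_irq()` where the comments indicate) and a fake fence type in place of `struct dma_fence`:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative stand-in for struct dma_fence. */
struct fake_fence {
    bool signaled;
    int refcount;
};

static void fence_put(struct fake_fence *f) { f->refcount--; }

/* Shape of the fixed check: test completion (under the lock, in the
 * kernel), and release the reference only on completion. */
static bool work_ready(struct fake_fence *fence)
{
    bool ret = true;

    /* spin_lock_irq(fence->lock) here in the kernel */
    if (!fence->signaled)
        ret = false;
    /* spin_unlock_irq(fence->lock) */

    if (ret)
        fence_put(fence);   /* dropped only when signaled */
    return ret;
}

/* Demo helper: refcount left after one readiness check on a fence
 * that starts with a single reference. */
static int refs_after_check(bool signaled)
{
    struct fake_fence f = { .signaled = signaled, .refcount = 1 };
    work_ready(&f);
    return f.refcount;
}
```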
+1
drivers/gpu/drm/radeon/radeon_fbdev.c
··· 307 307 308 308 if (fb_helper->info) { 309 309 vga_switcheroo_client_fb_set(rdev->pdev, NULL); 310 + drm_helper_force_disable_all(dev); 310 311 drm_fb_helper_unregister_info(fb_helper); 311 312 } else { 312 313 drm_client_release(&fb_helper->client);
+1 -3
drivers/gpu/drm/radeon/radeon_gem.c
··· 459 459 struct radeon_device *rdev = dev->dev_private; 460 460 struct drm_radeon_gem_set_domain *args = data; 461 461 struct drm_gem_object *gobj; 462 - struct radeon_bo *robj; 463 462 int r; 464 463 465 464 /* for now if someone requests domain CPU - ··· 471 472 up_read(&rdev->exclusive_lock); 472 473 return -ENOENT; 473 474 } 474 - robj = gem_to_radeon_bo(gobj); 475 475 476 476 r = radeon_gem_set_domain(gobj, args->read_domains, args->write_domain); 477 477 478 478 drm_gem_object_put(gobj); 479 479 up_read(&rdev->exclusive_lock); 480 - r = radeon_gem_handle_lockup(robj->rdev, r); 480 + r = radeon_gem_handle_lockup(rdev, r); 481 481 return r; 482 482 } 483 483
+6 -7
drivers/hid/hid-logitech-hidpp.c
··· 286 286 struct hidpp_report *message, 287 287 struct hidpp_report *response) 288 288 { 289 - int ret; 289 + int ret = -1; 290 290 int max_retries = 3; 291 291 292 292 mutex_lock(&hidpp->send_mutex); ··· 300 300 */ 301 301 *response = *message; 302 302 303 - for (; max_retries != 0; max_retries--) { 303 + for (; max_retries != 0 && ret; max_retries--) { 304 304 ret = __hidpp_send_report(hidpp->hid_dev, message); 305 305 306 306 if (ret) { 307 307 dbg_hid("__hidpp_send_report returned err: %d\n", ret); 308 308 memset(response, 0, sizeof(struct hidpp_report)); 309 - goto exit; 309 + break; 310 310 } 311 311 312 312 if (!wait_event_timeout(hidpp->wait, hidpp->answer_available, ··· 314 314 dbg_hid("%s:timeout waiting for response\n", __func__); 315 315 memset(response, 0, sizeof(struct hidpp_report)); 316 316 ret = -ETIMEDOUT; 317 - goto exit; 317 + break; 318 318 } 319 319 320 320 if (response->report_id == REPORT_ID_HIDPP_SHORT && 321 321 response->rap.sub_id == HIDPP_ERROR) { 322 322 ret = response->rap.params[1]; 323 323 dbg_hid("%s:got hidpp error %02X\n", __func__, ret); 324 - goto exit; 324 + break; 325 325 } 326 326 327 327 if ((response->report_id == REPORT_ID_HIDPP_LONG || ··· 330 330 ret = response->fap.params[1]; 331 331 if (ret != HIDPP20_ERROR_BUSY) { 332 332 dbg_hid("%s:got hidpp 2.0 error %02X\n", __func__, ret); 333 - goto exit; 333 + break; 334 334 } 335 335 dbg_hid("%s:got busy hidpp 2.0 error %02X, retrying\n", __func__, ret); 336 336 } 337 337 } 338 338 339 - exit: 340 339 mutex_unlock(&hidpp->send_mutex); 341 340 return ret; 342 341
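The hidpp change above turns the goto-to-exit flow into a bounded retry loop that keeps retrying only on the "busy" error and falls through to a single unlock. A self-contained sketch of that loop shape, with an illustrative error code and callback in place of the real HID++ transfer:

```c
#include <assert.h>

#define EBUSY_ERR 1   /* illustrative stand-in for HIDPP20_ERROR_BUSY */

/* Retry a transfer up to max_retries times; stop early on success (0)
 * or on any error other than "busy". send_fn stands in for the real
 * __hidpp_send_report() plus response handling. */
static int send_with_retry(int (*send_fn)(void *ctx), void *ctx,
                           int max_retries)
{
    int ret = -1;   /* "not attempted yet": guarantees one iteration */

    for (; max_retries != 0 && ret; max_retries--) {
        ret = send_fn(ctx);
        if (ret != EBUSY_ERR)
            break;          /* success or a hard error: stop */
        /* busy: loop and retry */
    }
    return ret;
}

/* Demo stub: reports "busy" a fixed number of times, then succeeds. */
static int busy_then_ok(void *ctx)
{
    int *busy_left = ctx;
    if (*busy_left > 0) {
        (*busy_left)--;
        return EBUSY_ERR;
    }
    return 0;
}

static int retry_result(int busy_count, int max_retries)
{
    int busy_left = busy_count;
    return send_with_retry(busy_then_ok, &busy_left, max_retries);
}
```

Replacing the `goto exit` with `break` lets the mutex unlock happen on every path without a shared label.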
+1
drivers/i2c/busses/i2c-designware-core.h
··· 40 40 #define DW_IC_CON_BUS_CLEAR_CTRL BIT(11) 41 41 42 42 #define DW_IC_DATA_CMD_DAT GENMASK(7, 0) 43 + #define DW_IC_DATA_CMD_FIRST_DATA_BYTE BIT(11) 43 44 44 45 /* 45 46 * Registers offset
+4
drivers/i2c/busses/i2c-designware-slave.c
··· 176 176 177 177 do { 178 178 regmap_read(dev->map, DW_IC_DATA_CMD, &tmp); 179 + if (tmp & DW_IC_DATA_CMD_FIRST_DATA_BYTE) 180 + i2c_slave_event(dev->slave, 181 + I2C_SLAVE_WRITE_REQUESTED, 182 + &val); 179 183 val = tmp; 180 184 i2c_slave_event(dev->slave, I2C_SLAVE_WRITE_RECEIVED, 181 185 &val);
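The designware-slave change above uses the controller's first-data-byte tag (bit 11 of `DATA_CMD`) to report an `I2C_SLAVE_WRITE_REQUESTED` event before delivering each write's first byte. A sketch of that register check; the masks mirror the driver's defines, while the stats struct and event counting are illustrative:

```c
#include <assert.h>
#include <stdint.h>

#define DATA_CMD_DAT_MASK        0xffu        /* GENMASK(7, 0) */
#define DATA_CMD_FIRST_DATA_BYTE (1u << 11)   /* BIT(11) */

struct slave_stats {
    int write_requested;   /* new-transaction events seen */
    int write_received;    /* data bytes delivered */
    uint8_t last_byte;
};

/* Process one DATA_CMD register read: a set bit 11 means this byte
 * opens a new I2C write, so signal the start of a transaction first. */
static void handle_data_cmd(struct slave_stats *s, uint32_t reg)
{
    if (reg & DATA_CMD_FIRST_DATA_BYTE)
        s->write_requested++;          /* WRITE_REQUESTED equivalent */
    s->last_byte = reg & DATA_CMD_DAT_MASK;
    s->write_received++;               /* WRITE_RECEIVED equivalent */
}

/* Demo: a two-byte write; only the first byte carries the flag. */
static int requested_events_for_two_byte_write(void)
{
    struct slave_stats s = {0};
    handle_data_cmd(&s, DATA_CMD_FIRST_DATA_BYTE | 0xaa);
    handle_data_cmd(&s, 0x55);
    return s.write_requested;
}
```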
+1 -1
drivers/i2c/busses/i2c-img-scb.c
··· 257 257 #define IMG_I2C_TIMEOUT (msecs_to_jiffies(1000)) 258 258 259 259 /* 260 - * Worst incs are 1 (innacurate) and 16*256 (irregular). 260 + * Worst incs are 1 (inaccurate) and 16*256 (irregular). 261 261 * So a sensible inc is the logarithmic mean: 64 (2^6), which is 262 262 * in the middle of the valid range (0-127). 263 263 */
+4 -2
drivers/i2c/busses/i2c-mchp-pci1xxxx.c
··· 1118 1118 static DEFINE_SIMPLE_DEV_PM_OPS(pci1xxxx_i2c_pm_ops, pci1xxxx_i2c_suspend, 1119 1119 pci1xxxx_i2c_resume); 1120 1120 1121 - static void pci1xxxx_i2c_shutdown(struct pci1xxxx_i2c *i2c) 1121 + static void pci1xxxx_i2c_shutdown(void *data) 1122 1122 { 1123 + struct pci1xxxx_i2c *i2c = data; 1124 + 1123 1125 pci1xxxx_i2c_config_padctrl(i2c, false); 1124 1126 pci1xxxx_i2c_configure_core_reg(i2c, false); 1125 1127 } ··· 1158 1156 init_completion(&i2c->i2c_xfer_done); 1159 1157 pci1xxxx_i2c_init(i2c); 1160 1158 1161 - ret = devm_add_action(dev, (void (*)(void *))pci1xxxx_i2c_shutdown, i2c); 1159 + ret = devm_add_action(dev, pci1xxxx_i2c_shutdown, i2c); 1162 1160 if (ret) 1163 1161 return ret; 1164 1162
+11
drivers/i2c/busses/i2c-mv64xxx.c
··· 520 520 521 521 while (readl(drv_data->reg_base + drv_data->reg_offsets.control) & 522 522 MV64XXX_I2C_REG_CONTROL_IFLG) { 523 + /* 524 + * It seems that sometime the controller updates the status 525 + * register only after it asserts IFLG in control register. 526 + * This may result in weird bugs when in atomic mode. A delay 527 + * of 100 ns before reading the status register solves this 528 + * issue. This bug does not seem to appear when using 529 + * interrupts. 530 + */ 531 + if (drv_data->atomic) 532 + ndelay(100); 533 + 523 534 status = readl(drv_data->reg_base + drv_data->reg_offsets.status); 524 535 mv64xxx_i2c_fsm(drv_data, status); 525 536 mv64xxx_i2c_do_action(drv_data);
+5 -3
drivers/i2c/busses/i2c-sprd.c
··· 576 576 struct sprd_i2c *i2c_dev = platform_get_drvdata(pdev); 577 577 int ret; 578 578 579 - ret = pm_runtime_resume_and_get(i2c_dev->dev); 579 + ret = pm_runtime_get_sync(i2c_dev->dev); 580 580 if (ret < 0) 581 - return ret; 581 + dev_err(&pdev->dev, "Failed to resume device (%pe)\n", ERR_PTR(ret)); 582 582 583 583 i2c_del_adapter(&i2c_dev->adap); 584 - clk_disable_unprepare(i2c_dev->clk); 584 + 585 + if (ret >= 0) 586 + clk_disable_unprepare(i2c_dev->clk); 585 587 586 588 pm_runtime_put_noidle(i2c_dev->dev); 587 589 pm_runtime_disable(i2c_dev->dev);
+2 -2
drivers/infiniband/core/cma.c
··· 3295 3295 route->path_rec->traffic_class = tos; 3296 3296 route->path_rec->mtu = iboe_get_mtu(ndev->mtu); 3297 3297 route->path_rec->rate_selector = IB_SA_EQ; 3298 - route->path_rec->rate = iboe_get_rate(ndev); 3298 + route->path_rec->rate = IB_RATE_PORT_CURRENT; 3299 3299 dev_put(ndev); 3300 3300 route->path_rec->packet_life_time_selector = IB_SA_EQ; 3301 3301 /* In case ACK timeout is set, use this value to calculate ··· 4964 4964 if (!ndev) 4965 4965 return -ENODEV; 4966 4966 4967 - ib.rec.rate = iboe_get_rate(ndev); 4967 + ib.rec.rate = IB_RATE_PORT_CURRENT; 4968 4968 ib.rec.hop_limit = 1; 4969 4969 ib.rec.mtu = iboe_get_mtu(ndev->mtu); 4970 4970
+6 -1
drivers/infiniband/core/uverbs_cmd.c
··· 1850 1850 attr->path_mtu = cmd->base.path_mtu; 1851 1851 if (cmd->base.attr_mask & IB_QP_PATH_MIG_STATE) 1852 1852 attr->path_mig_state = cmd->base.path_mig_state; 1853 - if (cmd->base.attr_mask & IB_QP_QKEY) 1853 + if (cmd->base.attr_mask & IB_QP_QKEY) { 1854 + if (cmd->base.qkey & IB_QP_SET_QKEY && !capable(CAP_NET_RAW)) { 1855 + ret = -EPERM; 1856 + goto release_qp; 1857 + } 1854 1858 attr->qkey = cmd->base.qkey; 1859 + } 1855 1860 if (cmd->base.attr_mask & IB_QP_RQ_PSN) 1856 1861 attr->rq_psn = cmd->base.rq_psn; 1857 1862 if (cmd->base.attr_mask & IB_QP_SQ_PSN)
+5 -7
drivers/infiniband/core/uverbs_main.c
··· 222 222 spin_lock_irq(&ev_queue->lock); 223 223 224 224 while (list_empty(&ev_queue->event_list)) { 225 - spin_unlock_irq(&ev_queue->lock); 225 + if (ev_queue->is_closed) { 226 + spin_unlock_irq(&ev_queue->lock); 227 + return -EIO; 228 + } 226 229 230 + spin_unlock_irq(&ev_queue->lock); 227 231 if (filp->f_flags & O_NONBLOCK) 228 232 return -EAGAIN; 229 233 ··· 237 233 return -ERESTARTSYS; 238 234 239 235 spin_lock_irq(&ev_queue->lock); 240 - 241 - /* If device was disassociated and no event exists set an error */ 242 - if (list_empty(&ev_queue->event_list) && ev_queue->is_closed) { 243 - spin_unlock_irq(&ev_queue->lock); 244 - return -EIO; 245 - } 246 236 } 247 237 248 238 event = list_entry(ev_queue->event_list.next, struct ib_uverbs_event, list);
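The uverbs change above moves the `is_closed` test inside the locked section at the top of the wait loop, so a disassociated queue with no pending events returns `-EIO` before the reader ever sleeps. A single-threaded model of the reordered loop; locking and the real blocking wait are elided, and the error values are local stand-ins:

```c
#include <assert.h>

#define ERR_IO    5   /* stand-in for EIO */
#define ERR_AGAIN 6   /* stand-in for EAGAIN */

struct event_queue {
    int pending;       /* number of queued events */
    int is_closed;
    int nonblocking;   /* O_NONBLOCK on the file */
};

/* Shape of the fixed read path: while the queue is empty, check the
 * closed flag first (under the lock, in the kernel), then the
 * nonblocking flag, and only then sleep. The model returns would-block
 * instead of actually sleeping. */
static int event_read(struct event_queue *q)
{
    while (q->pending == 0) {
        if (q->is_closed)
            return -ERR_IO;     /* checked before ever sleeping */
        if (q->nonblocking)
            return -ERR_AGAIN;
        /* a real implementation would block here until woken */
        return -ERR_AGAIN;      /* model: treat the sleep as would-block */
    }
    q->pending--;
    return 0;                   /* one event consumed */
}

/* Demo helper wrapping queue construction. */
static int read_from(int pending, int closed, int nonblocking)
{
    struct event_queue q = { pending, closed, nonblocking };
    return event_read(&q);
}
```

Note that a pending event is still delivered even on a closed queue, matching the kernel behavior of draining events before reporting the error.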
-2
drivers/infiniband/hw/bnxt_re/bnxt_re.h
··· 135 135 136 136 struct delayed_work worker; 137 137 u8 cur_prio_map; 138 - u16 active_speed; 139 - u8 active_width; 140 138 141 139 /* FP Notification Queue (CQ & SRQ) */ 142 140 struct tasklet_struct nq_task;
+4 -3
drivers/infiniband/hw/bnxt_re/ib_verbs.c
··· 199 199 { 200 200 struct bnxt_re_dev *rdev = to_bnxt_re_dev(ibdev, ibdev); 201 201 struct bnxt_qplib_dev_attr *dev_attr = &rdev->dev_attr; 202 + int rc; 202 203 203 204 memset(port_attr, 0, sizeof(*port_attr)); 204 205 ··· 229 228 port_attr->sm_sl = 0; 230 229 port_attr->subnet_timeout = 0; 231 230 port_attr->init_type_reply = 0; 232 - port_attr->active_speed = rdev->active_speed; 233 - port_attr->active_width = rdev->active_width; 231 + rc = ib_get_eth_speed(&rdev->ibdev, port_num, &port_attr->active_speed, 232 + &port_attr->active_width); 234 233 235 - return 0; 234 + return rc; 236 235 } 237 236 238 237 int bnxt_re_get_port_immutable(struct ib_device *ibdev, u32 port_num,
-2
drivers/infiniband/hw/bnxt_re/main.c
··· 1077 1077 return rc; 1078 1078 } 1079 1079 dev_info(rdev_to_dev(rdev), "Device registered with IB successfully"); 1080 - ib_get_eth_speed(&rdev->ibdev, 1, &rdev->active_speed, 1081 - &rdev->active_width); 1082 1080 set_bit(BNXT_RE_FLAG_ISSUE_ROCE_STATS, &rdev->flags); 1083 1081 1084 1082 event = netif_running(rdev->netdev) && netif_carrier_ok(rdev->netdev) ?
+62 -27
drivers/infiniband/hw/mlx5/counters.c
··· 209 209 !vport_qcounters_supported(dev)) || !port_num) 210 210 return &dev->port[0].cnts; 211 211 212 - return &dev->port[port_num - 1].cnts; 212 + return is_mdev_switchdev_mode(dev->mdev) ? 213 + &dev->port[1].cnts : &dev->port[port_num - 1].cnts; 213 214 } 214 215 215 216 /** ··· 263 262 mlx5_ib_alloc_hw_port_stats(struct ib_device *ibdev, u32 port_num) 264 263 { 265 264 struct mlx5_ib_dev *dev = to_mdev(ibdev); 266 - const struct mlx5_ib_counters *cnts = &dev->port[port_num - 1].cnts; 265 + const struct mlx5_ib_counters *cnts = get_counters(dev, port_num); 267 266 268 267 return do_alloc_stats(cnts); 269 268 } ··· 330 329 { 331 330 u32 out[MLX5_ST_SZ_DW(query_q_counter_out)] = {}; 332 331 u32 in[MLX5_ST_SZ_DW(query_q_counter_in)] = {}; 332 + struct mlx5_core_dev *mdev; 333 333 __be32 val; 334 334 int ret, i; 335 335 ··· 338 336 dev->port[port_num].rep->vport == MLX5_VPORT_UPLINK) 339 337 return 0; 340 338 339 + mdev = mlx5_eswitch_get_core_dev(dev->port[port_num].rep->esw); 340 + if (!mdev) 341 + return -EOPNOTSUPP; 342 + 341 343 MLX5_SET(query_q_counter_in, in, opcode, MLX5_CMD_OP_QUERY_Q_COUNTER); 342 344 MLX5_SET(query_q_counter_in, in, other_vport, 1); 343 345 MLX5_SET(query_q_counter_in, in, vport_number, 344 346 dev->port[port_num].rep->vport); 345 347 MLX5_SET(query_q_counter_in, in, aggregate, 1); 346 - ret = mlx5_cmd_exec_inout(dev->mdev, query_q_counter, in, out); 348 + ret = mlx5_cmd_exec_inout(mdev, query_q_counter, in, out); 347 349 if (ret) 348 350 return ret; 349 351 ··· 581 575 bool is_vport = is_mdev_switchdev_mode(dev->mdev) && 582 576 port_num != MLX5_VPORT_PF; 583 577 const struct mlx5_ib_counter *names; 584 - int j = 0, i; 578 + int j = 0, i, size; 585 579 586 580 names = is_vport ? vport_basic_q_cnts : basic_q_cnts; 587 - for (i = 0; i < ARRAY_SIZE(basic_q_cnts); i++, j++) { 581 + size = is_vport ? 
ARRAY_SIZE(vport_basic_q_cnts) : 582 + ARRAY_SIZE(basic_q_cnts); 583 + for (i = 0; i < size; i++, j++) { 588 584 descs[j].name = names[i].name; 589 - offsets[j] = basic_q_cnts[i].offset; 585 + offsets[j] = names[i].offset; 590 586 } 591 587 592 588 names = is_vport ? vport_out_of_seq_q_cnts : out_of_seq_q_cnts; 589 + size = is_vport ? ARRAY_SIZE(vport_out_of_seq_q_cnts) : 590 + ARRAY_SIZE(out_of_seq_q_cnts); 593 591 if (MLX5_CAP_GEN(dev->mdev, out_of_seq_cnt)) { 594 - for (i = 0; i < ARRAY_SIZE(out_of_seq_q_cnts); i++, j++) { 592 + for (i = 0; i < size; i++, j++) { 595 593 descs[j].name = names[i].name; 596 - offsets[j] = out_of_seq_q_cnts[i].offset; 594 + offsets[j] = names[i].offset; 597 595 } 598 596 } 599 597 600 598 names = is_vport ? vport_retrans_q_cnts : retrans_q_cnts; 599 + size = is_vport ? ARRAY_SIZE(vport_retrans_q_cnts) : 600 + ARRAY_SIZE(retrans_q_cnts); 601 601 if (MLX5_CAP_GEN(dev->mdev, retransmission_q_counters)) { 602 - for (i = 0; i < ARRAY_SIZE(retrans_q_cnts); i++, j++) { 602 + for (i = 0; i < size; i++, j++) { 603 603 descs[j].name = names[i].name; 604 - offsets[j] = retrans_q_cnts[i].offset; 604 + offsets[j] = names[i].offset; 605 605 } 606 606 } 607 607 608 608 names = is_vport ? vport_extended_err_cnts : extended_err_cnts; 609 + size = is_vport ? ARRAY_SIZE(vport_extended_err_cnts) : 610 + ARRAY_SIZE(extended_err_cnts); 609 611 if (MLX5_CAP_GEN(dev->mdev, enhanced_error_q_counters)) { 610 - for (i = 0; i < ARRAY_SIZE(extended_err_cnts); i++, j++) { 612 + for (i = 0; i < size; i++, j++) { 611 613 descs[j].name = names[i].name; 612 - offsets[j] = extended_err_cnts[i].offset; 614 + offsets[j] = names[i].offset; 613 615 } 614 616 } 615 617 616 618 names = is_vport ? vport_roce_accl_cnts : roce_accl_cnts; 619 + size = is_vport ? 
ARRAY_SIZE(vport_roce_accl_cnts) : 620 + ARRAY_SIZE(roce_accl_cnts); 617 621 if (MLX5_CAP_GEN(dev->mdev, roce_accl)) { 618 - for (i = 0; i < ARRAY_SIZE(roce_accl_cnts); i++, j++) { 622 + for (i = 0; i < size; i++, j++) { 619 623 descs[j].name = names[i].name; 620 - offsets[j] = roce_accl_cnts[i].offset; 624 + offsets[j] = names[i].offset; 621 625 } 622 626 } 623 627 ··· 677 661 static int __mlx5_ib_alloc_counters(struct mlx5_ib_dev *dev, 678 662 struct mlx5_ib_counters *cnts, u32 port_num) 679 663 { 680 - u32 num_counters, num_op_counters = 0; 664 + bool is_vport = is_mdev_switchdev_mode(dev->mdev) && 665 + port_num != MLX5_VPORT_PF; 666 + u32 num_counters, num_op_counters = 0, size; 681 667 682 - num_counters = ARRAY_SIZE(basic_q_cnts); 668 + size = is_vport ? ARRAY_SIZE(vport_basic_q_cnts) : 669 + ARRAY_SIZE(basic_q_cnts); 670 + num_counters = size; 683 671 672 + size = is_vport ? ARRAY_SIZE(vport_out_of_seq_q_cnts) : 673 + ARRAY_SIZE(out_of_seq_q_cnts); 684 674 if (MLX5_CAP_GEN(dev->mdev, out_of_seq_cnt)) 685 - num_counters += ARRAY_SIZE(out_of_seq_q_cnts); 675 + num_counters += size; 686 676 677 + size = is_vport ? ARRAY_SIZE(vport_retrans_q_cnts) : 678 + ARRAY_SIZE(retrans_q_cnts); 687 679 if (MLX5_CAP_GEN(dev->mdev, retransmission_q_counters)) 688 - num_counters += ARRAY_SIZE(retrans_q_cnts); 680 + num_counters += size; 689 681 682 + size = is_vport ? ARRAY_SIZE(vport_extended_err_cnts) : 683 + ARRAY_SIZE(extended_err_cnts); 690 684 if (MLX5_CAP_GEN(dev->mdev, enhanced_error_q_counters)) 691 - num_counters += ARRAY_SIZE(extended_err_cnts); 685 + num_counters += size; 692 686 687 + size = is_vport ? 
ARRAY_SIZE(vport_roce_accl_cnts) : 688 + ARRAY_SIZE(roce_accl_cnts); 693 689 if (MLX5_CAP_GEN(dev->mdev, roce_accl)) 694 - num_counters += ARRAY_SIZE(roce_accl_cnts); 690 + num_counters += size; 695 691 696 692 cnts->num_q_counters = num_counters; 697 693 698 - if (is_mdev_switchdev_mode(dev->mdev) && port_num != MLX5_VPORT_PF) 694 + if (is_vport) 699 695 goto skip_non_qcounters; 700 696 701 697 if (MLX5_CAP_GEN(dev->mdev, cc_query_allowed)) { ··· 753 725 static void mlx5_ib_dealloc_counters(struct mlx5_ib_dev *dev) 754 726 { 755 727 u32 in[MLX5_ST_SZ_DW(dealloc_q_counter_in)] = {}; 756 - int num_cnt_ports; 728 + int num_cnt_ports = dev->num_ports; 757 729 int i, j; 758 730 759 - num_cnt_ports = (!is_mdev_switchdev_mode(dev->mdev) || 760 - vport_qcounters_supported(dev)) ? dev->num_ports : 1; 731 + if (is_mdev_switchdev_mode(dev->mdev)) 732 + num_cnt_ports = min(2, num_cnt_ports); 761 733 762 734 MLX5_SET(dealloc_q_counter_in, in, opcode, 763 735 MLX5_CMD_OP_DEALLOC_Q_COUNTER); ··· 789 761 { 790 762 u32 out[MLX5_ST_SZ_DW(alloc_q_counter_out)] = {}; 791 763 u32 in[MLX5_ST_SZ_DW(alloc_q_counter_in)] = {}; 792 - int num_cnt_ports; 764 + int num_cnt_ports = dev->num_ports; 793 765 int err = 0; 794 766 int i; 795 767 bool is_shared; 796 768 797 769 MLX5_SET(alloc_q_counter_in, in, opcode, MLX5_CMD_OP_ALLOC_Q_COUNTER); 798 770 is_shared = MLX5_CAP_GEN(dev->mdev, log_max_uctx) != 0; 799 - num_cnt_ports = (!is_mdev_switchdev_mode(dev->mdev) || 800 - vport_qcounters_supported(dev)) ? dev->num_ports : 1; 771 + 772 + /* 773 + * In switchdev we need to allocate two ports, one that is used for 774 + * the device Q_counters and it is essentially the real Q_counters of 775 + * this device, while the other is used as a helper for PF to be able to 776 + * query all other vports. 
777 + */ 778 + if (is_mdev_switchdev_mode(dev->mdev)) 779 + num_cnt_ports = min(2, num_cnt_ports); 801 780 802 781 for (i = 0; i < num_cnt_ports; i++) { 803 782 err = __mlx5_ib_alloc_counters(dev, &dev->port[i].cnts, i);
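The counters.c fix above addresses a parallel-table mismatch: the code selected the vport name table but kept reading sizes and offsets from the PF table. A sketch of the corrected pattern, where name, offset, and iteration bound all come from the one selected table (the tables and values below are illustrative, not the real mlx5 counters):

```c
#include <assert.h>
#include <stddef.h>

struct counter_desc {
    const char *name;
    size_t offset;
};

/* Two descriptor tables with different lengths and layouts. */
static const struct counter_desc basic_cnts[] = {
    { "rx_write_requests", 0x10 },
    { "rx_read_requests",  0x14 },
};

static const struct counter_desc vport_basic_cnts[] = {
    { "rx_write_requests", 0x40 },   /* different layout on vports */
};

/* Fill descs[]/offsets[] from whichever table matches is_vport;
 * returns the number of entries written. */
static int fill_counters(int is_vport, const char **descs, size_t *offsets)
{
    const struct counter_desc *names =
        is_vport ? vport_basic_cnts : basic_cnts;
    int size = is_vport
        ? (int)(sizeof(vport_basic_cnts) / sizeof(*vport_basic_cnts))
        : (int)(sizeof(basic_cnts) / sizeof(*basic_cnts));
    int i;

    for (i = 0; i < size; i++) {
        descs[i] = names[i].name;
        offsets[i] = names[i].offset;   /* same table as the name */
    }
    return size;
}

/* Demo helpers. */
static size_t first_offset(int is_vport)
{
    const char *descs[4];
    size_t offsets[4];
    fill_counters(is_vport, descs, offsets);
    return offsets[0];
}

static int count_for(int is_vport)
{
    const char *descs[4];
    size_t offsets[4];
    return fill_counters(is_vport, descs, offsets);
}
```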
+269 -7
drivers/infiniband/hw/mlx5/fs.c
··· 695 695 struct mlx5_flow_table_attr ft_attr = {}; 696 696 struct mlx5_flow_table *ft; 697 697 698 - if (mlx5_ib_shared_ft_allowed(&dev->ib_dev)) 699 - ft_attr.uid = MLX5_SHARED_RESOURCE_UID; 700 698 ft_attr.prio = priority; 701 699 ft_attr.max_fte = num_entries; 702 700 ft_attr.flags = flags; ··· 2023 2025 return 0; 2024 2026 } 2025 2027 2028 + static int steering_anchor_create_ft(struct mlx5_ib_dev *dev, 2029 + struct mlx5_ib_flow_prio *ft_prio, 2030 + enum mlx5_flow_namespace_type ns_type) 2031 + { 2032 + struct mlx5_flow_table_attr ft_attr = {}; 2033 + struct mlx5_flow_namespace *ns; 2034 + struct mlx5_flow_table *ft; 2035 + 2036 + if (ft_prio->anchor.ft) 2037 + return 0; 2038 + 2039 + ns = mlx5_get_flow_namespace(dev->mdev, ns_type); 2040 + if (!ns) 2041 + return -EOPNOTSUPP; 2042 + 2043 + ft_attr.flags = MLX5_FLOW_TABLE_UNMANAGED; 2044 + ft_attr.uid = MLX5_SHARED_RESOURCE_UID; 2045 + ft_attr.prio = 0; 2046 + ft_attr.max_fte = 2; 2047 + ft_attr.level = 1; 2048 + 2049 + ft = mlx5_create_flow_table(ns, &ft_attr); 2050 + if (IS_ERR(ft)) 2051 + return PTR_ERR(ft); 2052 + 2053 + ft_prio->anchor.ft = ft; 2054 + 2055 + return 0; 2056 + } 2057 + 2058 + static void steering_anchor_destroy_ft(struct mlx5_ib_flow_prio *ft_prio) 2059 + { 2060 + if (ft_prio->anchor.ft) { 2061 + mlx5_destroy_flow_table(ft_prio->anchor.ft); 2062 + ft_prio->anchor.ft = NULL; 2063 + } 2064 + } 2065 + 2066 + static int 2067 + steering_anchor_create_fg_drop(struct mlx5_ib_flow_prio *ft_prio) 2068 + { 2069 + int inlen = MLX5_ST_SZ_BYTES(create_flow_group_in); 2070 + struct mlx5_flow_group *fg; 2071 + void *flow_group_in; 2072 + int err = 0; 2073 + 2074 + if (ft_prio->anchor.fg_drop) 2075 + return 0; 2076 + 2077 + flow_group_in = kvzalloc(inlen, GFP_KERNEL); 2078 + if (!flow_group_in) 2079 + return -ENOMEM; 2080 + 2081 + MLX5_SET(create_flow_group_in, flow_group_in, start_flow_index, 1); 2082 + MLX5_SET(create_flow_group_in, flow_group_in, end_flow_index, 1); 2083 + 2084 + fg = 
mlx5_create_flow_group(ft_prio->anchor.ft, flow_group_in); 2085 + if (IS_ERR(fg)) { 2086 + err = PTR_ERR(fg); 2087 + goto out; 2088 + } 2089 + 2090 + ft_prio->anchor.fg_drop = fg; 2091 + 2092 + out: 2093 + kvfree(flow_group_in); 2094 + 2095 + return err; 2096 + } 2097 + 2098 + static void 2099 + steering_anchor_destroy_fg_drop(struct mlx5_ib_flow_prio *ft_prio) 2100 + { 2101 + if (ft_prio->anchor.fg_drop) { 2102 + mlx5_destroy_flow_group(ft_prio->anchor.fg_drop); 2103 + ft_prio->anchor.fg_drop = NULL; 2104 + } 2105 + } 2106 + 2107 + static int 2108 + steering_anchor_create_fg_goto_table(struct mlx5_ib_flow_prio *ft_prio) 2109 + { 2110 + int inlen = MLX5_ST_SZ_BYTES(create_flow_group_in); 2111 + struct mlx5_flow_group *fg; 2112 + void *flow_group_in; 2113 + int err = 0; 2114 + 2115 + if (ft_prio->anchor.fg_goto_table) 2116 + return 0; 2117 + 2118 + flow_group_in = kvzalloc(inlen, GFP_KERNEL); 2119 + if (!flow_group_in) 2120 + return -ENOMEM; 2121 + 2122 + fg = mlx5_create_flow_group(ft_prio->anchor.ft, flow_group_in); 2123 + if (IS_ERR(fg)) { 2124 + err = PTR_ERR(fg); 2125 + goto out; 2126 + } 2127 + ft_prio->anchor.fg_goto_table = fg; 2128 + 2129 + out: 2130 + kvfree(flow_group_in); 2131 + 2132 + return err; 2133 + } 2134 + 2135 + static void 2136 + steering_anchor_destroy_fg_goto_table(struct mlx5_ib_flow_prio *ft_prio) 2137 + { 2138 + if (ft_prio->anchor.fg_goto_table) { 2139 + mlx5_destroy_flow_group(ft_prio->anchor.fg_goto_table); 2140 + ft_prio->anchor.fg_goto_table = NULL; 2141 + } 2142 + } 2143 + 2144 + static int 2145 + steering_anchor_create_rule_drop(struct mlx5_ib_flow_prio *ft_prio) 2146 + { 2147 + struct mlx5_flow_act flow_act = {}; 2148 + struct mlx5_flow_handle *handle; 2149 + 2150 + if (ft_prio->anchor.rule_drop) 2151 + return 0; 2152 + 2153 + flow_act.fg = ft_prio->anchor.fg_drop; 2154 + flow_act.action = MLX5_FLOW_CONTEXT_ACTION_DROP; 2155 + 2156 + handle = mlx5_add_flow_rules(ft_prio->anchor.ft, NULL, &flow_act, 2157 + NULL, 0); 2158 + if 
(IS_ERR(handle)) 2159 + return PTR_ERR(handle); 2160 + 2161 + ft_prio->anchor.rule_drop = handle; 2162 + 2163 + return 0; 2164 + } 2165 + 2166 + static void steering_anchor_destroy_rule_drop(struct mlx5_ib_flow_prio *ft_prio) 2167 + { 2168 + if (ft_prio->anchor.rule_drop) { 2169 + mlx5_del_flow_rules(ft_prio->anchor.rule_drop); 2170 + ft_prio->anchor.rule_drop = NULL; 2171 + } 2172 + } 2173 + 2174 + static int 2175 + steering_anchor_create_rule_goto_table(struct mlx5_ib_flow_prio *ft_prio) 2176 + { 2177 + struct mlx5_flow_destination dest = {}; 2178 + struct mlx5_flow_act flow_act = {}; 2179 + struct mlx5_flow_handle *handle; 2180 + 2181 + if (ft_prio->anchor.rule_goto_table) 2182 + return 0; 2183 + 2184 + flow_act.action = MLX5_FLOW_CONTEXT_ACTION_FWD_DEST; 2185 + flow_act.flags |= FLOW_ACT_IGNORE_FLOW_LEVEL; 2186 + flow_act.fg = ft_prio->anchor.fg_goto_table; 2187 + 2188 + dest.type = MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE; 2189 + dest.ft = ft_prio->flow_table; 2190 + 2191 + handle = mlx5_add_flow_rules(ft_prio->anchor.ft, NULL, &flow_act, 2192 + &dest, 1); 2193 + if (IS_ERR(handle)) 2194 + return PTR_ERR(handle); 2195 + 2196 + ft_prio->anchor.rule_goto_table = handle; 2197 + 2198 + return 0; 2199 + } 2200 + 2201 + static void 2202 + steering_anchor_destroy_rule_goto_table(struct mlx5_ib_flow_prio *ft_prio) 2203 + { 2204 + if (ft_prio->anchor.rule_goto_table) { 2205 + mlx5_del_flow_rules(ft_prio->anchor.rule_goto_table); 2206 + ft_prio->anchor.rule_goto_table = NULL; 2207 + } 2208 + } 2209 + 2210 + static int steering_anchor_create_res(struct mlx5_ib_dev *dev, 2211 + struct mlx5_ib_flow_prio *ft_prio, 2212 + enum mlx5_flow_namespace_type ns_type) 2213 + { 2214 + int err; 2215 + 2216 + err = steering_anchor_create_ft(dev, ft_prio, ns_type); 2217 + if (err) 2218 + return err; 2219 + 2220 + err = steering_anchor_create_fg_drop(ft_prio); 2221 + if (err) 2222 + goto destroy_ft; 2223 + 2224 + err = steering_anchor_create_fg_goto_table(ft_prio); 2225 + if (err) 2226 + 
goto destroy_fg_drop; 2227 + 2228 + err = steering_anchor_create_rule_drop(ft_prio); 2229 + if (err) 2230 + goto destroy_fg_goto_table; 2231 + 2232 + err = steering_anchor_create_rule_goto_table(ft_prio); 2233 + if (err) 2234 + goto destroy_rule_drop; 2235 + 2236 + return 0; 2237 + 2238 + destroy_rule_drop: 2239 + steering_anchor_destroy_rule_drop(ft_prio); 2240 + destroy_fg_goto_table: 2241 + steering_anchor_destroy_fg_goto_table(ft_prio); 2242 + destroy_fg_drop: 2243 + steering_anchor_destroy_fg_drop(ft_prio); 2244 + destroy_ft: 2245 + steering_anchor_destroy_ft(ft_prio); 2246 + 2247 + return err; 2248 + } 2249 + 2250 + static void mlx5_steering_anchor_destroy_res(struct mlx5_ib_flow_prio *ft_prio) 2251 + { 2252 + steering_anchor_destroy_rule_goto_table(ft_prio); 2253 + steering_anchor_destroy_rule_drop(ft_prio); 2254 + steering_anchor_destroy_fg_goto_table(ft_prio); 2255 + steering_anchor_destroy_fg_drop(ft_prio); 2256 + steering_anchor_destroy_ft(ft_prio); 2257 + } 2258 + 2026 2259 static int steering_anchor_cleanup(struct ib_uobject *uobject, 2027 2260 enum rdma_remove_reason why, 2028 2261 struct uverbs_attr_bundle *attrs) ··· 2264 2035 return -EBUSY; 2265 2036 2266 2037 mutex_lock(&obj->dev->flow_db->lock); 2038 + if (!--obj->ft_prio->anchor.rule_goto_table_ref) 2039 + steering_anchor_destroy_rule_goto_table(obj->ft_prio); 2040 + 2267 2041 put_flow_table(obj->dev, obj->ft_prio, true); 2268 2042 mutex_unlock(&obj->dev->flow_db->lock); 2269 2043 2270 2044 kfree(obj); 2271 2045 return 0; 2046 + } 2047 + 2048 + static void fs_cleanup_anchor(struct mlx5_ib_flow_prio *prio, 2049 + int count) 2050 + { 2051 + while (count--) 2052 + mlx5_steering_anchor_destroy_res(&prio[count]); 2053 + } 2054 + 2055 + void mlx5_ib_fs_cleanup_anchor(struct mlx5_ib_dev *dev) 2056 + { 2057 + fs_cleanup_anchor(dev->flow_db->prios, MLX5_IB_NUM_FLOW_FT); 2058 + fs_cleanup_anchor(dev->flow_db->egress_prios, MLX5_IB_NUM_FLOW_FT); 2059 + fs_cleanup_anchor(dev->flow_db->sniffer, 
MLX5_IB_NUM_SNIFFER_FTS); 2060 + fs_cleanup_anchor(dev->flow_db->egress, MLX5_IB_NUM_EGRESS_FTS); 2061 + fs_cleanup_anchor(dev->flow_db->fdb, MLX5_IB_NUM_FDB_FTS); 2062 + fs_cleanup_anchor(dev->flow_db->rdma_rx, MLX5_IB_NUM_FLOW_FT); 2063 + fs_cleanup_anchor(dev->flow_db->rdma_tx, MLX5_IB_NUM_FLOW_FT); 2272 2064 } 2273 2065 2274 2066 static int mlx5_ib_matcher_ns(struct uverbs_attr_bundle *attrs, ··· 2432 2182 return -ENOMEM; 2433 2183 2434 2184 mutex_lock(&dev->flow_db->lock); 2185 + 2435 2186 ft_prio = _get_flow_table(dev, priority, ns_type, 0); 2436 2187 if (IS_ERR(ft_prio)) { 2437 - mutex_unlock(&dev->flow_db->lock); 2438 2188 err = PTR_ERR(ft_prio); 2439 2189 goto free_obj; 2440 2190 } 2441 2191 2442 2192 ft_prio->refcount++; 2443 - ft_id = mlx5_flow_table_id(ft_prio->flow_table); 2444 - mutex_unlock(&dev->flow_db->lock); 2193 + 2194 + if (!ft_prio->anchor.rule_goto_table_ref) { 2195 + err = steering_anchor_create_res(dev, ft_prio, ns_type); 2196 + if (err) 2197 + goto put_flow_table; 2198 + } 2199 + 2200 + ft_prio->anchor.rule_goto_table_ref++; 2201 + 2202 + ft_id = mlx5_flow_table_id(ft_prio->anchor.ft); 2445 2203 2446 2204 err = uverbs_copy_to(attrs, MLX5_IB_ATTR_STEERING_ANCHOR_FT_ID, 2447 2205 &ft_id, sizeof(ft_id)); 2448 2206 if (err) 2449 - goto put_flow_table; 2207 + goto destroy_res; 2208 + 2209 + mutex_unlock(&dev->flow_db->lock); 2450 2210 2451 2211 uobj->object = obj; 2452 2212 obj->dev = dev; ··· 2465 2205 2466 2206 return 0; 2467 2207 2208 + destroy_res: 2209 + --ft_prio->anchor.rule_goto_table_ref; 2210 + mlx5_steering_anchor_destroy_res(ft_prio); 2468 2211 put_flow_table: 2469 - mutex_lock(&dev->flow_db->lock); 2470 2212 put_flow_table(dev, ft_prio, true); 2471 2213 mutex_unlock(&dev->flow_db->lock); 2472 2214 free_obj:
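`steering_anchor_create_res()` above is a textbook goto-unwind ladder: each failing step jumps to a label that tears down only the steps that already succeeded, in reverse order. A compact model of that control flow, with the five anchor resources reduced to flags and a `fail_at` knob (illustrative, not the mlx5 API) to trigger a failure at a chosen step:

```c
#include <assert.h>

/* Flags model the five anchor resources: 1 = created, 0 = not. */
struct anchor {
    int ft, fg_drop, fg_goto, rule_drop, rule_goto;
};

/* Create resources in order; on failure, unwind in reverse order via
 * the goto ladder, exactly like the error path in the diff above. */
static int create_res(struct anchor *a, int fail_at)
{
    if (fail_at == 1) return -1;
    a->ft = 1;
    if (fail_at == 2) goto destroy_ft;
    a->fg_drop = 1;
    if (fail_at == 3) goto destroy_fg_drop;
    a->fg_goto = 1;
    if (fail_at == 4) goto destroy_fg_goto;
    a->rule_drop = 1;
    if (fail_at == 5) goto destroy_rule_drop;
    a->rule_goto = 1;
    return 0;

destroy_rule_drop:
    a->rule_drop = 0;
destroy_fg_goto:
    a->fg_goto = 0;
destroy_fg_drop:
    a->fg_drop = 0;
destroy_ft:
    a->ft = 0;
    return -1;
}

/* Demo helper: how many resources remain live after a run that fails
 * at step fail_at (0 = full success). Anything nonzero on a failed run
 * would be a leak. */
static int live_resources_after(int fail_at)
{
    struct anchor a = {0};
    create_res(&a, fail_at);
    return a.ft + a.fg_drop + a.fg_goto + a.rule_drop + a.rule_goto;
}
```

The label order falls through deliberately, so a failure at step N executes every destroy from N-1 down to 1.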
+16
drivers/infiniband/hw/mlx5/fs.h
··· 10 10
11 11 #if IS_ENABLED(CONFIG_INFINIBAND_USER_ACCESS)
12 12 int mlx5_ib_fs_init(struct mlx5_ib_dev *dev);
13 + void mlx5_ib_fs_cleanup_anchor(struct mlx5_ib_dev *dev);
13 14 #else
14 15 static inline int mlx5_ib_fs_init(struct mlx5_ib_dev *dev)
15 16 {
··· 22 21 	mutex_init(&dev->flow_db->lock);
23 22 	return 0;
24 23 }
24 +
25 + inline void mlx5_ib_fs_cleanup_anchor(struct mlx5_ib_dev *dev) {}
25 26 #endif
27 +
26 28 static inline void mlx5_ib_fs_cleanup(struct mlx5_ib_dev *dev)
27 29 {
30 +	/* When a steering anchor is created, a special flow table is also
31 +	 * created for the user to reference. Since the user can reference it,
32 +	 * the kernel cannot trust that when the user destroys the steering
33 +	 * anchor, they no longer reference the flow table.
34 +	 *
35 +	 * To address this issue, when a user destroys a steering anchor, only
36 +	 * the flow steering rule in the table is destroyed, but the table
37 +	 * itself is kept to deal with the above scenario. The remaining
38 +	 * resources are only removed when the RDMA device is destroyed, which
39 +	 * is a safe assumption that all references are gone.
40 +	 */
41 +	mlx5_ib_fs_cleanup_anchor(dev);
28 42 	kfree(dev->flow_db);
29 43 }
30 44 #endif /* _MLX5_IB_FS_H */
+3
drivers/infiniband/hw/mlx5/main.c
··· 4275 4275 	STAGE_CREATE(MLX5_IB_STAGE_POST_IB_REG_UMR,
4276 4276 		     mlx5_ib_stage_post_ib_reg_umr_init,
4277 4277 		     NULL),
4278 +	STAGE_CREATE(MLX5_IB_STAGE_DELAY_DROP,
4279 +		     mlx5_ib_stage_delay_drop_init,
4280 +		     mlx5_ib_stage_delay_drop_cleanup),
4278 4281 	STAGE_CREATE(MLX5_IB_STAGE_RESTRACK,
4279 4282 		     mlx5_ib_restrack_init,
4280 4283 		     NULL),
+14
drivers/infiniband/hw/mlx5/mlx5_ib.h
··· 237 237 #define MLX5_IB_NUM_SNIFFER_FTS		2
238 238 #define MLX5_IB_NUM_EGRESS_FTS		1
239 239 #define MLX5_IB_NUM_FDB_FTS		MLX5_BY_PASS_NUM_REGULAR_PRIOS
240 +
241 + struct mlx5_ib_anchor {
242 +	struct mlx5_flow_table *ft;
243 +	struct mlx5_flow_group *fg_goto_table;
244 +	struct mlx5_flow_group *fg_drop;
245 +	struct mlx5_flow_handle *rule_goto_table;
246 +	struct mlx5_flow_handle *rule_drop;
247 +	unsigned int rule_goto_table_ref;
248 + };
249 +
240 250 struct mlx5_ib_flow_prio {
241 251 	struct mlx5_flow_table *flow_table;
252 +	struct mlx5_ib_anchor anchor;
242 253 	unsigned int refcount;
243 254 };
244 255
··· 1596 1585 	if (dev->lag_active &&
1597 1586 	    mlx5_lag_mode_is_hash(dev->mdev) &&
1598 1587 	    MLX5_CAP_PORT_SELECTION(dev->mdev, port_select_flow_table_bypass))
1588 +		return 0;
1589 +
1590 +	if (mlx5_lag_is_lacp_owner(dev->mdev) && !dev->lag_active)
1599 1591 		return 0;
1600 1592
1601 1593 	return dev->lag_active ||
+3
drivers/infiniband/hw/mlx5/qp.c
··· 1237 1237
1238 1238 	MLX5_SET(create_tis_in, in, uid, to_mpd(pd)->uid);
1239 1239 	MLX5_SET(tisc, tisc, transport_domain, tdn);
1240 +	if (!mlx5_ib_lag_should_assign_affinity(dev) &&
1241 +	    mlx5_lag_is_lacp_owner(dev->mdev))
1242 +		MLX5_SET(tisc, tisc, strict_lag_tx_port_affinity, 1);
1240 1243 	if (qp->flags & IB_QP_CREATE_SOURCE_QPN)
1241 1244 		MLX5_SET(tisc, tisc, underlay_qpn, qp->underlay_qpn);
1242 1245
+2 -2
drivers/infiniband/sw/rxe/rxe_cq.c
··· 113 113
114 114 	queue_advance_producer(cq->queue, QUEUE_TYPE_TO_CLIENT);
115 115
116 -	spin_unlock_irqrestore(&cq->cq_lock, flags);
117 -
118 116 	if ((cq->notify == IB_CQ_NEXT_COMP) ||
119 117 	    (cq->notify == IB_CQ_SOLICITED && solicited)) {
120 118 		cq->notify = 0;
121 119
122 120 		cq->ibcq.comp_handler(&cq->ibcq, cq->ibcq.cq_context);
123 121 	}
122 +
123 +	spin_unlock_irqrestore(&cq->cq_lock, flags);
124 124
125 125 	return 0;
126 126 }
+6
drivers/infiniband/sw/rxe/rxe_net.c
··· 159 159 	pkt->mask = RXE_GRH_MASK;
160 160 	pkt->paylen = be16_to_cpu(udph->len) - sizeof(*udph);
161 161
162 +	/* remove udp header */
163 +	skb_pull(skb, sizeof(struct udphdr));
164 +
162 165 	rxe_rcv(skb);
163 166
164 167 	return 0;
··· 403 400 		kfree_skb(skb);
404 401 		return -EIO;
405 402 	}
403 +
404 +	/* remove udp header */
405 +	skb_pull(skb, sizeof(struct udphdr));
406 406
407 407 	rxe_rcv(skb);
408 408
+3 -4
drivers/infiniband/sw/rxe/rxe_qp.c
··· 176 176 	spin_lock_init(&qp->rq.producer_lock);
177 177 	spin_lock_init(&qp->rq.consumer_lock);
178 178
179 +	skb_queue_head_init(&qp->req_pkts);
180 +	skb_queue_head_init(&qp->resp_pkts);
181 +
179 182 	atomic_set(&qp->ssn, 0);
180 183 	atomic_set(&qp->skb_out, 0);
181 184 }
··· 237 234 	qp->req.opcode = -1;
238 235 	qp->comp.opcode = -1;
239 236
240 -	skb_queue_head_init(&qp->req_pkts);
241 -
242 237 	rxe_init_task(&qp->req.task, qp, rxe_requester);
243 238 	rxe_init_task(&qp->comp.task, qp, rxe_completer);
244 239
··· 279 278 			return err;
280 279 		}
281 280 	}
282 -
283 -	skb_queue_head_init(&qp->resp_pkts);
284 281
285 282 	rxe_init_task(&qp->resp.task, qp, rxe_responder);
286 283
+2 -1
drivers/infiniband/sw/rxe/rxe_resp.c
··· 489 489 		if (mw->access & IB_ZERO_BASED)
490 490 			qp->resp.offset = mw->addr;
491 491
492 -		rxe_put(mw);
493 492 		rxe_get(mr);
493 +		rxe_put(mw);
494 +		mw = NULL;
494 495 	} else {
495 496 		mr = lookup_mr(qp->pd, access, rkey, RXE_LOOKUP_REMOTE);
496 497 		if (!mr) {
+12 -4
drivers/infiniband/ulp/isert/ib_isert.c
··· 657 657 isert_connect_error(struct rdma_cm_id *cma_id)
658 658 {
659 659 	struct isert_conn *isert_conn = cma_id->qp->qp_context;
660 +	struct isert_np *isert_np = cma_id->context;
660 661
661 662 	ib_drain_qp(isert_conn->qp);
663 +
664 +	mutex_lock(&isert_np->mutex);
662 665 	list_del_init(&isert_conn->node);
666 +	mutex_unlock(&isert_np->mutex);
663 667 	isert_conn->cm_id = NULL;
664 668 	isert_put_conn(isert_conn);
665 669
··· 2435 2431 {
2436 2432 	struct isert_np *isert_np = np->np_context;
2437 2433 	struct isert_conn *isert_conn, *n;
2434 +	LIST_HEAD(drop_conn_list);
2438 2435
2439 2436 	if (isert_np->cm_id)
2440 2437 		rdma_destroy_id(isert_np->cm_id);
··· 2455 2450 				 node) {
2456 2451 			isert_info("cleaning isert_conn %p state (%d)\n",
2457 2452 				   isert_conn, isert_conn->state);
2458 -			isert_connect_release(isert_conn);
2453 +			list_move_tail(&isert_conn->node, &drop_conn_list);
2459 2454 		}
2460 2455 	}
··· 2466 2461 				 node) {
2467 2462 			isert_info("cleaning isert_conn %p state (%d)\n",
2468 2463 				   isert_conn, isert_conn->state);
2469 -			isert_connect_release(isert_conn);
2464 +			list_move_tail(&isert_conn->node, &drop_conn_list);
2470 2465 		}
2471 2466 	}
2472 2467 	mutex_unlock(&isert_np->mutex);
2468 +
2469 +	list_for_each_entry_safe(isert_conn, n, &drop_conn_list, node) {
2470 +		list_del_init(&isert_conn->node);
2471 +		isert_connect_release(isert_conn);
2472 +	}
2473 2473
2474 2474 	np->np_context = NULL;
2475 2475 	kfree(isert_np);
··· 2570 2560 	isert_put_unsol_pending_cmds(conn);
2571 2561 	isert_wait4cmds(conn);
2572 2562 	isert_wait4logout(isert_conn);
2573 -
2574 -	queue_work(isert_release_wq, &isert_conn->release_work);
2575 2563 }
2576 2564
2577 2565 static void isert_free_conn(struct iscsit_conn *conn)
+23 -32
drivers/infiniband/ulp/rtrs/rtrs-clt.c
··· 2040 2040 	return 0;
2041 2041 }
2042 2042
2043 + /* The caller should do the cleanup in case of error */
2043 2044 static int create_cm(struct rtrs_clt_con *con)
2044 2045 {
2045 2046 	struct rtrs_path *s = con->c.path;
··· 2063 2062 	err = rdma_set_reuseaddr(cm_id, 1);
2064 2063 	if (err != 0) {
2065 2064 		rtrs_err(s, "Set address reuse failed, err: %d\n", err);
2066 -		goto destroy_cm;
2065 +		return err;
2067 2066 	}
2068 2067 	err = rdma_resolve_addr(cm_id, (struct sockaddr *)&clt_path->s.src_addr,
2069 2068 				(struct sockaddr *)&clt_path->s.dst_addr,
2070 2069 				RTRS_CONNECT_TIMEOUT_MS);
2071 2070 	if (err) {
2072 2071 		rtrs_err(s, "Failed to resolve address, err: %d\n", err);
2073 -		goto destroy_cm;
2072 +		return err;
2074 2073 	}
2075 2074 	/*
2076 2075 	 * Combine connection status and session events. This is needed
··· 2085 2084 		if (err == 0)
2086 2085 			err = -ETIMEDOUT;
2087 2086 		/* Timedout or interrupted */
2088 -		goto errr;
2087 +		return err;
2089 2088 	}
2090 -	if (con->cm_err < 0) {
2091 -		err = con->cm_err;
2092 -		goto errr;
2093 -	}
2094 -	if (READ_ONCE(clt_path->state) != RTRS_CLT_CONNECTING) {
2089 +	if (con->cm_err < 0)
2090 +		return con->cm_err;
2091 +	if (READ_ONCE(clt_path->state) != RTRS_CLT_CONNECTING)
2095 2092 		/* Device removal */
2096 -		err = -ECONNABORTED;
2097 -		goto errr;
2098 -	}
2093 +		return -ECONNABORTED;
2099 2094
2100 2095 	return 0;
2101 -
2102 - errr:
2103 -	stop_cm(con);
2104 -	mutex_lock(&con->con_mutex);
2105 -	destroy_con_cq_qp(con);
2106 -	mutex_unlock(&con->con_mutex);
2107 - destroy_cm:
2108 -	destroy_cm(con);
2109 -
2110 -	return err;
2111 2096 }
2112 2097
2113 2098 static void rtrs_clt_path_up(struct rtrs_clt_path *clt_path)
··· 2321 2334 static int init_conns(struct rtrs_clt_path *clt_path)
2322 2335 {
2323 2336 	unsigned int cid;
2324 -	int err;
2337 +	int err, i;
2325 2338
2326 2339 	/*
2327 2340 	 * On every new session connections increase reconnect counter
··· 2337 2350 			goto destroy;
2338 2351
2339 2352 		err = create_cm(to_clt_con(clt_path->s.con[cid]));
2340 -		if (err) {
2341 -			destroy_con(to_clt_con(clt_path->s.con[cid]));
2353 +		if (err)
2342 2354 			goto destroy;
2343 -		}
2344 2355 	}
2345 2356 	err = alloc_path_reqs(clt_path);
2346 2357 	if (err)
··· 2349 2364 	return 0;
2350 2365
2351 2366 destroy:
2352 -	while (cid--) {
2353 -		struct rtrs_clt_con *con = to_clt_con(clt_path->s.con[cid]);
2367 +	/* Make sure we do the cleanup in the order they are created */
2368 +	for (i = 0; i <= cid; i++) {
2369 +		struct rtrs_clt_con *con;
2354 2370
2355 -		stop_cm(con);
2371 +		if (!clt_path->s.con[i])
2372 +			break;
2356 2373
2357 -		mutex_lock(&con->con_mutex);
2358 -		destroy_con_cq_qp(con);
2359 -		mutex_unlock(&con->con_mutex);
2360 -		destroy_cm(con);
2374 +		con = to_clt_con(clt_path->s.con[i]);
2375 +		if (con->c.cm_id) {
2376 +			stop_cm(con);
2377 +			mutex_lock(&con->con_mutex);
2378 +			destroy_con_cq_qp(con);
2379 +			mutex_unlock(&con->con_mutex);
2380 +			destroy_cm(con);
2381 +		}
2361 2382 		destroy_con(con);
2362 2383 	}
2363 2384 	/*
+3 -1
drivers/infiniband/ulp/rtrs/rtrs.c
··· 37 37 		goto err;
38 38
39 39 	iu->dma_addr = ib_dma_map_single(dma_dev, iu->buf, size, dir);
40 -	if (ib_dma_mapping_error(dma_dev, iu->dma_addr))
40 +	if (ib_dma_mapping_error(dma_dev, iu->dma_addr)) {
41 +		kfree(iu->buf);
41 42 		goto err;
43 +	}
42 44
43 45 	iu->cqe.done = done;
44 46 	iu->size = size;
+1 -1
drivers/input/input.c
··· 703 703
704 704 	__input_release_device(handle);
705 705
706 -	if (!dev->inhibited && !--dev->users) {
706 +	if (!--dev->users && !dev->inhibited) {
707 707 		if (dev->poller)
708 708 			input_dev_poller_stop(dev->poller);
709 709 		if (dev->close)
-1
drivers/input/joystick/xpad.c
··· 281 281 	{ 0x1430, 0xf801, "RedOctane Controller", 0, XTYPE_XBOX360 },
282 282 	{ 0x146b, 0x0601, "BigBen Interactive XBOX 360 Controller", 0, XTYPE_XBOX360 },
283 283 	{ 0x146b, 0x0604, "Bigben Interactive DAIJA Arcade Stick", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOX360 },
284 -	{ 0x1532, 0x0037, "Razer Sabertooth", 0, XTYPE_XBOX360 },
285 284 	{ 0x1532, 0x0a00, "Razer Atrox Arcade Stick", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOXONE },
286 285 	{ 0x1532, 0x0a03, "Razer Wildcat", 0, XTYPE_XBOXONE },
287 286 	{ 0x15e4, 0x3f00, "Power A Mini Pro Elite", 0, XTYPE_XBOX360 },
+30
drivers/input/misc/soc_button_array.c
··· 109 109 };
110 110
111 111 /*
112 + * Some devices have a wrong entry which points to a GPIO which is
113 + * required in another driver, so this driver must not claim it.
114 + */
115 + static const struct dmi_system_id dmi_invalid_acpi_index[] = {
116 +	{
117 +		/*
118 +		 * Lenovo Yoga Book X90F / X90L, the PNP0C40 home button entry
119 +		 * points to a GPIO which is not a home button and which is
120 +		 * required by the lenovo-yogabook driver.
121 +		 */
122 +		.matches = {
123 +			DMI_EXACT_MATCH(DMI_SYS_VENDOR, "Intel Corporation"),
124 +			DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "CHERRYVIEW D1 PLATFORM"),
125 +			DMI_EXACT_MATCH(DMI_PRODUCT_VERSION, "YETI-11"),
126 +		},
127 +		.driver_data = (void *)1l,
128 +	},
129 +	{} /* Terminating entry */
130 + };
131 +
132 + /*
112 133  * Get the Nth GPIO number from the ACPI object.
113 134  */
114 135 static int soc_button_lookup_gpio(struct device *dev, int acpi_index,
··· 158 137 	struct platform_device *pd;
159 138 	struct gpio_keys_button *gpio_keys;
160 139 	struct gpio_keys_platform_data *gpio_keys_pdata;
140 +	const struct dmi_system_id *dmi_id;
141 +	int invalid_acpi_index = -1;
161 142 	int error, gpio, irq;
162 143 	int n_buttons = 0;
··· 177 154 	gpio_keys = (void *)(gpio_keys_pdata + 1);
178 155 	n_buttons = 0;
179 156
157 +	dmi_id = dmi_first_match(dmi_invalid_acpi_index);
158 +	if (dmi_id)
159 +		invalid_acpi_index = (long)dmi_id->driver_data;
160 +
180 161 	for (info = button_info; info->name; info++) {
181 162 		if (info->autorepeat != autorepeat)
182 163 			continue;
164 +
165 +		if (info->acpi_index == invalid_acpi_index)
166 +			continue;
183 167
184 168 		error = soc_button_lookup_gpio(&pdev->dev, info->acpi_index, &gpio, &irq);
+5 -4
drivers/input/mouse/elantech.c
··· 674 674 	struct input_dev *dev = psmouse->dev;
675 675 	struct elantech_data *etd = psmouse->private;
676 676 	unsigned char *packet = psmouse->packet;
677 -	int id = ((packet[3] & 0xe0) >> 5) - 1;
677 +	int id;
678 678 	int pres, traces;
679 679
680 -	if (id < 0)
680 +	id = ((packet[3] & 0xe0) >> 5) - 1;
681 +	if (id < 0 || id >= ETP_MAX_FINGERS)
681 682 		return;
682 683
683 684 	etd->mt[id].x = ((packet[1] & 0x0f) << 8) | packet[2];
··· 708 707 	int id, sid;
709 708
710 709 	id = ((packet[0] & 0xe0) >> 5) - 1;
711 -	if (id < 0)
710 +	if (id < 0 || id >= ETP_MAX_FINGERS)
712 711 		return;
713 712
714 713 	sid = ((packet[3] & 0xe0) >> 5) - 1;
··· 729 728 	input_report_abs(dev, ABS_MT_POSITION_X, etd->mt[id].x);
730 729 	input_report_abs(dev, ABS_MT_POSITION_Y, etd->mt[id].y);
731 730
732 -	if (sid >= 0) {
731 +	if (sid >= 0 && sid < ETP_MAX_FINGERS) {
733 732 		etd->mt[sid].x += delta_x2 * weight;
734 733 		etd->mt[sid].y -= delta_y2 * weight;
735 734 		input_mt_slot(dev, sid);
+1 -1
drivers/input/touchscreen/cyttsp5.c
··· 560 560 static int cyttsp5_hid_output_bl_launch_app(struct cyttsp5 *ts)
561 561 {
562 562 	int rc;
563 -	u8 cmd[HID_OUTPUT_BL_LAUNCH_APP];
563 +	u8 cmd[HID_OUTPUT_BL_LAUNCH_APP_SIZE];
564 564 	u16 crc;
565 565
566 566 	put_unaligned_le16(HID_OUTPUT_BL_LAUNCH_APP_SIZE, cmd);
+1 -4
drivers/md/dm-ioctl.c
··· 1168 1168 	/* Do we need to load a new map ? */
1169 1169 	if (new_map) {
1170 1170 		sector_t old_size, new_size;
1171 -		int srcu_idx;
1172 1171
1173 1172 		/* Suspend if it isn't already suspended */
1174 -		old_map = dm_get_live_table(md, &srcu_idx);
1175 -		if ((param->flags & DM_SKIP_LOCKFS_FLAG) || !old_map)
1173 +		if (param->flags & DM_SKIP_LOCKFS_FLAG)
1176 1174 			suspend_flags &= ~DM_SUSPEND_LOCKFS_FLAG;
1177 -		dm_put_live_table(md, srcu_idx);
1178 1175 		if (param->flags & DM_NOFLUSH_FLAG)
1179 1176 			suspend_flags |= DM_SUSPEND_NOFLUSH_FLAG;
1180 1177 		if (!dm_suspended_md(md))
+12 -8
drivers/md/dm-thin-metadata.c
··· 1756 1756
1757 1757 int dm_pool_block_is_shared(struct dm_pool_metadata *pmd, dm_block_t b, bool *result)
1758 1758 {
1759 -	int r;
1759 +	int r = -EINVAL;
1760 1760 	uint32_t ref_count;
1761 1761
1762 1762 	down_read(&pmd->root_lock);
1763 -	r = dm_sm_get_count(pmd->data_sm, b, &ref_count);
1764 -	if (!r)
1765 -		*result = (ref_count > 1);
1763 +	if (!pmd->fail_io) {
1764 +		r = dm_sm_get_count(pmd->data_sm, b, &ref_count);
1765 +		if (!r)
1766 +			*result = (ref_count > 1);
1767 +	}
1766 1768 	up_read(&pmd->root_lock);
1767 1769
1768 1770 	return r;
··· 1772 1770
1773 1771 int dm_pool_inc_data_range(struct dm_pool_metadata *pmd, dm_block_t b, dm_block_t e)
1774 1772 {
1775 -	int r = 0;
1773 +	int r = -EINVAL;
1776 1774
1777 1775 	pmd_write_lock(pmd);
1778 -	r = dm_sm_inc_blocks(pmd->data_sm, b, e);
1776 +	if (!pmd->fail_io)
1777 +		r = dm_sm_inc_blocks(pmd->data_sm, b, e);
1779 1778 	pmd_write_unlock(pmd);
1780 1779
1781 1780 	return r;
··· 1784 1781
1785 1782 int dm_pool_dec_data_range(struct dm_pool_metadata *pmd, dm_block_t b, dm_block_t e)
1786 1783 {
1787 -	int r = 0;
1784 +	int r = -EINVAL;
1788 1785
1789 1786 	pmd_write_lock(pmd);
1790 -	r = dm_sm_dec_blocks(pmd->data_sm, b, e);
1787 +	if (!pmd->fail_io)
1788 +		r = dm_sm_dec_blocks(pmd->data_sm, b, e);
1791 1789 	pmd_write_unlock(pmd);
1792 1790
1793 1791 	return r;
+1 -2
drivers/md/dm-thin.c
··· 401 401 	sector_t s = block_to_sectors(tc->pool, data_b);
402 402 	sector_t len = block_to_sectors(tc->pool, data_e - data_b);
403 403
404 -	return __blkdev_issue_discard(tc->pool_dev->bdev, s, len, GFP_NOWAIT,
405 -				      &op->bio);
404 +	return __blkdev_issue_discard(tc->pool_dev->bdev, s, len, GFP_NOIO, &op->bio);
406 405 }
407 406
408 407 static void end_discard(struct discard_op *op, int r)
+20 -9
drivers/md/dm.c
··· 1172 1172 }
1173 1173
1174 1174 static sector_t __max_io_len(struct dm_target *ti, sector_t sector,
1175 -			     unsigned int max_granularity)
1175 +			     unsigned int max_granularity,
1176 +			     unsigned int max_sectors)
1176 1177 {
1177 1178 	sector_t target_offset = dm_target_offset(ti, sector);
1178 1179 	sector_t len = max_io_len_target_boundary(ti, target_offset);
··· 1187 1186 	if (!max_granularity)
1188 1187 		return len;
1189 1188 	return min_t(sector_t, len,
1190 -		     min(queue_max_sectors(ti->table->md->queue),
1189 +		     min(max_sectors ? : queue_max_sectors(ti->table->md->queue),
1191 1190 			 blk_chunk_sectors_left(target_offset, max_granularity)));
1192 1191 }
1193 1192
1194 1193 static inline sector_t max_io_len(struct dm_target *ti, sector_t sector)
1195 1194 {
1196 -	return __max_io_len(ti, sector, ti->max_io_len);
1195 +	return __max_io_len(ti, sector, ti->max_io_len, 0);
1197 1196 }
1198 1197
1199 1198 int dm_set_target_max_io_len(struct dm_target *ti, sector_t len)
··· 1582 1581
1583 1582 static void __send_changing_extent_only(struct clone_info *ci, struct dm_target *ti,
1584 1583 					unsigned int num_bios,
1585 -					unsigned int max_granularity)
1584 +					unsigned int max_granularity,
1585 +					unsigned int max_sectors)
1586 1586 {
1587 1587 	unsigned int len, bios;
1588 1588
1589 1589 	len = min_t(sector_t, ci->sector_count,
1590 -		    __max_io_len(ti, ci->sector, max_granularity));
1590 +		    __max_io_len(ti, ci->sector, max_granularity, max_sectors));
1591 1591
1592 1592 	atomic_add(num_bios, &ci->io->io_count);
1593 1593 	bios = __send_duplicate_bios(ci, ti, num_bios, &len);
··· 1625 1623 {
1626 1624 	unsigned int num_bios = 0;
1627 1625 	unsigned int max_granularity = 0;
1626 +	unsigned int max_sectors = 0;
1628 1627 	struct queue_limits *limits = dm_get_queue_limits(ti->table->md);
1629 1628
1630 1629 	switch (bio_op(ci->bio)) {
1631 1630 	case REQ_OP_DISCARD:
1632 1631 		num_bios = ti->num_discard_bios;
1632 +		max_sectors = limits->max_discard_sectors;
1633 1633 		if (ti->max_discard_granularity)
1634 -			max_granularity = limits->max_discard_sectors;
1634 +			max_granularity = max_sectors;
1635 1635 		break;
1636 1636 	case REQ_OP_SECURE_ERASE:
1637 1637 		num_bios = ti->num_secure_erase_bios;
1638 +		max_sectors = limits->max_secure_erase_sectors;
1638 1639 		if (ti->max_secure_erase_granularity)
1639 -			max_granularity = limits->max_secure_erase_sectors;
1640 +			max_granularity = max_sectors;
1640 1641 		break;
1641 1642 	case REQ_OP_WRITE_ZEROES:
1642 1643 		num_bios = ti->num_write_zeroes_bios;
1644 +		max_sectors = limits->max_write_zeroes_sectors;
1643 1645 		if (ti->max_write_zeroes_granularity)
1644 -			max_granularity = limits->max_write_zeroes_sectors;
1646 +			max_granularity = max_sectors;
1645 1647 		break;
1646 1648 	default:
1647 1649 		break;
··· 1660 1654 	if (unlikely(!num_bios))
1661 1655 		return BLK_STS_NOTSUPP;
1662 1656
1663 -	__send_changing_extent_only(ci, ti, num_bios, max_granularity);
1657 +	__send_changing_extent_only(ci, ti, num_bios,
1658 +				    max_granularity, max_sectors);
1664 1659 	return BLK_STS_OK;
1665 1660 }
··· 2815 2808 	}
2816 2809
2817 2810 	map = rcu_dereference_protected(md->map, lockdep_is_held(&md->suspend_lock));
2811 +	if (!map) {
2812 +		/* avoid deadlock with fs/namespace.c:do_mount() */
2813 +		suspend_flags &= ~DM_SUSPEND_LOCKFS_FLAG;
2814 +	}
2818 2815
2819 2816 	r = __dm_suspend(md, map, suspend_flags, TASK_INTERRUPTIBLE, DMF_SUSPENDED);
2820 2817 	if (r)
+9 -44
drivers/media/dvb-core/dvb_frontend.c
··· 817 817
818 818 	dev_dbg(fe->dvb->device, "%s:\n", __func__);
819 819
820 -	mutex_lock(&fe->remove_mutex);
821 -
822 820 	if (fe->exit != DVB_FE_DEVICE_REMOVED)
823 821 		fe->exit = DVB_FE_NORMAL_EXIT;
824 822 	mb();
825 823
826 -	if (!fepriv->thread) {
827 -		mutex_unlock(&fe->remove_mutex);
824 +	if (!fepriv->thread)
828 825 		return;
829 -	}
830 826
831 827 	kthread_stop(fepriv->thread);
832 -
833 -	mutex_unlock(&fe->remove_mutex);
834 -
835 -	if (fepriv->dvbdev->users < -1) {
836 -		wait_event(fepriv->dvbdev->wait_queue,
837 -			   fepriv->dvbdev->users == -1);
838 -	}
839 828
840 829 	sema_init(&fepriv->sem, 1);
841 830 	fepriv->state = FESTATE_IDLE;
··· 2769 2780 	struct dvb_adapter *adapter = fe->dvb;
2770 2781 	int ret;
2771 2782
2772 -	mutex_lock(&fe->remove_mutex);
2773 -
2774 2783 	dev_dbg(fe->dvb->device, "%s:\n", __func__);
2775 -	if (fe->exit == DVB_FE_DEVICE_REMOVED) {
2776 -		ret = -ENODEV;
2777 -		goto err_remove_mutex;
2778 -	}
2784 +	if (fe->exit == DVB_FE_DEVICE_REMOVED)
2785 +		return -ENODEV;
2779 2786
2780 2787 	if (adapter->mfe_shared == 2) {
2781 2788 		mutex_lock(&adapter->mfe_lock);
··· 2779 2794 			if (adapter->mfe_dvbdev &&
2780 2795 			    !adapter->mfe_dvbdev->writers) {
2781 2796 				mutex_unlock(&adapter->mfe_lock);
2782 -				ret = -EBUSY;
2783 -				goto err_remove_mutex;
2797 +				return -EBUSY;
2784 2798 			}
2785 2799 			adapter->mfe_dvbdev = dvbdev;
2786 2800 		}
··· 2802 2818 			while (mferetry-- && (mfedev->users != -1 ||
2803 2819 					      mfepriv->thread)) {
2804 2820 				if (msleep_interruptible(500)) {
2805 -					if (signal_pending(current)) {
2806 -						ret = -EINTR;
2807 -						goto err_remove_mutex;
2808 -					}
2821 +					if (signal_pending(current))
2822 +						return -EINTR;
2809 2823 				}
2810 2824 			}
2811 2825
··· 2815 2833 			if (mfedev->users != -1 ||
2816 2834 			    mfepriv->thread) {
2817 2835 				mutex_unlock(&adapter->mfe_lock);
2818 -				ret = -EBUSY;
2819 -				goto err_remove_mutex;
2836 +				return -EBUSY;
2820 2837 			}
2821 2838 			adapter->mfe_dvbdev = dvbdev;
2822 2839 		}
··· 2874 2893
2875 2894 	if (adapter->mfe_shared)
2876 2895 		mutex_unlock(&adapter->mfe_lock);
2877 -
2878 -	mutex_unlock(&fe->remove_mutex);
2879 2896 	return ret;
2880 2897
2881 2898 err3:
··· 2895 2916 err0:
2896 2917 	if (adapter->mfe_shared)
2897 2918 		mutex_unlock(&adapter->mfe_lock);
2898 -
2899 - err_remove_mutex:
2900 -	mutex_unlock(&fe->remove_mutex);
2901 2919 	return ret;
2902 2920 }
··· 2904 2928 	struct dvb_frontend *fe = dvbdev->priv;
2905 2929 	struct dvb_frontend_private *fepriv = fe->frontend_priv;
2906 2930 	int ret;
2907 -
2908 -	mutex_lock(&fe->remove_mutex);
2909 2931
2910 2932 	dev_dbg(fe->dvb->device, "%s:\n", __func__);
··· 2926 2952 		}
2927 2953 		mutex_unlock(&fe->dvb->mdev_lock);
2928 2954 #endif
2955 +		if (fe->exit != DVB_FE_NO_EXIT)
2956 +			wake_up(&dvbdev->wait_queue);
2929 2957 		if (fe->ops.ts_bus_ctrl)
2930 2958 			fe->ops.ts_bus_ctrl(fe, 0);
2931 -
2932 -		if (fe->exit != DVB_FE_NO_EXIT) {
2933 -			mutex_unlock(&fe->remove_mutex);
2934 -			wake_up(&dvbdev->wait_queue);
2935 -		} else {
2936 -			mutex_unlock(&fe->remove_mutex);
2937 -		}
2938 -
2939 -	} else {
2940 -		mutex_unlock(&fe->remove_mutex);
2941 2959 	}
2942 2960
2943 2961 	dvb_frontend_put(fe);
··· 3030 3064 	fepriv = fe->frontend_priv;
3031 3065
3032 3066 	kref_init(&fe->refcount);
3033 -	mutex_init(&fe->remove_mutex);
3034 3067
3035 3068 	/*
3036 3069 	 * After initialization, there need to be two references: one
+1
drivers/misc/eeprom/Kconfig
··· 6 6 	depends on I2C && SYSFS
7 7 	select NVMEM
8 8 	select NVMEM_SYSFS
9 +	select REGMAP
9 10 	select REGMAP_I2C
10 11 	help
11 12 	  Enable this driver to get read/write support to most I2C EEPROMs
-4
drivers/net/dsa/lan9303-core.c
··· 1188 1188 	struct lan9303 *chip = ds->priv;
1189 1189
1190 1190 	dev_dbg(chip->dev, "%s(%d, %pM, %d)\n", __func__, port, addr, vid);
1191 -	if (vid)
1192 -		return -EOPNOTSUPP;
1193 1191
1194 1192 	return lan9303_alr_add_port(chip, addr, port, false);
1195 1193 }
··· 1199 1201 	struct lan9303 *chip = ds->priv;
1200 1202
1201 1203 	dev_dbg(chip->dev, "%s(%d, %pM, %d)\n", __func__, port, addr, vid);
1202 -	if (vid)
1203 -		return -EOPNOTSUPP;
1204 1204 	lan9303_alr_del_port(chip, addr, port);
1205 1205
1206 1206 	return 0;
+1 -1
drivers/net/dsa/ocelot/felix_vsc9959.c
··· 1263 1263 	/* Consider the standard Ethernet overhead of 8 octets preamble+SFD,
1264 1264 	 * 4 octets FCS, 12 octets IFG.
1265 1265 	 */
1266 -	needed_bit_time_ps = (maxlen + 24) * picos_per_byte;
1266 +	needed_bit_time_ps = (u64)(maxlen + 24) * picos_per_byte;
1267 1267
1268 1268 	dev_dbg(ocelot->dev,
1269 1269 		"port %d: max frame size %d needs %llu ps at speed %d\n",
+1
drivers/net/dsa/qca/Kconfig
··· 20 20 	bool "Qualcomm Atheros QCA8K Ethernet switch family LEDs support"
21 21 	depends on NET_DSA_QCA8K
22 22 	depends on LEDS_CLASS=y || LEDS_CLASS=NET_DSA_QCA8K
23 +	depends on LEDS_TRIGGERS
23 24 	help
24 25 	  This enabled support for LEDs present on the Qualcomm Atheros
25 26 	  QCA8K Ethernet switch chips.
+8 -2
drivers/net/ethernet/amd/pds_core/dev.c
··· 68 68
69 69 bool pdsc_is_fw_good(struct pdsc *pdsc)
70 70 {
71 -	u8 gen = pdsc->fw_status & PDS_CORE_FW_STS_F_GENERATION;
71 +	bool fw_running = pdsc_is_fw_running(pdsc);
72 +	u8 gen;
72 73
73 -	return pdsc_is_fw_running(pdsc) && gen == pdsc->fw_generation;
74 +	/* Make sure to update the cached fw_status by calling
75 +	 * pdsc_is_fw_running() before getting the generation
76 +	 */
77 +	gen = pdsc->fw_status & PDS_CORE_FW_STS_F_GENERATION;
78 +
79 +	return fw_running && gen == pdsc->fw_generation;
74 80 }
75 81
76 82 static u8 pdsc_devcmd_status(struct pdsc *pdsc)
+2 -2
drivers/net/ethernet/broadcom/bcmsysport.c
··· 2531 2531 	priv->irq0 = platform_get_irq(pdev, 0);
2532 2532 	if (!priv->is_lite) {
2533 2533 		priv->irq1 = platform_get_irq(pdev, 1);
2534 -		priv->wol_irq = platform_get_irq(pdev, 2);
2534 +		priv->wol_irq = platform_get_irq_optional(pdev, 2);
2535 2535 	} else {
2536 -		priv->wol_irq = platform_get_irq(pdev, 1);
2536 +		priv->wol_irq = platform_get_irq_optional(pdev, 1);
2537 2537 	}
2538 2538 	if (priv->irq0 <= 0 || (priv->irq1 <= 0 && !priv->is_lite)) {
2539 2539 		ret = -EINVAL;
+7 -2
drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
··· 14294 14294 	bp->fw_seq = SHMEM_RD(bp, func_mb[BP_FW_MB_IDX(bp)].drv_mb_header) &
14295 14295 		     DRV_MSG_SEQ_NUMBER_MASK;
14296 14296
14297 -	if (netif_running(dev))
14298 -		bnx2x_nic_load(bp, LOAD_NORMAL);
14297 +	if (netif_running(dev)) {
14298 +		if (bnx2x_nic_load(bp, LOAD_NORMAL)) {
14299 +			netdev_err(bp->dev, "Error during driver initialization, try unloading/reloading the driver\n");
14300 +			goto done;
14301 +		}
14302 +	}
14299 14303
14300 14304 	netif_device_attach(dev);
14301 14305
14306 + done:
14302 14307 	rtnl_unlock();
14303 14308 }
+32 -10
drivers/net/ethernet/broadcom/bnxt/bnxt.c
··· 692 692
693 693 	__netif_txq_completed_wake(txq, nr_pkts, tx_bytes,
694 694 				   bnxt_tx_avail(bp, txr), bp->tx_wake_thresh,
695 -				   READ_ONCE(txr->dev_state) != BNXT_DEV_STATE_CLOSING);
695 +				   READ_ONCE(txr->dev_state) == BNXT_DEV_STATE_CLOSING);
696 696 }
697 697
698 698 static struct page *__bnxt_alloc_rx_page(struct bnxt *bp, dma_addr_t *mapping,
··· 2364 2364 	if (BNXT_PTP_USE_RTC(bp)) {
2365 2365 		struct bnxt_ptp_cfg *ptp = bp->ptp_cfg;
2366 2366 		u64 ns;
2367 +
2368 +		if (!ptp)
2369 +			goto async_event_process_exit;
2367 2370
2368 2371 		spin_lock_bh(&ptp->ptp_lock);
2369 2372 		bnxt_ptp_update_current_time(bp);
··· 4766 4763 		if (event_id == ASYNC_EVENT_CMPL_EVENT_ID_ERROR_RECOVERY &&
4767 4764 		    !(bp->fw_cap & BNXT_FW_CAP_ERROR_RECOVERY))
4768 4765 			continue;
4766 +		if (event_id == ASYNC_EVENT_CMPL_EVENT_ID_PHC_UPDATE &&
4767 +		    !bp->ptp_cfg)
4768 +			continue;
4769 4769 		__set_bit(bnxt_async_events_arr[i], async_events_bmap);
4770 4770 	}
4771 4771 	if (bmap && bmap_size) {
··· 5356 5350 	if (hwrm_req_init(bp, req, HWRM_VNIC_RSS_QCFG))
5357 5351 		return;
5358 5352
5353 +	req->vnic_id = cpu_to_le16(vnic->fw_vnic_id);
5359 5354 	/* all contexts configured to same hash_type, zero always exists */
5360 5355 	req->rss_ctx_idx = cpu_to_le16(vnic->fw_rss_cos_lb_ctx[0]);
5361 5356 	resp = hwrm_req_hold(bp, req);
··· 8819 8812 		goto err_out;
8820 8813 	}
8821 8814
8815 +	if (BNXT_VF(bp))
8816 +		bnxt_hwrm_func_qcfg(bp);
8817 +
8822 8818 	rc = bnxt_setup_vnic(bp, 0);
8823 8819 	if (rc)
8824 8820 		goto err_out;
··· 11608 11598 static void bnxt_fw_health_check(struct bnxt *bp)
11609 11599 {
11610 11600 	struct bnxt_fw_health *fw_health = bp->fw_health;
11601 +	struct pci_dev *pdev = bp->pdev;
11611 11602 	u32 val;
11612 11603
11613 11604 	if (!fw_health->enabled || test_bit(BNXT_STATE_IN_FW_RESET, &bp->state))
··· 11622 11611 	}
11623 11612
11624 11613 	val = bnxt_fw_health_readl(bp, BNXT_FW_HEARTBEAT_REG);
11625 11614 	if (val == fw_health->last_fw_heartbeat && pci_device_is_present(pdev)) {
11626 11615 		fw_health->arrests++;
11627 11616 		goto fw_reset;
11628 11617 	}
··· 11630 11619 	fw_health->last_fw_heartbeat = val;
11631 11620
11632 11621 	val = bnxt_fw_health_readl(bp, BNXT_FW_RESET_CNT_REG);
11633 -	if (val != fw_health->last_fw_reset_cnt) {
11622 +	if (val != fw_health->last_fw_reset_cnt && pci_device_is_present(pdev)) {
11634 11623 		fw_health->discoveries++;
11635 11624 		goto fw_reset;
11636 11625 	}
··· 13036 13025
13037 13026 #endif /* CONFIG_RFS_ACCEL */
13038 13027
13039 - static int bnxt_udp_tunnel_sync(struct net_device *netdev, unsigned int table)
13028 + static int bnxt_udp_tunnel_set_port(struct net_device *netdev, unsigned int table,
13029 +				     unsigned int entry, struct udp_tunnel_info *ti)
13040 13030 {
13041 13031 	struct bnxt *bp = netdev_priv(netdev);
13042 -	struct udp_tunnel_info ti;
13043 13032 	unsigned int cmd;
13044 13033
13045 -	udp_tunnel_nic_get_port(netdev, table, 0, &ti);
13046 -	if (ti.type == UDP_TUNNEL_TYPE_VXLAN)
13034 +	if (ti->type == UDP_TUNNEL_TYPE_VXLAN)
13047 13035 		cmd = TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_VXLAN;
13048 13036 	else
13049 13037 		cmd = TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_GENEVE;
13050 13038
13051 -	if (ti.port)
13052 -		return bnxt_hwrm_tunnel_dst_port_alloc(bp, ti.port, cmd);
13039 +	return bnxt_hwrm_tunnel_dst_port_alloc(bp, ti->port, cmd);
13040 + }
13041 +
13042 + static int bnxt_udp_tunnel_unset_port(struct net_device *netdev, unsigned int table,
13043 +				       unsigned int entry, struct udp_tunnel_info *ti)
13044 + {
13045 +	struct bnxt *bp = netdev_priv(netdev);
13046 +	unsigned int cmd;
13047 +
13048 +	if (ti->type == UDP_TUNNEL_TYPE_VXLAN)
13049 +		cmd = TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_VXLAN;
13050 +	else
13051 +		cmd = TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_GENEVE;
13053 13052
13054 13053 	return bnxt_hwrm_tunnel_dst_port_free(bp, cmd);
13055 13054 }
13056 13055
13057 13056 static const struct udp_tunnel_nic_info bnxt_udp_tunnels = {
13058 -	.sync_table	= bnxt_udp_tunnel_sync,
13057 +	.set_port	= bnxt_udp_tunnel_set_port,
13058 +	.unset_port	= bnxt_udp_tunnel_unset_port,
13059 13059 	.flags		= UDP_TUNNEL_NIC_INFO_MAY_SLEEP |
13060 13060 			  UDP_TUNNEL_NIC_INFO_OPEN_ONLY,
13061 13061 	.tables = {
+1 -1
drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
··· 3831 3831 		}
3832 3832 	}
3833 3833
3834 -	if (req & BNXT_FW_RESET_AP) {
3834 +	if (!BNXT_CHIP_P4_PLUS(bp) && (req & BNXT_FW_RESET_AP)) {
3835 3835 		/* This feature is not supported in older firmware versions */
3836 3836 		if (bp->hwrm_spec_code >= 0x10803) {
3837 3837 			if (!bnxt_firmware_reset_ap(dev)) {
+1
drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.c
··· 952 952 		bnxt_ptp_timecounter_init(bp, true);
953 953 		bnxt_ptp_adjfine_rtc(bp, 0);
954 954 	}
955 +	bnxt_hwrm_func_drv_rgtr(bp, NULL, 0, true);
955 956
956 957 	ptp->ptp_info = bnxt_ptp_caps;
957 958 	if ((bp->fw_cap & BNXT_FW_CAP_PTP_PPS)) {
+8 -14
drivers/net/ethernet/broadcom/genet/bcmgenet.c
··· 1272 1272 	}
1273 1273 }
1274 1274
1275 - static void bcmgenet_eee_enable_set(struct net_device *dev, bool enable)
1275 + void bcmgenet_eee_enable_set(struct net_device *dev, bool enable,
1276 +			      bool tx_lpi_enabled)
1276 1277 {
1277 1278 	struct bcmgenet_priv *priv = netdev_priv(dev);
1278 1279 	u32 off = priv->hw_params->tbuf_offset + TBUF_ENERGY_CTRL;
··· 1293 1292
1294 1293 	/* Enable EEE and switch to a 27Mhz clock automatically */
1295 1294 	reg = bcmgenet_readl(priv->base + off);
1296 -	if (enable)
1295 +	if (tx_lpi_enabled)
1297 1296 		reg |= TBUF_EEE_EN | TBUF_PM_EN;
1298 1297 	else
1299 1298 		reg &= ~(TBUF_EEE_EN | TBUF_PM_EN);
··· 1314 1313
1315 1314 	priv->eee.eee_enabled = enable;
1316 1315 	priv->eee.eee_active = enable;
1316 +	priv->eee.tx_lpi_enabled = tx_lpi_enabled;
1317 1317 }
1318 1318
1319 1319 static int bcmgenet_get_eee(struct net_device *dev, struct ethtool_eee *e)
··· 1330 1328
1331 1329 	e->eee_enabled = p->eee_enabled;
1332 1330 	e->eee_active = p->eee_active;
1331 +	e->tx_lpi_enabled = p->tx_lpi_enabled;
1333 1332 	e->tx_lpi_timer = bcmgenet_umac_readl(priv, UMAC_EEE_LPI_TIMER);
1334 1333
1335 1334 	return phy_ethtool_get_eee(dev->phydev, e);
··· 1340 1337 {
1341 1338 	struct bcmgenet_priv *priv = netdev_priv(dev);
1342 1339 	struct ethtool_eee *p = &priv->eee;
1343 -	int ret = 0;
1344 1340
1345 1341 	if (GENET_IS_V1(priv))
1346 1342 		return -EOPNOTSUPP;
··· 1350 1348 	p->eee_enabled = e->eee_enabled;
1351 1349
1352 1350 	if (!p->eee_enabled) {
1353 -		bcmgenet_eee_enable_set(dev, false);
1351 +		bcmgenet_eee_enable_set(dev, false, false);
1354 1352 	} else {
1355 -		ret = phy_init_eee(dev->phydev, false);
1356 -		if (ret) {
1357 -			netif_err(priv, hw, dev, "EEE initialization failed\n");
1358 -			return ret;
1359 -		}
1360 -
1353 +		p->eee_active = phy_init_eee(dev->phydev, false) >= 0;
1361 1354 		bcmgenet_umac_writel(priv, e->tx_lpi_timer, UMAC_EEE_LPI_TIMER);
1362 -		bcmgenet_eee_enable_set(dev, true);
1355 +		bcmgenet_eee_enable_set(dev, p->eee_active, e->tx_lpi_enabled);
1363 1356 	}
1364 1357
1365 1358 	return phy_ethtool_set_eee(dev->phydev, e);
··· 4275 4278
4276 4279 	if (!device_may_wakeup(d))
4277 4280 		phy_resume(dev->phydev);
4278 -
4279 -	if (priv->eee.eee_enabled)
4280 -		bcmgenet_eee_enable_set(dev, true);
4281 4281
4282 4282 	bcmgenet_netif_start(dev);
4283 4283
+3
drivers/net/ethernet/broadcom/genet/bcmgenet.h
··· 703 703 void bcmgenet_wol_power_up_cfg(struct bcmgenet_priv *priv, 704 704 enum bcmgenet_power_mode mode); 705 705 706 + void bcmgenet_eee_enable_set(struct net_device *dev, bool enable, 707 + bool tx_lpi_enabled); 708 + 706 709 #endif /* __BCMGENET_H__ */
+5
drivers/net/ethernet/broadcom/genet/bcmmii.c
··· 87 87 reg |= CMD_TX_EN | CMD_RX_EN; 88 88 } 89 89 bcmgenet_umac_writel(priv, reg, UMAC_CMD); 90 + 91 + priv->eee.eee_active = phy_init_eee(phydev, 0) >= 0; 92 + bcmgenet_eee_enable_set(dev, 93 + priv->eee.eee_enabled && priv->eee.eee_active, 94 + priv->eee.tx_lpi_enabled); 90 95 } 91 96 92 97 /* setup netdev link state when PHY link status change and
+15 -1
drivers/net/ethernet/freescale/enetc/enetc.c
··· 1229 1229 if (!skb) 1230 1230 break; 1231 1231 1232 - rx_byte_cnt += skb->len; 1232 + /* When set, the outer VLAN header is extracted and reported 1233 + * in the receive buffer descriptor. So rx_byte_cnt should 1234 + * add the length of the extracted VLAN header. 1235 + */ 1236 + if (bd_status & ENETC_RXBD_FLAG_VLAN) 1237 + rx_byte_cnt += VLAN_HLEN; 1238 + rx_byte_cnt += skb->len + ETH_HLEN; 1233 1239 rx_frm_cnt++; 1234 1240 1235 1241 napi_gro_receive(napi, skb); ··· 1570 1564 1571 1565 enetc_build_xdp_buff(rx_ring, bd_status, &rxbd, &i, 1572 1566 &cleaned_cnt, &xdp_buff); 1567 + 1568 + /* When set, the outer VLAN header is extracted and reported 1569 + * in the receive buffer descriptor. So rx_byte_cnt should 1570 + * add the length of the extracted VLAN header. 1571 + */ 1572 + if (bd_status & ENETC_RXBD_FLAG_VLAN) 1573 + rx_byte_cnt += VLAN_HLEN; 1574 + rx_byte_cnt += xdp_get_buff_len(&xdp_buff); 1573 1575 1574 1576 xdp_act = bpf_prog_run_xdp(prog, &xdp_buff); 1575 1577
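The enetc fix above adds back lengths the skb no longer carries when counting received bytes. A minimal standalone sketch of that accounting (the helper name is hypothetical; the constants match the usual Ethernet/802.1Q sizes):

```c
#include <stdint.h>

#define ETH_HLEN  14   /* Ethernet header, not included in skb->len */
#define VLAN_HLEN  4   /* 802.1Q tag, extracted into the RX descriptor */

/* Wire-level byte count for one received frame: skb->len excludes the
 * Ethernet header, and when ENETC_RXBD_FLAG_VLAN is set the outer VLAN
 * tag was pulled into the descriptor, so both must be added back. */
uint64_t rx_wire_bytes(uint32_t skb_len, int vlan_extracted)
{
	uint64_t bytes = (uint64_t)skb_len + ETH_HLEN;

	if (vlan_extracted)
		bytes += VLAN_HLEN;
	return bytes;
}
```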
+2 -2
drivers/net/ethernet/freescale/enetc/enetc_qos.c
··· 181 181 int bw_sum = 0; 182 182 u8 bw; 183 183 184 - prio_top = netdev_get_prio_tc_map(ndev, tc_nums - 1); 185 - prio_next = netdev_get_prio_tc_map(ndev, tc_nums - 2); 184 + prio_top = tc_nums - 1; 185 + prio_next = tc_nums - 2; 186 186 187 187 /* Support highest prio and second prio tc in cbs mode */ 188 188 if (tc != prio_top && tc != prio_next)
+1 -1
drivers/net/ethernet/intel/iavf/iavf.h
··· 525 525 void iavf_update_stats(struct iavf_adapter *adapter); 526 526 void iavf_reset_interrupt_capability(struct iavf_adapter *adapter); 527 527 int iavf_init_interrupt_scheme(struct iavf_adapter *adapter); 528 - void iavf_irq_enable_queues(struct iavf_adapter *adapter, u32 mask); 528 + void iavf_irq_enable_queues(struct iavf_adapter *adapter); 529 529 void iavf_free_all_tx_resources(struct iavf_adapter *adapter); 530 530 void iavf_free_all_rx_resources(struct iavf_adapter *adapter); 531 531
+6 -9
drivers/net/ethernet/intel/iavf/iavf_main.c
··· 359 359 } 360 360 361 361 /** 362 - * iavf_irq_enable_queues - Enable interrupt for specified queues 362 + * iavf_irq_enable_queues - Enable interrupt for all queues 363 363 * @adapter: board private structure 364 - * @mask: bitmap of queues to enable 365 364 **/ 366 - void iavf_irq_enable_queues(struct iavf_adapter *adapter, u32 mask) 365 + void iavf_irq_enable_queues(struct iavf_adapter *adapter) 367 366 { 368 367 struct iavf_hw *hw = &adapter->hw; 369 368 int i; 370 369 371 370 for (i = 1; i < adapter->num_msix_vectors; i++) { 372 - if (mask & BIT(i - 1)) { 373 - wr32(hw, IAVF_VFINT_DYN_CTLN1(i - 1), 374 - IAVF_VFINT_DYN_CTLN1_INTENA_MASK | 375 - IAVF_VFINT_DYN_CTLN1_ITR_INDX_MASK); 376 - } 371 + wr32(hw, IAVF_VFINT_DYN_CTLN1(i - 1), 372 + IAVF_VFINT_DYN_CTLN1_INTENA_MASK | 373 + IAVF_VFINT_DYN_CTLN1_ITR_INDX_MASK); 377 374 } 378 375 } 379 376 ··· 384 387 struct iavf_hw *hw = &adapter->hw; 385 388 386 389 iavf_misc_irq_enable(adapter); 387 - iavf_irq_enable_queues(adapter, ~0); 390 + iavf_irq_enable_queues(adapter); 388 391 389 392 if (flush) 390 393 iavf_flush(hw);
+1 -1
drivers/net/ethernet/intel/iavf/iavf_register.h
··· 40 40 #define IAVF_VFINT_DYN_CTL01_INTENA_MASK IAVF_MASK(0x1, IAVF_VFINT_DYN_CTL01_INTENA_SHIFT) 41 41 #define IAVF_VFINT_DYN_CTL01_ITR_INDX_SHIFT 3 42 42 #define IAVF_VFINT_DYN_CTL01_ITR_INDX_MASK IAVF_MASK(0x3, IAVF_VFINT_DYN_CTL01_ITR_INDX_SHIFT) 43 - #define IAVF_VFINT_DYN_CTLN1(_INTVF) (0x00003800 + ((_INTVF) * 4)) /* _i=0...15 */ /* Reset: VFR */ 43 + #define IAVF_VFINT_DYN_CTLN1(_INTVF) (0x00003800 + ((_INTVF) * 4)) /* _i=0...63 */ /* Reset: VFR */ 44 44 #define IAVF_VFINT_DYN_CTLN1_INTENA_SHIFT 0 45 45 #define IAVF_VFINT_DYN_CTLN1_INTENA_MASK IAVF_MASK(0x1, IAVF_VFINT_DYN_CTLN1_INTENA_SHIFT) 46 46 #define IAVF_VFINT_DYN_CTLN1_SWINT_TRIG_SHIFT 2
+1 -1
drivers/net/ethernet/intel/ice/ice_common.c
··· 5160 5160 */ 5161 5161 int 5162 5162 ice_aq_write_i2c(struct ice_hw *hw, struct ice_aqc_link_topo_addr topo_addr, 5163 - u16 bus_addr, __le16 addr, u8 params, u8 *data, 5163 + u16 bus_addr, __le16 addr, u8 params, const u8 *data, 5164 5164 struct ice_sq_cd *cd) 5165 5165 { 5166 5166 struct ice_aq_desc desc = { 0 };
+1 -1
drivers/net/ethernet/intel/ice/ice_common.h
··· 229 229 struct ice_sq_cd *cd); 230 230 int 231 231 ice_aq_write_i2c(struct ice_hw *hw, struct ice_aqc_link_topo_addr topo_addr, 232 - u16 bus_addr, __le16 addr, u8 params, u8 *data, 232 + u16 bus_addr, __le16 addr, u8 params, const u8 *data, 233 233 struct ice_sq_cd *cd); 234 234 bool ice_fw_supports_report_dflt_cfg(struct ice_hw *hw); 235 235 #endif /* _ICE_COMMON_H_ */
+5 -67
drivers/net/ethernet/intel/ice/ice_gnss.c
··· 16 16 * * number of bytes written - success 17 17 * * negative - error code 18 18 */ 19 - static unsigned int 20 - ice_gnss_do_write(struct ice_pf *pf, unsigned char *buf, unsigned int size) 19 + static int 20 + ice_gnss_do_write(struct ice_pf *pf, const unsigned char *buf, unsigned int size) 21 21 { 22 22 struct ice_aqc_link_topo_addr link_topo; 23 23 struct ice_hw *hw = &pf->hw; ··· 72 72 dev_err(ice_pf_to_dev(pf), "GNSS failed to write, offset=%u, size=%u, err=%d\n", 73 73 offset, size, err); 74 74 75 - return offset; 76 - } 77 - 78 - /** 79 - * ice_gnss_write_pending - Write all pending data to internal GNSS 80 - * @work: GNSS write work structure 81 - */ 82 - static void ice_gnss_write_pending(struct kthread_work *work) 83 - { 84 - struct gnss_serial *gnss = container_of(work, struct gnss_serial, 85 - write_work); 86 - struct ice_pf *pf = gnss->back; 87 - 88 - if (!pf) 89 - return; 90 - 91 - if (!test_bit(ICE_FLAG_GNSS, pf->flags)) 92 - return; 93 - 94 - if (!list_empty(&gnss->queue)) { 95 - struct gnss_write_buf *write_buf = NULL; 96 - unsigned int bytes; 97 - 98 - write_buf = list_first_entry(&gnss->queue, 99 - struct gnss_write_buf, queue); 100 - 101 - bytes = ice_gnss_do_write(pf, write_buf->buf, write_buf->size); 102 - dev_dbg(ice_pf_to_dev(pf), "%u bytes written to GNSS\n", bytes); 103 - 104 - list_del(&write_buf->queue); 105 - kfree(write_buf->buf); 106 - kfree(write_buf); 107 - } 75 + return err; 108 76 } 109 77 110 78 /** ··· 96 128 int err = 0; 97 129 98 130 pf = gnss->back; 99 - if (!pf) { 100 - err = -EFAULT; 101 - goto exit; 102 - } 103 - 104 - if (!test_bit(ICE_FLAG_GNSS, pf->flags)) 131 + if (!pf || !test_bit(ICE_FLAG_GNSS, pf->flags)) 105 132 return; 106 133 107 134 hw = &pf->hw; ··· 154 191 free_page((unsigned long)buf); 155 192 requeue: 156 193 kthread_queue_delayed_work(gnss->kworker, &gnss->read_work, delay); 157 - exit: 158 194 if (err) 159 195 dev_dbg(ice_pf_to_dev(pf), "GNSS failed to read err=%d\n", err); 160 196 } ··· 182 220 
pf->gnss_serial = gnss; 183 221 184 222 kthread_init_delayed_work(&gnss->read_work, ice_gnss_read); 185 - INIT_LIST_HEAD(&gnss->queue); 186 - kthread_init_work(&gnss->write_work, ice_gnss_write_pending); 187 223 kworker = kthread_create_worker(0, "ice-gnss-%s", dev_name(dev)); 188 224 if (IS_ERR(kworker)) { 189 225 kfree(gnss); ··· 241 281 if (!gnss) 242 282 return; 243 283 244 - kthread_cancel_work_sync(&gnss->write_work); 245 284 kthread_cancel_delayed_work_sync(&gnss->read_work); 246 285 } 247 286 ··· 259 300 size_t count) 260 301 { 261 302 struct ice_pf *pf = gnss_get_drvdata(gdev); 262 - struct gnss_write_buf *write_buf; 263 303 struct gnss_serial *gnss; 264 - unsigned char *cmd_buf; 265 - int err = count; 266 304 267 305 /* We cannot write a single byte using our I2C implementation. */ 268 306 if (count <= 1 || count > ICE_GNSS_TTY_WRITE_BUF) ··· 275 319 if (!gnss) 276 320 return -ENODEV; 277 321 278 - cmd_buf = kcalloc(count, sizeof(*buf), GFP_KERNEL); 279 - if (!cmd_buf) 280 - return -ENOMEM; 281 - 282 - memcpy(cmd_buf, buf, count); 283 - write_buf = kzalloc(sizeof(*write_buf), GFP_KERNEL); 284 - if (!write_buf) { 285 - kfree(cmd_buf); 286 - return -ENOMEM; 287 - } 288 - 289 - write_buf->buf = cmd_buf; 290 - write_buf->size = count; 291 - INIT_LIST_HEAD(&write_buf->queue); 292 - list_add_tail(&write_buf->queue, &gnss->queue); 293 - kthread_queue_work(gnss->kworker, &gnss->write_work); 294 - 295 - return err; 322 + return ice_gnss_do_write(pf, buf, count); 296 323 } 297 324 298 325 static const struct gnss_operations ice_gnss_ops = { ··· 371 432 if (pf->gnss_serial) { 372 433 struct gnss_serial *gnss = pf->gnss_serial; 373 434 374 - kthread_cancel_work_sync(&gnss->write_work); 375 435 kthread_cancel_delayed_work_sync(&gnss->read_work); 376 436 kthread_destroy_worker(gnss->kworker); 377 437 gnss->kworker = NULL;
-10
drivers/net/ethernet/intel/ice/ice_gnss.h
··· 22 22 */ 23 23 #define ICE_GNSS_UBX_WRITE_BYTES (ICE_MAX_I2C_WRITE_BYTES + 1) 24 24 25 - struct gnss_write_buf { 26 - struct list_head queue; 27 - unsigned int size; 28 - unsigned char *buf; 29 - }; 30 - 31 25 /** 32 26 * struct gnss_serial - data used to initialize GNSS TTY port 33 27 * @back: back pointer to PF 34 28 * @kworker: kwork thread for handling periodic work 35 29 * @read_work: read_work function for handling GNSS reads 36 - * @write_work: write_work function for handling GNSS writes 37 - * @queue: write buffers queue 38 30 */ 39 31 struct gnss_serial { 40 32 struct ice_pf *back; 41 33 struct kthread_worker *kworker; 42 34 struct kthread_delayed_work read_work; 43 - struct kthread_work write_work; 44 - struct list_head queue; 45 35 }; 46 36 47 37 #if IS_ENABLED(CONFIG_GNSS)
+9 -11
drivers/net/ethernet/intel/ice/ice_main.c
··· 4802 4802 static void ice_deinit_dev(struct ice_pf *pf) 4803 4803 { 4804 4804 ice_free_irq_msix_misc(pf); 4805 - ice_clear_interrupt_scheme(pf); 4806 4805 ice_deinit_pf(pf); 4807 4806 ice_deinit_hw(&pf->hw); 4807 + 4808 + /* Service task is already stopped, so call reset directly. */ 4809 + ice_reset(&pf->hw, ICE_RESET_PFR); 4810 + pci_wait_for_pending_transaction(pf->pdev); 4811 + ice_clear_interrupt_scheme(pf); 4808 4812 } 4809 4813 4810 4814 static void ice_init_features(struct ice_pf *pf) ··· 5098 5094 struct ice_vsi *vsi; 5099 5095 int err; 5100 5096 5101 - err = ice_reset(&pf->hw, ICE_RESET_PFR); 5102 - if (err) 5103 - return err; 5104 - 5105 5097 err = ice_init_dev(pf); 5106 5098 if (err) 5107 5099 return err; ··· 5354 5354 ice_setup_mc_magic_wake(pf); 5355 5355 ice_set_wake(pf); 5356 5356 5357 - /* Issue a PFR as part of the prescribed driver unload flow. Do not 5358 - * do it via ice_schedule_reset() since there is no need to rebuild 5359 - * and the service task is already stopped. 5360 - */ 5361 - ice_reset(&pf->hw, ICE_RESET_PFR); 5362 - pci_wait_for_pending_transaction(pdev); 5363 5357 pci_disable_device(pdev); 5364 5358 } 5365 5359 ··· 7049 7055 7050 7056 ice_for_each_txq(vsi, i) 7051 7057 ice_clean_tx_ring(vsi->tx_rings[i]); 7058 + 7059 + if (ice_is_xdp_ena_vsi(vsi)) 7060 + ice_for_each_xdp_txq(vsi, i) 7061 + ice_clean_tx_ring(vsi->xdp_rings[i]); 7052 7062 7053 7063 ice_for_each_rxq(vsi, i) 7054 7064 ice_clean_rx_ring(vsi->rx_rings[i]);
+3
drivers/net/ethernet/intel/igb/igb_ethtool.c
··· 822 822 */ 823 823 ret_val = hw->nvm.ops.read(hw, last_word, 1, 824 824 &eeprom_buff[last_word - first_word]); 825 + if (ret_val) 826 + goto out; 825 827 } 826 828 827 829 /* Device's eeprom is always little-endian, word addressable */ ··· 843 841 hw->nvm.ops.update(hw); 844 842 845 843 igb_set_fw_version(adapter); 844 + out: 846 845 kfree(eeprom_buff); 847 846 return ret_val; 848 847 }
+6 -2
drivers/net/ethernet/intel/igb/igb_main.c
··· 6947 6947 struct e1000_hw *hw = &adapter->hw; 6948 6948 struct ptp_clock_event event; 6949 6949 struct timespec64 ts; 6950 + unsigned long flags; 6950 6951 6951 6952 if (pin < 0 || pin >= IGB_N_SDP) 6952 6953 return; ··· 6955 6954 if (hw->mac.type == e1000_82580 || 6956 6955 hw->mac.type == e1000_i354 || 6957 6956 hw->mac.type == e1000_i350) { 6958 - s64 ns = rd32(auxstmpl); 6957 + u64 ns = rd32(auxstmpl); 6959 6958 6960 - ns += ((s64)(rd32(auxstmph) & 0xFF)) << 32; 6959 + ns += ((u64)(rd32(auxstmph) & 0xFF)) << 32; 6960 + spin_lock_irqsave(&adapter->tmreg_lock, flags); 6961 + ns = timecounter_cyc2time(&adapter->tc, ns); 6962 + spin_unlock_irqrestore(&adapter->tmreg_lock, flags); 6961 6963 ts = ns_to_timespec64(ns); 6962 6964 } else { 6963 6965 ts.tv_nsec = rd32(auxstmpl);
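The igb change widens the SDP timestamp math to an unsigned 64-bit type before handing the raw value to the timecounter. The register assembly alone can be sketched as follows (register names follow the diff; the function itself is illustrative):

```c
#include <stdint.h>

/* Combine the AUXSTMPL/AUXSTMPH register pair into one 40-bit value:
 * the low register holds bits 31:0, the high register contributes only
 * its low 8 bits. Using uint64_t avoids sign-extension surprises. */
uint64_t igb_aux_timestamp(uint32_t auxstmpl, uint32_t auxstmph)
{
	uint64_t ns = auxstmpl;

	ns += ((uint64_t)(auxstmph & 0xFF)) << 32;
	return ns;
}
```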
+11 -1
drivers/net/ethernet/intel/igc/igc_main.c
··· 254 254 /* reset BQL for queue */ 255 255 netdev_tx_reset_queue(txring_txq(tx_ring)); 256 256 257 + /* Zero out the buffer ring */ 258 + memset(tx_ring->tx_buffer_info, 0, 259 + sizeof(*tx_ring->tx_buffer_info) * tx_ring->count); 260 + 261 + /* Zero out the descriptor ring */ 262 + memset(tx_ring->desc, 0, tx_ring->size); 263 + 257 264 /* reset next_to_use and next_to_clean */ 258 265 tx_ring->next_to_use = 0; 259 266 tx_ring->next_to_clean = 0; ··· 274 267 */ 275 268 void igc_free_tx_resources(struct igc_ring *tx_ring) 276 269 { 277 - igc_clean_tx_ring(tx_ring); 270 + igc_disable_tx_ring(tx_ring); 278 271 279 272 vfree(tx_ring->tx_buffer_info); 280 273 tx_ring->tx_buffer_info = NULL; ··· 6729 6722 igc_flush_nfc_rules(adapter); 6730 6723 6731 6724 igc_ptp_stop(adapter); 6725 + 6726 + pci_disable_ptm(pdev); 6727 + pci_clear_master(pdev); 6732 6728 6733 6729 set_bit(__IGC_DOWN, &adapter->state); 6734 6730
+1 -1
drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
··· 1256 1256 if (!__netif_txq_completed_wake(txq, total_packets, total_bytes, 1257 1257 ixgbe_desc_unused(tx_ring), 1258 1258 TX_WAKE_THRESHOLD, 1259 - netif_carrier_ok(tx_ring->netdev) && 1259 + !netif_carrier_ok(tx_ring->netdev) || 1260 1260 test_bit(__IXGBE_DOWN, &adapter->state))) 1261 1261 ++tx_ring->tx_stats.restart_queue; 1262 1262
+6 -1
drivers/net/ethernet/marvell/octeon_ep/octep_main.c
··· 981 981 oct->mmio[i].hw_addr = 982 982 ioremap(pci_resource_start(oct->pdev, i * 2), 983 983 pci_resource_len(oct->pdev, i * 2)); 984 + if (!oct->mmio[i].hw_addr) 985 + goto unmap_prev; 986 + 984 987 oct->mmio[i].mapped = 1; 985 988 } 986 989 ··· 1018 1015 return 0; 1019 1016 1020 1017 unsupported_dev: 1021 - for (i = 0; i < OCTEP_MMIO_REGIONS; i++) 1018 + i = OCTEP_MMIO_REGIONS; 1019 + unmap_prev: 1020 + while (i--) 1022 1021 iounmap(oct->mmio[i].hw_addr); 1023 1022 1024 1023 kfree(oct->conf);
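The octep fix replaces a full-range unmap loop with a `while (i--)` unwind that touches only the regions mapped so far. A plain-C sketch of that pattern (`map_region`/`unmap_region` are hypothetical stand-ins for `ioremap()`/`iounmap()`):

```c
#include <stddef.h>

#define NREGIONS 4

static int mapped[NREGIONS];	/* tracks which regions are "mapped" */

static int map_region(int i, int fail_at)
{
	if (i == fail_at)
		return -1;	/* simulate ioremap() returning NULL */
	mapped[i] = 1;
	return 0;
}

static void unmap_region(int i)
{
	mapped[i] = 0;
}

int mapped_count(void)
{
	int i, n = 0;

	for (i = 0; i < NREGIONS; i++)
		n += mapped[i];
	return n;
}

/* Mirror of the fixed error path: on failure at index i, unwind only
 * the regions mapped so far with while (i--). */
int map_all(int fail_at)
{
	int i;

	for (i = 0; i < NREGIONS; i++) {
		if (map_region(i, fail_at))
			goto unmap_prev;
	}
	return 0;

unmap_prev:
	while (i--)
		unmap_region(i);
	return -1;
}
```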
+2 -5
drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
··· 1878 1878 free_cnt = rvu_rsrc_free_count(&txsch->schq); 1879 1879 } 1880 1880 1881 - if (free_cnt < req_schq || req_schq > MAX_TXSCHQ_PER_FUNC) 1881 + if (free_cnt < req_schq || req->schq[lvl] > MAX_TXSCHQ_PER_FUNC || 1882 + req->schq_contig[lvl] > MAX_TXSCHQ_PER_FUNC) 1882 1883 return NIX_AF_ERR_TLX_ALLOC_FAIL; 1883 1884 1884 1885 /* If contiguous queues are needed, check for availability */ ··· 4081 4080 4082 4081 static u64 rvu_get_lbk_link_credits(struct rvu *rvu, u16 lbk_max_frs) 4083 4082 { 4084 - /* CN10k supports 72KB FIFO size and max packet size of 64k */ 4085 - if (rvu->hw->lbk_bufsize == 0x12000) 4086 - return (rvu->hw->lbk_bufsize - lbk_max_frs) / 16; 4087 - 4088 4083 return 1600; /* 16 * max LBK datarate = 16 * 100Gbps */ 4089 4084 } 4090 4085
+2 -27
drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.c
··· 1164 1164 { 1165 1165 struct npc_exact_table *table; 1166 1166 u16 *cnt, old_cnt; 1167 - bool promisc; 1168 1167 1169 1168 table = rvu->hw->table; 1170 - promisc = table->promisc_mode[drop_mcam_idx]; 1171 1169 1172 1170 cnt = &table->cnt_cmd_rules[drop_mcam_idx]; 1173 1171 old_cnt = *cnt; ··· 1177 1179 1178 1180 *enable_or_disable_cam = false; 1179 1181 1180 - if (promisc) 1181 - goto done; 1182 - 1183 - /* If all rules are deleted and not already in promisc mode; disable cam */ 1182 + /* If all rules are deleted, disable cam */ 1184 1183 if (!*cnt && val < 0) { 1185 1184 *enable_or_disable_cam = true; 1186 1185 goto done; 1187 1186 } 1188 1187 1189 - /* If rule got added and not already in promisc mode; enable cam */ 1188 + /* If rule got added, enable cam */ 1190 1189 if (!old_cnt && val > 0) { 1191 1190 *enable_or_disable_cam = true; 1192 1191 goto done; ··· 1438 1443 u32 drop_mcam_idx; 1439 1444 bool *promisc; 1440 1445 bool rc; 1441 - u32 cnt; 1442 1446 1443 1447 table = rvu->hw->table; 1444 1448 ··· 1460 1466 return LMAC_AF_ERR_INVALID_PARAM; 1461 1467 } 1462 1468 *promisc = false; 1463 - cnt = __rvu_npc_exact_cmd_rules_cnt_update(rvu, drop_mcam_idx, 0, NULL); 1464 1469 mutex_unlock(&table->lock); 1465 1470 1466 - /* If no dmac filter entries configured, disable drop rule */ 1467 - if (!cnt) 1468 - rvu_npc_enable_mcam_by_entry_index(rvu, drop_mcam_idx, NIX_INTF_RX, false); 1469 - else 1470 - rvu_npc_enable_mcam_by_entry_index(rvu, drop_mcam_idx, NIX_INTF_RX, !*promisc); 1471 - 1472 - dev_dbg(rvu->dev, "%s: disabled promisc mode (cgx=%d lmac=%d, cnt=%d)\n", 1473 - __func__, cgx_id, lmac_id, cnt); 1474 1471 return 0; 1475 1472 } 1476 1473 ··· 1479 1494 u32 drop_mcam_idx; 1480 1495 bool *promisc; 1481 1496 bool rc; 1482 - u32 cnt; 1483 1497 1484 1498 table = rvu->hw->table; 1485 1499 ··· 1501 1517 return LMAC_AF_ERR_INVALID_PARAM; 1502 1518 } 1503 1519 *promisc = true; 1504 - cnt = __rvu_npc_exact_cmd_rules_cnt_update(rvu, drop_mcam_idx, 0, NULL); 1505 1520 
mutex_unlock(&table->lock); 1506 1521 1507 - /* If no dmac filter entries configured, disable drop rule */ 1508 - if (!cnt) 1509 - rvu_npc_enable_mcam_by_entry_index(rvu, drop_mcam_idx, NIX_INTF_RX, false); 1510 - else 1511 - rvu_npc_enable_mcam_by_entry_index(rvu, drop_mcam_idx, NIX_INTF_RX, !*promisc); 1512 - 1513 - dev_dbg(rvu->dev, "%s: Enabled promisc mode (cgx=%d lmac=%d cnt=%d)\n", 1514 - __func__, cgx_id, lmac_id, cnt); 1515 1522 return 0; 1516 1523 } 1517 1524
-12
drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h
··· 276 276 return pci_num_vf(dev->pdev) ? true : false; 277 277 } 278 278 279 - static inline int mlx5_lag_is_lacp_owner(struct mlx5_core_dev *dev) 280 - { 281 - /* LACP owner conditions: 282 - * 1) Function is physical. 283 - * 2) LAG is supported by FW. 284 - * 3) LAG is managed by driver (currently the only option). 285 - */ 286 - return MLX5_CAP_GEN(dev, vport_group_manager) && 287 - (MLX5_CAP_GEN(dev, num_lag_ports) > 1) && 288 - MLX5_CAP_GEN(dev, lag_master); 289 - } 290 - 291 279 int mlx5_rescan_drivers_locked(struct mlx5_core_dev *dev); 292 280 static inline int mlx5_rescan_drivers(struct mlx5_core_dev *dev) 293 281 {
+1 -1
drivers/net/ethernet/qlogic/qed/qed_l2.c
··· 1903 1903 { 1904 1904 u32 i; 1905 1905 1906 - if (!cdev) { 1906 + if (!cdev || cdev->recov_in_prog) { 1907 1907 memset(stats, 0, sizeof(*stats)); 1908 1908 return; 1909 1909 }
+4
drivers/net/ethernet/qlogic/qede/qede.h
··· 269 269 #define QEDE_ERR_WARN 3 270 270 271 271 struct qede_dump_info dump_info; 272 + struct delayed_work periodic_task; 273 + unsigned long stats_coal_ticks; 274 + u32 stats_coal_usecs; 275 + spinlock_t stats_lock; /* lock for vport stats access */ 272 276 }; 273 277 274 278 enum QEDE_STATE {
+22 -2
drivers/net/ethernet/qlogic/qede/qede_ethtool.c
··· 429 429 } 430 430 } 431 431 432 + spin_lock(&edev->stats_lock); 433 + 432 434 for (i = 0; i < QEDE_NUM_STATS; i++) { 433 435 if (qede_is_irrelevant_stat(edev, i)) 434 436 continue; ··· 439 437 440 438 buf++; 441 439 } 440 + 441 + spin_unlock(&edev->stats_lock); 442 442 443 443 __qede_unlock(edev); 444 444 } ··· 833 829 834 830 coal->rx_coalesce_usecs = rx_coal; 835 831 coal->tx_coalesce_usecs = tx_coal; 832 + coal->stats_block_coalesce_usecs = edev->stats_coal_usecs; 836 833 837 834 return rc; 838 835 } ··· 846 841 struct qede_fastpath *fp; 847 842 int i, rc = 0; 848 843 u16 rxc, txc; 844 + 845 + if (edev->stats_coal_usecs != coal->stats_block_coalesce_usecs) { 846 + edev->stats_coal_usecs = coal->stats_block_coalesce_usecs; 847 + if (edev->stats_coal_usecs) { 848 + edev->stats_coal_ticks = usecs_to_jiffies(edev->stats_coal_usecs); 849 + schedule_delayed_work(&edev->periodic_task, 0); 850 + 851 + DP_INFO(edev, "Configured stats coal ticks=%lu jiffies\n", 852 + edev->stats_coal_ticks); 853 + } else { 854 + cancel_delayed_work_sync(&edev->periodic_task); 855 + } 856 + } 849 857 850 858 if (!netif_running(dev)) { 851 859 DP_INFO(edev, "Interface is down\n"); ··· 2270 2252 } 2271 2253 2272 2254 static const struct ethtool_ops qede_ethtool_ops = { 2273 - .supported_coalesce_params = ETHTOOL_COALESCE_USECS, 2255 + .supported_coalesce_params = ETHTOOL_COALESCE_USECS | 2256 + ETHTOOL_COALESCE_STATS_BLOCK_USECS, 2274 2257 .get_link_ksettings = qede_get_link_ksettings, 2275 2258 .set_link_ksettings = qede_set_link_ksettings, 2276 2259 .get_drvinfo = qede_get_drvinfo, ··· 2322 2303 }; 2323 2304 2324 2305 static const struct ethtool_ops qede_vf_ethtool_ops = { 2325 - .supported_coalesce_params = ETHTOOL_COALESCE_USECS, 2306 + .supported_coalesce_params = ETHTOOL_COALESCE_USECS | 2307 + ETHTOOL_COALESCE_STATS_BLOCK_USECS, 2326 2308 .get_link_ksettings = qede_get_link_ksettings, 2327 2309 .get_drvinfo = qede_get_drvinfo, 2328 2310 .get_msglevel = qede_get_msglevel,
+33 -1
drivers/net/ethernet/qlogic/qede/qede_main.c
··· 307 307 308 308 edev->ops->get_vport_stats(edev->cdev, &stats); 309 309 310 + spin_lock(&edev->stats_lock); 311 + 310 312 p_common->no_buff_discards = stats.common.no_buff_discards; 311 313 p_common->packet_too_big_discard = stats.common.packet_too_big_discard; 312 314 p_common->ttl0_discard = stats.common.ttl0_discard; ··· 406 404 p_ah->tx_1519_to_max_byte_packets = 407 405 stats.ah.tx_1519_to_max_byte_packets; 408 406 } 407 + 408 + spin_unlock(&edev->stats_lock); 409 409 } 410 410 411 411 static void qede_get_stats64(struct net_device *dev, ··· 416 412 struct qede_dev *edev = netdev_priv(dev); 417 413 struct qede_stats_common *p_common; 418 414 419 - qede_fill_by_demand_stats(edev); 420 415 p_common = &edev->stats.common; 416 + 417 + spin_lock(&edev->stats_lock); 421 418 422 419 stats->rx_packets = p_common->rx_ucast_pkts + p_common->rx_mcast_pkts + 423 420 p_common->rx_bcast_pkts; ··· 439 434 stats->collisions = edev->stats.bb.tx_total_collisions; 440 435 stats->rx_crc_errors = p_common->rx_crc_errors; 441 436 stats->rx_frame_errors = p_common->rx_align_errors; 437 + 438 + spin_unlock(&edev->stats_lock); 442 439 } 443 440 444 441 #ifdef CONFIG_QED_SRIOV ··· 1070 1063 rtnl_unlock(); 1071 1064 } 1072 1065 1066 + static void qede_periodic_task(struct work_struct *work) 1067 + { 1068 + struct qede_dev *edev = container_of(work, struct qede_dev, 1069 + periodic_task.work); 1070 + 1071 + qede_fill_by_demand_stats(edev); 1072 + schedule_delayed_work(&edev->periodic_task, edev->stats_coal_ticks); 1073 + } 1074 + 1075 + static void qede_init_periodic_task(struct qede_dev *edev) 1076 + { 1077 + INIT_DELAYED_WORK(&edev->periodic_task, qede_periodic_task); 1078 + spin_lock_init(&edev->stats_lock); 1079 + edev->stats_coal_usecs = USEC_PER_SEC; 1080 + edev->stats_coal_ticks = usecs_to_jiffies(USEC_PER_SEC); 1081 + } 1082 + 1073 1083 static void qede_sp_task(struct work_struct *work) 1074 1084 { 1075 1085 struct qede_dev *edev = container_of(work, struct qede_dev, ··· 1106 
1082 */ 1107 1083 1108 1084 if (test_and_clear_bit(QEDE_SP_RECOVERY, &edev->sp_flags)) { 1085 + cancel_delayed_work_sync(&edev->periodic_task); 1109 1086 #ifdef CONFIG_QED_SRIOV 1110 1087 /* SRIOV must be disabled outside the lock to avoid a deadlock. 1111 1088 * The recovery of the active VFs is currently not supported. ··· 1297 1272 */ 1298 1273 INIT_DELAYED_WORK(&edev->sp_task, qede_sp_task); 1299 1274 mutex_init(&edev->qede_lock); 1275 + qede_init_periodic_task(edev); 1300 1276 1301 1277 rc = register_netdev(edev->ndev); 1302 1278 if (rc) { ··· 1322 1296 edev->rx_copybreak = QEDE_RX_HDR_SIZE; 1323 1297 1324 1298 qede_log_probe(edev); 1299 + 1300 + /* retain user config (for example - after recovery) */ 1301 + if (edev->stats_coal_usecs) 1302 + schedule_delayed_work(&edev->periodic_task, 0); 1303 + 1325 1304 return 0; 1326 1305 1327 1306 err4: ··· 1395 1364 unregister_netdev(ndev); 1396 1365 1397 1366 cancel_delayed_work_sync(&edev->sp_task); 1367 + cancel_delayed_work_sync(&edev->periodic_task); 1398 1368 1399 1369 edev->ops->common->set_power_state(cdev, PCI_D0); 1400 1370
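The qede periodic task converts the user-supplied coalescing interval with `usecs_to_jiffies()`. A rough userspace model of that conversion, rounding up so the delay is never shorter than requested (the HZ value here is an assumption; real kernels use 100-1000 and the kernel helper also guards against overflow):

```c
#define HZ 250			/* assumed tick rate for this sketch */
#define USEC_PER_SEC 1000000UL

/* Convert a microsecond interval to timer ticks, rounding up. */
unsigned long usecs_to_ticks(unsigned long usecs)
{
	return (usecs * HZ + USEC_PER_SEC - 1) / USEC_PER_SEC;
}
```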
+22 -14
drivers/net/ethernet/renesas/rswitch.c
··· 347 347 return -ENOMEM; 348 348 } 349 349 350 - static int rswitch_gwca_ts_queue_alloc(struct rswitch_private *priv) 351 - { 352 - struct rswitch_gwca_queue *gq = &priv->gwca.ts_queue; 353 - 354 - gq->ring_size = TS_RING_SIZE; 355 - gq->ts_ring = dma_alloc_coherent(&priv->pdev->dev, 356 - sizeof(struct rswitch_ts_desc) * 357 - (gq->ring_size + 1), &gq->ring_dma, GFP_KERNEL); 358 - return !gq->ts_ring ? -ENOMEM : 0; 359 - } 360 - 361 350 static void rswitch_desc_set_dptr(struct rswitch_desc *desc, dma_addr_t addr) 362 351 { 363 352 desc->dptrl = cpu_to_le32(lower_32_bits(addr)); ··· 520 531 dma_free_coherent(&priv->pdev->dev, gwca->linkfix_table_size, 521 532 gwca->linkfix_table, gwca->linkfix_table_dma); 522 533 gwca->linkfix_table = NULL; 534 + } 535 + 536 + static int rswitch_gwca_ts_queue_alloc(struct rswitch_private *priv) 537 + { 538 + struct rswitch_gwca_queue *gq = &priv->gwca.ts_queue; 539 + struct rswitch_ts_desc *desc; 540 + 541 + gq->ring_size = TS_RING_SIZE; 542 + gq->ts_ring = dma_alloc_coherent(&priv->pdev->dev, 543 + sizeof(struct rswitch_ts_desc) * 544 + (gq->ring_size + 1), &gq->ring_dma, GFP_KERNEL); 545 + 546 + if (!gq->ts_ring) 547 + return -ENOMEM; 548 + 549 + rswitch_gwca_ts_queue_fill(priv, 0, TS_RING_SIZE); 550 + desc = &gq->ts_ring[gq->ring_size]; 551 + desc->desc.die_dt = DT_LINKFIX; 552 + rswitch_desc_set_dptr(&desc->desc, gq->ring_dma); 553 + INIT_LIST_HEAD(&priv->gwca.ts_info_list); 554 + 555 + return 0; 523 556 } 524 557 525 558 static struct rswitch_gwca_queue *rswitch_gwca_get(struct rswitch_private *priv) ··· 1790 1779 err = rswitch_gwca_ts_queue_alloc(priv); 1791 1780 if (err < 0) 1792 1781 goto err_ts_queue_alloc; 1793 - 1794 - rswitch_gwca_ts_queue_fill(priv, 0, TS_RING_SIZE); 1795 - INIT_LIST_HEAD(&priv->gwca.ts_info_list); 1796 1782 1797 1783 for (i = 0; i < RSWITCH_NUM_PORTS; i++) { 1798 1784 err = rswitch_device_alloc(priv, i);
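The reworked rswitch allocation terminates the timestamp ring with a `DT_LINKFIX` descriptor pointing back at the ring base, so the hardware wraps without software help. A heap-based sketch of that layout (struct fields are simplified, and a plain pointer stands in for the DMA address words):

```c
#include <stdlib.h>

enum dt { DT_FEMPTY, DT_LINKFIX };

struct desc {
	enum dt die_dt;
	struct desc *dptr;	/* stands in for the dptrl/dptrh fields */
};

/* Allocate ring_size data descriptors plus one trailing link
 * descriptor that wraps back to the start of the ring. */
struct desc *ring_alloc(int ring_size)
{
	struct desc *ring = calloc(ring_size + 1, sizeof(*ring));
	int i;

	if (!ring)
		return NULL;
	for (i = 0; i < ring_size; i++)
		ring[i].die_dt = DT_FEMPTY;
	ring[ring_size].die_dt = DT_LINKFIX;
	ring[ring_size].dptr = ring;	/* wrap back to the base */
	return ring;
}
```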
+2
drivers/net/ethernet/sfc/efx_channels.c
··· 301 301 efx->tx_channel_offset = 0; 302 302 efx->n_xdp_channels = 0; 303 303 efx->xdp_channel_offset = efx->n_channels; 304 + efx->xdp_txq_queues_mode = EFX_XDP_TX_QUEUES_BORROWED; 304 305 rc = pci_enable_msi(efx->pci_dev); 305 306 if (rc == 0) { 306 307 efx_get_channel(efx, 0)->irq = efx->pci_dev->irq; ··· 323 322 efx->tx_channel_offset = efx_separate_tx_channels ? 1 : 0; 324 323 efx->n_xdp_channels = 0; 325 324 efx->xdp_channel_offset = efx->n_channels; 325 + efx->xdp_txq_queues_mode = EFX_XDP_TX_QUEUES_BORROWED; 326 326 efx->legacy_irq = efx->pci_dev->irq; 327 327 } 328 328
+2
drivers/net/ethernet/sfc/siena/efx_channels.c
··· 302 302 efx->tx_channel_offset = 0; 303 303 efx->n_xdp_channels = 0; 304 304 efx->xdp_channel_offset = efx->n_channels; 305 + efx->xdp_txq_queues_mode = EFX_XDP_TX_QUEUES_BORROWED; 305 306 rc = pci_enable_msi(efx->pci_dev); 306 307 if (rc == 0) { 307 308 efx_get_channel(efx, 0)->irq = efx->pci_dev->irq; ··· 324 323 efx->tx_channel_offset = efx_siena_separate_tx_channels ? 1 : 0; 325 324 efx->n_xdp_channels = 0; 326 325 efx->xdp_channel_offset = efx->n_channels; 326 + efx->xdp_txq_queues_mode = EFX_XDP_TX_QUEUES_BORROWED; 327 327 efx->legacy_irq = efx->pci_dev->irq; 328 328 } 329 329
+2 -1
drivers/net/ethernet/stmicro/stmmac/dwmac-qcom-ethqos.c
··· 644 644 plat_dat->fix_mac_speed = ethqos_fix_mac_speed; 645 645 plat_dat->dump_debug_regs = rgmii_dump; 646 646 plat_dat->has_gmac4 = 1; 647 - plat_dat->dwmac4_addrs = &data->dwmac4_addrs; 647 + if (ethqos->has_emac3) 648 + plat_dat->dwmac4_addrs = &data->dwmac4_addrs; 648 649 plat_dat->pmt = 1; 649 650 plat_dat->tso_en = of_property_read_bool(np, "snps,tso"); 650 651 if (of_device_is_compatible(np, "qcom,qcs404-ethqos"))
+7 -2
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
··· 3873 3873 3874 3874 stmmac_hw_teardown(dev); 3875 3875 init_error: 3876 - free_dma_desc_resources(priv, &priv->dma_conf); 3877 3876 phylink_disconnect_phy(priv->phylink); 3878 3877 init_phy_error: 3879 3878 pm_runtime_put(priv->device); ··· 3890 3891 return PTR_ERR(dma_conf); 3891 3892 3892 3893 ret = __stmmac_open(dev, dma_conf); 3894 + if (ret) 3895 + free_dma_desc_resources(priv, dma_conf); 3896 + 3893 3897 kfree(dma_conf); 3894 3898 return ret; 3895 3899 } ··· 5635 5633 stmmac_release(dev); 5636 5634 5637 5635 ret = __stmmac_open(dev, dma_conf); 5638 - kfree(dma_conf); 5639 5636 if (ret) { 5637 + free_dma_desc_resources(priv, dma_conf); 5638 + kfree(dma_conf); 5640 5639 netdev_err(priv->dev, "failed reopening the interface after MTU change\n"); 5641 5640 return ret; 5642 5641 } 5642 + 5643 + kfree(dma_conf); 5643 5644 5644 5645 stmmac_set_rx_mode(dev); 5645 5646 }
+1 -1
drivers/net/ethernet/ti/am65-cpsw-nuss.c
··· 2068 2068 /* Initialize the Serdes PHY for the port */ 2069 2069 ret = am65_cpsw_init_serdes_phy(dev, port_np, port); 2070 2070 if (ret) 2071 - return ret; 2071 + goto of_node_put; 2072 2072 2073 2073 port->slave.mac_only = 2074 2074 of_property_read_bool(port_np, "ti,mac-only");
+4
drivers/net/ipvlan/ipvlan_l3s.c
··· 102 102 103 103 skb->dev = addr->master->dev; 104 104 skb->skb_iif = skb->dev->ifindex; 105 + #if IS_ENABLED(CONFIG_IPV6) 106 + if (addr->atype == IPVL_IPV6) 107 + IP6CB(skb)->iif = skb->dev->ifindex; 108 + #endif 105 109 len = skb->len + ETH_HLEN; 106 110 ipvlan_count_rx(addr->master, len, true, false); 107 111 out:
+5 -7
drivers/net/macsec.c
··· 3997 3997 return -ENOMEM; 3998 3998 3999 3999 secy->tx_sc.stats = netdev_alloc_pcpu_stats(struct pcpu_tx_sc_stats); 4000 - if (!secy->tx_sc.stats) { 4001 - free_percpu(macsec->stats); 4000 + if (!secy->tx_sc.stats) 4002 4001 return -ENOMEM; 4003 - } 4004 4002 4005 4003 secy->tx_sc.md_dst = metadata_dst_alloc(0, METADATA_MACSEC, GFP_KERNEL); 4006 - if (!secy->tx_sc.md_dst) { 4007 - free_percpu(secy->tx_sc.stats); 4008 - free_percpu(macsec->stats); 4004 + if (!secy->tx_sc.md_dst) 4005 + /* macsec and secy percpu stats will be freed when unregistering 4006 + * net_device in macsec_free_netdev() 4007 + */ 4009 4008 return -ENOMEM; 4010 - } 4011 4009 4012 4010 if (sci == MACSEC_UNDEF_SCI) 4013 4011 sci = dev_to_sci(dev, MACSEC_PORT_ES);
+49 -7
drivers/net/phy/phylink.c
···
188 188 	case PHY_INTERFACE_MODE_RGMII_ID:
189 189 	case PHY_INTERFACE_MODE_RGMII:
190 190 	case PHY_INTERFACE_MODE_QSGMII:
191 + 	case PHY_INTERFACE_MODE_QUSGMII:
191 192 	case PHY_INTERFACE_MODE_SGMII:
192 193 	case PHY_INTERFACE_MODE_GMII:
193 194 		return SPEED_1000;
···
205 204 	case PHY_INTERFACE_MODE_10GBASER:
206 205 	case PHY_INTERFACE_MODE_10GKR:
207 206 	case PHY_INTERFACE_MODE_USXGMII:
208 - 	case PHY_INTERFACE_MODE_QUSGMII:
209 207 		return SPEED_10000;
210 208 
211 209 	case PHY_INTERFACE_MODE_25GBASER:
···
2225 2225 
2226 2226 	ASSERT_RTNL();
2227 2227 
2228 - 	/* Mask out unsupported advertisements */
2229 - 	linkmode_and(config.advertising, kset->link_modes.advertising,
2230 - 		     pl->supported);
2231 - 
2232 2228 	if (pl->phydev) {
2229 + 		struct ethtool_link_ksettings phy_kset = *kset;
2230 + 
2231 + 		linkmode_and(phy_kset.link_modes.advertising,
2232 + 			     phy_kset.link_modes.advertising,
2233 + 			     pl->supported);
2234 + 
2233 2235 		/* We can rely on phylib for this update; we also do not need
2234 2236 		 * to update the pl->link_config settings:
2235 2237 		 * - the configuration returned via ksettings_get() will come
···
2250 2248 		 * the presence of a PHY, this should not be changed as that
2251 2249 		 * should be determined from the media side advertisement.
2252 2250 		 */
2253 - 		return phy_ethtool_ksettings_set(pl->phydev, kset);
2251 + 		return phy_ethtool_ksettings_set(pl->phydev, &phy_kset);
2254 2252 	}
2255 2253 
2256 2254 	config = pl->link_config;
2255 + 	/* Mask out unsupported advertisements */
2256 + 	linkmode_and(config.advertising, kset->link_modes.advertising,
2257 + 		     pl->supported);
2257 2258 
2258 2259 	/* FIXME: should we reject autoneg if phy/mac does not support it? */
2259 2260 	switch (kset->base.autoneg) {
···
3299 3294 EXPORT_SYMBOL_GPL(phylink_decode_usxgmii_word);
3300 3295 
3301 3296 /**
3297 + * phylink_decode_usgmii_word() - decode the USGMII word from a MAC PCS
3298 + * @state: a pointer to a struct phylink_link_state.
3299 + * @lpa: a 16 bit value which stores the USGMII auto-negotiation word
3300 + *
3301 + * Helper for MAC PCS supporting the USGMII protocol and the auto-negotiation
3302 + * code word. Decode the USGMII code word and populate the corresponding fields
3303 + * (speed, duplex) into the phylink_link_state structure. The structure for this
3304 + * word is the same as the USXGMII word, except it only supports speeds up to
3305 + * 1Gbps.
3306 + */
3307 + static void phylink_decode_usgmii_word(struct phylink_link_state *state,
3308 + 				       uint16_t lpa)
3309 + {
3310 + 	switch (lpa & MDIO_USXGMII_SPD_MASK) {
3311 + 	case MDIO_USXGMII_10:
3312 + 		state->speed = SPEED_10;
3313 + 		break;
3314 + 	case MDIO_USXGMII_100:
3315 + 		state->speed = SPEED_100;
3316 + 		break;
3317 + 	case MDIO_USXGMII_1000:
3318 + 		state->speed = SPEED_1000;
3319 + 		break;
3320 + 	default:
3321 + 		state->link = false;
3322 + 		return;
3323 + 	}
3324 + 
3325 + 	if (lpa & MDIO_USXGMII_FULL_DUPLEX)
3326 + 		state->duplex = DUPLEX_FULL;
3327 + 	else
3328 + 		state->duplex = DUPLEX_HALF;
3329 + }
3330 + 
3331 + /**
3302 3332  * phylink_mii_c22_pcs_decode_state() - Decode MAC PCS state from MII registers
3303 3333  * @state: a pointer to a &struct phylink_link_state.
3304 3334  * @bmsr: The value of the %MII_BMSR register
···
3370 3330 
3371 3331 	case PHY_INTERFACE_MODE_SGMII:
3372 3332 	case PHY_INTERFACE_MODE_QSGMII:
3373 - 	case PHY_INTERFACE_MODE_QUSGMII:
3374 3333 		phylink_decode_sgmii_word(state, lpa);
3334 + 		break;
3335 + 	case PHY_INTERFACE_MODE_QUSGMII:
3336 + 		phylink_decode_usgmii_word(state, lpa);
3375 3337 		break;
3376 3338 
3377 3339 	default:
+2
drivers/net/usb/qmi_wwan.c
··· 1220 1220 {QMI_FIXED_INTF(0x05c6, 0x9080, 8)}, 1221 1221 {QMI_FIXED_INTF(0x05c6, 0x9083, 3)}, 1222 1222 {QMI_FIXED_INTF(0x05c6, 0x9084, 4)}, 1223 + {QMI_QUIRK_SET_DTR(0x05c6, 0x9091, 2)}, /* Compal RXM-G1 */ 1223 1224 {QMI_FIXED_INTF(0x05c6, 0x90b2, 3)}, /* ublox R410M */ 1225 + {QMI_QUIRK_SET_DTR(0x05c6, 0x90db, 2)}, /* Compal RXM-G1 */ 1224 1226 {QMI_FIXED_INTF(0x05c6, 0x920d, 0)}, 1225 1227 {QMI_FIXED_INTF(0x05c6, 0x920d, 5)}, 1226 1228 {QMI_QUIRK_SET_DTR(0x05c6, 0x9625, 4)}, /* YUGA CLM920-NC5 */
+8 -8
drivers/net/virtio_net.c
··· 205 205 __virtio16 vid; 206 206 __virtio64 offloads; 207 207 struct virtio_net_ctrl_rss rss; 208 + struct virtio_net_ctrl_coal_tx coal_tx; 209 + struct virtio_net_ctrl_coal_rx coal_rx; 208 210 }; 209 211 210 212 struct virtnet_info { ··· 2936 2934 struct ethtool_coalesce *ec) 2937 2935 { 2938 2936 struct scatterlist sgs_tx, sgs_rx; 2939 - struct virtio_net_ctrl_coal_tx coal_tx; 2940 - struct virtio_net_ctrl_coal_rx coal_rx; 2941 2937 2942 - coal_tx.tx_usecs = cpu_to_le32(ec->tx_coalesce_usecs); 2943 - coal_tx.tx_max_packets = cpu_to_le32(ec->tx_max_coalesced_frames); 2944 - sg_init_one(&sgs_tx, &coal_tx, sizeof(coal_tx)); 2938 + vi->ctrl->coal_tx.tx_usecs = cpu_to_le32(ec->tx_coalesce_usecs); 2939 + vi->ctrl->coal_tx.tx_max_packets = cpu_to_le32(ec->tx_max_coalesced_frames); 2940 + sg_init_one(&sgs_tx, &vi->ctrl->coal_tx, sizeof(vi->ctrl->coal_tx)); 2945 2941 2946 2942 if (!virtnet_send_command(vi, VIRTIO_NET_CTRL_NOTF_COAL, 2947 2943 VIRTIO_NET_CTRL_NOTF_COAL_TX_SET, ··· 2950 2950 vi->tx_usecs = ec->tx_coalesce_usecs; 2951 2951 vi->tx_max_packets = ec->tx_max_coalesced_frames; 2952 2952 2953 - coal_rx.rx_usecs = cpu_to_le32(ec->rx_coalesce_usecs); 2954 - coal_rx.rx_max_packets = cpu_to_le32(ec->rx_max_coalesced_frames); 2955 - sg_init_one(&sgs_rx, &coal_rx, sizeof(coal_rx)); 2953 + vi->ctrl->coal_rx.rx_usecs = cpu_to_le32(ec->rx_coalesce_usecs); 2954 + vi->ctrl->coal_rx.rx_max_packets = cpu_to_le32(ec->rx_max_coalesced_frames); 2955 + sg_init_one(&sgs_rx, &vi->ctrl->coal_rx, sizeof(vi->ctrl->coal_rx)); 2956 2956 2957 2957 if (!virtnet_send_command(vi, VIRTIO_NET_CTRL_NOTF_COAL, 2958 2958 VIRTIO_NET_CTRL_NOTF_COAL_RX_SET,
+3
drivers/net/wan/lapbether.c
··· 384 384 385 385 ASSERT_RTNL(); 386 386 387 + if (dev->type != ARPHRD_ETHER) 388 + return -EINVAL; 389 + 387 390 ndev = alloc_netdev(sizeof(*lapbeth), "lapb%d", NET_NAME_UNKNOWN, 388 391 lapbeth_setup); 389 392 if (!ndev)
+2 -6
drivers/net/wireless/intel/iwlwifi/mvm/d3.c
··· 2732 2732 if (wowlan_info_ver < 2) { 2733 2733 struct iwl_wowlan_info_notif_v1 *notif_v1 = (void *)pkt->data; 2734 2734 2735 - notif = kmemdup(notif_v1, 2736 - offsetofend(struct iwl_wowlan_info_notif, 2737 - received_beacons), 2738 - GFP_ATOMIC); 2739 - 2735 + notif = kmemdup(notif_v1, sizeof(*notif), GFP_ATOMIC); 2740 2736 if (!notif) 2741 2737 return false; 2742 2738 2743 2739 notif->tid_tear_down = notif_v1->tid_tear_down; 2744 2740 notif->station_id = notif_v1->station_id; 2745 - 2741 + memset_after(notif, 0, station_id); 2746 2742 } else { 2747 2743 notif = (void *)pkt->data; 2748 2744 }
+6 -6
drivers/net/wireless/intel/iwlwifi/mvm/rs.c
··· 2692 2692 2693 2693 lq_sta = mvm_sta; 2694 2694 2695 - spin_lock(&lq_sta->pers.lock); 2695 + spin_lock_bh(&lq_sta->pers.lock); 2696 2696 iwl_mvm_hwrate_to_tx_rate_v1(lq_sta->last_rate_n_flags, 2697 2697 info->band, &info->control.rates[0]); 2698 2698 info->control.rates[0].count = 1; ··· 2707 2707 iwl_mvm_hwrate_to_tx_rate_v1(last_ucode_rate, info->band, 2708 2708 &txrc->reported_rate); 2709 2709 } 2710 - spin_unlock(&lq_sta->pers.lock); 2710 + spin_unlock_bh(&lq_sta->pers.lock); 2711 2711 } 2712 2712 2713 2713 static void *rs_drv_alloc_sta(void *mvm_rate, struct ieee80211_sta *sta, ··· 3264 3264 /* If it's locked we are in middle of init flow 3265 3265 * just wait for next tx status to update the lq_sta data 3266 3266 */ 3267 - if (!spin_trylock(&mvmsta->deflink.lq_sta.rs_drv.pers.lock)) 3267 + if (!spin_trylock_bh(&mvmsta->deflink.lq_sta.rs_drv.pers.lock)) 3268 3268 return; 3269 3269 3270 3270 __iwl_mvm_rs_tx_status(mvm, sta, tid, info, ndp); 3271 - spin_unlock(&mvmsta->deflink.lq_sta.rs_drv.pers.lock); 3271 + spin_unlock_bh(&mvmsta->deflink.lq_sta.rs_drv.pers.lock); 3272 3272 } 3273 3273 3274 3274 #ifdef CONFIG_MAC80211_DEBUGFS ··· 4117 4117 } else { 4118 4118 struct iwl_mvm_sta *mvmsta = iwl_mvm_sta_from_mac80211(sta); 4119 4119 4120 - spin_lock(&mvmsta->deflink.lq_sta.rs_drv.pers.lock); 4120 + spin_lock_bh(&mvmsta->deflink.lq_sta.rs_drv.pers.lock); 4121 4121 rs_drv_rate_init(mvm, sta, band); 4122 - spin_unlock(&mvmsta->deflink.lq_sta.rs_drv.pers.lock); 4122 + spin_unlock_bh(&mvmsta->deflink.lq_sta.rs_drv.pers.lock); 4123 4123 } 4124 4124 } 4125 4125
+3
drivers/net/wireless/mediatek/mt76/mt7615/mac.c
··· 914 914 915 915 msta = list_first_entry(&sta_poll_list, struct mt7615_sta, 916 916 poll_list); 917 + 918 + spin_lock_bh(&dev->sta_poll_lock); 917 919 list_del_init(&msta->poll_list); 920 + spin_unlock_bh(&dev->sta_poll_lock); 918 921 919 922 addr = mt7615_mac_wtbl_addr(dev, msta->wcid.idx) + 19 * 4; 920 923
+12 -7
drivers/net/wireless/mediatek/mt76/mt7996/mac.c
··· 1004 1004 { 1005 1005 struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb); 1006 1006 struct ieee80211_vif *vif = info->control.vif; 1007 - struct mt7996_vif *mvif = (struct mt7996_vif *)vif->drv_priv; 1008 1007 u8 band_idx = (info->hw_queue & MT_TX_HW_QUEUE_PHY) >> 2; 1009 1008 u8 p_fmt, q_idx, omac_idx = 0, wmm_idx = 0; 1010 1009 bool is_8023 = info->flags & IEEE80211_TX_CTL_HW_80211_ENCAP; 1010 + struct mt7996_vif *mvif; 1011 1011 u16 tx_count = 15; 1012 1012 u32 val; 1013 1013 bool beacon = !!(changed & (BSS_CHANGED_BEACON | ··· 1015 1015 bool inband_disc = !!(changed & (BSS_CHANGED_UNSOL_BCAST_PROBE_RESP | 1016 1016 BSS_CHANGED_FILS_DISCOVERY)); 1017 1017 1018 - if (vif) { 1018 + mvif = vif ? (struct mt7996_vif *)vif->drv_priv : NULL; 1019 + if (mvif) { 1019 1020 omac_idx = mvif->mt76.omac_idx; 1020 1021 wmm_idx = mvif->mt76.wmm_idx; 1021 1022 band_idx = mvif->mt76.band_idx; ··· 1082 1081 struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data; 1083 1082 bool mcast = ieee80211_is_data(hdr->frame_control) && 1084 1083 is_multicast_ether_addr(hdr->addr1); 1085 - u8 idx = mvif->basic_rates_idx; 1084 + u8 idx = MT7996_BASIC_RATES_TBL; 1086 1085 1087 - if (mcast && mvif->mcast_rates_idx) 1088 - idx = mvif->mcast_rates_idx; 1089 - else if (beacon && mvif->beacon_rates_idx) 1090 - idx = mvif->beacon_rates_idx; 1086 + if (mvif) { 1087 + if (mcast && mvif->mcast_rates_idx) 1088 + idx = mvif->mcast_rates_idx; 1089 + else if (beacon && mvif->beacon_rates_idx) 1090 + idx = mvif->beacon_rates_idx; 1091 + else 1092 + idx = mvif->basic_rates_idx; 1093 + } 1091 1094 1092 1095 txwi[6] |= cpu_to_le32(FIELD_PREP(MT_TXD6_TX_RATE, idx)); 1093 1096 txwi[3] |= cpu_to_le32(MT_TXD3_BA_DISABLE);
+5 -9
drivers/net/wireless/realtek/rtw88/mac80211.c
··· 88 88 } 89 89 } 90 90 91 - if (changed & IEEE80211_CONF_CHANGE_PS) { 92 - if (hw->conf.flags & IEEE80211_CONF_PS) { 93 - rtwdev->ps_enabled = true; 94 - } else { 95 - rtwdev->ps_enabled = false; 96 - rtw_leave_lps(rtwdev); 97 - } 98 - } 99 - 100 91 if (changed & IEEE80211_CONF_CHANGE_CHANNEL) 101 92 rtw_set_channel(rtwdev); 102 93 ··· 204 213 config |= PORT_SET_BCN_CTRL; 205 214 rtw_vif_port_config(rtwdev, rtwvif, config); 206 215 rtw_core_port_switch(rtwdev, vif); 216 + rtw_recalc_lps(rtwdev, vif); 207 217 208 218 mutex_unlock(&rtwdev->mutex); 209 219 ··· 236 244 config |= PORT_SET_BCN_CTRL; 237 245 rtw_vif_port_config(rtwdev, rtwvif, config); 238 246 clear_bit(rtwvif->port, rtwdev->hw_port); 247 + rtw_recalc_lps(rtwdev, NULL); 239 248 240 249 mutex_unlock(&rtwdev->mutex); 241 250 } ··· 430 437 431 438 if (changed & BSS_CHANGED_ERP_SLOT) 432 439 rtw_conf_tx(rtwdev, rtwvif); 440 + 441 + if (changed & BSS_CHANGED_PS) 442 + rtw_recalc_lps(rtwdev, NULL); 433 443 434 444 rtw_vif_port_config(rtwdev, rtwvif, config); 435 445
+2 -2
drivers/net/wireless/realtek/rtw88/main.c
··· 271 271 * more than two stations associated to the AP, then we can not enter 272 272 * lps, because fw does not handle the overlapped beacon interval 273 273 * 274 - * mac80211 should iterate vifs and determine if driver can enter 275 - * ps by passing IEEE80211_CONF_PS to us, all we need to do is to 274 + * rtw_recalc_lps() iterate vifs and determine if driver can enter 275 + * ps by vif->type and vif->cfg.ps, all we need to do here is to 276 276 * get that vif and check if device is having traffic more than the 277 277 * threshold. 278 278 */
+43
drivers/net/wireless/realtek/rtw88/ps.c
··· 299 299 300 300 __rtw_leave_lps_deep(rtwdev); 301 301 } 302 + 303 + struct rtw_vif_recalc_lps_iter_data { 304 + struct rtw_dev *rtwdev; 305 + struct ieee80211_vif *found_vif; 306 + int count; 307 + }; 308 + 309 + static void __rtw_vif_recalc_lps(struct rtw_vif_recalc_lps_iter_data *data, 310 + struct ieee80211_vif *vif) 311 + { 312 + if (data->count < 0) 313 + return; 314 + 315 + if (vif->type != NL80211_IFTYPE_STATION) { 316 + data->count = -1; 317 + return; 318 + } 319 + 320 + data->count++; 321 + data->found_vif = vif; 322 + } 323 + 324 + static void rtw_vif_recalc_lps_iter(void *data, u8 *mac, 325 + struct ieee80211_vif *vif) 326 + { 327 + __rtw_vif_recalc_lps(data, vif); 328 + } 329 + 330 + void rtw_recalc_lps(struct rtw_dev *rtwdev, struct ieee80211_vif *new_vif) 331 + { 332 + struct rtw_vif_recalc_lps_iter_data data = { .rtwdev = rtwdev }; 333 + 334 + if (new_vif) 335 + __rtw_vif_recalc_lps(&data, new_vif); 336 + rtw_iterate_vifs(rtwdev, rtw_vif_recalc_lps_iter, &data); 337 + 338 + if (data.count == 1 && data.found_vif->cfg.ps) { 339 + rtwdev->ps_enabled = true; 340 + } else { 341 + rtwdev->ps_enabled = false; 342 + rtw_leave_lps(rtwdev); 343 + } 344 + }
+2
drivers/net/wireless/realtek/rtw88/ps.h
··· 23 23 void rtw_leave_lps(struct rtw_dev *rtwdev); 24 24 void rtw_leave_lps_deep(struct rtw_dev *rtwdev); 25 25 enum rtw_lps_deep_mode rtw_get_lps_deep_mode(struct rtw_dev *rtwdev); 26 + void rtw_recalc_lps(struct rtw_dev *rtwdev, struct ieee80211_vif *new_vif); 27 + 26 28 #endif
-3
drivers/net/wireless/realtek/rtw89/core.c
··· 2531 2531 rtwvif->tdls_peer) 2532 2532 return; 2533 2533 2534 - if (rtwdev->total_sta_assoc > 1) 2535 - return; 2536 - 2537 2534 if (rtwvif->offchan) 2538 2535 return; 2539 2536
+6 -9
drivers/net/wireless/realtek/rtw89/mac80211.c
··· 89 89 !(hw->conf.flags & IEEE80211_CONF_IDLE)) 90 90 rtw89_leave_ips(rtwdev); 91 91 92 - if (changed & IEEE80211_CONF_CHANGE_PS) { 93 - if (hw->conf.flags & IEEE80211_CONF_PS) { 94 - rtwdev->lps_enabled = true; 95 - } else { 96 - rtw89_leave_lps(rtwdev); 97 - rtwdev->lps_enabled = false; 98 - } 99 - } 100 - 101 92 if (changed & IEEE80211_CONF_CHANGE_CHANNEL) { 102 93 rtw89_config_entity_chandef(rtwdev, RTW89_SUB_ENTITY_0, 103 94 &hw->conf.chandef); ··· 159 168 rtw89_core_txq_init(rtwdev, vif->txq); 160 169 161 170 rtw89_btc_ntfy_role_info(rtwdev, rtwvif, NULL, BTC_ROLE_START); 171 + 172 + rtw89_recalc_lps(rtwdev); 162 173 out: 163 174 mutex_unlock(&rtwdev->mutex); 164 175 ··· 185 192 rtw89_mac_remove_vif(rtwdev, rtwvif); 186 193 rtw89_core_release_bit_map(rtwdev->hw_port, rtwvif->port); 187 194 list_del_init(&rtwvif->list); 195 + rtw89_recalc_lps(rtwdev); 188 196 rtw89_enter_ips_by_hwflags(rtwdev); 189 197 190 198 mutex_unlock(&rtwdev->mutex); ··· 444 450 445 451 if (changed & BSS_CHANGED_CQM) 446 452 rtw89_fw_h2c_set_bcn_fltr_cfg(rtwdev, vif, true); 453 + 454 + if (changed & BSS_CHANGED_PS) 455 + rtw89_recalc_lps(rtwdev); 447 456 448 457 mutex_unlock(&rtwdev->mutex); 449 458 }
+26
drivers/net/wireless/realtek/rtw89/ps.c
··· 252 252 rtw89_p2p_disable_all_noa(rtwdev, vif); 253 253 rtw89_p2p_update_noa(rtwdev, vif); 254 254 } 255 + 256 + void rtw89_recalc_lps(struct rtw89_dev *rtwdev) 257 + { 258 + struct ieee80211_vif *vif, *found_vif = NULL; 259 + struct rtw89_vif *rtwvif; 260 + int count = 0; 261 + 262 + rtw89_for_each_rtwvif(rtwdev, rtwvif) { 263 + vif = rtwvif_to_vif(rtwvif); 264 + 265 + if (vif->type != NL80211_IFTYPE_STATION) { 266 + count = 0; 267 + break; 268 + } 269 + 270 + count++; 271 + found_vif = vif; 272 + } 273 + 274 + if (count == 1 && found_vif->cfg.ps) { 275 + rtwdev->lps_enabled = true; 276 + } else { 277 + rtw89_leave_lps(rtwdev); 278 + rtwdev->lps_enabled = false; 279 + } 280 + }
+1
drivers/net/wireless/realtek/rtw89/ps.h
··· 15 15 void rtw89_leave_ips(struct rtw89_dev *rtwdev); 16 16 void rtw89_set_coex_ctrl_lps(struct rtw89_dev *rtwdev, bool btc_ctrl); 17 17 void rtw89_process_p2p_ps(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif); 18 + void rtw89_recalc_lps(struct rtw89_dev *rtwdev); 18 19 19 20 static inline void rtw89_leave_ips_by_hwflags(struct rtw89_dev *rtwdev) 20 21 {
+1
drivers/of/overlay.c
··· 811 811 if (!fragment->target) { 812 812 pr_err("symbols in overlay, but not in live tree\n"); 813 813 ret = -EINVAL; 814 + of_node_put(node); 814 815 goto err_out; 815 816 } 816 817
+1
drivers/pinctrl/meson/pinctrl-meson-axg.c
··· 400 400 GPIO_GROUP(GPIOA_15), 401 401 GPIO_GROUP(GPIOA_16), 402 402 GPIO_GROUP(GPIOA_17), 403 + GPIO_GROUP(GPIOA_18), 403 404 GPIO_GROUP(GPIOA_19), 404 405 GPIO_GROUP(GPIOA_20), 405 406
+1 -1
drivers/platform/surface/aggregator/controller.c
··· 825 825 826 826 cplt->dev = dev; 827 827 828 - cplt->wq = create_workqueue(SSAM_CPLT_WQ_NAME); 828 + cplt->wq = alloc_workqueue(SSAM_CPLT_WQ_NAME, WQ_UNBOUND | WQ_MEM_RECLAIM, 0); 829 829 if (!cplt->wq) 830 830 return -ENOMEM; 831 831
+10
drivers/platform/surface/surface_aggregator_tabletsw.c
··· 210 210 SSAM_KIP_COVER_STATE_LAPTOP = 0x03, 211 211 SSAM_KIP_COVER_STATE_FOLDED_CANVAS = 0x04, 212 212 SSAM_KIP_COVER_STATE_FOLDED_BACK = 0x05, 213 + SSAM_KIP_COVER_STATE_BOOK = 0x06, 213 214 }; 214 215 215 216 static const char *ssam_kip_cover_state_name(struct ssam_tablet_sw *sw, ··· 232 231 case SSAM_KIP_COVER_STATE_FOLDED_BACK: 233 232 return "folded-back"; 234 233 234 + case SSAM_KIP_COVER_STATE_BOOK: 235 + return "book"; 236 + 235 237 default: 236 238 dev_warn(&sw->sdev->dev, "unknown KIP cover state: %u\n", state->state); 237 239 return "<unknown>"; ··· 248 244 case SSAM_KIP_COVER_STATE_DISCONNECTED: 249 245 case SSAM_KIP_COVER_STATE_FOLDED_CANVAS: 250 246 case SSAM_KIP_COVER_STATE_FOLDED_BACK: 247 + case SSAM_KIP_COVER_STATE_BOOK: 251 248 return true; 252 249 253 250 case SSAM_KIP_COVER_STATE_CLOSED: ··· 340 335 SSAM_POS_COVER_LAPTOP = 0x03, 341 336 SSAM_POS_COVER_FOLDED_CANVAS = 0x04, 342 337 SSAM_POS_COVER_FOLDED_BACK = 0x05, 338 + SSAM_POS_COVER_BOOK = 0x06, 343 339 }; 344 340 345 341 enum ssam_pos_state_sls { ··· 372 366 373 367 case SSAM_POS_COVER_FOLDED_BACK: 374 368 return "folded-back"; 369 + 370 + case SSAM_POS_COVER_BOOK: 371 + return "book"; 375 372 376 373 default: 377 374 dev_warn(&sw->sdev->dev, "unknown device posture for type-cover: %u\n", state); ··· 425 416 case SSAM_POS_COVER_DISCONNECTED: 426 417 case SSAM_POS_COVER_FOLDED_CANVAS: 427 418 case SSAM_POS_COVER_FOLDED_BACK: 419 + case SSAM_POS_COVER_BOOK: 428 420 return true; 429 421 430 422 case SSAM_POS_COVER_CLOSED:
+8 -5
drivers/platform/x86/intel/int3472/clk_and_regulator.c
··· 101 101 102 102 int3472->clock.ena_gpio = acpi_get_and_request_gpiod(path, agpio->pin_table[0], 103 103 "int3472,clk-enable"); 104 - if (IS_ERR(int3472->clock.ena_gpio)) 105 - return dev_err_probe(int3472->dev, PTR_ERR(int3472->clock.ena_gpio), 106 - "getting clk-enable GPIO\n"); 104 + if (IS_ERR(int3472->clock.ena_gpio)) { 105 + ret = PTR_ERR(int3472->clock.ena_gpio); 106 + int3472->clock.ena_gpio = NULL; 107 + return dev_err_probe(int3472->dev, ret, "getting clk-enable GPIO\n"); 108 + } 107 109 108 110 if (polarity == GPIO_ACTIVE_LOW) 109 111 gpiod_toggle_active_low(int3472->clock.ena_gpio); ··· 201 199 int3472->regulator.gpio = acpi_get_and_request_gpiod(path, agpio->pin_table[0], 202 200 "int3472,regulator"); 203 201 if (IS_ERR(int3472->regulator.gpio)) { 204 - dev_err(int3472->dev, "Failed to get regulator GPIO line\n"); 205 - return PTR_ERR(int3472->regulator.gpio); 202 + ret = PTR_ERR(int3472->regulator.gpio); 203 + int3472->regulator.gpio = NULL; 204 + return dev_err_probe(int3472->dev, ret, "getting regulator GPIO\n"); 206 205 } 207 206 208 207 /* Ensure the pin is in output mode and non-active state */
+7
drivers/regulator/Kconfig
··· 1033 1033 Say M here if you want to include support for enabling the VBUS output 1034 1034 as a module. The module will be named "qcom_usb_vbus_regulator". 1035 1035 1036 + config REGULATOR_RAA215300 1037 + tristate "Renesas RAA215300 driver" 1038 + select REGMAP_I2C 1039 + depends on I2C 1040 + help 1041 + Support for the Renesas RAA215300 PMIC. 1042 + 1036 1043 config REGULATOR_RASPBERRYPI_TOUCHSCREEN_ATTINY 1037 1044 tristate "Raspberry Pi 7-inch touchscreen panel ATTINY regulator" 1038 1045 depends on BACKLIGHT_CLASS_DEVICE
+1
drivers/regulator/Makefile
··· 124 124 obj-$(CONFIG_REGULATOR_PBIAS) += pbias-regulator.o 125 125 obj-$(CONFIG_REGULATOR_PCAP) += pcap-regulator.o 126 126 obj-$(CONFIG_REGULATOR_PCF50633) += pcf50633-regulator.o 127 + obj-$(CONFIG_REGULATOR_RAA215300) += raa215300.o 127 128 obj-$(CONFIG_REGULATOR_RASPBERRYPI_TOUCHSCREEN_ATTINY) += rpi-panel-attiny-regulator.o 128 129 obj-$(CONFIG_REGULATOR_RC5T583) += rc5t583-regulator.o 129 130 obj-$(CONFIG_REGULATOR_RK808) += rk808-regulator.o
+15 -15
drivers/regulator/qcom-rpmh-regulator.c
···
1057 1057 };
1058 1058 
1059 1059 static const struct rpmh_vreg_init_data pm8550_vreg_data[] = {
1060 - 	RPMH_VREG("ldo1", "ldo%s1", &pmic5_pldo, "vdd-l1-l4-l10"),
1060 + 	RPMH_VREG("ldo1", "ldo%s1", &pmic5_nldo515, "vdd-l1-l4-l10"),
1061 1061 	RPMH_VREG("ldo2", "ldo%s2", &pmic5_pldo, "vdd-l2-l13-l14"),
1062 - 	RPMH_VREG("ldo3", "ldo%s3", &pmic5_nldo, "vdd-l3"),
1063 - 	RPMH_VREG("ldo4", "ldo%s4", &pmic5_nldo, "vdd-l1-l4-l10"),
1062 + 	RPMH_VREG("ldo3", "ldo%s3", &pmic5_nldo515, "vdd-l3"),
1063 + 	RPMH_VREG("ldo4", "ldo%s4", &pmic5_nldo515, "vdd-l1-l4-l10"),
1064 1064 	RPMH_VREG("ldo5", "ldo%s5", &pmic5_pldo, "vdd-l5-l16"),
1065 - 	RPMH_VREG("ldo6", "ldo%s6", &pmic5_pldo_lv, "vdd-l6-l7"),
1066 - 	RPMH_VREG("ldo7", "ldo%s7", &pmic5_pldo_lv, "vdd-l6-l7"),
1067 - 	RPMH_VREG("ldo8", "ldo%s8", &pmic5_pldo_lv, "vdd-l8-l9"),
1065 + 	RPMH_VREG("ldo6", "ldo%s6", &pmic5_pldo, "vdd-l6-l7"),
1066 + 	RPMH_VREG("ldo7", "ldo%s7", &pmic5_pldo, "vdd-l6-l7"),
1067 + 	RPMH_VREG("ldo8", "ldo%s8", &pmic5_pldo, "vdd-l8-l9"),
1068 1068 	RPMH_VREG("ldo9", "ldo%s9", &pmic5_pldo, "vdd-l8-l9"),
1069 - 	RPMH_VREG("ldo10", "ldo%s10", &pmic5_nldo, "vdd-l1-l4-l10"),
1070 - 	RPMH_VREG("ldo11", "ldo%s11", &pmic5_nldo, "vdd-l11"),
1069 + 	RPMH_VREG("ldo10", "ldo%s10", &pmic5_nldo515, "vdd-l1-l4-l10"),
1070 + 	RPMH_VREG("ldo11", "ldo%s11", &pmic5_nldo515, "vdd-l11"),
1071 1071 	RPMH_VREG("ldo12", "ldo%s12", &pmic5_pldo, "vdd-l12"),
1072 1072 	RPMH_VREG("ldo13", "ldo%s13", &pmic5_pldo, "vdd-l2-l13-l14"),
1073 1073 	RPMH_VREG("ldo14", "ldo%s14", &pmic5_pldo, "vdd-l2-l13-l14"),
1074 - 	RPMH_VREG("ldo15", "ldo%s15", &pmic5_pldo, "vdd-l15"),
1074 + 	RPMH_VREG("ldo15", "ldo%s15", &pmic5_nldo515, "vdd-l15"),
1075 1075 	RPMH_VREG("ldo16", "ldo%s16", &pmic5_pldo, "vdd-l5-l16"),
1076 1076 	RPMH_VREG("ldo17", "ldo%s17", &pmic5_pldo, "vdd-l17"),
1077 1077 	RPMH_VREG("bob1", "bob%s1", &pmic5_bob, "vdd-bob1"),
···
1086 1086 	RPMH_VREG("smps4", "smp%s4", &pmic5_ftsmps525_lv, "vdd-s4"),
1087 1087 	RPMH_VREG("smps5", "smp%s5", &pmic5_ftsmps525_lv, "vdd-s5"),
1088 1088 	RPMH_VREG("smps6", "smp%s6", &pmic5_ftsmps525_mv, "vdd-s6"),
1089 - 	RPMH_VREG("ldo1", "ldo%s1", &pmic5_nldo, "vdd-l1"),
1090 - 	RPMH_VREG("ldo2", "ldo%s2", &pmic5_nldo, "vdd-l2"),
1091 - 	RPMH_VREG("ldo3", "ldo%s3", &pmic5_nldo, "vdd-l3"),
1089 + 	RPMH_VREG("ldo1", "ldo%s1", &pmic5_nldo515, "vdd-l1"),
1090 + 	RPMH_VREG("ldo2", "ldo%s2", &pmic5_nldo515, "vdd-l2"),
1091 + 	RPMH_VREG("ldo3", "ldo%s3", &pmic5_nldo515, "vdd-l3"),
1092 1092 	{}
1093 1093 };
1094 1094 
···
1101 1101 	RPMH_VREG("smps6", "smp%s6", &pmic5_ftsmps525_lv, "vdd-s6"),
1102 1102 	RPMH_VREG("smps7", "smp%s7", &pmic5_ftsmps525_lv, "vdd-s7"),
1103 1103 	RPMH_VREG("smps8", "smp%s8", &pmic5_ftsmps525_lv, "vdd-s8"),
1104 - 	RPMH_VREG("ldo1", "ldo%s1", &pmic5_nldo, "vdd-l1"),
1105 - 	RPMH_VREG("ldo2", "ldo%s2", &pmic5_nldo, "vdd-l2"),
1106 - 	RPMH_VREG("ldo3", "ldo%s3", &pmic5_nldo, "vdd-l3"),
1104 + 	RPMH_VREG("ldo1", "ldo%s1", &pmic5_nldo515, "vdd-l1"),
1105 + 	RPMH_VREG("ldo2", "ldo%s2", &pmic5_nldo515, "vdd-l2"),
1106 + 	RPMH_VREG("ldo3", "ldo%s3", &pmic5_nldo515, "vdd-l3"),
1107 1107 	{}
1108 1108 };
1109 1109 
+190
drivers/regulator/raa215300.c
···
1 + // SPDX-License-Identifier: GPL-2.0
2 + //
3 + // Renesas RAA215300 PMIC driver
4 + //
5 + // Copyright (C) 2023 Renesas Electronics Corporation
6 + //
7 + 
8 + #include <linux/clk.h>
9 + #include <linux/clkdev.h>
10 + #include <linux/clk-provider.h>
11 + #include <linux/err.h>
12 + #include <linux/i2c.h>
13 + #include <linux/module.h>
14 + #include <linux/of.h>
15 + #include <linux/regmap.h>
16 + 
17 + #define RAA215300_FAULT_LATCHED_STATUS_1	0x59
18 + #define RAA215300_FAULT_LATCHED_STATUS_2	0x5a
19 + #define RAA215300_FAULT_LATCHED_STATUS_3	0x5b
20 + #define RAA215300_FAULT_LATCHED_STATUS_4	0x5c
21 + #define RAA215300_FAULT_LATCHED_STATUS_6	0x5e
22 + 
23 + #define RAA215300_INT_MASK_1	0x64
24 + #define RAA215300_INT_MASK_2	0x65
25 + #define RAA215300_INT_MASK_3	0x66
26 + #define RAA215300_INT_MASK_4	0x67
27 + #define RAA215300_INT_MASK_6	0x68
28 + 
29 + #define RAA215300_REG_BLOCK_EN	0x6c
30 + #define RAA215300_HW_REV	0xf8
31 + 
32 + #define RAA215300_INT_MASK_1_ALL	GENMASK(5, 0)
33 + #define RAA215300_INT_MASK_2_ALL	GENMASK(3, 0)
34 + #define RAA215300_INT_MASK_3_ALL	GENMASK(5, 0)
35 + #define RAA215300_INT_MASK_4_ALL	BIT(0)
36 + #define RAA215300_INT_MASK_6_ALL	GENMASK(7, 0)
37 + 
38 + #define RAA215300_REG_BLOCK_EN_RTC_EN	BIT(6)
39 + #define RAA215300_RTC_DEFAULT_ADDR	0x6f
40 + 
41 + static const char *clkin_name = "clkin";
42 + static const char *xin_name = "xin";
43 + static struct clk *clk;
44 + 
45 + static const struct regmap_config raa215300_regmap_config = {
46 + 	.reg_bits = 8,
47 + 	.val_bits = 8,
48 + 	.max_register = 0xff,
49 + };
50 + 
51 + static void raa215300_rtc_unregister_device(void *data)
52 + {
53 + 	i2c_unregister_device(data);
54 + 	if (clk) {
55 + 		clk_unregister_fixed_rate(clk);
56 + 		clk = NULL;
57 + 	}
58 + }
59 + 
60 + static int raa215300_clk_present(struct i2c_client *client, const char *name)
61 + {
62 + 	struct clk *clk;
63 + 
64 + 	clk = devm_clk_get_optional(&client->dev, name);
65 + 	if (IS_ERR(clk))
66 + 		return PTR_ERR(clk);
67 + 
68 + 	return !!clk;
69 + }
70 + 
71 + static int raa215300_i2c_probe(struct i2c_client *client)
72 + {
73 + 	struct device *dev = &client->dev;
74 + 	const char *clk_name = xin_name;
75 + 	unsigned int pmic_version, val;
76 + 	struct regmap *regmap;
77 + 	int ret;
78 + 
79 + 	regmap = devm_regmap_init_i2c(client, &raa215300_regmap_config);
80 + 	if (IS_ERR(regmap))
81 + 		return dev_err_probe(dev, PTR_ERR(regmap),
82 + 				     "regmap i2c init failed\n");
83 + 
84 + 	ret = regmap_read(regmap, RAA215300_HW_REV, &pmic_version);
85 + 	if (ret < 0)
86 + 		return dev_err_probe(dev, ret, "HW rev read failed\n");
87 + 
88 + 	dev_dbg(dev, "RAA215300 PMIC version 0x%04x\n", pmic_version);
89 + 
90 + 	/* Clear all blocks except RTC, if enabled */
91 + 	regmap_read(regmap, RAA215300_REG_BLOCK_EN, &val);
92 + 	val &= RAA215300_REG_BLOCK_EN_RTC_EN;
93 + 	regmap_write(regmap, RAA215300_REG_BLOCK_EN, val);
94 + 
95 + 	/* Clear the latched registers */
96 + 	regmap_read(regmap, RAA215300_FAULT_LATCHED_STATUS_1, &val);
97 + 	regmap_write(regmap, RAA215300_FAULT_LATCHED_STATUS_1, val);
98 + 	regmap_read(regmap, RAA215300_FAULT_LATCHED_STATUS_2, &val);
99 + 	regmap_write(regmap, RAA215300_FAULT_LATCHED_STATUS_2, val);
100 + 	regmap_read(regmap, RAA215300_FAULT_LATCHED_STATUS_3, &val);
101 + 	regmap_write(regmap, RAA215300_FAULT_LATCHED_STATUS_3, val);
102 + 	regmap_read(regmap, RAA215300_FAULT_LATCHED_STATUS_4, &val);
103 + 	regmap_write(regmap, RAA215300_FAULT_LATCHED_STATUS_4, val);
104 + 	regmap_read(regmap, RAA215300_FAULT_LATCHED_STATUS_6, &val);
105 + 	regmap_write(regmap, RAA215300_FAULT_LATCHED_STATUS_6, val);
106 + 
107 + 	/* Mask all the PMIC interrupts */
108 + 	regmap_write(regmap, RAA215300_INT_MASK_1, RAA215300_INT_MASK_1_ALL);
109 + 	regmap_write(regmap, RAA215300_INT_MASK_2, RAA215300_INT_MASK_2_ALL);
110 + 	regmap_write(regmap, RAA215300_INT_MASK_3, RAA215300_INT_MASK_3_ALL);
111 + 	regmap_write(regmap, RAA215300_INT_MASK_4, RAA215300_INT_MASK_4_ALL);
112 + 	regmap_write(regmap, RAA215300_INT_MASK_6, RAA215300_INT_MASK_6_ALL);
113 + 
114 + 	ret = raa215300_clk_present(client, xin_name);
115 + 	if (ret < 0) {
116 + 		return ret;
117 + 	} else if (!ret) {
118 + 		ret = raa215300_clk_present(client, clkin_name);
119 + 		if (ret < 0)
120 + 			return ret;
121 + 
122 + 		clk_name = clkin_name;
123 + 	}
124 + 
125 + 	if (ret) {
126 + 		char *name = pmic_version >= 0x12 ? "isl1208" : "raa215300_a0";
127 + 		struct device_node *np = client->dev.of_node;
128 + 		u32 addr = RAA215300_RTC_DEFAULT_ADDR;
129 + 		struct i2c_board_info info = {};
130 + 		struct i2c_client *rtc_client;
131 + 		ssize_t size;
132 + 
133 + 		clk = clk_register_fixed_rate(NULL, clk_name, NULL, 0, 32000);
134 + 		clk_register_clkdev(clk, clk_name, NULL);
135 + 
136 + 		if (np) {
137 + 			int i;
138 + 
139 + 			i = of_property_match_string(np, "reg-names", "rtc");
140 + 			if (i >= 0)
141 + 				of_property_read_u32_index(np, "reg", i, &addr);
142 + 		}
143 + 
144 + 		info.addr = addr;
145 + 		if (client->irq > 0)
146 + 			info.irq = client->irq;
147 + 
148 + 		size = strscpy(info.type, name, sizeof(info.type));
149 + 		if (size < 0)
150 + 			return dev_err_probe(dev, size,
151 + 					     "Invalid device name: %s\n", name);
152 + 
153 + 		/* Enable RTC block */
154 + 		regmap_update_bits(regmap, RAA215300_REG_BLOCK_EN,
155 + 				   RAA215300_REG_BLOCK_EN_RTC_EN,
156 + 				   RAA215300_REG_BLOCK_EN_RTC_EN);
157 + 
158 + 		rtc_client = i2c_new_client_device(client->adapter, &info);
159 + 		if (IS_ERR(rtc_client))
160 + 			return PTR_ERR(rtc_client);
161 + 
162 + 		ret = devm_add_action_or_reset(dev,
163 + 					       raa215300_rtc_unregister_device,
164 + 					       rtc_client);
165 + 		if (ret < 0)
166 + 			return ret;
167 + 	}
168 + 
169 + 	return 0;
170 + }
171 + 
172 + static const struct of_device_id raa215300_dt_match[] = {
173 + 	{ .compatible = "renesas,raa215300" },
174 + 	{ /* sentinel */ }
175 + };
176 + MODULE_DEVICE_TABLE(of, raa215300_dt_match);
177 + 
178 + static struct i2c_driver raa215300_i2c_driver = {
179 + 	.driver = {
180 + 		.name = "raa215300",
181 + 		.of_match_table = raa215300_dt_match,
182 + 	},
183 + 	.probe_new = raa215300_i2c_probe,
184 + };
185 + module_i2c_driver(raa215300_i2c_driver);
186 + 
187 + MODULE_DESCRIPTION("Renesas RAA215300 PMIC driver");
188 + MODULE_AUTHOR("Fabrizio Castro <fabrizio.castro.jz@renesas.com>");
189 + MODULE_AUTHOR("Biju Das <biju.das.jz@bp.renesas.com>");
190 + MODULE_LICENSE("GPL");
+2 -2
drivers/s390/block/dasd_ioctl.c
··· 552 552 553 553 memcpy(dasd_info->type, base->discipline->name, 4); 554 554 555 - spin_lock_irqsave(&block->queue_lock, flags); 555 + spin_lock_irqsave(get_ccwdev_lock(base->cdev), flags); 556 556 list_for_each(l, &base->ccw_queue) 557 557 dasd_info->chanq_len++; 558 - spin_unlock_irqrestore(&block->queue_lock, flags); 558 + spin_unlock_irqrestore(get_ccwdev_lock(base->cdev), flags); 559 559 return 0; 560 560 } 561 561
+4 -1
drivers/s390/cio/device.c
··· 1376 1376 enum io_sch_action { 1377 1377 IO_SCH_UNREG, 1378 1378 IO_SCH_ORPH_UNREG, 1379 + IO_SCH_UNREG_CDEV, 1379 1380 IO_SCH_ATTACH, 1380 1381 IO_SCH_UNREG_ATTACH, 1381 1382 IO_SCH_ORPH_ATTACH, ··· 1409 1408 } 1410 1409 if ((sch->schib.pmcw.pam & sch->opm) == 0) { 1411 1410 if (ccw_device_notify(cdev, CIO_NO_PATH) != NOTIFY_OK) 1412 - return IO_SCH_UNREG; 1411 + return IO_SCH_UNREG_CDEV; 1413 1412 return IO_SCH_DISC; 1414 1413 } 1415 1414 if (device_is_disconnected(cdev)) ··· 1471 1470 case IO_SCH_ORPH_ATTACH: 1472 1471 ccw_device_set_disconnected(cdev); 1473 1472 break; 1473 + case IO_SCH_UNREG_CDEV: 1474 1474 case IO_SCH_UNREG_ATTACH: 1475 1475 case IO_SCH_UNREG: 1476 1476 if (!cdev) ··· 1505 1503 if (rc) 1506 1504 goto out; 1507 1505 break; 1506 + case IO_SCH_UNREG_CDEV: 1508 1507 case IO_SCH_UNREG_ATTACH: 1509 1508 spin_lock_irqsave(sch->lock, flags); 1510 1509 sch_set_cdev(sch, NULL);
-8
drivers/s390/net/ism_drv.c
··· 771 771 772 772 static void __exit ism_exit(void) 773 773 { 774 - struct ism_dev *ism; 775 - 776 - mutex_lock(&ism_dev_list.mutex); 777 - list_for_each_entry(ism, &ism_dev_list.list, list) { 778 - ism_dev_exit(ism); 779 - } 780 - mutex_unlock(&ism_dev_list.mutex); 781 - 782 774 pci_unregister_driver(&ism_driver); 783 775 debug_unregister(ism_debug_info); 784 776 }
+1
drivers/scsi/aacraid/aacraid.h
··· 1678 1678 u32 handle_pci_error; 1679 1679 bool init_reset; 1680 1680 u8 soft_reset_support; 1681 + u8 use_map_queue; 1681 1682 }; 1682 1683 1683 1684 #define aac_adapter_interrupt(dev) \
+5 -1
drivers/scsi/aacraid/commsup.c
··· 223 223 struct fib *aac_fib_alloc_tag(struct aac_dev *dev, struct scsi_cmnd *scmd) 224 224 { 225 225 struct fib *fibptr; 226 + u32 blk_tag; 227 + int i; 226 228 227 - fibptr = &dev->fibs[scsi_cmd_to_rq(scmd)->tag]; 229 + blk_tag = blk_mq_unique_tag(scsi_cmd_to_rq(scmd)); 230 + i = blk_mq_unique_tag_to_tag(blk_tag); 231 + fibptr = &dev->fibs[i]; 228 232 /* 229 233 * Null out fields that depend on being zero at the start of 230 234 * each I/O
+14
drivers/scsi/aacraid/linit.c
··· 19 19 20 20 #include <linux/compat.h> 21 21 #include <linux/blkdev.h> 22 + #include <linux/blk-mq-pci.h> 22 23 #include <linux/completion.h> 23 24 #include <linux/init.h> 24 25 #include <linux/interrupt.h> ··· 503 502 sdev->tagged_supported = 1; 504 503 505 504 return 0; 505 + } 506 + 507 + static void aac_map_queues(struct Scsi_Host *shost) 508 + { 509 + struct aac_dev *aac = (struct aac_dev *)shost->hostdata; 510 + 511 + blk_mq_pci_map_queues(&shost->tag_set.map[HCTX_TYPE_DEFAULT], 512 + aac->pdev, 0); 513 + aac->use_map_queue = true; 506 514 } 507 515 508 516 /** ··· 1498 1488 .bios_param = aac_biosparm, 1499 1489 .shost_groups = aac_host_groups, 1500 1490 .slave_configure = aac_slave_configure, 1491 + .map_queues = aac_map_queues, 1501 1492 .change_queue_depth = aac_change_queue_depth, 1502 1493 .sdev_groups = aac_dev_groups, 1503 1494 .eh_abort_handler = aac_eh_abort, ··· 1786 1775 shost->max_lun = AAC_MAX_LUN; 1787 1776 1788 1777 pci_set_drvdata(pdev, shost); 1778 + shost->nr_hw_queues = aac->max_msix; 1779 + shost->host_tagset = 1; 1789 1780 1790 1781 error = scsi_add_host(shost, &pdev->dev); 1791 1782 if (error) ··· 1919 1906 struct aac_dev *aac = (struct aac_dev *)shost->hostdata; 1920 1907 1921 1908 aac_cancel_rescan_worker(aac); 1909 + aac->use_map_queue = false; 1922 1910 scsi_remove_host(shost); 1923 1911 1924 1912 __aac_shutdown(aac);
+23 -2
drivers/scsi/aacraid/src.c
··· 493 493 #endif 494 494 495 495 u16 vector_no; 496 + struct scsi_cmnd *scmd; 497 + u32 blk_tag; 498 + struct Scsi_Host *shost = dev->scsi_host_ptr; 499 + struct blk_mq_queue_map *qmap; 496 500 497 501 atomic_inc(&q->numpending); 498 502 ··· 509 505 if ((dev->comm_interface == AAC_COMM_MESSAGE_TYPE3) 510 506 && dev->sa_firmware) 511 507 vector_no = aac_get_vector(dev); 512 - else 513 - vector_no = fib->vector_no; 508 + else { 509 + if (!fib->vector_no || !fib->callback_data) { 510 + if (shost && dev->use_map_queue) { 511 + qmap = &shost->tag_set.map[HCTX_TYPE_DEFAULT]; 512 + vector_no = qmap->mq_map[raw_smp_processor_id()]; 513 + } 514 + /* 515 + * We hardcode the vector_no for 516 + * reserved commands as a valid shost is 517 + * absent during the init 518 + */ 519 + else 520 + vector_no = 0; 521 + } else { 522 + scmd = (struct scsi_cmnd *)fib->callback_data; 523 + blk_tag = blk_mq_unique_tag(scsi_cmd_to_rq(scmd)); 524 + vector_no = blk_mq_unique_tag_to_hwq(blk_tag); 525 + } 526 + } 514 527 515 528 if (native_hba) { 516 529 if (fib->flags & FIB_CONTEXT_FLAG_NATIVE_HBA_TMF) {
+4 -4
drivers/scsi/lpfc/lpfc_bsg.c
··· 889 889 struct lpfc_iocbq *piocbq) 890 890 { 891 891 uint32_t evt_req_id = 0; 892 - uint32_t cmd; 892 + u16 cmd; 893 893 struct lpfc_dmabuf *dmabuf = NULL; 894 894 struct lpfc_bsg_event *evt; 895 895 struct event_data *evt_dat = NULL; ··· 915 915 916 916 ct_req = (struct lpfc_sli_ct_request *)bdeBuf1->virt; 917 917 evt_req_id = ct_req->FsType; 918 - cmd = ct_req->CommandResponse.bits.CmdRsp; 918 + cmd = be16_to_cpu(ct_req->CommandResponse.bits.CmdRsp); 919 919 920 920 spin_lock_irqsave(&phba->ct_ev_lock, flags); 921 921 list_for_each_entry(evt, &phba->ct_ev_waiters, node) { ··· 3186 3186 ctreq->RevisionId.bits.InId = 0; 3187 3187 ctreq->FsType = SLI_CT_ELX_LOOPBACK; 3188 3188 ctreq->FsSubType = 0; 3189 - ctreq->CommandResponse.bits.CmdRsp = ELX_LOOPBACK_DATA; 3190 - ctreq->CommandResponse.bits.Size = size; 3189 + ctreq->CommandResponse.bits.CmdRsp = cpu_to_be16(ELX_LOOPBACK_DATA); 3190 + ctreq->CommandResponse.bits.Size = cpu_to_be16(size); 3191 3191 segment_offset = ELX_LOOPBACK_HEADER_SZ; 3192 3192 } else 3193 3193 segment_offset = 0;
+2
drivers/scsi/storvsc_drv.c
··· 1567 1567 { 1568 1568 blk_queue_rq_timeout(sdevice->request_queue, (storvsc_timeout * HZ)); 1569 1569 1570 + /* storvsc devices don't support MAINTENANCE_IN SCSI cmd */ 1571 + sdevice->no_report_opcodes = 1; 1570 1572 sdevice->no_write_same = 1; 1571 1573 1572 1574 /*
+2 -1
drivers/soc/qcom/Makefile
··· 32 32 obj-$(CONFIG_QCOM_RPMPD) += rpmpd.o 33 33 obj-$(CONFIG_QCOM_KRYO_L2_ACCESSORS) += kryo-l2-accessors.o 34 34 obj-$(CONFIG_QCOM_ICC_BWMON) += icc-bwmon.o 35 - obj-$(CONFIG_QCOM_INLINE_CRYPTO_ENGINE) += ice.o 35 + qcom_ice-objs += ice.o 36 + obj-$(CONFIG_QCOM_INLINE_CRYPTO_ENGINE) += qcom_ice.o
+2 -2
drivers/soc/qcom/icc-bwmon.c
··· 773 773 bwmon->max_bw_kbps = UINT_MAX; 774 774 opp = dev_pm_opp_find_bw_floor(dev, &bwmon->max_bw_kbps, 0); 775 775 if (IS_ERR(opp)) 776 - return dev_err_probe(dev, ret, "failed to find max peak bandwidth\n"); 776 + return dev_err_probe(dev, PTR_ERR(opp), "failed to find max peak bandwidth\n"); 777 777 778 778 bwmon->min_bw_kbps = 0; 779 779 opp = dev_pm_opp_find_bw_ceil(dev, &bwmon->min_bw_kbps, 0); 780 780 if (IS_ERR(opp)) 781 - return dev_err_probe(dev, ret, "failed to find min peak bandwidth\n"); 781 + return dev_err_probe(dev, PTR_ERR(opp), "failed to find min peak bandwidth\n"); 782 782 783 783 bwmon->dev = dev; 784 784
+1 -1
drivers/soc/qcom/ramp_controller.c
··· 296 296 return -ENOMEM; 297 297 298 298 qrc->desc = device_get_match_data(&pdev->dev); 299 - if (!qrc) 299 + if (!qrc->desc) 300 300 return -EINVAL; 301 301 302 302 qrc->regmap = devm_regmap_init_mmio(&pdev->dev, base, &qrc_regmap_config);
+1
drivers/soc/qcom/rmtfs_mem.c
··· 233 233 num_vmids = 0; 234 234 } else if (num_vmids < 0) { 235 235 dev_err(&pdev->dev, "failed to count qcom,vmid elements: %d\n", num_vmids); 236 + ret = num_vmids; 236 237 goto remove_cdev; 237 238 } else if (num_vmids > NUM_MAX_VMIDS) { 238 239 dev_warn(&pdev->dev,
+1 -1
drivers/soc/qcom/rpmh-rsc.c
··· 1073 1073 drv->ver.minor = rsc_id & (MINOR_VER_MASK << MINOR_VER_SHIFT); 1074 1074 drv->ver.minor >>= MINOR_VER_SHIFT; 1075 1075 1076 - if (drv->ver.major == 3 && drv->ver.minor >= 0) 1076 + if (drv->ver.major == 3) 1077 1077 drv->regs = rpmh_rsc_reg_offset_ver_3_0; 1078 1078 else 1079 1079 drv->regs = rpmh_rsc_reg_offset_ver_2_7;
+16
drivers/soc/qcom/rpmhpd.c
··· 342 342 .num_pds = ARRAY_SIZE(sm8150_rpmhpds), 343 343 }; 344 344 345 + static struct rpmhpd *sa8155p_rpmhpds[] = { 346 + [SA8155P_CX] = &cx_w_mx_parent, 347 + [SA8155P_CX_AO] = &cx_ao_w_mx_parent, 348 + [SA8155P_EBI] = &ebi, 349 + [SA8155P_GFX] = &gfx, 350 + [SA8155P_MSS] = &mss, 351 + [SA8155P_MX] = &mx, 352 + [SA8155P_MX_AO] = &mx_ao, 353 + }; 354 + 355 + static const struct rpmhpd_desc sa8155p_desc = { 356 + .rpmhpds = sa8155p_rpmhpds, 357 + .num_pds = ARRAY_SIZE(sa8155p_rpmhpds), 358 + }; 359 + 345 360 /* SM8250 RPMH powerdomains */ 346 361 static struct rpmhpd *sm8250_rpmhpds[] = { 347 362 [SM8250_CX] = &cx_w_mx_parent, ··· 534 519 535 520 static const struct of_device_id rpmhpd_match_table[] = { 536 521 { .compatible = "qcom,qdu1000-rpmhpd", .data = &qdu1000_desc }, 522 + { .compatible = "qcom,sa8155p-rpmhpd", .data = &sa8155p_desc }, 537 523 { .compatible = "qcom,sa8540p-rpmhpd", .data = &sa8540p_desc }, 538 524 { .compatible = "qcom,sa8775p-rpmhpd", .data = &sa8775p_desc }, 539 525 { .compatible = "qcom,sc7180-rpmhpd", .data = &sc7180_desc },
+7
drivers/soundwire/dmi-quirks.c
··· 100 100 .driver_data = (void *)intel_tgl_bios, 101 101 }, 102 102 { 103 + .matches = { 104 + DMI_MATCH(DMI_SYS_VENDOR, "HP"), 105 + DMI_MATCH(DMI_BOARD_NAME, "8709"), 106 + }, 107 + .driver_data = (void *)intel_tgl_bios, 108 + }, 109 + { 103 110 /* quirk used for NUC15 'Bishop County' LAPBC510 and LAPBC710 skews */ 104 111 .matches = { 105 112 DMI_MATCH(DMI_SYS_VENDOR, "Intel(R) Client Systems"),
+13 -4
drivers/soundwire/qcom.c
··· 1099 1099 } 1100 1100 1101 1101 sruntime = sdw_alloc_stream(dai->name); 1102 - if (!sruntime) 1103 - return -ENOMEM; 1102 + if (!sruntime) { 1103 + ret = -ENOMEM; 1104 + goto err_alloc; 1105 + } 1104 1106 1105 1107 ctrl->sruntime[dai->id] = sruntime; 1106 1108 ··· 1112 1110 if (ret < 0 && ret != -ENOTSUPP) { 1113 1111 dev_err(dai->dev, "Failed to set sdw stream on %s\n", 1114 1112 codec_dai->name); 1115 - sdw_release_stream(sruntime); 1116 - return ret; 1113 + goto err_set_stream; 1117 1114 } 1118 1115 } 1119 1116 1120 1117 return 0; 1118 + 1119 + err_set_stream: 1120 + sdw_release_stream(sruntime); 1121 + err_alloc: 1122 + pm_runtime_mark_last_busy(ctrl->dev); 1123 + pm_runtime_put_autosuspend(ctrl->dev); 1124 + 1125 + return ret; 1121 1126 } 1122 1127 1123 1128 static void qcom_swrm_shutdown(struct snd_pcm_substream *substream,
+3 -1
drivers/soundwire/stream.c
··· 2021 2021 2022 2022 skip_alloc_master_rt: 2023 2023 s_rt = sdw_slave_rt_find(slave, stream); 2024 - if (s_rt) 2024 + if (s_rt) { 2025 + alloc_slave_rt = false; 2025 2026 goto skip_alloc_slave_rt; 2027 + } 2026 2028 2027 2029 s_rt = sdw_slave_rt_alloc(slave, m_rt); 2028 2030 if (!s_rt) {
+5 -2
drivers/spi/spi-cadence-quadspi.c
··· 1756 1756 cqspi->slow_sram = true; 1757 1757 1758 1758 if (of_device_is_compatible(pdev->dev.of_node, 1759 - "xlnx,versal-ospi-1.0")) 1760 - dma_set_mask(&pdev->dev, DMA_BIT_MASK(64)); 1759 + "xlnx,versal-ospi-1.0")) { 1760 + ret = dma_set_mask(&pdev->dev, DMA_BIT_MASK(64)); 1761 + if (ret) 1762 + goto probe_reset_failed; 1763 + } 1761 1764 } 1762 1765 1763 1766 ret = devm_request_irq(dev, irq, cqspi_irq_handler, 0,
+1 -1
drivers/spi/spi-dw-mmio.c
··· 274 274 */ 275 275 spi_set_chipselect(spi, 0, 0); 276 276 dw_spi_set_cs(spi, enable); 277 - spi_get_chipselect(spi, cs); 277 + spi_set_chipselect(spi, 0, cs); 278 278 } 279 279 280 280 static int dw_spi_elba_init(struct platform_device *pdev,
+15
drivers/spi/spi-fsl-dspi.c
··· 1002 1002 static int dspi_setup(struct spi_device *spi) 1003 1003 { 1004 1004 struct fsl_dspi *dspi = spi_controller_get_devdata(spi->controller); 1005 + u32 period_ns = DIV_ROUND_UP(NSEC_PER_SEC, spi->max_speed_hz); 1005 1006 unsigned char br = 0, pbr = 0, pcssck = 0, cssck = 0; 1007 + u32 quarter_period_ns = DIV_ROUND_UP(period_ns, 4); 1006 1008 u32 cs_sck_delay = 0, sck_cs_delay = 0; 1007 1009 struct fsl_dspi_platform_data *pdata; 1008 1010 unsigned char pasc = 0, asc = 0; ··· 1032 1030 cs_sck_delay = pdata->cs_sck_delay; 1033 1031 sck_cs_delay = pdata->sck_cs_delay; 1034 1032 } 1033 + 1034 + /* Since tCSC and tASC apply to continuous transfers too, avoid SCK 1035 + * glitches of half a cycle by never allowing tCSC + tASC to go below 1036 + * half a SCK period. 1037 + */ 1038 + if (cs_sck_delay < quarter_period_ns) 1039 + cs_sck_delay = quarter_period_ns; 1040 + if (sck_cs_delay < quarter_period_ns) 1041 + sck_cs_delay = quarter_period_ns; 1042 + 1043 + dev_dbg(&spi->dev, 1044 + "DSPI controller timing params: CS-to-SCK delay %u ns, SCK-to-CS delay %u ns\n", 1045 + cs_sck_delay, sck_cs_delay); 1035 1046 1036 1047 clkrate = clk_get_rate(dspi->clk); 1037 1048 hz_to_spi_baud(&pbr, &br, spi->max_speed_hz, clkrate);
+6 -1
drivers/spi/spi-fsl-lpspi.c
··· 910 910 ret = fsl_lpspi_dma_init(&pdev->dev, fsl_lpspi, controller); 911 911 if (ret == -EPROBE_DEFER) 912 912 goto out_pm_get; 913 - 914 913 if (ret < 0) 915 914 dev_err(&pdev->dev, "dma setup error %d, use pio\n", ret); 915 + else 916 + /* 917 + * disable LPSPI module IRQ when enable DMA mode successfully, 918 + * to prevent the unexpected LPSPI module IRQ events. 919 + */ 920 + disable_irq(irq); 916 921 917 922 ret = devm_spi_register_controller(&pdev->dev, controller); 918 923 if (ret < 0) {
+3
drivers/spi/spi-mt65xx.c
··· 1275 1275 struct mtk_spi *mdata = spi_master_get_devdata(master); 1276 1276 int ret; 1277 1277 1278 + if (mdata->use_spimem && !completion_done(&mdata->spimem_done)) 1279 + complete(&mdata->spimem_done); 1280 + 1278 1281 ret = pm_runtime_resume_and_get(&pdev->dev); 1279 1282 if (ret < 0) 1280 1283 return ret;
+18 -19
drivers/spi/spi-qup.c
··· 1028 1028 return -ENXIO; 1029 1029 } 1030 1030 1031 - ret = clk_prepare_enable(cclk); 1032 - if (ret) { 1033 - dev_err(dev, "cannot enable core clock\n"); 1034 - return ret; 1035 - } 1036 - 1037 - ret = clk_prepare_enable(iclk); 1038 - if (ret) { 1039 - clk_disable_unprepare(cclk); 1040 - dev_err(dev, "cannot enable iface clock\n"); 1041 - return ret; 1042 - } 1043 - 1044 1031 master = spi_alloc_master(dev, sizeof(struct spi_qup)); 1045 1032 if (!master) { 1046 - clk_disable_unprepare(cclk); 1047 - clk_disable_unprepare(iclk); 1048 1033 dev_err(dev, "cannot allocate master\n"); 1049 1034 return -ENOMEM; 1050 1035 } ··· 1077 1092 spin_lock_init(&controller->lock); 1078 1093 init_completion(&controller->done); 1079 1094 1095 + ret = clk_prepare_enable(cclk); 1096 + if (ret) { 1097 + dev_err(dev, "cannot enable core clock\n"); 1098 + goto error_dma; 1099 + } 1100 + 1101 + ret = clk_prepare_enable(iclk); 1102 + if (ret) { 1103 + clk_disable_unprepare(cclk); 1104 + dev_err(dev, "cannot enable iface clock\n"); 1105 + goto error_dma; 1106 + } 1107 + 1080 1108 iomode = readl_relaxed(base + QUP_IO_M_MODES); 1081 1109 1082 1110 size = QUP_IO_M_OUTPUT_BLOCK_SIZE(iomode); ··· 1119 1121 ret = spi_qup_set_state(controller, QUP_STATE_RESET); 1120 1122 if (ret) { 1121 1123 dev_err(dev, "cannot set RESET state\n"); 1122 - goto error_dma; 1124 + goto error_clk; 1123 1125 } 1124 1126 1125 1127 writel_relaxed(0, base + QUP_OPERATIONAL); ··· 1143 1145 ret = devm_request_irq(dev, irq, spi_qup_qup_irq, 1144 1146 IRQF_TRIGGER_HIGH, pdev->name, controller); 1145 1147 if (ret) 1146 - goto error_dma; 1148 + goto error_clk; 1147 1149 1148 1150 pm_runtime_set_autosuspend_delay(dev, MSEC_PER_SEC); 1149 1151 pm_runtime_use_autosuspend(dev); ··· 1158 1160 1159 1161 disable_pm: 1160 1162 pm_runtime_disable(&pdev->dev); 1163 + error_clk: 1164 + clk_disable_unprepare(cclk); 1165 + clk_disable_unprepare(iclk); 1161 1166 error_dma: 1162 1167 spi_qup_release_dma(master); 1163 1168 error: 1164 - 
clk_disable_unprepare(cclk); 1165 - clk_disable_unprepare(iclk); 1166 1169 spi_master_put(master); 1167 1170 return ret; 1168 1171 }
-1
drivers/staging/octeon/TODO
··· 6 6 - make driver self-contained instead of being split between staging and 7 7 arch/mips/cavium-octeon. 8 8 9 - Contact: Aaro Koskinen <aaro.koskinen@iki.fi>
+2
drivers/target/target_core_transport.c
··· 504 504 505 505 free_sess: 506 506 transport_free_session(sess); 507 + return ERR_PTR(rc); 508 + 507 509 free_cnt: 508 510 target_free_cmd_counter(cmd_cnt); 509 511 return ERR_PTR(rc);
+6 -4
drivers/tee/amdtee/amdtee_if.h
··· 118 118 119 119 /** 120 120 * struct tee_cmd_load_ta - load Trusted Application (TA) binary into TEE 121 - * @low_addr: [in] bits [31:0] of the physical address of the TA binary 122 - * @hi_addr: [in] bits [63:32] of the physical address of the TA binary 123 - * @size: [in] size of TA binary in bytes 124 - * @ta_handle: [out] return handle of the loaded TA 121 + * @low_addr: [in] bits [31:0] of the physical address of the TA binary 122 + * @hi_addr: [in] bits [63:32] of the physical address of the TA binary 123 + * @size: [in] size of TA binary in bytes 124 + * @ta_handle: [out] return handle of the loaded TA 125 + * @return_origin: [out] origin of return code after TEE processing 125 126 */ 126 127 struct tee_cmd_load_ta { 127 128 u32 low_addr; 128 129 u32 hi_addr; 129 130 u32 size; 130 131 u32 ta_handle; 132 + u32 return_origin; 131 133 }; 132 134 133 135 /**
+16 -12
drivers/tee/amdtee/call.c
··· 423 423 if (ret) { 424 424 arg->ret_origin = TEEC_ORIGIN_COMMS; 425 425 arg->ret = TEEC_ERROR_COMMUNICATION; 426 - } else if (arg->ret == TEEC_SUCCESS) { 427 - ret = get_ta_refcount(load_cmd.ta_handle); 428 - if (!ret) { 429 - arg->ret_origin = TEEC_ORIGIN_COMMS; 430 - arg->ret = TEEC_ERROR_OUT_OF_MEMORY; 426 + } else { 427 + arg->ret_origin = load_cmd.return_origin; 431 428 432 - /* Unload the TA on error */ 433 - unload_cmd.ta_handle = load_cmd.ta_handle; 434 - psp_tee_process_cmd(TEE_CMD_ID_UNLOAD_TA, 435 - (void *)&unload_cmd, 436 - sizeof(unload_cmd), &ret); 437 - } else { 438 - set_session_id(load_cmd.ta_handle, 0, &arg->session); 429 + if (arg->ret == TEEC_SUCCESS) { 430 + ret = get_ta_refcount(load_cmd.ta_handle); 431 + if (!ret) { 432 + arg->ret_origin = TEEC_ORIGIN_COMMS; 433 + arg->ret = TEEC_ERROR_OUT_OF_MEMORY; 434 + 435 + /* Unload the TA on error */ 436 + unload_cmd.ta_handle = load_cmd.ta_handle; 437 + psp_tee_process_cmd(TEE_CMD_ID_UNLOAD_TA, 438 + (void *)&unload_cmd, 439 + sizeof(unload_cmd), &ret); 440 + } else { 441 + set_session_id(load_cmd.ta_handle, 0, &arg->session); 442 + } 439 443 } 440 444 } 441 445 mutex_unlock(&ta_refcount_mutex);
+4 -4
drivers/thunderbolt/dma_test.c
··· 192 192 } 193 193 194 194 ret = tb_xdomain_enable_paths(dt->xd, dt->tx_hopid, 195 - dt->tx_ring ? dt->tx_ring->hop : 0, 195 + dt->tx_ring ? dt->tx_ring->hop : -1, 196 196 dt->rx_hopid, 197 - dt->rx_ring ? dt->rx_ring->hop : 0); 197 + dt->rx_ring ? dt->rx_ring->hop : -1); 198 198 if (ret) { 199 199 dma_test_free_rings(dt); 200 200 return ret; ··· 218 218 tb_ring_stop(dt->tx_ring); 219 219 220 220 ret = tb_xdomain_disable_paths(dt->xd, dt->tx_hopid, 221 - dt->tx_ring ? dt->tx_ring->hop : 0, 221 + dt->tx_ring ? dt->tx_ring->hop : -1, 222 222 dt->rx_hopid, 223 - dt->rx_ring ? dt->rx_ring->hop : 0); 223 + dt->rx_ring ? dt->rx_ring->hop : -1); 224 224 if (ret) 225 225 dev_warn(&dt->svc->dev, "failed to disable DMA paths\n"); 226 226
+8 -3
drivers/thunderbolt/nhi.c
··· 56 56 57 57 static void nhi_mask_interrupt(struct tb_nhi *nhi, int mask, int ring) 58 58 { 59 - if (nhi->quirks & QUIRK_AUTO_CLEAR_INT) 60 - return; 61 - iowrite32(mask, nhi->iobase + REG_RING_INTERRUPT_MASK_CLEAR_BASE + ring); 59 + if (nhi->quirks & QUIRK_AUTO_CLEAR_INT) { 60 + u32 val; 61 + 62 + val = ioread32(nhi->iobase + REG_RING_INTERRUPT_BASE + ring); 63 + iowrite32(val & ~mask, nhi->iobase + REG_RING_INTERRUPT_BASE + ring); 64 + } else { 65 + iowrite32(mask, nhi->iobase + REG_RING_INTERRUPT_MASK_CLEAR_BASE + ring); 66 + } 62 67 } 63 68 64 69 static void nhi_clear_interrupt(struct tb_nhi *nhi, int ring)
+12 -5
drivers/thunderbolt/tb.c
··· 737 737 { 738 738 struct tb_cm *tcm = tb_priv(port->sw->tb); 739 739 struct tb_port *upstream_port; 740 + bool discovery = false; 740 741 struct tb_switch *sw; 741 742 int ret; 742 743 ··· 805 804 * tunnels and know which switches were authorized already by 806 805 * the boot firmware. 807 806 */ 808 - if (!tcm->hotplug_active) 807 + if (!tcm->hotplug_active) { 809 808 dev_set_uevent_suppress(&sw->dev, true); 809 + discovery = true; 810 + } 810 811 811 812 /* 812 813 * At the moment Thunderbolt 2 and beyond (devices with LC) we ··· 838 835 * CL0s and CL1 are enabled and supported together. 839 836 * Silently ignore CLx enabling in case CLx is not supported. 840 837 */ 841 - ret = tb_switch_enable_clx(sw, TB_CL1); 842 - if (ret && ret != -EOPNOTSUPP) 843 - tb_sw_warn(sw, "failed to enable %s on upstream port\n", 844 - tb_switch_clx_name(TB_CL1)); 838 + if (discovery) { 839 + tb_sw_dbg(sw, "discovery, not touching CL states\n"); 840 + } else { 841 + ret = tb_switch_enable_clx(sw, TB_CL1); 842 + if (ret && ret != -EOPNOTSUPP) 843 + tb_sw_warn(sw, "failed to enable %s on upstream port\n", 844 + tb_switch_clx_name(TB_CL1)); 845 + } 845 846 846 847 if (tb_switch_is_clx_enabled(sw, TB_CL1)) 847 848 /*
+1 -1
drivers/thunderbolt/tunnel.c
··· 526 526 * Perform connection manager handshake between IN and OUT ports 527 527 * before capabilities exchange can take place. 528 528 */ 529 - ret = tb_dp_cm_handshake(in, out, 1500); 529 + ret = tb_dp_cm_handshake(in, out, 3000); 530 530 if (ret) 531 531 return ret; 532 532
+1 -1
drivers/tty/serial/fsl_lpuart.c
··· 310 310 static const struct lpuart_soc_data ls1028a_data = { 311 311 .devtype = LS1028A_LPUART, 312 312 .iotype = UPIO_MEM32, 313 - .rx_watermark = 1, 313 + .rx_watermark = 0, 314 314 }; 315 315 316 316 static struct lpuart_soc_data imx7ulp_data = {
+1
drivers/tty/serial/lantiq.c
··· 250 250 struct ltq_uart_port *ltq_port = to_ltq_uart_port(port); 251 251 252 252 spin_lock_irqsave(&ltq_port->lock, flags); 253 + __raw_writel(ASC_IRNCR_EIR, port->membase + LTQ_ASC_IRNCR); 253 254 /* clear any pending interrupts */ 254 255 asc_update_bits(0, ASCWHBSTATE_CLRPE | ASCWHBSTATE_CLRFE | 255 256 ASCWHBSTATE_CLRROE, port->membase + LTQ_ASC_WHBSTATE);
+5
drivers/usb/dwc3/core.c
··· 1929 1929 pm_runtime_disable(&pdev->dev); 1930 1930 pm_runtime_dont_use_autosuspend(&pdev->dev); 1931 1931 pm_runtime_put_noidle(&pdev->dev); 1932 + /* 1933 + * HACK: Clear the driver data, which is currently accessed by parent 1934 + * glue drivers, before allowing the parent to suspend. 1935 + */ 1936 + platform_set_drvdata(pdev, NULL); 1932 1937 pm_runtime_set_suspended(&pdev->dev); 1933 1938 1934 1939 dwc3_free_event_buffers(dwc);
+10 -1
drivers/usb/dwc3/dwc3-qcom.c
··· 308 308 /* Only usable in contexts where the role can not change. */ 309 309 static bool dwc3_qcom_is_host(struct dwc3_qcom *qcom) 310 310 { 311 - struct dwc3 *dwc = platform_get_drvdata(qcom->dwc3); 311 + struct dwc3 *dwc; 312 + 313 + /* 314 + * FIXME: Fix this layering violation. 315 + */ 316 + dwc = platform_get_drvdata(qcom->dwc3); 317 + 318 + /* Core driver may not have probed yet. */ 319 + if (!dwc) 320 + return false; 312 321 313 322 return dwc->xhci; 314 323 }
+1
drivers/usb/dwc3/gadget.c
··· 198 198 list_del(&req->list); 199 199 req->remaining = 0; 200 200 req->needs_extra_trb = false; 201 + req->num_trbs = 0; 201 202 202 203 if (req->request.status == -EINPROGRESS) 203 204 req->request.status = status;
+131 -49
drivers/usb/gadget/udc/core.c
··· 37 37 * @vbus: for udcs who care about vbus status, this value is real vbus status; 38 38 * for udcs who do not care about vbus status, this value is always true 39 39 * @started: the UDC's started state. True if the UDC had started. 40 + * @allow_connect: Indicates whether UDC is allowed to be pulled up. 41 + * Set/cleared by gadget_(un)bind_driver() after gadget driver is bound or 42 + * unbound. 43 + * @connect_lock: protects udc->started, gadget->connect, 44 + * gadget->allow_connect and gadget->deactivate. The routines 45 + * usb_gadget_connect_locked(), usb_gadget_disconnect_locked(), 46 + * usb_udc_connect_control_locked(), usb_gadget_udc_start_locked() and 47 + * usb_gadget_udc_stop_locked() are called with this lock held. 40 48 * 41 49 * This represents the internal data structure which is used by the UDC-class 42 50 * to hold information about udc driver and gadget together. ··· 56 48 struct list_head list; 57 49 bool vbus; 58 50 bool started; 51 + bool allow_connect; 52 + struct work_struct vbus_work; 53 + struct mutex connect_lock; 59 54 }; 60 55 61 56 static struct class *udc_class; ··· 698 687 } 699 688 EXPORT_SYMBOL_GPL(usb_gadget_vbus_disconnect); 700 689 701 - /** 702 - * usb_gadget_connect - software-controlled connect to USB host 703 - * @gadget:the peripheral being connected 704 - * 705 - * Enables the D+ (or potentially D-) pullup. The host will start 706 - * enumerating this gadget when the pullup is active and a VBUS session 707 - * is active (the link is powered). 708 - * 709 - * Returns zero on success, else negative errno. 
710 - */ 711 - int usb_gadget_connect(struct usb_gadget *gadget) 690 + static int usb_gadget_connect_locked(struct usb_gadget *gadget) 691 + __must_hold(&gadget->udc->connect_lock) 712 692 { 713 693 int ret = 0; 714 694 ··· 708 706 goto out; 709 707 } 710 708 711 - if (gadget->deactivated) { 709 + if (gadget->deactivated || !gadget->udc->allow_connect || !gadget->udc->started) { 712 710 /* 713 - * If gadget is deactivated we only save new state. 714 - * Gadget will be connected automatically after activation. 711 + * If the gadget isn't usable (because it is deactivated, 712 + * unbound, or not yet started), we only save the new state. 713 + * The gadget will be connected automatically when it is 714 + * activated/bound/started. 715 715 */ 716 716 gadget->connected = true; 717 717 goto out; ··· 728 724 729 725 return ret; 730 726 } 731 - EXPORT_SYMBOL_GPL(usb_gadget_connect); 732 727 733 728 /** 734 - * usb_gadget_disconnect - software-controlled disconnect from USB host 735 - * @gadget:the peripheral being disconnected 729 + * usb_gadget_connect - software-controlled connect to USB host 730 + * @gadget:the peripheral being connected 736 731 * 737 - * Disables the D+ (or potentially D-) pullup, which the host may see 738 - * as a disconnect (when a VBUS session is active). Not all systems 739 - * support software pullup controls. 740 - * 741 - * Following a successful disconnect, invoke the ->disconnect() callback 742 - * for the current gadget driver so that UDC drivers don't need to. 732 + * Enables the D+ (or potentially D-) pullup. The host will start 733 + * enumerating this gadget when the pullup is active and a VBUS session 734 + * is active (the link is powered). 743 735 * 744 736 * Returns zero on success, else negative errno. 
745 737 */ 746 - int usb_gadget_disconnect(struct usb_gadget *gadget) 738 + int usb_gadget_connect(struct usb_gadget *gadget) 739 + { 740 + int ret; 741 + 742 + mutex_lock(&gadget->udc->connect_lock); 743 + ret = usb_gadget_connect_locked(gadget); 744 + mutex_unlock(&gadget->udc->connect_lock); 745 + 746 + return ret; 747 + } 748 + EXPORT_SYMBOL_GPL(usb_gadget_connect); 749 + 750 + static int usb_gadget_disconnect_locked(struct usb_gadget *gadget) 751 + __must_hold(&gadget->udc->connect_lock) 747 752 { 748 753 int ret = 0; 749 754 ··· 764 751 if (!gadget->connected) 765 752 goto out; 766 753 767 - if (gadget->deactivated) { 754 + if (gadget->deactivated || !gadget->udc->started) { 768 755 /* 769 756 * If gadget is deactivated we only save new state. 770 757 * Gadget will stay disconnected after activation. ··· 787 774 788 775 return ret; 789 776 } 777 + 778 + /** 779 + * usb_gadget_disconnect - software-controlled disconnect from USB host 780 + * @gadget:the peripheral being disconnected 781 + * 782 + * Disables the D+ (or potentially D-) pullup, which the host may see 783 + * as a disconnect (when a VBUS session is active). Not all systems 784 + * support software pullup controls. 785 + * 786 + * Following a successful disconnect, invoke the ->disconnect() callback 787 + * for the current gadget driver so that UDC drivers don't need to. 788 + * 789 + * Returns zero on success, else negative errno. 
790 + */ 791 + int usb_gadget_disconnect(struct usb_gadget *gadget) 792 + { 793 + int ret; 794 + 795 + mutex_lock(&gadget->udc->connect_lock); 796 + ret = usb_gadget_disconnect_locked(gadget); 797 + mutex_unlock(&gadget->udc->connect_lock); 798 + 799 + return ret; 800 + } 790 801 EXPORT_SYMBOL_GPL(usb_gadget_disconnect); 791 802 792 803 /** ··· 828 791 { 829 792 int ret = 0; 830 793 794 + mutex_lock(&gadget->udc->connect_lock); 831 795 if (gadget->deactivated) 832 - goto out; 796 + goto unlock; 833 797 834 798 if (gadget->connected) { 835 - ret = usb_gadget_disconnect(gadget); 799 + ret = usb_gadget_disconnect_locked(gadget); 836 800 if (ret) 837 - goto out; 801 + goto unlock; 838 802 839 803 /* 840 804 * If gadget was being connected before deactivation, we want ··· 845 807 } 846 808 gadget->deactivated = true; 847 809 848 - out: 810 + unlock: 811 + mutex_unlock(&gadget->udc->connect_lock); 849 812 trace_usb_gadget_deactivate(gadget, ret); 850 813 851 814 return ret; ··· 866 827 { 867 828 int ret = 0; 868 829 830 + mutex_lock(&gadget->udc->connect_lock); 869 831 if (!gadget->deactivated) 870 - goto out; 832 + goto unlock; 871 833 872 834 gadget->deactivated = false; 873 835 ··· 877 837 * while it was being deactivated, we call usb_gadget_connect(). 878 838 */ 879 839 if (gadget->connected) 880 - ret = usb_gadget_connect(gadget); 840 + ret = usb_gadget_connect_locked(gadget); 841 + mutex_unlock(&gadget->udc->connect_lock); 881 842 882 - out: 843 + unlock: 844 + mutex_unlock(&gadget->udc->connect_lock); 883 845 trace_usb_gadget_activate(gadget, ret); 884 846 885 847 return ret; ··· 1120 1078 1121 1079 /* ------------------------------------------------------------------------- */ 1122 1080 1123 - static void usb_udc_connect_control(struct usb_udc *udc) 1081 + /* Acquire connect_lock before calling this function. 
*/ 1082 + static void usb_udc_connect_control_locked(struct usb_udc *udc) __must_hold(&udc->connect_lock) 1124 1083 { 1125 1084 if (udc->vbus) 1126 - usb_gadget_connect(udc->gadget); 1085 + usb_gadget_connect_locked(udc->gadget); 1127 1086 else 1128 - usb_gadget_disconnect(udc->gadget); 1087 + usb_gadget_disconnect_locked(udc->gadget); 1088 + } 1089 + 1090 + static void vbus_event_work(struct work_struct *work) 1091 + { 1092 + struct usb_udc *udc = container_of(work, struct usb_udc, vbus_work); 1093 + 1094 + mutex_lock(&udc->connect_lock); 1095 + usb_udc_connect_control_locked(udc); 1096 + mutex_unlock(&udc->connect_lock); 1129 1097 } 1130 1098 1131 1099 /** ··· 1146 1094 * 1147 1095 * The udc driver calls it when it wants to connect or disconnect gadget 1148 1096 * according to vbus status. 1097 + * 1098 + * This function can be invoked from interrupt context by irq handlers of 1099 + * the gadget drivers, however, usb_udc_connect_control() has to run in 1100 + * non-atomic context due to the following: 1101 + * a. Some of the gadget driver implementations expect the ->pullup 1102 + * callback to be invoked in non-atomic context. 1103 + * b. usb_gadget_disconnect() acquires udc_lock which is a mutex. 1104 + * Hence offload invocation of usb_udc_connect_control() to workqueue. 
1149 1105 */ 1150 1106 void usb_udc_vbus_handler(struct usb_gadget *gadget, bool status) 1151 1107 { ··· 1161 1101 1162 1102 if (udc) { 1163 1103 udc->vbus = status; 1164 - usb_udc_connect_control(udc); 1104 + schedule_work(&udc->vbus_work); 1165 1105 } 1166 1106 } 1167 1107 EXPORT_SYMBOL_GPL(usb_udc_vbus_handler); ··· 1184 1124 EXPORT_SYMBOL_GPL(usb_gadget_udc_reset); 1185 1125 1186 1126 /** 1187 - * usb_gadget_udc_start - tells usb device controller to start up 1127 + * usb_gadget_udc_start_locked - tells usb device controller to start up 1188 1128 * @udc: The UDC to be started 1189 1129 * 1190 1130 * This call is issued by the UDC Class driver when it's about ··· 1195 1135 * necessary to have it powered on. 1196 1136 * 1197 1137 * Returns zero on success, else negative errno. 1138 + * 1139 + * Caller should acquire connect_lock before invoking this function. 1198 1140 */ 1199 - static inline int usb_gadget_udc_start(struct usb_udc *udc) 1141 + static inline int usb_gadget_udc_start_locked(struct usb_udc *udc) 1142 + __must_hold(&udc->connect_lock) 1200 1143 { 1201 1144 int ret; 1202 1145 ··· 1216 1153 } 1217 1154 1218 1155 /** 1219 - * usb_gadget_udc_stop - tells usb device controller we don't need it anymore 1156 + * usb_gadget_udc_stop_locked - tells usb device controller we don't need it anymore 1220 1157 * @udc: The UDC to be stopped 1221 1158 * 1222 1159 * This call is issued by the UDC Class driver after calling ··· 1225 1162 * The details are implementation specific, but it can go as 1226 1163 * far as powering off UDC completely and disable its data 1227 1164 * line pullups. 1165 + * 1166 + * Caller should acquire connect lock before invoking this function. 
1228 1167 */ 1229 - static inline void usb_gadget_udc_stop(struct usb_udc *udc) 1168 + static inline void usb_gadget_udc_stop_locked(struct usb_udc *udc) 1169 + __must_hold(&udc->connect_lock) 1230 1170 { 1231 1171 if (!udc->started) { 1232 1172 dev_err(&udc->dev, "UDC had already stopped\n"); ··· 1388 1322 1389 1323 udc->gadget = gadget; 1390 1324 gadget->udc = udc; 1325 + mutex_init(&udc->connect_lock); 1391 1326 1392 1327 udc->started = false; 1393 1328 1394 1329 mutex_lock(&udc_lock); 1395 1330 list_add_tail(&udc->list, &udc_list); 1396 1331 mutex_unlock(&udc_lock); 1332 + INIT_WORK(&udc->vbus_work, vbus_event_work); 1397 1333 1398 1334 ret = device_add(&udc->dev); 1399 1335 if (ret) ··· 1527 1459 flush_work(&gadget->work); 1528 1460 device_del(&gadget->dev); 1529 1461 ida_free(&gadget_id_numbers, gadget->id_number); 1462 + cancel_work_sync(&udc->vbus_work); 1530 1463 device_unregister(&udc->dev); 1531 1464 } 1532 1465 EXPORT_SYMBOL_GPL(usb_del_gadget); ··· 1592 1523 if (ret) 1593 1524 goto err_bind; 1594 1525 1595 - ret = usb_gadget_udc_start(udc); 1596 - if (ret) 1526 + mutex_lock(&udc->connect_lock); 1527 + ret = usb_gadget_udc_start_locked(udc); 1528 + if (ret) { 1529 + mutex_unlock(&udc->connect_lock); 1597 1530 goto err_start; 1531 + } 1598 1532 usb_gadget_enable_async_callbacks(udc); 1599 - usb_udc_connect_control(udc); 1533 + udc->allow_connect = true; 1534 + usb_udc_connect_control_locked(udc); 1535 + mutex_unlock(&udc->connect_lock); 1600 1536 1601 1537 kobject_uevent(&udc->dev.kobj, KOBJ_CHANGE); 1602 1538 return 0; ··· 1632 1558 1633 1559 kobject_uevent(&udc->dev.kobj, KOBJ_CHANGE); 1634 1560 1635 - usb_gadget_disconnect(gadget); 1561 + udc->allow_connect = false; 1562 + cancel_work_sync(&udc->vbus_work); 1563 + mutex_lock(&udc->connect_lock); 1564 + usb_gadget_disconnect_locked(gadget); 1636 1565 usb_gadget_disable_async_callbacks(udc); 1637 1566 if (gadget->irq) 1638 1567 synchronize_irq(gadget->irq); 1639 1568 udc->driver->unbind(gadget); 1640 - 
usb_gadget_udc_stop(udc); 1569 + usb_gadget_udc_stop_locked(udc); 1570 + mutex_unlock(&udc->connect_lock); 1641 1571 1642 1572 mutex_lock(&udc_lock); 1643 1573 driver->is_bound = false; ··· 1727 1649 } 1728 1650 1729 1651 if (sysfs_streq(buf, "connect")) { 1730 - usb_gadget_udc_start(udc); 1731 - usb_gadget_connect(udc->gadget); 1652 + mutex_lock(&udc->connect_lock); 1653 + usb_gadget_udc_start_locked(udc); 1654 + usb_gadget_connect_locked(udc->gadget); 1655 + mutex_unlock(&udc->connect_lock); 1732 1656 } else if (sysfs_streq(buf, "disconnect")) { 1733 - usb_gadget_disconnect(udc->gadget); 1734 - usb_gadget_udc_stop(udc); 1657 + mutex_lock(&udc->connect_lock); 1658 + usb_gadget_disconnect_locked(udc->gadget); 1659 + usb_gadget_udc_stop_locked(udc); 1660 + mutex_unlock(&udc->connect_lock); 1735 1661 } else { 1736 1662 dev_err(dev, "unsupported command '%s'\n", buf); 1737 1663 ret = -EINVAL;
+2 -2
drivers/usb/gadget/udc/renesas_usb3.c
··· 2877 2877 struct rzv2m_usb3drd *ddata = dev_get_drvdata(pdev->dev.parent); 2878 2878 2879 2879 usb3->drd_reg = ddata->reg; 2880 - ret = devm_request_irq(ddata->dev, ddata->drd_irq, 2880 + ret = devm_request_irq(&pdev->dev, ddata->drd_irq, 2881 2881 renesas_usb3_otg_irq, 0, 2882 - dev_name(ddata->dev), usb3); 2882 + dev_name(&pdev->dev), usb3); 2883 2883 if (ret < 0) 2884 2884 return ret; 2885 2885 }
+16
drivers/usb/serial/option.c
··· 248 248 #define QUECTEL_VENDOR_ID 0x2c7c 249 249 /* These Quectel products use Quectel's vendor ID */ 250 250 #define QUECTEL_PRODUCT_EC21 0x0121 251 + #define QUECTEL_PRODUCT_EM061K_LTA 0x0123 252 + #define QUECTEL_PRODUCT_EM061K_LMS 0x0124 251 253 #define QUECTEL_PRODUCT_EC25 0x0125 252 254 #define QUECTEL_PRODUCT_EG91 0x0191 253 255 #define QUECTEL_PRODUCT_EG95 0x0195 ··· 268 266 #define QUECTEL_PRODUCT_RM520N 0x0801 269 267 #define QUECTEL_PRODUCT_EC200U 0x0901 270 268 #define QUECTEL_PRODUCT_EC200S_CN 0x6002 269 + #define QUECTEL_PRODUCT_EM061K_LWW 0x6008 270 + #define QUECTEL_PRODUCT_EM061K_LCN 0x6009 271 271 #define QUECTEL_PRODUCT_EC200T 0x6026 272 272 #define QUECTEL_PRODUCT_RM500K 0x7001 273 273 ··· 1193 1189 { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K, 0xff, 0x00, 0x40) }, 1194 1190 { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K, 0xff, 0xff, 0x30) }, 1195 1191 { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K, 0xff, 0xff, 0x40) }, 1192 + { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM061K_LCN, 0xff, 0xff, 0x30) }, 1193 + { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM061K_LCN, 0xff, 0x00, 0x40) }, 1194 + { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM061K_LCN, 0xff, 0xff, 0x40) }, 1195 + { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM061K_LMS, 0xff, 0xff, 0x30) }, 1196 + { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM061K_LMS, 0xff, 0x00, 0x40) }, 1197 + { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM061K_LMS, 0xff, 0xff, 0x40) }, 1198 + { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM061K_LTA, 0xff, 0xff, 0x30) }, 1199 + { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM061K_LTA, 0xff, 0x00, 0x40) }, 1200 + { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM061K_LTA, 0xff, 0xff, 
0x40) }, 1201 + { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM061K_LWW, 0xff, 0xff, 0x30) }, 1202 + { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM061K_LWW, 0xff, 0x00, 0x40) }, 1203 + { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM061K_LWW, 0xff, 0xff, 0x40) }, 1196 1204 { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM12, 0xff, 0xff, 0xff), 1197 1205 .driver_info = RSVD(1) | RSVD(2) | RSVD(3) | RSVD(4) | NUMEP2 }, 1198 1206 { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM12, 0xff, 0, 0) },
+1 -1
drivers/usb/typec/pd.c
··· 95 95 static ssize_t 96 96 fast_role_swap_current_show(struct device *dev, struct device_attribute *attr, char *buf) 97 97 { 98 - return sysfs_emit(buf, "%u\n", to_pdo(dev)->pdo >> PDO_FIXED_FRS_CURR_SHIFT) & 3; 98 + return sysfs_emit(buf, "%u\n", (to_pdo(dev)->pdo >> PDO_FIXED_FRS_CURR_SHIFT) & 3); 99 99 } 100 100 static DEVICE_ATTR_RO(fast_role_swap_current); 101 101
+7 -4
drivers/usb/typec/ucsi/ucsi.c
··· 132 132 if (ret) 133 133 return ret; 134 134 135 - if (cci & UCSI_CCI_BUSY) { 136 - ucsi->ops->async_write(ucsi, UCSI_CANCEL, NULL, 0); 137 - return -EBUSY; 138 - } 135 + if (cmd != UCSI_CANCEL && cci & UCSI_CCI_BUSY) 136 + return ucsi_exec_command(ucsi, UCSI_CANCEL); 139 137 140 138 if (!(cci & UCSI_CCI_COMMAND_COMPLETE)) 141 139 return -EIO; ··· 145 147 if (cmd == UCSI_GET_ERROR_STATUS) 146 148 return -EIO; 147 149 return ucsi_read_error(ucsi); 150 + } 151 + 152 + if (cmd == UCSI_CANCEL && cci & UCSI_CCI_CANCEL_COMPLETE) { 153 + ret = ucsi_acknowledge_command(ucsi); 154 + return ret ? ret : -EBUSY; 148 155 } 149 156 150 157 return UCSI_CCI_LENGTH(cci);
+1 -1
drivers/vdpa/mlx5/net/mlx5_vnet.c
··· 3349 3349 mlx5_vdpa_remove_debugfs(ndev->debugfs); 3350 3350 ndev->debugfs = NULL; 3351 3351 unregister_link_notifier(ndev); 3352 + _vdpa_unregister_device(dev); 3352 3353 wq = mvdev->wq; 3353 3354 mvdev->wq = NULL; 3354 3355 destroy_workqueue(wq); 3355 - _vdpa_unregister_device(dev); 3356 3356 mgtdev->ndev = NULL; 3357 3357 } 3358 3358
+3
drivers/vdpa/vdpa_user/vduse_dev.c
··· 1685 1685 if (config->vq_num > 0xffff) 1686 1686 return false; 1687 1687 1688 + if (!config->name[0]) 1689 + return false; 1690 + 1688 1691 if (!device_is_allowed(config->device_id)) 1689 1692 return false; 1690 1693
+8 -3
drivers/vhost/net.c
··· 935 935 936 936 err = sock->ops->sendmsg(sock, &msg, len); 937 937 if (unlikely(err < 0)) { 938 + bool retry = err == -EAGAIN || err == -ENOMEM || err == -ENOBUFS; 939 + 938 940 if (zcopy_used) { 939 941 if (vq->heads[ubuf->desc].len == VHOST_DMA_IN_PROGRESS) 940 942 vhost_net_ubuf_put(ubufs); 941 - nvq->upend_idx = ((unsigned)nvq->upend_idx - 1) 942 - % UIO_MAXIOV; 943 + if (retry) 944 + nvq->upend_idx = ((unsigned)nvq->upend_idx - 1) 945 + % UIO_MAXIOV; 946 + else 947 + vq->heads[ubuf->desc].len = VHOST_DMA_DONE_LEN; 943 948 } 944 - if (err == -EAGAIN || err == -ENOMEM || err == -ENOBUFS) { 949 + if (retry) { 945 950 vhost_discard_vq_desc(vq, 1); 946 951 vhost_net_enable_vq(net, vq); 947 952 break;
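The vhost-net fix separates transient `sendmsg()` failures (retried later) from fatal ones (the zerocopy slot is marked done and the buffer dropped). The classification itself is just this predicate, shown in isolation:

```c
#include <errno.h>
#include <stdbool.h>

/* Mirrors the "retry" condition in the hunk above: only these errno
 * values mean the socket may accept the packet later; anything else
 * completes the zerocopy slot with VHOST_DMA_DONE_LEN instead of
 * rewinding upend_idx. */
static bool tx_error_is_retryable(int err)
{
	return err == -EAGAIN || err == -ENOMEM || err == -ENOBUFS;
}
```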
+30 -4
drivers/vhost/vdpa.c
··· 407 407 { 408 408 struct vdpa_device *vdpa = v->vdpa; 409 409 const struct vdpa_config_ops *ops = vdpa->config; 410 + struct vhost_dev *d = &v->vdev; 411 + u64 actual_features; 410 412 u64 features; 413 + int i; 411 414 412 415 /* 413 416 * It's not allowed to change the features after they have ··· 424 421 425 422 if (vdpa_set_features(vdpa, features)) 426 423 return -EINVAL; 424 + 425 + /* let the vqs know what has been configured */ 426 + actual_features = ops->get_driver_features(vdpa); 427 + for (i = 0; i < d->nvqs; ++i) { 428 + struct vhost_virtqueue *vq = d->vqs[i]; 429 + 430 + mutex_lock(&vq->mutex); 431 + vq->acked_features = actual_features; 432 + mutex_unlock(&vq->mutex); 433 + } 427 434 428 435 return 0; 429 436 } ··· 607 594 if (r) 608 595 return r; 609 596 610 - vq->last_avail_idx = vq_state.split.avail_index; 597 + if (vhost_has_feature(vq, VIRTIO_F_RING_PACKED)) { 598 + vq->last_avail_idx = vq_state.packed.last_avail_idx | 599 + (vq_state.packed.last_avail_counter << 15); 600 + vq->last_used_idx = vq_state.packed.last_used_idx | 601 + (vq_state.packed.last_used_counter << 15); 602 + } else { 603 + vq->last_avail_idx = vq_state.split.avail_index; 604 + } 611 605 break; 612 606 } 613 607 ··· 632 612 break; 633 613 634 614 case VHOST_SET_VRING_BASE: 635 - vq_state.split.avail_index = vq->last_avail_idx; 636 - if (ops->set_vq_state(vdpa, idx, &vq_state)) 637 - r = -EINVAL; 615 + if (vhost_has_feature(vq, VIRTIO_F_RING_PACKED)) { 616 + vq_state.packed.last_avail_idx = vq->last_avail_idx & 0x7fff; 617 + vq_state.packed.last_avail_counter = !!(vq->last_avail_idx & 0x8000); 618 + vq_state.packed.last_used_idx = vq->last_used_idx & 0x7fff; 619 + vq_state.packed.last_used_counter = !!(vq->last_used_idx & 0x8000); 620 + } else { 621 + vq_state.split.avail_index = vq->last_avail_idx; 622 + } 623 + r = ops->set_vq_state(vdpa, idx, &vq_state); 638 624 break; 639 625 640 626 case VHOST_SET_VRING_CALL:
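For packed virtqueues, vhost-vdpa above folds the 1-bit wrap counter into bit 15 of the 16-bit index value (the index itself being limited to 15 bits). The encode/decode arithmetic, pulled out on its own:

```c
#include <stdbool.h>
#include <stdint.h>

/* Pack a 15-bit ring index and its wrap counter into one 16-bit value,
 * as the VHOST_SET/GET_VRING_BASE paths now do for VIRTIO_F_RING_PACKED. */
static uint16_t packed_idx_encode(uint16_t idx, bool wrap)
{
	return (idx & 0x7fff) | ((uint16_t)wrap << 15);
}

static uint16_t packed_idx_index(uint16_t v)
{
	return v & 0x7fff;     /* low 15 bits: the ring position */
}

static bool packed_idx_wrap(uint16_t v)
{
	return !!(v & 0x8000); /* bit 15: the wrap counter */
}
```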
+34 -41
drivers/vhost/vhost.c
··· 235 235 { 236 236 struct vhost_flush_struct flush; 237 237 238 - if (dev->worker) { 238 + if (dev->worker.vtsk) { 239 239 init_completion(&flush.wait_event); 240 240 vhost_work_init(&flush.work, vhost_flush_work); 241 241 ··· 247 247 248 248 void vhost_work_queue(struct vhost_dev *dev, struct vhost_work *work) 249 249 { 250 - if (!dev->worker) 250 + if (!dev->worker.vtsk) 251 251 return; 252 252 253 253 if (!test_and_set_bit(VHOST_WORK_QUEUED, &work->flags)) { ··· 255 255 * sure it was not in the list. 256 256 * test_and_set_bit() implies a memory barrier. 257 257 */ 258 - llist_add(&work->node, &dev->worker->work_list); 259 - vhost_task_wake(dev->worker->vtsk); 258 + llist_add(&work->node, &dev->worker.work_list); 259 + vhost_task_wake(dev->worker.vtsk); 260 260 } 261 261 } 262 262 EXPORT_SYMBOL_GPL(vhost_work_queue); ··· 264 264 /* A lockless hint for busy polling code to exit the loop */ 265 265 bool vhost_has_work(struct vhost_dev *dev) 266 266 { 267 - return dev->worker && !llist_empty(&dev->worker->work_list); 267 + return !llist_empty(&dev->worker.work_list); 268 268 } 269 269 EXPORT_SYMBOL_GPL(vhost_has_work); 270 270 ··· 341 341 342 342 node = llist_del_all(&worker->work_list); 343 343 if (node) { 344 + __set_current_state(TASK_RUNNING); 345 + 344 346 node = llist_reverse_order(node); 345 347 /* make sure flag is seen after deletion */ 346 348 smp_wmb(); ··· 458 456 dev->umem = NULL; 459 457 dev->iotlb = NULL; 460 458 dev->mm = NULL; 461 - dev->worker = NULL; 459 + memset(&dev->worker, 0, sizeof(dev->worker)); 460 + init_llist_head(&dev->worker.work_list); 462 461 dev->iov_limit = iov_limit; 463 462 dev->weight = weight; 464 463 dev->byte_weight = byte_weight; ··· 533 530 534 531 static void vhost_worker_free(struct vhost_dev *dev) 535 532 { 536 - struct vhost_worker *worker = dev->worker; 537 - 538 - if (!worker) 533 + if (!dev->worker.vtsk) 539 534 return; 540 535 541 - dev->worker = NULL; 542 - WARN_ON(!llist_empty(&worker->work_list)); 543 - 
vhost_task_stop(worker->vtsk); 544 - kfree(worker); 536 + WARN_ON(!llist_empty(&dev->worker.work_list)); 537 + vhost_task_stop(dev->worker.vtsk); 538 + dev->worker.kcov_handle = 0; 539 + dev->worker.vtsk = NULL; 545 540 } 546 541 547 542 static int vhost_worker_create(struct vhost_dev *dev) 548 543 { 549 - struct vhost_worker *worker; 550 544 struct vhost_task *vtsk; 551 545 char name[TASK_COMM_LEN]; 552 - int ret; 553 546 554 - worker = kzalloc(sizeof(*worker), GFP_KERNEL_ACCOUNT); 555 - if (!worker) 556 - return -ENOMEM; 557 - 558 - dev->worker = worker; 559 - worker->kcov_handle = kcov_common_handle(); 560 - init_llist_head(&worker->work_list); 561 547 snprintf(name, sizeof(name), "vhost-%d", current->pid); 562 548 563 - vtsk = vhost_task_create(vhost_worker, worker, name); 564 - if (!vtsk) { 565 - ret = -ENOMEM; 566 - goto free_worker; 567 - } 549 + vtsk = vhost_task_create(vhost_worker, &dev->worker, name); 550 + if (!vtsk) 551 + return -ENOMEM; 568 552 569 - worker->vtsk = vtsk; 553 + dev->worker.kcov_handle = kcov_common_handle(); 554 + dev->worker.vtsk = vtsk; 570 555 vhost_task_start(vtsk); 571 556 return 0; 572 - 573 - free_worker: 574 - kfree(worker); 575 - dev->worker = NULL; 576 - return ret; 577 557 } 578 558 579 559 /* Caller should have device mutex */ ··· 1600 1614 r = -EFAULT; 1601 1615 break; 1602 1616 } 1603 - if (s.num > 0xffff) { 1604 - r = -EINVAL; 1605 - break; 1617 + if (vhost_has_feature(vq, VIRTIO_F_RING_PACKED)) { 1618 + vq->last_avail_idx = s.num & 0xffff; 1619 + vq->last_used_idx = (s.num >> 16) & 0xffff; 1620 + } else { 1621 + if (s.num > 0xffff) { 1622 + r = -EINVAL; 1623 + break; 1624 + } 1625 + vq->last_avail_idx = s.num; 1606 1626 } 1607 - vq->last_avail_idx = s.num; 1608 1627 /* Forget the cached index value. 
*/ 1609 1628 vq->avail_idx = vq->last_avail_idx; 1610 1629 break; 1611 1630 case VHOST_GET_VRING_BASE: 1612 1631 s.index = idx; 1613 - s.num = vq->last_avail_idx; 1632 + if (vhost_has_feature(vq, VIRTIO_F_RING_PACKED)) 1633 + s.num = (u32)vq->last_avail_idx | ((u32)vq->last_used_idx << 16); 1634 + else 1635 + s.num = vq->last_avail_idx; 1614 1636 if (copy_to_user(argp, &s, sizeof s)) 1615 1637 r = -EFAULT; 1616 1638 break; ··· 2557 2563 /* Create a new message. */ 2558 2564 struct vhost_msg_node *vhost_new_msg(struct vhost_virtqueue *vq, int type) 2559 2565 { 2560 - struct vhost_msg_node *node = kmalloc(sizeof *node, GFP_KERNEL); 2566 + /* Make sure all padding within the structure is initialized. */ 2567 + struct vhost_msg_node *node = kzalloc(sizeof(*node), GFP_KERNEL); 2561 2568 if (!node) 2562 2569 return NULL; 2563 2570 2564 - /* Make sure all padding within the structure is initialized. */ 2565 - memset(&node->msg, 0, sizeof node->msg); 2566 2571 node->vq = vq; 2567 2572 node->msg.type = type; 2568 2573 return node;
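For packed rings, `VHOST_GET_VRING_BASE` above returns both 16-bit positions in a single `s.num`: `last_avail_idx` in the low half and `last_used_idx` in the high half, with `VHOST_SET_VRING_BASE` splitting the same layout back apart. The packing in isolation:

```c
#include <stdint.h>

/* Combine the two 16-bit ring positions into the single s.num value
 * VHOST_GET_VRING_BASE returns for packed virtqueues, and split it
 * back the way VHOST_SET_VRING_BASE does. */
static uint32_t vring_base_pack(uint16_t last_avail, uint16_t last_used)
{
	return (uint32_t)last_avail | ((uint32_t)last_used << 16);
}

static uint16_t vring_base_avail(uint32_t num)
{
	return num & 0xffff;
}

static uint16_t vring_base_used(uint32_t num)
{
	return (num >> 16) & 0xffff;
}
```

This is why the old `s.num > 0xffff` rejection only survives in the split-ring branch: a packed ring legitimately uses all 32 bits.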
+7 -3
drivers/vhost/vhost.h
··· 92 92 /* The routine to call when the Guest pings us, or timeout. */ 93 93 vhost_work_fn_t handle_kick; 94 94 95 - /* Last available index we saw. */ 95 + /* Last available index we saw. 96 + * Values are limited to 0x7fff, and the high bit is used as 97 + * a wrap counter when using VIRTIO_F_RING_PACKED. */ 96 98 u16 last_avail_idx; 97 99 98 100 /* Caches available index value from user. */ 99 101 u16 avail_idx; 100 102 101 - /* Last index we used. */ 103 + /* Last index we used. 104 + * Values are limited to 0x7fff, and the high bit is used as 105 + * a wrap counter when using VIRTIO_F_RING_PACKED. */ 102 106 u16 last_used_idx; 103 107 104 108 /* Used flags */ ··· 158 154 struct vhost_virtqueue **vqs; 159 155 int nvqs; 160 156 struct eventfd_ctx *log_ctx; 161 - struct vhost_worker *worker; 157 + struct vhost_worker worker; 162 158 struct vhost_iotlb *umem; 163 159 struct vhost_iotlb *iotlb; 164 160 spinlock_t iotlb_lock;
+3
fs/afs/dir.c
··· 1358 1358 op->dentry = dentry; 1359 1359 op->create.mode = S_IFDIR | mode; 1360 1360 op->create.reason = afs_edit_dir_for_mkdir; 1361 + op->mtime = current_time(dir); 1361 1362 op->ops = &afs_mkdir_operation; 1362 1363 return afs_do_sync_operation(op); 1363 1364 } ··· 1662 1661 op->dentry = dentry; 1663 1662 op->create.mode = S_IFREG | mode; 1664 1663 op->create.reason = afs_edit_dir_for_create; 1664 + op->mtime = current_time(dir); 1665 1665 op->ops = &afs_create_operation; 1666 1666 return afs_do_sync_operation(op); 1667 1667 ··· 1798 1796 op->ops = &afs_symlink_operation; 1799 1797 op->create.reason = afs_edit_dir_for_symlink; 1800 1798 op->create.symlink = content; 1799 + op->mtime = current_time(dir); 1801 1800 return afs_do_sync_operation(op); 1802 1801 1803 1802 error:
+2 -2
fs/afs/vl_probe.c
··· 115 115 } 116 116 } 117 117 118 - if (rxrpc_kernel_get_srtt(call->net->socket, call->rxcall, &rtt_us) && 119 - rtt_us < server->probe.rtt) { 118 + rxrpc_kernel_get_srtt(call->net->socket, call->rxcall, &rtt_us); 119 + if (rtt_us < server->probe.rtt) { 120 120 server->probe.rtt = rtt_us; 121 121 server->rtt = rtt_us; 122 122 alist->preferred = index;
+13 -6
fs/btrfs/disk-io.c
··· 242 242 int mirror_num) 243 243 { 244 244 struct btrfs_fs_info *fs_info = eb->fs_info; 245 - u64 start = eb->start; 246 245 int i, num_pages = num_extent_pages(eb); 247 246 int ret = 0; 248 247 ··· 250 251 251 252 for (i = 0; i < num_pages; i++) { 252 253 struct page *p = eb->pages[i]; 254 + u64 start = max_t(u64, eb->start, page_offset(p)); 255 + u64 end = min_t(u64, eb->start + eb->len, page_offset(p) + PAGE_SIZE); 256 + u32 len = end - start; 253 257 254 - ret = btrfs_repair_io_failure(fs_info, 0, start, PAGE_SIZE, 255 - start, p, start - page_offset(p), mirror_num); 258 + ret = btrfs_repair_io_failure(fs_info, 0, start, len, 259 + start, p, offset_in_page(start), mirror_num); 256 260 if (ret) 257 261 break; 258 - start += PAGE_SIZE; 259 262 } 260 263 261 264 return ret; ··· 996 995 { 997 996 struct btrfs_fs_info *fs_info = root->fs_info; 998 997 struct rb_node *tmp; 998 + int ret = 0; 999 999 1000 1000 write_lock(&fs_info->global_root_lock); 1001 1001 tmp = rb_find_add(&root->rb_node, &fs_info->global_root_tree, global_root_cmp); 1002 1002 write_unlock(&fs_info->global_root_lock); 1003 - ASSERT(!tmp); 1004 1003 1005 - return tmp ? -EEXIST : 0; 1004 + if (tmp) { 1005 + ret = -EEXIST; 1006 + btrfs_warn(fs_info, "global root %llu %llu already exists", 1007 + root->root_key.objectid, root->root_key.offset); 1008 + } 1009 + return ret; 1006 1010 } 1007 1011 1008 1012 void btrfs_global_root_delete(struct btrfs_root *root) ··· 2847 2841 /* We can't trust the free space cache either */ 2848 2842 btrfs_set_opt(fs_info->mount_opt, CLEAR_CACHE); 2849 2843 2844 + btrfs_warn(fs_info, "try to load backup roots slot %d", i); 2850 2845 ret = read_backup_root(fs_info, i); 2851 2846 backup_index = ret; 2852 2847 if (ret < 0)
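The repair loop above clamps each write to the intersection of the extent buffer and the current page, which is what makes it correct for subpage metadata (an eb smaller than a page, or one that is not page-aligned). The `max_t`/`min_t` clamping, sketched with plain integers (the 4 KiB constant is an assumption for the example):

```c
#include <stdint.h>

#define PAGE_SIZE_SKETCH 4096ULL

/* Intersect the extent buffer [eb_start, eb_start + eb_len) with the
 * page starting at page_off; store the overlap's start and return its
 * length. Mirrors the max_t/min_t pair in btrfs_repair_eb_io_failure(). */
static uint64_t eb_page_overlap(uint64_t eb_start, uint64_t eb_len,
				uint64_t page_off, uint64_t *start)
{
	uint64_t s = eb_start > page_off ? eb_start : page_off;
	uint64_t page_end = page_off + PAGE_SIZE_SKETCH;
	uint64_t eb_end = eb_start + eb_len;
	uint64_t e = eb_end < page_end ? eb_end : page_end;

	*start = s;
	return e - s;
}
```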
+13 -7
fs/btrfs/inode.c
··· 1864 1864 1865 1865 ret = btrfs_cross_ref_exist(root, btrfs_ino(inode), 1866 1866 key->offset - args->extent_offset, 1867 - args->disk_bytenr, false, path); 1867 + args->disk_bytenr, args->strict, path); 1868 1868 WARN_ON_ONCE(ret > 0 && is_freespace_inode); 1869 1869 if (ret != 0) 1870 1870 goto out; ··· 7264 7264 static int btrfs_get_blocks_direct_write(struct extent_map **map, 7265 7265 struct inode *inode, 7266 7266 struct btrfs_dio_data *dio_data, 7267 - u64 start, u64 len, 7267 + u64 start, u64 *lenp, 7268 7268 unsigned int iomap_flags) 7269 7269 { 7270 7270 const bool nowait = (iomap_flags & IOMAP_NOWAIT); ··· 7275 7275 struct btrfs_block_group *bg; 7276 7276 bool can_nocow = false; 7277 7277 bool space_reserved = false; 7278 + u64 len = *lenp; 7278 7279 u64 prev_len; 7279 7280 int ret = 0; 7280 7281 ··· 7346 7345 free_extent_map(em); 7347 7346 *map = NULL; 7348 7347 7349 - if (nowait) 7350 - return -EAGAIN; 7348 + if (nowait) { 7349 + ret = -EAGAIN; 7350 + goto out; 7351 + } 7351 7352 7352 7353 /* 7353 7354 * If we could not allocate data space before locking the file 7354 7355 * range and we can't do a NOCOW write, then we have to fail. 7355 7356 */ 7356 - if (!dio_data->data_space_reserved) 7357 - return -ENOSPC; 7357 + if (!dio_data->data_space_reserved) { 7358 + ret = -ENOSPC; 7359 + goto out; 7360 + } 7358 7361 7359 7362 /* 7360 7363 * We have to COW and we have already reserved data space before, ··· 7399 7394 btrfs_delalloc_release_extents(BTRFS_I(inode), len); 7400 7395 btrfs_delalloc_release_metadata(BTRFS_I(inode), len, true); 7401 7396 } 7397 + *lenp = len; 7402 7398 return ret; 7403 7399 } 7404 7400 ··· 7576 7570 7577 7571 if (write) { 7578 7572 ret = btrfs_get_blocks_direct_write(&em, inode, dio_data, 7579 - start, len, flags); 7573 + start, &len, flags); 7580 7574 if (ret < 0) 7581 7575 goto unlock_err; 7582 7576 unlock_extents = true;
+20 -8
fs/btrfs/scrub.c
··· 134 134 * The errors hit during the initial read of the stripe. 135 135 * 136 136 * Would be utilized for error reporting and repair. 137 + * 138 + * The remaining init_nr_* records the number of errors hit, only used 139 + * by error reporting. 137 140 */ 138 141 unsigned long init_error_bitmap; 142 + unsigned int init_nr_io_errors; 143 + unsigned int init_nr_csum_errors; 144 + unsigned int init_nr_meta_errors; 139 145 140 146 /* 141 147 * The following error bitmaps are all for the current status. ··· 1009 1003 sctx->stat.data_bytes_scrubbed += nr_data_sectors << fs_info->sectorsize_bits; 1010 1004 sctx->stat.tree_bytes_scrubbed += nr_meta_sectors << fs_info->sectorsize_bits; 1011 1005 sctx->stat.no_csum += nr_nodatacsum_sectors; 1012 - sctx->stat.read_errors += 1013 - bitmap_weight(&stripe->io_error_bitmap, stripe->nr_sectors); 1014 - sctx->stat.csum_errors += 1015 - bitmap_weight(&stripe->csum_error_bitmap, stripe->nr_sectors); 1016 - sctx->stat.verify_errors += 1017 - bitmap_weight(&stripe->meta_error_bitmap, stripe->nr_sectors); 1006 + sctx->stat.read_errors += stripe->init_nr_io_errors; 1007 + sctx->stat.csum_errors += stripe->init_nr_csum_errors; 1008 + sctx->stat.verify_errors += stripe->init_nr_meta_errors; 1018 1009 sctx->stat.uncorrectable_errors += 1019 1010 bitmap_weight(&stripe->error_bitmap, stripe->nr_sectors); 1020 1011 sctx->stat.corrected_errors += nr_repaired_sectors; ··· 1044 1041 scrub_verify_one_stripe(stripe, stripe->extent_sector_bitmap); 1045 1042 /* Save the initial failed bitmap for later repair and report usage. 
*/ 1046 1043 stripe->init_error_bitmap = stripe->error_bitmap; 1044 + stripe->init_nr_io_errors = bitmap_weight(&stripe->io_error_bitmap, 1045 + stripe->nr_sectors); 1046 + stripe->init_nr_csum_errors = bitmap_weight(&stripe->csum_error_bitmap, 1047 + stripe->nr_sectors); 1048 + stripe->init_nr_meta_errors = bitmap_weight(&stripe->meta_error_bitmap, 1049 + stripe->nr_sectors); 1047 1050 1048 1051 if (bitmap_empty(&stripe->init_error_bitmap, stripe->nr_sectors)) 1049 1052 goto out; ··· 1499 1490 { 1500 1491 stripe->extent_sector_bitmap = 0; 1501 1492 stripe->init_error_bitmap = 0; 1493 + stripe->init_nr_io_errors = 0; 1494 + stripe->init_nr_csum_errors = 0; 1495 + stripe->init_nr_meta_errors = 0; 1502 1496 stripe->error_bitmap = 0; 1503 1497 stripe->io_error_bitmap = 0; 1504 1498 stripe->csum_error_bitmap = 0; ··· 1742 1730 break; 1743 1731 } 1744 1732 } 1745 - } else { 1733 + } else if (!sctx->readonly) { 1746 1734 for (int i = 0; i < nr_stripes; i++) { 1747 1735 unsigned long repaired; 1748 1736 ··· 2266 2254 } 2267 2255 out: 2268 2256 ret2 = flush_scrub_stripes(sctx); 2269 - if (!ret2) 2257 + if (!ret) 2270 2258 ret = ret2; 2271 2259 if (sctx->raid56_data_stripes) { 2272 2260 for (int i = 0; i < nr_data_stripes(map); i++)
+6
fs/btrfs/super.c
··· 1841 1841 btrfs_clear_sb_rdonly(sb); 1842 1842 1843 1843 set_bit(BTRFS_FS_OPEN, &fs_info->flags); 1844 + 1845 + /* 1846 + * If we've gone from readonly -> read/write, we need to get 1847 + * our sync/async discard lists in the right state. 1848 + */ 1849 + btrfs_discard_resume(fs_info); 1844 1850 } 1845 1851 out: 1846 1852 /*
+6
fs/ceph/caps.c
··· 1627 1627 struct inode *inode = &ci->netfs.inode; 1628 1628 struct ceph_mds_client *mdsc = ceph_inode_to_client(inode)->mdsc; 1629 1629 struct ceph_mds_session *session = NULL; 1630 + bool need_put = false; 1630 1631 int mds; 1631 1632 1632 1633 dout("ceph_flush_snaps %p\n", inode); ··· 1672 1671 ceph_put_mds_session(session); 1673 1672 /* we flushed them all; remove this inode from the queue */ 1674 1673 spin_lock(&mdsc->snap_flush_lock); 1674 + if (!list_empty(&ci->i_snap_flush_item)) 1675 + need_put = true; 1675 1676 list_del_init(&ci->i_snap_flush_item); 1676 1677 spin_unlock(&mdsc->snap_flush_lock); 1678 + 1679 + if (need_put) 1680 + iput(inode); 1677 1681 } 1678 1682 1679 1683 /*
+3 -1
fs/ceph/snap.c
··· 693 693 capsnap->size); 694 694 695 695 spin_lock(&mdsc->snap_flush_lock); 696 - if (list_empty(&ci->i_snap_flush_item)) 696 + if (list_empty(&ci->i_snap_flush_item)) { 697 + ihold(inode); 697 698 list_add_tail(&ci->i_snap_flush_item, &mdsc->snap_flush_list); 699 + } 698 700 spin_unlock(&mdsc->snap_flush_lock); 699 701 return 1; /* caller may want to ceph_flush_snaps */ 700 702 }
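The two ceph hunks pair an `ihold()` taken when the inode first lands on `snap_flush_list` with an `iput()` dropped only when it is really removed; the `list_empty()` checks keep the reference balanced no matter how often either path runs, and the put is deferred until after the spinlock. A minimal sketch of that pairing with a hypothetical refcounted object (the `queued` flag plays the role of `list_empty(&ci->i_snap_flush_item)`):

```c
#include <stdbool.h>

struct obj {
	int refcount;
	bool queued; /* stands in for "on snap_flush_list" */
};

/* Queue side (ceph_queue_cap_snap): take a reference only on the
 * not-queued -> queued transition. */
static void obj_queue(struct obj *o)
{
	if (!o->queued) {
		o->refcount++; /* ihold() */
		o->queued = true;
	}
}

/* Flush side (ceph_flush_snaps): remember under the "lock" whether we
 * actually removed the object, and drop the reference afterwards. */
static void obj_flush(struct obj *o)
{
	bool need_put = o->queued;

	o->queued = false;         /* list_del_init() */
	if (need_put)
		o->refcount--;     /* iput(), outside the spinlock */
}
```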
+5 -1
fs/eventpoll.c
··· 1805 1805 { 1806 1806 int ret = default_wake_function(wq_entry, mode, sync, key); 1807 1807 1808 - list_del_init(&wq_entry->entry); 1808 + /* 1809 + * Pairs with list_empty_careful in ep_poll, and ensures future loop 1810 + * iterations see the cause of this wakeup. 1811 + */ 1812 + list_del_init_careful(&wq_entry->entry); 1809 1813 return ret; 1810 1814 } 1811 1815
+12 -11
fs/ext4/balloc.c
··· 324 324 struct ext4_group_info *ext4_get_group_info(struct super_block *sb, 325 325 ext4_group_t group) 326 326 { 327 - struct ext4_group_info **grp_info; 328 - long indexv, indexh; 327 + struct ext4_group_info **grp_info; 328 + long indexv, indexh; 329 329 330 - if (unlikely(group >= EXT4_SB(sb)->s_groups_count)) { 331 - ext4_error(sb, "invalid group %u", group); 332 - return NULL; 333 - } 334 - indexv = group >> (EXT4_DESC_PER_BLOCK_BITS(sb)); 335 - indexh = group & ((EXT4_DESC_PER_BLOCK(sb)) - 1); 336 - grp_info = sbi_array_rcu_deref(EXT4_SB(sb), s_group_info, indexv); 337 - return grp_info[indexh]; 330 + if (unlikely(group >= EXT4_SB(sb)->s_groups_count)) 331 + return NULL; 332 + indexv = group >> (EXT4_DESC_PER_BLOCK_BITS(sb)); 333 + indexh = group & ((EXT4_DESC_PER_BLOCK(sb)) - 1); 334 + grp_info = sbi_array_rcu_deref(EXT4_SB(sb), s_group_info, indexv); 335 + return grp_info[indexh]; 338 336 } 339 337 340 338 /* ··· 884 886 if (!ext4_bg_has_super(sb, group)) 885 887 return 0; 886 888 887 - return EXT4_SB(sb)->s_gdb_count; 889 + if (ext4_has_feature_meta_bg(sb)) 890 + return le32_to_cpu(EXT4_SB(sb)->s_es->s_first_meta_bg); 891 + else 892 + return EXT4_SB(sb)->s_gdb_count; 888 893 } 889 894 890 895 /**
+1 -5
fs/ext4/super.c
··· 6388 6388 struct ext4_mount_options old_opts; 6389 6389 ext4_group_t g; 6390 6390 int err = 0; 6391 - int enable_rw = 0; 6392 6391 #ifdef CONFIG_QUOTA 6393 6392 int enable_quota = 0; 6394 6393 int i, j; ··· 6574 6575 if (err) 6575 6576 goto restore_opts; 6576 6577 6577 - enable_rw = 1; 6578 + sb->s_flags &= ~SB_RDONLY; 6578 6579 if (ext4_has_feature_mmp(sb)) { 6579 6580 err = ext4_multi_mount_protect(sb, 6580 6581 le64_to_cpu(es->s_mmp_block)); ··· 6620 6621 #endif 6621 6622 if (!test_opt(sb, BLOCK_VALIDITY) && sbi->s_system_blks) 6622 6623 ext4_release_system_zone(sb); 6623 - 6624 - if (enable_rw) 6625 - sb->s_flags &= ~SB_RDONLY; 6626 6624 6627 6625 /* 6628 6626 * Reinitialize lazy itable initialization thread based on
+4 -2
fs/ext4/xattr.c
··· 2056 2056 else { 2057 2057 u32 ref; 2058 2058 2059 + #ifdef EXT4_XATTR_DEBUG 2059 2060 WARN_ON_ONCE(dquot_initialize_needed(inode)); 2060 - 2061 + #endif 2061 2062 /* The old block is released after updating 2062 2063 the inode. */ 2063 2064 error = dquot_alloc_block(inode, ··· 2121 2120 /* We need to allocate a new block */ 2122 2121 ext4_fsblk_t goal, block; 2123 2122 2123 + #ifdef EXT4_XATTR_DEBUG 2124 2124 WARN_ON_ONCE(dquot_initialize_needed(inode)); 2125 - 2125 + #endif 2126 2126 goal = ext4_group_first_block_no(sb, 2127 2127 EXT4_I(inode)->i_block_group); 2128 2128 block = ext4_new_meta_blocks(handle, inode, goal, 0,
+14 -3
fs/gfs2/file.c
··· 784 784 if (!user_backed_iter(i)) 785 785 return false; 786 786 787 + /* 788 + * Try to fault in multiple pages initially. When that doesn't result 789 + * in any progress, fall back to a single page. 790 + */ 787 791 size = PAGE_SIZE; 788 792 offs = offset_in_page(iocb->ki_pos); 789 - if (*prev_count != count || !*window_size) { 793 + if (*prev_count != count) { 790 794 size_t nr_dirtied; 791 795 792 796 nr_dirtied = max(current->nr_dirtied_pause - ··· 874 870 struct gfs2_inode *ip = GFS2_I(inode); 875 871 size_t prev_count = 0, window_size = 0; 876 872 size_t written = 0; 873 + bool enough_retries; 877 874 ssize_t ret; 878 875 879 876 /* ··· 918 913 if (ret > 0) 919 914 written = ret; 920 915 916 + enough_retries = prev_count == iov_iter_count(from) && 917 + window_size <= PAGE_SIZE; 921 918 if (should_fault_in_pages(from, iocb, &prev_count, &window_size)) { 922 919 gfs2_glock_dq(gh); 923 920 window_size -= fault_in_iov_iter_readable(from, window_size); 924 - if (window_size) 925 - goto retry; 921 + if (window_size) { 922 + if (!enough_retries) 923 + goto retry; 924 + /* fall back to buffered I/O */ 925 + ret = 0; 926 + } 926 927 } 927 928 out_unlock: 928 929 if (gfs2_holder_queued(gh))
+10 -2
fs/nilfs2/btnode.c
··· 285 285 if (nbh == NULL) { /* blocksize == pagesize */ 286 286 xa_erase_irq(&btnc->i_pages, newkey); 287 287 unlock_page(ctxt->bh->b_page); 288 - } else 289 - brelse(nbh); 288 + } else { 289 + /* 290 + * When canceling a buffer that a prepare operation has 291 + * allocated to copy a node block to another location, use 292 + * nilfs_btnode_delete() to initialize and release the buffer 293 + * so that the buffer flags will not be in an inconsistent 294 + * state when it is reallocated. 295 + */ 296 + nilfs_btnode_delete(nbh); 297 + } 290 298 }
+9
fs/nilfs2/sufile.c
··· 779 779 goto out_header; 780 780 781 781 sui->ncleansegs -= nsegs - newnsegs; 782 + 783 + /* 784 + * If the sufile is successfully truncated, immediately adjust 785 + * the segment allocation space while locking the semaphore 786 + * "mi_sem" so that nilfs_sufile_alloc() never allocates 787 + * segments in the truncated space. 788 + */ 789 + sui->allocmax = newnsegs - 1; 790 + sui->allocmin = 0; 782 791 } 783 792 784 793 kaddr = kmap_atomic(header_bh->b_page);
+42 -1
fs/nilfs2/the_nilfs.c
··· 405 405 100)); 406 406 } 407 407 408 + /** 409 + * nilfs_max_segment_count - calculate the maximum number of segments 410 + * @nilfs: nilfs object 411 + */ 412 + static u64 nilfs_max_segment_count(struct the_nilfs *nilfs) 413 + { 414 + u64 max_count = U64_MAX; 415 + 416 + do_div(max_count, nilfs->ns_blocks_per_segment); 417 + return min_t(u64, max_count, ULONG_MAX); 418 + } 419 + 408 420 void nilfs_set_nsegments(struct the_nilfs *nilfs, unsigned long nsegs) 409 421 { 410 422 nilfs->ns_nsegments = nsegs; ··· 426 414 static int nilfs_store_disk_layout(struct the_nilfs *nilfs, 427 415 struct nilfs_super_block *sbp) 428 416 { 417 + u64 nsegments, nblocks; 418 + 429 419 if (le32_to_cpu(sbp->s_rev_level) < NILFS_MIN_SUPP_REV) { 430 420 nilfs_err(nilfs->ns_sb, 431 421 "unsupported revision (superblock rev.=%d.%d, current rev.=%d.%d). Please check the version of mkfs.nilfs(2).", ··· 471 457 return -EINVAL; 472 458 } 473 459 474 - nilfs_set_nsegments(nilfs, le64_to_cpu(sbp->s_nsegments)); 460 + nsegments = le64_to_cpu(sbp->s_nsegments); 461 + if (nsegments > nilfs_max_segment_count(nilfs)) { 462 + nilfs_err(nilfs->ns_sb, 463 + "segment count %llu exceeds upper limit (%llu segments)", 464 + (unsigned long long)nsegments, 465 + (unsigned long long)nilfs_max_segment_count(nilfs)); 466 + return -EINVAL; 467 + } 468 + 469 + nblocks = sb_bdev_nr_blocks(nilfs->ns_sb); 470 + if (nblocks) { 471 + u64 min_block_count = nsegments * nilfs->ns_blocks_per_segment; 472 + /* 473 + * To avoid failing to mount early device images without a 474 + * second superblock, exclude that block count from the 475 + * "min_block_count" calculation. 
476 + */ 477 + 478 + if (nblocks < min_block_count) { 479 + nilfs_err(nilfs->ns_sb, 480 + "total number of segment blocks %llu exceeds device size (%llu blocks)", 481 + (unsigned long long)min_block_count, 482 + (unsigned long long)nblocks); 483 + return -EINVAL; 484 + } 485 + } 486 + 487 + nilfs_set_nsegments(nilfs, nsegments); 475 488 nilfs->ns_crc_seed = le32_to_cpu(sbp->s_crc_seed); 476 489 return 0; 477 490 }
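`nilfs_max_segment_count()` above caps the on-disk segment count so that `nsegments * blocks_per_segment` can never overflow a u64, then further clamps to `ULONG_MAX` to fit the `unsigned long` used elsewhere. The same guard in plain C:

```c
#include <limits.h>
#include <stdint.h>

/* Largest segment count such that count * blocks_per_segment still
 * fits in a uint64_t, clamped to ULONG_MAX as the driver does. */
static uint64_t max_segment_count(uint64_t blocks_per_segment)
{
	uint64_t max_count = UINT64_MAX / blocks_per_segment;

	return max_count < ULONG_MAX ? max_count : ULONG_MAX;
}
```

Validating `nsegments` against this bound before multiplying is what makes the later `nsegments * ns_blocks_per_segment` device-size check safe.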
+7 -1
fs/ocfs2/file.c
··· 2100 2100 struct ocfs2_space_resv sr; 2101 2101 int change_size = 1; 2102 2102 int cmd = OCFS2_IOC_RESVSP64; 2103 + int ret = 0; 2103 2104 2104 2105 if (mode & ~(FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE)) 2105 2106 return -EOPNOTSUPP; 2106 2107 if (!ocfs2_writes_unwritten_extents(osb)) 2107 2108 return -EOPNOTSUPP; 2108 2109 2109 - if (mode & FALLOC_FL_KEEP_SIZE) 2110 + if (mode & FALLOC_FL_KEEP_SIZE) { 2110 2111 change_size = 0; 2112 + } else { 2113 + ret = inode_newsize_ok(inode, offset + len); 2114 + if (ret) 2115 + return ret; 2116 + } 2111 2117 2112 2118 if (mode & FALLOC_FL_PUNCH_HOLE) 2113 2119 cmd = OCFS2_IOC_UNRESVSP64;
+4 -2
fs/ocfs2/super.c
··· 952 952 for (type = 0; type < OCFS2_MAXQUOTAS; type++) { 953 953 if (!sb_has_quota_loaded(sb, type)) 954 954 continue; 955 - oinfo = sb_dqinfo(sb, type)->dqi_priv; 956 - cancel_delayed_work_sync(&oinfo->dqi_sync_work); 955 + if (!sb_has_quota_suspended(sb, type)) { 956 + oinfo = sb_dqinfo(sb, type)->dqi_priv; 957 + cancel_delayed_work_sync(&oinfo->dqi_sync_work); 958 + } 957 959 inode = igrab(sb->s_dquot.files[type]); 958 960 /* Turn off quotas. This will remove all dquot structures from 959 961 * memory and so they will be automatically synced to global
+54 -4
fs/smb/client/cifs_debug.c
··· 12 12 #include <linux/module.h> 13 13 #include <linux/proc_fs.h> 14 14 #include <linux/uaccess.h> 15 + #include <uapi/linux/ethtool.h> 15 16 #include "cifspdu.h" 16 17 #include "cifsglob.h" 17 18 #include "cifsproto.h" ··· 131 130 struct TCP_Server_Info *server = chan->server; 132 131 133 132 seq_printf(m, "\n\n\t\tChannel: %d ConnectionId: 0x%llx" 134 - "\n\t\tNumber of credits: %d Dialect 0x%x" 133 + "\n\t\tNumber of credits: %d,%d,%d Dialect 0x%x" 135 134 "\n\t\tTCP status: %d Instance: %d" 136 135 "\n\t\tLocal Users To Server: %d SecMode: 0x%x Req On Wire: %d" 137 136 "\n\t\tIn Send: %d In MaxReq Wait: %d", 138 137 i+1, server->conn_id, 139 138 server->credits, 139 + server->echo_credits, 140 + server->oplock_credits, 140 141 server->dialect, 141 142 server->tcpStatus, 142 143 server->reconnect_instance, ··· 149 146 atomic_read(&server->num_waiters)); 150 147 } 151 148 149 + static inline const char *smb_speed_to_str(size_t bps) 150 + { 151 + size_t mbps = bps / 1000 / 1000; 152 + 153 + switch (mbps) { 154 + case SPEED_10: 155 + return "10Mbps"; 156 + case SPEED_100: 157 + return "100Mbps"; 158 + case SPEED_1000: 159 + return "1Gbps"; 160 + case SPEED_2500: 161 + return "2.5Gbps"; 162 + case SPEED_5000: 163 + return "5Gbps"; 164 + case SPEED_10000: 165 + return "10Gbps"; 166 + case SPEED_14000: 167 + return "14Gbps"; 168 + case SPEED_20000: 169 + return "20Gbps"; 170 + case SPEED_25000: 171 + return "25Gbps"; 172 + case SPEED_40000: 173 + return "40Gbps"; 174 + case SPEED_50000: 175 + return "50Gbps"; 176 + case SPEED_56000: 177 + return "56Gbps"; 178 + case SPEED_100000: 179 + return "100Gbps"; 180 + case SPEED_200000: 181 + return "200Gbps"; 182 + case SPEED_400000: 183 + return "400Gbps"; 184 + case SPEED_800000: 185 + return "800Gbps"; 186 + default: 187 + return "Unknown"; 188 + } 189 + } 190 + 152 191 static void 153 192 cifs_dump_iface(struct seq_file *m, struct cifs_server_iface *iface) 154 193 { 155 194 struct sockaddr_in *ipv4 = (struct 
sockaddr_in *)&iface->sockaddr; 156 195 struct sockaddr_in6 *ipv6 = (struct sockaddr_in6 *)&iface->sockaddr; 157 196 158 - seq_printf(m, "\tSpeed: %zu bps\n", iface->speed); 197 + seq_printf(m, "\tSpeed: %s\n", smb_speed_to_str(iface->speed)); 159 198 seq_puts(m, "\t\tCapabilities: "); 160 199 if (iface->rdma_capable) 161 200 seq_puts(m, "rdma "); 162 201 if (iface->rss_capable) 163 202 seq_puts(m, "rss "); 203 + if (!iface->rdma_capable && !iface->rss_capable) 204 + seq_puts(m, "None"); 164 205 seq_putc(m, '\n'); 165 206 if (iface->sockaddr.ss_family == AF_INET) 166 207 seq_printf(m, "\t\tIPv4: %pI4\n", &ipv4->sin_addr); ··· 397 350 atomic_read(&server->smbd_conn->mr_used_count)); 398 351 skip_rdma: 399 352 #endif 400 - seq_printf(m, "\nNumber of credits: %d Dialect 0x%x", 401 - server->credits, server->dialect); 353 + seq_printf(m, "\nNumber of credits: %d,%d,%d Dialect 0x%x", 354 + server->credits, 355 + server->echo_credits, 356 + server->oplock_credits, 357 + server->dialect); 402 358 if (server->compress_algorithm == SMB3_COMPRESS_LZNT1) 403 359 seq_printf(m, " COMPRESS_LZNT1"); 404 360 else if (server->compress_algorithm == SMB3_COMPRESS_LZ77)
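An aside for readers of this hunk: the new smb_speed_to_str() works because the ethtool SPEED_* constants are plain Mbps values (SPEED_1000 == 1000), so dividing the interface speed down to Mbps lets the code switch on them directly. A minimal user-space sketch of the same mapping, with a few of the constants inlined as their numeric values:

```c
#include <stddef.h>

/* Sketch of the bps -> human-readable conversion added in this hunk.
 * The kernel switches on ethtool's SPEED_* constants, which are plain
 * Mbps values (SPEED_1000 == 1000); only a few cases are shown here. */
static const char *speed_to_str(size_t bps)
{
	size_t mbps = bps / 1000 / 1000;

	switch (mbps) {
	case 10:	return "10Mbps";
	case 100:	return "100Mbps";
	case 1000:	return "1Gbps";
	case 2500:	return "2.5Gbps";
	case 10000:	return "10Gbps";
	default:	return "Unknown";
	}
}
```

Note the rounding behavior this implies: any speed that is not an exact multiple of a known SPEED_* value falls through to "Unknown".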
-37
fs/smb/client/cifsglob.h
··· 970 970 kfree(iface); 971 971 } 972 972 973 - /* 974 - * compare two interfaces a and b 975 - * return 0 if everything matches. 976 - * return 1 if a has higher link speed, or rdma capable, or rss capable 977 - * return -1 otherwise. 978 - */ 979 - static inline int 980 - iface_cmp(struct cifs_server_iface *a, struct cifs_server_iface *b) 981 - { 982 - int cmp_ret = 0; 983 - 984 - WARN_ON(!a || !b); 985 - if (a->speed == b->speed) { 986 - if (a->rdma_capable == b->rdma_capable) { 987 - if (a->rss_capable == b->rss_capable) { 988 - cmp_ret = memcmp(&a->sockaddr, &b->sockaddr, 989 - sizeof(a->sockaddr)); 990 - if (!cmp_ret) 991 - return 0; 992 - else if (cmp_ret > 0) 993 - return 1; 994 - else 995 - return -1; 996 - } else if (a->rss_capable > b->rss_capable) 997 - return 1; 998 - else 999 - return -1; 1000 - } else if (a->rdma_capable > b->rdma_capable) 1001 - return 1; 1002 - else 1003 - return -1; 1004 - } else if (a->speed > b->speed) 1005 - return 1; 1006 - else 1007 - return -1; 1008 - } 1009 - 1010 973 struct cifs_chan { 1011 974 unsigned int in_reconnect : 1; /* if session setup in progress for this channel */ 1012 975 struct TCP_Server_Info *server;
+1
fs/smb/client/cifsproto.h
··· 87 87 struct mid_q_entry *mid); 88 88 extern int smb3_parse_devname(const char *devname, struct smb3_fs_context *ctx); 89 89 extern int smb3_parse_opt(const char *options, const char *key, char **val); 90 + extern int cifs_ipaddr_cmp(struct sockaddr *srcaddr, struct sockaddr *rhs); 90 91 extern bool cifs_match_ipaddr(struct sockaddr *srcaddr, struct sockaddr *rhs); 91 92 extern int cifs_discard_remaining_data(struct TCP_Server_Info *server); 92 93 extern int cifs_call_async(struct TCP_Server_Info *server,
+55 -4
fs/smb/client/connect.c
··· 1288 1288 module_put_and_kthread_exit(0); 1289 1289 } 1290 1290 1291 + int 1292 + cifs_ipaddr_cmp(struct sockaddr *srcaddr, struct sockaddr *rhs) 1293 + { 1294 + struct sockaddr_in *saddr4 = (struct sockaddr_in *)srcaddr; 1295 + struct sockaddr_in *vaddr4 = (struct sockaddr_in *)rhs; 1296 + struct sockaddr_in6 *saddr6 = (struct sockaddr_in6 *)srcaddr; 1297 + struct sockaddr_in6 *vaddr6 = (struct sockaddr_in6 *)rhs; 1298 + 1299 + switch (srcaddr->sa_family) { 1300 + case AF_UNSPEC: 1301 + switch (rhs->sa_family) { 1302 + case AF_UNSPEC: 1303 + return 0; 1304 + case AF_INET: 1305 + case AF_INET6: 1306 + return 1; 1307 + default: 1308 + return -1; 1309 + } 1310 + case AF_INET: { 1311 + switch (rhs->sa_family) { 1312 + case AF_UNSPEC: 1313 + return -1; 1314 + case AF_INET: 1315 + return memcmp(saddr4, vaddr4, 1316 + sizeof(struct sockaddr_in)); 1317 + case AF_INET6: 1318 + return 1; 1319 + default: 1320 + return -1; 1321 + } 1322 + } 1323 + case AF_INET6: { 1324 + switch (rhs->sa_family) { 1325 + case AF_UNSPEC: 1326 + case AF_INET: 1327 + return -1; 1328 + case AF_INET6: 1329 + return memcmp(saddr6, 1330 + vaddr6, 1331 + sizeof(struct sockaddr_in6)); 1332 + default: 1333 + return -1; 1334 + } 1335 + } 1336 + default: 1337 + return -1; /* don't expect to be here */ 1338 + } 1339 + } 1340 + 1291 1341 /* 1292 1342 * Returns true if srcaddr isn't specified and rhs isn't specified, or 1293 1343 * if srcaddr is specified and matches the IP address of the rhs argument ··· 4136 4086 4137 4087 /* only send once per connect */ 4138 4088 spin_lock(&tcon->tc_lock); 4089 + if (tcon->status == TID_GOOD) { 4090 + spin_unlock(&tcon->tc_lock); 4091 + return 0; 4092 + } 4093 + 4139 4094 if (tcon->status != TID_NEW && 4140 4095 tcon->status != TID_NEED_TCON) { 4141 4096 spin_unlock(&tcon->tc_lock); 4142 4097 return -EHOSTDOWN; 4143 4098 } 4144 4099 4145 - if (tcon->status == TID_GOOD) { 4146 - spin_unlock(&tcon->tc_lock); 4147 - return 0; 4148 - } 4149 4100 tcon->status = 
TID_IN_TCON; 4150 4101 spin_unlock(&tcon->tc_lock); 4151 4102
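One non-obvious detail of the new cifs_ipaddr_cmp() above: it imposes a total order on address families before ever comparing bytes. AF_UNSPEC compares greater than AF_INET, which compares greater than AF_INET6; only two addresses of the same family fall through to the memcmp(). A sketch of just that ordering rule (the enum ranks are illustrative, not kernel constants):

```c
/* Illustrative ranks reproducing the family ordering encoded in
 * cifs_ipaddr_cmp(): INET6 < INET < UNSPEC.  Same-family pairs
 * compare equal here; the real function then falls back to a
 * byte-wise memcmp() of the two sockaddrs. */
enum fam_rank { FAM_INET6, FAM_INET, FAM_UNSPEC };

static int family_cmp(enum fam_rank a, enum fam_rank b)
{
	return (a > b) - (a < b);	/* branch-free three-way compare */
}
```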
+5 -4
fs/smb/client/dfs.c
··· 575 575 576 576 /* only send once per connect */ 577 577 spin_lock(&tcon->tc_lock); 578 + if (tcon->status == TID_GOOD) { 579 + spin_unlock(&tcon->tc_lock); 580 + return 0; 581 + } 582 + 578 583 if (tcon->status != TID_NEW && 579 584 tcon->status != TID_NEED_TCON) { 580 585 spin_unlock(&tcon->tc_lock); 581 586 return -EHOSTDOWN; 582 587 } 583 588 584 - if (tcon->status == TID_GOOD) { 585 - spin_unlock(&tcon->tc_lock); 586 - return 0; 587 - } 588 589 tcon->status = TID_IN_TCON; 589 590 spin_unlock(&tcon->tc_lock); 590 591
+6 -2
fs/smb/client/file.c
··· 4942 4942 * disconnected since oplock already released by the server 4943 4943 */ 4944 4944 if (!oplock_break_cancelled) { 4945 - rc = tcon->ses->server->ops->oplock_response(tcon, persistent_fid, 4945 + /* check for server null since can race with kill_sb calling tree disconnect */ 4946 + if (tcon->ses && tcon->ses->server) { 4947 + rc = tcon->ses->server->ops->oplock_response(tcon, persistent_fid, 4946 4948 volatile_fid, net_fid, cinode); 4947 - cifs_dbg(FYI, "Oplock release rc = %d\n", rc); 4949 + cifs_dbg(FYI, "Oplock release rc = %d\n", rc); 4950 + } else 4951 + pr_warn_once("lease break not sent for unmounted share\n"); 4948 4952 } 4949 4953 4950 4954 cifs_done_oplock_break(cinode);
+40
fs/smb/client/smb2ops.c
··· 34 34 change_conf(struct TCP_Server_Info *server) 35 35 { 36 36 server->credits += server->echo_credits + server->oplock_credits; 37 + if (server->credits > server->max_credits) 38 + server->credits = server->max_credits; 37 39 server->oplock_credits = server->echo_credits = 0; 38 40 switch (server->credits) { 39 41 case 0: ··· 93 91 server->conn_id, server->hostname, *val, 94 92 add, server->in_flight); 95 93 } 94 + WARN_ON_ONCE(server->in_flight == 0); 96 95 server->in_flight--; 97 96 if (server->in_flight == 0 && 98 97 ((optype & CIFS_OP_MASK) != CIFS_NEG_OP) && ··· 511 508 rsize = min_t(unsigned int, rsize, SMB2_MAX_BUFFER_SIZE); 512 509 513 510 return rsize; 511 + } 512 + 513 + /* 514 + * compare two interfaces a and b 515 + * return 0 if everything matches. 516 + * return 1 if a is rdma capable, or rss capable, or has higher link speed 517 + * return -1 otherwise. 518 + */ 519 + static int 520 + iface_cmp(struct cifs_server_iface *a, struct cifs_server_iface *b) 521 + { 522 + int cmp_ret = 0; 523 + 524 + WARN_ON(!a || !b); 525 + if (a->rdma_capable == b->rdma_capable) { 526 + if (a->rss_capable == b->rss_capable) { 527 + if (a->speed == b->speed) { 528 + cmp_ret = cifs_ipaddr_cmp((struct sockaddr *) &a->sockaddr, 529 + (struct sockaddr *) &b->sockaddr); 530 + if (!cmp_ret) 531 + return 0; 532 + else if (cmp_ret > 0) 533 + return 1; 534 + else 535 + return -1; 536 + } else if (a->speed > b->speed) 537 + return 1; 538 + else 539 + return -1; 540 + } else if (a->rss_capable > b->rss_capable) 541 + return 1; 542 + else 543 + return -1; 544 + } else if (a->rdma_capable > b->rdma_capable) 545 + return 1; 546 + else 547 + return -1; 514 548 } 515 549 516 550 static int
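The iface_cmp() moved into this file also changes comparison precedence: the old cifsglob.h version compared speed first, while this one prefers rdma capability, then rss capability, and only then link speed. A standalone sketch of the new precedence (struct and field names are illustrative):

```c
/* Illustrative interface-preference comparator mirroring the new
 * iface_cmp() precedence: rdma beats rss beats speed.  Returns 1 if
 * a is preferred, -1 if b is, 0 on a full tie. */
struct iface_sketch {
	int rdma_capable;
	int rss_capable;
	unsigned long speed;
};

static int iface_pref_cmp(const struct iface_sketch *a,
			  const struct iface_sketch *b)
{
	if (a->rdma_capable != b->rdma_capable)
		return a->rdma_capable > b->rdma_capable ? 1 : -1;
	if (a->rss_capable != b->rss_capable)
		return a->rss_capable > b->rss_capable ? 1 : -1;
	if (a->speed != b->speed)
		return a->speed > b->speed ? 1 : -1;
	return 0;
}
```

With this ordering, an rdma-capable interface wins even against a much faster non-rdma one, which the old speed-first comparison did not guarantee.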
+28 -4
fs/smb/client/smb2pdu.c
··· 1305 1305 } 1306 1306 1307 1307 /* enough to enable echos and oplocks and one max size write */ 1308 - req->hdr.CreditRequest = cpu_to_le16(130); 1308 + if (server->credits >= server->max_credits) 1309 + req->hdr.CreditRequest = cpu_to_le16(0); 1310 + else 1311 + req->hdr.CreditRequest = cpu_to_le16( 1312 + min_t(int, server->max_credits - 1313 + server->credits, 130)); 1309 1314 1310 1315 /* only one of SMB2 signing flags may be set in SMB2 request */ 1311 1316 if (server->sign) ··· 1904 1899 rqst.rq_nvec = 2; 1905 1900 1906 1901 /* Need 64 for max size write so ask for more in case not there yet */ 1907 - req->hdr.CreditRequest = cpu_to_le16(64); 1902 + if (server->credits >= server->max_credits) 1903 + req->hdr.CreditRequest = cpu_to_le16(0); 1904 + else 1905 + req->hdr.CreditRequest = cpu_to_le16( 1906 + min_t(int, server->max_credits - 1907 + server->credits, 64)); 1908 1908 1909 1909 rc = cifs_send_recv(xid, ses, server, 1910 1910 &rqst, &resp_buftype, flags, &rsp_iov); ··· 4237 4227 struct TCP_Server_Info *server; 4238 4228 struct cifs_tcon *tcon = tlink_tcon(rdata->cfile->tlink); 4239 4229 unsigned int total_len; 4230 + int credit_request; 4240 4231 4241 4232 cifs_dbg(FYI, "%s: offset=%llu bytes=%u\n", 4242 4233 __func__, rdata->offset, rdata->bytes); ··· 4269 4258 if (rdata->credits.value > 0) { 4270 4259 shdr->CreditCharge = cpu_to_le16(DIV_ROUND_UP(rdata->bytes, 4271 4260 SMB2_MAX_BUFFER_SIZE)); 4272 - shdr->CreditRequest = cpu_to_le16(le16_to_cpu(shdr->CreditCharge) + 8); 4261 + credit_request = le16_to_cpu(shdr->CreditCharge) + 8; 4262 + if (server->credits >= server->max_credits) 4263 + shdr->CreditRequest = cpu_to_le16(0); 4264 + else 4265 + shdr->CreditRequest = cpu_to_le16( 4266 + min_t(int, server->max_credits - 4267 + server->credits, credit_request)); 4273 4268 4274 4269 rc = adjust_credits(server, &rdata->credits, rdata->bytes); 4275 4270 if (rc) ··· 4485 4468 unsigned int total_len; 4486 4469 struct cifs_io_parms _io_parms; 4487 4470 
struct cifs_io_parms *io_parms = NULL; 4471 + int credit_request; 4488 4472 4489 4473 if (!wdata->server) 4490 4474 server = wdata->server = cifs_pick_channel(tcon->ses); ··· 4590 4572 if (wdata->credits.value > 0) { 4591 4573 shdr->CreditCharge = cpu_to_le16(DIV_ROUND_UP(wdata->bytes, 4592 4574 SMB2_MAX_BUFFER_SIZE)); 4593 - shdr->CreditRequest = cpu_to_le16(le16_to_cpu(shdr->CreditCharge) + 8); 4575 + credit_request = le16_to_cpu(shdr->CreditCharge) + 8; 4576 + if (server->credits >= server->max_credits) 4577 + shdr->CreditRequest = cpu_to_le16(0); 4578 + else 4579 + shdr->CreditRequest = cpu_to_le16( 4580 + min_t(int, server->max_credits - 4581 + server->credits, credit_request)); 4594 4582 4595 4583 rc = adjust_credits(server, &wdata->credits, io_parms->length); 4596 4584 if (rc)
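All four CreditRequest sites in this hunk implement the same clamp: never ask the server for more credits than would take the connection past max_credits, and ask for nothing once the ceiling is reached. A self-contained sketch of that arithmetic (function name is illustrative):

```c
/* Sketch of the CreditRequest clamp used throughout this hunk:
 * request min(max_credits - credits, want), or 0 once the connection
 * already holds max_credits or more. */
static unsigned short clamp_credit_request(int credits, int max_credits,
					   int want)
{
	int room = max_credits - credits;

	if (room <= 0)
		return 0;
	return (unsigned short)(room < want ? room : want);
}
```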
+1 -1
fs/smb/client/transport.c
··· 55 55 temp->pid = current->pid; 56 56 temp->command = cpu_to_le16(smb_buffer->Command); 57 57 cifs_dbg(FYI, "For smb_command %d\n", smb_buffer->Command); 58 - /* do_gettimeofday(&temp->when_sent);*/ /* easier to use jiffies */ 58 + /* easier to use jiffies */ 59 59 /* when mid allocated can be before when sent */ 60 60 temp->when_alloc = jiffies; 61 61 temp->server = server;
+15 -2
fs/smb/server/connection.c
··· 294 294 return true; 295 295 } 296 296 297 + #define SMB1_MIN_SUPPORTED_HEADER_SIZE (sizeof(struct smb_hdr)) 298 + #define SMB2_MIN_SUPPORTED_HEADER_SIZE (sizeof(struct smb2_hdr) + 4) 299 + 297 300 /** 298 301 * ksmbd_conn_handler_loop() - session thread to listen on new smb requests 299 302 * @p: connection instance ··· 353 350 if (pdu_size > MAX_STREAM_PROT_LEN) 354 351 break; 355 352 353 + if (pdu_size < SMB1_MIN_SUPPORTED_HEADER_SIZE) 354 + break; 355 + 356 356 /* 4 for rfc1002 length field */ 357 357 /* 1 for implied bcc[0] */ 358 358 size = pdu_size + 4 + 1; ··· 364 358 break; 365 359 366 360 memcpy(conn->request_buf, hdr_buf, sizeof(hdr_buf)); 367 - if (!ksmbd_smb_request(conn)) 368 - break; 369 361 370 362 /* 371 363 * We already read 4 bytes to find out PDU size, now ··· 379 375 pr_err("PDU error. Read: %d, Expected: %d\n", 380 376 size, pdu_size); 381 377 continue; 378 + } 379 + 380 + if (!ksmbd_smb_request(conn)) 381 + break; 382 + 383 + if (((struct smb2_hdr *)smb2_get_msg(conn->request_buf))->ProtocolId == 384 + SMB2_PROTO_NUMBER) { 385 + if (pdu_size < SMB2_MIN_SUPPORTED_HEADER_SIZE) 386 + break; 382 387 } 383 388 384 389 if (!default_conn_ops.process_fn) {
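The two new minimum-size macros above guard dispatch: a PDU shorter than the smallest valid header for its dialect is dropped before any parsing. A hedged sketch of the combined bounds check (the sizes here are placeholders, not the kernel's sizeof() values):

```c
#include <stdbool.h>

/* Illustrative bounds check mirroring the new PDU-size validation:
 * too-small and too-large PDUs are both rejected before dispatch.
 * The real code uses sizeof(struct smb_hdr) and
 * sizeof(struct smb2_hdr) + 4 as the dialect-specific lower bounds. */
#define MIN_PDU_SIZE	32u		/* placeholder minimum header */
#define MAX_STREAM_LEN	(1u << 20)	/* placeholder stream cap */

static bool pdu_size_ok(unsigned int pdu_size)
{
	return pdu_size >= MIN_PDU_SIZE && pdu_size <= MAX_STREAM_LEN;
}
```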
+24 -42
fs/smb/server/oplock.c
··· 1415 1415 */ 1416 1416 struct lease_ctx_info *parse_lease_state(void *open_req) 1417 1417 { 1418 - char *data_offset; 1419 1418 struct create_context *cc; 1420 - unsigned int next = 0; 1421 - char *name; 1422 - bool found = false; 1423 1419 struct smb2_create_req *req = (struct smb2_create_req *)open_req; 1424 - struct lease_ctx_info *lreq = kzalloc(sizeof(struct lease_ctx_info), 1425 - GFP_KERNEL); 1420 + struct lease_ctx_info *lreq; 1421 + 1422 + cc = smb2_find_context_vals(req, SMB2_CREATE_REQUEST_LEASE, 4); 1423 + if (IS_ERR_OR_NULL(cc)) 1424 + return NULL; 1425 + 1426 + lreq = kzalloc(sizeof(struct lease_ctx_info), GFP_KERNEL); 1426 1427 if (!lreq) 1427 1428 return NULL; 1428 1429 1429 - data_offset = (char *)req + le32_to_cpu(req->CreateContextsOffset); 1430 - cc = (struct create_context *)data_offset; 1431 - do { 1432 - cc = (struct create_context *)((char *)cc + next); 1433 - name = le16_to_cpu(cc->NameOffset) + (char *)cc; 1434 - if (le16_to_cpu(cc->NameLength) != 4 || 1435 - strncmp(name, SMB2_CREATE_REQUEST_LEASE, 4)) { 1436 - next = le32_to_cpu(cc->Next); 1437 - continue; 1438 - } 1439 - found = true; 1440 - break; 1441 - } while (next != 0); 1430 + if (sizeof(struct lease_context_v2) == le32_to_cpu(cc->DataLength)) { 1431 + struct create_lease_v2 *lc = (struct create_lease_v2 *)cc; 1442 1432 1443 - if (found) { 1444 - if (sizeof(struct lease_context_v2) == le32_to_cpu(cc->DataLength)) { 1445 - struct create_lease_v2 *lc = (struct create_lease_v2 *)cc; 1433 + memcpy(lreq->lease_key, lc->lcontext.LeaseKey, SMB2_LEASE_KEY_SIZE); 1434 + lreq->req_state = lc->lcontext.LeaseState; 1435 + lreq->flags = lc->lcontext.LeaseFlags; 1436 + lreq->duration = lc->lcontext.LeaseDuration; 1437 + memcpy(lreq->parent_lease_key, lc->lcontext.ParentLeaseKey, 1438 + SMB2_LEASE_KEY_SIZE); 1439 + lreq->version = 2; 1440 + } else { 1441 + struct create_lease *lc = (struct create_lease *)cc; 1446 1442 1447 - memcpy(lreq->lease_key, lc->lcontext.LeaseKey, 
SMB2_LEASE_KEY_SIZE); 1448 - lreq->req_state = lc->lcontext.LeaseState; 1449 - lreq->flags = lc->lcontext.LeaseFlags; 1450 - lreq->duration = lc->lcontext.LeaseDuration; 1451 - memcpy(lreq->parent_lease_key, lc->lcontext.ParentLeaseKey, 1452 - SMB2_LEASE_KEY_SIZE); 1453 - lreq->version = 2; 1454 - } else { 1455 - struct create_lease *lc = (struct create_lease *)cc; 1456 - 1457 - memcpy(lreq->lease_key, lc->lcontext.LeaseKey, SMB2_LEASE_KEY_SIZE); 1458 - lreq->req_state = lc->lcontext.LeaseState; 1459 - lreq->flags = lc->lcontext.LeaseFlags; 1460 - lreq->duration = lc->lcontext.LeaseDuration; 1461 - lreq->version = 1; 1462 - } 1463 - return lreq; 1443 + memcpy(lreq->lease_key, lc->lcontext.LeaseKey, SMB2_LEASE_KEY_SIZE); 1444 + lreq->req_state = lc->lcontext.LeaseState; 1445 + lreq->flags = lc->lcontext.LeaseFlags; 1446 + lreq->duration = lc->lcontext.LeaseDuration; 1447 + lreq->version = 1; 1464 1448 } 1465 - 1466 - kfree(lreq); 1467 - return NULL; 1449 + return lreq; 1468 1450 } 1469 1451 1470 1452 /**
+6 -7
fs/smb/server/smb2pdu.c
··· 963 963 964 964 static __le32 deassemble_neg_contexts(struct ksmbd_conn *conn, 965 965 struct smb2_negotiate_req *req, 966 - int len_of_smb) 966 + unsigned int len_of_smb) 967 967 { 968 968 /* +4 is to account for the RFC1001 len field */ 969 969 struct smb2_neg_context *pctx = (struct smb2_neg_context *)req; 970 970 int i = 0, len_of_ctxts; 971 - int offset = le32_to_cpu(req->NegotiateContextOffset); 972 - int neg_ctxt_cnt = le16_to_cpu(req->NegotiateContextCount); 971 + unsigned int offset = le32_to_cpu(req->NegotiateContextOffset); 972 + unsigned int neg_ctxt_cnt = le16_to_cpu(req->NegotiateContextCount); 973 973 __le32 status = STATUS_INVALID_PARAMETER; 974 974 975 975 ksmbd_debug(SMB, "decoding %d negotiate contexts\n", neg_ctxt_cnt); ··· 983 983 while (i++ < neg_ctxt_cnt) { 984 984 int clen, ctxt_len; 985 985 986 - if (len_of_ctxts < sizeof(struct smb2_neg_context)) 986 + if (len_of_ctxts < (int)sizeof(struct smb2_neg_context)) 987 987 break; 988 988 989 989 pctx = (struct smb2_neg_context *)((char *)pctx + offset); ··· 1038 1038 } 1039 1039 1040 1040 /* offsets must be 8 byte aligned */ 1041 - clen = (clen + 7) & ~0x7; 1042 - offset = clen + sizeof(struct smb2_neg_context); 1043 - len_of_ctxts -= clen + sizeof(struct smb2_neg_context); 1041 + offset = (ctxt_len + 7) & ~0x7; 1042 + len_of_ctxts -= offset; 1044 1043 } 1045 1044 return status; 1046 1045 }
+13 -1
fs/smb/server/smb_common.c
··· 158 158 */ 159 159 bool ksmbd_smb_request(struct ksmbd_conn *conn) 160 160 { 161 - return conn->request_buf[0] == 0; 161 + __le32 *proto = (__le32 *)smb2_get_msg(conn->request_buf); 162 + 163 + if (*proto == SMB2_COMPRESSION_TRANSFORM_ID) { 164 + pr_err_ratelimited("smb2 compression not support yet"); 165 + return false; 166 + } 167 + 168 + if (*proto != SMB1_PROTO_NUMBER && 169 + *proto != SMB2_PROTO_NUMBER && 170 + *proto != SMB2_TRANSFORM_PROTO_NUM) 171 + return false; 172 + 173 + return true; 162 174 } 163 175 164 176 static bool supported_protocol(int idx)
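The rewritten ksmbd_smb_request() above replaces a single-byte check with a whitelist of protocol identifiers. Those identifiers are the little-endian encodings of the wire magics 0xFF/0xFE/0xFD/0xFC followed by "SMB". A user-space sketch with the constants written out:

```c
#include <stdbool.h>
#include <stdint.h>

/* Little-endian protocol magics as they appear in the first four
 * bytes of an SMB PDU: 0xFF/0xFE/0xFD/0xFC followed by "SMB". */
#define SMB1_PROTO		0x424d53ffu
#define SMB2_PROTO		0x424d53feu
#define SMB2_TRANSFORM_PROTO	0x424d53fdu
#define SMB2_COMPRESSION_ID	0x424d53fcu

/* Sketch of the new request filter: compressed PDUs are rejected
 * explicitly (not yet supported), and anything outside the
 * whitelist is rejected silently. */
static bool smb_request_ok(uint32_t proto)
{
	if (proto == SMB2_COMPRESSION_ID)
		return false;	/* smb2 compression not supported yet */
	return proto == SMB1_PROTO || proto == SMB2_PROTO ||
	       proto == SMB2_TRANSFORM_PROTO;
}
```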
+2 -2
fs/smb/server/smbacl.c
··· 1290 1290 1291 1291 if (IS_ENABLED(CONFIG_FS_POSIX_ACL)) { 1292 1292 posix_acls = get_inode_acl(d_inode(path->dentry), ACL_TYPE_ACCESS); 1293 - if (posix_acls && !found) { 1293 + if (!IS_ERR_OR_NULL(posix_acls) && !found) { 1294 1294 unsigned int id = -1; 1295 1295 1296 1296 pa_entry = posix_acls->a_entries; ··· 1314 1314 } 1315 1315 } 1316 1316 } 1317 - if (posix_acls) 1317 + if (!IS_ERR_OR_NULL(posix_acls)) 1318 1318 posix_acl_release(posix_acls); 1319 1319 } 1320 1320
+2 -2
fs/smb/server/vfs.c
··· 1321 1321 return NULL; 1322 1322 1323 1323 posix_acls = get_inode_acl(inode, acl_type); 1324 - if (!posix_acls) 1324 + if (IS_ERR_OR_NULL(posix_acls)) 1325 1325 return NULL; 1326 1326 1327 1327 smb_acl = kzalloc(sizeof(struct xattr_smb_acl) + ··· 1830 1830 return -EOPNOTSUPP; 1831 1831 1832 1832 acls = get_inode_acl(parent_inode, ACL_TYPE_DEFAULT); 1833 - if (!acls) 1833 + if (IS_ERR_OR_NULL(acls)) 1834 1834 return -ENOENT; 1835 1835 pace = acls->a_entries; 1836 1836
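This hunk and the smbacl.c one above make the same fix: get_inode_acl() can return either NULL or an ERR_PTR-encoded errno, so a bare NULL test lets error pointers through to be dereferenced. A user-space sketch of the kernel's pointer-encoded-errno convention (errnos occupy the top 4095 addresses):

```c
#include <stdbool.h>
#include <stdint.h>

#define MAX_ERRNO 4095

/* Sketch of the kernel's IS_ERR_OR_NULL(): an ERR_PTR is a negative
 * errno cast to a pointer, so it lands in the last page of the
 * address space.  NULL and any such pointer both mean "no usable
 * ACL" to the callers patched here. */
static bool is_err_or_null(const void *p)
{
	return !p || (uintptr_t)p >= (uintptr_t)-MAX_ERRNO;
}
```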
+11 -2
fs/userfaultfd.c
··· 1332 1332 bool basic_ioctls; 1333 1333 unsigned long start, end, vma_end; 1334 1334 struct vma_iterator vmi; 1335 + pgoff_t pgoff; 1335 1336 1336 1337 user_uffdio_register = (struct uffdio_register __user *) arg; 1337 1338 ··· 1460 1459 1461 1460 vma_iter_set(&vmi, start); 1462 1461 prev = vma_prev(&vmi); 1462 + if (vma->vm_start < start) 1463 + prev = vma; 1463 1464 1464 1465 ret = 0; 1465 1466 for_each_vma_range(vmi, vma, end) { ··· 1485 1482 vma_end = min(end, vma->vm_end); 1486 1483 1487 1484 new_flags = (vma->vm_flags & ~__VM_UFFD_FLAGS) | vm_flags; 1485 + pgoff = vma->vm_pgoff + ((start - vma->vm_start) >> PAGE_SHIFT); 1488 1486 prev = vma_merge(&vmi, mm, prev, start, vma_end, new_flags, 1489 - vma->anon_vma, vma->vm_file, vma->vm_pgoff, 1487 + vma->anon_vma, vma->vm_file, pgoff, 1490 1488 vma_policy(vma), 1491 1489 ((struct vm_userfaultfd_ctx){ ctx }), 1492 1490 anon_vma_name(vma)); ··· 1567 1563 unsigned long start, end, vma_end; 1568 1564 const void __user *buf = (void __user *)arg; 1569 1565 struct vma_iterator vmi; 1566 + pgoff_t pgoff; 1570 1567 1571 1568 ret = -EFAULT; 1572 1569 if (copy_from_user(&uffdio_unregister, buf, sizeof(uffdio_unregister))) ··· 1630 1625 1631 1626 vma_iter_set(&vmi, start); 1632 1627 prev = vma_prev(&vmi); 1628 + if (vma->vm_start < start) 1629 + prev = vma; 1630 + 1633 1631 ret = 0; 1634 1632 for_each_vma_range(vmi, vma, end) { 1635 1633 cond_resched(); ··· 1670 1662 uffd_wp_range(vma, start, vma_end - start, false); 1671 1663 1672 1664 new_flags = vma->vm_flags & ~__VM_UFFD_FLAGS; 1665 + pgoff = vma->vm_pgoff + ((start - vma->vm_start) >> PAGE_SHIFT); 1673 1666 prev = vma_merge(&vmi, mm, prev, start, vma_end, new_flags, 1674 - vma->anon_vma, vma->vm_file, vma->vm_pgoff, 1667 + vma->anon_vma, vma->vm_file, pgoff, 1675 1668 vma_policy(vma), 1676 1669 NULL_VM_UFFD_CTX, anon_vma_name(vma)); 1677 1670 if (prev) {
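The pgoff fix in this hunk is pure arithmetic: when registration or unregistration starts partway into a VMA, the file offset handed to vma_merge() must be advanced by the number of whole pages skipped, rather than reusing the VMA's own vm_pgoff. A sketch of that computation (PAGE_SHIFT fixed at 12, i.e. 4 KiB pages, for illustration):

```c
#define SKETCH_PAGE_SHIFT 12	/* 4 KiB pages, for illustration */

/* Sketch of the vma_merge() pgoff fix: advance the VMA's base file
 * offset by the number of pages between vm_start and the point where
 * the merge range actually begins. */
static unsigned long merged_pgoff(unsigned long vm_start,
				  unsigned long vm_pgoff,
				  unsigned long start)
{
	return vm_pgoff + ((start - vm_start) >> SKETCH_PAGE_SHIFT);
}
```

When start == vm_start the result is unchanged, which is why the old code only misbehaved for ranges starting mid-VMA.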
+4 -1
fs/xfs/libxfs/xfs_ag.c
··· 984 984 if (err2 != -ENOSPC) 985 985 goto resv_err; 986 986 987 - __xfs_free_extent_later(*tpp, args.fsbno, delta, NULL, true); 987 + err2 = __xfs_free_extent_later(*tpp, args.fsbno, delta, NULL, 988 + true); 989 + if (err2) 990 + goto resv_err; 988 991 989 992 /* 990 993 * Roll the transaction before trying to re-init the per-ag
+65 -26
fs/xfs/libxfs/xfs_alloc.c
··· 628 628 return 0; 629 629 } 630 630 631 + /* 632 + * We do not verify the AGFL contents against AGF-based index counters here, 633 + * even though we may have access to the perag that contains shadow copies. We 634 + * don't know if the AGF based counters have been checked, and if they have they 635 + * still may be inconsistent because they haven't yet been reset on the first 636 + * allocation after the AGF has been read in. 637 + * 638 + * This means we can only check that all agfl entries contain valid or null 639 + * values because we can't reliably determine the active range to exclude 640 + * NULLAGBNO as a valid value. 641 + * 642 + * However, we can't even do that for v4 format filesystems because there are 643 + * old versions of mkfs out there that does not initialise the AGFL to known, 644 + * verifiable values. HEnce we can't tell the difference between a AGFL block 645 + * allocated by mkfs and a corrupted AGFL block here on v4 filesystems. 646 + * 647 + * As a result, we can only fully validate AGFL block numbers when we pull them 648 + * from the freelist in xfs_alloc_get_freelist(). 649 + */ 631 650 static xfs_failaddr_t 632 651 xfs_agfl_verify( 633 652 struct xfs_buf *bp) ··· 656 637 __be32 *agfl_bno = xfs_buf_to_agfl_bno(bp); 657 638 int i; 658 639 659 - /* 660 - * There is no verification of non-crc AGFLs because mkfs does not 661 - * initialise the AGFL to zero or NULL. Hence the only valid part of the 662 - * AGFL is what the AGF says is active. We can't get to the AGF, so we 663 - * can't verify just those entries are valid. 664 - */ 665 640 if (!xfs_has_crc(mp)) 666 641 return NULL; 667 642 ··· 2334 2321 } 2335 2322 2336 2323 /* 2337 - * Check the agfl fields of the agf for inconsistency or corruption. The purpose 2338 - * is to detect an agfl header padding mismatch between current and early v5 2339 - * kernels. 
This problem manifests as a 1-slot size difference between the 2340 - * on-disk flcount and the active [first, last] range of a wrapped agfl. This 2341 - * may also catch variants of agfl count corruption unrelated to padding. Either 2342 - * way, we'll reset the agfl and warn the user. 2324 + * Check the agfl fields of the agf for inconsistency or corruption. 2325 + * 2326 + * The original purpose was to detect an agfl header padding mismatch between 2327 + * current and early v5 kernels. This problem manifests as a 1-slot size 2328 + * difference between the on-disk flcount and the active [first, last] range of 2329 + * a wrapped agfl. 2330 + * 2331 + * However, we need to use these same checks to catch agfl count corruptions 2332 + * unrelated to padding. This could occur on any v4 or v5 filesystem, so either 2333 + * way, we need to reset the agfl and warn the user. 2343 2334 * 2344 2335 * Return true if a reset is required before the agfl can be used, false 2345 2336 * otherwise. ··· 2358 2341 uint32_t c = be32_to_cpu(agf->agf_flcount); 2359 2342 int agfl_size = xfs_agfl_size(mp); 2360 2343 int active; 2361 - 2362 - /* no agfl header on v4 supers */ 2363 - if (!xfs_has_crc(mp)) 2364 - return false; 2365 2344 2366 2345 /* 2367 2346 * The agf read verifier catches severe corruption of these fields. ··· 2431 2418 * the real allocation can proceed. Deferring the free disconnects freeing up 2432 2419 * the AGFL slot from freeing the block. 
2433 2420 */ 2434 - STATIC void 2421 + static int 2435 2422 xfs_defer_agfl_block( 2436 2423 struct xfs_trans *tp, 2437 2424 xfs_agnumber_t agno, ··· 2450 2437 xefi->xefi_blockcount = 1; 2451 2438 xefi->xefi_owner = oinfo->oi_owner; 2452 2439 2440 + if (XFS_IS_CORRUPT(mp, !xfs_verify_fsbno(mp, xefi->xefi_startblock))) 2441 + return -EFSCORRUPTED; 2442 + 2453 2443 trace_xfs_agfl_free_defer(mp, agno, 0, agbno, 1); 2454 2444 2455 2445 xfs_extent_free_get_group(mp, xefi); 2456 2446 xfs_defer_add(tp, XFS_DEFER_OPS_TYPE_AGFL_FREE, &xefi->xefi_list); 2447 + return 0; 2457 2448 } 2458 2449 2459 2450 /* 2460 2451 * Add the extent to the list of extents to be free at transaction end. 2461 2452 * The list is maintained sorted (by block number). 2462 2453 */ 2463 - void 2454 + int 2464 2455 __xfs_free_extent_later( 2465 2456 struct xfs_trans *tp, 2466 2457 xfs_fsblock_t bno, ··· 2491 2474 #endif 2492 2475 ASSERT(xfs_extfree_item_cache != NULL); 2493 2476 2477 + if (XFS_IS_CORRUPT(mp, !xfs_verify_fsbext(mp, bno, len))) 2478 + return -EFSCORRUPTED; 2479 + 2494 2480 xefi = kmem_cache_zalloc(xfs_extfree_item_cache, 2495 2481 GFP_KERNEL | __GFP_NOFAIL); 2496 2482 xefi->xefi_startblock = bno; ··· 2517 2497 2518 2498 xfs_extent_free_get_group(mp, xefi); 2519 2499 xfs_defer_add(tp, XFS_DEFER_OPS_TYPE_FREE, &xefi->xefi_list); 2500 + return 0; 2520 2501 } 2521 2502 2522 2503 #ifdef DEBUG ··· 2678 2657 goto out_agbp_relse; 2679 2658 2680 2659 /* defer agfl frees */ 2681 - xfs_defer_agfl_block(tp, args->agno, bno, &targs.oinfo); 2660 + error = xfs_defer_agfl_block(tp, args->agno, bno, &targs.oinfo); 2661 + if (error) 2662 + goto out_agbp_relse; 2682 2663 } 2683 2664 2684 2665 targs.tp = tp; ··· 2790 2767 */ 2791 2768 agfl_bno = xfs_buf_to_agfl_bno(agflbp); 2792 2769 bno = be32_to_cpu(agfl_bno[be32_to_cpu(agf->agf_flfirst)]); 2770 + if (XFS_IS_CORRUPT(tp->t_mountp, !xfs_verify_agbno(pag, bno))) 2771 + return -EFSCORRUPTED; 2772 + 2793 2773 be32_add_cpu(&agf->agf_flfirst, 1); 2794 2774 
xfs_trans_brelse(tp, agflbp); 2795 2775 if (be32_to_cpu(agf->agf_flfirst) == xfs_agfl_size(mp)) ··· 2915 2889 return 0; 2916 2890 } 2917 2891 2892 + /* 2893 + * Verify the AGF is consistent. 2894 + * 2895 + * We do not verify the AGFL indexes in the AGF are fully consistent here 2896 + * because of issues with variable on-disk structure sizes. Instead, we check 2897 + * the agfl indexes for consistency when we initialise the perag from the AGF 2898 + * information after a read completes. 2899 + * 2900 + * If the index is inconsistent, then we mark the perag as needing an AGFL 2901 + * reset. The first AGFL update performed then resets the AGFL indexes and 2902 + * refills the AGFL with known good free blocks, allowing the filesystem to 2903 + * continue operating normally at the cost of a few leaked free space blocks. 2904 + */ 2918 2905 static xfs_failaddr_t 2919 2906 xfs_agf_verify( 2920 2907 struct xfs_buf *bp) ··· 3001 2962 return __this_address; 3002 2963 3003 2964 return NULL; 3004 - 3005 2965 } 3006 2966 3007 2967 static void ··· 3225 3187 */ 3226 3188 static int 3227 3189 xfs_alloc_vextent_prepare_ag( 3228 - struct xfs_alloc_arg *args) 3190 + struct xfs_alloc_arg *args, 3191 + uint32_t flags) 3229 3192 { 3230 3193 bool need_pag = !args->pag; 3231 3194 int error; ··· 3235 3196 args->pag = xfs_perag_get(args->mp, args->agno); 3236 3197 3237 3198 args->agbp = NULL; 3238 - error = xfs_alloc_fix_freelist(args, 0); 3199 + error = xfs_alloc_fix_freelist(args, flags); 3239 3200 if (error) { 3240 3201 trace_xfs_alloc_vextent_nofix(args); 3241 3202 if (need_pag) ··· 3375 3336 return error; 3376 3337 } 3377 3338 3378 - error = xfs_alloc_vextent_prepare_ag(args); 3339 + error = xfs_alloc_vextent_prepare_ag(args, 0); 3379 3340 if (!error && args->agbp) 3380 3341 error = xfs_alloc_ag_vextent_size(args); 3381 3342 ··· 3419 3380 for_each_perag_wrap_range(mp, start_agno, restart_agno, 3420 3381 mp->m_sb.sb_agcount, agno, args->pag) { 3421 3382 args->agno = agno; 3422 - 
error = xfs_alloc_vextent_prepare_ag(args); 3383 + error = xfs_alloc_vextent_prepare_ag(args, flags); 3423 3384 if (error) 3424 3385 break; 3425 3386 if (!args->agbp) { ··· 3585 3546 return error; 3586 3547 } 3587 3548 3588 - error = xfs_alloc_vextent_prepare_ag(args); 3549 + error = xfs_alloc_vextent_prepare_ag(args, 0); 3589 3550 if (!error && args->agbp) 3590 3551 error = xfs_alloc_ag_vextent_exact(args); 3591 3552 ··· 3626 3587 if (needs_perag) 3627 3588 args->pag = xfs_perag_grab(mp, args->agno); 3628 3589 3629 - error = xfs_alloc_vextent_prepare_ag(args); 3590 + error = xfs_alloc_vextent_prepare_ag(args, 0); 3630 3591 if (!error && args->agbp) 3631 3592 error = xfs_alloc_ag_vextent_near(args); 3632 3593
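A theme across the XFS hunks in this series: __xfs_free_extent_later() and its wrappers change from void to int so they can validate the extent before queuing a deferred free, and every caller now checks the result. A simplified sketch of the pattern (the errno value matches the kernel's EFSCORRUPTED, but the names and bounds logic are illustrative):

```c
#include <stdbool.h>

#define EFSCORRUPTED 117	/* EUCLEAN, as used by XFS */

/* Sketch of the fail-first pattern from these hunks: validate the
 * extent against the filesystem's block count and return
 * -EFSCORRUPTED instead of queuing a deferred free of a garbage
 * range. */
static bool extent_valid(unsigned long start, unsigned long len,
			 unsigned long fs_blocks)
{
	return len > 0 && start < fs_blocks && len <= fs_blocks - start;
}

static int free_extent_later(unsigned long start, unsigned long len,
			     unsigned long fs_blocks)
{
	if (!extent_valid(start, len, fs_blocks))
		return -EFSCORRUPTED;
	/* ...queue the deferred free here... */
	return 0;
}
```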
+3 -3
fs/xfs/libxfs/xfs_alloc.h
··· 230 230 return bp->b_addr; 231 231 } 232 232 233 - void __xfs_free_extent_later(struct xfs_trans *tp, xfs_fsblock_t bno, 233 + int __xfs_free_extent_later(struct xfs_trans *tp, xfs_fsblock_t bno, 234 234 xfs_filblks_t len, const struct xfs_owner_info *oinfo, 235 235 bool skip_discard); 236 236 ··· 254 254 #define XFS_EFI_ATTR_FORK (1U << 1) /* freeing attr fork block */ 255 255 #define XFS_EFI_BMBT_BLOCK (1U << 2) /* freeing bmap btree block */ 256 256 257 - static inline void 257 + static inline int 258 258 xfs_free_extent_later( 259 259 struct xfs_trans *tp, 260 260 xfs_fsblock_t bno, 261 261 xfs_filblks_t len, 262 262 const struct xfs_owner_info *oinfo) 263 263 { 264 - __xfs_free_extent_later(tp, bno, len, oinfo, false); 264 + return __xfs_free_extent_later(tp, bno, len, oinfo, false); 265 265 } 266 266 267 267
+8 -2
fs/xfs/libxfs/xfs_bmap.c
··· 572 572 cblock = XFS_BUF_TO_BLOCK(cbp); 573 573 if ((error = xfs_btree_check_block(cur, cblock, 0, cbp))) 574 574 return error; 575 + 575 576 xfs_rmap_ino_bmbt_owner(&oinfo, ip->i_ino, whichfork); 576 - xfs_free_extent_later(cur->bc_tp, cbno, 1, &oinfo); 577 + error = xfs_free_extent_later(cur->bc_tp, cbno, 1, &oinfo); 578 + if (error) 579 + return error; 580 + 577 581 ip->i_nblocks--; 578 582 xfs_trans_mod_dquot_byino(tp, ip, XFS_TRANS_DQ_BCOUNT, -1L); 579 583 xfs_trans_binval(tp, cbp); ··· 5234 5230 if (xfs_is_reflink_inode(ip) && whichfork == XFS_DATA_FORK) { 5235 5231 xfs_refcount_decrease_extent(tp, del); 5236 5232 } else { 5237 - __xfs_free_extent_later(tp, del->br_startblock, 5233 + error = __xfs_free_extent_later(tp, del->br_startblock, 5238 5234 del->br_blockcount, NULL, 5239 5235 (bflags & XFS_BMAPI_NODISCARD) || 5240 5236 del->br_state == XFS_EXT_UNWRITTEN); 5237 + if (error) 5238 + goto done; 5241 5239 } 5242 5240 } 5243 5241
+5 -2
fs/xfs/libxfs/xfs_bmap_btree.c
··· 268 268 struct xfs_trans *tp = cur->bc_tp; 269 269 xfs_fsblock_t fsbno = XFS_DADDR_TO_FSB(mp, xfs_buf_daddr(bp)); 270 270 struct xfs_owner_info oinfo; 271 + int error; 271 272 272 273 xfs_rmap_ino_bmbt_owner(&oinfo, ip->i_ino, cur->bc_ino.whichfork); 273 - xfs_free_extent_later(cur->bc_tp, fsbno, 1, &oinfo); 274 - ip->i_nblocks--; 274 + error = xfs_free_extent_later(cur->bc_tp, fsbno, 1, &oinfo); 275 + if (error) 276 + return error; 275 277 278 + ip->i_nblocks--; 276 279 xfs_trans_log_inode(tp, ip, XFS_ILOG_CORE); 277 280 xfs_trans_mod_dquot_byino(tp, ip, XFS_TRANS_DQ_BCOUNT, -1L); 278 281 return 0;
+16 -8
fs/xfs/libxfs/xfs_ialloc.c
··· 1834 1834 * might be sparse and only free the regions that are allocated as part of the 1835 1835 * chunk. 1836 1836 */ 1837 - STATIC void 1837 + static int 1838 1838 xfs_difree_inode_chunk( 1839 1839 struct xfs_trans *tp, 1840 1840 xfs_agnumber_t agno, ··· 1851 1851 1852 1852 if (!xfs_inobt_issparse(rec->ir_holemask)) { 1853 1853 /* not sparse, calculate extent info directly */ 1854 - xfs_free_extent_later(tp, XFS_AGB_TO_FSB(mp, agno, sagbno), 1855 - M_IGEO(mp)->ialloc_blks, 1856 - &XFS_RMAP_OINFO_INODES); 1857 - return; 1854 + return xfs_free_extent_later(tp, 1855 + XFS_AGB_TO_FSB(mp, agno, sagbno), 1856 + M_IGEO(mp)->ialloc_blks, 1857 + &XFS_RMAP_OINFO_INODES); 1858 1858 } 1859 1859 1860 1860 /* holemask is only 16-bits (fits in an unsigned long) */ ··· 1871 1871 XFS_INOBT_HOLEMASK_BITS); 1872 1872 nextbit = startidx + 1; 1873 1873 while (startidx < XFS_INOBT_HOLEMASK_BITS) { 1874 + int error; 1875 + 1874 1876 nextbit = find_next_zero_bit(holemask, XFS_INOBT_HOLEMASK_BITS, 1875 1877 nextbit); 1876 1878 /* ··· 1898 1896 1899 1897 ASSERT(agbno % mp->m_sb.sb_spino_align == 0); 1900 1898 ASSERT(contigblk % mp->m_sb.sb_spino_align == 0); 1901 - xfs_free_extent_later(tp, XFS_AGB_TO_FSB(mp, agno, agbno), 1902 - contigblk, &XFS_RMAP_OINFO_INODES); 1899 + error = xfs_free_extent_later(tp, 1900 + XFS_AGB_TO_FSB(mp, agno, agbno), 1901 + contigblk, &XFS_RMAP_OINFO_INODES); 1902 + if (error) 1903 + return error; 1903 1904 1904 1905 /* reset range to current bit and carry on... */ 1905 1906 startidx = endidx = nextbit; ··· 1910 1905 next: 1911 1906 nextbit++; 1912 1907 } 1908 + return 0; 1913 1909 } 1914 1910 1915 1911 STATIC int ··· 2009 2003 goto error0; 2010 2004 } 2011 2005 2012 - xfs_difree_inode_chunk(tp, pag->pag_agno, &rec); 2006 + error = xfs_difree_inode_chunk(tp, pag->pag_agno, &rec); 2007 + if (error) 2008 + goto error0; 2013 2009 } else { 2014 2010 xic->deleted = false; 2015 2011
+8 -1
fs/xfs/libxfs/xfs_log_format.h
··· 324 324 #define XFS_ILOG_DOWNER 0x200 /* change the data fork owner on replay */ 325 325 #define XFS_ILOG_AOWNER 0x400 /* change the attr fork owner on replay */ 326 326 327 - 328 327 /* 329 328 * The timestamps are dirty, but not necessarily anything else in the inode 330 329 * core. Unlike the other fields above this one must never make it to disk ··· 331 332 * ili_fields in the inode_log_item. 332 333 */ 333 334 #define XFS_ILOG_TIMESTAMP 0x4000 335 + 336 + /* 337 + * The version field has been changed, but not necessarily anything else of 338 + * interest. This must never make it to disk - it is used purely to ensure that 339 + * the inode item ->precommit operation can update the fsync flag triggers 340 + * in the inode item correctly. 341 + */ 342 + #define XFS_ILOG_IVERSION 0x8000 334 343 335 344 #define XFS_ILOG_NONCORE (XFS_ILOG_DDATA | XFS_ILOG_DEXT | \ 336 345 XFS_ILOG_DBROOT | XFS_ILOG_DEV | \
+10 -3
fs/xfs/libxfs/xfs_refcount.c
··· 1151 1151 fsbno = XFS_AGB_TO_FSB(cur->bc_mp, 1152 1152 cur->bc_ag.pag->pag_agno, 1153 1153 tmp.rc_startblock); 1154 - xfs_free_extent_later(cur->bc_tp, fsbno, 1154 + error = xfs_free_extent_later(cur->bc_tp, fsbno, 1155 1155 tmp.rc_blockcount, NULL); 1156 + if (error) 1157 + goto out_error; 1156 1158 } 1157 1159 1158 1160 (*agbno) += tmp.rc_blockcount; ··· 1212 1210 fsbno = XFS_AGB_TO_FSB(cur->bc_mp, 1213 1211 cur->bc_ag.pag->pag_agno, 1214 1212 ext.rc_startblock); 1215 - xfs_free_extent_later(cur->bc_tp, fsbno, 1213 + error = xfs_free_extent_later(cur->bc_tp, fsbno, 1216 1214 ext.rc_blockcount, NULL); 1215 + if (error) 1216 + goto out_error; 1217 1217 } 1218 1218 1219 1219 skip: ··· 1980 1976 rr->rr_rrec.rc_blockcount); 1981 1977 1982 1978 /* Free the block. */ 1983 - xfs_free_extent_later(tp, fsb, rr->rr_rrec.rc_blockcount, NULL); 1979 + error = xfs_free_extent_later(tp, fsb, 1980 + rr->rr_rrec.rc_blockcount, NULL); 1981 + if (error) 1982 + goto out_trans; 1984 1983 1985 1984 error = xfs_trans_commit(tp); 1986 1985 if (error)
+8 -105
fs/xfs/libxfs/xfs_trans_inode.c
··· 40 40 iip->ili_lock_flags = lock_flags; 41 41 ASSERT(!xfs_iflags_test(ip, XFS_ISTALE)); 42 42 43 - /* 44 - * Get a log_item_desc to point at the new item. 45 - */ 43 + /* Reset the per-tx dirty context and add the item to the tx. */ 44 + iip->ili_dirty_flags = 0; 46 45 xfs_trans_add_item(tp, &iip->ili_item); 47 46 } 48 47 ··· 75 76 /* 76 77 * This is called to mark the fields indicated in fieldmask as needing to be 77 78 * logged when the transaction is committed. The inode must already be 78 - * associated with the given transaction. 79 - * 80 - * The values for fieldmask are defined in xfs_inode_item.h. We always log all 81 - * of the core inode if any of it has changed, and we always log all of the 82 - * inline data/extents/b-tree root if any of them has changed. 83 - * 84 - * Grab and pin the cluster buffer associated with this inode to avoid RMW 85 - * cycles at inode writeback time. Avoid the need to add error handling to every 86 - * xfs_trans_log_inode() call by shutting down on read error. This will cause 87 - * transactions to fail and everything to error out, just like if we return a 88 - * read error in a dirty transaction and cancel it. 79 + * associated with the given transaction. All we do here is record where the 80 + * inode was dirtied and mark the transaction and inode log item dirty; 81 + * everything else is done in the ->precommit log item operation after the 82 + * changes in the transaction have been completed. 
89 83 */ 90 84 void 91 85 xfs_trans_log_inode( ··· 88 96 { 89 97 struct xfs_inode_log_item *iip = ip->i_itemp; 90 98 struct inode *inode = VFS_I(ip); 91 - uint iversion_flags = 0; 92 99 93 100 ASSERT(iip); 94 101 ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL)); 95 102 ASSERT(!xfs_iflags_test(ip, XFS_ISTALE)); 96 103 97 104 tp->t_flags |= XFS_TRANS_DIRTY; 98 - 99 - /* 100 - * Don't bother with i_lock for the I_DIRTY_TIME check here, as races 101 - * don't matter - we either will need an extra transaction in 24 hours 102 - * to log the timestamps, or will clear already cleared fields in the 103 - * worst case. 104 - */ 105 - if (inode->i_state & I_DIRTY_TIME) { 106 - spin_lock(&inode->i_lock); 107 - inode->i_state &= ~I_DIRTY_TIME; 108 - spin_unlock(&inode->i_lock); 109 - } 110 105 111 106 /* 112 107 * First time we log the inode in a transaction, bump the inode change ··· 107 128 if (!test_and_set_bit(XFS_LI_DIRTY, &iip->ili_item.li_flags)) { 108 129 if (IS_I_VERSION(inode) && 109 130 inode_maybe_inc_iversion(inode, flags & XFS_ILOG_CORE)) 110 - iversion_flags = XFS_ILOG_CORE; 131 + flags |= XFS_ILOG_IVERSION; 111 132 } 112 133 113 - /* 114 - * If we're updating the inode core or the timestamps and it's possible 115 - * to upgrade this inode to bigtime format, do so now. 116 - */ 117 - if ((flags & (XFS_ILOG_CORE | XFS_ILOG_TIMESTAMP)) && 118 - xfs_has_bigtime(ip->i_mount) && 119 - !xfs_inode_has_bigtime(ip)) { 120 - ip->i_diflags2 |= XFS_DIFLAG2_BIGTIME; 121 - flags |= XFS_ILOG_CORE; 122 - } 123 - 124 - /* 125 - * Inode verifiers do not check that the extent size hint is an integer 126 - * multiple of the rt extent size on a directory with both rtinherit 127 - * and extszinherit flags set. If we're logging a directory that is 128 - * misconfigured in this way, clear the hint. 
129 - */ 130 - if ((ip->i_diflags & XFS_DIFLAG_RTINHERIT) && 131 - (ip->i_diflags & XFS_DIFLAG_EXTSZINHERIT) && 132 - (ip->i_extsize % ip->i_mount->m_sb.sb_rextsize) > 0) { 133 - ip->i_diflags &= ~(XFS_DIFLAG_EXTSIZE | 134 - XFS_DIFLAG_EXTSZINHERIT); 135 - ip->i_extsize = 0; 136 - flags |= XFS_ILOG_CORE; 137 - } 138 - 139 - /* 140 - * Record the specific change for fdatasync optimisation. This allows 141 - * fdatasync to skip log forces for inodes that are only timestamp 142 - * dirty. 143 - */ 144 - spin_lock(&iip->ili_lock); 145 - iip->ili_fsync_fields |= flags; 146 - 147 - if (!iip->ili_item.li_buf) { 148 - struct xfs_buf *bp; 149 - int error; 150 - 151 - /* 152 - * We hold the ILOCK here, so this inode is not going to be 153 - * flushed while we are here. Further, because there is no 154 - * buffer attached to the item, we know that there is no IO in 155 - * progress, so nothing will clear the ili_fields while we read 156 - * in the buffer. Hence we can safely drop the spin lock and 157 - * read the buffer knowing that the state will not change from 158 - * here. 159 - */ 160 - spin_unlock(&iip->ili_lock); 161 - error = xfs_imap_to_bp(ip->i_mount, tp, &ip->i_imap, &bp); 162 - if (error) { 163 - xfs_force_shutdown(ip->i_mount, SHUTDOWN_META_IO_ERROR); 164 - return; 165 - } 166 - 167 - /* 168 - * We need an explicit buffer reference for the log item but 169 - * don't want the buffer to remain attached to the transaction. 170 - * Hold the buffer but release the transaction reference once 171 - * we've attached the inode log item to the buffer log item 172 - * list. 173 - */ 174 - xfs_buf_hold(bp); 175 - spin_lock(&iip->ili_lock); 176 - iip->ili_item.li_buf = bp; 177 - bp->b_flags |= _XBF_INODES; 178 - list_add_tail(&iip->ili_item.li_bio_list, &bp->b_li_list); 179 - xfs_trans_brelse(tp, bp); 180 - } 181 - 182 - /* 183 - * Always OR in the bits from the ili_last_fields field. 
This is to 184 - * coordinate with the xfs_iflush() and xfs_buf_inode_iodone() routines 185 - * in the eventual clearing of the ili_fields bits. See the big comment 186 - * in xfs_iflush() for an explanation of this coordination mechanism. 187 - */ 188 - iip->ili_fields |= (flags | iip->ili_last_fields | iversion_flags); 189 - spin_unlock(&iip->ili_lock); 134 + iip->ili_dirty_flags |= flags; 190 135 } 191 136 192 137 int
+13 -12
fs/xfs/scrub/bmap.c
··· 769 769 * mapping or false if there are no more mappings. Caller must ensure that 770 770 * @info.icur is zeroed before the first call. 771 771 */ 772 - static int 772 + static bool 773 773 xchk_bmap_iext_iter( 774 774 struct xchk_bmap_info *info, 775 775 struct xfs_bmbt_irec *irec) 776 776 { 777 777 struct xfs_bmbt_irec got; 778 778 struct xfs_ifork *ifp; 779 - xfs_filblks_t prev_len; 779 + unsigned int nr = 0; 780 780 781 781 ifp = xfs_ifork_ptr(info->sc->ip, info->whichfork); 782 782 ··· 790 790 irec->br_startoff); 791 791 return false; 792 792 } 793 + nr++; 793 794 794 795 /* 795 796 * Iterate subsequent iextent records and merge them with the one 796 797 * that we just read, if possible. 797 798 */ 798 - prev_len = irec->br_blockcount; 799 799 while (xfs_iext_peek_next_extent(ifp, &info->icur, &got)) { 800 800 if (!xchk_are_bmaps_contiguous(irec, &got)) 801 801 break; ··· 805 805 got.br_startoff); 806 806 return false; 807 807 } 808 - 809 - /* 810 - * Notify the user of mergeable records in the data or attr 811 - * forks. CoW forks only exist in memory so we ignore them. 812 - */ 813 - if (info->whichfork != XFS_COW_FORK && 814 - prev_len + got.br_blockcount > BMBT_BLOCKCOUNT_MASK) 815 - xchk_ino_set_preen(info->sc, info->sc->ip->i_ino); 808 + nr++; 816 809 817 810 irec->br_blockcount += got.br_blockcount; 818 - prev_len = got.br_blockcount; 819 811 xfs_iext_next(ifp, &info->icur); 820 812 } 813 + 814 + /* 815 + * If the merged mapping could be expressed with fewer bmbt records 816 + * than we actually found, notify the user that this fork could be 817 + * optimized. CoW forks only exist in memory so we ignore them. 818 + */ 819 + if (nr > 1 && info->whichfork != XFS_COW_FORK && 820 + howmany_64(irec->br_blockcount, XFS_MAX_BMBT_EXTLEN) < nr) 821 + xchk_ino_set_preen(info->sc, info->sc->ip->i_ino); 821 822 822 823 return true; 823 824 }
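The replaced preen heuristic above asks whether the merged mapping could be stored in fewer bmbt records than were actually found. A small sketch of that check, assuming XFS_MAX_BMBT_EXTLEN is 2^21 - 1 blocks (the 21-bit blockcount field); names here are illustrative, not the kernel's:

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in for XFS_MAX_BMBT_EXTLEN: assumed 21-bit blockcount limit. */
#define MAX_EXTLEN ((1ULL << 21) - 1)

/* Sketch of the new check: nr contiguous mappings that merged into one
 * extent of blockcount blocks are worth preening if that extent could
 * be expressed in fewer on-disk records. */
static int fork_is_mergeable(uint64_t blockcount, unsigned int nr)
{
	/* howmany_64(blockcount, MAX_EXTLEN): minimum records needed */
	uint64_t min_recs = (blockcount + MAX_EXTLEN - 1) / MAX_EXTLEN;

	return nr > 1 && min_recs < nr;
}
```

Two tiny adjacent mappings are flagged (one record would do), while two mappings that each already span a full record's worth of blocks are not.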
+4 -4
fs/xfs/scrub/scrub.h
··· 105 105 }; 106 106 107 107 /* XCHK state flags grow up from zero, XREP state flags grown down from 2^31 */ 108 - #define XCHK_TRY_HARDER (1 << 0) /* can't get resources, try again */ 109 - #define XCHK_FSGATES_DRAIN (1 << 2) /* defer ops draining enabled */ 110 - #define XCHK_NEED_DRAIN (1 << 3) /* scrub needs to drain defer ops */ 111 - #define XREP_ALREADY_FIXED (1 << 31) /* checking our repair work */ 108 + #define XCHK_TRY_HARDER (1U << 0) /* can't get resources, try again */ 109 + #define XCHK_FSGATES_DRAIN (1U << 2) /* defer ops draining enabled */ 110 + #define XCHK_NEED_DRAIN (1U << 3) /* scrub needs to drain defer ops */ 111 + #define XREP_ALREADY_FIXED (1U << 31) /* checking our repair work */ 112 112 113 113 /* 114 114 * The XCHK_FSGATES* flags reflect functionality in the main filesystem that
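The one-character change above (`1 << 31` to `1U << 31`) matters because shifting a bit into the sign position of a signed int is undefined behaviour in C. A minimal illustration of the well-defined unsigned form:

```c
#include <assert.h>
#include <stdint.h>

/* (1 << 31) shifts into the sign bit of a signed int, which is
 * undefined behaviour. (1U << n) keeps the arithmetic unsigned and is
 * well defined for n in 0..31 on platforms with 32-bit int. */
static uint32_t flag_bit(unsigned int n)
{
	return 1U << n;
}
```

This is why XREP_ALREADY_FIXED in particular needed the `U` suffix: it is the bit-31 flag.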
+65 -23
fs/xfs/xfs_buf_item.c
··· 452 452 * This is called to pin the buffer associated with the buf log item in memory 453 453 * so it cannot be written out. 454 454 * 455 - * We also always take a reference to the buffer log item here so that the bli 456 - * is held while the item is pinned in memory. This means that we can 457 - * unconditionally drop the reference count a transaction holds when the 458 - * transaction is completed. 455 + * We take a reference to the buffer log item here so that the BLI life cycle 456 + * extends at least until the buffer is unpinned via xfs_buf_item_unpin() and 457 + * inserted into the AIL. 458 + * 459 + * We also need to take a reference to the buffer itself as the BLI unpin 460 + * processing requires accessing the buffer after the BLI has dropped the final 461 + * BLI reference. See xfs_buf_item_unpin() for an explanation. 462 + * If unpins race to drop the final BLI reference and only the 463 + * BLI owns a reference to the buffer, then the loser of the race can have the 464 + * buffer freed from under it (e.g. on shutdown). Taking a buffer reference per 465 + * pin count ensures the life cycle of the buffer extends for as 466 + * long as we hold the buffer pin reference in xfs_buf_item_unpin(). 459 467 */ 460 468 STATIC void 461 469 xfs_buf_item_pin( ··· 478 470 479 471 trace_xfs_buf_item_pin(bip); 480 472 473 + xfs_buf_hold(bip->bli_buf); 481 474 atomic_inc(&bip->bli_refcount); 482 475 atomic_inc(&bip->bli_buf->b_pin_count); 483 476 } 484 477 485 478 /* 486 - * This is called to unpin the buffer associated with the buf log item which 487 - * was previously pinned with a call to xfs_buf_item_pin(). 479 + * This is called to unpin the buffer associated with the buf log item which was 480 + * previously pinned with a call to xfs_buf_item_pin(). We enter this function 481 + * with a buffer pin count, a buffer reference and a BLI reference. 
482 + * 483 + * We must drop the BLI reference before we unpin the buffer because the AIL 484 + * doesn't acquire a BLI reference whenever it accesses it. Therefore if the 485 + * refcount drops to zero, the bli could still be AIL resident and the buffer 486 + * submitted for I/O at any point before we return. This can result in IO 487 + * completion freeing the buffer while we are still trying to access it here. 488 + * This race condition can also occur in shutdown situations where we abort and 489 + * unpin buffers from contexts other than journal IO completion. 490 + * 491 + * Hence we have to hold a buffer reference per pin count to ensure that the 492 + * buffer cannot be freed until we have finished processing the unpin operation. 493 + * The reference is taken in xfs_buf_item_pin(), and we must hold it until we 494 + * are done processing the buffer state. In the case of an abort (remove = 495 + * true) then we re-use the current pin reference as the IO reference we hand 496 + * off to IO failure handling. 
529 - */ 530 496 freed = atomic_dec_and_test(&bip->bli_refcount); 531 - if (freed && !stale && remove) 532 - xfs_buf_hold(bp); 533 497 if (atomic_dec_and_test(&bp->b_pin_count)) 534 498 wake_up_all(&bp->b_waiters); 535 499 536 - /* nothing to do but drop the pin count if the bli is active */ 537 - if (!freed) 500 + /* 501 + * Nothing to do but drop the buffer pin reference if the BLI is 502 + * still active. 503 + */ 504 + if (!freed) { 505 + xfs_buf_rele(bp); 538 506 return; 507 + } 539 508 540 509 if (stale) { 541 510 ASSERT(bip->bli_flags & XFS_BLI_STALE); ··· 540 521 ASSERT(!bp->b_transp); 541 522 542 523 trace_xfs_buf_item_unpin_stale(bip); 524 + 525 + /* 526 + * The buffer has been locked and referenced since it was marked 527 + * stale so we own both lock and reference exclusively here. We 528 + * do not need the pin reference any more, so drop it now so 529 + * that we only have one reference to drop once item completion 530 + * processing is complete. 531 + */ 532 + xfs_buf_rele(bp); 543 533 544 534 /* 545 535 * If we get called here because of an IO error, we may or may ··· 566 538 ASSERT(bp->b_log_item == NULL); 567 539 } 568 540 xfs_buf_relse(bp); 569 - } else if (remove) { 541 + return; 542 + } 543 + 544 + if (remove) { 570 545 /* 571 - * The buffer must be locked and held by the caller to simulate 572 - * an async I/O failure. We acquired the hold for this case 573 - * before the buffer was unpinned. 546 + * We need to simulate an async IO failure here to ensure that 547 + * the correct error completion is run on this buffer. This 548 + * requires a reference to the buffer and for the buffer to be 549 + * locked. We can safely pass ownership of the pin reference to 550 + * the IO to ensure that nothing can free the buffer while we 551 + * wait for the lock and then run the IO failure completion. 
574 552 */ 575 553 xfs_buf_lock(bp); 576 554 bp->b_flags |= XBF_ASYNC; 577 555 xfs_buf_ioend_fail(bp); 556 + return; 578 557 } 558 + 559 + /* 560 + * BLI has no more active references - it will be moved to the AIL to 561 + * manage the remaining BLI/buffer life cycle. There is nothing left for 562 + * us to do here so drop the pin reference to the buffer. 563 + */ 564 + xfs_buf_rele(bp); 579 565 } 580 566 581 567 STATIC uint
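The reference accounting the hunk above moves to can be sketched in plain single-threaded C. This is an illustrative stand-in, not the kernel structures: every pin takes both a log-item reference and a buffer reference, so the buffer outlives even the unpin that drops the final log-item reference:

```c
#include <assert.h>

/* Illustrative stand-ins for xfs_buf / xfs_buf_log_item accounting. */
struct buf {
	int refcount;	/* buffer references */
	int pin_count;	/* pinned-in-memory count */
};

struct bli {
	int refcount;	/* buf log item references */
	struct buf *bp;
};

static void item_pin(struct bli *bip)
{
	bip->bp->refcount++;	/* per-pin buffer reference */
	bip->refcount++;	/* per-pin BLI reference */
	bip->bp->pin_count++;
}

/* Returns 1 when this unpin dropped the final BLI reference; the caller
 * still holds this pin's buffer reference while handling that case. */
static int item_unpin(struct bli *bip)
{
	int freed = (--bip->refcount == 0);

	bip->bp->pin_count--;
	return freed;
}

/* Done with unpin processing: drop the per-pin buffer reference. */
static void item_unpin_done(struct bli *bip)
{
	bip->bp->refcount--;
}
```

The invariant is that the buffer's refcount never hits zero while an unpin is still between item_unpin() and item_unpin_done(), which is exactly the window the patch is protecting.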
-1
fs/xfs/xfs_filestream.c
··· 78 78 *longest = 0; 79 79 err = xfs_bmap_longest_free_extent(pag, NULL, longest); 80 80 if (err) { 81 - xfs_perag_rele(pag); 82 81 if (err != -EAGAIN) 83 82 break; 84 83 /* Couldn't lock the AGF, skip this AG. */
+37 -9
fs/xfs/xfs_icache.c
··· 454 454 return ret; 455 455 } 456 456 457 + /* Wait for all queued work and collect errors */ 458 + static int 459 + xfs_inodegc_wait_all( 460 + struct xfs_mount *mp) 461 + { 462 + int cpu; 463 + int error = 0; 464 + 465 + flush_workqueue(mp->m_inodegc_wq); 466 + for_each_online_cpu(cpu) { 467 + struct xfs_inodegc *gc; 468 + 469 + gc = per_cpu_ptr(mp->m_inodegc, cpu); 470 + if (gc->error && !error) 471 + error = gc->error; 472 + gc->error = 0; 473 + } 474 + 475 + return error; 476 + } 477 + 457 478 /* 458 479 * Check the validity of the inode we just found in the cache 459 480 */ ··· 1512 1491 if (error) 1513 1492 return error; 1514 1493 1515 - xfs_inodegc_flush(mp); 1516 - return 0; 1494 + return xfs_inodegc_flush(mp); 1517 1495 } 1518 1496 1519 1497 /* 1520 1498 * Reclaim all the free space that we can by scheduling the background blockgc 1521 1499 * and inodegc workers immediately and waiting for them all to clear. 1522 1500 */ 1523 - void 1501 + int 1524 1502 xfs_blockgc_flush_all( 1525 1503 struct xfs_mount *mp) 1526 1504 { ··· 1540 1520 for_each_perag_tag(mp, agno, pag, XFS_ICI_BLOCKGC_TAG) 1541 1521 flush_delayed_work(&pag->pag_blockgc_work); 1542 1522 1543 - xfs_inodegc_flush(mp); 1523 + return xfs_inodegc_flush(mp); 1544 1524 } 1545 1525 1546 1526 /* ··· 1862 1842 * This is the last chance to make changes to an otherwise unreferenced file 1863 1843 * before incore reclamation happens. 
1864 1844 */ 1865 - static void 1845 + static int 1866 1846 xfs_inodegc_inactivate( 1867 1847 struct xfs_inode *ip) 1868 1848 { 1849 + int error; 1850 + 1869 1851 trace_xfs_inode_inactivating(ip); 1870 - xfs_inactive(ip); 1852 + error = xfs_inactive(ip); 1871 1853 xfs_inodegc_set_reclaimable(ip); 1854 + return error; 1855 + 1872 1856 } 1873 1857 1874 1858 void ··· 1904 1880 1905 1881 WRITE_ONCE(gc->shrinker_hits, 0); 1906 1882 llist_for_each_entry_safe(ip, n, node, i_gclist) { 1883 + int error; 1884 + 1907 1885 xfs_iflags_set(ip, XFS_INACTIVATING); 1908 - xfs_inodegc_inactivate(ip); 1886 + error = xfs_inodegc_inactivate(ip); 1887 + if (error && !gc->error) 1888 + gc->error = error; 1909 1889 } 1910 1890 1911 1891 memalloc_nofs_restore(nofs_flag); ··· 1933 1905 * Force all currently queued inode inactivation work to run immediately and 1934 1906 * wait for the work to finish. 1935 1907 */ 1936 - void 1908 + int 1937 1909 xfs_inodegc_flush( 1938 1910 struct xfs_mount *mp) 1939 1911 { 1940 1912 xfs_inodegc_push(mp); 1941 1913 trace_xfs_inodegc_flush(mp, __return_address); 1942 - flush_workqueue(mp->m_inodegc_wq); 1914 + return xfs_inodegc_wait_all(mp); 1943 1915 } 1944 1916 1945 1917 /*
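The error-collection pattern xfs_inodegc_wait_all() introduces (workers record their first failure in per-cpu state; the waiter harvests the first non-zero error after flushing) can be sketched with an array standing in for the per-cpu xfs_inodegc structs. Names here are illustrative:

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in for the per-cpu struct xfs_inodegc. */
struct gc_state {
	int error;	/* sticky first error from this worker */
};

/* Record an error the way the worker does: keep only the first one. */
static void gc_record_error(struct gc_state *gc, int error)
{
	if (error && !gc->error)
		gc->error = error;
}

/* Collect and reset errors the way the wait-all path does. */
static int gc_wait_all(struct gc_state *states, size_t n)
{
	int error = 0;

	for (size_t i = 0; i < n; i++) {
		if (states[i].error && !error)
			error = states[i].error;
		states[i].error = 0;	/* reset for the next flush cycle */
	}
	return error;
}
```

Keeping only the first error per worker and per flush mirrors the `if (error && !gc->error)` checks in the hunk above, so later failures never mask the original cause.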
+2 -2
fs/xfs/xfs_icache.h
··· 62 62 unsigned int iwalk_flags); 63 63 int xfs_blockgc_free_quota(struct xfs_inode *ip, unsigned int iwalk_flags); 64 64 int xfs_blockgc_free_space(struct xfs_mount *mp, struct xfs_icwalk *icm); 65 - void xfs_blockgc_flush_all(struct xfs_mount *mp); 65 + int xfs_blockgc_flush_all(struct xfs_mount *mp); 66 66 67 67 void xfs_inode_set_eofblocks_tag(struct xfs_inode *ip); 68 68 void xfs_inode_clear_eofblocks_tag(struct xfs_inode *ip); ··· 80 80 81 81 void xfs_inodegc_worker(struct work_struct *work); 82 82 void xfs_inodegc_push(struct xfs_mount *mp); 83 - void xfs_inodegc_flush(struct xfs_mount *mp); 83 + int xfs_inodegc_flush(struct xfs_mount *mp); 84 84 void xfs_inodegc_stop(struct xfs_mount *mp); 85 85 void xfs_inodegc_start(struct xfs_mount *mp); 86 86 void xfs_inodegc_cpu_dead(struct xfs_mount *mp, unsigned int cpu);
+6 -14
fs/xfs/xfs_inode.c
··· 1620 1620 */ 1621 1621 xfs_trans_mod_dquot_byino(tp, ip, XFS_TRANS_DQ_ICOUNT, -1); 1622 1622 1623 - /* 1624 - * Just ignore errors at this point. There is nothing we can do except 1625 - * to try to keep going. Make sure it's not a silent error. 1626 - */ 1627 - error = xfs_trans_commit(tp); 1628 - if (error) 1629 - xfs_notice(mp, "%s: xfs_trans_commit returned error %d", 1630 - __func__, error); 1631 - 1632 - return 0; 1623 + return xfs_trans_commit(tp); 1633 1624 } 1634 1625 1635 1626 /* ··· 1684 1693 * now be truncated. Also, we clear all of the read-ahead state 1685 1694 * kept for the inode here since the file is now closed. 1686 1695 */ 1687 - void 1696 + int 1688 1697 xfs_inactive( 1689 1698 xfs_inode_t *ip) 1690 1699 { 1691 1700 struct xfs_mount *mp; 1692 - int error; 1701 + int error = 0; 1693 1702 int truncate = 0; 1694 1703 1695 1704 /* ··· 1727 1736 * reference to the inode at this point anyways. 1728 1737 */ 1729 1738 if (xfs_can_free_eofblocks(ip, true)) 1730 - xfs_free_eofblocks(ip); 1739 + error = xfs_free_eofblocks(ip); 1731 1740 1732 1741 goto out; 1733 1742 } ··· 1764 1773 /* 1765 1774 * Free the inode. 1766 1775 */ 1767 - xfs_inactive_ifree(ip); 1776 + error = xfs_inactive_ifree(ip); 1768 1777 1769 1778 out: 1770 1779 /* ··· 1772 1781 * the attached dquots. 1773 1782 */ 1774 1783 xfs_qm_dqdetach(ip); 1784 + return error; 1775 1785 } 1776 1786 1777 1787 /*
+1 -1
fs/xfs/xfs_inode.h
··· 470 470 (xfs_has_grpid((pip)->i_mount) || (VFS_I(pip)->i_mode & S_ISGID)) 471 471 472 472 int xfs_release(struct xfs_inode *ip); 473 - void xfs_inactive(struct xfs_inode *ip); 473 + int xfs_inactive(struct xfs_inode *ip); 474 474 int xfs_lookup(struct xfs_inode *dp, const struct xfs_name *name, 475 475 struct xfs_inode **ipp, struct xfs_name *ci_name); 476 476 int xfs_create(struct mnt_idmap *idmap,
+149
fs/xfs/xfs_inode_item.c
··· 29 29 return container_of(lip, struct xfs_inode_log_item, ili_item); 30 30 } 31 31 32 + static uint64_t 33 + xfs_inode_item_sort( 34 + struct xfs_log_item *lip) 35 + { 36 + return INODE_ITEM(lip)->ili_inode->i_ino; 37 + } 38 + 39 + /* 40 + * Prior to finally logging the inode, we have to ensure that all the 41 + * per-modification inode state changes are applied. This includes VFS inode 42 + * state updates, format conversions, verifier state synchronisation and 43 + * ensuring the inode buffer remains in memory whilst the inode is dirty. 44 + * 45 + * We have to be careful when we grab the inode cluster buffer due to lock 46 + * ordering constraints. The unlinked inode modifications (xfs_iunlink_item) 47 + * require AGI -> inode cluster buffer lock order. The inode cluster buffer is 48 + * not locked until ->precommit, so it happens after everything else has been 49 + * modified. 50 + * 51 + * Further, we have AGI -> AGF lock ordering, and with O_TMPFILE handling we 52 + * have AGI -> AGF -> iunlink item -> inode cluster buffer lock order. Hence we 53 + * cannot safely lock the inode cluster buffer in xfs_trans_log_inode() because 54 + * it can be called on an inode (e.g. via bumplink/droplink) before we take the 55 + * AGF lock modifying directory blocks. 56 + * 57 + * Rather than force a complete rework of all the transactions to call 58 + * xfs_trans_log_inode() once and once only at the end of every transaction, we 59 + * move the pinning of the inode cluster buffer to a ->precommit operation. This 60 + * matches how the xfs_iunlink_item locks the inode cluster buffer, and it 61 + * ensures that the inode cluster buffer locking is always done last in a 62 + * transaction. i.e. we ensure the lock order is always AGI -> AGF -> inode 63 + * cluster buffer. 
64 + * 65 + * If we return the inode number as the precommit sort key then we'll also 66 + * guarantee that the inode cluster buffer locking order is the same for all the 67 + * inodes and unlink items in the transaction. 68 + */ 69 + static int 70 + xfs_inode_item_precommit( 71 + struct xfs_trans *tp, 72 + struct xfs_log_item *lip) 73 + { 74 + struct xfs_inode_log_item *iip = INODE_ITEM(lip); 75 + struct xfs_inode *ip = iip->ili_inode; 76 + struct inode *inode = VFS_I(ip); 77 + unsigned int flags = iip->ili_dirty_flags; 78 + 79 + /* 80 + * Don't bother with i_lock for the I_DIRTY_TIME check here, as races 81 + * don't matter - we either will need an extra transaction in 24 hours 82 + * to log the timestamps, or will clear already cleared fields in the 83 + * worst case. 84 + */ 85 + if (inode->i_state & I_DIRTY_TIME) { 86 + spin_lock(&inode->i_lock); 87 + inode->i_state &= ~I_DIRTY_TIME; 88 + spin_unlock(&inode->i_lock); 89 + } 90 + 91 + /* 92 + * If we're updating the inode core or the timestamps and it's possible 93 + * to upgrade this inode to bigtime format, do so now. 94 + */ 95 + if ((flags & (XFS_ILOG_CORE | XFS_ILOG_TIMESTAMP)) && 96 + xfs_has_bigtime(ip->i_mount) && 97 + !xfs_inode_has_bigtime(ip)) { 98 + ip->i_diflags2 |= XFS_DIFLAG2_BIGTIME; 99 + flags |= XFS_ILOG_CORE; 100 + } 101 + 102 + /* 103 + * Inode verifiers do not check that the extent size hint is an integer 104 + * multiple of the rt extent size on a directory with both rtinherit 105 + * and extszinherit flags set. If we're logging a directory that is 106 + * misconfigured in this way, clear the hint. 107 + */ 108 + if ((ip->i_diflags & XFS_DIFLAG_RTINHERIT) && 109 + (ip->i_diflags & XFS_DIFLAG_EXTSZINHERIT) && 110 + (ip->i_extsize % ip->i_mount->m_sb.sb_rextsize) > 0) { 111 + ip->i_diflags &= ~(XFS_DIFLAG_EXTSIZE | 112 + XFS_DIFLAG_EXTSZINHERIT); 113 + ip->i_extsize = 0; 114 + flags |= XFS_ILOG_CORE; 115 + } 116 + 117 + /* 118 + * Record the specific change for fdatasync optimisation. 
This allows 119 + * fdatasync to skip log forces for inodes that are only timestamp 120 + * dirty. Once we've processed the XFS_ILOG_IVERSION flag, convert it 121 + * to XFS_ILOG_CORE so that the actual on-disk dirty tracking 122 + * (ili_fields) correctly tracks that the version has changed. 123 + */ 124 + spin_lock(&iip->ili_lock); 125 + iip->ili_fsync_fields |= (flags & ~XFS_ILOG_IVERSION); 126 + if (flags & XFS_ILOG_IVERSION) 127 + flags = ((flags & ~XFS_ILOG_IVERSION) | XFS_ILOG_CORE); 128 + 129 + if (!iip->ili_item.li_buf) { 130 + struct xfs_buf *bp; 131 + int error; 132 + 133 + /* 134 + * We hold the ILOCK here, so this inode is not going to be 135 + * flushed while we are here. Further, because there is no 136 + * buffer attached to the item, we know that there is no IO in 137 + * progress, so nothing will clear the ili_fields while we read 138 + * in the buffer. Hence we can safely drop the spin lock and 139 + * read the buffer knowing that the state will not change from 140 + * here. 141 + */ 142 + spin_unlock(&iip->ili_lock); 143 + error = xfs_imap_to_bp(ip->i_mount, tp, &ip->i_imap, &bp); 144 + if (error) 145 + return error; 146 + 147 + /* 148 + * We need an explicit buffer reference for the log item but 149 + * don't want the buffer to remain attached to the transaction. 150 + * Hold the buffer but release the transaction reference once 151 + * we've attached the inode log item to the buffer log item 152 + * list. 153 + */ 154 + xfs_buf_hold(bp); 155 + spin_lock(&iip->ili_lock); 156 + iip->ili_item.li_buf = bp; 157 + bp->b_flags |= _XBF_INODES; 158 + list_add_tail(&iip->ili_item.li_bio_list, &bp->b_li_list); 159 + xfs_trans_brelse(tp, bp); 160 + } 161 + 162 + /* 163 + * Always OR in the bits from the ili_last_fields field. This is to 164 + * coordinate with the xfs_iflush() and xfs_buf_inode_iodone() routines 165 + * in the eventual clearing of the ili_fields bits. 
See the big comment 166 + * in xfs_iflush() for an explanation of this coordination mechanism. 167 + */ 168 + iip->ili_fields |= (flags | iip->ili_last_fields); 169 + spin_unlock(&iip->ili_lock); 170 + 171 + /* 172 + * We are done with the log item transaction dirty state, so clear it so 173 + * that it doesn't pollute future transactions. 174 + */ 175 + iip->ili_dirty_flags = 0; 176 + return 0; 177 + } 178 + 32 179 /* 33 180 * The logged size of an inode fork is always the current size of the inode 34 181 * fork. This means that when an inode fork is relogged, the size of the logged ··· 809 662 } 810 663 811 664 static const struct xfs_item_ops xfs_inode_item_ops = { 665 + .iop_sort = xfs_inode_item_sort, 666 + .iop_precommit = xfs_inode_item_precommit, 812 667 .iop_size = xfs_inode_item_size, 813 668 .iop_format = xfs_inode_item_format, 814 669 .iop_pin = xfs_inode_item_pin,
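The new ->iop_sort callback returns the inode number so that all transactions lock inode cluster buffers in the same order during precommit, avoiding ABBA deadlocks. A minimal sketch of that sort-before-lock idea, using simplified stand-in types rather than struct xfs_log_item:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Illustrative stand-in for a log item carrying its precommit sort key
 * (the inode number, as xfs_inode_item_sort() returns). */
struct log_item {
	uint64_t ino;
};

static int cmp_sort_key(const void *a, const void *b)
{
	uint64_t ka = ((const struct log_item *)a)->ino;
	uint64_t kb = ((const struct log_item *)b)->ino;

	return (ka > kb) - (ka < kb);
}

/* Order the transaction's items before running precommits, so every
 * transaction acquires inode cluster buffer locks in ascending order. */
static void sort_items(struct log_item *items, size_t n)
{
	qsort(items, n, sizeof(*items), cmp_sort_key);
}
```

With every transaction sorting by the same key, two transactions touching inodes 7 and 42 both lock 7's cluster buffer first, so neither can hold one buffer while waiting on the other in the reverse order.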
+1
fs/xfs/xfs_inode_item.h
··· 17 17 struct xfs_log_item ili_item; /* common portion */ 18 18 struct xfs_inode *ili_inode; /* inode ptr */ 19 19 unsigned short ili_lock_flags; /* inode lock flags */ 20 + unsigned int ili_dirty_flags; /* dirty in current tx */ 20 21 /* 21 22 * The ili_lock protects the interactions between the dirty state and 22 23 * the flush state of the inode log item. This allows us to do atomic
+9 -10
fs/xfs/xfs_log_recover.c
··· 2711 2711 * just to flush the inodegc queue and wait for it to 2712 2712 * complete. 2713 2713 */ 2714 - xfs_inodegc_flush(mp); 2714 + error = xfs_inodegc_flush(mp); 2715 + if (error) 2716 + break; 2715 2717 } 2716 2718 2717 2719 prev_agino = agino; ··· 2721 2719 } 2722 2720 2723 2721 if (prev_ip) { 2722 + int error2; 2723 + 2724 2724 ip->i_prev_unlinked = prev_agino; 2725 2725 xfs_irele(prev_ip); 2726 + 2727 + error2 = xfs_inodegc_flush(mp); 2728 + if (error2 && !error) 2729 + return error2; 2726 2730 } 2727 - xfs_inodegc_flush(mp); 2728 2731 return error; 2729 2732 } 2730 2733 ··· 2796 2789 * bucket and remaining inodes on it unreferenced and 2797 2790 * unfreeable. 2798 2791 */ 2799 - xfs_inodegc_flush(pag->pag_mount); 2800 2792 xlog_recover_clear_agi_bucket(pag, bucket); 2801 2793 } 2802 2794 } ··· 2812 2806 2813 2807 for_each_perag(log->l_mp, agno, pag) 2814 2808 xlog_recover_iunlink_ag(pag); 2815 - 2816 - /* 2817 - * Flush the pending unlinked inodes to ensure that the inactivations 2818 - * are fully completed on disk and the incore inodes can be reclaimed 2819 - * before we signal that recovery is complete. 2820 - */ 2821 - xfs_inodegc_flush(log->l_mp); 2822 2809 } 2823 2810 2824 2811 STATIC void
+1
fs/xfs/xfs_mount.h
··· 62 62 struct xfs_inodegc { 63 63 struct llist_head list; 64 64 struct delayed_work work; 65 + int error; 65 66 66 67 /* approximate count of inodes in the list */ 67 68 unsigned int items;
+3 -1
fs/xfs/xfs_reflink.c
··· 616 616 xfs_refcount_free_cow_extent(*tpp, del.br_startblock, 617 617 del.br_blockcount); 618 618 619 - xfs_free_extent_later(*tpp, del.br_startblock, 619 + error = xfs_free_extent_later(*tpp, del.br_startblock, 620 620 del.br_blockcount, NULL); 621 + if (error) 622 + break; 621 623 622 624 /* Roll the transaction */ 623 625 error = xfs_defer_finish(tpp);
+1
fs/xfs/xfs_super.c
··· 1100 1100 #endif 1101 1101 init_llist_head(&gc->list); 1102 1102 gc->items = 0; 1103 + gc->error = 0; 1103 1104 INIT_DELAYED_WORK(&gc->work, xfs_inodegc_worker); 1104 1105 } 1105 1106 return 0;
+8 -1
fs/xfs/xfs_trans.c
··· 290 290 * Do not perform a synchronous scan because callers can hold 291 291 * other locks. 292 292 */ 293 - xfs_blockgc_flush_all(mp); 293 + error = xfs_blockgc_flush_all(mp); 294 + if (error) 295 + return error; 294 296 want_retry = false; 295 297 goto retry; 296 298 } ··· 970 968 !(tp->t_flags & XFS_TRANS_PERM_LOG_RES)); 971 969 if (!regrant && (tp->t_flags & XFS_TRANS_PERM_LOG_RES)) { 972 970 error = xfs_defer_finish_noroll(&tp); 971 + if (error) 972 + goto out_unreserve; 973 + 974 + /* Run precommits from final tx in defer chain. */ 975 + error = xfs_trans_run_precommits(tp); 973 976 if (error) 974 977 goto out_unreserve; 975 978 }
+9
include/dt-bindings/power/qcom-rpmpd.h
··· 90 90 #define SM8150_MMCX 9 91 91 #define SM8150_MMCX_AO 10 92 92 93 + /* SA8155P is a special case, kept for backwards compatibility */ 94 + #define SA8155P_CX SM8150_CX 95 + #define SA8155P_CX_AO SM8150_CX_AO 96 + #define SA8155P_EBI SM8150_EBI 97 + #define SA8155P_GFX SM8150_GFX 98 + #define SA8155P_MSS SM8150_MSS 99 + #define SA8155P_MX SM8150_MX 100 + #define SA8155P_MX_AO SM8150_MX_AO 101 + 93 102 /* SM8250 Power Domain Indexes */ 94 103 #define SM8250_CX 0 95 104 #define SM8250_CX_AO 1
+1 -1
include/linux/libata.h
··· 836 836 837 837 struct mutex scsi_scan_mutex; 838 838 struct delayed_work hotplug_task; 839 - struct work_struct scsi_rescan_task; 839 + struct delayed_work scsi_rescan_task; 840 840 841 841 unsigned int hsm_task_state; 842 842
+12
include/linux/mlx5/driver.h
··· 1238 1238 return dev->priv.sriov.max_vfs; 1239 1239 } 1240 1240 1241 + static inline int mlx5_lag_is_lacp_owner(struct mlx5_core_dev *dev) 1242 + { 1243 + /* LACP owner conditions: 1244 + * 1) Function is physical. 1245 + * 2) LAG is supported by FW. 1246 + * 3) LAG is managed by driver (currently the only option). 1247 + */ 1248 + return MLX5_CAP_GEN(dev, vport_group_manager) && 1249 + (MLX5_CAP_GEN(dev, num_lag_ports) > 1) && 1250 + MLX5_CAP_GEN(dev, lag_master); 1251 + } 1252 + 1241 1253 static inline int mlx5_get_gid_table_len(u16 param) 1242 1254 { 1243 1255 if (param > 4) {
+6 -3
include/linux/netdevice.h
··· 620 620 netdevice_tracker dev_tracker; 621 621 622 622 struct Qdisc __rcu *qdisc; 623 - struct Qdisc *qdisc_sleeping; 623 + struct Qdisc __rcu *qdisc_sleeping; 624 624 #ifdef CONFIG_SYSFS 625 625 struct kobject kobj; 626 626 #endif ··· 768 768 /* We only give a hint, preemption can change CPU under us */ 769 769 val |= raw_smp_processor_id(); 770 770 771 - if (table->ents[index] != val) 772 - table->ents[index] = val; 771 + /* The following WRITE_ONCE() is paired with the READ_ONCE() 772 + * here, and another one in get_rps_cpu(). 773 + */ 774 + if (READ_ONCE(table->ents[index]) != val) 775 + WRITE_ONCE(table->ents[index], val); 773 776 } 774 777 } 775 778
+10
include/linux/notifier.h
··· 106 106 #define RAW_NOTIFIER_INIT(name) { \ 107 107 .head = NULL } 108 108 109 + #ifdef CONFIG_TREE_SRCU 110 + #define SRCU_NOTIFIER_INIT(name, pcpu) \ 111 + { \ 112 + .mutex = __MUTEX_INITIALIZER(name.mutex), \ 113 + .head = NULL, \ 114 + .srcuu = __SRCU_USAGE_INIT(name.srcuu), \ 115 + .srcu = __SRCU_STRUCT_INIT(name.srcu, name.srcuu, pcpu), \ 116 + } 117 + #else 109 118 #define SRCU_NOTIFIER_INIT(name, pcpu) \ 110 119 { \ 111 120 .mutex = __MUTEX_INITIALIZER(name.mutex), \ 112 121 .head = NULL, \ 113 122 .srcu = __SRCU_STRUCT_INIT(name.srcu, name.srcuu, pcpu), \ 114 123 } 124 + #endif 115 125 116 126 #define ATOMIC_NOTIFIER_HEAD(name) \ 117 127 struct atomic_notifier_head name = \
-6
include/linux/soc/qcom/llcc-qcom.h
··· 69 69 /** 70 70 * struct llcc_edac_reg_data - llcc edac registers data for each error type 71 71 * @name: Name of the error 72 - * @synd_reg: Syndrome register address 73 - * @count_status_reg: Status register address to read the error count 74 - * @ways_status_reg: Status register address to read the error ways 75 72 * @reg_cnt: Number of registers 76 73 * @count_mask: Mask value to get the error count 77 74 * @ways_mask: Mask value to get the error ways ··· 77 80 */ 78 81 struct llcc_edac_reg_data { 79 82 char *name; 80 - u64 synd_reg; 81 - u64 count_status_reg; 82 - u64 ways_status_reg; 83 83 u32 reg_cnt; 84 84 u32 count_mask; 85 85 u32 ways_mask;
+1 -5
include/linux/surface_aggregator/device.h
··· 243 243 * Return: Returns the pointer to the &struct ssam_device_driver wrapping the 244 244 * given device driver @d. 245 245 */ 246 - static inline 247 - struct ssam_device_driver *to_ssam_device_driver(struct device_driver *d) 248 - { 249 - return container_of(d, struct ssam_device_driver, driver); 250 - } 246 + #define to_ssam_device_driver(d) container_of_const(d, struct ssam_device_driver, driver) 251 247 252 248 const struct ssam_device_id *ssam_device_id_match(const struct ssam_device_id *table, 253 249 const struct ssam_device_uid uid);
+1 -5
include/media/dvb_frontend.h
··· 686 686 * @id: Frontend ID 687 687 * @exit: Used to inform the DVB core that the frontend 688 688 * thread should exit (usually, means that the hardware 689 - * got disconnected). 690 - * @remove_mutex: mutex that avoids a race condition between a callback 691 - * called when the hardware is disconnected and the 692 - * file_operations of dvb_frontend. 689 + * got disconnected. 693 690 */ 694 691 695 692 struct dvb_frontend { ··· 704 707 int (*callback)(void *adapter_priv, int component, int cmd, int arg); 705 708 int id; 706 709 unsigned int exit; 707 - struct mutex remove_mutex; 708 710 }; 709 711 710 712 /**
+1
include/net/bluetooth/hci.h
··· 350 350 enum { 351 351 HCI_SETUP, 352 352 HCI_CONFIG, 353 + HCI_DEBUGFS_CREATED, 353 354 HCI_AUTO_OFF, 354 355 HCI_RFKILLED, 355 356 HCI_MGMT,
+3 -1
include/net/bluetooth/hci_core.h
··· 515 515 struct work_struct cmd_sync_work; 516 516 struct list_head cmd_sync_work_list; 517 517 struct mutex cmd_sync_work_lock; 518 + struct mutex unregister_lock; 518 519 struct work_struct cmd_sync_cancel_work; 519 520 struct work_struct reenable_adv_work; 520 521 ··· 1202 1201 if (id != BT_ISO_QOS_CIS_UNSET && id != c->iso_qos.ucast.cis) 1203 1202 continue; 1204 1203 1205 - if (ba_type == c->dst_type && !bacmp(&c->dst, ba)) { 1204 + /* Match destination address if set */ 1205 + if (!ba || (ba_type == c->dst_type && !bacmp(&c->dst, ba))) { 1206 1206 rcu_read_unlock(); 1207 1207 return c; 1208 1208 }
+1 -1
include/net/neighbour.h
··· 180 180 netdevice_tracker dev_tracker; 181 181 u32 flags; 182 182 u8 protocol; 183 - u8 key[]; 183 + u32 key[]; 184 184 }; 185 185 186 186 /*
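The neighbour.h change widens the trailing flexible array `key[]` from `u8` to `u32`, which raises the alignment of the key storage to 4 bytes so it can be hashed as whole words. A small sketch of how a flexible array member's element type drives the payload offset and struct alignment (both structs here are hypothetical, not the kernel's):

```c
#include <stdalign.h>
#include <stddef.h>
#include <stdint.h>

/* Two illustrative layouts: the only difference is the element type of
 * the trailing flexible array. */
struct key_u8  { uint8_t proto; uint8_t  key[]; };
struct key_u32 { uint8_t proto; uint32_t key[]; };

/* With u8, the payload starts right after the header byte; with u32,
 * padding is inserted so key[] lands on a 4-byte boundary. */
size_t key_u8_off(void)    { return offsetof(struct key_u8,  key); }
size_t key_u32_off(void)   { return offsetof(struct key_u32, key); }
size_t key_u32_align(void) { return alignof(struct key_u32); }
```

Misaligned word loads from a `u8 key[]` are undefined behavior on strict-alignment architectures, which is the class of bug the type change removes.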
+1 -1
include/net/netfilter/nf_flow_table.h
··· 268 268 269 269 int flow_offload_add(struct nf_flowtable *flow_table, struct flow_offload *flow); 270 270 void flow_offload_refresh(struct nf_flowtable *flow_table, 271 - struct flow_offload *flow); 271 + struct flow_offload *flow, bool force); 272 272 273 273 struct flow_offload_tuple_rhash *flow_offload_lookup(struct nf_flowtable *flow_table, 274 274 struct flow_offload_tuple *tuple);
+3 -1
include/net/netfilter/nf_tables.h
··· 462 462 const struct nft_set *set, 463 463 const struct nft_set_elem *elem, 464 464 unsigned int flags); 465 - 465 + void (*commit)(const struct nft_set *set); 466 + void (*abort)(const struct nft_set *set); 466 467 u64 (*privsize)(const struct nlattr * const nla[], 467 468 const struct nft_set_desc *desc); 468 469 bool (*estimate)(const struct nft_set_desc *desc, ··· 558 557 u16 policy; 559 558 u16 udlen; 560 559 unsigned char *udata; 560 + struct list_head pending_update; 561 561 /* runtime data below here */ 562 562 const struct nft_set_ops *ops ____cacheline_aligned; 563 563 u16 flags:14,
+1 -1
include/net/netns/ipv6.h
··· 53 53 int seg6_flowlabel; 54 54 u32 ioam6_id; 55 55 u64 ioam6_id_wide; 56 - bool skip_notify_on_dev_down; 56 + u8 skip_notify_on_dev_down; 57 57 u8 fib_notify_on_flag_change; 58 58 u8 icmpv6_error_anycast_as_unicast; 59 59 };
+1 -5
include/net/ping.h
··· 16 16 #define PING_HTABLE_SIZE 64 17 17 #define PING_HTABLE_MASK (PING_HTABLE_SIZE-1) 18 18 19 - /* 20 - * gid_t is either uint or ushort. We want to pass it to 21 - * proc_dointvec_minmax(), so it must not be larger than MAX_INT 22 - */ 23 - #define GID_T_MAX (((gid_t)~0U) >> 1) 19 + #define GID_T_MAX (((gid_t)~0U) - 1) 24 20 25 21 /* Compatibility glue so we can support IPv6 when it's compiled as a module */ 26 22 struct pingv6_ops {
+2
include/net/pkt_sched.h
··· 127 127 } 128 128 } 129 129 130 + extern const struct nla_policy rtm_tca_policy[TCA_MAX + 1]; 131 + 130 132 /* Calculate maximal size of packet seen by hard_start_xmit 131 133 routine of this device. 132 134 */
-3
include/net/rpl.h
··· 23 23 static inline void rpl_exit(void) {} 24 24 #endif 25 25 26 - /* Worst decompression memory usage ipv6 address (16) + pad 7 */ 27 - #define IPV6_RPL_SRH_WORST_SWAP_SIZE (sizeof(struct in6_addr) + 7) 28 - 29 26 size_t ipv6_rpl_srh_size(unsigned char n, unsigned char cmpri, 30 27 unsigned char cmpre); 31 28
+12 -2
include/net/sch_generic.h
··· 137 137 refcount_inc(&qdisc->refcnt); 138 138 } 139 139 140 + static inline bool qdisc_refcount_dec_if_one(struct Qdisc *qdisc) 141 + { 142 + if (qdisc->flags & TCQ_F_BUILTIN) 143 + return true; 144 + return refcount_dec_if_one(&qdisc->refcnt); 145 + } 146 + 140 147 /* Intended to be used by unlocked users, when concurrent qdisc release is 141 148 * possible. 142 149 */ ··· 552 545 553 546 static inline struct Qdisc *qdisc_root_sleeping(const struct Qdisc *qdisc) 554 547 { 555 - return qdisc->dev_queue->qdisc_sleeping; 548 + return rcu_dereference_rtnl(qdisc->dev_queue->qdisc_sleeping); 556 549 } 557 550 558 551 static inline spinlock_t *qdisc_root_sleeping_lock(const struct Qdisc *qdisc) ··· 659 652 struct Qdisc *dev_graft_qdisc(struct netdev_queue *dev_queue, 660 653 struct Qdisc *qdisc); 661 654 void qdisc_reset(struct Qdisc *qdisc); 655 + void qdisc_destroy(struct Qdisc *qdisc); 662 656 void qdisc_put(struct Qdisc *qdisc); 663 657 void qdisc_put_unlocked(struct Qdisc *qdisc); 664 658 void qdisc_tree_reduce_backlog(struct Qdisc *qdisc, int n, int len); ··· 762 754 763 755 for (i = 0; i < dev->num_tx_queues; i++) { 764 756 struct netdev_queue *txq = netdev_get_tx_queue(dev, i); 765 - if (rcu_access_pointer(txq->qdisc) != txq->qdisc_sleeping) 757 + 758 + if (rcu_access_pointer(txq->qdisc) != 759 + rcu_access_pointer(txq->qdisc_sleeping)) 766 760 return true; 767 761 } 768 762 return false;
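The new `qdisc_refcount_dec_if_one()` helper builds on the kernel's `refcount_dec_if_one()`: drop the reference only if the caller is the last holder, as one atomic step. A userspace sketch of that primitive with a C11 compare-and-swap (an illustrative model, not the kernel implementation, which also handles saturation):

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Sketch of refcount_dec_if_one(): atomically move the count 1 -> 0.
 * Returns true only when we were the sole owner; any other count is
 * left untouched, so concurrent holders keep a valid reference. */
static bool refcount_dec_if_one(atomic_int *ref)
{
    int expected = 1;
    return atomic_compare_exchange_strong(ref, &expected, 0);
}
```

Compared with an unconditional decrement, this lets a destroy path bail out cleanly when other users still hold references, instead of freeing under them.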
+13 -5
include/net/sock.h
··· 1152 1152 * OR an additional socket flag 1153 1153 * [1] : sk_state and sk_prot are in the same cache line. 1154 1154 */ 1155 - if (sk->sk_state == TCP_ESTABLISHED) 1156 - sock_rps_record_flow_hash(sk->sk_rxhash); 1155 + if (sk->sk_state == TCP_ESTABLISHED) { 1156 + /* This READ_ONCE() is paired with the WRITE_ONCE() 1157 + * from sock_rps_save_rxhash() and sock_rps_reset_rxhash(). 1158 + */ 1159 + sock_rps_record_flow_hash(READ_ONCE(sk->sk_rxhash)); 1160 + } 1157 1161 } 1158 1162 #endif 1159 1163 } ··· 1166 1162 const struct sk_buff *skb) 1167 1163 { 1168 1164 #ifdef CONFIG_RPS 1169 - if (unlikely(sk->sk_rxhash != skb->hash)) 1170 - sk->sk_rxhash = skb->hash; 1165 + /* The following WRITE_ONCE() is paired with the READ_ONCE() 1166 + * here, and another one in sock_rps_record_flow(). 1167 + */ 1168 + if (unlikely(READ_ONCE(sk->sk_rxhash) != skb->hash)) 1169 + WRITE_ONCE(sk->sk_rxhash, skb->hash); 1171 1170 #endif 1172 1171 } 1173 1172 1174 1173 static inline void sock_rps_reset_rxhash(struct sock *sk) 1175 1174 { 1176 1175 #ifdef CONFIG_RPS 1177 - sk->sk_rxhash = 0; 1176 + /* Paired with READ_ONCE() in sock_rps_record_flow() */ 1177 + WRITE_ONCE(sk->sk_rxhash, 0); 1178 1178 #endif 1179 1179 } 1180 1180
-23
include/rdma/ib_addr.h
··· 194 194 return 0; 195 195 } 196 196 197 - static inline int iboe_get_rate(struct net_device *dev) 198 - { 199 - struct ethtool_link_ksettings cmd; 200 - int err; 201 - 202 - rtnl_lock(); 203 - err = __ethtool_get_link_ksettings(dev, &cmd); 204 - rtnl_unlock(); 205 - if (err) 206 - return IB_RATE_PORT_CURRENT; 207 - 208 - if (cmd.base.speed >= 40000) 209 - return IB_RATE_40_GBPS; 210 - else if (cmd.base.speed >= 30000) 211 - return IB_RATE_30_GBPS; 212 - else if (cmd.base.speed >= 20000) 213 - return IB_RATE_20_GBPS; 214 - else if (cmd.base.speed >= 10000) 215 - return IB_RATE_10_GBPS; 216 - else 217 - return IB_RATE_PORT_CURRENT; 218 - } 219 - 220 197 static inline int rdma_link_local_addr(struct in6_addr *addr) 221 198 { 222 199 if (addr->s6_addr32[0] == htonl(0xfe800000) &&
+1
include/uapi/linux/bpf.h
··· 1035 1035 BPF_TRACE_KPROBE_MULTI, 1036 1036 BPF_LSM_CGROUP, 1037 1037 BPF_STRUCT_OPS, 1038 + BPF_NETFILTER, 1038 1039 __MAX_BPF_ATTACH_TYPE 1039 1040 }; 1040 1041
+1 -1
include/uapi/linux/ethtool_netlink.h
··· 783 783 784 784 /* add new constants above here */ 785 785 __ETHTOOL_A_STATS_GRP_CNT, 786 - ETHTOOL_A_STATS_GRP_MAX = (__ETHTOOL_A_STATS_CNT - 1) 786 + ETHTOOL_A_STATS_GRP_MAX = (__ETHTOOL_A_STATS_GRP_CNT - 1) 787 787 }; 788 788 789 789 enum {
+6 -4
io_uring/io-wq.c
··· 220 220 list_del_rcu(&worker->all_list); 221 221 raw_spin_unlock(&wq->lock); 222 222 io_wq_dec_running(worker); 223 - worker->flags = 0; 224 - preempt_disable(); 225 - current->flags &= ~PF_IO_WORKER; 226 - preempt_enable(); 223 + /* 224 + * this worker is a goner, clear ->worker_private to avoid any 225 + * inc/dec running calls that could happen as part of exit from 226 + * touching 'worker'. 227 + */ 228 + current->worker_private = NULL; 227 229 228 230 kfree_rcu(worker, rcu); 229 231 io_worker_ref_put(wq);
+7 -1
io_uring/net.c
··· 65 65 u16 addr_len; 66 66 u16 buf_group; 67 67 void __user *addr; 68 + void __user *msg_control; 68 69 /* used only for send zerocopy */ 69 70 struct io_kiocb *notif; 70 71 }; ··· 196 195 struct io_async_msghdr *iomsg) 197 196 { 198 197 struct io_sr_msg *sr = io_kiocb_to_cmd(req, struct io_sr_msg); 198 + int ret; 199 199 200 200 iomsg->msg.msg_name = &iomsg->addr; 201 201 iomsg->free_iov = iomsg->fast_iov; 202 - return sendmsg_copy_msghdr(&iomsg->msg, sr->umsg, sr->msg_flags, 202 + ret = sendmsg_copy_msghdr(&iomsg->msg, sr->umsg, sr->msg_flags, 203 203 &iomsg->free_iov); 204 + /* save msg_control as sys_sendmsg() overwrites it */ 205 + sr->msg_control = iomsg->msg.msg_control; 206 + return ret; 204 207 } 205 208 206 209 int io_send_prep_async(struct io_kiocb *req) ··· 302 297 303 298 if (req_has_async_data(req)) { 304 299 kmsg = req->async_data; 300 + kmsg->msg.msg_control = sr->msg_control; 305 301 } else { 306 302 ret = io_sendmsg_copy_hdr(req, &iomsg); 307 303 if (ret)
+6 -2
kernel/bpf/map_in_map.c
··· 69 69 /* Misc members not needed in bpf_map_meta_equal() check. */ 70 70 inner_map_meta->ops = inner_map->ops; 71 71 if (inner_map->ops == &array_map_ops) { 72 + struct bpf_array *inner_array_meta = 73 + container_of(inner_map_meta, struct bpf_array, map); 74 + struct bpf_array *inner_array = container_of(inner_map, struct bpf_array, map); 75 + 76 + inner_array_meta->index_mask = inner_array->index_mask; 77 + inner_array_meta->elem_size = inner_array->elem_size; 72 78 inner_map_meta->bypass_spec_v1 = inner_map->bypass_spec_v1; 73 - container_of(inner_map_meta, struct bpf_array, map)->index_mask = 74 - container_of(inner_map, struct bpf_array, map)->index_mask; 75 79 } 76 80 77 81 fdput(f);
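The map_in_map fix copies `index_mask` and `elem_size` by first recovering the outer `struct bpf_array` from the embedded `struct bpf_map` via `container_of()`. A self-contained sketch of that pattern, with simplified hypothetical struct names standing in for the bpf types:

```c
#include <stddef.h>

/* Minimal container_of(), as in the kernel: recover a pointer to the
 * outer struct from a pointer to one of its members. */
#define container_of(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))

struct map { int flags; };

struct array_map {
    struct map map;          /* embedded base, like struct bpf_array */
    unsigned int index_mask;
    unsigned int elem_size;
};

/* Mirrors the fix: given only base pointers, step out to the derived
 * type and copy the array-specific fields. */
static void copy_array_meta(struct map *dst, struct map *src)
{
    struct array_map *d = container_of(dst, struct array_map, map);
    struct array_map *s = container_of(src, struct array_map, map);

    d->index_mask = s->index_mask;
    d->elem_size  = s->elem_size;
}
```

The patch itself adds `elem_size` to the copied fields; forgetting one such field in the meta copy is exactly the bug class being fixed.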
+9
kernel/bpf/syscall.c
··· 2433 2433 default: 2434 2434 return -EINVAL; 2435 2435 } 2436 + case BPF_PROG_TYPE_NETFILTER: 2437 + if (expected_attach_type == BPF_NETFILTER) 2438 + return 0; 2439 + return -EINVAL; 2436 2440 case BPF_PROG_TYPE_SYSCALL: 2437 2441 case BPF_PROG_TYPE_EXT: 2438 2442 if (expected_attach_type) ··· 4594 4590 4595 4591 switch (prog->type) { 4596 4592 case BPF_PROG_TYPE_EXT: 4593 + break; 4597 4594 case BPF_PROG_TYPE_NETFILTER: 4595 + if (attr->link_create.attach_type != BPF_NETFILTER) { 4596 + ret = -EINVAL; 4597 + goto out; 4598 + } 4598 4599 break; 4599 4600 case BPF_PROG_TYPE_PERF_EVENT: 4600 4601 case BPF_PROG_TYPE_TRACEPOINT:
+2 -2
kernel/cgroup/cgroup-v1.c
··· 108 108 109 109 cgroup_lock(); 110 110 111 - percpu_down_write(&cgroup_threadgroup_rwsem); 111 + cgroup_attach_lock(true); 112 112 113 113 /* all tasks in @from are being moved, all csets are source */ 114 114 spin_lock_irq(&css_set_lock); ··· 144 144 } while (task && !ret); 145 145 out_err: 146 146 cgroup_migrate_finish(&mgctx); 147 - percpu_up_write(&cgroup_threadgroup_rwsem); 147 + cgroup_attach_unlock(true); 148 148 cgroup_unlock(); 149 149 return ret; 150 150 }
+8 -9
kernel/cgroup/cgroup.c
··· 6486 6486 static void cgroup_css_set_put_fork(struct kernel_clone_args *kargs) 6487 6487 __releases(&cgroup_threadgroup_rwsem) __releases(&cgroup_mutex) 6488 6488 { 6489 + struct cgroup *cgrp = kargs->cgrp; 6490 + struct css_set *cset = kargs->cset; 6491 + 6489 6492 cgroup_threadgroup_change_end(current); 6490 6493 6494 + if (cset) { 6495 + put_css_set(cset); 6496 + kargs->cset = NULL; 6497 + } 6498 + 6491 6499 if (kargs->flags & CLONE_INTO_CGROUP) { 6492 - struct cgroup *cgrp = kargs->cgrp; 6493 - struct css_set *cset = kargs->cset; 6494 - 6495 6500 cgroup_unlock(); 6496 - 6497 - if (cset) { 6498 - put_css_set(cset); 6499 - kargs->cset = NULL; 6500 - } 6501 - 6502 6501 if (cgrp) { 6503 6502 cgroup_put(cgrp); 6504 6503 kargs->cgrp = NULL;
+1 -1
kernel/fork.c
··· 627 627 arch_release_task_struct(tsk); 628 628 if (tsk->flags & PF_KTHREAD) 629 629 free_kthread_struct(tsk); 630 + bpf_task_storage_free(tsk); 630 631 free_task_struct(tsk); 631 632 } 632 633 EXPORT_SYMBOL(free_task); ··· 980 979 cgroup_free(tsk); 981 980 task_numa_free(tsk, true); 982 981 security_task_free(tsk); 983 - bpf_task_storage_free(tsk); 984 982 exit_creds(tsk); 985 983 delayacct_tsk_free(tsk); 986 984 put_signal_struct(tsk->signal);
+13 -1
kernel/kexec_file.c
··· 901 901 } 902 902 903 903 offset = ALIGN(offset, align); 904 + 905 + /* 906 + * Check if the segment contains the entry point, if so, 907 + * calculate the value of image->start based on it. 908 + * If the compiler has produced more than one .text section 909 + * (Eg: .text.hot), they are generally after the main .text 910 + * section, and they shall not be used to calculate 911 + * image->start. So do not re-calculate image->start if it 912 + * is not set to the initial value, and warn the user so they 913 + * have a chance to fix their purgatory's linker script. 914 + */ 904 915 if (sechdrs[i].sh_flags & SHF_EXECINSTR && 905 916 pi->ehdr->e_entry >= sechdrs[i].sh_addr && 906 917 pi->ehdr->e_entry < (sechdrs[i].sh_addr 907 - + sechdrs[i].sh_size)) { 918 + + sechdrs[i].sh_size) && 919 + !WARN_ON(kbuf->image->start != pi->ehdr->e_entry)) { 908 920 kbuf->image->start -= sechdrs[i].sh_addr; 909 921 kbuf->image->start += kbuf->mem + offset; 910 922 }
+11 -1
kernel/trace/bpf_trace.c
··· 900 900 901 901 BPF_CALL_3(bpf_d_path, struct path *, path, char *, buf, u32, sz) 902 902 { 903 + struct path copy; 903 904 long len; 904 905 char *p; 905 906 906 907 if (!sz) 907 908 return 0; 908 909 909 - p = d_path(path, buf, sz); 910 + /* 911 + * The path pointer is verified as trusted and safe to use, 912 + * but let's double check it's valid anyway to workaround 913 + * potentially broken verifier. 914 + */ 915 + len = copy_from_kernel_nofault(&copy, path, sizeof(*path)); 916 + if (len < 0) 917 + return len; 918 + 919 + p = d_path(&copy, buf, sz); 910 920 if (IS_ERR(p)) { 911 921 len = PTR_ERR(p); 912 922 } else {
+10 -8
kernel/vhost_task.c
··· 28 28 for (;;) { 29 29 bool did_work; 30 30 31 - /* mb paired w/ vhost_task_stop */ 32 - if (test_bit(VHOST_TASK_FLAGS_STOP, &vtsk->flags)) 33 - break; 34 - 35 31 if (!dead && signal_pending(current)) { 36 32 struct ksignal ksig; 37 33 /* ··· 44 48 clear_thread_flag(TIF_SIGPENDING); 45 49 } 46 50 47 - did_work = vtsk->fn(vtsk->data); 48 - if (!did_work) { 49 - set_current_state(TASK_INTERRUPTIBLE); 50 - schedule(); 51 + /* mb paired w/ vhost_task_stop */ 52 + set_current_state(TASK_INTERRUPTIBLE); 53 + 54 + if (test_bit(VHOST_TASK_FLAGS_STOP, &vtsk->flags)) { 55 + __set_current_state(TASK_RUNNING); 56 + break; 51 57 } 58 + 59 + did_work = vtsk->fn(vtsk->data); 60 + if (!did_work) 61 + schedule(); 52 62 } 53 63 54 64 complete(&vtsk->exited);
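The vhost_task rework moves `set_current_state(TASK_INTERRUPTIBLE)` before the stop-flag test, so a stop request arriving between the check and the sleep can no longer be lost. The closest userspace analogue is the classic condition-variable discipline: re-check the predicate while holding the lock before waiting. A sketch (hypothetical `worker` type, pthreads standing in for kernel scheduling):

```c
#include <pthread.h>
#include <stdbool.h>

struct worker {
    pthread_mutex_t lock;
    pthread_cond_t  cond;
    bool stop;
    int  iterations;
};

static void *worker_fn(void *arg)
{
    struct worker *w = arg;

    pthread_mutex_lock(&w->lock);
    for (;;) {
        /* Re-check the flag under the lock, mirroring the patched
         * "set_current_state(); if (test_bit(STOP)) break;" ordering:
         * a stop set before we sleep is always observed. */
        if (w->stop)
            break;
        w->iterations++;
        pthread_cond_wait(&w->cond, &w->lock);
    }
    pthread_mutex_unlock(&w->lock);
    return NULL;
}

static void worker_stop(struct worker *w)
{
    pthread_mutex_lock(&w->lock);
    w->stop = true;              /* paired with the locked re-check above */
    pthread_cond_signal(&w->cond);
    pthread_mutex_unlock(&w->lock);
}
```

Checking the flag before "preparing to sleep" (the pre-patch order) leaves a window where the wakeup fires into a task that then sleeps anyway; both the kernel pattern and the pthread pattern close that window the same way.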
+1 -1
lib/cpu_rmap.c
··· 280 280 struct irq_glue *glue = 281 281 container_of(ref, struct irq_glue, notify.kref); 282 282 283 - cpu_rmap_put(glue->rmap); 284 283 glue->rmap->obj[glue->index] = NULL; 284 + cpu_rmap_put(glue->rmap); 285 285 kfree(glue); 286 286 } 287 287
+2
lib/radix-tree.c
··· 27 27 #include <linux/string.h> 28 28 #include <linux/xarray.h> 29 29 30 + #include "radix-tree.h" 31 + 30 32 /* 31 33 * Radix tree node cache. 32 34 */
+8
lib/radix-tree.h
··· 1 + // SPDX-License-Identifier: GPL-2.0+ 2 + /* radix-tree helpers that are only shared with xarray */ 3 + 4 + struct kmem_cache; 5 + struct rcu_head; 6 + 7 + extern struct kmem_cache *radix_tree_node_cachep; 8 + extern void radix_tree_node_rcu_free(struct rcu_head *head);
+1 -1
lib/test_vmalloc.c
··· 369 369 int i; 370 370 371 371 map_nr_pages = nr_pages > 0 ? nr_pages:1; 372 - pages = kmalloc(map_nr_pages * sizeof(struct page), GFP_KERNEL); 372 + pages = kcalloc(map_nr_pages, sizeof(struct page *), GFP_KERNEL); 373 373 if (!pages) 374 374 return -1; 375 375
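The test_vmalloc fix addresses two bugs at once: `kmalloc(map_nr_pages * sizeof(struct page))` can overflow the multiplication silently (and used the wrong element size, `struct page` instead of `struct page *`), while `kcalloc(n, size)` checks the product and fails instead. Userspace `calloc()` gives the same guarantee; a sketch with a hypothetical helper name:

```c
#include <stdlib.h>

/* Overflow-checked, zeroed allocation of an array of n pointers.
 * calloc() (like kcalloc()) returns NULL if n * size would overflow,
 * rather than allocating a too-small buffer. */
void *alloc_ptr_array(size_t n)
{
    return calloc(n, sizeof(void *));
}
```

The open-coded `malloc(n * size)` form has no such check: with a large `n` the product wraps, the allocation "succeeds" undersized, and the later writes corrupt the heap.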
+2 -4
lib/xarray.c
··· 12 12 #include <linux/slab.h> 13 13 #include <linux/xarray.h> 14 14 15 + #include "radix-tree.h" 16 + 15 17 /* 16 18 * Coding conventions in this file: 17 19 * ··· 248 246 return entry; 249 247 } 250 248 EXPORT_SYMBOL_GPL(xas_load); 251 - 252 - /* Move the radix tree node cache here */ 253 - extern struct kmem_cache *radix_tree_node_cachep; 254 - extern void radix_tree_node_rcu_free(struct rcu_head *head); 255 249 256 250 #define XA_RCU_FREE ((struct xarray *)1) 257 251
+2
mm/damon/core.c
··· 551 551 return -EINVAL; 552 552 if (attrs->min_nr_regions > attrs->max_nr_regions) 553 553 return -EINVAL; 554 + if (attrs->sample_interval > attrs->aggr_interval) 555 + return -EINVAL; 554 556 555 557 damon_update_monitoring_results(ctx, attrs); 556 558 ctx->attrs = *attrs;
+16 -10
mm/filemap.c
··· 1760 1760 * 1761 1761 * Return: The index of the gap if found, otherwise an index outside the 1762 1762 * range specified (in which case 'return - index >= max_scan' will be true). 1763 - * In the rare case of index wrap-around, 0 will be returned. 1763 + * In the rare case of index wrap-around, 0 will be returned. 0 will also 1764 + * be returned if index == 0 and there is a gap at the index. We can not 1765 + * wrap-around if passed index == 0. 1764 1766 */ 1765 1767 pgoff_t page_cache_next_miss(struct address_space *mapping, 1766 1768 pgoff_t index, unsigned long max_scan) ··· 1772 1770 while (max_scan--) { 1773 1771 void *entry = xas_next(&xas); 1774 1772 if (!entry || xa_is_value(entry)) 1775 - break; 1776 - if (xas.xa_index == 0) 1777 - break; 1773 + return xas.xa_index; 1774 + if (xas.xa_index == 0 && index != 0) 1775 + return xas.xa_index; 1778 1776 } 1779 1777 1780 - return xas.xa_index; 1778 + /* No gaps in range and no wrap-around, return index beyond range */ 1779 + return xas.xa_index + 1; 1781 1780 } 1782 1781 EXPORT_SYMBOL(page_cache_next_miss); 1783 1782 ··· 1799 1796 * 1800 1797 * Return: The index of the gap if found, otherwise an index outside the 1801 1798 * range specified (in which case 'index - return >= max_scan' will be true). 1802 - * In the rare case of wrap-around, ULONG_MAX will be returned. 1799 + * In the rare case of wrap-around, ULONG_MAX will be returned. ULONG_MAX 1800 + * will also be returned if index == ULONG_MAX and there is a gap at the 1801 + * index. We can not wrap-around if passed index == ULONG_MAX. 1803 1802 */ 1804 1803 pgoff_t page_cache_prev_miss(struct address_space *mapping, 1805 1804 pgoff_t index, unsigned long max_scan) ··· 1811 1806 while (max_scan--) { 1812 1807 void *entry = xas_prev(&xas); 1813 1808 if (!entry || xa_is_value(entry)) 1814 - break; 1815 - if (xas.xa_index == ULONG_MAX) 1816 - break; 1809 + return xas.xa_index; 1810 + if (xas.xa_index == ULONG_MAX && index != ULONG_MAX) 1811 + return xas.xa_index; 1817 1812 } 1818 1813 1819 - return xas.xa_index; 1814 + /* No gaps in range and no wrap-around, return index beyond range */ 1815 + return xas.xa_index - 1; 1820 1816 } 1821 1817 EXPORT_SYMBOL(page_cache_prev_miss); 1822 1818
+1
mm/gup_test.c
··· 381 381 static const struct file_operations gup_test_fops = { 382 382 .open = nonseekable_open, 383 383 .unlocked_ioctl = gup_test_ioctl, 384 + .compat_ioctl = compat_ptr_ioctl, 384 385 .release = gup_test_release, 385 386 }; 386 387
+17 -20
mm/mmap.c
··· 2318 2318 return __split_vma(vmi, vma, addr, new_below); 2319 2319 } 2320 2320 2321 - static inline int munmap_sidetree(struct vm_area_struct *vma, 2322 - struct ma_state *mas_detach) 2323 - { 2324 - vma_start_write(vma); 2325 - mas_set_range(mas_detach, vma->vm_start, vma->vm_end - 1); 2326 - if (mas_store_gfp(mas_detach, vma, GFP_KERNEL)) 2327 - return -ENOMEM; 2328 - 2329 - vma_mark_detached(vma, true); 2330 - if (vma->vm_flags & VM_LOCKED) 2331 - vma->vm_mm->locked_vm -= vma_pages(vma); 2332 - 2333 - return 0; 2334 - } 2335 - 2336 2321 /* 2337 2322 * do_vmi_align_munmap() - munmap the aligned region from @start to @end. 2338 2323 * @vmi: The vma iterator ··· 2339 2354 struct maple_tree mt_detach; 2340 2355 int count = 0; 2341 2356 int error = -ENOMEM; 2357 + unsigned long locked_vm = 0; 2342 2358 MA_STATE(mas_detach, &mt_detach, 0, 0); 2343 2359 mt_init_flags(&mt_detach, vmi->mas.tree->ma_flags & MT_FLAGS_LOCK_MASK); 2344 2360 mt_set_external_lock(&mt_detach, &mm->mmap_lock); ··· 2385 2399 if (error) 2386 2400 goto end_split_failed; 2387 2401 } 2388 - error = munmap_sidetree(next, &mas_detach); 2389 - if (error) 2390 - goto munmap_sidetree_failed; 2402 + vma_start_write(next); 2403 + mas_set_range(&mas_detach, next->vm_start, next->vm_end - 1); 2404 + if (mas_store_gfp(&mas_detach, next, GFP_KERNEL)) 2405 + goto munmap_gather_failed; 2406 + vma_mark_detached(next, true); 2407 + if (next->vm_flags & VM_LOCKED) 2408 + locked_vm += vma_pages(next); 2391 2409 2392 2410 count++; 2393 2411 #ifdef CONFIG_DEBUG_VM_MAPLE_TREE ··· 2437 2447 } 2438 2448 #endif 2439 2449 /* Point of no return */ 2450 + error = -ENOMEM; 2440 2451 vma_iter_set(vmi, start); 2441 2452 if (vma_iter_clear_gfp(vmi, start, end, GFP_KERNEL)) 2442 - return -ENOMEM; 2453 + goto clear_tree_failed; 2443 2454 2455 + mm->locked_vm -= locked_vm; 2444 2456 mm->map_count -= count; 2445 2457 /* 2446 2458 * Do not downgrade mmap_lock if we are next to VM_GROWSDOWN or ··· 2472 2480 validate_mm(mm); 2473 2481 return downgrade ? 1 : 0; 2474 2482 2483 + clear_tree_failed: 2475 2484 userfaultfd_error: 2476 - munmap_sidetree_failed: 2485 + munmap_gather_failed: 2477 2486 end_split_failed: 2487 + mas_set(&mas_detach, 0); 2488 + mas_for_each(&mas_detach, next, end) 2489 + vma_mark_detached(next, false); 2490 + 2478 2491 __mt_destroy(&mt_detach); 2479 2492 start_split_failed: 2480 2493 map_count_exceeded:
+9 -2
mm/zswap.c
··· 1174 1174 goto reject; 1175 1175 } 1176 1176 1177 + /* 1178 + * XXX: zswap reclaim does not work with cgroups yet. Without a 1179 + * cgroup-aware entry LRU, we will push out entries system-wide based on 1180 + * local cgroup limits. 1181 + */ 1177 1182 objcg = get_obj_cgroup_from_page(page); 1178 - if (objcg && !obj_cgroup_may_zswap(objcg)) 1179 - goto shrink; 1183 + if (objcg && !obj_cgroup_may_zswap(objcg)) { 1184 + ret = -ENOMEM; 1185 + goto reject; 1186 + } 1180 1187 1181 1188 /* reclaim space if needed */ 1182 1189 if (zswap_is_full()) {
+1 -1
net/batman-adv/distributed-arp-table.c
··· 101 101 */ 102 102 static void batadv_dat_start_timer(struct batadv_priv *bat_priv) 103 103 { 104 - INIT_DELAYED_WORK(&bat_priv->dat.work, batadv_dat_purge); 105 104 queue_delayed_work(batadv_event_workqueue, &bat_priv->dat.work, 106 105 msecs_to_jiffies(10000)); 107 106 } ··· 818 819 if (!bat_priv->dat.hash) 819 820 return -ENOMEM; 820 821 822 + INIT_DELAYED_WORK(&bat_priv->dat.work, batadv_dat_purge); 821 823 batadv_dat_start_timer(bat_priv); 822 824 823 825 batadv_tvlv_handler_register(bat_priv, batadv_dat_tvlv_ogm_handler_v1,
+13 -9
net/bluetooth/hci_conn.c
··· 947 947 { 948 948 struct iso_list_data *d = data; 949 949 950 - /* Ignore broadcast */ 951 - if (!bacmp(&conn->dst, BDADDR_ANY)) 950 + /* Ignore broadcast or if CIG don't match */ 951 + if (!bacmp(&conn->dst, BDADDR_ANY) || d->cig != conn->iso_qos.ucast.cig) 952 952 return; 953 953 954 954 d->count++; ··· 963 963 struct hci_dev *hdev = conn->hdev; 964 964 struct iso_list_data d; 965 965 966 + if (conn->iso_qos.ucast.cig == BT_ISO_QOS_CIG_UNSET) 967 + return; 968 + 966 969 memset(&d, 0, sizeof(d)); 967 970 d.cig = conn->iso_qos.ucast.cig; 968 971 969 972 /* Check if ISO connection is a CIS and remove CIG if there are 970 973 * no other connections using it. 971 974 */ 975 + hci_conn_hash_list_state(hdev, find_cis, ISO_LINK, BT_BOUND, &d); 976 + hci_conn_hash_list_state(hdev, find_cis, ISO_LINK, BT_CONNECT, &d); 972 977 hci_conn_hash_list_state(hdev, find_cis, ISO_LINK, BT_CONNECTED, &d); 973 978 if (d.count) 974 979 return; ··· 1771 1766 1772 1767 memset(&data, 0, sizeof(data)); 1773 1768 1774 - /* Allocate a CIG if not set */ 1769 + /* Allocate first still reconfigurable CIG if not set */ 1775 1770 if (qos->ucast.cig == BT_ISO_QOS_CIG_UNSET) { 1776 - for (data.cig = 0x00; data.cig < 0xff; data.cig++) { 1771 + for (data.cig = 0x00; data.cig < 0xf0; data.cig++) { 1777 1772 data.count = 0; 1778 - data.cis = 0xff; 1779 1773 1780 - hci_conn_hash_list_state(hdev, cis_list, ISO_LINK, 1781 - BT_BOUND, &data); 1774 + hci_conn_hash_list_state(hdev, find_cis, ISO_LINK, 1775 + BT_CONNECT, &data); 1782 1776 if (data.count) 1783 1777 continue; 1784 1778 1785 - hci_conn_hash_list_state(hdev, cis_list, ISO_LINK, 1779 + hci_conn_hash_list_state(hdev, find_cis, ISO_LINK, 1786 1780 BT_CONNECTED, &data); 1787 1781 if (!data.count) 1788 1782 break; 1789 1783 } 1790 1784 1791 - if (data.cig == 0xff) 1785 + if (data.cig == 0xf0) 1792 1786 return false; 1793 1787 1794 1788 /* Update CIG */
+6 -4
net/bluetooth/hci_core.c
··· 1416 1416 1417 1417 int hci_remove_ltk(struct hci_dev *hdev, bdaddr_t *bdaddr, u8 bdaddr_type) 1418 1418 { 1419 - struct smp_ltk *k; 1419 + struct smp_ltk *k, *tmp; 1420 1420 int removed = 0; 1421 1421 1422 - list_for_each_entry_rcu(k, &hdev->long_term_keys, list) { 1422 + list_for_each_entry_safe(k, tmp, &hdev->long_term_keys, list) { 1423 1423 if (bacmp(bdaddr, &k->bdaddr) || k->bdaddr_type != bdaddr_type) 1424 1424 continue; 1425 1425 ··· 1435 1435 1436 1436 void hci_remove_irk(struct hci_dev *hdev, bdaddr_t *bdaddr, u8 addr_type) 1437 1437 { 1438 - struct smp_irk *k; 1438 + struct smp_irk *k, *tmp; 1439 1439 1440 - list_for_each_entry_rcu(k, &hdev->identity_resolving_keys, list) { 1440 + list_for_each_entry_safe(k, tmp, &hdev->identity_resolving_keys, list) { 1441 1441 if (bacmp(bdaddr, &k->bdaddr) || k->addr_type != addr_type) 1442 1442 continue; 1443 1443 ··· 2686 2686 { 2687 2687 BT_DBG("%p name %s bus %d", hdev, hdev->name, hdev->bus); 2688 2688 2689 + mutex_lock(&hdev->unregister_lock); 2689 2690 hci_dev_set_flag(hdev, HCI_UNREGISTER); 2691 + mutex_unlock(&hdev->unregister_lock); 2690 2692 2691 2693 write_lock(&hci_dev_list_lock); 2692 2694 list_del(&hdev->list);
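The hci_core hunk switches `list_for_each_entry_rcu()` to `list_for_each_entry_safe()` because these loops delete the current entry: the "safe" variant samples the next pointer before the current node may be freed. A simplified sketch of that idea on a singly linked list (the kernel list is doubly linked; `delete_matching` is a hypothetical helper):

```c
#include <stdlib.h>

struct node { struct node *next; int key; };

/* Delete every node whose key matches, returning the number removed.
 * 'tmp' caches the successor before free(), so the walk survives the
 * deletion of the current node -- the essence of the _safe iterator. */
static int delete_matching(struct node **head, int key)
{
    struct node **link = *&head ? head : head;
    struct node *n = *head, *tmp;
    int removed = 0;

    link = head;
    while (n) {
        tmp = n->next;          /* saved before n may be freed */
        if (n->key == key) {
            *link = tmp;
            free(n);
            removed++;
        } else {
            link = &n->next;
        }
        n = tmp;
    }
    return removed;
}
```

The non-safe (and RCU) iterators advance via the node just visited; once that node is freed, the next dereference is use-after-free, which is exactly what the patch avoids.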
+27 -19
net/bluetooth/hci_event.c
··· 3804 3804 struct sk_buff *skb) 3805 3805 { 3806 3806 struct hci_rp_le_set_cig_params *rp = data; 3807 + struct hci_cp_le_set_cig_params *cp; 3807 3808 struct hci_conn *conn; 3808 - int i = 0; 3809 + u8 status = rp->status; 3810 + int i; 3809 3811 3810 3812 bt_dev_dbg(hdev, "status 0x%2.2x", rp->status); 3811 3813 3814 + cp = hci_sent_cmd_data(hdev, HCI_OP_LE_SET_CIG_PARAMS); 3815 + if (!cp || rp->num_handles != cp->num_cis || rp->cig_id != cp->cig_id) { 3816 + bt_dev_err(hdev, "unexpected Set CIG Parameters response data"); 3817 + status = HCI_ERROR_UNSPECIFIED; 3818 + } 3819 + 3812 3820 hci_dev_lock(hdev); 3813 3821 3814 - if (rp->status) { 3822 + if (status) { 3815 3823 while ((conn = hci_conn_hash_lookup_cig(hdev, rp->cig_id))) { 3816 3824 conn->state = BT_CLOSED; 3817 - hci_connect_cfm(conn, rp->status); 3825 + hci_connect_cfm(conn, status); 3818 3826 hci_conn_del(conn); 3819 3827 } 3820 3828 goto unlock; 3821 3829 } 3822 3830 3823 - rcu_read_lock(); 3824 - 3825 - list_for_each_entry_rcu(conn, &hdev->conn_hash.list, list) { 3826 - if (conn->type != ISO_LINK || 3827 - conn->iso_qos.ucast.cig != rp->cig_id || 3828 - conn->state == BT_CONNECTED) 3831 + /* BLUETOOTH CORE SPECIFICATION Version 5.3 | Vol 4, Part E page 2553 3832 + * 3833 + * If the Status return parameter is zero, then the Controller shall 3834 + * set the Connection_Handle arrayed return parameter to the connection 3835 + * handle(s) corresponding to the CIS configurations specified in 3836 + * the CIS_IDs command parameter, in the same order. 3837 + */ 3838 + for (i = 0; i < rp->num_handles; ++i) { 3839 + conn = hci_conn_hash_lookup_cis(hdev, NULL, 0, rp->cig_id, 3840 + cp->cis[i].cis_id); 3841 + if (!conn || !bacmp(&conn->dst, BDADDR_ANY)) 3829 3842 continue; 3830 3843 3831 - conn->handle = __le16_to_cpu(rp->handle[i++]); 3844 + if (conn->state != BT_BOUND && conn->state != BT_CONNECT) 3845 + continue; 3846 + 3847 + conn->handle = __le16_to_cpu(rp->handle[i]); 3832 3848 3833 3849 bt_dev_dbg(hdev, "%p handle 0x%4.4x parent %p", conn, 3834 3850 conn->handle, conn->parent); 3835 3851 3836 3852 /* Create CIS if LE is already connected */ 3837 - if (conn->parent && conn->parent->state == BT_CONNECTED) { 3838 - rcu_read_unlock(); 3853 + if (conn->parent && conn->parent->state == BT_CONNECTED) 3839 3854 hci_le_create_cis(conn); 3840 - rcu_read_lock(); 3841 - } 3842 - 3843 - if (i == rp->num_handles) 3844 - break; 3845 3855 } 3846 - 3847 - rcu_read_unlock(); 3848 3856 3849 3857 unlock: 3850 3858 hci_dev_unlock(hdev);
+17 -6
net/bluetooth/hci_sync.c
··· 629 629 INIT_WORK(&hdev->cmd_sync_work, hci_cmd_sync_work); 630 630 INIT_LIST_HEAD(&hdev->cmd_sync_work_list); 631 631 mutex_init(&hdev->cmd_sync_work_lock); 632 + mutex_init(&hdev->unregister_lock); 632 633 633 634 INIT_WORK(&hdev->cmd_sync_cancel_work, hci_cmd_sync_cancel_work); 634 635 INIT_WORK(&hdev->reenable_adv_work, reenable_adv); ··· 693 692 void *data, hci_cmd_sync_work_destroy_t destroy) 694 693 { 695 694 struct hci_cmd_sync_work_entry *entry; 695 + int err = 0; 696 696 697 - if (hci_dev_test_flag(hdev, HCI_UNREGISTER)) 698 - return -ENODEV; 697 + mutex_lock(&hdev->unregister_lock); 698 + if (hci_dev_test_flag(hdev, HCI_UNREGISTER)) { 699 + err = -ENODEV; 700 + goto unlock; 701 + } 699 702 700 703 entry = kmalloc(sizeof(*entry), GFP_KERNEL); 701 - if (!entry) 702 - return -ENOMEM; 703 - 704 + if (!entry) { 705 + err = -ENOMEM; 706 + goto unlock; 707 + } 704 708 entry->func = func; 705 709 entry->data = data; 706 710 entry->destroy = destroy; ··· 716 710 717 711 queue_work(hdev->req_workqueue, &hdev->cmd_sync_work); 718 712 719 - return 0; 713 + unlock: 714 + mutex_unlock(&hdev->unregister_lock); 715 + return err; 720 716 } 721 717 EXPORT_SYMBOL(hci_cmd_sync_submit); 722 718 ··· 4549 4541 */ 4550 4542 if (!hci_dev_test_flag(hdev, HCI_SETUP) && 4551 4543 !hci_dev_test_flag(hdev, HCI_CONFIG)) 4544 + return 0; 4545 + 4546 + if (hci_dev_test_and_set_flag(hdev, HCI_DEBUGFS_CREATED)) 4552 4547 return 0; 4553 4548 4554 4549 hci_debugfs_create_common(hdev);
+13
net/bluetooth/l2cap_core.c
··· 4306 4306 result = __le16_to_cpu(rsp->result); 4307 4307 status = __le16_to_cpu(rsp->status); 4308 4308 4309 + if (result == L2CAP_CR_SUCCESS && (dcid < L2CAP_CID_DYN_START || 4310 + dcid > L2CAP_CID_DYN_END)) 4311 + return -EPROTO; 4312 + 4309 4313 BT_DBG("dcid 0x%4.4x scid 0x%4.4x result 0x%2.2x status 0x%2.2x", 4310 4314 dcid, scid, result, status); 4311 4315 ··· 4341 4337 4342 4338 switch (result) { 4343 4339 case L2CAP_CR_SUCCESS: 4340 + if (__l2cap_get_chan_by_dcid(conn, dcid)) { 4341 + err = -EBADSLT; 4342 + break; 4343 + } 4344 + 4344 4345 l2cap_state_change(chan, BT_CONFIG); 4345 4346 chan->ident = 0; 4346 4347 chan->dcid = dcid; ··· 4672 4663 4673 4664 chan->ops->set_shutdown(chan); 4674 4665 4666 + l2cap_chan_unlock(chan); 4675 4667 mutex_lock(&conn->chan_lock); 4668 + l2cap_chan_lock(chan); 4676 4669 l2cap_chan_del(chan, ECONNRESET); 4677 4670 mutex_unlock(&conn->chan_lock); 4678 4671 ··· 4713 4702 return 0; 4714 4703 } 4715 4704 4705 + l2cap_chan_unlock(chan); 4716 4706 mutex_lock(&conn->chan_lock); 4707 + l2cap_chan_lock(chan); 4717 4708 l2cap_chan_del(chan, 0); 4718 4709 mutex_unlock(&conn->chan_lock); 4719 4710
+13 -11
net/can/j1939/main.c
··· 126 126 #define J1939_CAN_ID CAN_EFF_FLAG 127 127 #define J1939_CAN_MASK (CAN_EFF_FLAG | CAN_RTR_FLAG) 128 128 129 - static DEFINE_SPINLOCK(j1939_netdev_lock); 129 + static DEFINE_MUTEX(j1939_netdev_lock); 130 130 131 131 static struct j1939_priv *j1939_priv_create(struct net_device *ndev) 132 132 { ··· 220 220 j1939_can_rx_unregister(priv); 221 221 j1939_ecu_unmap_all(priv); 222 222 j1939_priv_set(priv->ndev, NULL); 223 - spin_unlock(&j1939_netdev_lock); 223 + mutex_unlock(&j1939_netdev_lock); 224 224 } 225 225 226 226 /* get pointer to priv without increasing ref counter */ ··· 248 248 { 249 249 struct j1939_priv *priv; 250 250 251 - spin_lock(&j1939_netdev_lock); 251 + mutex_lock(&j1939_netdev_lock); 252 252 priv = j1939_priv_get_by_ndev_locked(ndev); 253 - spin_unlock(&j1939_netdev_lock); 253 + mutex_unlock(&j1939_netdev_lock); 254 254 255 255 return priv; 256 256 } ··· 260 260 struct j1939_priv *priv, *priv_new; 261 261 int ret; 262 262 263 - spin_lock(&j1939_netdev_lock); 263 + mutex_lock(&j1939_netdev_lock); 264 264 priv = j1939_priv_get_by_ndev_locked(ndev); 265 265 if (priv) { 266 266 kref_get(&priv->rx_kref); 267 - spin_unlock(&j1939_netdev_lock); 267 + mutex_unlock(&j1939_netdev_lock); 268 268 return priv; 269 269 } 270 - spin_unlock(&j1939_netdev_lock); 270 + mutex_unlock(&j1939_netdev_lock); 271 271 272 272 priv = j1939_priv_create(ndev); 273 273 if (!priv) ··· 277 277 spin_lock_init(&priv->j1939_socks_lock); 278 278 INIT_LIST_HEAD(&priv->j1939_socks); 279 279 280 - spin_lock(&j1939_netdev_lock); 280 + mutex_lock(&j1939_netdev_lock); 281 281 priv_new = j1939_priv_get_by_ndev_locked(ndev); 282 282 if (priv_new) { 283 283 /* Someone was faster than us, use their priv and roll 284 284 * back our's. 
285 285 */ 286 286 kref_get(&priv_new->rx_kref); 287 - spin_unlock(&j1939_netdev_lock); 287 + mutex_unlock(&j1939_netdev_lock); 288 288 dev_put(ndev); 289 289 kfree(priv); 290 290 return priv_new; 291 291 } 292 292 j1939_priv_set(ndev, priv); 293 - spin_unlock(&j1939_netdev_lock); 294 293 295 294 ret = j1939_can_rx_register(priv); 296 295 if (ret < 0) 297 296 goto out_priv_put; 298 297 298 + mutex_unlock(&j1939_netdev_lock); 299 299 return priv; 300 300 301 301 out_priv_put: 302 302 j1939_priv_set(ndev, NULL); 303 + mutex_unlock(&j1939_netdev_lock); 304 + 303 305 dev_put(ndev); 304 306 kfree(priv); 305 307 ··· 310 308 311 309 void j1939_netdev_stop(struct j1939_priv *priv) 312 310 { 313 - kref_put_lock(&priv->rx_kref, __j1939_rx_release, &j1939_netdev_lock); 311 + kref_put_mutex(&priv->rx_kref, __j1939_rx_release, &j1939_netdev_lock); 314 312 j1939_priv_put(priv); 315 313 } 316 314
+5
net/can/j1939/socket.c
··· 1088 1088 1089 1089 void j1939_sk_send_loop_abort(struct sock *sk, int err) 1090 1090 { 1091 + struct j1939_sock *jsk = j1939_sk(sk); 1092 + 1093 + if (jsk->state & J1939_SOCK_ERRQUEUE) 1094 + return; 1095 + 1091 1096 sk->sk_err = err; 1092 1097 1093 1098 sk_error_report(sk);
+5 -3
net/core/dev.c
··· 4471 4471 u32 next_cpu; 4472 4472 u32 ident; 4473 4473 4474 - /* First check into global flow table if there is a match */ 4475 - ident = sock_flow_table->ents[hash & sock_flow_table->mask]; 4474 + /* First check into global flow table if there is a match. 4475 + * This READ_ONCE() pairs with WRITE_ONCE() from rps_record_sock_flow(). 4476 + */ 4477 + ident = READ_ONCE(sock_flow_table->ents[hash & sock_flow_table->mask]); 4476 4478 if ((ident ^ hash) & ~rps_cpu_mask) 4477 4479 goto try_rps; 4478 4480 ··· 10543 10541 return NULL; 10544 10542 netdev_init_one_queue(dev, queue, NULL); 10545 10543 RCU_INIT_POINTER(queue->qdisc, &noop_qdisc); 10546 - queue->qdisc_sleeping = &noop_qdisc; 10544 + RCU_INIT_POINTER(queue->qdisc_sleeping, &noop_qdisc); 10547 10545 rcu_assign_pointer(dev->ingress_queue, queue); 10548 10546 #endif 10549 10547 return queue;
+2 -1
net/core/skmsg.c
··· 1210 1210 1211 1211 rcu_read_lock(); 1212 1212 psock = sk_psock(sk); 1213 - psock->saved_data_ready(sk); 1213 + if (psock) 1214 + psock->saved_data_ready(sk); 1214 1215 rcu_read_unlock(); 1215 1216 } 1216 1217 }
+3
net/dccp/proto.c
··· 191 191 struct dccp_sock *dp = dccp_sk(sk); 192 192 struct inet_connection_sock *icsk = inet_csk(sk); 193 193 194 + pr_warn_once("DCCP is deprecated and scheduled to be removed in 2025, " 195 + "please contact the netdev mailing list\n"); 196 + 194 197 icsk->icsk_rto = DCCP_TIMEOUT_INIT; 195 198 icsk->icsk_syn_retries = sysctl_dccp_request_retries; 196 199 sk->sk_state = DCCP_CLOSED;
-1
net/handshake/handshake.h
··· 31 31 struct list_head hr_list; 32 32 struct rhash_head hr_rhash; 33 33 unsigned long hr_flags; 34 - struct file *hr_file; 35 34 const struct handshake_proto *hr_proto; 36 35 struct sock *hr_sk; 37 36 void (*hr_odestruct)(struct sock *sk);
-4
net/handshake/request.c
··· 239 239 } 240 240 req->hr_odestruct = req->hr_sk->sk_destruct; 241 241 req->hr_sk->sk_destruct = handshake_sk_destruct; 242 - req->hr_file = sock->file; 243 242 244 243 ret = -EOPNOTSUPP; 245 244 net = sock_net(req->hr_sk); ··· 333 334 trace_handshake_cancel_busy(net, req, sk); 334 335 return false; 335 336 } 336 - 337 - /* Request accepted and waiting for DONE */ 338 - fput(req->hr_file); 339 337 340 338 out_true: 341 339 trace_handshake_cancel(net, req, sk);
+4 -4
net/ipv4/sysctl_net_ipv4.c
··· 34 34 static int ip_ttl_max = 255; 35 35 static int tcp_syn_retries_min = 1; 36 36 static int tcp_syn_retries_max = MAX_TCP_SYNCNT; 37 - static int ip_ping_group_range_min[] = { 0, 0 }; 38 - static int ip_ping_group_range_max[] = { GID_T_MAX, GID_T_MAX }; 37 + static unsigned long ip_ping_group_range_min[] = { 0, 0 }; 38 + static unsigned long ip_ping_group_range_max[] = { GID_T_MAX, GID_T_MAX }; 39 39 static u32 u32_max_div_HZ = UINT_MAX / HZ; 40 40 static int one_day_secs = 24 * 3600; 41 41 static u32 fib_multipath_hash_fields_all_mask __maybe_unused = ··· 165 165 { 166 166 struct user_namespace *user_ns = current_user_ns(); 167 167 int ret; 168 - gid_t urange[2]; 168 + unsigned long urange[2]; 169 169 kgid_t low, high; 170 170 struct ctl_table tmp = { 171 171 .data = &urange, ··· 178 178 inet_get_ping_group_range_table(table, &low, &high); 179 179 urange[0] = from_kgid_munged(user_ns, low); 180 180 urange[1] = from_kgid_munged(user_ns, high); 181 - ret = proc_dointvec_minmax(&tmp, write, buffer, lenp, ppos); 181 + ret = proc_doulongvec_minmax(&tmp, write, buffer, lenp, ppos); 182 182 183 183 if (write && ret == 0) { 184 184 low = make_kgid(user_ns, urange[0]);
+9 -10
net/ipv4/tcp_offload.c
··· 60 60 struct tcphdr *th; 61 61 unsigned int thlen; 62 62 unsigned int seq; 63 - __be32 delta; 64 63 unsigned int oldlen; 65 64 unsigned int mss; 66 65 struct sk_buff *gso_skb = skb; 67 66 __sum16 newcheck; 68 67 bool ooo_okay, copy_destructor; 68 + __wsum delta; 69 69 70 70 th = tcp_hdr(skb); 71 71 thlen = th->doff * 4; ··· 75 75 if (!pskb_may_pull(skb, thlen)) 76 76 goto out; 77 77 78 - oldlen = (u16)~skb->len; 78 + oldlen = ~skb->len; 79 79 __skb_pull(skb, thlen); 80 80 81 81 mss = skb_shinfo(skb)->gso_size; ··· 110 110 if (skb_is_gso(segs)) 111 111 mss *= skb_shinfo(segs)->gso_segs; 112 112 113 - delta = htonl(oldlen + (thlen + mss)); 113 + delta = (__force __wsum)htonl(oldlen + thlen + mss); 114 114 115 115 skb = segs; 116 116 th = tcp_hdr(skb); ··· 119 119 if (unlikely(skb_shinfo(gso_skb)->tx_flags & SKBTX_SW_TSTAMP)) 120 120 tcp_gso_tstamp(segs, skb_shinfo(gso_skb)->tskey, seq, mss); 121 121 122 - newcheck = ~csum_fold((__force __wsum)((__force u32)th->check + 123 - (__force u32)delta)); 122 + newcheck = ~csum_fold(csum_add(csum_unfold(th->check), delta)); 124 123 125 124 while (skb->next) { 126 125 th->fin = th->psh = 0; ··· 164 165 WARN_ON_ONCE(refcount_sub_and_test(-delta, &skb->sk->sk_wmem_alloc)); 165 166 } 166 167 167 - delta = htonl(oldlen + (skb_tail_pointer(skb) - 168 - skb_transport_header(skb)) + 169 - skb->data_len); 170 - th->check = ~csum_fold((__force __wsum)((__force u32)th->check + 171 - (__force u32)delta)); 168 + delta = (__force __wsum)htonl(oldlen + 169 + (skb_tail_pointer(skb) - 170 + skb_transport_header(skb)) + 171 + skb->data_len); 172 + th->check = ~csum_fold(csum_add(csum_unfold(th->check), delta)); 172 173 if (skb->ip_summed == CHECKSUM_PARTIAL) 173 174 gso_reset_checksum(skb, ~th->check); 174 175 else
+2
net/ipv4/udplite.c
··· 22 22 { 23 23 udp_init_sock(sk); 24 24 udp_sk(sk)->pcflag = UDPLITE_BIT; 25 + pr_warn_once("UDP-Lite is deprecated and scheduled to be removed in 2025, " 26 + "please contact the netdev mailing list\n"); 25 27 return 0; 26 28 } 27 29
+11 -18
net/ipv6/exthdrs.c
··· 569 569 return -1; 570 570 } 571 571 572 - if (skb_cloned(skb)) { 573 - if (pskb_expand_head(skb, IPV6_RPL_SRH_WORST_SWAP_SIZE, 0, 574 - GFP_ATOMIC)) { 575 - __IP6_INC_STATS(net, ip6_dst_idev(skb_dst(skb)), 576 - IPSTATS_MIB_OUTDISCARDS); 577 - kfree_skb(skb); 578 - return -1; 579 - } 580 - } else { 581 - err = skb_cow_head(skb, IPV6_RPL_SRH_WORST_SWAP_SIZE); 582 - if (unlikely(err)) { 583 - kfree_skb(skb); 584 - return -1; 585 - } 586 - } 587 - 588 - hdr = (struct ipv6_rpl_sr_hdr *)skb_transport_header(skb); 589 - 590 572 if (!pskb_may_pull(skb, ipv6_rpl_srh_size(n, hdr->cmpri, 591 573 hdr->cmpre))) { 592 574 kfree_skb(skb); ··· 612 630 skb_pull(skb, ((hdr->hdrlen + 1) << 3)); 613 631 skb_postpull_rcsum(skb, oldhdr, 614 632 sizeof(struct ipv6hdr) + ((hdr->hdrlen + 1) << 3)); 633 + if (unlikely(!hdr->segments_left)) { 634 + if (pskb_expand_head(skb, sizeof(struct ipv6hdr) + ((chdr->hdrlen + 1) << 3), 0, 635 + GFP_ATOMIC)) { 636 + __IP6_INC_STATS(net, ip6_dst_idev(skb_dst(skb)), IPSTATS_MIB_OUTDISCARDS); 637 + kfree_skb(skb); 638 + kfree(buf); 639 + return -1; 640 + } 641 + 642 + oldhdr = ipv6_hdr(skb); 643 + } 615 644 skb_push(skb, ((chdr->hdrlen + 1) << 3) + sizeof(struct ipv6hdr)); 616 645 skb_reset_network_header(skb); 617 646 skb_mac_header_rebuild(skb);
+2 -1
net/ipv6/ping.c
··· 114 114 addr_type = ipv6_addr_type(daddr); 115 115 if ((__ipv6_addr_needs_scope_id(addr_type) && !oif) || 116 116 (addr_type & IPV6_ADDR_MAPPED) || 117 - (oif && sk->sk_bound_dev_if && oif != sk->sk_bound_dev_if)) 117 + (oif && sk->sk_bound_dev_if && oif != sk->sk_bound_dev_if && 118 + l3mdev_master_ifindex_by_index(sock_net(sk), oif) != sk->sk_bound_dev_if)) 118 119 return -EINVAL; 119 120 120 121 ipcm6_init_sk(&ipc6, np);
+2 -2
net/ipv6/route.c
··· 6412 6412 { 6413 6413 .procname = "skip_notify_on_dev_down", 6414 6414 .data = &init_net.ipv6.sysctl.skip_notify_on_dev_down, 6415 - .maxlen = sizeof(int), 6415 + .maxlen = sizeof(u8), 6416 6416 .mode = 0644, 6417 - .proc_handler = proc_dointvec_minmax, 6417 + .proc_handler = proc_dou8vec_minmax, 6418 6418 .extra1 = SYSCTL_ZERO, 6419 6419 .extra2 = SYSCTL_ONE, 6420 6420 },
+4
net/ipv6/udplite.c
··· 8 8 * Changes: 9 9 * Fixes: 10 10 */ 11 + #define pr_fmt(fmt) "UDPLite6: " fmt 12 + 11 13 #include <linux/export.h> 12 14 #include <linux/proc_fs.h> 13 15 #include "udp_impl.h" ··· 18 16 { 19 17 udpv6_init_sock(sk); 20 18 udp_sk(sk)->pcflag = UDPLITE_BIT; 19 + pr_warn_once("UDP-Lite is deprecated and scheduled to be removed in 2025, " 20 + "please contact the netdev mailing list\n"); 21 21 return 0; 22 22 } 23 23
+8 -1
net/mac80211/cfg.c
··· 4865 4865 unsigned int link_id) 4866 4866 { 4867 4867 struct ieee80211_sub_if_data *sdata = IEEE80211_WDEV_TO_SUB_IF(wdev); 4868 + int res; 4868 4869 4869 4870 if (wdev->use_4addr) 4870 4871 return -EOPNOTSUPP; 4871 4872 4872 - return ieee80211_vif_set_links(sdata, wdev->valid_links); 4873 + mutex_lock(&sdata->local->mtx); 4874 + res = ieee80211_vif_set_links(sdata, wdev->valid_links); 4875 + mutex_unlock(&sdata->local->mtx); 4876 + 4877 + return res; 4873 4878 } 4874 4879 4875 4880 static void ieee80211_del_intf_link(struct wiphy *wiphy, ··· 4883 4878 { 4884 4879 struct ieee80211_sub_if_data *sdata = IEEE80211_WDEV_TO_SUB_IF(wdev); 4885 4880 4881 + mutex_lock(&sdata->local->mtx); 4886 4882 ieee80211_vif_set_links(sdata, wdev->valid_links); 4883 + mutex_unlock(&sdata->local->mtx); 4887 4884 } 4888 4885 4889 4886 static int sta_add_link_station(struct ieee80211_local *local,
+10 -5
net/mac80211/he.c
··· 3 3 * HE handling 4 4 * 5 5 * Copyright(c) 2017 Intel Deutschland GmbH 6 - * Copyright(c) 2019 - 2022 Intel Corporation 6 + * Copyright(c) 2019 - 2023 Intel Corporation 7 7 */ 8 8 9 9 #include "ieee80211_i.h" ··· 114 114 struct link_sta_info *link_sta) 115 115 { 116 116 struct ieee80211_sta_he_cap *he_cap = &link_sta->pub->he_cap; 117 + const struct ieee80211_sta_he_cap *own_he_cap_ptr; 117 118 struct ieee80211_sta_he_cap own_he_cap; 118 119 struct ieee80211_he_cap_elem *he_cap_ie_elem = (void *)he_cap_ie; 119 120 u8 he_ppe_size; ··· 124 123 125 124 memset(he_cap, 0, sizeof(*he_cap)); 126 125 127 - if (!he_cap_ie || 128 - !ieee80211_get_he_iftype_cap(sband, 129 - ieee80211_vif_type_p2p(&sdata->vif))) 126 + if (!he_cap_ie) 130 127 return; 131 128 132 - own_he_cap = sband->iftype_data->he_cap; 129 + own_he_cap_ptr = 130 + ieee80211_get_he_iftype_cap(sband, 131 + ieee80211_vif_type_p2p(&sdata->vif)); 132 + if (!own_he_cap_ptr) 133 + return; 134 + 135 + own_he_cap = *own_he_cap_ptr; 133 136 134 137 /* Make sure size is OK */ 135 138 mcs_nss_size = ieee80211_he_mcs_nss_size(he_cap_ie_elem);
+1 -1
net/mac80211/ieee80211_i.h
··· 2312 2312 return ieee802_11_parse_elems_crc(start, len, action, 0, 0, bss); 2313 2313 } 2314 2314 2315 - void ieee80211_fragment_element(struct sk_buff *skb, u8 *len_pos); 2315 + void ieee80211_fragment_element(struct sk_buff *skb, u8 *len_pos, u8 frag_id); 2316 2316 2317 2317 extern const int ieee802_1d_to_ac[8]; 2318 2318
+2 -2
net/mac80211/link.c
··· 2 2 /* 3 3 * MLO link handling 4 4 * 5 - * Copyright (C) 2022 Intel Corporation 5 + * Copyright (C) 2022-2023 Intel Corporation 6 6 */ 7 7 #include <linux/slab.h> 8 8 #include <linux/kernel.h> ··· 409 409 IEEE80211_CHANCTX_SHARED); 410 410 WARN_ON_ONCE(ret); 411 411 412 + ieee80211_mgd_set_link_qos_params(link); 412 413 ieee80211_link_info_change_notify(sdata, link, 413 414 BSS_CHANGED_ERP_CTS_PROT | 414 415 BSS_CHANGED_ERP_PREAMBLE | ··· 424 423 BSS_CHANGED_TWT | 425 424 BSS_CHANGED_HE_OBSS_PD | 426 425 BSS_CHANGED_HE_BSS_COLOR); 427 - ieee80211_mgd_set_link_qos_params(link); 428 426 } 429 427 430 428 old_active = sdata->vif.active_links;
+10 -3
net/mac80211/mlme.c
··· 1217 1217 const u16 *inner) 1218 1218 { 1219 1219 unsigned int skb_len = skb->len; 1220 + bool at_extension = false; 1220 1221 bool added = false; 1221 1222 int i, j; 1222 1223 u8 *len, *list_len = NULL; ··· 1229 1228 for (i = 0; i < PRESENT_ELEMS_MAX && outer[i]; i++) { 1230 1229 u16 elem = outer[i]; 1231 1230 bool have_inner = false; 1232 - bool at_extension = false; 1233 1231 1234 1232 /* should at least be sorted in the sense of normal -> ext */ 1235 1233 WARN_ON(at_extension && elem < PRESENT_ELEM_EXT_OFFS); ··· 1257 1257 } 1258 1258 *list_len += 1; 1259 1259 skb_put_u8(skb, (u8)elem); 1260 + added = true; 1260 1261 } 1261 1262 1263 + /* if we added a list but no extension list, make a zero-len one */ 1264 + if (added && (!at_extension || !list_len)) 1265 + skb_put_u8(skb, 0); 1266 + 1267 + /* if nothing added remove extension element completely */ 1262 1268 if (!added) 1263 1269 skb_trim(skb, skb_len); 1264 1270 else ··· 1372 1366 ieee80211_add_non_inheritance_elem(skb, outer_present_elems, 1373 1367 link_present_elems); 1374 1368 1375 - ieee80211_fragment_element(skb, subelem_len); 1369 + ieee80211_fragment_element(skb, subelem_len, 1370 + IEEE80211_MLE_SUBELEM_FRAGMENT); 1376 1371 } 1377 1372 1378 - ieee80211_fragment_element(skb, ml_elem_len); 1373 + ieee80211_fragment_element(skb, ml_elem_len, WLAN_EID_FRAGMENT); 1379 1374 } 1380 1375 1381 1376 static int ieee80211_send_assoc(struct ieee80211_sub_if_data *sdata)
+3 -1
net/mac80211/rx.c
··· 4965 4965 } 4966 4966 4967 4967 if (unlikely(rx->sta && rx->sta->sta.mlo) && 4968 - is_unicast_ether_addr(hdr->addr1)) { 4968 + is_unicast_ether_addr(hdr->addr1) && 4969 + !ieee80211_is_probe_resp(hdr->frame_control) && 4970 + !ieee80211_is_beacon(hdr->frame_control)) { 4969 4971 /* translate to MLD addresses */ 4970 4972 if (ether_addr_equal(link->conf->addr, hdr->addr1)) 4971 4973 ether_addr_copy(hdr->addr1, rx->sdata->vif.addr);
+4 -4
net/mac80211/tx.c
··· 4445 4445 struct sk_buff *skb) 4446 4446 { 4447 4447 struct ieee80211_sub_if_data *sdata = IEEE80211_DEV_TO_SUB_IF(dev); 4448 - unsigned long links = sdata->vif.valid_links; 4448 + unsigned long links = sdata->vif.active_links; 4449 4449 unsigned int link; 4450 4450 u32 ctrl_flags = IEEE80211_TX_CTRL_MCAST_MLO_FIRST_TX; 4451 4451 ··· 5528 5528 { 5529 5529 struct ieee80211_ema_beacons *ema_beacons = NULL; 5530 5530 5531 - WARN_ON(__ieee80211_beacon_get(hw, vif, NULL, false, link_id, 0, 5531 + WARN_ON(__ieee80211_beacon_get(hw, vif, NULL, true, link_id, 0, 5532 5532 &ema_beacons)); 5533 5533 5534 5534 return ema_beacons; ··· 6040 6040 rcu_read_unlock(); 6041 6041 6042 6042 if (WARN_ON_ONCE(link == ARRAY_SIZE(sdata->vif.link_conf))) 6043 - link = ffs(sdata->vif.valid_links) - 1; 6043 + link = ffs(sdata->vif.active_links) - 1; 6044 6044 } 6045 6045 6046 6046 IEEE80211_SKB_CB(skb)->control.flags |= ··· 6076 6076 band = chanctx_conf->def.chan->band; 6077 6077 } else { 6078 6078 WARN_ON(link_id >= 0 && 6079 - !(sdata->vif.valid_links & BIT(link_id))); 6079 + !(sdata->vif.active_links & BIT(link_id))); 6080 6080 /* MLD transmissions must not rely on the band */ 6081 6081 band = 0; 6082 6082 }
+2 -2
net/mac80211/util.c
··· 5049 5049 return pos; 5050 5050 } 5051 5051 5052 - void ieee80211_fragment_element(struct sk_buff *skb, u8 *len_pos) 5052 + void ieee80211_fragment_element(struct sk_buff *skb, u8 *len_pos, u8 frag_id) 5053 5053 { 5054 5054 unsigned int elem_len; 5055 5055 ··· 5069 5069 memmove(len_pos + 255 + 3, len_pos + 255 + 1, elem_len); 5070 5070 /* place the fragment ID */ 5071 5071 len_pos += 255 + 1; 5072 - *len_pos = WLAN_EID_FRAGMENT; 5072 + *len_pos = frag_id; 5073 5073 /* and point to fragment length to update later */ 5074 5074 len_pos++; 5075 5075 }
+19 -4
net/mptcp/pm.c
··· 87 87 unsigned int subflows_max; 88 88 int ret = 0; 89 89 90 - if (mptcp_pm_is_userspace(msk)) 91 - return mptcp_userspace_pm_active(msk); 90 + if (mptcp_pm_is_userspace(msk)) { 91 + if (mptcp_userspace_pm_active(msk)) { 92 + spin_lock_bh(&pm->lock); 93 + pm->subflows++; 94 + spin_unlock_bh(&pm->lock); 95 + return true; 96 + } 97 + return false; 98 + } 92 99 93 100 subflows_max = mptcp_pm_get_subflows_max(msk); 94 101 ··· 188 181 struct mptcp_pm_data *pm = &msk->pm; 189 182 bool update_subflows; 190 183 191 - update_subflows = (subflow->request_join || subflow->mp_join) && 192 - mptcp_pm_is_kernel(msk); 184 + update_subflows = subflow->request_join || subflow->mp_join; 185 + if (mptcp_pm_is_userspace(msk)) { 186 + if (update_subflows) { 187 + spin_lock_bh(&pm->lock); 188 + pm->subflows--; 189 + spin_unlock_bh(&pm->lock); 190 + } 191 + return; 192 + } 193 + 193 194 if (!READ_ONCE(pm->work_pending) && !update_subflows) 194 195 return; 195 196
+18
net/mptcp/pm_netlink.c
··· 1558 1558 return ret; 1559 1559 } 1560 1560 1561 + void mptcp_pm_remove_addrs(struct mptcp_sock *msk, struct list_head *rm_list) 1562 + { 1563 + struct mptcp_rm_list alist = { .nr = 0 }; 1564 + struct mptcp_pm_addr_entry *entry; 1565 + 1566 + list_for_each_entry(entry, rm_list, list) { 1567 + remove_anno_list_by_saddr(msk, &entry->addr); 1568 + if (alist.nr < MPTCP_RM_IDS_MAX) 1569 + alist.ids[alist.nr++] = entry->addr.id; 1570 + } 1571 + 1572 + if (alist.nr) { 1573 + spin_lock_bh(&msk->pm.lock); 1574 + mptcp_pm_remove_addr(msk, &alist); 1575 + spin_unlock_bh(&msk->pm.lock); 1576 + } 1577 + } 1578 + 1561 1579 void mptcp_pm_remove_addrs_and_subflows(struct mptcp_sock *msk, 1562 1580 struct list_head *rm_list) 1563 1581 {
+47 -1
net/mptcp/pm_userspace.c
··· 69 69 MPTCP_PM_MAX_ADDR_ID + 1, 70 70 1); 71 71 list_add_tail_rcu(&e->list, &msk->pm.userspace_pm_local_addr_list); 72 + msk->pm.local_addr_used++; 72 73 ret = e->addr.id; 73 74 } else if (match) { 74 75 ret = entry->addr.id; ··· 78 77 append_err: 79 78 spin_unlock_bh(&msk->pm.lock); 80 79 return ret; 80 + } 81 + 82 + /* If the subflow is closed from the other peer (not via a 83 + * subflow destroy command then), we want to keep the entry 84 + * not to assign the same ID to another address and to be 85 + * able to send RM_ADDR after the removal of the subflow. 86 + */ 87 + static int mptcp_userspace_pm_delete_local_addr(struct mptcp_sock *msk, 88 + struct mptcp_pm_addr_entry *addr) 89 + { 90 + struct mptcp_pm_addr_entry *entry, *tmp; 91 + 92 + list_for_each_entry_safe(entry, tmp, &msk->pm.userspace_pm_local_addr_list, list) { 93 + if (mptcp_addresses_equal(&entry->addr, &addr->addr, false)) { 94 + /* TODO: a refcount is needed because the entry can 95 + * be used multiple times (e.g. fullmesh mode). 
96 + */ 97 + list_del_rcu(&entry->list); 98 + kfree(entry); 99 + msk->pm.local_addr_used--; 100 + return 0; 101 + } 102 + } 103 + 104 + return -EINVAL; 81 105 } 82 106 83 107 int mptcp_userspace_pm_get_flags_and_ifindex_by_id(struct mptcp_sock *msk, ··· 197 171 spin_lock_bh(&msk->pm.lock); 198 172 199 173 if (mptcp_pm_alloc_anno_list(msk, &addr_val)) { 174 + msk->pm.add_addr_signaled++; 200 175 mptcp_pm_announce_addr(msk, &addr_val.addr, false); 201 176 mptcp_pm_nl_addr_send_ack(msk); 202 177 } ··· 259 232 260 233 list_move(&match->list, &free_list); 261 234 262 - mptcp_pm_remove_addrs_and_subflows(msk, &free_list); 235 + mptcp_pm_remove_addrs(msk, &free_list); 263 236 264 237 release_sock((struct sock *)msk); 265 238 ··· 278 251 struct nlattr *raddr = info->attrs[MPTCP_PM_ATTR_ADDR_REMOTE]; 279 252 struct nlattr *token = info->attrs[MPTCP_PM_ATTR_TOKEN]; 280 253 struct nlattr *laddr = info->attrs[MPTCP_PM_ATTR_ADDR]; 254 + struct mptcp_pm_addr_entry local = { 0 }; 281 255 struct mptcp_addr_info addr_r; 282 256 struct mptcp_addr_info addr_l; 283 257 struct mptcp_sock *msk; ··· 330 302 goto create_err; 331 303 } 332 304 305 + local.addr = addr_l; 306 + err = mptcp_userspace_pm_append_new_local_addr(msk, &local); 307 + if (err < 0) { 308 + GENL_SET_ERR_MSG(info, "did not match address and id"); 309 + goto create_err; 310 + } 311 + 333 312 lock_sock(sk); 334 313 335 314 err = __mptcp_subflow_connect(sk, &addr_l, &addr_r); 336 315 337 316 release_sock(sk); 317 + 318 + spin_lock_bh(&msk->pm.lock); 319 + if (err) 320 + mptcp_userspace_pm_delete_local_addr(msk, &local); 321 + else 322 + msk->pm.subflows++; 323 + spin_unlock_bh(&msk->pm.lock); 338 324 339 325 create_err: 340 326 sock_put((struct sock *)msk); ··· 462 420 ssk = mptcp_nl_find_ssk(msk, &addr_l, &addr_r); 463 421 if (ssk) { 464 422 struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk); 423 + struct mptcp_pm_addr_entry entry = { .addr = addr_l }; 465 424 425 + spin_lock_bh(&msk->pm.lock);
426 + mptcp_userspace_pm_delete_local_addr(msk, &entry); 427 + spin_unlock_bh(&msk->pm.lock); 466 428 mptcp_subflow_shutdown(sk, ssk, RCV_SHUTDOWN | SEND_SHUTDOWN); 467 429 mptcp_close_ssk(sk, ssk, subflow); 468 430 MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_RMSUBFLOW);
+1
net/mptcp/protocol.h
··· 832 832 bool echo); 833 833 int mptcp_pm_remove_addr(struct mptcp_sock *msk, const struct mptcp_rm_list *rm_list); 834 834 int mptcp_pm_remove_subflow(struct mptcp_sock *msk, const struct mptcp_rm_list *rm_list); 835 + void mptcp_pm_remove_addrs(struct mptcp_sock *msk, struct list_head *rm_list); 835 836 void mptcp_pm_remove_addrs_and_subflows(struct mptcp_sock *msk, 836 837 struct list_head *rm_list); 837 838
+8
net/netfilter/ipset/ip_set_core.c
··· 1694 1694 bool eexist = flags & IPSET_FLAG_EXIST, retried = false; 1695 1695 1696 1696 do { 1697 + if (retried) { 1698 + __ip_set_get(set); 1699 + nfnl_unlock(NFNL_SUBSYS_IPSET); 1700 + cond_resched(); 1701 + nfnl_lock(NFNL_SUBSYS_IPSET); 1702 + __ip_set_put(set); 1703 + } 1704 + 1697 1705 ip_set_lock(set); 1698 1706 ret = set->variant->uadt(set, tb, adt, &lineno, flags, retried); 1699 1707 ip_set_unlock(set);
+3
net/netfilter/nf_conntrack_core.c
··· 2260 2260 return 0; 2261 2261 2262 2262 helper = rcu_dereference(help->helper); 2263 + if (!helper) 2264 + return 0; 2265 + 2263 2266 if (!(helper->flags & NF_CT_HELPER_F_USERSPACE)) 2264 2267 return 0; 2265 2268
+10 -3
net/netfilter/nf_flow_table_core.c
··· 317 317 EXPORT_SYMBOL_GPL(flow_offload_add); 318 318 319 319 void flow_offload_refresh(struct nf_flowtable *flow_table, 320 - struct flow_offload *flow) 320 + struct flow_offload *flow, bool force) 321 321 { 322 322 u32 timeout; 323 323 324 324 timeout = nf_flowtable_time_stamp + flow_offload_get_timeout(flow); 325 - if (timeout - READ_ONCE(flow->timeout) > HZ) 325 + if (force || timeout - READ_ONCE(flow->timeout) > HZ) 326 326 WRITE_ONCE(flow->timeout, timeout); 327 327 else 328 328 return; ··· 333 333 nf_flow_offload_add(flow_table, flow); 334 334 } 335 335 EXPORT_SYMBOL_GPL(flow_offload_refresh); 336 + 337 + static bool nf_flow_is_outdated(const struct flow_offload *flow) 338 + { 339 + return test_bit(IPS_SEEN_REPLY_BIT, &flow->ct->status) && 340 + !test_bit(NF_FLOW_HW_ESTABLISHED, &flow->flags); 341 + } 336 342 337 343 static inline bool nf_flow_has_expired(const struct flow_offload *flow) 338 344 { ··· 429 423 struct flow_offload *flow, void *data) 430 424 { 431 425 if (nf_flow_has_expired(flow) || 432 - nf_ct_is_dying(flow->ct)) 426 + nf_ct_is_dying(flow->ct) || 427 + nf_flow_is_outdated(flow)) 433 428 flow_offload_teardown(flow); 434 429 435 430 if (test_bit(NF_FLOW_TEARDOWN, &flow->flags)) {
+2 -2
net/netfilter/nf_flow_table_ip.c
··· 384 384 if (skb_try_make_writable(skb, thoff + hdrsize)) 385 385 return NF_DROP; 386 386 387 - flow_offload_refresh(flow_table, flow); 387 + flow_offload_refresh(flow_table, flow, false); 388 388 389 389 nf_flow_encap_pop(skb, tuplehash); 390 390 thoff -= offset; ··· 650 650 if (skb_try_make_writable(skb, thoff + hdrsize)) 651 651 return NF_DROP; 652 652 653 - flow_offload_refresh(flow_table, flow); 653 + flow_offload_refresh(flow_table, flow, false); 654 654 655 655 nf_flow_encap_pop(skb, tuplehash); 656 656
+61 -2
net/netfilter/nf_tables_api.c
··· 1600 1600 1601 1601 if (nft_base_chain_netdev(family, ops->hooknum)) { 1602 1602 nest_devs = nla_nest_start_noflag(skb, NFTA_HOOK_DEVS); 1603 + if (!nest_devs) 1604 + goto nla_put_failure; 1603 1605 1604 1606 if (!hook_list) 1605 1607 hook_list = &basechain->hook_list; ··· 3844 3842 if (flow) 3845 3843 nft_flow_rule_destroy(flow); 3846 3844 err_release_rule: 3847 - nf_tables_rule_release(&ctx, rule); 3845 + nft_rule_expr_deactivate(&ctx, rule, NFT_TRANS_PREPARE); 3846 + nf_tables_rule_destroy(&ctx, rule); 3848 3847 err_release_expr: 3849 3848 for (i = 0; i < n; i++) { 3850 3849 if (expr_info[i].ops) { ··· 4920 4917 4921 4918 set->num_exprs = num_exprs; 4922 4919 set->handle = nf_tables_alloc_handle(table); 4920 + INIT_LIST_HEAD(&set->pending_update); 4923 4921 4924 4922 err = nft_trans_set_add(&ctx, NFT_MSG_NEWSET, set); 4925 4923 if (err < 0) ··· 9009 9005 continue; 9010 9006 } 9011 9007 9012 - if (WARN_ON_ONCE(data + expr->ops->size > data_boundary)) 9008 + if (WARN_ON_ONCE(data + size + expr->ops->size > data_boundary)) 9013 9009 return -ENOMEM; 9014 9010 9015 9011 memcpy(data + size, expr, expr->ops->size); ··· 9277 9273 } 9278 9274 } 9279 9275 9276 + static void nft_set_commit_update(struct list_head *set_update_list) 9277 + { 9278 + struct nft_set *set, *next; 9279 + 9280 + list_for_each_entry_safe(set, next, set_update_list, pending_update) { 9281 + list_del_init(&set->pending_update); 9282 + 9283 + if (!set->ops->commit) 9284 + continue; 9285 + 9286 + set->ops->commit(set); 9287 + } 9288 + } 9289 + 9280 9290 static int nf_tables_commit(struct net *net, struct sk_buff *skb) 9281 9291 { 9282 9292 struct nftables_pernet *nft_net = nft_pernet(net); 9283 9293 struct nft_trans *trans, *next; 9294 + LIST_HEAD(set_update_list); 9284 9295 struct nft_trans_elem *te; 9285 9296 struct nft_chain *chain; 9286 9297 struct nft_table *table; ··· 9470 9451 nf_tables_setelem_notify(&trans->ctx, te->set, 9471 9452 &te->elem, 9472 9453 NFT_MSG_NEWSETELEM);
9454 + if (te->set->ops->commit && 9455 + list_empty(&te->set->pending_update)) { 9456 + list_add_tail(&te->set->pending_update, 9457 + &set_update_list); 9458 + } 9473 9459 nft_trans_destroy(trans); 9474 9460 break; 9475 9461 case NFT_MSG_DELSETELEM: ··· 9488 9464 if (!nft_setelem_is_catchall(te->set, &te->elem)) { 9489 9465 atomic_dec(&te->set->nelems); 9490 9466 te->set->ndeact--; 9467 + } 9468 + if (te->set->ops->commit && 9469 + list_empty(&te->set->pending_update)) { 9470 + list_add_tail(&te->set->pending_update, 9471 + &set_update_list); 9491 9472 } 9492 9473 break; 9493 9474 case NFT_MSG_NEWOBJ: ··· 9556 9527 } 9557 9528 } 9558 9529 9530 + nft_set_commit_update(&set_update_list); 9531 + 9559 9532 nft_commit_notify(net, NETLINK_CB(skb).portid); 9560 9533 nf_tables_gen_notify(net, skb, NFT_MSG_NEWGEN); 9561 9534 nf_tables_commit_audit_log(&adl, nft_net->base_seq); ··· 9617 9586 kfree(trans); 9618 9587 } 9619 9588 9589 + static void nft_set_abort_update(struct list_head *set_update_list) 9590 + { 9591 + struct nft_set *set, *next; 9592 + 9593 + list_for_each_entry_safe(set, next, set_update_list, pending_update) { 9594 + list_del_init(&set->pending_update); 9595 + 9596 + if (!set->ops->abort) 9597 + continue; 9598 + 9599 + set->ops->abort(set); 9600 + } 9601 + } 9602 + 9620 9603 static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action) 9621 9604 { 9622 9605 struct nftables_pernet *nft_net = nft_pernet(net); 9623 9606 struct nft_trans *trans, *next; 9607 + LIST_HEAD(set_update_list); 9624 9608 struct nft_trans_elem *te; 9625 9609 9626 9610 if (action == NFNL_ABORT_VALIDATE && ··· 9745 9699 nft_setelem_remove(net, te->set, &te->elem); 9746 9700 if (!nft_setelem_is_catchall(te->set, &te->elem)) 9747 9701 atomic_dec(&te->set->nelems); 9702 + 9703 + if (te->set->ops->abort && 9704 + list_empty(&te->set->pending_update)) { 9705 + list_add_tail(&te->set->pending_update, 9706 + &set_update_list); 9707 + } 9748 9708 break; 9749 9709 case NFT_MSG_DELSETELEM:
9750 9710 case NFT_MSG_DESTROYSETELEM: ··· 9761 9709 if (!nft_setelem_is_catchall(te->set, &te->elem)) 9762 9710 te->set->ndeact--; 9763 9711 9712 + if (te->set->ops->abort && 9713 + list_empty(&te->set->pending_update)) { 9714 + list_add_tail(&te->set->pending_update, 9715 + &set_update_list); 9716 + } 9764 9717 nft_trans_destroy(trans); 9765 9718 break; 9766 9719 case NFT_MSG_NEWOBJ: ··· 9807 9750 break; 9808 9751 } 9809 9752 } 9753 + 9754 + nft_set_abort_update(&set_update_list); 9810 9755 9811 9756 synchronize_rcu(); 9812 9757
+2 -1
net/netfilter/nfnetlink.c
··· 533 533 * processed, this avoids that the same error is 534 534 * reported several times when replaying the batch. 535 535 */ 536 - if (nfnl_err_add(&err_list, nlh, err, &extack) < 0) { 536 + if (err == -ENOMEM || 537 + nfnl_err_add(&err_list, nlh, err, &extack) < 0) { 537 538 /* We failed to enqueue an error, reset the 538 539 * list of errors and send OOM to userspace 539 540 * pointing to the batch header.
+1 -1
net/netfilter/nft_bitwise.c
··· 323 323 dreg = priv->dreg; 324 324 regcount = DIV_ROUND_UP(priv->len, NFT_REG32_SIZE); 325 325 for (i = 0; i < regcount; i++, dreg++) 326 - track->regs[priv->dreg].bitwise = expr; 326 + track->regs[dreg].bitwise = expr; 327 327 328 328 return false; 329 329 }
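The nft_bitwise change above is a one-identifier fix: the tracking loop advanced a cursor `dreg` but kept indexing with the fixed start register `priv->dreg`, so only the first register was ever marked. A minimal standalone sketch of the corrected loop shape (names and the plain `int` register array are illustrative, not the kernel's types):

```c
#include <assert.h>
#include <string.h>

#define NREGS 16

/* Toy model of the fix: index with the advancing cursor (dreg), not
 * the fixed start register, so every register covered by the
 * expression gets marked. */
static void mark_regs(int *regs, int dreg, int regcount)
{
	int i;

	for (i = 0; i < regcount; i++, dreg++)
		regs[dreg] = 1;	/* fixed form: regs[dreg], not regs[start] */
}
```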
+40 -15
net/netfilter/nft_set_pipapo.c
··· 1600 1600 } 1601 1601 } 1602 1602 1603 - /** 1604 - * pipapo_reclaim_match - RCU callback to free fields from old matching data 1605 - * @rcu: RCU head 1606 - */ 1607 - static void pipapo_reclaim_match(struct rcu_head *rcu) 1603 + static void pipapo_free_match(struct nft_pipapo_match *m) 1608 1604 { 1609 - struct nft_pipapo_match *m; 1610 1605 int i; 1611 - 1612 - m = container_of(rcu, struct nft_pipapo_match, rcu); 1613 1606 1614 1607 for_each_possible_cpu(i) 1615 1608 kfree(*per_cpu_ptr(m->scratch, i)); ··· 1618 1625 } 1619 1626 1620 1627 /** 1621 - * pipapo_commit() - Replace lookup data with current working copy 1628 + * pipapo_reclaim_match - RCU callback to free fields from old matching data 1629 + * @rcu: RCU head 1630 + */ 1631 + static void pipapo_reclaim_match(struct rcu_head *rcu) 1632 + { 1633 + struct nft_pipapo_match *m; 1634 + 1635 + m = container_of(rcu, struct nft_pipapo_match, rcu); 1636 + pipapo_free_match(m); 1637 + } 1638 + 1639 + /** 1640 + * nft_pipapo_commit() - Replace lookup data with current working copy 1622 1641 * @set: nftables API set representation 1623 1642 * 1624 1643 * While at it, check if we should perform garbage collection on the working ··· 1640 1635 * We also need to create a new working copy for subsequent insertions and 1641 1636 * deletions. 
1642 1637 */ 1643 - static void pipapo_commit(const struct nft_set *set) 1638 + static void nft_pipapo_commit(const struct nft_set *set) 1644 1639 { 1645 1640 struct nft_pipapo *priv = nft_set_priv(set); 1646 1641 struct nft_pipapo_match *new_clone, *old; ··· 1665 1660 priv->clone = new_clone; 1666 1661 } 1667 1662 1663 + static void nft_pipapo_abort(const struct nft_set *set) 1664 + { 1665 + struct nft_pipapo *priv = nft_set_priv(set); 1666 + struct nft_pipapo_match *new_clone, *m; 1667 + 1668 + if (!priv->dirty) 1669 + return; 1670 + 1671 + m = rcu_dereference(priv->match); 1672 + 1673 + new_clone = pipapo_clone(m); 1674 + if (IS_ERR(new_clone)) 1675 + return; 1676 + 1677 + priv->dirty = false; 1678 + 1679 + pipapo_free_match(priv->clone); 1680 + priv->clone = new_clone; 1681 + } 1682 + 1668 1683 /** 1669 1684 * nft_pipapo_activate() - Mark element reference as active given key, commit 1670 1685 * @net: Network namespace ··· 1692 1667 * @elem: nftables API element representation containing key data 1693 1668 * 1694 1669 * On insertion, elements are added to a copy of the matching data currently 1695 - * in use for lookups, and not directly inserted into current lookup data, so 1696 - * we'll take care of that by calling pipapo_commit() here. Both 1670 + * in use for lookups, and not directly inserted into current lookup data. Both 1697 1671 * nft_pipapo_insert() and nft_pipapo_activate() are called once for each 1698 1672 * element, hence we can't purpose either one as a real commit operation. 
1699 1673 */ ··· 1708 1684 1709 1685 nft_set_elem_change_active(net, set, &e->ext); 1710 1686 nft_set_elem_clear_busy(&e->ext); 1711 - 1712 - pipapo_commit(set); 1713 1687 } 1714 1688 1715 1689 /** ··· 1953 1931 if (i == m->field_count) { 1954 1932 priv->dirty = true; 1955 1933 pipapo_drop(m, rulemap); 1956 - pipapo_commit(set); 1957 1934 return; 1958 1935 } 1959 1936 ··· 2251 2230 .init = nft_pipapo_init, 2252 2231 .destroy = nft_pipapo_destroy, 2253 2232 .gc_init = nft_pipapo_gc_init, 2233 + .commit = nft_pipapo_commit, 2234 + .abort = nft_pipapo_abort, 2254 2235 .elemsize = offsetof(struct nft_pipapo_elem, ext), 2255 2236 }, 2256 2237 }; ··· 2275 2252 .init = nft_pipapo_init, 2276 2253 .destroy = nft_pipapo_destroy, 2277 2254 .gc_init = nft_pipapo_gc_init, 2255 + .commit = nft_pipapo_commit, 2256 + .abort = nft_pipapo_abort, 2278 2257 .elemsize = offsetof(struct nft_pipapo_elem, ext), 2279 2258 }, 2280 2259 };
+2 -1
net/netlabel/netlabel_kapi.c
··· 857 857 858 858 offset -= iter->startbit; 859 859 idx = offset / NETLBL_CATMAP_MAPSIZE; 860 - iter->bitmap[idx] |= bitmap << (offset % NETLBL_CATMAP_MAPSIZE); 860 + iter->bitmap[idx] |= (NETLBL_CATMAP_MAPTYPE)bitmap 861 + << (offset % NETLBL_CATMAP_MAPSIZE); 861 862 862 863 return 0; 863 864 }
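The netlabel cast above matters because shifting a 32-bit value by 32 or more bits is undefined behaviour in C and in practice drops the set bits; widening to the map word type before the shift preserves them. A minimal sketch of the bug class, assuming a 64-bit map word (the `maptype` alias stands in for `NETLBL_CATMAP_MAPTYPE` and is an assumption of this sketch):

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t maptype;	/* stand-in for NETLBL_CATMAP_MAPTYPE */

/* Widen first, then shift: without the cast, (uint32_t)bitmap << 40
 * would be undefined and the bits would be lost. */
static maptype catmap_word(uint32_t bitmap, unsigned int offset)
{
	return (maptype)bitmap << offset;
}
```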
-19
net/openvswitch/datapath.c
··· 236 236 /* First drop references to device. */ 237 237 hlist_del_rcu(&p->dp_hash_node); 238 238 239 - /* Free percpu memory */ 240 - free_percpu(p->upcall_stats); 241 - 242 239 /* Then destroy it. */ 243 240 ovs_vport_del(p); 244 241 } ··· 1855 1858 goto err_destroy_portids; 1856 1859 } 1857 1860 1858 - vport->upcall_stats = netdev_alloc_pcpu_stats(struct vport_upcall_stats_percpu); 1859 - if (!vport->upcall_stats) { 1860 - err = -ENOMEM; 1861 - goto err_destroy_vport; 1862 - } 1863 - 1864 1861 err = ovs_dp_cmd_fill_info(dp, reply, info->snd_portid, 1865 1862 info->snd_seq, 0, OVS_DP_CMD_NEW); 1866 1863 BUG_ON(err < 0); ··· 1867 1876 ovs_notify(&dp_datapath_genl_family, reply, info); 1868 1877 return 0; 1869 1878 1870 - err_destroy_vport: 1871 - ovs_dp_detach_port(vport); 1872 1879 err_destroy_portids: 1873 1880 kfree(rcu_dereference_raw(dp->upcall_portids)); 1874 1881 err_unlock_and_destroy_meters: ··· 2311 2322 goto exit_unlock_free; 2312 2323 } 2313 2324 2314 - vport->upcall_stats = netdev_alloc_pcpu_stats(struct vport_upcall_stats_percpu); 2315 - if (!vport->upcall_stats) { 2316 - err = -ENOMEM; 2317 - goto exit_unlock_free_vport; 2318 - } 2319 - 2320 2325 err = ovs_vport_cmd_fill_info(vport, reply, genl_info_net(info), 2321 2326 info->snd_portid, info->snd_seq, 0, 2322 2327 OVS_VPORT_CMD_NEW, GFP_KERNEL); ··· 2328 2345 ovs_notify(&dp_vport_genl_family, reply, info); 2329 2346 return 0; 2330 2347 2331 - exit_unlock_free_vport: 2332 - ovs_dp_detach_port(vport); 2333 2348 exit_unlock_free: 2334 2349 ovs_unlock(); 2335 2350 kfree_skb(reply);
+16 -2
net/openvswitch/vport.c
··· 124 124 { 125 125 struct vport *vport; 126 126 size_t alloc_size; 127 + int err; 127 128 128 129 alloc_size = sizeof(struct vport); 129 130 if (priv_size) { ··· 136 135 if (!vport) 137 136 return ERR_PTR(-ENOMEM); 138 137 138 + vport->upcall_stats = netdev_alloc_pcpu_stats(struct vport_upcall_stats_percpu); 139 + if (!vport->upcall_stats) { 140 + err = -ENOMEM; 141 + goto err_kfree_vport; 142 + } 143 + 139 144 vport->dp = parms->dp; 140 145 vport->port_no = parms->port_no; 141 146 vport->ops = ops; 142 147 INIT_HLIST_NODE(&vport->dp_hash_node); 143 148 144 149 if (ovs_vport_set_upcall_portids(vport, parms->upcall_portids)) { 145 - kfree(vport); 146 - return ERR_PTR(-EINVAL); 150 + err = -EINVAL; 151 + goto err_free_percpu; 147 152 } 148 153 149 154 return vport; 155 + 156 + err_free_percpu: 157 + free_percpu(vport->upcall_stats); 158 + err_kfree_vport: 159 + kfree(vport); 160 + return ERR_PTR(err); 150 161 } 151 162 EXPORT_SYMBOL_GPL(ovs_vport_alloc); 152 163 ··· 178 165 * it is safe to use raw dereference. 179 166 */ 180 167 kfree(rcu_dereference_raw(vport->upcall_portids)); 168 + free_percpu(vport->upcall_stats); 181 169 kfree(vport); 182 170 } 183 171 EXPORT_SYMBOL_GPL(ovs_vport_free);
+8 -1
net/sched/act_ct.c
··· 610 610 struct flow_offload_tuple tuple = {}; 611 611 enum ip_conntrack_info ctinfo; 612 612 struct tcphdr *tcph = NULL; 613 + bool force_refresh = false; 613 614 struct flow_offload *flow; 614 615 struct nf_conn *ct; 615 616 u8 dir; ··· 648 647 * established state, then don't refresh. 649 648 */ 650 649 return false; 650 + force_refresh = true; 651 651 } 652 652 653 653 if (tcph && (unlikely(tcph->fin || tcph->rst))) { ··· 662 660 else 663 661 ctinfo = IP_CT_ESTABLISHED_REPLY; 664 662 665 - flow_offload_refresh(nf_ft, flow); 663 + flow_offload_refresh(nf_ft, flow, force_refresh); 664 + if (!test_bit(IPS_ASSURED_BIT, &ct->status)) { 665 + /* Process this flow in SW to allow promoting to ASSURED */ 666 + return false; 667 + } 668 + 666 669 nf_conntrack_get(&ct->ct_general); 667 670 nf_ct_set(skb, ct, ctinfo); 668 671 if (nf_ft->flags & NF_FLOWTABLE_COUNTER)
+43 -5
net/sched/act_pedit.c
··· 13 13 #include <linux/rtnetlink.h> 14 14 #include <linux/module.h> 15 15 #include <linux/init.h> 16 + #include <linux/ip.h> 17 + #include <linux/ipv6.h> 16 18 #include <linux/slab.h> 19 + #include <net/ipv6.h> 17 20 #include <net/netlink.h> 18 21 #include <net/pkt_sched.h> 19 22 #include <linux/tc_act/tc_pedit.h> ··· 330 327 return true; 331 328 } 332 329 333 - static void pedit_skb_hdr_offset(struct sk_buff *skb, 330 + static int pedit_l4_skb_offset(struct sk_buff *skb, int *hoffset, const int header_type) 331 + { 332 + const int noff = skb_network_offset(skb); 333 + int ret = -EINVAL; 334 + struct iphdr _iph; 335 + 336 + switch (skb->protocol) { 337 + case htons(ETH_P_IP): { 338 + const struct iphdr *iph = skb_header_pointer(skb, noff, sizeof(_iph), &_iph); 339 + 340 + if (!iph) 341 + goto out; 342 + *hoffset = noff + iph->ihl * 4; 343 + ret = 0; 344 + break; 345 + } 346 + case htons(ETH_P_IPV6): 347 + ret = ipv6_find_hdr(skb, hoffset, header_type, NULL, NULL) == header_type ? 0 : -EINVAL; 348 + break; 349 + } 350 + out: 351 + return ret; 352 + } 353 + 354 + static int pedit_skb_hdr_offset(struct sk_buff *skb, 334 355 enum pedit_header_type htype, int *hoffset) 335 356 { 357 + int ret = -EINVAL; 336 358 /* 'htype' is validated in the netlink parsing */ 337 359 switch (htype) { 338 360 case TCA_PEDIT_KEY_EX_HDR_TYPE_ETH: 339 - if (skb_mac_header_was_set(skb)) 361 + if (skb_mac_header_was_set(skb)) { 340 362 *hoffset = skb_mac_offset(skb); 363 + ret = 0; 364 + } 341 365 break; 342 366 case TCA_PEDIT_KEY_EX_HDR_TYPE_NETWORK: 343 367 case TCA_PEDIT_KEY_EX_HDR_TYPE_IP4: 344 368 case TCA_PEDIT_KEY_EX_HDR_TYPE_IP6: 345 369 *hoffset = skb_network_offset(skb); 370 + ret = 0; 346 371 break; 347 372 case TCA_PEDIT_KEY_EX_HDR_TYPE_TCP: 373 + ret = pedit_l4_skb_offset(skb, hoffset, IPPROTO_TCP); 374 + break; 348 375 case TCA_PEDIT_KEY_EX_HDR_TYPE_UDP: 349 - if (skb_transport_header_was_set(skb)) 350 - *hoffset = skb_transport_offset(skb); 376 + ret = 
pedit_l4_skb_offset(skb, hoffset, IPPROTO_UDP); 351 377 break; 352 378 default: 353 379 break; 354 380 } 381 + return ret; 355 382 } 356 383 357 384 TC_INDIRECT_SCOPE int tcf_pedit_act(struct sk_buff *skb, ··· 417 384 int hoffset = 0; 418 385 u32 *ptr, hdata; 419 386 u32 val; 387 + int rc; 420 388 421 389 if (tkey_ex) { 422 390 htype = tkey_ex->htype; ··· 426 392 tkey_ex++; 427 393 } 428 394 429 - pedit_skb_hdr_offset(skb, htype, &hoffset); 395 + rc = pedit_skb_hdr_offset(skb, htype, &hoffset); 396 + if (rc) { 397 + pr_info_ratelimited("tc action pedit unable to extract header offset for header type (0x%x)\n", htype); 398 + goto bad; 399 + } 430 400 431 401 if (tkey->offmask) { 432 402 u8 *d, _d;
+5 -5
net/sched/act_police.c
··· 357 357 opt.burst = PSCHED_NS2TICKS(p->tcfp_burst); 358 358 if (p->rate_present) { 359 359 psched_ratecfg_getrate(&opt.rate, &p->rate); 360 - if ((police->params->rate.rate_bytes_ps >= (1ULL << 32)) && 360 + if ((p->rate.rate_bytes_ps >= (1ULL << 32)) && 361 361 nla_put_u64_64bit(skb, TCA_POLICE_RATE64, 362 - police->params->rate.rate_bytes_ps, 362 + p->rate.rate_bytes_ps, 363 363 TCA_POLICE_PAD)) 364 364 goto nla_put_failure; 365 365 } 366 366 if (p->peak_present) { 367 367 psched_ratecfg_getrate(&opt.peakrate, &p->peak); 368 - if ((police->params->peak.rate_bytes_ps >= (1ULL << 32)) && 368 + if ((p->peak.rate_bytes_ps >= (1ULL << 32)) && 369 369 nla_put_u64_64bit(skb, TCA_POLICE_PEAKRATE64, 370 - police->params->peak.rate_bytes_ps, 370 + p->peak.rate_bytes_ps, 371 371 TCA_POLICE_PAD)) 372 372 goto nla_put_failure; 373 373 } 374 374 if (p->pps_present) { 375 375 if (nla_put_u64_64bit(skb, TCA_POLICE_PKTRATE64, 376 - police->params->ppsrate.rate_pkts_ps, 376 + p->ppsrate.rate_pkts_ps, 377 377 TCA_POLICE_PAD)) 378 378 goto nla_put_failure; 379 379 if (nla_put_u64_64bit(skb, TCA_POLICE_PKTBURST64,
+8 -7
net/sched/cls_api.c
··· 43 43 #include <net/flow_offload.h> 44 44 #include <net/tc_wrapper.h> 45 45 46 - extern const struct nla_policy rtm_tca_policy[TCA_MAX + 1]; 47 - 48 46 /* The list of all installed classifier types */ 49 47 static LIST_HEAD(tcf_proto_base); 50 48 ··· 657 659 { 658 660 struct tcf_block *block = chain->block; 659 661 const struct tcf_proto_ops *tmplt_ops; 662 + unsigned int refcnt, non_act_refcnt; 660 663 bool free_block = false; 661 - unsigned int refcnt; 662 664 void *tmplt_priv; 663 665 664 666 mutex_lock(&block->lock); ··· 678 680 * save these to temporary variables. 679 681 */ 680 682 refcnt = --chain->refcnt; 683 + non_act_refcnt = refcnt - chain->action_refcnt; 681 684 tmplt_ops = chain->tmplt_ops; 682 685 tmplt_priv = chain->tmplt_priv; 683 686 684 - /* The last dropped non-action reference will trigger notification. */ 685 - if (refcnt - chain->action_refcnt == 0 && !by_act) { 686 - tc_chain_notify_delete(tmplt_ops, tmplt_priv, chain->index, 687 - block, NULL, 0, 0, false); 687 + if (non_act_refcnt == chain->explicitly_created && !by_act) { 688 + if (non_act_refcnt == 0) 689 + tc_chain_notify_delete(tmplt_ops, tmplt_priv, 690 + chain->index, block, NULL, 0, 0, 691 + false); 688 692 /* Last reference to chain, no need to lock. */ 689 693 chain->flushing = false; 690 694 } ··· 2952 2952 return PTR_ERR(ops); 2953 2953 if (!ops->tmplt_create || !ops->tmplt_destroy || !ops->tmplt_dump) { 2954 2954 NL_SET_ERR_MSG(extack, "Chain templates are not supported with specified classifier"); 2955 + module_put(ops->owner); 2955 2956 return -EOPNOTSUPP; 2956 2957 } 2957 2958
+10 -8
net/sched/cls_u32.c
··· 718 718 struct nlattr *est, u32 flags, u32 fl_flags, 719 719 struct netlink_ext_ack *extack) 720 720 { 721 - int err; 721 + int err, ifindex = -1; 722 722 723 723 err = tcf_exts_validate_ex(net, tp, tb, est, &n->exts, flags, 724 724 fl_flags, extack); 725 725 if (err < 0) 726 726 return err; 727 + 728 + if (tb[TCA_U32_INDEV]) { 729 + ifindex = tcf_change_indev(net, tb[TCA_U32_INDEV], extack); 730 + if (ifindex < 0) 731 + return -EINVAL; 732 + } 727 733 728 734 if (tb[TCA_U32_LINK]) { 729 735 u32 handle = nla_get_u32(tb[TCA_U32_LINK]); ··· 765 759 tcf_bind_filter(tp, &n->res, base); 766 760 } 767 761 768 - if (tb[TCA_U32_INDEV]) { 769 - int ret; 770 - ret = tcf_change_indev(net, tb[TCA_U32_INDEV], extack); 771 - if (ret < 0) 772 - return -EINVAL; 773 - n->ifindex = ret; 774 - } 762 + if (ifindex >= 0) 763 + n->ifindex = ifindex; 764 + 775 765 return 0; 776 766 } 777 767
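The cls_u32 reordering above moves the `TCA_U32_INDEV` lookup ahead of any state changes, so a bad ifindex can no longer leave the filter node half modified. The validate-then-commit pattern, reduced to a toy (struct, names, and return values are illustrative only):

```c
#include <assert.h>

struct toy_knode {
	int ifindex;
	int link;
};

/* Validate every input before mutating the node; on error the node
 * is untouched. */
static int toy_set_parms(struct toy_knode *n, int ifindex, int link)
{
	if (ifindex < 0)	/* validate first ... */
		return -1;
	n->link = link;		/* ... mutate only after all checks pass */
	n->ifindex = ifindex;
	return 0;
}
```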
+49 -23
net/sched/sch_api.c
··· 309 309 310 310 if (dev_ingress_queue(dev)) 311 311 q = qdisc_match_from_root( 312 - dev_ingress_queue(dev)->qdisc_sleeping, 312 + rtnl_dereference(dev_ingress_queue(dev)->qdisc_sleeping), 313 313 handle); 314 314 out: 315 315 return q; ··· 328 328 329 329 nq = dev_ingress_queue_rcu(dev); 330 330 if (nq) 331 - q = qdisc_match_from_root(nq->qdisc_sleeping, handle); 331 + q = qdisc_match_from_root(rcu_dereference(nq->qdisc_sleeping), 332 + handle); 332 333 out: 333 334 return q; 334 335 } ··· 635 634 void qdisc_watchdog_schedule_range_ns(struct qdisc_watchdog *wd, u64 expires, 636 635 u64 delta_ns) 637 636 { 638 - if (test_bit(__QDISC_STATE_DEACTIVATED, 639 - &qdisc_root_sleeping(wd->qdisc)->state)) 637 + bool deactivated; 638 + 639 + rcu_read_lock(); 640 + deactivated = test_bit(__QDISC_STATE_DEACTIVATED, 641 + &qdisc_root_sleeping(wd->qdisc)->state); 642 + rcu_read_unlock(); 643 + if (deactivated) 640 644 return; 641 645 642 646 if (hrtimer_is_queued(&wd->timer)) { ··· 1079 1073 1080 1074 if (parent == NULL) { 1081 1075 unsigned int i, num_q, ingress; 1076 + struct netdev_queue *dev_queue; 1082 1077 1083 1078 ingress = 0; 1084 1079 num_q = dev->num_tx_queues; 1085 1080 if ((q && q->flags & TCQ_F_INGRESS) || 1086 1081 (new && new->flags & TCQ_F_INGRESS)) { 1087 - num_q = 1; 1088 1082 ingress = 1; 1089 - if (!dev_ingress_queue(dev)) { 1083 + dev_queue = dev_ingress_queue(dev); 1084 + if (!dev_queue) { 1090 1085 NL_SET_ERR_MSG(extack, "Device does not have an ingress queue"); 1091 1086 return -ENOENT; 1087 + } 1088 + 1089 + q = rtnl_dereference(dev_queue->qdisc_sleeping); 1090 + 1091 + /* This is the counterpart of that qdisc_refcount_inc_nz() call in 1092 + * __tcf_qdisc_find() for filter requests. 
1093 + */ 1094 + if (!qdisc_refcount_dec_if_one(q)) { 1095 + NL_SET_ERR_MSG(extack, 1096 + "Current ingress or clsact Qdisc has ongoing filter requests"); 1097 + return -EBUSY; 1092 1098 } 1093 1099 } 1094 1100 ··· 1112 1094 if (new && new->ops->attach && !ingress) 1113 1095 goto skip; 1114 1096 1115 - for (i = 0; i < num_q; i++) { 1116 - struct netdev_queue *dev_queue = dev_ingress_queue(dev); 1117 - 1118 - if (!ingress) 1097 + if (!ingress) { 1098 + for (i = 0; i < num_q; i++) { 1119 1099 dev_queue = netdev_get_tx_queue(dev, i); 1100 + old = dev_graft_qdisc(dev_queue, new); 1120 1101 1121 - old = dev_graft_qdisc(dev_queue, new); 1122 - if (new && i > 0) 1123 - qdisc_refcount_inc(new); 1124 - 1125 - if (!ingress) 1102 + if (new && i > 0) 1103 + qdisc_refcount_inc(new); 1126 1104 qdisc_put(old); 1105 + } 1106 + } else { 1107 + old = dev_graft_qdisc(dev_queue, NULL); 1108 + 1109 + /* {ingress,clsact}_destroy() @old before grafting @new to avoid 1110 + * unprotected concurrent accesses to net_device::miniq_{in,e}gress 1111 + * pointer(s) in mini_qdisc_pair_swap(). 
1112 + */ 1113 + qdisc_notify(net, skb, n, classid, old, new, extack); 1114 + qdisc_destroy(old); 1115 + 1116 + dev_graft_qdisc(dev_queue, new); 1127 1117 } 1128 1118 1129 1119 skip: ··· 1145 1119 1146 1120 if (new && new->ops->attach) 1147 1121 new->ops->attach(new); 1148 - } else { 1149 - notify_and_destroy(net, skb, n, classid, old, new, extack); 1150 1122 } 1151 1123 1152 1124 if (dev->flags & IFF_UP) ··· 1502 1478 } 1503 1479 q = qdisc_leaf(p, clid); 1504 1480 } else if (dev_ingress_queue(dev)) { 1505 - q = dev_ingress_queue(dev)->qdisc_sleeping; 1481 + q = rtnl_dereference(dev_ingress_queue(dev)->qdisc_sleeping); 1506 1482 } 1507 1483 } else { 1508 1484 q = rtnl_dereference(dev->qdisc); ··· 1588 1564 } 1589 1565 q = qdisc_leaf(p, clid); 1590 1566 } else if (dev_ingress_queue_create(dev)) { 1591 - q = dev_ingress_queue(dev)->qdisc_sleeping; 1567 + q = rtnl_dereference(dev_ingress_queue(dev)->qdisc_sleeping); 1592 1568 } 1593 1569 } else { 1594 1570 q = rtnl_dereference(dev->qdisc); ··· 1829 1805 1830 1806 dev_queue = dev_ingress_queue(dev); 1831 1807 if (dev_queue && 1832 - tc_dump_qdisc_root(dev_queue->qdisc_sleeping, skb, cb, 1833 - &q_idx, s_q_idx, false, 1808 + tc_dump_qdisc_root(rtnl_dereference(dev_queue->qdisc_sleeping), 1809 + skb, cb, &q_idx, s_q_idx, false, 1834 1810 tca[TCA_DUMP_INVISIBLE]) < 0) 1835 1811 goto done; 1836 1812 ··· 2273 2249 2274 2250 dev_queue = dev_ingress_queue(dev); 2275 2251 if (dev_queue && 2276 - tc_dump_tclass_root(dev_queue->qdisc_sleeping, skb, tcm, cb, 2277 - &t, s_t, false) < 0) 2252 + tc_dump_tclass_root(rtnl_dereference(dev_queue->qdisc_sleeping), 2253 + skb, tcm, cb, &t, s_t, false) < 0) 2278 2254 goto done; 2279 2255 2280 2256 done: ··· 2326 2302 .exit = psched_net_exit, 2327 2303 }; 2328 2304 2305 + #if IS_ENABLED(CONFIG_RETPOLINE) 2329 2306 DEFINE_STATIC_KEY_FALSE(tc_skip_wrapper); 2307 + #endif 2330 2308 2331 2309 static int __init pktsched_init(void) 2332 2310 {
+9 -1
net/sched/sch_fq_pie.c
··· 201 201 return NET_XMIT_CN; 202 202 } 203 203 204 + static struct netlink_range_validation fq_pie_q_range = { 205 + .min = 1, 206 + .max = 1 << 20, 207 + }; 208 + 204 209 static const struct nla_policy fq_pie_policy[TCA_FQ_PIE_MAX + 1] = { 205 210 [TCA_FQ_PIE_LIMIT] = {.type = NLA_U32}, 206 211 [TCA_FQ_PIE_FLOWS] = {.type = NLA_U32}, ··· 213 208 [TCA_FQ_PIE_TUPDATE] = {.type = NLA_U32}, 214 209 [TCA_FQ_PIE_ALPHA] = {.type = NLA_U32}, 215 210 [TCA_FQ_PIE_BETA] = {.type = NLA_U32}, 216 - [TCA_FQ_PIE_QUANTUM] = {.type = NLA_U32}, 211 + [TCA_FQ_PIE_QUANTUM] = 212 + NLA_POLICY_FULL_RANGE(NLA_U32, &fq_pie_q_range), 217 213 [TCA_FQ_PIE_MEMORY_LIMIT] = {.type = NLA_U32}, 218 214 [TCA_FQ_PIE_ECN_PROB] = {.type = NLA_U32}, 219 215 [TCA_FQ_PIE_ECN] = {.type = NLA_U32}, ··· 379 373 spinlock_t *root_lock; /* to lock qdisc for probability calculations */ 380 374 u32 idx; 381 375 376 + rcu_read_lock(); 382 377 root_lock = qdisc_lock(qdisc_root_sleeping(sch)); 383 378 spin_lock(root_lock); 384 379 ··· 392 385 mod_timer(&q->adapt_timer, jiffies + q->p_params.tupdate); 393 386 394 387 spin_unlock(root_lock); 388 + rcu_read_unlock(); 395 389 } 396 390 397 391 static int fq_pie_init(struct Qdisc *sch, struct nlattr *opt,
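The fq_pie policy change above rejects out-of-range `TCA_FQ_PIE_QUANTUM` values at netlink-parse time instead of accepting them (notably 0, which a plain `NLA_U32` policy allowed). The equivalent predicate, with the same [1, 1 << 20] bounds as the `netlink_range_validation` added above:

```c
#include <assert.h>
#include <stdbool.h>

/* Same bounds as the fq_pie_q_range policy: quantum must be in
 * [1, 1 << 20]; anything else is rejected before it reaches the
 * qdisc. */
static bool fq_pie_quantum_ok(unsigned int quantum)
{
	return quantum >= 1 && quantum <= (1u << 20);
}
```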
+26 -18
net/sched/sch_generic.c
··· 648 648 649 649 static struct netdev_queue noop_netdev_queue = { 650 650 RCU_POINTER_INITIALIZER(qdisc, &noop_qdisc), 651 - .qdisc_sleeping = &noop_qdisc, 651 + RCU_POINTER_INITIALIZER(qdisc_sleeping, &noop_qdisc), 652 652 }; 653 653 654 654 struct Qdisc noop_qdisc = { ··· 1046 1046 qdisc_free(q); 1047 1047 } 1048 1048 1049 - static void qdisc_destroy(struct Qdisc *qdisc) 1049 + static void __qdisc_destroy(struct Qdisc *qdisc) 1050 1050 { 1051 1051 const struct Qdisc_ops *ops = qdisc->ops; 1052 1052 ··· 1070 1070 call_rcu(&qdisc->rcu, qdisc_free_cb); 1071 1071 } 1072 1072 1073 + void qdisc_destroy(struct Qdisc *qdisc) 1074 + { 1075 + if (qdisc->flags & TCQ_F_BUILTIN) 1076 + return; 1077 + 1078 + __qdisc_destroy(qdisc); 1079 + } 1080 + 1073 1081 void qdisc_put(struct Qdisc *qdisc) 1074 1082 { 1075 1083 if (!qdisc) ··· 1087 1079 !refcount_dec_and_test(&qdisc->refcnt)) 1088 1080 return; 1089 1081 1090 - qdisc_destroy(qdisc); 1082 + __qdisc_destroy(qdisc); 1091 1083 } 1092 1084 EXPORT_SYMBOL(qdisc_put); 1093 1085 ··· 1102 1094 !refcount_dec_and_rtnl_lock(&qdisc->refcnt)) 1103 1095 return; 1104 1096 1105 - qdisc_destroy(qdisc); 1097 + __qdisc_destroy(qdisc); 1106 1098 rtnl_unlock(); 1107 1099 } 1108 1100 EXPORT_SYMBOL(qdisc_put_unlocked); ··· 1111 1103 struct Qdisc *dev_graft_qdisc(struct netdev_queue *dev_queue, 1112 1104 struct Qdisc *qdisc) 1113 1105 { 1114 - struct Qdisc *oqdisc = dev_queue->qdisc_sleeping; 1106 + struct Qdisc *oqdisc = rtnl_dereference(dev_queue->qdisc_sleeping); 1115 1107 spinlock_t *root_lock; 1116 1108 1117 1109 root_lock = qdisc_lock(oqdisc); ··· 1120 1112 /* ... 
and graft new one */ 1121 1113 if (qdisc == NULL) 1122 1114 qdisc = &noop_qdisc; 1123 - dev_queue->qdisc_sleeping = qdisc; 1115 + rcu_assign_pointer(dev_queue->qdisc_sleeping, qdisc); 1124 1116 rcu_assign_pointer(dev_queue->qdisc, &noop_qdisc); 1125 1117 1126 1118 spin_unlock_bh(root_lock); ··· 1133 1125 struct netdev_queue *dev_queue, 1134 1126 void *_qdisc_default) 1135 1127 { 1136 - struct Qdisc *qdisc = dev_queue->qdisc_sleeping; 1128 + struct Qdisc *qdisc = rtnl_dereference(dev_queue->qdisc_sleeping); 1137 1129 struct Qdisc *qdisc_default = _qdisc_default; 1138 1130 1139 1131 if (qdisc) { 1140 1132 rcu_assign_pointer(dev_queue->qdisc, qdisc_default); 1141 - dev_queue->qdisc_sleeping = qdisc_default; 1133 + rcu_assign_pointer(dev_queue->qdisc_sleeping, qdisc_default); 1142 1134 1143 1135 qdisc_put(qdisc); 1144 1136 } ··· 1162 1154 1163 1155 if (!netif_is_multiqueue(dev)) 1164 1156 qdisc->flags |= TCQ_F_ONETXQUEUE | TCQ_F_NOPARENT; 1165 - dev_queue->qdisc_sleeping = qdisc; 1157 + rcu_assign_pointer(dev_queue->qdisc_sleeping, qdisc); 1166 1158 } 1167 1159 1168 1160 static void attach_default_qdiscs(struct net_device *dev) ··· 1175 1167 if (!netif_is_multiqueue(dev) || 1176 1168 dev->priv_flags & IFF_NO_QUEUE) { 1177 1169 netdev_for_each_tx_queue(dev, attach_one_default_qdisc, NULL); 1178 - qdisc = txq->qdisc_sleeping; 1170 + qdisc = rtnl_dereference(txq->qdisc_sleeping); 1179 1171 rcu_assign_pointer(dev->qdisc, qdisc); 1180 1172 qdisc_refcount_inc(qdisc); 1181 1173 } else { ··· 1194 1186 netdev_for_each_tx_queue(dev, shutdown_scheduler_queue, &noop_qdisc); 1195 1187 dev->priv_flags |= IFF_NO_QUEUE; 1196 1188 netdev_for_each_tx_queue(dev, attach_one_default_qdisc, NULL); 1197 - qdisc = txq->qdisc_sleeping; 1189 + qdisc = rtnl_dereference(txq->qdisc_sleeping); 1198 1190 rcu_assign_pointer(dev->qdisc, qdisc); 1199 1191 qdisc_refcount_inc(qdisc); 1200 1192 dev->priv_flags ^= IFF_NO_QUEUE; ··· 1210 1202 struct netdev_queue *dev_queue, 1211 1203 void *_need_watchdog) 
1212 1204 { 1213 - struct Qdisc *new_qdisc = dev_queue->qdisc_sleeping; 1205 + struct Qdisc *new_qdisc = rtnl_dereference(dev_queue->qdisc_sleeping); 1214 1206 int *need_watchdog_p = _need_watchdog; 1215 1207 1216 1208 if (!(new_qdisc->flags & TCQ_F_BUILTIN)) ··· 1280 1272 struct Qdisc *qdisc; 1281 1273 bool nolock; 1282 1274 1283 - qdisc = dev_queue->qdisc_sleeping; 1275 + qdisc = rtnl_dereference(dev_queue->qdisc_sleeping); 1284 1276 if (!qdisc) 1285 1277 return; 1286 1278 ··· 1311 1303 int val; 1312 1304 1313 1305 dev_queue = netdev_get_tx_queue(dev, i); 1314 - q = dev_queue->qdisc_sleeping; 1306 + q = rtnl_dereference(dev_queue->qdisc_sleeping); 1315 1307 1316 1308 root_lock = qdisc_lock(q); 1317 1309 spin_lock_bh(root_lock); ··· 1387 1379 static int qdisc_change_tx_queue_len(struct net_device *dev, 1388 1380 struct netdev_queue *dev_queue) 1389 1381 { 1390 - struct Qdisc *qdisc = dev_queue->qdisc_sleeping; 1382 + struct Qdisc *qdisc = rtnl_dereference(dev_queue->qdisc_sleeping); 1391 1383 const struct Qdisc_ops *ops = qdisc->ops; 1392 1384 1393 1385 if (ops->change_tx_queue_len) ··· 1412 1404 unsigned int i; 1413 1405 1414 1406 for (i = new_real_tx; i < dev->real_num_tx_queues; i++) { 1415 - qdisc = netdev_get_tx_queue(dev, i)->qdisc_sleeping; 1407 + qdisc = rtnl_dereference(netdev_get_tx_queue(dev, i)->qdisc_sleeping); 1416 1408 /* Only update the default qdiscs we created, 1417 1409 * qdiscs with handles are always hashed. 
1418 1410 */ ··· 1420 1412 qdisc_hash_del(qdisc); 1421 1413 } 1422 1414 for (i = dev->real_num_tx_queues; i < new_real_tx; i++) { 1423 - qdisc = netdev_get_tx_queue(dev, i)->qdisc_sleeping; 1415 + qdisc = rtnl_dereference(netdev_get_tx_queue(dev, i)->qdisc_sleeping); 1424 1416 if (qdisc != &noop_qdisc && !qdisc->handle) 1425 1417 qdisc_hash_add(qdisc, false); 1426 1418 } ··· 1457 1449 struct Qdisc *qdisc = _qdisc; 1458 1450 1459 1451 rcu_assign_pointer(dev_queue->qdisc, qdisc); 1460 - dev_queue->qdisc_sleeping = qdisc; 1452 + rcu_assign_pointer(dev_queue->qdisc_sleeping, qdisc); 1461 1453 } 1462 1454 1463 1455 void dev_init_scheduler(struct net_device *dev)
+4 -4
net/sched/sch_mq.c
··· 141 141 * qdisc totals are added at end. 142 142 */ 143 143 for (ntx = 0; ntx < dev->num_tx_queues; ntx++) { 144 - qdisc = netdev_get_tx_queue(dev, ntx)->qdisc_sleeping; 144 + qdisc = rtnl_dereference(netdev_get_tx_queue(dev, ntx)->qdisc_sleeping); 145 145 spin_lock_bh(qdisc_lock(qdisc)); 146 146 147 147 gnet_stats_add_basic(&sch->bstats, qdisc->cpu_bstats, ··· 202 202 { 203 203 struct netdev_queue *dev_queue = mq_queue_get(sch, cl); 204 204 205 - return dev_queue->qdisc_sleeping; 205 + return rtnl_dereference(dev_queue->qdisc_sleeping); 206 206 } 207 207 208 208 static unsigned long mq_find(struct Qdisc *sch, u32 classid) ··· 221 221 222 222 tcm->tcm_parent = TC_H_ROOT; 223 223 tcm->tcm_handle |= TC_H_MIN(cl); 224 - tcm->tcm_info = dev_queue->qdisc_sleeping->handle; 224 + tcm->tcm_info = rtnl_dereference(dev_queue->qdisc_sleeping)->handle; 225 225 return 0; 226 226 } 227 227 ··· 230 230 { 231 231 struct netdev_queue *dev_queue = mq_queue_get(sch, cl); 232 232 233 - sch = dev_queue->qdisc_sleeping; 233 + sch = rtnl_dereference(dev_queue->qdisc_sleeping); 234 234 if (gnet_stats_copy_basic(d, sch->cpu_bstats, &sch->bstats, true) < 0 || 235 235 qdisc_qstats_copy(d, sch) < 0) 236 236 return -1;
+4 -4
net/sched/sch_mqprio.c
··· 557 557 * qdisc totals are added at end. 558 558 */ 559 559 for (ntx = 0; ntx < dev->num_tx_queues; ntx++) { 560 - qdisc = netdev_get_tx_queue(dev, ntx)->qdisc_sleeping; 560 + qdisc = rtnl_dereference(netdev_get_tx_queue(dev, ntx)->qdisc_sleeping); 561 561 spin_lock_bh(qdisc_lock(qdisc)); 562 562 563 563 gnet_stats_add_basic(&sch->bstats, qdisc->cpu_bstats, ··· 604 604 if (!dev_queue) 605 605 return NULL; 606 606 607 - return dev_queue->qdisc_sleeping; 607 + return rtnl_dereference(dev_queue->qdisc_sleeping); 608 608 } 609 609 610 610 static unsigned long mqprio_find(struct Qdisc *sch, u32 classid) ··· 637 637 tcm->tcm_parent = (tc < 0) ? 0 : 638 638 TC_H_MAKE(TC_H_MAJ(sch->handle), 639 639 TC_H_MIN(tc + TC_H_MIN_PRIORITY)); 640 - tcm->tcm_info = dev_queue->qdisc_sleeping->handle; 640 + tcm->tcm_info = rtnl_dereference(dev_queue->qdisc_sleeping)->handle; 641 641 } else { 642 642 tcm->tcm_parent = TC_H_ROOT; 643 643 tcm->tcm_info = 0; ··· 693 693 } else { 694 694 struct netdev_queue *dev_queue = mqprio_queue_get(sch, cl); 695 695 696 - sch = dev_queue->qdisc_sleeping; 696 + sch = rtnl_dereference(dev_queue->qdisc_sleeping); 697 697 if (gnet_stats_copy_basic(d, sch->cpu_bstats, 698 698 &sch->bstats, true) < 0 || 699 699 qdisc_qstats_copy(d, sch) < 0)
+4 -1
net/sched/sch_pie.c
··· 421 421 { 422 422 struct pie_sched_data *q = from_timer(q, t, adapt_timer); 423 423 struct Qdisc *sch = q->sch; 424 - spinlock_t *root_lock = qdisc_lock(qdisc_root_sleeping(sch)); 424 + spinlock_t *root_lock; 425 425 426 + rcu_read_lock(); 427 + root_lock = qdisc_lock(qdisc_root_sleeping(sch)); 426 428 spin_lock(root_lock); 427 429 pie_calculate_probability(&q->params, &q->vars, sch->qstats.backlog); 428 430 ··· 432 430 if (q->params.tupdate) 433 431 mod_timer(&q->adapt_timer, jiffies + q->params.tupdate); 434 432 spin_unlock(root_lock); 433 + rcu_read_unlock(); 435 434 } 436 435 437 436 static int pie_init(struct Qdisc *sch, struct nlattr *opt,
+4 -1
net/sched/sch_red.c
··· 321 321 { 322 322 struct red_sched_data *q = from_timer(q, t, adapt_timer); 323 323 struct Qdisc *sch = q->sch; 324 - spinlock_t *root_lock = qdisc_lock(qdisc_root_sleeping(sch)); 324 + spinlock_t *root_lock; 325 325 326 + rcu_read_lock(); 327 + root_lock = qdisc_lock(qdisc_root_sleeping(sch)); 326 328 spin_lock(root_lock); 327 329 red_adaptative_algo(&q->parms, &q->vars); 328 330 mod_timer(&q->adapt_timer, jiffies + HZ/2); 329 331 spin_unlock(root_lock); 332 + rcu_read_unlock(); 330 333 } 331 334 332 335 static int red_init(struct Qdisc *sch, struct nlattr *opt,
+4 -1
net/sched/sch_sfq.c
··· 606 606 { 607 607 struct sfq_sched_data *q = from_timer(q, t, perturb_timer); 608 608 struct Qdisc *sch = q->sch; 609 - spinlock_t *root_lock = qdisc_lock(qdisc_root_sleeping(sch)); 609 + spinlock_t *root_lock; 610 610 siphash_key_t nkey; 611 611 612 612 get_random_bytes(&nkey, sizeof(nkey)); 613 + rcu_read_lock(); 614 + root_lock = qdisc_lock(qdisc_root_sleeping(sch)); 613 615 spin_lock(root_lock); 614 616 q->perturbation = nkey; 615 617 if (!q->filter_list && q->tail) ··· 620 618 621 619 if (q->perturb_period) 622 620 mod_timer(&q->perturb_timer, jiffies + q->perturb_period); 621 + rcu_read_unlock(); 623 622 } 624 623 625 624 static int sfq_change(struct Qdisc *sch, struct nlattr *opt)
+6 -3
net/sched/sch_taprio.c
··· 797 797 798 798 taprio_next_tc_txq(dev, tc, &q->cur_txq[tc]); 799 799 800 + if (q->cur_txq[tc] >= dev->num_tx_queues) 801 + q->cur_txq[tc] = first_txq; 802 + 800 803 if (skb) 801 804 return skb; 802 805 } while (q->cur_txq[tc] != first_txq); ··· 2361 2358 if (!dev_queue) 2362 2359 return NULL; 2363 2360 2364 - return dev_queue->qdisc_sleeping; 2361 + return rtnl_dereference(dev_queue->qdisc_sleeping); 2365 2362 } 2366 2363 2367 2364 static unsigned long taprio_find(struct Qdisc *sch, u32 classid) ··· 2380 2377 2381 2378 tcm->tcm_parent = TC_H_ROOT; 2382 2379 tcm->tcm_handle |= TC_H_MIN(cl); 2383 - tcm->tcm_info = dev_queue->qdisc_sleeping->handle; 2380 + tcm->tcm_info = rtnl_dereference(dev_queue->qdisc_sleeping)->handle; 2384 2381 2385 2382 return 0; 2386 2383 } ··· 2392 2389 { 2393 2390 struct netdev_queue *dev_queue = taprio_queue_get(sch, cl); 2394 2391 2395 - sch = dev_queue->qdisc_sleeping; 2392 + sch = rtnl_dereference(dev_queue->qdisc_sleeping); 2396 2393 if (gnet_stats_copy_basic(d, NULL, &sch->bstats, true) < 0 || 2397 2394 qdisc_qstats_copy(d, sch) < 0) 2398 2395 return -1;
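The three-line taprio fix above bounds the per-traffic-class txq cursor: after advancing, it must wrap back to the class's first queue once it runs past the device's queue count, or the round-robin walk indexes out of range. A model of the corrected step (function name and parameters are illustrative, not the kernel's):

```c
#include <assert.h>

/* Advance the cursor and apply the added wrap-around check. */
static int taprio_advance_txq(int cur, int first_txq, int num_tx_queues)
{
	cur++;				/* next candidate queue */
	if (cur >= num_tx_queues)	/* the added bound check */
		cur = first_txq;
	return cur;
}
```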
+1 -1
net/sched/sch_teql.c
··· 297 297 struct net_device *slave = qdisc_dev(q); 298 298 struct netdev_queue *slave_txq = netdev_get_tx_queue(slave, 0); 299 299 300 - if (slave_txq->qdisc_sleeping != q) 300 + if (rcu_access_pointer(slave_txq->qdisc_sleeping) != q) 301 301 continue; 302 302 if (netif_xmit_stopped(netdev_get_tx_queue(slave, subq)) || 303 303 !netif_running(slave)) {
+4 -1
net/sctp/sm_sideeffect.c
··· 1250 1250 default: 1251 1251 pr_err("impossible disposition %d in state %d, event_type %d, event_id %d\n", 1252 1252 status, state, event_type, subtype.chunk); 1253 - BUG(); 1253 + error = status; 1254 + if (error >= 0) 1255 + error = -EINVAL; 1256 + WARN_ON_ONCE(1); 1254 1257 break; 1255 1258 } 1256 1259
+1 -1
net/sctp/sm_statefuns.c
··· 4482 4482 SCTP_AUTH_NEW_KEY, GFP_ATOMIC); 4483 4483 4484 4484 if (!ev) 4485 - return -ENOMEM; 4485 + return SCTP_DISPOSITION_NOMEM; 4486 4486 4487 4487 sctp_add_cmd_sf(commands, SCTP_CMD_EVENT_ULP, 4488 4488 SCTP_ULPEVENT(ev));
+2 -2
net/smc/smc_llc.c
··· 851 851 addc_llc->num_rkeys = *num_rkeys_todo; 852 852 n = *num_rkeys_todo; 853 853 for (i = 0; i < min_t(u8, n, SMC_LLC_RKEYS_PER_CONT_MSG); i++) { 854 + while (*buf_pos && !(*buf_pos)->used) 855 + *buf_pos = smc_llc_get_next_rmb(lgr, buf_lst, *buf_pos); 854 856 if (!*buf_pos) { 855 857 addc_llc->num_rkeys = addc_llc->num_rkeys - 856 858 *num_rkeys_todo; ··· 869 867 870 868 (*num_rkeys_todo)--; 871 869 *buf_pos = smc_llc_get_next_rmb(lgr, buf_lst, *buf_pos); 872 - while (*buf_pos && !(*buf_pos)->used) 873 - *buf_pos = smc_llc_get_next_rmb(lgr, buf_lst, *buf_pos); 874 870 } 875 871 addc_llc->hd.common.llc_type = SMC_LLC_ADD_LINK_CONT; 876 872 addc_llc->hd.length = sizeof(struct smc_llc_msg_add_link_cont);
+2 -2
net/tipc/bearer.c
··· 1258 1258 struct tipc_nl_msg msg; 1259 1259 struct tipc_media *media; 1260 1260 struct sk_buff *rep; 1261 - struct nlattr *attrs[TIPC_NLA_BEARER_MAX + 1]; 1261 + struct nlattr *attrs[TIPC_NLA_MEDIA_MAX + 1]; 1262 1262 1263 1263 if (!info->attrs[TIPC_NLA_MEDIA]) 1264 1264 return -EINVAL; ··· 1307 1307 int err; 1308 1308 char *name; 1309 1309 struct tipc_media *m; 1310 - struct nlattr *attrs[TIPC_NLA_BEARER_MAX + 1]; 1310 + struct nlattr *attrs[TIPC_NLA_MEDIA_MAX + 1]; 1311 1311 1312 1312 if (!info->attrs[TIPC_NLA_MEDIA]) 1313 1313 return -EINVAL;
+2 -2
net/wireless/core.c
··· 368 368 rdev = container_of(work, struct cfg80211_registered_device, 369 369 sched_scan_stop_wk); 370 370 371 - rtnl_lock(); 371 + wiphy_lock(&rdev->wiphy); 372 372 list_for_each_entry_safe(req, tmp, &rdev->sched_scan_req_list, list) { 373 373 if (req->nl_owner_dead) 374 374 cfg80211_stop_sched_scan_req(rdev, req, false); 375 375 } 376 - rtnl_unlock(); 376 + wiphy_unlock(&rdev->wiphy); 377 377 } 378 378 379 379 static void cfg80211_propagate_radar_detect_wk(struct work_struct *work)
+2
net/wireless/nl80211.c
··· 10723 10723 if (!info->attrs[NL80211_ATTR_MLD_ADDR]) 10724 10724 return -EINVAL; 10725 10725 req.ap_mld_addr = nla_data(info->attrs[NL80211_ATTR_MLD_ADDR]); 10726 + if (!is_valid_ether_addr(req.ap_mld_addr)) 10727 + return -EINVAL; 10726 10728 } 10727 10729 10728 10730 req.bss = cfg80211_get_bss(&rdev->wiphy, chan, bssid, ssid, ssid_len,
+3 -3
net/wireless/rdev-ops.h
··· 2 2 /* 3 3 * Portions of this file 4 4 * Copyright(c) 2016-2017 Intel Deutschland GmbH 5 - * Copyright (C) 2018, 2021-2022 Intel Corporation 5 + * Copyright (C) 2018, 2021-2023 Intel Corporation 6 6 */ 7 7 #ifndef __CFG80211_RDEV_OPS 8 8 #define __CFG80211_RDEV_OPS ··· 1441 1441 unsigned int link_id) 1442 1442 { 1443 1443 trace_rdev_del_intf_link(&rdev->wiphy, wdev, link_id); 1444 - if (rdev->ops->add_intf_link) 1445 - rdev->ops->add_intf_link(&rdev->wiphy, wdev, link_id); 1444 + if (rdev->ops->del_intf_link) 1445 + rdev->ops->del_intf_link(&rdev->wiphy, wdev, link_id); 1446 1446 trace_rdev_return_void(&rdev->wiphy); 1447 1447 } 1448 1448
+2 -5
net/wireless/reg.c
··· 2404 2404 case NL80211_IFTYPE_P2P_GO: 2405 2405 case NL80211_IFTYPE_ADHOC: 2406 2406 case NL80211_IFTYPE_MESH_POINT: 2407 - wiphy_lock(wiphy); 2408 2407 ret = cfg80211_reg_can_beacon_relax(wiphy, &chandef, 2409 2408 iftype); 2410 - wiphy_unlock(wiphy); 2411 - 2412 2409 if (!ret) 2413 2410 return ret; 2414 2411 break; ··· 2437 2440 struct wireless_dev *wdev; 2438 2441 struct cfg80211_registered_device *rdev = wiphy_to_rdev(wiphy); 2439 2442 2440 - ASSERT_RTNL(); 2441 - 2443 + wiphy_lock(wiphy); 2442 2444 list_for_each_entry(wdev, &rdev->wiphy.wdev_list, list) 2443 2445 if (!reg_wdev_chan_valid(wiphy, wdev)) 2444 2446 cfg80211_leave(rdev, wdev); 2447 + wiphy_unlock(wiphy); 2445 2448 } 2446 2449 2447 2450 static void reg_check_chans_work(struct work_struct *work)
+8 -1
net/wireless/util.c
··· 5 5 * Copyright 2007-2009 Johannes Berg <johannes@sipsolutions.net> 6 6 * Copyright 2013-2014 Intel Mobile Communications GmbH 7 7 * Copyright 2017 Intel Deutschland GmbH 8 - * Copyright (C) 2018-2022 Intel Corporation 8 + * Copyright (C) 2018-2023 Intel Corporation 9 9 */ 10 10 #include <linux/export.h> 11 11 #include <linux/bitops.h> ··· 2557 2557 void cfg80211_remove_links(struct wireless_dev *wdev) 2558 2558 { 2559 2559 unsigned int link_id; 2560 + 2561 + /* 2562 + * links are controlled by upper layers (userspace/cfg) 2563 + * only for AP mode, so only remove them here for AP 2564 + */ 2565 + if (wdev->iftype != NL80211_IFTYPE_AP) 2566 + return; 2560 2567 2561 2568 wdev_lock(wdev); 2562 2569 if (wdev->valid_links) {
+22 -13
sound/core/seq/oss/seq_oss_midi.c
··· 37 37 struct snd_midi_event *coder; /* MIDI event coder */ 38 38 struct seq_oss_devinfo *devinfo; /* assigned OSSseq device */ 39 39 snd_use_lock_t use_lock; 40 + struct mutex open_mutex; 40 41 }; 41 42 42 43 ··· 173 172 mdev->flags = pinfo->capability; 174 173 mdev->opened = 0; 175 174 snd_use_lock_init(&mdev->use_lock); 175 + mutex_init(&mdev->open_mutex); 176 176 177 177 /* copy and truncate the name of synth device */ 178 178 strscpy(mdev->name, pinfo->name, sizeof(mdev->name)); ··· 324 322 int perm; 325 323 struct seq_oss_midi *mdev; 326 324 struct snd_seq_port_subscribe subs; 325 + int err; 327 326 328 327 mdev = get_mididev(dp, dev); 329 328 if (!mdev) 330 329 return -ENODEV; 331 330 331 + mutex_lock(&mdev->open_mutex); 332 332 /* already used? */ 333 333 if (mdev->opened && mdev->devinfo != dp) { 334 - snd_use_lock_free(&mdev->use_lock); 335 - return -EBUSY; 334 + err = -EBUSY; 335 + goto unlock; 336 336 } 337 337 338 338 perm = 0; ··· 344 340 perm |= PERM_READ; 345 341 perm &= mdev->flags; 346 342 if (perm == 0) { 347 - snd_use_lock_free(&mdev->use_lock); 348 - return -ENXIO; 343 + err = -ENXIO; 344 + goto unlock; 349 345 } 350 346 351 347 /* already opened? */ 352 348 if ((mdev->opened & perm) == perm) { 353 - snd_use_lock_free(&mdev->use_lock); 354 - return 0; 349 + err = 0; 350 + goto unlock; 355 351 } 356 352 357 353 perm &= ~mdev->opened; ··· 376 372 } 377 373 378 374 if (! mdev->opened) { 379 - snd_use_lock_free(&mdev->use_lock); 380 - return -ENXIO; 375 + err = -ENXIO; 376 + goto unlock; 381 377 } 382 378 383 379 mdev->devinfo = dp; 380 + err = 0; 381 + 382 + unlock: 383 + mutex_unlock(&mdev->open_mutex); 384 384 snd_use_lock_free(&mdev->use_lock); 385 - return 0; 385 + return err; 386 386 } 387 387 388 388 /* ··· 401 393 mdev = get_mididev(dp, dev); 402 394 if (!mdev) 403 395 return -ENODEV; 404 - if (! mdev->opened || mdev->devinfo != dp) { 405 - snd_use_lock_free(&mdev->use_lock); 406 - return 0; 407 - } 396 + mutex_lock(&mdev->open_mutex); 397 + if (!mdev->opened || mdev->devinfo != dp) 398 + goto unlock; 408 399 409 400 memset(&subs, 0, sizeof(subs)); 410 401 if (mdev->opened & PERM_WRITE) { ··· 422 415 mdev->opened = 0; 423 416 mdev->devinfo = NULL; 424 417 418 + unlock: 419 + mutex_unlock(&mdev->open_mutex); 425 420 snd_use_lock_free(&mdev->use_lock); 426 421 return 0; 427 422 }
+1 -1
sound/isa/gus/gus_pcm.c
··· 892 892 kctl = snd_ctl_new1(&snd_gf1_pcm_volume_control1, gus); 893 893 else 894 894 kctl = snd_ctl_new1(&snd_gf1_pcm_volume_control, gus); 895 + kctl->id.index = control_index; 895 896 err = snd_ctl_add(card, kctl); 896 897 if (err < 0) 897 898 return err; 898 - kctl->id.index = control_index; 899 899 900 900 return 0; 901 901 }
+3 -3
sound/pci/cmipci.c
··· 2688 2688 } 2689 2689 if (cm->can_ac3_hw) { 2690 2690 kctl = snd_ctl_new1(&snd_cmipci_spdif_default, cm); 2691 + kctl->id.device = pcm_spdif_device; 2691 2692 err = snd_ctl_add(card, kctl); 2692 2693 if (err < 0) 2693 2694 return err; 2694 - kctl->id.device = pcm_spdif_device; 2695 2695 kctl = snd_ctl_new1(&snd_cmipci_spdif_mask, cm); 2696 + kctl->id.device = pcm_spdif_device; 2696 2697 err = snd_ctl_add(card, kctl); 2697 2698 if (err < 0) 2698 2699 return err; 2699 - kctl->id.device = pcm_spdif_device; 2700 2700 kctl = snd_ctl_new1(&snd_cmipci_spdif_stream, cm); 2701 + kctl->id.device = pcm_spdif_device; 2701 2702 err = snd_ctl_add(card, kctl); 2702 2703 if (err < 0) 2703 2704 return err; 2704 - kctl->id.device = pcm_spdif_device; 2705 2705 } 2706 2706 if (cm->chip_version <= 37) { 2707 2707 sw = snd_cmipci_old_mixer_switches;
+5 -1
sound/pci/hda/hda_codec.c
··· 2458 2458 type == HDA_PCM_TYPE_HDMI) { 2459 2459 /* suppose a single SPDIF device */ 2460 2460 for (dig_mix = dig_mixes; dig_mix->name; dig_mix++) { 2461 + struct snd_ctl_elem_id id; 2462 + 2461 2463 kctl = find_mixer_ctl(codec, dig_mix->name, 0, 0); 2462 2464 if (!kctl) 2463 2465 break; 2464 - kctl->id.index = spdif_index; 2466 + id = kctl->id; 2467 + id.index = spdif_index; 2468 + snd_ctl_rename_id(codec->card, &kctl->id, &id); 2465 2469 } 2466 2470 bus->primary_dig_out_type = HDA_PCM_TYPE_HDMI; 2467 2471 }
+13 -1
sound/pci/hda/patch_realtek.c
··· 9500 9500 SND_PCI_QUIRK(0x103c, 0x8b8a, "HP", ALC236_FIXUP_HP_GPIO_LED), 9501 9501 SND_PCI_QUIRK(0x103c, 0x8b8b, "HP", ALC236_FIXUP_HP_GPIO_LED), 9502 9502 SND_PCI_QUIRK(0x103c, 0x8b8d, "HP", ALC236_FIXUP_HP_GPIO_LED), 9503 - SND_PCI_QUIRK(0x103c, 0x8b8f, "HP", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED), 9503 + SND_PCI_QUIRK(0x103c, 0x8b8f, "HP", ALC245_FIXUP_CS35L41_SPI_4_HP_GPIO_LED), 9504 9504 SND_PCI_QUIRK(0x103c, 0x8b92, "HP", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED), 9505 9505 SND_PCI_QUIRK(0x103c, 0x8b96, "HP", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF), 9506 9506 SND_PCI_QUIRK(0x103c, 0x8b97, "HP", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF), ··· 9547 9547 SND_PCI_QUIRK(0x1043, 0x1a8f, "ASUS UX582ZS", ALC245_FIXUP_CS35L41_SPI_2), 9548 9548 SND_PCI_QUIRK(0x1043, 0x1b11, "ASUS UX431DA", ALC294_FIXUP_ASUS_COEF_1B), 9549 9549 SND_PCI_QUIRK(0x1043, 0x1b13, "Asus U41SV", ALC269_FIXUP_INV_DMIC), 9550 + SND_PCI_QUIRK(0x1043, 0x1b93, "ASUS G614JVR/JIR", ALC245_FIXUP_CS35L41_SPI_2), 9550 9551 SND_PCI_QUIRK(0x1043, 0x1bbd, "ASUS Z550MA", ALC255_FIXUP_ASUS_MIC_NO_PRESENCE), 9551 9552 SND_PCI_QUIRK(0x1043, 0x1c23, "Asus X55U", ALC269_FIXUP_LIMIT_INT_MIC_BOOST), 9552 9553 SND_PCI_QUIRK(0x1043, 0x1c62, "ASUS GU603", ALC289_FIXUP_ASUS_GA401), ··· 9566 9565 SND_PCI_QUIRK(0x1043, 0x1f12, "ASUS UM5302", ALC287_FIXUP_CS35L41_I2C_2), 9567 9566 SND_PCI_QUIRK(0x1043, 0x1f92, "ASUS ROG Flow X16", ALC289_FIXUP_ASUS_GA401), 9568 9567 SND_PCI_QUIRK(0x1043, 0x3030, "ASUS ZN270IE", ALC256_FIXUP_ASUS_AIO_GPIO2), 9568 + SND_PCI_QUIRK(0x1043, 0x3a20, "ASUS G614JZR", ALC245_FIXUP_CS35L41_SPI_2), 9569 + SND_PCI_QUIRK(0x1043, 0x3a30, "ASUS G814JVR/JIR", ALC245_FIXUP_CS35L41_SPI_2), 9570 + SND_PCI_QUIRK(0x1043, 0x3a40, "ASUS G814JZR", ALC245_FIXUP_CS35L41_SPI_2), 9571 + SND_PCI_QUIRK(0x1043, 0x3a50, "ASUS G834JYR/JZR", ALC245_FIXUP_CS35L41_SPI_2), 9572 + SND_PCI_QUIRK(0x1043, 0x3a60, "ASUS G634JYR/JZR", ALC245_FIXUP_CS35L41_SPI_2), 9569 9573 SND_PCI_QUIRK(0x1043, 0x831a, "ASUS P901", ALC269_FIXUP_STEREO_DMIC), 9570 9574 SND_PCI_QUIRK(0x1043, 0x834a, "ASUS S101", ALC269_FIXUP_STEREO_DMIC), 9571 9575 SND_PCI_QUIRK(0x1043, 0x8398, "ASUS P1005", ALC269_FIXUP_STEREO_DMIC), ··· 9594 9588 SND_PCI_QUIRK(0x10ec, 0x124c, "Intel Reference board", ALC295_FIXUP_CHROME_BOOK), 9595 9589 SND_PCI_QUIRK(0x10ec, 0x1252, "Intel Reference board", ALC295_FIXUP_CHROME_BOOK), 9596 9590 SND_PCI_QUIRK(0x10ec, 0x1254, "Intel Reference board", ALC295_FIXUP_CHROME_BOOK), 9591 + SND_PCI_QUIRK(0x10ec, 0x12cc, "Intel Reference board", ALC225_FIXUP_HEADSET_JACK), 9597 9592 SND_PCI_QUIRK(0x10f7, 0x8338, "Panasonic CF-SZ6", ALC269_FIXUP_HEADSET_MODE), 9598 9593 SND_PCI_QUIRK(0x144d, 0xc109, "Samsung Ativ book 9 (NP900X3G)", ALC269_FIXUP_INV_DMIC), 9599 9594 SND_PCI_QUIRK(0x144d, 0xc169, "Samsung Notebook 9 Pen (NP930SBE-K01US)", ALC298_FIXUP_SAMSUNG_AMP), ··· 9643 9636 SND_PCI_QUIRK(0x1558, 0x5101, "Clevo S510WU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 9644 9637 SND_PCI_QUIRK(0x1558, 0x5157, "Clevo W517GU1", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 9645 9638 SND_PCI_QUIRK(0x1558, 0x51a1, "Clevo NS50MU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 9639 + SND_PCI_QUIRK(0x1558, 0x51b1, "Clevo NS50AU", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 9646 9640 SND_PCI_QUIRK(0x1558, 0x5630, "Clevo NP50RNJS", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 9647 9641 SND_PCI_QUIRK(0x1558, 0x70a1, "Clevo NB70T[HJK]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 9648 9642 SND_PCI_QUIRK(0x1558, 0x70b3, "Clevo NK70SB", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), ··· 9815 9807 SND_PCI_QUIRK(0x8086, 0x2074, "Intel NUC 8", ALC233_FIXUP_INTEL_NUC8_DMIC), 9816 9808 SND_PCI_QUIRK(0x8086, 0x2080, "Intel NUC 8 Rugged", ALC256_FIXUP_INTEL_NUC8_RUGGED), 9817 9809 SND_PCI_QUIRK(0x8086, 0x2081, "Intel NUC 10", ALC256_FIXUP_INTEL_NUC10), 9810 + SND_PCI_QUIRK(0x8086, 0x3038, "Intel NUC 13", ALC225_FIXUP_HEADSET_JACK), 9818 9811 SND_PCI_QUIRK(0xf111, 0x0001, "Framework Laptop", ALC295_FIXUP_FRAMEWORK_LAPTOP_MIC_NO_PRESENCE), 9819 9812 9820 9813 #if 0 ··· 11703 11694 SND_PCI_QUIRK(0x103c, 0x8719, "HP", ALC897_FIXUP_HP_HSMIC_VERB), 11704 11695 SND_PCI_QUIRK(0x103c, 0x872b, "HP", ALC897_FIXUP_HP_HSMIC_VERB), 11705 11696 SND_PCI_QUIRK(0x103c, 0x873e, "HP", ALC671_FIXUP_HP_HEADSET_MIC2), 11697 + SND_PCI_QUIRK(0x103c, 0x8768, "HP Slim Desktop S01", ALC671_FIXUP_HP_HEADSET_MIC2), 11706 11698 SND_PCI_QUIRK(0x103c, 0x877e, "HP 288 Pro G6", ALC671_FIXUP_HP_HEADSET_MIC2), 11707 11699 SND_PCI_QUIRK(0x103c, 0x885f, "HP 288 Pro G8", ALC671_FIXUP_HP_HEADSET_MIC2), 11708 11700 SND_PCI_QUIRK(0x1043, 0x1080, "Asus UX501VW", ALC668_FIXUP_HEADSET_MODE), ··· 11725 11715 SND_PCI_QUIRK(0x14cd, 0x5003, "USI", ALC662_FIXUP_USI_HEADSET_MODE), 11726 11716 SND_PCI_QUIRK(0x17aa, 0x1036, "Lenovo P520", ALC662_FIXUP_LENOVO_MULTI_CODECS), 11727 11717 SND_PCI_QUIRK(0x17aa, 0x1057, "Lenovo P360", ALC897_FIXUP_HEADSET_MIC_PIN), 11718 + SND_PCI_QUIRK(0x17aa, 0x1064, "Lenovo P3 Tower", ALC897_FIXUP_HEADSET_MIC_PIN), 11728 11719 SND_PCI_QUIRK(0x17aa, 0x32ca, "Lenovo ThinkCentre M80", ALC897_FIXUP_HEADSET_MIC_PIN), 11729 11720 SND_PCI_QUIRK(0x17aa, 0x32cb, "Lenovo ThinkCentre M70", ALC897_FIXUP_HEADSET_MIC_PIN), 11730 11721 SND_PCI_QUIRK(0x17aa, 0x32cf, "Lenovo ThinkCentre M950", ALC897_FIXUP_HEADSET_MIC_PIN), ··· 11740 11729 SND_PCI_QUIRK(0x1b0a, 0x01b8, "ACER Veriton", ALC662_FIXUP_ACER_VERITON), 11741 11730 SND_PCI_QUIRK(0x1b35, 0x1234, "CZC ET26", ALC662_FIXUP_CZC_ET26), 11742 11731 SND_PCI_QUIRK(0x1b35, 0x2206, "CZC P10T", ALC662_FIXUP_CZC_P10T), 11732 + SND_PCI_QUIRK(0x1c6c, 0x1239, "Compaq N14JP6-V2", ALC897_FIXUP_HP_HSMIC_VERB), 11743 11733 11744 11734 #if 0 11745 11735 /* Below is a quirk table taken from the old code.
+4 -3
sound/pci/ice1712/aureon.c
··· 1899 1899 else { 1900 1900 for (i = 0; i < ARRAY_SIZE(cs8415_controls); i++) { 1901 1901 struct snd_kcontrol *kctl; 1902 - err = snd_ctl_add(ice->card, (kctl = snd_ctl_new1(&cs8415_controls[i], ice))); 1903 - if (err < 0) 1904 - return err; 1902 + kctl = snd_ctl_new1(&cs8415_controls[i], ice); 1905 1903 if (i > 1) 1906 1904 kctl->id.device = ice->pcm->device; 1905 + err = snd_ctl_add(ice->card, kctl); 1906 + if (err < 0) 1907 + return err; 1907 1908 } 1908 1909 } 1909 1910 }
+9 -5
sound/pci/ice1712/ice1712.c
··· 2371 2371 2372 2372 if (snd_BUG_ON(!ice->pcm_pro)) 2373 2373 return -EIO; 2374 - err = snd_ctl_add(ice->card, kctl = snd_ctl_new1(&snd_ice1712_spdif_default, ice)); 2374 + kctl = snd_ctl_new1(&snd_ice1712_spdif_default, ice); 2375 + kctl->id.device = ice->pcm_pro->device; 2376 + err = snd_ctl_add(ice->card, kctl); 2375 2377 if (err < 0) 2376 2378 return err; 2379 + kctl = snd_ctl_new1(&snd_ice1712_spdif_maskc, ice); 2377 2380 kctl->id.device = ice->pcm_pro->device; 2378 - err = snd_ctl_add(ice->card, kctl = snd_ctl_new1(&snd_ice1712_spdif_maskc, ice)); 2381 + err = snd_ctl_add(ice->card, kctl); 2379 2382 if (err < 0) 2380 2383 return err; 2384 + kctl = snd_ctl_new1(&snd_ice1712_spdif_maskp, ice); 2381 2385 kctl->id.device = ice->pcm_pro->device; 2382 - err = snd_ctl_add(ice->card, kctl = snd_ctl_new1(&snd_ice1712_spdif_maskp, ice)); 2386 + err = snd_ctl_add(ice->card, kctl); 2383 2387 if (err < 0) 2384 2388 return err; 2389 + kctl = snd_ctl_new1(&snd_ice1712_spdif_stream, ice); 2385 2390 kctl->id.device = ice->pcm_pro->device; 2386 - err = snd_ctl_add(ice->card, kctl = snd_ctl_new1(&snd_ice1712_spdif_stream, ice)); 2391 + err = snd_ctl_add(ice->card, kctl); 2387 2392 if (err < 0) 2388 2393 return err; 2389 - kctl->id.device = ice->pcm_pro->device; 2390 2394 ice->spdif.stream_ctl = kctl; 2391 2395 return 0; 2392 2396 }
+10 -6
sound/pci/ice1712/ice1724.c
··· 2392 2392 if (err < 0) 2393 2393 return err; 2394 2394 2395 - err = snd_ctl_add(ice->card, kctl = snd_ctl_new1(&snd_vt1724_spdif_default, ice)); 2395 + kctl = snd_ctl_new1(&snd_vt1724_spdif_default, ice); 2396 + kctl->id.device = ice->pcm->device; 2397 + err = snd_ctl_add(ice->card, kctl); 2396 2398 if (err < 0) 2397 2399 return err; 2400 + kctl = snd_ctl_new1(&snd_vt1724_spdif_maskc, ice); 2398 2401 kctl->id.device = ice->pcm->device; 2399 - err = snd_ctl_add(ice->card, kctl = snd_ctl_new1(&snd_vt1724_spdif_maskc, ice)); 2402 + err = snd_ctl_add(ice->card, kctl); 2400 2403 if (err < 0) 2401 2404 return err; 2405 + kctl = snd_ctl_new1(&snd_vt1724_spdif_maskp, ice); 2402 2406 kctl->id.device = ice->pcm->device; 2403 - err = snd_ctl_add(ice->card, kctl = snd_ctl_new1(&snd_vt1724_spdif_maskp, ice)); 2407 + err = snd_ctl_add(ice->card, kctl); 2404 2408 if (err < 0) 2405 2409 return err; 2406 - kctl->id.device = ice->pcm->device; 2407 2410 #if 0 /* use default only */ 2408 - err = snd_ctl_add(ice->card, kctl = snd_ctl_new1(&snd_vt1724_spdif_stream, ice)); 2411 + kctl = snd_ctl_new1(&snd_vt1724_spdif_stream, ice); 2412 + kctl->id.device = ice->pcm->device; 2413 + err = snd_ctl_add(ice->card, kctl); 2409 2414 if (err < 0) 2410 2415 return err; 2411 - kctl->id.device = ice->pcm->device; 2412 2416 ice->spdif.stream_ctl = kctl; 2413 2417 #endif 2414 2418 return 0;
+3 -3
sound/pci/ymfpci/ymfpci_main.c
··· 1822 1822 if (snd_BUG_ON(!chip->pcm_spdif)) 1823 1823 return -ENXIO; 1824 1824 kctl = snd_ctl_new1(&snd_ymfpci_spdif_default, chip); 1825 + kctl->id.device = chip->pcm_spdif->device; 1825 1826 err = snd_ctl_add(chip->card, kctl); 1826 1827 if (err < 0) 1827 1828 return err; 1828 - kctl->id.device = chip->pcm_spdif->device; 1829 1829 kctl = snd_ctl_new1(&snd_ymfpci_spdif_mask, chip); 1830 + kctl->id.device = chip->pcm_spdif->device; 1830 1831 err = snd_ctl_add(chip->card, kctl); 1831 1832 if (err < 0) 1832 1833 return err; 1833 - kctl->id.device = chip->pcm_spdif->device; 1834 1834 kctl = snd_ctl_new1(&snd_ymfpci_spdif_stream, chip); 1835 + kctl->id.device = chip->pcm_spdif->device; 1835 1836 err = snd_ctl_add(chip->card, kctl); 1836 1837 if (err < 0) 1837 1838 return err; 1838 - kctl->id.device = chip->pcm_spdif->device; 1839 1839 chip->spdif_pcm_ctl = kctl; 1840 1840 1841 1841 /* direct recording source */
+1 -2
sound/soc/amd/ps/pci-ps.c
··· 211 211 case ACP63_PDM_DEV_MASK: 212 212 adata->pdm_dev_index = 0; 213 213 acp63_fill_platform_dev_info(&pdevinfo[0], parent, NULL, "acp_ps_pdm_dma", 214 - 0, adata->res, 1, &adata->acp_lock, 215 - sizeof(adata->acp_lock)); 214 + 0, adata->res, 1, NULL, 0); 216 215 acp63_fill_platform_dev_info(&pdevinfo[1], parent, NULL, "dmic-codec", 217 216 0, NULL, 0, NULL, 0); 218 217 acp63_fill_platform_dev_info(&pdevinfo[2], parent, NULL, "acp_ps_mach",
+5 -5
sound/soc/amd/ps/ps-pdm-dma.c
··· 361 361 { 362 362 struct resource *res; 363 363 struct pdm_dev_data *adata; 364 + struct acp63_dev_data *acp_data; 365 + struct device *parent; 364 366 int status; 365 367 366 - if (!pdev->dev.platform_data) { 367 - dev_err(&pdev->dev, "platform_data not retrieved\n"); 368 - return -ENODEV; 369 - } 368 + parent = pdev->dev.parent; 369 + acp_data = dev_get_drvdata(parent); 370 370 res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 371 371 if (!res) { 372 372 dev_err(&pdev->dev, "IORESOURCE_MEM FAILED\n"); ··· 382 382 return -ENOMEM; 383 383 384 384 adata->capture_stream = NULL; 385 - adata->acp_lock = pdev->dev.platform_data; 385 + adata->acp_lock = &acp_data->acp_lock; 386 386 dev_set_drvdata(&pdev->dev, adata); 387 387 status = devm_snd_soc_register_component(&pdev->dev, 388 388 &acp63_pdm_component,
+7
sound/soc/amd/yc/acp6x-mach.c
··· 175 175 .driver_data = &acp6x_card, 176 176 .matches = { 177 177 DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 178 + DMI_MATCH(DMI_PRODUCT_NAME, "21EF"), 179 + } 180 + }, 181 + { 182 + .driver_data = &acp6x_card, 183 + .matches = { 184 + DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 178 185 DMI_MATCH(DMI_PRODUCT_NAME, "21EM"), 179 186 } 180 187 },
-3
sound/soc/codecs/cs35l56.c
··· 704 704 static int cs35l56_sdw_dai_set_stream(struct snd_soc_dai *dai, 705 705 void *sdw_stream, int direction) 706 706 { 707 - if (!sdw_stream) 708 - return 0; 709 - 710 707 snd_soc_dai_dma_data_set(dai, direction, sdw_stream); 711 708 712 709 return 0;
+2 -2
sound/soc/codecs/max98363.c
··· 211 211 } 212 212 213 213 #define MAX98363_RATES SNDRV_PCM_RATE_8000_192000 214 - #define MAX98363_FORMATS (SNDRV_PCM_FMTBIT_S32_LE) 214 + #define MAX98363_FORMATS (SNDRV_PCM_FMTBIT_S16_LE | SNDRV_PCM_FMTBIT_S24_LE) 215 215 216 216 static int max98363_sdw_dai_hw_params(struct snd_pcm_substream *substream, 217 217 struct snd_pcm_hw_params *params, ··· 246 246 stream_config.frame_rate = params_rate(params); 247 247 stream_config.bps = snd_pcm_format_width(params_format(params)); 248 248 stream_config.direction = direction; 249 - stream_config.ch_count = params_channels(params); 249 + stream_config.ch_count = 1; 250 250 251 251 if (stream_config.ch_count > runtime->hw.channels_max) { 252 252 stream_config.ch_count = runtime->hw.channels_max;
+24
sound/soc/codecs/nau8824.c
··· 1903 1903 }, 1904 1904 .driver_data = (void *)(NAU8824_MONO_SPEAKER), 1905 1905 }, 1906 + { 1907 + /* Positivo CW14Q01P */ 1908 + .matches = { 1909 + DMI_MATCH(DMI_SYS_VENDOR, "Positivo Tecnologia SA"), 1910 + DMI_MATCH(DMI_BOARD_NAME, "CW14Q01P"), 1911 + }, 1912 + .driver_data = (void *)(NAU8824_JD_ACTIVE_HIGH), 1913 + }, 1914 + { 1915 + /* Positivo K1424G */ 1916 + .matches = { 1917 + DMI_MATCH(DMI_SYS_VENDOR, "Positivo Tecnologia SA"), 1918 + DMI_MATCH(DMI_BOARD_NAME, "K1424G"), 1919 + }, 1920 + .driver_data = (void *)(NAU8824_JD_ACTIVE_HIGH), 1921 + }, 1922 + { 1923 + /* Positivo N14ZP74G */ 1924 + .matches = { 1925 + DMI_MATCH(DMI_SYS_VENDOR, "Positivo Tecnologia SA"), 1926 + DMI_MATCH(DMI_BOARD_NAME, "N14ZP74G"), 1927 + }, 1928 + .driver_data = (void *)(NAU8824_JD_ACTIVE_HIGH), 1929 + }, 1906 1930 {} 1907 1931 }; 1908 1932
-1
sound/soc/codecs/wcd938x-sdw.c
··· 1190 1190 .readable_reg = wcd938x_readable_register, 1191 1191 .writeable_reg = wcd938x_writeable_register, 1192 1192 .volatile_reg = wcd938x_volatile_register, 1193 - .can_multi_write = true, 1194 1193 }; 1195 1194 1196 1195 static const struct sdw_slave_ops wcd9380_slave_ops = {
-1
sound/soc/codecs/wsa881x.c
··· 645 645 .readable_reg = wsa881x_readable_register, 646 646 .reg_format_endian = REGMAP_ENDIAN_NATIVE, 647 647 .val_format_endian = REGMAP_ENDIAN_NATIVE, 648 - .can_multi_write = true, 649 648 }; 650 649 651 650 enum {
-1
sound/soc/codecs/wsa883x.c
··· 946 946 .writeable_reg = wsa883x_writeable_register, 947 947 .reg_format_endian = REGMAP_ENDIAN_NATIVE, 948 948 .val_format_endian = REGMAP_ENDIAN_NATIVE, 949 - .can_multi_write = true, 950 949 .use_single_read = true, 951 950 }; 952 951
+9 -2
sound/soc/fsl/fsl_sai.c
··· 491 491 regmap_update_bits(sai->regmap, reg, FSL_SAI_CR2_MSEL_MASK, 492 492 FSL_SAI_CR2_MSEL(sai->mclk_id[tx])); 493 493 494 - if (savediv == 1) 494 + if (savediv == 1) { 495 495 regmap_update_bits(sai->regmap, reg, 496 496 FSL_SAI_CR2_DIV_MASK | FSL_SAI_CR2_BYP, 497 497 FSL_SAI_CR2_BYP); 498 - else 498 + if (fsl_sai_dir_is_synced(sai, adir)) 499 + regmap_update_bits(sai->regmap, FSL_SAI_xCR2(tx, ofs), 500 + FSL_SAI_CR2_BCI, FSL_SAI_CR2_BCI); 501 + else 502 + regmap_update_bits(sai->regmap, FSL_SAI_xCR2(tx, ofs), 503 + FSL_SAI_CR2_BCI, 0); 504 + } else { 499 505 regmap_update_bits(sai->regmap, reg, 500 506 FSL_SAI_CR2_DIV_MASK | FSL_SAI_CR2_BYP, 501 507 savediv / 2 - 1); 508 + } 502 509 503 510 if (sai->soc_data->max_register >= FSL_SAI_MCTL) { 504 511 /* SAI is in master mode at this point, so enable MCLK */
+1
sound/soc/fsl/fsl_sai.h
··· 116 116 117 117 /* SAI Transmit and Receive Configuration 2 Register */ 118 118 #define FSL_SAI_CR2_SYNC BIT(30) 119 + #define FSL_SAI_CR2_BCI BIT(28) 119 120 #define FSL_SAI_CR2_MSEL_MASK (0x3 << 26) 120 121 #define FSL_SAI_CR2_MSEL_BUS 0 121 122 #define FSL_SAI_CR2_MSEL_MCLK1 BIT(26)
+1 -1
sound/soc/generic/simple-card-utils.c
··· 314 314 } 315 315 ret = snd_pcm_hw_constraint_minmax(substream->runtime, SNDRV_PCM_HW_PARAM_RATE, 316 316 fixed_rate, fixed_rate); 317 - if (ret) 317 + if (ret < 0) 318 318 goto codec_err; 319 319 } 320 320
+1
sound/soc/generic/simple-card.c
··· 416 416 417 417 if (ret < 0) { 418 418 of_node_put(codec); 419 + of_node_put(plat); 419 420 of_node_put(np); 420 421 goto error; 421 422 }
-7
sound/soc/mediatek/mt8188/mt8188-afe-clk.c
··· 418 418 return 0; 419 419 } 420 420 421 - void mt8188_afe_deinit_clock(void *priv) 422 - { 423 - struct mtk_base_afe *afe = priv; 424 - 425 - mt8188_audsys_clk_unregister(afe); 426 - } 427 - 428 421 int mt8188_afe_enable_clk(struct mtk_base_afe *afe, struct clk *clk) 429 422 { 430 423 int ret;
-1
sound/soc/mediatek/mt8188/mt8188-afe-clk.h
··· 100 100 int mt8188_afe_get_mclk_source_rate(struct mtk_base_afe *afe, int apll); 101 101 int mt8188_afe_get_default_mclk_source_by_rate(int rate); 102 102 int mt8188_afe_init_clock(struct mtk_base_afe *afe); 103 - void mt8188_afe_deinit_clock(void *priv); 104 103 int mt8188_afe_enable_clk(struct mtk_base_afe *afe, struct clk *clk); 105 104 void mt8188_afe_disable_clk(struct mtk_base_afe *afe, struct clk *clk); 106 105 int mt8188_afe_set_clk_rate(struct mtk_base_afe *afe, struct clk *clk,
-4
sound/soc/mediatek/mt8188/mt8188-afe-pcm.c
··· 3185 3185 if (ret) 3186 3186 return dev_err_probe(dev, ret, "init clock error"); 3187 3187 3188 - ret = devm_add_action_or_reset(dev, mt8188_afe_deinit_clock, (void *)afe); 3189 - if (ret) 3190 - return ret; 3191 - 3192 3188 spin_lock_init(&afe_priv->afe_ctrl_lock); 3193 3189 3194 3190 mutex_init(&afe->irq_alloc_lock);
+24 -23
sound/soc/mediatek/mt8188/mt8188-audsys-clk.c
··· 138 138 GATE_AUD6(CLK_AUD_GASRC11, "aud_gasrc11", "top_asm_h", 11), 139 139 }; 140 140 141 + static void mt8188_audsys_clk_unregister(void *data) 142 + { 143 + struct mtk_base_afe *afe = data; 144 + struct mt8188_afe_private *afe_priv = afe->platform_priv; 145 + struct clk *clk; 146 + struct clk_lookup *cl; 147 + int i; 148 + 149 + if (!afe_priv) 150 + return; 151 + 152 + for (i = 0; i < CLK_AUD_NR_CLK; i++) { 153 + cl = afe_priv->lookup[i]; 154 + if (!cl) 155 + continue; 156 + 157 + clk = cl->clk; 158 + clk_unregister_gate(clk); 159 + 160 + clkdev_drop(cl); 161 + } 162 + } 163 + 141 164 int mt8188_audsys_clk_register(struct mtk_base_afe *afe) 142 165 { 143 166 struct mt8188_afe_private *afe_priv = afe->platform_priv; ··· 202 179 afe_priv->lookup[i] = cl; 203 180 } 204 181 205 - return 0; 206 - } 207 - 208 - void mt8188_audsys_clk_unregister(struct mtk_base_afe *afe) 209 - { 210 - struct mt8188_afe_private *afe_priv = afe->platform_priv; 211 - struct clk *clk; 212 - struct clk_lookup *cl; 213 - int i; 214 - 215 - if (!afe_priv) 216 - return; 217 - 218 - for (i = 0; i < CLK_AUD_NR_CLK; i++) { 219 - cl = afe_priv->lookup[i]; 220 - if (!cl) 221 - continue; 222 - 223 - clk = cl->clk; 224 - clk_unregister_gate(clk); 225 - 226 - clkdev_drop(cl); 227 - } 182 + return devm_add_action_or_reset(afe->dev, mt8188_audsys_clk_unregister, afe); 228 183 }
-1
sound/soc/mediatek/mt8188/mt8188-audsys-clk.h
··· 10 10 #define _MT8188_AUDSYS_CLK_H_ 11 11 12 12 int mt8188_audsys_clk_register(struct mtk_base_afe *afe); 13 - void mt8188_audsys_clk_unregister(struct mtk_base_afe *afe); 14 13 15 14 #endif
-5
sound/soc/mediatek/mt8195/mt8195-afe-clk.c
··· 410 410 return 0; 411 411 } 412 412 413 - void mt8195_afe_deinit_clock(struct mtk_base_afe *afe) 414 - { 415 - mt8195_audsys_clk_unregister(afe); 416 - } 417 - 418 413 int mt8195_afe_enable_clk(struct mtk_base_afe *afe, struct clk *clk) 419 414 { 420 415 int ret;
-1
sound/soc/mediatek/mt8195/mt8195-afe-clk.h
··· 101 101 int mt8195_afe_get_mclk_source_rate(struct mtk_base_afe *afe, int apll); 102 102 int mt8195_afe_get_default_mclk_source_by_rate(int rate); 103 103 int mt8195_afe_init_clock(struct mtk_base_afe *afe); 104 - void mt8195_afe_deinit_clock(struct mtk_base_afe *afe); 105 104 int mt8195_afe_enable_clk(struct mtk_base_afe *afe, struct clk *clk); 106 105 void mt8195_afe_disable_clk(struct mtk_base_afe *afe, struct clk *clk); 107 106 int mt8195_afe_prepare_clk(struct mtk_base_afe *afe, struct clk *clk);
-4
sound/soc/mediatek/mt8195/mt8195-afe-pcm.c
··· 3255 3255 3256 3256 static void mt8195_afe_pcm_dev_remove(struct platform_device *pdev) 3257 3257 { 3258 - struct mtk_base_afe *afe = platform_get_drvdata(pdev); 3259 - 3260 3258 snd_soc_unregister_component(&pdev->dev); 3261 3259 3262 3260 pm_runtime_disable(&pdev->dev); 3263 3261 if (!pm_runtime_status_suspended(&pdev->dev)) 3264 3262 mt8195_afe_runtime_suspend(&pdev->dev); 3265 - 3266 - mt8195_afe_deinit_clock(afe); 3267 3263 } 3268 3264 3269 3265 static const struct of_device_id mt8195_afe_pcm_dt_match[] = {
+24 -23
sound/soc/mediatek/mt8195/mt8195-audsys-clk.c
··· 148 148 GATE_AUD6(CLK_AUD_GASRC19, "aud_gasrc19", "top_asm_h", 19), 149 149 }; 150 150 151 + static void mt8195_audsys_clk_unregister(void *data) 152 + { 153 + struct mtk_base_afe *afe = data; 154 + struct mt8195_afe_private *afe_priv = afe->platform_priv; 155 + struct clk *clk; 156 + struct clk_lookup *cl; 157 + int i; 158 + 159 + if (!afe_priv) 160 + return; 161 + 162 + for (i = 0; i < CLK_AUD_NR_CLK; i++) { 163 + cl = afe_priv->lookup[i]; 164 + if (!cl) 165 + continue; 166 + 167 + clk = cl->clk; 168 + clk_unregister_gate(clk); 169 + 170 + clkdev_drop(cl); 171 + } 172 + } 173 + 151 174 int mt8195_audsys_clk_register(struct mtk_base_afe *afe) 152 175 { 153 176 struct mt8195_afe_private *afe_priv = afe->platform_priv; ··· 211 188 afe_priv->lookup[i] = cl; 212 189 } 213 190 214 - return 0; 215 - } 216 - 217 - void mt8195_audsys_clk_unregister(struct mtk_base_afe *afe) 218 - { 219 - struct mt8195_afe_private *afe_priv = afe->platform_priv; 220 - struct clk *clk; 221 - struct clk_lookup *cl; 222 - int i; 223 - 224 - if (!afe_priv) 225 - return; 226 - 227 - for (i = 0; i < CLK_AUD_NR_CLK; i++) { 228 - cl = afe_priv->lookup[i]; 229 - if (!cl) 230 - continue; 231 - 232 - clk = cl->clk; 233 - clk_unregister_gate(clk); 234 - 235 - clkdev_drop(cl); 236 - } 191 + return devm_add_action_or_reset(afe->dev, mt8195_audsys_clk_unregister, afe); 237 192 }
-1
sound/soc/mediatek/mt8195/mt8195-audsys-clk.h
··· 10 10 #define _MT8195_AUDSYS_CLK_H_ 11 11 12 12 int mt8195_audsys_clk_register(struct mtk_base_afe *afe); 13 - void mt8195_audsys_clk_unregister(struct mtk_base_afe *afe); 14 13 15 14 #endif
+3
sound/soc/tegra/tegra_pcm.c
··· 117 117 return ret; 118 118 } 119 119 120 + /* Set wait time to 500ms by default */ 121 + substream->wait_time = 500; 122 + 120 123 return 0; 121 124 } 122 125 EXPORT_SYMBOL_GPL(tegra_pcm_open);
+4
sound/usb/pcm.c
··· 650 650 goto unlock; 651 651 } 652 652 653 + ret = snd_usb_pcm_change_state(subs, UAC3_PD_STATE_D0); 654 + if (ret < 0) 655 + goto unlock; 656 + 653 657 again: 654 658 if (subs->sync_endpoint) { 655 659 ret = snd_usb_endpoint_prepare(chip, subs->sync_endpoint);
+2
sound/usb/quirks.c
··· 2191 2191 QUIRK_FLAG_DSD_RAW), 2192 2192 VENDOR_FLG(0x2ab6, /* T+A devices */ 2193 2193 QUIRK_FLAG_DSD_RAW), 2194 + VENDOR_FLG(0x3336, /* HEM devices */ 2195 + QUIRK_FLAG_DSD_RAW), 2194 2196 VENDOR_FLG(0x3353, /* Khadas devices */ 2195 2197 QUIRK_FLAG_DSD_RAW), 2196 2198 VENDOR_FLG(0x3842, /* EVGA */
+1
tools/include/uapi/linux/bpf.h
··· 1035 1035 BPF_TRACE_KPROBE_MULTI, 1036 1036 BPF_LSM_CGROUP, 1037 1037 BPF_STRUCT_OPS, 1038 + BPF_NETFILTER, 1038 1039 __MAX_BPF_ATTACH_TYPE 1039 1040 }; 1040 1041
+2 -1
tools/lib/bpf/libbpf.c
··· 117 117 [BPF_PERF_EVENT] = "perf_event", 118 118 [BPF_TRACE_KPROBE_MULTI] = "trace_kprobe_multi", 119 119 [BPF_STRUCT_OPS] = "struct_ops", 120 + [BPF_NETFILTER] = "netfilter", 120 121 }; 121 122 122 123 static const char * const link_type_name[] = { ··· 8713 8712 SEC_DEF("struct_ops+", STRUCT_OPS, 0, SEC_NONE), 8714 8713 SEC_DEF("struct_ops.s+", STRUCT_OPS, 0, SEC_SLEEPABLE), 8715 8714 SEC_DEF("sk_lookup", SK_LOOKUP, BPF_SK_LOOKUP, SEC_ATTACHABLE), 8716 - SEC_DEF("netfilter", NETFILTER, 0, SEC_NONE), 8715 + SEC_DEF("netfilter", NETFILTER, BPF_NETFILTER, SEC_NONE), 8717 8716 }; 8718 8717 8719 8718 static size_t custom_sec_def_cnt;
+2
tools/lib/bpf/libbpf_probes.c
··· 180 180 case BPF_PROG_TYPE_SK_REUSEPORT: 181 181 case BPF_PROG_TYPE_FLOW_DISSECTOR: 182 182 case BPF_PROG_TYPE_CGROUP_SYSCTL: 183 + break; 183 184 case BPF_PROG_TYPE_NETFILTER: 185 + opts.expected_attach_type = BPF_NETFILTER; 184 186 break; 185 187 default: 186 188 return -EOPNOTSUPP;
+3 -2
tools/testing/radix-tree/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0 2 2 3 - CFLAGS += -I. -I../../include -g -Og -Wall -D_LGPL_SOURCE -fsanitize=address \ 4 - -fsanitize=undefined 3 + CFLAGS += -I. -I../../include -I../../../lib -g -Og -Wall \ 4 + -D_LGPL_SOURCE -fsanitize=address -fsanitize=undefined 5 5 LDFLAGS += -fsanitize=address -fsanitize=undefined 6 6 LDLIBS+= -lpthread -lurcu 7 7 TARGETS = main idr-test multiorder xarray maple ··· 49 49 ../../../include/linux/xarray.h \ 50 50 ../../../include/linux/maple_tree.h \ 51 51 ../../../include/linux/radix-tree.h \ 52 + ../../../lib/radix-tree.h \ 52 53 ../../../include/linux/idr.h 53 54 54 55 radix-tree.c: ../../../lib/radix-tree.c
+5 -5
tools/testing/selftests/alsa/pcm-test.c
··· 381 381 goto __close; 382 382 } 383 383 if (rrate != rate) { 384 - snprintf(msg, sizeof(msg), "rate mismatch %ld != %ld", rate, rrate); 384 + snprintf(msg, sizeof(msg), "rate mismatch %ld != %d", rate, rrate); 385 385 goto __close; 386 386 } 387 387 rperiod_size = period_size; ··· 447 447 frames = snd_pcm_writei(handle, samples, rate); 448 448 if (frames < 0) { 449 449 snprintf(msg, sizeof(msg), 450 - "Write failed: expected %d, wrote %li", rate, frames); 450 + "Write failed: expected %ld, wrote %li", rate, frames); 451 451 goto __close; 452 452 } 453 453 if (frames < rate) { 454 454 snprintf(msg, sizeof(msg), 455 - "expected %d, wrote %li", rate, frames); 455 + "expected %ld, wrote %li", rate, frames); 456 456 goto __close; 457 457 } 458 458 } else { 459 459 frames = snd_pcm_readi(handle, samples, rate); 460 460 if (frames < 0) { 461 461 snprintf(msg, sizeof(msg), 462 - "expected %d, wrote %li", rate, frames); 462 + "expected %ld, wrote %li", rate, frames); 463 463 goto __close; 464 464 } 465 465 if (frames < rate) { 466 466 snprintf(msg, sizeof(msg), 467 - "expected %d, wrote %li", rate, frames); 467 + "expected %ld, wrote %li", rate, frames); 468 468 goto __close; 469 469 } 470 470 }
+31
tools/testing/selftests/bpf/prog_tests/inner_array_lookup.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + 3 + #include <test_progs.h> 4 + 5 + #include "inner_array_lookup.skel.h" 6 + 7 + void test_inner_array_lookup(void) 8 + { 9 + int map1_fd, err; 10 + int key = 3; 11 + int val = 1; 12 + struct inner_array_lookup *skel; 13 + 14 + skel = inner_array_lookup__open_and_load(); 15 + if (!ASSERT_OK_PTR(skel, "open_load_skeleton")) 16 + return; 17 + 18 + err = inner_array_lookup__attach(skel); 19 + if (!ASSERT_OK(err, "skeleton_attach")) 20 + goto cleanup; 21 + 22 + map1_fd = bpf_map__fd(skel->maps.inner_map1); 23 + bpf_map_update_elem(map1_fd, &key, &val, 0); 24 + 25 + /* Probe should have set the element at index 3 to 2 */ 26 + bpf_map_lookup_elem(map1_fd, &key, &val); 27 + ASSERT_EQ(val, 2, "value_is_2"); 28 + 29 + cleanup: 30 + inner_array_lookup__destroy(skel); 31 + }
+1 -1
tools/testing/selftests/bpf/prog_tests/sockopt_sk.c
··· 209 209 err, errno); 210 210 goto err; 211 211 } 212 - ASSERT_EQ(optlen, 4, "Unexpected NETLINK_LIST_MEMBERSHIPS value"); 212 + ASSERT_EQ(optlen, 8, "Unexpected NETLINK_LIST_MEMBERSHIPS value"); 213 213 214 214 free(big_buf); 215 215 close(fd);
+45
tools/testing/selftests/bpf/progs/inner_array_lookup.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + 3 + #include <linux/bpf.h> 4 + #include <bpf/bpf_helpers.h> 5 + 6 + struct inner_map { 7 + __uint(type, BPF_MAP_TYPE_ARRAY); 8 + __uint(max_entries, 5); 9 + __type(key, int); 10 + __type(value, int); 11 + } inner_map1 SEC(".maps"); 12 + 13 + struct outer_map { 14 + __uint(type, BPF_MAP_TYPE_HASH_OF_MAPS); 15 + __uint(max_entries, 3); 16 + __type(key, int); 17 + __array(values, struct inner_map); 18 + } outer_map1 SEC(".maps") = { 19 + .values = { 20 + [2] = &inner_map1, 21 + }, 22 + }; 23 + 24 + SEC("raw_tp/sys_enter") 25 + int handle__sys_enter(void *ctx) 26 + { 27 + int outer_key = 2, inner_key = 3; 28 + int *val; 29 + void *map; 30 + 31 + map = bpf_map_lookup_elem(&outer_map1, &outer_key); 32 + if (!map) 33 + return 1; 34 + 35 + val = bpf_map_lookup_elem(map, &inner_key); 36 + if (!val) 37 + return 1; 38 + 39 + if (*val == 1) 40 + *val = 2; 41 + 42 + return 0; 43 + } 44 + 45 + char _license[] SEC("license") = "GPL";
+3
tools/testing/selftests/net/.gitignore
··· 8 8 fin_ack_lat 9 9 gro 10 10 hwtstamp_config 11 + io_uring_zerocopy_tx 11 12 ioam6_parser 12 13 ip_defrag 14 + ip_local_port_range 13 15 ipsec 14 16 ipv6_flowlabel 15 17 ipv6_flowlabel_mgr ··· 28 26 reuseport_bpf_numa 29 27 reuseport_dualstack 30 28 rxtimestamp 29 + sctp_hello 31 30 sk_bind_sendto_listen 32 31 sk_connect_zero_addr 33 32 socket
+7 -4
tools/testing/selftests/net/forwarding/hw_stats_l3.sh
··· 84 84 85 85 router_rp1_200_create() 86 86 { 87 - ip link add name $rp1.200 up \ 88 - link $rp1 addrgenmode eui64 type vlan id 200 87 + ip link add name $rp1.200 link $rp1 type vlan id 200 88 + ip link set dev $rp1.200 addrgenmode eui64 89 + ip link set dev $rp1.200 up 89 90 ip address add dev $rp1.200 192.0.2.2/28 90 91 ip address add dev $rp1.200 2001:db8:1::2/64 91 92 ip stats set dev $rp1.200 l3_stats on ··· 257 256 258 257 router_rp1_200_destroy 259 258 260 - ip link add name $rp1.200 link $rp1 addrgenmode none type vlan id 200 259 + ip link add name $rp1.200 link $rp1 type vlan id 200 260 + ip link set dev $rp1.200 addrgenmode none 261 261 ip stats set dev $rp1.200 l3_stats on 262 - ip link set dev $rp1.200 up addrgenmode eui64 262 + ip link set dev $rp1.200 addrgenmode eui64 263 + ip link set dev $rp1.200 up 263 264 ip address add dev $rp1.200 192.0.2.2/28 264 265 ip address add dev $rp1.200 2001:db8:1::2/64 265 266 }
+1
tools/testing/selftests/net/mptcp/config
··· 1 + CONFIG_KALLSYMS=y 1 2 CONFIG_MPTCP=y 2 3 CONFIG_IPV6=y 3 4 CONFIG_MPTCP_IPV6=y
+17 -25
tools/testing/selftests/net/mptcp/diag.sh
··· 55 55 { 56 56 local command="$1" 57 57 local expected=$2 58 - local msg nr 58 + local msg="$3" 59 + local skip="${4:-SKIP}" 60 + local nr 59 61 60 - shift 2 61 - msg=$* 62 62 nr=$(eval $command) 63 63 64 64 printf "%-50s" "$msg" 65 65 if [ $nr != $expected ]; then 66 - echo "[ fail ] expected $expected found $nr" 67 - ret=$test_cnt 66 + if [ $nr = "$skip" ] && ! mptcp_lib_expect_all_features; then 67 + echo "[ skip ] Feature probably not supported" 68 + else 69 + echo "[ fail ] expected $expected found $nr" 70 + ret=$test_cnt 71 + fi 68 72 else 69 73 echo "[ ok ]" 70 74 fi ··· 80 76 local condition=$1 81 77 shift 1 82 78 83 - __chk_nr "ss -inmHMN $ns | $condition" $* 79 + __chk_nr "ss -inmHMN $ns | $condition" "$@" 84 80 } 85 81 86 82 chk_msk_nr() 87 83 { 88 - __chk_msk_nr "grep -c token:" $* 84 + __chk_msk_nr "grep -c token:" "$@" 89 85 } 90 86 91 87 wait_msk_nr() ··· 123 119 124 120 chk_msk_fallback_nr() 125 121 { 126 - __chk_msk_nr "grep -c fallback" $* 122 + __chk_msk_nr "grep -c fallback" "$@" 127 123 } 128 124 129 125 chk_msk_remote_key_nr() 130 126 { 131 - __chk_msk_nr "grep -c remote_key" $* 127 + __chk_msk_nr "grep -c remote_key" "$@" 132 128 } 133 129 134 130 __chk_listen() 135 131 { 136 132 local filter="$1" 137 133 local expected=$2 134 + local msg="$3" 138 135 139 - shift 2 140 - msg=$* 141 - 142 - nr=$(ss -N $ns -Ml "$filter" | grep -c LISTEN) 143 - printf "%-50s" "$msg" 144 - 145 - if [ $nr != $expected ]; then 146 - echo "[ fail ] expected $expected found $nr" 147 - ret=$test_cnt 148 - else 149 - echo "[ ok ]" 150 - fi 136 + __chk_nr "ss -N $ns -Ml '$filter' | grep -c LISTEN" "$expected" "$msg" 0 151 137 } 152 138 153 139 chk_msk_listen() 154 140 { 155 141 lport=$1 156 - local msg="check for listen socket" 157 142 158 143 # destination port search should always return empty list 159 144 __chk_listen "dport $lport" 0 "listen match for dport $lport" ··· 160 167 chk_msk_inuse() 161 168 { 162 169 local expected=$1 170 + local msg="$2" 163 171 local listen_nr 164 - 165 - shift 1 166 172 167 173 listen_nr=$(ss -N "${ns}" -Ml | grep -c LISTEN) 168 174 expected=$((expected + listen_nr)) ··· 173 181 sleep 0.1 174 182 done 175 183 176 - __chk_nr get_msk_inuse $expected $* 184 + __chk_nr get_msk_inuse $expected "$msg" 0 177 185 } 178 186 179 187 # $1: ns, $2: port
+20
tools/testing/selftests/net/mptcp/mptcp_connect.sh
··· 144 144 } 145 145 146 146 mptcp_lib_check_mptcp 147 + mptcp_lib_check_kallsyms 147 148 148 149 ip -Version > /dev/null 2>&1 149 150 if [ $? -ne 0 ];then ··· 696 695 return 0 697 696 fi 698 697 698 + # IP(V6)_TRANSPARENT has been added after TOS support which came with 699 + # the required infrastructure in MPTCP sockopt code. To support TOS, the 700 + # following function has been exported (T). Not great but better than 701 + # checking for a specific kernel version. 702 + if ! mptcp_lib_kallsyms_has "T __ip_sock_set_tos$"; then 703 + echo "INFO: ${msg} not supported by the kernel: SKIP" 704 + return 705 + fi 706 + 699 707 ip netns exec "$listener_ns" nft -f /dev/stdin <<"EOF" 700 708 flush ruleset 701 709 table inet mangle { ··· 777 767 778 768 run_tests_mptfo() 779 769 { 770 + if ! mptcp_lib_kallsyms_has "mptcp_fastopen_"; then 771 + echo "INFO: TFO not supported by the kernel: SKIP" 772 + return 773 + fi 774 + 780 775 echo "INFO: with MPTFO start" 781 776 ip netns exec "$ns1" sysctl -q net.ipv4.tcp_fastopen=2 782 777 ip netns exec "$ns2" sysctl -q net.ipv4.tcp_fastopen=1 ··· 801 786 { 802 787 local old_cin=$cin 803 788 local old_sin=$sin 789 + 790 + if ! mptcp_lib_kallsyms_has "mptcp_pm_data_reset$"; then 791 + echo "INFO: Full disconnect not supported: SKIP" 792 + return 793 + fi 804 794 805 795 cat $cin $cin $cin > "$cin".disconnect 806 796
+338 -186
tools/testing/selftests/net/mptcp/mptcp_join.sh
··· 25 25 ns1="" 26 26 ns2="" 27 27 ksft_skip=4 28 + iptables="iptables" 29 + ip6tables="ip6tables" 28 30 timeout_poll=30 29 31 timeout_test=$((timeout_poll * 2 + 1)) 30 32 capture=0 ··· 84 82 ip netns add $netns || exit $ksft_skip 85 83 ip -net $netns link set lo up 86 84 ip netns exec $netns sysctl -q net.mptcp.enabled=1 87 - ip netns exec $netns sysctl -q net.mptcp.pm_type=0 85 + ip netns exec $netns sysctl -q net.mptcp.pm_type=0 2>/dev/null || true 88 86 ip netns exec $netns sysctl -q net.ipv4.conf.all.rp_filter=0 89 87 ip netns exec $netns sysctl -q net.ipv4.conf.default.rp_filter=0 90 88 if [ $checksum -eq 1 ]; then ··· 142 140 check_tools() 143 141 { 144 142 mptcp_lib_check_mptcp 143 + mptcp_lib_check_kallsyms 145 144 146 145 if ! ip -Version &> /dev/null; then 147 146 echo "SKIP: Could not run test without ip tool" 148 147 exit $ksft_skip 149 148 fi 150 149 151 - if ! iptables -V &> /dev/null; then 150 + # Use the legacy version if available to support old kernel versions 151 + if iptables-legacy -V &> /dev/null; then 152 + iptables="iptables-legacy" 153 + ip6tables="ip6tables-legacy" 154 + elif ! iptables -V &> /dev/null; then 152 155 echo "SKIP: Could not run all tests without iptables tool" 153 156 exit $ksft_skip 154 157 fi ··· 190 183 rm -f "$tmpfile" 191 184 rm -rf $evts_ns1 $evts_ns2 192 185 cleanup_partial 186 + } 187 + 188 + # $1: msg 189 + print_title() 190 + { 191 + printf "%03u %-36s %s" "${TEST_COUNT}" "${TEST_NAME}" "${1}" 192 + } 193 + 194 + # [ $1: fail msg ] 195 + mark_as_skipped() 196 + { 197 + local msg="${1:-"Feature not supported"}" 198 + 199 + mptcp_lib_fail_if_expected_feature "${msg}" 200 + 201 + print_title "[ skip ] ${msg}" 202 + printf "\n" 203 + } 204 + 205 + # $@: condition 206 + continue_if() 207 + { 208 + if ! "${@}"; then 209 + mark_as_skipped 210 + return 1 211 + fi 193 212 } 194 213 195 214 skip_test() ··· 261 228 return 0 262 229 } 263 230 231 + # $1: test name ; $2: counter to check 232 + reset_check_counter() 233 + { 234 + reset "${1}" || return 1 235 + 236 + local counter="${2}" 237 + 238 + if ! nstat -asz "${counter}" | grep -wq "${counter}"; then 239 + mark_as_skipped "counter '${counter}' is not available" 240 + return 1 241 + fi 242 + } 243 + 264 244 # $1: test name 265 245 reset_with_cookies() 266 246 { ··· 293 247 294 248 reset "${1}" || return 1 295 249 296 - tables="iptables" 250 + tables="${iptables}" 297 251 if [ $ip -eq 6 ]; then 298 - tables="ip6tables" 252 + tables="${ip6tables}" 299 253 fi 300 254 301 255 ip netns exec $ns1 sysctl -q net.mptcp.add_addr_timeout=1 302 - ip netns exec $ns2 $tables -A OUTPUT -p tcp \ 303 - -m tcp --tcp-option 30 \ 304 - -m bpf --bytecode \ 305 - "$CBPF_MPTCP_SUBOPTION_ADD_ADDR" \ 306 - -j DROP 256 + 257 + if ! ip netns exec $ns2 $tables -A OUTPUT -p tcp \ 258 + -m tcp --tcp-option 30 \ 259 + -m bpf --bytecode \ 260 + "$CBPF_MPTCP_SUBOPTION_ADD_ADDR" \ 261 + -j DROP; then 262 + mark_as_skipped "unable to set the 'add addr' rule" 263 + return 1 264 + fi 307 265 } 308 266 309 267 # $1: test name ··· 351 301 # tc action pedit offset 162 out of bounds 352 302 # 353 303 # Netfilter is used to mark packets with enough data.
354 - reset_with_fail() 304 + setup_fail_rules() 355 305 { 356 - reset "${1}" || return 1 357 - 358 - ip netns exec $ns1 sysctl -q net.mptcp.checksum_enabled=1 359 - ip netns exec $ns2 sysctl -q net.mptcp.checksum_enabled=1 360 - 361 306 check_invert=1 362 307 validate_checksum=1 363 - local i="$2" 364 - local ip="${3:-4}" 308 + local i="$1" 309 + local ip="${2:-4}" 365 310 local tables 366 311 367 - tables="iptables" 312 + tables="${iptables}" 368 313 if [ $ip -eq 6 ]; then 369 - tables="ip6tables" 314 + tables="${ip6tables}" 370 315 fi 371 316 372 317 ip netns exec $ns2 $tables \ ··· 371 326 -p tcp \ 372 327 -m length --length 150:9999 \ 373 328 -m statistic --mode nth --packet 1 --every 99999 \ 374 - -j MARK --set-mark 42 || exit 1 329 + -j MARK --set-mark 42 || return ${ksft_skip} 375 330 376 - tc -n $ns2 qdisc add dev ns2eth$i clsact || exit 1 331 + tc -n $ns2 qdisc add dev ns2eth$i clsact || return ${ksft_skip} 377 332 tc -n $ns2 filter add dev ns2eth$i egress \ 378 333 protocol ip prio 1000 \ 379 334 handle 42 fw \ 380 335 action pedit munge offset 148 u8 invert \ 381 336 pipe csum tcp \ 382 - index 100 || exit 1 337 + index 100 || return ${ksft_skip} 338 + } 339 + 340 + reset_with_fail() 341 + { 342 + reset_check_counter "${1}" "MPTcpExtInfiniteMapTx" || return 1 343 + shift 344 + 345 + ip netns exec $ns1 sysctl -q net.mptcp.checksum_enabled=1 346 + ip netns exec $ns2 sysctl -q net.mptcp.checksum_enabled=1 347 + 348 + local rc=0 349 + setup_fail_rules "${@}" || rc=$? 350 + 351 + if [ ${rc} -eq ${ksft_skip} ]; then 352 + mark_as_skipped "unable to set the 'fail' rules" 353 + return 1 354 + fi 383 355 } 384 356 385 357 reset_with_events() ··· 409 347 evts_ns1_pid=$! 410 348 ip netns exec $ns2 ./pm_nl_ctl events >> "$evts_ns2" 2>&1 & 411 349 evts_ns2_pid=$! 350 + } 351 + 352 + reset_with_tcp_filter() 353 + { 354 + reset "${1}" || return 1 355 + shift 356 + 357 + local ns="${!1}" 358 + local src="${2}" 359 + local target="${3}" 360 + 361 + if ! ip netns exec "${ns}" ${iptables} \ 362 + -A INPUT \ 363 + -s "${src}" \ 364 + -p tcp \ 365 + -j "${target}"; then 366 + mark_as_skipped "unable to set the filter rules" 367 + return 1 368 + fi 412 369 } 413 370 414 371 fail_test() ··· 548 467 done 549 468 550 469 470 + # $1: ns ; $2: counter 471 + get_counter() 472 + { 473 + local ns="${1}" 474 + local counter="${2}" 475 + local count 476 + 477 + count=$(ip netns exec ${ns} nstat -asz "${counter}" | awk 'NR==1 {next} {print $2}') 478 + if [ -z "${count}" ]; then 479 + mptcp_lib_fail_if_expected_feature "${counter} counter" 480 + return 1 481 + fi 482 + 483 + echo "${count}" 484 + } 485 + 551 486 rm_addr_count() 552 487 { 553 - local ns=${1} 554 - 555 - ip netns exec ${ns} nstat -as | grep MPTcpExtRmAddr | awk '{print $2}' 488 + get_counter "${1}" "MPTcpExtRmAddr" 556 489 } 557 490 558 491 # $1: ns, $2: old rm_addr counter in $ns ··· 589 494 local ns="${1}" 590 495 local cnt old_cnt 591 496 592 - old_cnt=$(ip netns exec ${ns} nstat -as | grep MPJoinAckRx | awk '{print $2}') 497 + old_cnt=$(get_counter ${ns} "MPTcpExtMPJoinAckRx") 593 498 594 499 local i 595 500 for i in $(seq 10); do 596 - cnt=$(ip netns exec ${ns} nstat -as | grep MPJoinAckRx | awk '{print $2}') 501 + cnt=$(get_counter ${ns} "MPTcpExtMPJoinAckRx") 597 502 [ "$cnt" = "${old_cnt}" ] || break 598 503 sleep 0.1 599 504 done ··· 793 698 fi 794 699 } 795 700 796 - filter_tcp_from() 797 - { 798 - local ns="${1}" 799 - local src="${2}" 800 - local target="${3}" 801 - 802 - ip netns exec "${ns}" iptables -A INPUT -s "${src}" -p tcp -j "${target}" 803 - } 804 - 805 701 do_transfer() 806 702 { 807 703 local listener_ns="$1" ··· 948 862 sed -n 's/.*\(token:\)\([[:digit:]]*\).*$/\2/p;q') 949 863 ip netns exec ${listener_ns} ./pm_nl_ctl ann $addr token $tk id $id 950 864 sleep 1 865 + sp=$(grep "type:10" "$evts_ns1" | 866 + sed -n 's/.*\(sport:\)\([[:digit:]]*\).*$/\2/p;q') 867 + da=$(grep "type:10" "$evts_ns1" | 868 + sed -n 's/.*\(daddr6:\)\([0-9a-f:.]*\).*$/\2/p;q') 869 + dp=$(grep "type:10" "$evts_ns1" | 870 + sed -n 's/.*\(dport:\)\([[:digit:]]*\).*$/\2/p;q') 951 871 ip netns exec ${listener_ns} ./pm_nl_ctl rem token $tk id $id 872 + ip netns exec ${listener_ns} ./pm_nl_ctl dsf lip "::ffff:$addr" \ 873 + lport $sp rip $da rport $dp token $tk 952 874 fi 953 875 954 876 counter=$((counter + 1)) ··· 1022 928 sleep 1 1023 929 sp=$(grep "type:10" "$evts_ns2" | 1024 930 sed -n 's/.*\(sport:\)\([[:digit:]]*\).*$/\2/p;q') 931 + ip netns exec ${connector_ns} ./pm_nl_ctl rem token $tk id $id 1025 932 ip netns exec ${connector_ns} ./pm_nl_ctl dsf lip $addr lport $sp \ 1026 933 rip $da rport $dp token $tk 1027 934 fi ··· 1243 1148 fi 1244 1149 1245 1150 printf "%-${nr_blank}s %s" " " "sum" 1246 - count=$(ip netns exec $ns1 nstat -as | grep MPTcpExtDataCsumErr | awk '{print $2}') 1247 - [ -z "$count" ] && count=0 1151 + count=$(get_counter ${ns1} "MPTcpExtDataCsumErr") 1248 1152 if [ "$count" != "$csum_ns1" ]; then 1249 1153 extra_msg="$extra_msg ns1=$count" 1250 1154 fi 1251 - if { [ "$count" != $csum_ns1 ] && [ $allow_multi_errors_ns1 -eq 0 ]; } || 1155 + if [ -z "$count" ]; then 1156 + echo -n "[skip]" 1157 + elif { [ "$count" != $csum_ns1 ] && [ $allow_multi_errors_ns1 -eq 0 ]; } || 1252 1158 { [ "$count" -lt $csum_ns1 ] && [ $allow_multi_errors_ns1 -eq 1 ]; }; then 1253 1159 echo "[fail] got $count data checksum error[s] expected $csum_ns1" 1254 1160 fail_test ··· 1258 1162 echo -n "[ ok ]" 1259 1163 fi 1260 1164 echo -n " - csum " 1261 - count=$(ip netns exec $ns2 nstat -as | grep MPTcpExtDataCsumErr | awk '{print $2}') 1262 - [ -z "$count" ] && count=0 1165 + count=$(get_counter ${ns2} "MPTcpExtDataCsumErr") 1263 1166 if [ "$count" != "$csum_ns2" ]; then 1264 1167 extra_msg="$extra_msg ns2=$count" 1265 1168 fi 1266 - if { [ "$count" != $csum_ns2 ] && [ $allow_multi_errors_ns2 -eq 0 ]; } || 1169 + if [ -z "$count" ]; then 1170 + echo -n "[skip]" 1171 + elif { [ "$count" != $csum_ns2 ] && [ $allow_multi_errors_ns2 -eq 0 ]; } || 1267 1172 { [ "$count" -lt $csum_ns2 ] && [ $allow_multi_errors_ns2 -eq 1 ]; }; then 1268 1173 echo "[fail] got $count data checksum error[s] expected $csum_ns2" 1269 1174 fail_test ··· 1306 1209 fi 1307 1210 1308 1211 printf "%-${nr_blank}s %s" " " "ftx" 1309 - count=$(ip netns exec $ns_tx nstat -as | grep MPTcpExtMPFailTx | awk '{print $2}') 1310 - [ -z "$count" ] && count=0 1212 + count=$(get_counter ${ns_tx} "MPTcpExtMPFailTx") 1311 1213 if [ "$count" != "$fail_tx" ]; then 1312 1214 extra_msg="$extra_msg,tx=$count" 1313 1215 fi 1314 - if { [ "$count" != "$fail_tx" ] && [ $allow_tx_lost -eq 0 ]; } || 1216 + if [ -z "$count" ]; then 1217 + echo -n "[skip]" 1218 + elif { [ "$count" != "$fail_tx" ] && [ $allow_tx_lost -eq 0 ]; } || 1315 1219 { [ "$count" -gt "$fail_tx" ] && [ $allow_tx_lost -eq 1 ]; }; then 1316 1220 echo "[fail] got $count MP_FAIL[s] TX expected $fail_tx" 1317 1221 fail_test ··· 1322 1224 fi 1323 1225 1324 1226 echo -n " - failrx" 1325 - count=$(ip netns exec $ns_rx nstat -as | grep MPTcpExtMPFailRx | awk '{print $2}') 1326 - [ -z "$count" ] && count=0 1227 + count=$(get_counter ${ns_rx} "MPTcpExtMPFailRx") 1327 1228 if [ "$count" != "$fail_rx" ]; then 1328 1229 extra_msg="$extra_msg,rx=$count" 1329 1230 fi 1330 - if { [ "$count" != "$fail_rx" ] && [ $allow_rx_lost -eq 0 ]; } || 1231 + if [ -z "$count" ]; then 1232 + echo -n "[skip]" 1233 + elif { [ "$count" != "$fail_rx" ] && [ $allow_rx_lost -eq 0 ]; } || 1331 1234 { [ "$count" -gt "$fail_rx" ] && [ $allow_rx_lost -eq 1 ]; }; then 1332 1235 echo "[fail] got $count MP_FAIL[s] RX expected $fail_rx" 1333 1236 fail_test ··· 1360 1261 fi 1361 1262 1362 1263 printf "%-${nr_blank}s %s" " " "ctx" 1363 - count=$(ip netns exec $ns_tx nstat -as | grep MPTcpExtMPFastcloseTx | awk '{print $2}') 1364 - [ -z "$count" ] && count=0 1365 - [ "$count" != "$fclose_tx" ] && extra_msg="$extra_msg,tx=$count" 1366 - if [ "$count" != "$fclose_tx" ]; then 1264 + count=$(get_counter ${ns_tx} "MPTcpExtMPFastcloseTx") 1265 + if [ -z "$count" ]; then 1266 + echo -n "[skip]" 1267 + elif [ "$count" != "$fclose_tx" ]; then 1268 + extra_msg="$extra_msg,tx=$count" 1367 1269 echo "[fail] got $count MP_FASTCLOSE[s] TX expected $fclose_tx" 1368 1270 fail_test 1369 1271 dump_stats=1 ··· 1373 1273 fi 1374 1274 1375 1275 echo -n " - fclzrx" 1376 - count=$(ip netns exec $ns_rx nstat -as | grep MPTcpExtMPFastcloseRx | awk '{print $2}') 1377 - [ -z "$count" ] && count=0 1378 - [ "$count" != "$fclose_rx" ] && extra_msg="$extra_msg,rx=$count" 1379 - if [ "$count" != "$fclose_rx" ]; then 1276 + count=$(get_counter ${ns_rx} "MPTcpExtMPFastcloseRx") 1277 + if [ -z "$count" ]; then 1278 + echo -n "[skip]" 1279 + elif [ "$count" != "$fclose_rx" ]; then 1280 + extra_msg="$extra_msg,rx=$count" 1380 1281 echo "[fail] got $count MP_FASTCLOSE[s] RX expected $fclose_rx" 1381 1282 fail_test 1382 1283 dump_stats=1 ··· 1408 1307 fi 1409 1308 1410 1309 printf "%-${nr_blank}s %s" " " "rtx" 1411 - count=$(ip netns exec $ns_tx nstat -as | grep MPTcpExtMPRstTx | awk '{print $2}') 1412 - [ -z "$count" ] && count=0 1413 - if [ $count -lt $rst_tx ]; then 1310 + count=$(get_counter ${ns_tx} "MPTcpExtMPRstTx") 1311 + if [ -z "$count" ]; then 1312 + echo -n "[skip]" 1313 + elif [ $count -lt $rst_tx ]; then 1414 1314 echo "[fail] got $count MP_RST[s] TX expected $rst_tx" 1415 1315 fail_test 1416 1316 dump_stats=1 ··· 1420 1318 fi 1421 1319 1422 1320 echo -n " - rstrx " 1423 - count=$(ip netns exec $ns_rx nstat -as | grep MPTcpExtMPRstRx | awk '{print $2}') 1424 - [ -z "$count" ] && count=0 1425 - if [ "$count" -lt "$rst_rx" ]; then 1321 + count=$(get_counter ${ns_rx} "MPTcpExtMPRstRx") 1322 + if [ -z "$count" ]; then 1323 + echo -n "[skip]" 1324 + elif [ "$count" -lt "$rst_rx" ]; then 1426 1325 echo "[fail] got $count MP_RST[s] RX expected $rst_rx" 1427 1326 fail_test 1428 1327 dump_stats=1 ··· 1444 1341 local dump_stats 1445 1342 1446 1343 printf "%-${nr_blank}s %s" " " "itx" 1447 - count=$(ip netns exec $ns2 nstat -as | grep InfiniteMapTx | awk '{print $2}') 1448 - [ -z "$count" ] && count=0 1449 - if [ "$count" != "$infi_tx" ]; then 1344 + count=$(get_counter ${ns2} "MPTcpExtInfiniteMapTx") 1345 + if [ -z "$count" ]; then 1346 + echo -n "[skip]" 1347 + elif [ "$count" != "$infi_tx" ]; then 1450 1348 echo "[fail] got $count infinite map[s] TX expected $infi_tx" 1451 1349 fail_test 1452 1350 dump_stats=1 ··· 1456 1352 fi 1457 1353 1458 1354 echo -n " - infirx" 1459 - count=$(ip netns exec $ns1 nstat -as | grep InfiniteMapRx | awk '{print $2}') 1460 - [ -z "$count" ] && count=0 1461 - if [ "$count" != "$infi_rx" ]; then 1355 + count=$(get_counter ${ns1} "MPTcpExtInfiniteMapRx") 1356 + if [ -z "$count" ]; then 1357 + echo "[skip]" 1358 + elif [ "$count" != "$infi_rx" ]; then 1462 1359 echo "[fail] got $count infinite map[s] RX expected $infi_rx" 1463 1360 fail_test 1464 1361 dump_stats=1 ··· 1491 1386 fi 1492 1387 1493 1388 printf "%03u %-36s %s" "${TEST_COUNT}" "${title}" "syn" 1494 - count=$(ip netns exec $ns1 nstat -as | grep MPTcpExtMPJoinSynRx | awk '{print $2}') 1495 - [ -z "$count" ] && count=0 1496 - if [ "$count" != "$syn_nr" ]; then 1389 + count=$(get_counter ${ns1} "MPTcpExtMPJoinSynRx") 1390 + if [ -z "$count" ]; then 1391 + echo -n "[skip]" 1392 + elif [ "$count" != "$syn_nr" ]; then 1497 1393 echo "[fail] got $count JOIN[s] syn expected $syn_nr" 1498 1394 fail_test 1499 1395 dump_stats=1 ··· 1504 1398 1505 1399 echo -n " - synack" 1506 1400 with_cookie=$(ip netns exec $ns2 sysctl -n net.ipv4.tcp_syncookies) 1507 - count=$(ip netns exec $ns2 nstat -as | grep MPTcpExtMPJoinSynAckRx | awk '{print $2}') 1508 - [ -z "$count" ] && count=0 1509 - if [ "$count" != "$syn_ack_nr" ]; then 1401 + count=$(get_counter ${ns2} "MPTcpExtMPJoinSynAckRx") 1402 + if [ -z "$count" ]; then 1403 + echo -n "[skip]" 1404 + elif [ "$count" != "$syn_ack_nr" ]; then 1510 1405 # simult connections exceeding the limit with cookie enabled could go up to 1511 1406 # synack validation as the conn limit can be enforced reliably only after 1512 1407 # the subflow creation ··· 1523 1416 fi 1524 1417 1525 1418 echo -n " - ack" 1526 - count=$(ip netns exec $ns1 nstat -as | grep MPTcpExtMPJoinAckRx | awk '{print $2}') 1527 - [ -z "$count" ] && count=0 1528 - if [ "$count" != "$ack_nr" ]; then 1419 + count=$(get_counter ${ns1} "MPTcpExtMPJoinAckRx") 1420 + if [ -z "$count" ]; then 1421 + echo "[skip]" 1422 + elif [ "$count" != "$ack_nr" ]; then 1529 1423 echo "[fail] got $count JOIN[s] ack expected $ack_nr" 1530 1424 fail_test 1531 1425 dump_stats=1 ··· 1558 1450 local recover_nr 1559 1451 1560 1452 printf "%-${nr_blank}s %-18s" " " "stale" 1561 - stale_nr=$(ip netns exec $ns nstat -as | grep MPTcpExtSubflowStale | awk '{print $2}') 1562 - [ -z "$stale_nr" ] && stale_nr=0 1563 - recover_nr=$(ip netns exec $ns nstat -as | grep MPTcpExtSubflowRecover | awk '{print $2}') 1564 - [ -z "$recover_nr" ] && recover_nr=0 1565 1453 1566 - if [ $stale_nr -lt $stale_min ] || 1454 + stale_nr=$(get_counter ${ns} "MPTcpExtSubflowStale") 1455 + recover_nr=$(get_counter ${ns} "MPTcpExtSubflowRecover") 1456 + if [ -z "$stale_nr" ] || [ -z "$recover_nr" ]; then 1457 + echo "[skip]" 1458 + elif [ $stale_nr -lt $stale_min ] || 1567 1459 { [ $stale_max -gt 0 ] && [ $stale_nr -gt $stale_max ]; } || 1568 1460 [ $((stale_nr - recover_nr)) -ne $stale_delta ]; then 1569 1461 echo "[fail] got $stale_nr stale[s] $recover_nr recover[s], " \ ··· 1599 1491 timeout=$(ip netns exec $ns1 sysctl -n net.mptcp.add_addr_timeout) 1600 1492 1601 1493 printf "%-${nr_blank}s %s" " " "add" 1602 - count=$(ip netns exec $ns2 nstat -as MPTcpExtAddAddr | grep MPTcpExtAddAddr | awk '{print $2}') 1603 - [ -z "$count" ] && count=0 1604 - 1494 + count=$(get_counter ${ns2} "MPTcpExtAddAddr") 1495 + if [ -z "$count" ]; then 1496 + echo -n "[skip]" 1605 1497 # if the test configured a short timeout tolerate greater then expected 1606 1498 # add addrs options, due to retransmissions
1607 - if [ "$count" != "$add_nr" ] && { [ "$timeout" -gt 1 ] || [ "$count" -lt "$add_nr" ]; }; then 1499 + elif [ "$count" != "$add_nr" ] && { [ "$timeout" -gt 1 ] || [ "$count" -lt "$add_nr" ]; }; then 1608 1500 echo "[fail] got $count ADD_ADDR[s] expected $add_nr" 1609 1501 fail_test 1610 1502 dump_stats=1 ··· 1613 1505 fi 1614 1506 1615 1507 echo -n " - echo " 1616 - count=$(ip netns exec $ns1 nstat -as | grep MPTcpExtEchoAdd | awk '{print $2}') 1617 - [ -z "$count" ] && count=0 1618 - if [ "$count" != "$echo_nr" ]; then 1508 + count=$(get_counter ${ns1} "MPTcpExtEchoAdd") 1509 + if [ -z "$count" ]; then 1510 + echo -n "[skip]" 1511 + elif [ "$count" != "$echo_nr" ]; then 1619 1512 echo "[fail] got $count ADD_ADDR echo[s] expected $echo_nr" 1620 1513 fail_test 1621 1514 dump_stats=1 ··· 1626 1517 1627 1518 if [ $port_nr -gt 0 ]; then 1628 1519 echo -n " - pt " 1629 - count=$(ip netns exec $ns2 nstat -as | grep MPTcpExtPortAdd | awk '{print $2}') 1630 - [ -z "$count" ] && count=0 1631 - if [ "$count" != "$port_nr" ]; then 1520 + count=$(get_counter ${ns2} "MPTcpExtPortAdd") 1521 + if [ -z "$count" ]; then 1522 + echo "[skip]" 1523 + elif [ "$count" != "$port_nr" ]; then 1632 1524 echo "[fail] got $count ADD_ADDR[s] with a port-number expected $port_nr" 1633 1525 fail_test 1634 1526 dump_stats=1 ··· 1638 1528 fi 1639 1529 1640 1530 printf "%-${nr_blank}s %s" " " "syn" 1641 - count=$(ip netns exec $ns1 nstat -as | grep MPTcpExtMPJoinPortSynRx | 1642 - awk '{print $2}') 1643 - [ -z "$count" ] && count=0 1644 - if [ "$count" != "$syn_nr" ]; then 1531 + count=$(get_counter ${ns1} "MPTcpExtMPJoinPortSynRx") 1532 + if [ -z "$count" ]; then 1533 + echo -n "[skip]" 1534 + elif [ "$count" != "$syn_nr" ]; then 1645 1535 echo "[fail] got $count JOIN[s] syn with a different \ 1646 1536 port-number expected $syn_nr" 1647 1537 fail_test ··· 1651 1541 fi 1652 1542 1653 1543 echo -n " - synack" 1654 - count=$(ip netns exec $ns2 nstat -as | grep MPTcpExtMPJoinPortSynAckRx | 1655 - awk '{print $2}') 1656 - [ -z "$count" ] && count=0 1657 - if [ "$count" != "$syn_ack_nr" ]; then 1544 + count=$(get_counter ${ns2} "MPTcpExtMPJoinPortSynAckRx") 1545 + if [ -z "$count" ]; then 1546 + echo -n "[skip]" 1547 + elif [ "$count" != "$syn_ack_nr" ]; then 1658 1548 echo "[fail] got $count JOIN[s] synack with a different \ 1659 1549 port-number expected $syn_ack_nr" 1660 1550 fail_test ··· 1664 1554 fi 1665 1555 1666 1556 echo -n " - ack" 1667 - count=$(ip netns exec $ns1 nstat -as | grep MPTcpExtMPJoinPortAckRx | 1668 - awk '{print $2}') 1669 - [ -z "$count" ] && count=0 1670 - if [ "$count" != "$ack_nr" ]; then 1557 + count=$(get_counter ${ns1} "MPTcpExtMPJoinPortAckRx") 1558 + if [ -z "$count" ]; then 1559 + echo "[skip]" 1560 + elif [ "$count" != "$ack_nr" ]; then 1671 1561 echo "[fail] got $count JOIN[s] ack with a different \ 1672 1562 port-number expected $ack_nr" 1673 1563 fail_test ··· 1677 1567 fi 1678 1568 1679 1569 printf "%-${nr_blank}s %s" " " "syn" 1680 - count=$(ip netns exec $ns1 nstat -as | grep MPTcpExtMismatchPortSynRx | 1681 - awk '{print $2}') 1682 - [ -z "$count" ] && count=0 1683 - if [ "$count" != "$mis_syn_nr" ]; then 1570 + count=$(get_counter ${ns1} "MPTcpExtMismatchPortSynRx") 1571 + if [ -z "$count" ]; then 1572 + echo -n "[skip]" 1573 + elif [ "$count" != "$mis_syn_nr" ]; then 1684 1574 echo "[fail] got $count JOIN[s] syn with a mismatched \ 1685 1575 port-number expected $mis_syn_nr" 1686 1576 fail_test ··· 1690 1580 fi 1691 1581 1692 1582 echo -n " - ack " 1693 - count=$(ip netns exec $ns1 nstat -as | grep MPTcpExtMismatchPortAckRx | 1694 - awk '{print $2}') 1695 - [ -z "$count" ] && count=0 1696 - if [ "$count" != "$mis_ack_nr" ]; then 1583 + count=$(get_counter ${ns1} "MPTcpExtMismatchPortAckRx") 1584 + if [ -z "$count" ]; then 1585 + echo "[skip]" 1586 + elif [ "$count" != "$mis_ack_nr" ]; then 1697 1587 echo "[fail] got $count JOIN[s] ack with a mismatched \ 1698 1588 port-number expected $mis_ack_nr" 1699 1589 fail_test ··· 1737 1627 fi 1738 1628 1739 1629 printf "%-${nr_blank}s %s" " " "rm " 1740 - count=$(ip netns exec $addr_ns nstat -as | grep MPTcpExtRmAddr | awk '{print $2}') 1741 - [ -z "$count" ] && count=0 1742 - if [ "$count" != "$rm_addr_nr" ]; then 1630 + count=$(get_counter ${addr_ns} "MPTcpExtRmAddr") 1631 + if [ -z "$count" ]; then 1632 + echo -n "[skip]" 1633 + elif [ "$count" != "$rm_addr_nr" ]; then 1743 1634 echo "[fail] got $count RM_ADDR[s] expected $rm_addr_nr" 1744 1635 fail_test 1745 1636 dump_stats=1 ··· 1749 1638 fi 1750 1639 1751 1640 echo -n " - rmsf " 1752 - count=$(ip netns exec $subflow_ns nstat -as | grep MPTcpExtRmSubflow | awk '{print $2}') 1753 - [ -z "$count" ] && count=0 1754 - if [ -n "$simult" ]; then 1641 + count=$(get_counter ${subflow_ns} "MPTcpExtRmSubflow") 1642 + if [ -z "$count" ]; then 1643 + echo -n "[skip]" 1644 + elif [ -n "$simult" ]; then 1755 1645 local cnt suffix 1756 1646 1757 - cnt=$(ip netns exec $addr_ns nstat -as | grep MPTcpExtRmSubflow | awk '{print $2}') 1647 + cnt=$(get_counter ${addr_ns} "MPTcpExtRmSubflow") 1758 1648 1759 1649 # in case of simult flush, the subflow removal count on each side is 1760 1650 # unreliable 1761 - [ -z "$cnt" ] && cnt=0 1762 1651 count=$((count + cnt)) 1763 1652 [ "$count" != "$rm_subflow_nr" ] && suffix="$count in [$rm_subflow_nr:$((rm_subflow_nr*2))]" 1764 1653 if [ $count -ge "$rm_subflow_nr" ] && \ 1765 1654 [ "$count" -le "$((rm_subflow_nr *2 ))" ]; then 1766 - echo "[ ok ] $suffix" 1655 + echo -n "[ ok ] $suffix" 1767 1656 else 1768 1657 echo "[fail] got $count RM_SUBFLOW[s] expected in range [$rm_subflow_nr:$((rm_subflow_nr*2))]" 1769 1658 fail_test 1770 1659 dump_stats=1 1771 1660 fi 1772 - return 1773 - fi 1774 - if [ "$count" != "$rm_subflow_nr" ]; then 1661 + elif [ "$count" != "$rm_subflow_nr" ]; then 1775 1662 echo "[fail] got $count RM_SUBFLOW[s] expected $rm_subflow_nr" 1776 1663 fail_test 1777 1664 dump_stats=1 ··· 1790 1681 local dump_stats 1791 1682 1792 1683 printf "%-${nr_blank}s %s" " " "ptx" 1793 - count=$(ip netns exec $ns1 nstat -as | grep MPTcpExtMPPrioTx | awk '{print $2}') 1794 - [ -z "$count" ] && count=0 1795 - if [ "$count" != "$mp_prio_nr_tx" ]; then 1684 + count=$(get_counter ${ns1} "MPTcpExtMPPrioTx") 1685 + if [ -z "$count" ]; then 1686 + echo -n "[skip]" 1687 + elif [ "$count" != "$mp_prio_nr_tx" ]; then 1796 1688 echo "[fail] got $count MP_PRIO[s] TX expected $mp_prio_nr_tx" 1797 1689 fail_test 1798 1690 dump_stats=1 ··· 1802 1692 fi 1803 1693 1804 1694 echo -n " - prx " 1805 - count=$(ip netns exec $ns1 nstat -as | grep MPTcpExtMPPrioRx | awk '{print $2}') 1806 - [ -z "$count" ] && count=0 1807 - if [ "$count" != "$mp_prio_nr_rx" ]; then 1695 + count=$(get_counter ${ns1} "MPTcpExtMPPrioRx") 1696 + if [ -z "$count" ]; then 1697 + echo "[skip]" 1698 + elif [ "$count" != "$mp_prio_nr_rx" ]; then 1808 1699 echo "[fail] got $count MP_PRIO[s] RX expected $mp_prio_nr_rx" 1809 1700 fail_test 1810 1701 dump_stats=1 ··· 1921 1810 while [ $time -lt $timeout_ms ]; do 1922 1811 local cnt 1923 1812 1924 - cnt=$(ip netns exec $ns nstat -as TcpAttemptFails | grep TcpAttemptFails | awk '{print $2}') 1813 + cnt=$(get_counter ${ns} "TcpAttemptFails") 1925 1814 1926 1815 [ "$cnt" = 1 ] && return 1 1927 1816 time=$((time + 100)) ··· 2014 1903 fi 2015 1904 2016 1905 # multiple subflows, with subflow creation error 2017 - if reset "multi subflows, with failing subflow"; then 1906 + if reset_with_tcp_filter "multi subflows, with failing subflow" ns1 10.0.3.2 REJECT && 1907 + continue_if mptcp_lib_kallsyms_has "mptcp_pm_subflow_check_next$"; then 2018 1908 pm_nl_set_limits $ns1 0 2 2019 1909 pm_nl_set_limits $ns2 0 2 2020 1910 pm_nl_add_endpoint $ns2 10.0.3.2 flags subflow 2021 1911 pm_nl_add_endpoint $ns2 10.0.2.2 flags subflow 2022 - filter_tcp_from $ns1 10.0.3.2 REJECT 2023 1912 run_tests $ns1 $ns2 10.0.1.1 0 0 0 slow 2024 1913 chk_join_nr 1 1 1 2025 1914 fi 2026 1915 2027 1916 # multiple subflows, with subflow timeout on
MPJ 2028 - if reset "multi subflows, with subflow timeout"; then 1917 + if reset_with_tcp_filter "multi subflows, with subflow timeout" ns1 10.0.3.2 DROP && 1918 + continue_if mptcp_lib_kallsyms_has "mptcp_pm_subflow_check_next$"; then 2029 1919 pm_nl_set_limits $ns1 0 2 2030 1920 pm_nl_set_limits $ns2 0 2 2031 1921 pm_nl_add_endpoint $ns2 10.0.3.2 flags subflow 2032 1922 pm_nl_add_endpoint $ns2 10.0.2.2 flags subflow 2033 - filter_tcp_from $ns1 10.0.3.2 DROP 2034 1923 run_tests $ns1 $ns2 10.0.1.1 0 0 0 slow 2035 1924 chk_join_nr 1 1 1 2036 1925 fi ··· 2038 1927 # multiple subflows, check that the endpoint corresponding to 2039 1928 # closed subflow (due to reset) is not reused if additional 2040 1929 # subflows are added later 2041 - if reset "multi subflows, fair usage on close"; then 1930 + if reset_with_tcp_filter "multi subflows, fair usage on close" ns1 10.0.3.2 REJECT && 1931 + continue_if mptcp_lib_kallsyms_has "mptcp_pm_subflow_check_next$"; then 2042 1932 pm_nl_set_limits $ns1 0 1 2043 1933 pm_nl_set_limits $ns2 0 1 2044 1934 pm_nl_add_endpoint $ns2 10.0.3.2 flags subflow 2045 - filter_tcp_from $ns1 10.0.3.2 REJECT 2046 1935 run_tests $ns1 $ns2 10.0.1.1 0 0 0 slow & 2047 1936 2048 1937 # mpj subflow will be in TW after the reset ··· 2142 2031 # the peer could possibly miss some addr notification, allow retransmission 2143 2032 ip netns exec $ns1 sysctl -q net.mptcp.add_addr_timeout=1 2144 2033 run_tests $ns1 $ns2 10.0.1.1 0 0 0 slow 2145 - chk_join_nr 3 3 3 2146 2034 2147 - # the server will not signal the address terminating 2148 - # the MPC subflow 2149 - chk_add_nr 3 3 2035 + # It is not directly linked to the commit introducing this 2036 + # symbol but for the parent one which is linked anyway. 2037 + if ! 
mptcp_lib_kallsyms_has "mptcp_pm_subflow_check_next$"; then 2038 + chk_join_nr 3 3 2 2039 + chk_add_nr 4 4 2040 + else 2041 + chk_join_nr 3 3 3 2042 + # the server will not signal the address terminating 2043 + # the MPC subflow 2044 + chk_add_nr 3 3 2045 + fi 2150 2046 fi 2151 2047 } 2152 2048 ··· 2394 2276 pm_nl_add_endpoint $ns2 10.0.4.2 flags subflow 2395 2277 run_tests $ns1 $ns2 10.0.1.1 0 -8 -8 slow 2396 2278 chk_join_nr 3 3 3 2397 - chk_rm_nr 0 3 simult 2279 + 2280 + if mptcp_lib_kversion_ge 5.18; then 2281 + chk_rm_nr 0 3 simult 2282 + else 2283 + chk_rm_nr 3 3 2284 + fi 2398 2285 fi 2399 2286 2400 2287 # addresses flush ··· 2637 2514 2638 2515 mixed_tests() 2639 2516 { 2640 - if reset "IPv4 sockets do not use IPv6 addresses"; then 2517 + if reset "IPv4 sockets do not use IPv6 addresses" && 2518 + continue_if mptcp_lib_kversion_ge 6.3; then 2641 2519 pm_nl_set_limits $ns1 0 1 2642 2520 pm_nl_set_limits $ns2 1 1 2643 2521 pm_nl_add_endpoint $ns1 dead:beef:2::1 flags signal ··· 2647 2523 fi 2648 2524 2649 2525 # Need an IPv6 mptcp socket to allow subflows of both families 2650 - if reset "simult IPv4 and IPv6 subflows"; then 2526 + if reset "simult IPv4 and IPv6 subflows" && 2527 + continue_if mptcp_lib_kversion_ge 6.3; then 2651 2528 pm_nl_set_limits $ns1 0 1 2652 2529 pm_nl_set_limits $ns2 1 1 2653 2530 pm_nl_add_endpoint $ns1 10.0.1.1 flags signal ··· 2657 2532 fi 2658 2533 2659 2534 # cross families subflows will not be created even in fullmesh mode 2660 - if reset "simult IPv4 and IPv6 subflows, fullmesh 1x1"; then 2535 + if reset "simult IPv4 and IPv6 subflows, fullmesh 1x1" && 2536 + continue_if mptcp_lib_kversion_ge 6.3; then 2661 2537 pm_nl_set_limits $ns1 0 4 2662 2538 pm_nl_set_limits $ns2 1 4 2663 2539 pm_nl_add_endpoint $ns2 dead:beef:2::2 flags subflow,fullmesh ··· 2669 2543 2670 2544 # fullmesh still tries to create all the possibly subflows with 2671 2545 # matching family 2672 - if reset "simult IPv4 and IPv6 subflows, fullmesh 2x2"; then 
2546 + if reset "simult IPv4 and IPv6 subflows, fullmesh 2x2" && 2547 + continue_if mptcp_lib_kversion_ge 6.3; then 2673 2548 pm_nl_set_limits $ns1 0 4 2674 2549 pm_nl_set_limits $ns2 2 4 2675 2550 pm_nl_add_endpoint $ns1 10.0.2.1 flags signal ··· 2683 2556 backup_tests() 2684 2557 { 2685 2558 # single subflow, backup 2686 - if reset "single subflow, backup"; then 2559 + if reset "single subflow, backup" && 2560 + continue_if mptcp_lib_kallsyms_has "subflow_rebuild_header$"; then 2687 2561 pm_nl_set_limits $ns1 0 1 2688 2562 pm_nl_set_limits $ns2 0 1 2689 2563 pm_nl_add_endpoint $ns2 10.0.3.2 flags subflow,backup ··· 2694 2566 fi 2695 2567 2696 2568 # single address, backup 2697 - if reset "single address, backup"; then 2569 + if reset "single address, backup" && 2570 + continue_if mptcp_lib_kallsyms_has "subflow_rebuild_header$"; then 2698 2571 pm_nl_set_limits $ns1 0 1 2699 2572 pm_nl_add_endpoint $ns1 10.0.2.1 flags signal 2700 2573 pm_nl_set_limits $ns2 1 1 ··· 2706 2577 fi 2707 2578 2708 2579 # single address with port, backup 2709 - if reset "single address with port, backup"; then 2580 + if reset "single address with port, backup" && 2581 + continue_if mptcp_lib_kallsyms_has "subflow_rebuild_header$"; then 2710 2582 pm_nl_set_limits $ns1 0 1 2711 2583 pm_nl_add_endpoint $ns1 10.0.2.1 flags signal port 10100 2712 2584 pm_nl_set_limits $ns2 1 1 ··· 2717 2587 chk_prio_nr 1 1 2718 2588 fi 2719 2589 2720 - if reset "mpc backup"; then 2590 + if reset "mpc backup" && 2591 + continue_if mptcp_lib_kallsyms_doesnt_have "mptcp_subflow_send_ack$"; then 2721 2592 pm_nl_add_endpoint $ns2 10.0.1.2 flags subflow,backup 2722 2593 run_tests $ns1 $ns2 10.0.1.1 0 0 0 slow 2723 2594 chk_join_nr 0 0 0 2724 2595 chk_prio_nr 0 1 2725 2596 fi 2726 2597 2727 - if reset "mpc backup both sides"; then 2598 + if reset "mpc backup both sides" && 2599 + continue_if mptcp_lib_kallsyms_doesnt_have "mptcp_subflow_send_ack$"; then 2728 2600 pm_nl_add_endpoint $ns1 10.0.1.1 flags subflow,backup 
2729 2601 pm_nl_add_endpoint $ns2 10.0.1.2 flags subflow,backup 2730 2602 run_tests $ns1 $ns2 10.0.1.1 0 0 0 slow ··· 2734 2602 chk_prio_nr 1 1 2735 2603 fi 2736 2604 2737 - if reset "mpc switch to backup"; then 2605 + if reset "mpc switch to backup" && 2606 + continue_if mptcp_lib_kallsyms_doesnt_have "mptcp_subflow_send_ack$"; then 2738 2607 pm_nl_add_endpoint $ns2 10.0.1.2 flags subflow 2739 2608 run_tests $ns1 $ns2 10.0.1.1 0 0 0 slow backup 2740 2609 chk_join_nr 0 0 0 2741 2610 chk_prio_nr 0 1 2742 2611 fi 2743 2612 2744 - if reset "mpc switch to backup both sides"; then 2613 + if reset "mpc switch to backup both sides" && 2614 + continue_if mptcp_lib_kallsyms_doesnt_have "mptcp_subflow_send_ack$"; then 2745 2615 pm_nl_add_endpoint $ns1 10.0.1.1 flags subflow 2746 2616 pm_nl_add_endpoint $ns2 10.0.1.2 flags subflow 2747 2617 run_tests $ns1 $ns2 10.0.1.1 0 0 0 slow backup ··· 2769 2635 local family 2770 2636 local saddr 2771 2637 local sport 2638 + local name 2772 2639 2773 2640 if [ $e_type = $LISTENER_CREATED ]; then 2774 - stdbuf -o0 -e0 printf "\t\t\t\t\t CREATE_LISTENER %s:%s"\ 2775 - $e_saddr $e_sport 2641 + name="LISTENER_CREATED" 2776 2642 elif [ $e_type = $LISTENER_CLOSED ]; then 2777 - stdbuf -o0 -e0 printf "\t\t\t\t\t CLOSE_LISTENER %s:%s "\ 2778 - $e_saddr $e_sport 2643 + name="LISTENER_CLOSED" 2644 + else 2645 + name="$e_type" 2779 2646 fi 2780 2647 2781 - type=$(grep "type:$e_type," $evt | 2782 - sed --unbuffered -n 's/.*\(type:\)\([[:digit:]]*\).*$/\2/p;q') 2783 - family=$(grep "type:$e_type," $evt | 2784 - sed --unbuffered -n 's/.*\(family:\)\([[:digit:]]*\).*$/\2/p;q') 2785 - sport=$(grep "type:$e_type," $evt | 2786 - sed --unbuffered -n 's/.*\(sport:\)\([[:digit:]]*\).*$/\2/p;q') 2648 + printf "%-${nr_blank}s %s %s:%s " " " "$name" "$e_saddr" "$e_sport" 2649 + 2650 + if ! 
mptcp_lib_kallsyms_has "mptcp_event_pm_listener$"; then 2651 + printf "[skip]: event not supported\n" 2652 + return 2653 + fi 2654 + 2655 + type=$(grep "type:$e_type," $evt | sed -n 's/.*\(type:\)\([[:digit:]]*\).*$/\2/p;q') 2656 + family=$(grep "type:$e_type," $evt | sed -n 's/.*\(family:\)\([[:digit:]]*\).*$/\2/p;q') 2657 + sport=$(grep "type:$e_type," $evt | sed -n 's/.*\(sport:\)\([[:digit:]]*\).*$/\2/p;q') 2787 2658 if [ $family ] && [ $family = $AF_INET6 ]; then 2788 - saddr=$(grep "type:$e_type," $evt | 2789 - sed --unbuffered -n 's/.*\(saddr6:\)\([0-9a-f:.]*\).*$/\2/p;q') 2659 + saddr=$(grep "type:$e_type," $evt | sed -n 's/.*\(saddr6:\)\([0-9a-f:.]*\).*$/\2/p;q') 2790 2660 else 2791 - saddr=$(grep "type:$e_type," $evt | 2792 - sed --unbuffered -n 's/.*\(saddr4:\)\([0-9.]*\).*$/\2/p;q') 2661 + saddr=$(grep "type:$e_type," $evt | sed -n 's/.*\(saddr4:\)\([0-9.]*\).*$/\2/p;q') 2793 2662 fi 2794 2663 2795 2664 if [ $type ] && [ $type = $e_type ] && 2796 2665 [ $family ] && [ $family = $e_family ] && 2797 2666 [ $saddr ] && [ $saddr = $e_saddr ] && 2798 2667 [ $sport ] && [ $sport = $e_sport ]; then 2799 - stdbuf -o0 -e0 printf "[ ok ]\n" 2668 + echo "[ ok ]" 2800 2669 return 0 2801 2670 fi 2802 2671 fail_test 2803 - stdbuf -o0 -e0 printf "[fail]\n" 2672 + echo "[fail]" 2804 2673 } 2805 2674 2806 2675 add_addr_ports_tests() ··· 3109 2972 fi 3110 2973 3111 2974 # set fullmesh flag 3112 - if reset "set fullmesh flag test"; then 2975 + if reset "set fullmesh flag test" && 2976 + continue_if mptcp_lib_kversion_ge 5.18; then 3113 2977 pm_nl_set_limits $ns1 4 4 3114 2978 pm_nl_add_endpoint $ns1 10.0.2.1 flags subflow 3115 2979 pm_nl_set_limits $ns2 4 4 ··· 3120 2982 fi 3121 2983 3122 2984 # set nofullmesh flag 3123 - if reset "set nofullmesh flag test"; then 2985 + if reset "set nofullmesh flag test" && 2986 + continue_if mptcp_lib_kversion_ge 5.18; then 3124 2987 pm_nl_set_limits $ns1 4 4 3125 2988 pm_nl_add_endpoint $ns1 10.0.2.1 flags subflow,fullmesh 3126 2989 
pm_nl_set_limits $ns2 4 4 ··· 3131 2992 fi 3132 2993 3133 2994 # set backup,fullmesh flags 3134 - if reset "set backup,fullmesh flags test"; then 2995 + if reset "set backup,fullmesh flags test" && 2996 + continue_if mptcp_lib_kversion_ge 5.18; then 3135 2997 pm_nl_set_limits $ns1 4 4 3136 2998 pm_nl_add_endpoint $ns1 10.0.2.1 flags subflow 3137 2999 pm_nl_set_limits $ns2 4 4 ··· 3143 3003 fi 3144 3004 3145 3005 # set nobackup,nofullmesh flags 3146 - if reset "set nobackup,nofullmesh flags test"; then 3006 + if reset "set nobackup,nofullmesh flags test" && 3007 + continue_if mptcp_lib_kversion_ge 5.18; then 3147 3008 pm_nl_set_limits $ns1 4 4 3148 3009 pm_nl_set_limits $ns2 4 4 3149 3010 pm_nl_add_endpoint $ns2 10.0.2.2 flags subflow,backup,fullmesh ··· 3157 3016 3158 3017 fastclose_tests() 3159 3018 { 3160 - if reset "fastclose test"; then 3019 + if reset_check_counter "fastclose test" "MPTcpExtMPFastcloseTx"; then 3161 3020 run_tests $ns1 $ns2 10.0.1.1 1024 0 fastclose_client 3162 3021 chk_join_nr 0 0 0 3163 3022 chk_fclose_nr 1 1 3164 3023 chk_rst_nr 1 1 invert 3165 3024 fi 3166 3025 3167 - if reset "fastclose server test"; then 3026 + if reset_check_counter "fastclose server test" "MPTcpExtMPFastcloseRx"; then 3168 3027 run_tests $ns1 $ns2 10.0.1.1 1024 0 fastclose_server 3169 3028 chk_join_nr 0 0 0 3170 3029 chk_fclose_nr 1 1 invert ··· 3202 3061 userspace_tests() 3203 3062 { 3204 3063 # userspace pm type prevents add_addr 3205 - if reset "userspace pm type prevents add_addr"; then 3064 + if reset "userspace pm type prevents add_addr" && 3065 + continue_if mptcp_lib_has_file '/proc/sys/net/mptcp/pm_type'; then 3206 3066 set_userspace_pm $ns1 3207 3067 pm_nl_set_limits $ns1 0 2 3208 3068 pm_nl_set_limits $ns2 0 2 ··· 3214 3072 fi 3215 3073 3216 3074 # userspace pm type does not echo add_addr without daemon 3217 - if reset "userspace pm no echo w/o daemon"; then 3075 + if reset "userspace pm no echo w/o daemon" && 3076 + continue_if mptcp_lib_has_file 
'/proc/sys/net/mptcp/pm_type'; then 3218 3077 set_userspace_pm $ns2 3219 3078 pm_nl_set_limits $ns1 0 2 3220 3079 pm_nl_set_limits $ns2 0 2 ··· 3226 3083 fi 3227 3084 3228 3085 # userspace pm type rejects join 3229 - if reset "userspace pm type rejects join"; then 3086 + if reset "userspace pm type rejects join" && 3087 + continue_if mptcp_lib_has_file '/proc/sys/net/mptcp/pm_type'; then 3230 3088 set_userspace_pm $ns1 3231 3089 pm_nl_set_limits $ns1 1 1 3232 3090 pm_nl_set_limits $ns2 1 1 ··· 3237 3093 fi 3238 3094 3239 3095 # userspace pm type does not send join 3240 - if reset "userspace pm type does not send join"; then 3096 + if reset "userspace pm type does not send join" && 3097 + continue_if mptcp_lib_has_file '/proc/sys/net/mptcp/pm_type'; then 3241 3098 set_userspace_pm $ns2 3242 3099 pm_nl_set_limits $ns1 1 1 3243 3100 pm_nl_set_limits $ns2 1 1 ··· 3248 3103 fi 3249 3104 3250 3105 # userspace pm type prevents mp_prio 3251 - if reset "userspace pm type prevents mp_prio"; then 3106 + if reset "userspace pm type prevents mp_prio" && 3107 + continue_if mptcp_lib_has_file '/proc/sys/net/mptcp/pm_type'; then 3252 3108 set_userspace_pm $ns1 3253 3109 pm_nl_set_limits $ns1 1 1 3254 3110 pm_nl_set_limits $ns2 1 1 ··· 3260 3114 fi 3261 3115 3262 3116 # userspace pm type prevents rm_addr 3263 - if reset "userspace pm type prevents rm_addr"; then 3117 + if reset "userspace pm type prevents rm_addr" && 3118 + continue_if mptcp_lib_has_file '/proc/sys/net/mptcp/pm_type'; then 3264 3119 set_userspace_pm $ns1 3265 3120 set_userspace_pm $ns2 3266 3121 pm_nl_set_limits $ns1 0 1 ··· 3273 3126 fi 3274 3127 3275 3128 # userspace pm add & remove address 3276 - if reset_with_events "userspace pm add & remove address"; then 3129 + if reset_with_events "userspace pm add & remove address" && 3130 + continue_if mptcp_lib_has_file '/proc/sys/net/mptcp/pm_type'; then 3277 3131 set_userspace_pm $ns1 3278 3132 pm_nl_set_limits $ns2 1 1 3279 3133 run_tests $ns1 $ns2 10.0.1.1 0 
userspace_1 0 slow ··· 3285 3137 fi 3286 3138 3287 3139 # userspace pm create destroy subflow 3288 - if reset_with_events "userspace pm create destroy subflow"; then 3140 + if reset_with_events "userspace pm create destroy subflow" && 3141 + continue_if mptcp_lib_has_file '/proc/sys/net/mptcp/pm_type'; then 3289 3142 set_userspace_pm $ns2 3290 3143 pm_nl_set_limits $ns1 0 1 3291 3144 run_tests $ns1 $ns2 10.0.1.1 0 0 userspace_1 slow 3292 3145 chk_join_nr 1 1 1 3293 - chk_rm_nr 0 1 3146 + chk_rm_nr 1 1 3294 3147 kill_events_pids 3295 3148 fi 3296 3149 } 3297 3150 3298 3151 endpoint_tests() 3299 3152 { 3153 + # subflow_rebuild_header is needed to support the implicit flag 3300 3154 # userspace pm type prevents add_addr 3301 - if reset "implicit EP"; then 3155 + if reset "implicit EP" && 3156 + mptcp_lib_kallsyms_has "subflow_rebuild_header$"; then 3302 3157 pm_nl_set_limits $ns1 2 2 3303 3158 pm_nl_set_limits $ns2 2 2 3304 3159 pm_nl_add_endpoint $ns1 10.0.2.1 flags signal ··· 3321 3170 kill_tests_wait 3322 3171 fi 3323 3172 3324 - if reset "delete and re-add"; then 3173 + if reset "delete and re-add" && 3174 + mptcp_lib_kallsyms_has "subflow_rebuild_header$"; then 3325 3175 pm_nl_set_limits $ns1 1 1 3326 3176 pm_nl_set_limits $ns2 1 1 3327 3177 pm_nl_add_endpoint $ns2 10.0.2.2 id 2 dev ns2eth2 flags subflow
+64
tools/testing/selftests/net/mptcp/mptcp_lib.sh
··· 38 38 exit ${KSFT_SKIP} 39 39 fi 40 40 } 41 + 42 + mptcp_lib_check_kallsyms() { 43 + if ! mptcp_lib_has_file "/proc/kallsyms"; then 44 + echo "SKIP: CONFIG_KALLSYMS is missing" 45 + exit ${KSFT_SKIP} 46 + fi 47 + } 48 + 49 + # Internal: use mptcp_lib_kallsyms_has() instead 50 + __mptcp_lib_kallsyms_has() { 51 + local sym="${1}" 52 + 53 + mptcp_lib_check_kallsyms 54 + 55 + grep -q " ${sym}" /proc/kallsyms 56 + } 57 + 58 + # $1: part of a symbol to look at, add '$' at the end for full name 59 + mptcp_lib_kallsyms_has() { 60 + local sym="${1}" 61 + 62 + if __mptcp_lib_kallsyms_has "${sym}"; then 63 + return 0 64 + fi 65 + 66 + mptcp_lib_fail_if_expected_feature "${sym} symbol not found" 67 + } 68 + 69 + # $1: part of a symbol to look at, add '$' at the end for full name 70 + mptcp_lib_kallsyms_doesnt_have() { 71 + local sym="${1}" 72 + 73 + if ! __mptcp_lib_kallsyms_has "${sym}"; then 74 + return 0 75 + fi 76 + 77 + mptcp_lib_fail_if_expected_feature "${sym} symbol has been found" 78 + } 79 + 80 + # !!!AVOID USING THIS!!! 81 + # Features might not land in the expected version and features can be backported 82 + # 83 + # $1: kernel version, e.g. 6.3 84 + mptcp_lib_kversion_ge() { 85 + local exp_maj="${1%.*}" 86 + local exp_min="${1#*.}" 87 + local v maj min 88 + 89 + # If the kernel has backported features, set this env var to 1: 90 + if [ "${SELFTESTS_MPTCP_LIB_NO_KVERSION_CHECK:-}" = "1" ]; then 91 + return 0 92 + fi 93 + 94 + v=$(uname -r | cut -d'.' -f1,2) 95 + maj=${v%.*} 96 + min=${v#*.} 97 + 98 + if [ "${maj}" -gt "${exp_maj}" ] || 99 + { [ "${maj}" -eq "${exp_maj}" ] && [ "${min}" -ge "${exp_min}" ]; }; then 100 + return 0 101 + fi 102 + 103 + mptcp_lib_fail_if_expected_feature "kernel version ${1} lower than ${v}" 104 + }
+12 -6
tools/testing/selftests/net/mptcp/mptcp_sockopt.c
··· 87 87 uint64_t tcpi_rcv_delta; 88 88 }; 89 89 90 + #ifndef MIN 91 + #define MIN(a, b) ((a) < (b) ? (a) : (b)) 92 + #endif 93 + 90 94 static void die_perror(const char *msg) 91 95 { 92 96 perror(msg); ··· 353 349 xerror("getsockopt MPTCP_TCPINFO (tries %d, %m)"); 354 350 355 351 assert(olen <= sizeof(ti)); 356 - assert(ti.d.size_user == ti.d.size_kernel); 357 - assert(ti.d.size_user == sizeof(struct tcp_info)); 352 + assert(ti.d.size_kernel > 0); 353 + assert(ti.d.size_user == 354 + MIN(ti.d.size_kernel, sizeof(struct tcp_info))); 358 355 assert(ti.d.num_subflows == 1); 359 356 360 357 assert(olen > (socklen_t)sizeof(struct mptcp_subflow_data)); 361 358 olen -= sizeof(struct mptcp_subflow_data); 362 - assert(olen == sizeof(struct tcp_info)); 359 + assert(olen == ti.d.size_user); 363 360 364 361 if (ti.ti[0].tcpi_bytes_sent == w && 365 362 ti.ti[0].tcpi_bytes_received == r) ··· 406 401 die_perror("getsockopt MPTCP_SUBFLOW_ADDRS"); 407 402 408 403 assert(olen <= sizeof(addrs)); 409 - assert(addrs.d.size_user == addrs.d.size_kernel); 410 - assert(addrs.d.size_user == sizeof(struct mptcp_subflow_addrs)); 404 + assert(addrs.d.size_kernel > 0); 405 + assert(addrs.d.size_user == 406 + MIN(addrs.d.size_kernel, sizeof(struct mptcp_subflow_addrs))); 411 407 assert(addrs.d.num_subflows == 1); 412 408 413 409 assert(olen > (socklen_t)sizeof(struct mptcp_subflow_data)); 414 410 olen -= sizeof(struct mptcp_subflow_data); 415 - assert(olen == sizeof(struct mptcp_subflow_addrs)); 411 + assert(olen == addrs.d.size_user); 416 412 417 413 llen = sizeof(local); 418 414 ret = getsockname(fd, (struct sockaddr *)&local, &llen);
+18 -2
tools/testing/selftests/net/mptcp/mptcp_sockopt.sh
··· 87 87 } 88 88 89 89 mptcp_lib_check_mptcp 90 + mptcp_lib_check_kallsyms 90 91 91 92 ip -Version > /dev/null 2>&1 92 93 if [ $? -ne 0 ];then ··· 187 186 local_addr="0.0.0.0" 188 187 fi 189 188 189 + cmsg="TIMESTAMPNS" 190 + if mptcp_lib_kallsyms_has "mptcp_ioctl$"; then 191 + cmsg+=",TCPINQ" 192 + fi 193 + 190 194 timeout ${timeout_test} \ 191 195 ip netns exec ${listener_ns} \ 192 - $mptcp_connect -t ${timeout_poll} -l -M 1 -p $port -s ${srv_proto} -c TIMESTAMPNS,TCPINQ \ 196 + $mptcp_connect -t ${timeout_poll} -l -M 1 -p $port -s ${srv_proto} -c "${cmsg}" \ 193 197 ${local_addr} < "$sin" > "$sout" & 194 198 local spid=$! 195 199 ··· 202 196 203 197 timeout ${timeout_test} \ 204 198 ip netns exec ${connector_ns} \ 205 - $mptcp_connect -t ${timeout_poll} -M 2 -p $port -s ${cl_proto} -c TIMESTAMPNS,TCPINQ \ 199 + $mptcp_connect -t ${timeout_poll} -M 2 -p $port -s ${cl_proto} -c "${cmsg}" \ 206 200 $connect_addr < "$cin" > "$cout" & 207 201 208 202 local cpid=$! ··· 259 253 { 260 254 local lret=0 261 255 256 + if ! mptcp_lib_kallsyms_has "mptcp_diag_fill_info$"; then 257 + echo "INFO: MPTCP sockopt not supported: SKIP" 258 + return 259 + fi 260 + 262 261 ip netns exec "$ns_sbox" ./mptcp_sockopt 263 262 lret=$? 264 263 ··· 317 306 do_tcpinq_tests() 318 307 { 319 308 local lret=0 309 + 310 + if ! mptcp_lib_kallsyms_has "mptcp_ioctl$"; then 311 + echo "INFO: TCP_INQ not supported: SKIP" 312 + return 313 + fi 320 314 321 315 local args 322 316 for args in "-t tcp" "-r tcp"; do
+17 -10
tools/testing/selftests/net/mptcp/pm_netlink.sh
··· 73 73 } 74 74 75 75 check "ip netns exec $ns1 ./pm_nl_ctl dump" "" "defaults addr list" 76 - check "ip netns exec $ns1 ./pm_nl_ctl limits" "accept 0 76 + 77 + default_limits="$(ip netns exec $ns1 ./pm_nl_ctl limits)" 78 + if mptcp_lib_expect_all_features; then 79 + check "ip netns exec $ns1 ./pm_nl_ctl limits" "accept 0 77 80 subflows 2" "defaults limits" 81 + fi 78 82 79 83 ip netns exec $ns1 ./pm_nl_ctl add 10.0.1.1 80 84 ip netns exec $ns1 ./pm_nl_ctl add 10.0.1.2 flags subflow dev lo ··· 125 121 check "ip netns exec $ns1 ./pm_nl_ctl dump" "" "flush addrs" 126 122 127 123 ip netns exec $ns1 ./pm_nl_ctl limits 9 1 128 - check "ip netns exec $ns1 ./pm_nl_ctl limits" "accept 0 129 - subflows 2" "rcv addrs above hard limit" 124 + check "ip netns exec $ns1 ./pm_nl_ctl limits" "$default_limits" "rcv addrs above hard limit" 130 125 131 126 ip netns exec $ns1 ./pm_nl_ctl limits 1 9 132 - check "ip netns exec $ns1 ./pm_nl_ctl limits" "accept 0 133 - subflows 2" "subflows above hard limit" 127 + check "ip netns exec $ns1 ./pm_nl_ctl limits" "$default_limits" "subflows above hard limit" 134 128 135 129 ip netns exec $ns1 ./pm_nl_ctl limits 8 8 136 130 check "ip netns exec $ns1 ./pm_nl_ctl limits" "accept 8 ··· 178 176 ip netns exec $ns1 ./pm_nl_ctl set 10.0.1.1 flags nobackup 179 177 check "ip netns exec $ns1 ./pm_nl_ctl dump" "id 1 flags \ 180 178 subflow 10.0.1.1" " (nobackup)" 179 + 180 + # fullmesh support has been added later 181 181 ip netns exec $ns1 ./pm_nl_ctl set id 1 flags fullmesh 182 - check "ip netns exec $ns1 ./pm_nl_ctl dump" "id 1 flags \ 182 + if ip netns exec $ns1 ./pm_nl_ctl dump | grep -q "fullmesh" || 183 + mptcp_lib_expect_all_features; then 184 + check "ip netns exec $ns1 ./pm_nl_ctl dump" "id 1 flags \ 183 185 subflow,fullmesh 10.0.1.1" " (fullmesh)" 184 - ip netns exec $ns1 ./pm_nl_ctl set id 1 flags nofullmesh 185 - check "ip netns exec $ns1 ./pm_nl_ctl dump" "id 1 flags \ 186 + ip netns exec $ns1 ./pm_nl_ctl set id 1 flags nofullmesh 187 + 
check "ip netns exec $ns1 ./pm_nl_ctl dump" "id 1 flags \ 186 188 subflow 10.0.1.1" " (nofullmesh)" 187 - ip netns exec $ns1 ./pm_nl_ctl set id 1 flags backup,fullmesh 188 - check "ip netns exec $ns1 ./pm_nl_ctl dump" "id 1 flags \ 189 + ip netns exec $ns1 ./pm_nl_ctl set id 1 flags backup,fullmesh 190 + check "ip netns exec $ns1 ./pm_nl_ctl dump" "id 1 flags \ 189 191 subflow,backup,fullmesh 10.0.1.1" " (backup,fullmesh)" 192 + fi 190 193 191 194 exit $ret
+12 -1
tools/testing/selftests/net/mptcp/userspace_pm.sh
··· 4 4 . "$(dirname "${0}")/mptcp_lib.sh" 5 5 6 6 mptcp_lib_check_mptcp 7 + mptcp_lib_check_kallsyms 8 + 9 + if ! mptcp_lib_has_file '/proc/sys/net/mptcp/pm_type'; then 10 + echo "userspace pm tests are not supported by the kernel: SKIP" 11 + exit ${KSFT_SKIP} 12 + fi 7 13 8 14 ip -Version > /dev/null 2>&1 9 15 if [ $? -ne 0 ];then 10 16 echo "SKIP: Cannot not run test without ip tool" 11 - exit 1 17 + exit ${KSFT_SKIP} 12 18 fi 13 19 14 20 ANNOUNCED=6 # MPTCP_EVENT_ANNOUNCED ··· 914 908 test_listener() 915 909 { 916 910 print_title "Listener tests" 911 + 912 + if ! mptcp_lib_kallsyms_has "mptcp_event_pm_listener$"; then 913 + stdbuf -o0 -e0 printf "LISTENER events \t[SKIP] Not supported\n" 914 + return 915 + fi 917 916 918 917 # Capture events on the network namespace running the client 919 918 :>$client_evts
+3 -3
tools/testing/selftests/ptp/testptp.c
··· 502 502 interval = t2 - t1; 503 503 offset = (t2 + t1) / 2 - tp; 504 504 505 - printf("system time: %lld.%u\n", 505 + printf("system time: %lld.%09u\n", 506 506 (pct+2*i)->sec, (pct+2*i)->nsec); 507 - printf("phc time: %lld.%u\n", 507 + printf("phc time: %lld.%09u\n", 508 508 (pct+2*i+1)->sec, (pct+2*i+1)->nsec); 509 - printf("system time: %lld.%u\n", 509 + printf("system time: %lld.%09u\n", 510 510 (pct+2*i+2)->sec, (pct+2*i+2)->nsec); 511 511 printf("system/phc clock time offset is %" PRId64 " ns\n" 512 512 "system clock time delay is %" PRId64 " ns\n",
+1 -5
tools/testing/selftests/tc-testing/config
··· 6 6 CONFIG_NF_CONNTRACK_ZONES=y 7 7 CONFIG_NF_CONNTRACK_LABELS=y 8 8 CONFIG_NF_NAT=m 9 + CONFIG_NETFILTER_XT_TARGET_LOG=m 9 10 10 11 CONFIG_NET_SCHED=y 11 12 12 13 # 13 14 # Queueing/Scheduling 14 15 # 15 - CONFIG_NET_SCH_ATM=m 16 16 CONFIG_NET_SCH_CAKE=m 17 - CONFIG_NET_SCH_CBQ=m 18 17 CONFIG_NET_SCH_CBS=m 19 18 CONFIG_NET_SCH_CHOKE=m 20 19 CONFIG_NET_SCH_CODEL=m 21 20 CONFIG_NET_SCH_DRR=m 22 - CONFIG_NET_SCH_DSMARK=m 23 21 CONFIG_NET_SCH_ETF=m 24 22 CONFIG_NET_SCH_FQ=m 25 23 CONFIG_NET_SCH_FQ_CODEL=m ··· 55 57 CONFIG_NET_CLS_FLOWER=m 56 58 CONFIG_NET_CLS_MATCHALL=m 57 59 CONFIG_NET_CLS_ROUTE4=m 58 - CONFIG_NET_CLS_RSVP=m 59 - CONFIG_NET_CLS_TCINDEX=m 60 60 CONFIG_NET_EMATCH=y 61 61 CONFIG_NET_EMATCH_STACK=32 62 62 CONFIG_NET_EMATCH_CMP=m
+2 -2
tools/testing/selftests/tc-testing/tc-tests/qdiscs/sfb.json
··· 58 58 "setup": [ 59 59 "$IP link add dev $DUMMY type dummy || /bin/true" 60 60 ], 61 - "cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root sfb db 10", 61 + "cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root sfb db 100", 62 62 "expExitCode": "0", 63 63 "verifyCmd": "$TC qdisc show dev $DUMMY", 64 - "matchPattern": "qdisc sfb 1: root refcnt [0-9]+ rehash 600s db 10ms", 64 + "matchPattern": "qdisc sfb 1: root refcnt [0-9]+ rehash 600s db 100ms", 65 65 "matchCount": "1", 66 66 "teardown": [ 67 67 "$TC qdisc del dev $DUMMY handle 1: root",
+1
tools/testing/selftests/tc-testing/tdc.sh
··· 2 2 # SPDX-License-Identifier: GPL-2.0 3 3 4 4 modprobe netdevsim 5 + modprobe sch_teql 5 6 ./tdc.py -c actions --nobuildebpf 6 7 ./tdc.py -c qdisc
+7
tools/virtio/ringtest/.gitignore
··· 1 + # SPDX-License-Identifier: GPL-2.0-only 2 + /noring 3 + /ptr_ring 4 + /ring 5 + /virtio_ring_0_9 6 + /virtio_ring_inorder 7 + /virtio_ring_poll
+11
tools/virtio/ringtest/main.h
··· 8 8 #ifndef MAIN_H 9 9 #define MAIN_H 10 10 11 + #include <assert.h> 11 12 #include <stdbool.h> 12 13 13 14 extern int param; ··· 96 95 #define cpu_relax() asm ("rep; nop" ::: "memory") 97 96 #elif defined(__s390x__) 98 97 #define cpu_relax() barrier() 98 + #elif defined(__aarch64__) 99 + #define cpu_relax() asm ("yield" ::: "memory") 99 100 #else 100 101 #define cpu_relax() assert(0) 101 102 #endif ··· 115 112 116 113 #if defined(__x86_64__) || defined(__i386__) 117 114 #define smp_mb() asm volatile("lock; addl $0,-132(%%rsp)" ::: "memory", "cc") 115 + #elif defined(__aarch64__) 116 + #define smp_mb() asm volatile("dmb ish" ::: "memory") 118 117 #else 119 118 /* 120 119 * Not using __ATOMIC_SEQ_CST since gcc docs say they are only synchronized ··· 141 136 142 137 #if defined(__i386__) || defined(__x86_64__) || defined(__s390x__) 143 138 #define smp_wmb() barrier() 139 + #elif defined(__aarch64__) 140 + #define smp_wmb() asm volatile("dmb ishst" ::: "memory") 144 141 #else 145 142 #define smp_wmb() smp_release() 143 + #endif 144 + 145 + #ifndef __always_inline 146 + #define __always_inline inline __attribute__((always_inline)) 146 147 #endif 147 148 148 149 static __always_inline
+1 -1
tools/virtio/virtio-trace/README
··· 95 95 96 96 1) Enable ftrace in the guest 97 97 <Example> 98 - # echo 1 > /sys/kernel/debug/tracing/events/sched/enable 98 + # echo 1 > /sys/kernel/tracing/events/sched/enable 99 99 100 100 2) Run trace agent in the guest 101 101 This agent must be operated as root.
+8 -4
tools/virtio/virtio-trace/trace-agent.c
··· 18 18 #define PIPE_DEF_BUFS 16 19 19 #define PIPE_MIN_SIZE (PAGE_SIZE*PIPE_DEF_BUFS) 20 20 #define PIPE_MAX_SIZE (1024*1024) 21 - #define READ_PATH_FMT \ 22 - "/sys/kernel/debug/tracing/per_cpu/cpu%d/trace_pipe_raw" 21 + #define TRACEFS "/sys/kernel/tracing" 22 + #define DEBUGFS "/sys/kernel/debug/tracing" 23 + #define READ_PATH_FMT "%s/per_cpu/cpu%d/trace_pipe_raw" 23 24 #define WRITE_PATH_FMT "/dev/virtio-ports/trace-path-cpu%d" 24 25 #define CTL_PATH "/dev/virtio-ports/agent-ctl-path" 25 26 ··· 121 120 if (this_is_write_path) 122 121 /* write(output) path */ 123 122 ret = snprintf(buf, PATH_MAX, WRITE_PATH_FMT, cpu_num); 124 - else 123 + else { 125 124 /* read(input) path */ 126 - ret = snprintf(buf, PATH_MAX, READ_PATH_FMT, cpu_num); 125 + ret = snprintf(buf, PATH_MAX, READ_PATH_FMT, TRACEFS, cpu_num); 126 + if (ret > 0 && access(buf, F_OK) != 0) 127 + ret = snprintf(buf, PATH_MAX, READ_PATH_FMT, DEBUGFS, cpu_num); 128 + } 127 129 128 130 if (ret <= 0) { 129 131 pr_err("Failed to generate %s path(CPU#%d):%d\n",