Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'drivers-5.15' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc

Pull ARM SoC driver updates from Arnd Bergmann:
"These are updates for drivers that are tied to a particular SoC,
including the corresponding device tree bindings:

- A couple of reset controller changes for unisoc, uniphier, renesas
and zte platforms

- memory controller driver fixes for omap and tegra

- Rockchip io domain driver updates

- Lots of updates for qualcomm platforms, mostly touching their
firmware and power management drivers

- Tegra FUSE and firmware driver updates

- Support for virtio transports in the SCMI firmware framework

- Cleanup of ixp4xx drivers, towards enabling multiplatform support
and bringing it up to date with modern platforms

- Minor updates for keystone, mediatek, omap, renesas"

* tag 'drivers-5.15' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc: (96 commits)
reset: simple: remove ZTE details in Kconfig help
soc: rockchip: io-domain: Remove unneeded semicolon
soc: rockchip: io-domain: add rk3568 support
dt-bindings: power: add rk3568-pmu-io-domain support
bus: ixp4xx: return on error in ixp4xx_exp_probe()
soc: renesas: Prefer memcpy() over strcpy()
firmware: tegra: Stop using seq_get_buf()
soc/tegra: fuse: Enable fuse clock on suspend for Tegra124
soc/tegra: fuse: Add runtime PM support
soc/tegra: fuse: Clear fuse->clk on driver probe failure
soc/tegra: pmc: Prevent racing with cpuidle driver
soc/tegra: bpmp: Remove unused including <linux/version.h>
dt-bindings: soc: ti: pruss: Add dma-coherent property
soc: ti: Remove pm_runtime_irq_safe() usage for smartreflex
soc: ti: pruss: Enable support for ICSSG subsystems on K3 AM64x SoCs
dt-bindings: soc: ti: pruss: Update bindings for K3 AM64x SoCs
firmware: arm_scmi: Use WARN_ON() to check configured transports
firmware: arm_scmi: Fix boolconv.cocci warnings
soc: mediatek: mmsys: Fix missing UFOE component in mt8173 table routing
soc: mediatek: mmsys: add MT8365 support
...

+4222 -1047
+61
Documentation/devicetree/bindings/ata/intel,ixp4xx-compact-flash.yaml
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/ata/intel,ixp4xx-compact-flash.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Intel IXP4xx CompactFlash Card Controller
+
+maintainers:
+  - Linus Walleij <linus.walleij@linaro.org>
+
+description: |
+  The IXP4xx network processors have a CompactFlash interface that presents
+  a CompactFlash card to the system as a true IDE (parallel ATA) device. The
+  device is always connected to the expansion bus of the IXP4xx SoCs using one
+  or two chip select areas and address translating logic on the board. The
+  node must be placed inside a chip select node on the IXP4xx expansion bus.
+
+properties:
+  compatible:
+    const: intel,ixp4xx-compact-flash
+
+  reg:
+    items:
+      - description: Command interface registers
+      - description: Control interface registers
+
+  interrupts:
+    maxItems: 1
+
+required:
+  - compatible
+  - reg
+  - interrupts
+
+allOf:
+  - $ref: pata-common.yaml#
+
+unevaluatedProperties: false
+
+examples:
+  - |
+    #include <dt-bindings/interrupt-controller/irq.h>
+
+    bus@c4000000 {
+        compatible = "intel,ixp43x-expansion-bus-controller", "syscon";
+        reg = <0xc4000000 0x1000>;
+        native-endian;
+        #address-cells = <2>;
+        #size-cells = <1>;
+        ranges = <0 0x0 0x50000000 0x01000000>, <1 0x0 0x51000000 0x01000000>;
+        dma-ranges = <0 0x0 0x50000000 0x01000000>, <1 0x0 0x51000000 0x01000000>;
+        ide@1,0 {
+            compatible = "intel,ixp4xx-compact-flash";
+            reg = <1 0x00000000 0x1000>, <1 0x00040000 0x1000>;
+            interrupt-parent = <&gpio0>;
+            interrupts = <12 IRQ_TYPE_EDGE_RISING>;
+        };
+    };
+
+...
+168
Documentation/devicetree/bindings/bus/intel,ixp4xx-expansion-bus-controller.yaml
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/bus/intel,ixp4xx-expansion-bus-controller.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Intel IXP4xx Expansion Bus Controller
+
+description: |
+  The IXP4xx expansion bus controller handles access to devices on the
+  memory-mapped expansion bus on the Intel IXP4xx family of system on chips,
+  including IXP42x, IXP43x, IXP45x and IXP46x.
+
+maintainers:
+  - Linus Walleij <linus.walleij@linaro.org>
+
+properties:
+  $nodename:
+    pattern: '^bus@[0-9a-f]+$'
+
+  compatible:
+    items:
+      - enum:
+          - intel,ixp42x-expansion-bus-controller
+          - intel,ixp43x-expansion-bus-controller
+          - intel,ixp45x-expansion-bus-controller
+          - intel,ixp46x-expansion-bus-controller
+      - const: syscon
+
+  reg:
+    description: Control registers for the expansion bus, these are not
+      inside the memory range handled by the expansion bus.
+    maxItems: 1
+
+  native-endian:
+    $ref: /schemas/types.yaml#/definitions/flag
+    description: The IXP4xx has a peculiar MMIO access scheme, as it changes
+      the access pattern for words (swizzling) on the bus depending on whether
+      the SoC is running in big-endian or little-endian mode. Thus the
+      registers must always be accessed using native endianness.
+
+  "#address-cells":
+    description: |
+      The first cell is the chip select number.
+      The second cell is the address offset within the bank.
+    const: 2
+
+  "#size-cells":
+    const: 1
+
+  ranges: true
+  dma-ranges: true
+
+patternProperties:
+  "^.*@[0-7],[0-9a-f]+$":
+    description: Devices attached to chip selects are represented as
+      subnodes.
+    type: object
+
+    properties:
+      intel,ixp4xx-eb-t1:
+        description: Address timing, extend address phase with n cycles.
+        $ref: /schemas/types.yaml#/definitions/uint32
+        maximum: 3
+
+      intel,ixp4xx-eb-t2:
+        description: Setup chip select timing, extend setup phase with n cycles.
+        $ref: /schemas/types.yaml#/definitions/uint32
+        maximum: 3
+
+      intel,ixp4xx-eb-t3:
+        description: Strobe timing, extend strobe phase with n cycles.
+        $ref: /schemas/types.yaml#/definitions/uint32
+        maximum: 15
+
+      intel,ixp4xx-eb-t4:
+        description: Hold timing, extend hold phase with n cycles.
+        $ref: /schemas/types.yaml#/definitions/uint32
+        maximum: 3
+
+      intel,ixp4xx-eb-t5:
+        description: Recovery timing, extend recovery phase with n cycles.
+        $ref: /schemas/types.yaml#/definitions/uint32
+        maximum: 15
+
+      intel,ixp4xx-eb-cycle-type:
+        description: The type of cycles to use on the expansion bus for this
+          chip select. 0 = Intel cycles, 1 = Motorola cycles, 2 = HPI cycles.
+        $ref: /schemas/types.yaml#/definitions/uint32
+        enum: [0, 1, 2]
+
+      intel,ixp4xx-eb-byte-access-on-halfword:
+        description: Allow byte read access on half word devices.
+        $ref: /schemas/types.yaml#/definitions/uint32
+        enum: [0, 1]
+
+      intel,ixp4xx-eb-hpi-hrdy-pol-high:
+        description: Set HPI HRDY polarity to active high when using HPI.
+        $ref: /schemas/types.yaml#/definitions/uint32
+        enum: [0, 1]
+
+      intel,ixp4xx-eb-mux-address-and-data:
+        description: Multiplex address and data on the data bus.
+        $ref: /schemas/types.yaml#/definitions/uint32
+        enum: [0, 1]
+
+      intel,ixp4xx-eb-ahb-split-transfers:
+        description: Enable AHB split transfers.
+        $ref: /schemas/types.yaml#/definitions/uint32
+        enum: [0, 1]
+
+      intel,ixp4xx-eb-write-enable:
+        description: Enable write cycles.
+        $ref: /schemas/types.yaml#/definitions/uint32
+        enum: [0, 1]
+
+      intel,ixp4xx-eb-byte-access:
+        description: Expansion bus uses only 8 bits. The default is to use
+          16 bits.
+        $ref: /schemas/types.yaml#/definitions/uint32
+        enum: [0, 1]
+
+required:
+  - compatible
+  - reg
+  - native-endian
+  - "#address-cells"
+  - "#size-cells"
+  - ranges
+  - dma-ranges
+
+additionalProperties: false
+
+examples:
+  - |
+    #include <dt-bindings/interrupt-controller/irq.h>
+    bus@50000000 {
+        compatible = "intel,ixp42x-expansion-bus-controller", "syscon";
+        reg = <0xc4000000 0x28>;
+        native-endian;
+        #address-cells = <2>;
+        #size-cells = <1>;
+        ranges = <0 0x0 0x50000000 0x01000000>,
+                 <1 0x0 0x51000000 0x01000000>;
+        dma-ranges = <0 0x0 0x50000000 0x01000000>,
+                     <1 0x0 0x51000000 0x01000000>;
+        flash@0,0 {
+            compatible = "intel,ixp4xx-flash", "cfi-flash";
+            bank-width = <2>;
+            reg = <0 0x00000000 0x1000000>;
+            intel,ixp4xx-eb-t3 = <3>;
+            intel,ixp4xx-eb-cycle-type = <0>;
+            intel,ixp4xx-eb-byte-access-on-halfword = <1>;
+            intel,ixp4xx-eb-write-enable = <1>;
+            intel,ixp4xx-eb-byte-access = <0>;
+        };
+        serial@1,0 {
+            compatible = "exar,xr16l2551", "ns8250";
+            reg = <1 0x00000000 0x10>;
+            interrupt-parent = <&gpio0>;
+            interrupts = <4 IRQ_TYPE_LEVEL_LOW>;
+            clock-frequency = <1843200>;
+            intel,ixp4xx-eb-t3 = <3>;
+            intel,ixp4xx-eb-cycle-type = <1>;
+            intel,ixp4xx-eb-write-enable = <1>;
+            intel,ixp4xx-eb-byte-access = <1>;
+        };
+    };
+1
Documentation/devicetree/bindings/dma/fsl-imx-sdma.txt
···
 			"fsl,imx53-sdma"
 			"fsl,imx6q-sdma"
 			"fsl,imx7d-sdma"
+			"fsl,imx6ul-sdma"
 			"fsl,imx8mq-sdma"
 			"fsl,imx8mm-sdma"
 			"fsl,imx8mn-sdma"
+7 -1
Documentation/devicetree/bindings/firmware/arm,scmi.yaml
···
       - description: SCMI compliant firmware with ARM SMC/HVC transport
         items:
           - const: arm,scmi-smc
+      - description: SCMI compliant firmware with SCMI Virtio transport.
+          The virtio transport only supports a single device.
+        items:
+          - const: arm,scmi-virtio
 
   interrupts:
     description:
···
       Each sub-node represents a protocol supported. If the platform
       supports a dedicated communication channel for a particular protocol,
       then the corresponding transport properties must be present.
+      The virtio transport does not support a dedicated communication channel.
 
     properties:
       reg:
···
 
 required:
   - compatible
-  - shmem
 
 if:
   properties:
···
 
   required:
     - mboxes
+    - shmem
 
 else:
   if:
···
   then:
     required:
       - arm,smc-id
+      - shmem
 
 examples:
   - |
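With the change above, an SCMI node using the virtio transport needs neither mboxes nor shmem. A minimal sketch of such a node, assuming the firmware node layout from the existing SCMI binding (the protocol sub-node shown is illustrative, not taken from this diff):

```dts
firmware {
	scmi {
		/* Virtio transport: no mboxes/shmem properties required */
		compatible = "arm,scmi-virtio";
		#address-cells = <1>;
		#size-cells = <0>;

		/* Example protocol sub-node (power domains, protocol 0x11) */
		scmi_devpd: protocol@11 {
			reg = <0x11>;
			#power-domain-cells = <1>;
		};
	};
};
```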
+1
Documentation/devicetree/bindings/power/qcom,rpmpd.yaml
···
           - qcom,sc8180x-rpmhpd
           - qcom,sdm845-rpmhpd
           - qcom,sdx55-rpmhpd
+          - qcom,sm6115-rpmpd
           - qcom,sm8150-rpmhpd
           - qcom,sm8250-rpmhpd
           - qcom,sm8350-rpmhpd
-135
Documentation/devicetree/bindings/power/rockchip-io-domain.txt
-Rockchip SRAM for IO Voltage Domains:
--------------------------------------
-
-IO domain voltages on some Rockchip SoCs are variable but need to be
-kept in sync between the regulators and the SoC using a special
-register.
-
-A specific example using rk3288:
-- If the regulator hooked up to a pin like SDMMC0_VDD is 3.3V then
-  bit 7 of GRF_IO_VSEL needs to be 0.  If the regulator hooked up to
-  that same pin is 1.8V then bit 7 of GRF_IO_VSEL needs to be 1.
-
-Said another way, this driver simply handles keeping bits in the SoC's
-general register file (GRF) in sync with the actual value of a voltage
-hooked up to the pins.
-
-Note that this driver specifically doesn't include:
-- any logic for deciding what voltage we should set regulators to
-- any logic for deciding whether regulators (or internal SoC blocks)
-  should have power or not have power
-
-If there were some other software that had the smarts of making
-decisions about regulators, it would work in conjunction with this
-driver.  When that other software adjusted a regulator's voltage then
-this driver would handle telling the SoC about it.  A good example is
-vqmmc for SD.  In that case the dw_mmc driver simply is told about a
-regulator.  It changes the regulator between 3.3V and 1.8V at the
-right time.  This driver notices the change and makes sure that the
-SoC is on the same page.
-
-
-Required properties:
-- compatible: should be one of:
-  - "rockchip,px30-io-voltage-domain" for px30
-  - "rockchip,px30-pmu-io-voltage-domain" for px30 pmu-domains
-  - "rockchip,rk3188-io-voltage-domain" for rk3188
-  - "rockchip,rk3228-io-voltage-domain" for rk3228
-  - "rockchip,rk3288-io-voltage-domain" for rk3288
-  - "rockchip,rk3328-io-voltage-domain" for rk3328
-  - "rockchip,rk3368-io-voltage-domain" for rk3368
-  - "rockchip,rk3368-pmu-io-voltage-domain" for rk3368 pmu-domains
-  - "rockchip,rk3399-io-voltage-domain" for rk3399
-  - "rockchip,rk3399-pmu-io-voltage-domain" for rk3399 pmu-domains
-  - "rockchip,rv1108-io-voltage-domain" for rv1108
-  - "rockchip,rv1108-pmu-io-voltage-domain" for rv1108 pmu-domains
-
-Deprecated properties:
-- rockchip,grf: phandle to the syscon managing the "general register files"
-  Systems should move the io-domains to a sub-node of the grf simple-mfd.
-
-You specify supplies using the standard regulator bindings by including
-a phandle the relevant regulator.  All specified supplies must be able
-to report their voltage.  The IO Voltage Domain for any non-specified
-supplies will be not be touched.
-
-Possible supplies for PX30:
-- vccio6-supply: The supply connected to VCCIO6.
-- vccio1-supply: The supply connected to VCCIO1.
-- vccio2-supply: The supply connected to VCCIO2.
-- vccio3-supply: The supply connected to VCCIO3.
-- vccio4-supply: The supply connected to VCCIO4.
-- vccio5-supply: The supply connected to VCCIO5.
-- vccio-oscgpi-supply: The supply connected to VCCIO_OSCGPI.
-
-Possible supplies for PX30 pmu-domains:
-- pmuio1-supply: The supply connected to PMUIO1.
-- pmuio2-supply: The supply connected to PMUIO2.
-
-Possible supplies for rk3188:
-- ap0-supply:    The supply connected to AP0_VCC.
-- ap1-supply:    The supply connected to AP1_VCC.
-- cif-supply:    The supply connected to CIF_VCC.
-- flash-supply:  The supply connected to FLASH_VCC.
-- lcdc0-supply:  The supply connected to LCD0_VCC.
-- lcdc1-supply:  The supply connected to LCD1_VCC.
-- vccio0-supply: The supply connected to VCCIO0.
-- vccio1-supply: The supply connected to VCCIO1.
-                 Sometimes also labeled VCCIO1 and VCCIO2.
-
-Possible supplies for rk3228:
-- vccio1-supply: The supply connected to VCCIO1.
-- vccio2-supply: The supply connected to VCCIO2.
-- vccio3-supply: The supply connected to VCCIO3.
-- vccio4-supply: The supply connected to VCCIO4.
-
-Possible supplies for rk3288:
-- audio-supply:  The supply connected to APIO4_VDD.
-- bb-supply:     The supply connected to APIO5_VDD.
-- dvp-supply:    The supply connected to DVPIO_VDD.
-- flash0-supply: The supply connected to FLASH0_VDD.  Typically for eMMC
-- flash1-supply: The supply connected to FLASH1_VDD.  Also known as SDIO1.
-- gpio30-supply: The supply connected to APIO1_VDD.
-- gpio1830      The supply connected to APIO2_VDD.
-- lcdc-supply:   The supply connected to LCDC_VDD.
-- sdcard-supply: The supply connected to SDMMC0_VDD.
-- wifi-supply:   The supply connected to APIO3_VDD.  Also known as SDIO0.
-
-Possible supplies for rk3368:
-- audio-supply:  The supply connected to APIO3_VDD.
-- dvp-supply:    The supply connected to DVPIO_VDD.
-- flash0-supply: The supply connected to FLASH0_VDD.  Typically for eMMC
-- gpio30-supply: The supply connected to APIO1_VDD.
-- gpio1830      The supply connected to APIO4_VDD.
-- sdcard-supply: The supply connected to SDMMC0_VDD.
-- wifi-supply:   The supply connected to APIO2_VDD.  Also known as SDIO0.
-
-Possible supplies for rk3368 pmu-domains:
-- pmu-supply:    The supply connected to PMUIO_VDD.
-- vop-supply:    The supply connected to LCDC_VDD.
-
-Possible supplies for rk3399:
-- bt656-supply:  The supply connected to APIO2_VDD.
-- audio-supply:  The supply connected to APIO5_VDD.
-- sdmmc-supply:  The supply connected to SDMMC0_VDD.
-- gpio1830      The supply connected to APIO4_VDD.
-
-Possible supplies for rk3399 pmu-domains:
-- pmu1830-supply:The supply connected to PMUIO2_VDD.
-
-Example:
-
-	io-domains {
-		compatible = "rockchip,rk3288-io-voltage-domain";
-		rockchip,grf = <&grf>;
-
-		audio-supply = <&vcc18_codec>;
-		bb-supply = <&vcc33_io>;
-		dvp-supply = <&vcc_18>;
-		flash0-supply = <&vcc18_flashio>;
-		gpio1830-supply = <&vcc33_io>;
-		gpio30-supply = <&vcc33_pmuio>;
-		lcdc-supply = <&vcc33_lcd>;
-		sdcard-supply = <&vccio_sd>;
-		wifi-supply = <&vcc18_wl>;
-	};
+360
Documentation/devicetree/bindings/power/rockchip-io-domain.yaml
+# SPDX-License-Identifier: GPL-2.0
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/power/rockchip-io-domain.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Rockchip SRAM for IO Voltage Domains
+
+maintainers:
+  - Heiko Stuebner <heiko@sntech.de>
+
+description: |
+  IO domain voltages on some Rockchip SoCs are variable but need to be
+  kept in sync between the regulators and the SoC using a special
+  register.
+
+  A specific example using rk3288
+  If the regulator hooked up to a pin like SDMMC0_VDD is 3.3V then
+  bit 7 of GRF_IO_VSEL needs to be 0. If the regulator hooked up to
+  that same pin is 1.8V then bit 7 of GRF_IO_VSEL needs to be 1.
+
+  Said another way, this driver simply handles keeping bits in the SoCs
+  General Register File (GRF) in sync with the actual value of a voltage
+  hooked up to the pins.
+
+  Note that this driver specifically does not include
+  any logic for deciding what voltage we should set regulators to
+  any logic for deciding whether regulators (or internal SoC blocks)
+  should have power or not have power
+
+  If there were some other software that had the smarts of making
+  decisions about regulators, it would work in conjunction with this
+  driver. When that other software adjusted a regulators voltage then
+  this driver would handle telling the SoC about it. A good example is
+  vqmmc for SD. In that case the dw_mmc driver simply is told about a
+  regulator. It changes the regulator between 3.3V and 1.8V at the
+  right time. This driver notices the change and makes sure that the
+  SoC is on the same page.
+
+  You specify supplies using the standard regulator bindings by including
+  a phandle the relevant regulator. All specified supplies must be able
+  to report their voltage. The IO Voltage Domain for any non-specified
+  supplies will be not be touched.
+
+properties:
+  compatible:
+    enum:
+      - rockchip,px30-io-voltage-domain
+      - rockchip,px30-pmu-io-voltage-domain
+      - rockchip,rk3188-io-voltage-domain
+      - rockchip,rk3228-io-voltage-domain
+      - rockchip,rk3288-io-voltage-domain
+      - rockchip,rk3328-io-voltage-domain
+      - rockchip,rk3368-io-voltage-domain
+      - rockchip,rk3368-pmu-io-voltage-domain
+      - rockchip,rk3399-io-voltage-domain
+      - rockchip,rk3399-pmu-io-voltage-domain
+      - rockchip,rk3568-pmu-io-voltage-domain
+      - rockchip,rv1108-io-voltage-domain
+      - rockchip,rv1108-pmu-io-voltage-domain
+
+required:
+  - compatible
+
+unevaluatedProperties: false
+
+allOf:
+  - $ref: "#/$defs/px30"
+  - $ref: "#/$defs/px30-pmu"
+  - $ref: "#/$defs/rk3188"
+  - $ref: "#/$defs/rk3228"
+  - $ref: "#/$defs/rk3288"
+  - $ref: "#/$defs/rk3328"
+  - $ref: "#/$defs/rk3368"
+  - $ref: "#/$defs/rk3368-pmu"
+  - $ref: "#/$defs/rk3399"
+  - $ref: "#/$defs/rk3399-pmu"
+  - $ref: "#/$defs/rk3568-pmu"
+  - $ref: "#/$defs/rv1108"
+  - $ref: "#/$defs/rv1108-pmu"
+
+$defs:
+  px30:
+    if:
+      properties:
+        compatible:
+          contains:
+            const: rockchip,px30-io-voltage-domain
+
+    then:
+      properties:
+        vccio1-supply:
+          description: The supply connected to VCCIO1.
+        vccio2-supply:
+          description: The supply connected to VCCIO2.
+        vccio3-supply:
+          description: The supply connected to VCCIO3.
+        vccio4-supply:
+          description: The supply connected to VCCIO4.
+        vccio5-supply:
+          description: The supply connected to VCCIO5.
+        vccio6-supply:
+          description: The supply connected to VCCIO6.
+        vccio-oscgpi-supply:
+          description: The supply connected to VCCIO_OSCGPI.
+
+  px30-pmu:
+    if:
+      properties:
+        compatible:
+          contains:
+            const: rockchip,px30-pmu-io-voltage-domain
+
+    then:
+      properties:
+        pmuio1-supply:
+          description: The supply connected to PMUIO1.
+        pmuio2-supply:
+          description: The supply connected to PMUIO2.
+
+  rk3188:
+    if:
+      properties:
+        compatible:
+          contains:
+            const: rockchip,rk3188-io-voltage-domain
+
+    then:
+      properties:
+        ap0-supply:
+          description: The supply connected to AP0_VCC.
+        ap1-supply:
+          description: The supply connected to AP1_VCC.
+        cif-supply:
+          description: The supply connected to CIF_VCC.
+        flash-supply:
+          description: The supply connected to FLASH_VCC.
+        lcdc0-supply:
+          description: The supply connected to LCD0_VCC.
+        lcdc1-supply:
+          description: The supply connected to LCD1_VCC.
+        vccio0-supply:
+          description: The supply connected to VCCIO0.
+        vccio1-supply:
+          description: The supply connected to VCCIO1. Also labeled as VCCIO2.
+
+  rk3228:
+    if:
+      properties:
+        compatible:
+          contains:
+            const: rockchip,rk3228-io-voltage-domain
+
+    then:
+      properties:
+        vccio1-supply:
+          description: The supply connected to VCCIO1.
+        vccio2-supply:
+          description: The supply connected to VCCIO2.
+        vccio3-supply:
+          description: The supply connected to VCCIO3.
+        vccio4-supply:
+          description: The supply connected to VCCIO4.
+
+  rk3288:
+    if:
+      properties:
+        compatible:
+          contains:
+            const: rockchip,rk3288-io-voltage-domain
+
+    then:
+      properties:
+        audio-supply:
+          description: The supply connected to APIO4_VDD.
+        bb-supply:
+          description: The supply connected to APIO5_VDD.
+        dvp-supply:
+          description: The supply connected to DVPIO_VDD.
+        flash0-supply:
+          description: The supply connected to FLASH0_VDD. Typically for eMMC.
+        flash1-supply:
+          description: The supply connected to FLASH1_VDD. Also known as SDIO1.
+        gpio30-supply:
+          description: The supply connected to APIO1_VDD.
+        gpio1830-supply:
+          description: The supply connected to APIO2_VDD.
+        lcdc-supply:
+          description: The supply connected to LCDC_VDD.
+        sdcard-supply:
+          description: The supply connected to SDMMC0_VDD.
+        wifi-supply:
+          description: The supply connected to APIO3_VDD. Also known as SDIO0.
+
+  rk3328:
+    if:
+      properties:
+        compatible:
+          contains:
+            const: rockchip,rk3328-io-voltage-domain
+
+    then:
+      properties:
+        vccio1-supply:
+          description: The supply connected to VCCIO1.
+        vccio2-supply:
+          description: The supply connected to VCCIO2.
+        vccio3-supply:
+          description: The supply connected to VCCIO3.
+        vccio4-supply:
+          description: The supply connected to VCCIO4.
+        vccio5-supply:
+          description: The supply connected to VCCIO5.
+        vccio6-supply:
+          description: The supply connected to VCCIO6.
+        pmuio-supply:
+          description: The supply connected to VCCIO_PMU.
+
+  rk3368:
+    if:
+      properties:
+        compatible:
+          contains:
+            const: rockchip,rk3368-io-voltage-domain
+
+    then:
+      properties:
+        audio-supply:
+          description: The supply connected to APIO3_VDD.
+        dvp-supply:
+          description: The supply connected to DVPIO_VDD.
+        flash0-supply:
+          description: The supply connected to FLASH0_VDD. Typically for eMMC.
+        gpio30-supply:
+          description: The supply connected to APIO1_VDD.
+        gpio1830-supply:
+          description: The supply connected to APIO4_VDD.
+        sdcard-supply:
+          description: The supply connected to SDMMC0_VDD.
+        wifi-supply:
+          description: The supply connected to APIO2_VDD. Also known as SDIO0.
+
+  rk3368-pmu:
+    if:
+      properties:
+        compatible:
+          contains:
+            const: rockchip,rk3368-pmu-io-voltage-domain
+
+    then:
+      properties:
+        pmu-supply:
+          description: The supply connected to PMUIO_VDD.
+        vop-supply:
+          description: The supply connected to LCDC_VDD.
+
+  rk3399:
+    if:
+      properties:
+        compatible:
+          contains:
+            const: rockchip,rk3399-io-voltage-domain
+
+    then:
+      properties:
+        audio-supply:
+          description: The supply connected to APIO5_VDD.
+        bt656-supply:
+          description: The supply connected to APIO2_VDD.
+        gpio1830-supply:
+          description: The supply connected to APIO4_VDD.
+        sdmmc-supply:
+          description: The supply connected to SDMMC0_VDD.
+
+  rk3399-pmu:
+    if:
+      properties:
+        compatible:
+          contains:
+            const: rockchip,rk3399-pmu-io-voltage-domain
+
+    then:
+      properties:
+        pmu1830-supply:
+          description: The supply connected to PMUIO2_VDD.
+
+  rk3568-pmu:
+    if:
+      properties:
+        compatible:
+          contains:
+            const: rockchip,rk3568-pmu-io-voltage-domain
+
+    then:
+      properties:
+        pmuio1-supply:
+          description: The supply connected to PMUIO1.
+        pmuio2-supply:
+          description: The supply connected to PMUIO2.
+        vccio1-supply:
+          description: The supply connected to VCCIO1.
+        vccio2-supply:
+          description: The supply connected to VCCIO2.
+        vccio3-supply:
+          description: The supply connected to VCCIO3.
+        vccio4-supply:
+          description: The supply connected to VCCIO4.
+        vccio5-supply:
+          description: The supply connected to VCCIO5.
+        vccio6-supply:
+          description: The supply connected to VCCIO6.
+        vccio7-supply:
+          description: The supply connected to VCCIO7.
+
+  rv1108:
+    if:
+      properties:
+        compatible:
+          contains:
+            const: rockchip,rv1108-io-voltage-domain
+
+    then:
+      properties:
+        vccio1-supply:
+          description: The supply connected to APIO1_VDD.
+        vccio2-supply:
+          description: The supply connected to APIO2_VDD.
+        vccio3-supply:
+          description: The supply connected to APIO3_VDD.
+        vccio5-supply:
+          description: The supply connected to APIO5_VDD.
+        vccio6-supply:
+          description: The supply connected to APIO6_VDD.
+
+  rv1108-pmu:
+    if:
+      properties:
+        compatible:
+          contains:
+            const: rockchip,rv1108-pmu-io-voltage-domain
+
+    then:
+      properties:
+        pmu-supply:
+          description: The supply connected to PMUIO_VDD.
+
+examples:
+  - |
+    io-domains {
+        compatible = "rockchip,rk3288-io-voltage-domain";
+        audio-supply = <&vcc18_codec>;
+        bb-supply = <&vcc33_io>;
+        dvp-supply = <&vcc_18>;
+        flash0-supply = <&vcc18_flashio>;
+        gpio1830-supply = <&vcc33_io>;
+        gpio30-supply = <&vcc33_pmuio>;
+        lcdc-supply = <&vcc33_lcd>;
+        sdcard-supply = <&vccio_sd>;
+        wifi-supply = <&vcc18_wl>;
+    };
+5
Documentation/devicetree/bindings/reset/qcom,aoss-reset.yaml
···
           - const: "qcom,sc7180-aoss-cc"
           - const: "qcom,sdm845-aoss-cc"
 
+      - description: on SC7280 SoCs the following compatibles must be specified
+        items:
+          - const: "qcom,sc7280-aoss-cc"
+          - const: "qcom,sdm845-aoss-cc"
+
       - description: on SDM845 SoCs the following compatibles must be specified
         items:
           - const: "qcom,sdm845-aoss-cc"
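The hunk above only fixes the compatible list; as a hedged sketch, an SC7280 node using it might look like the following (node name, unit address and register size are placeholders, not taken from this diff):

```dts
aoss_reset: reset-controller@c2a0000 {
	/* Address and size are illustrative placeholders */
	compatible = "qcom,sc7280-aoss-cc", "qcom,sdm845-aoss-cc";
	reg = <0xc2a0000 0x31000>;
	#reset-cells = <1>;
};
```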
+4
Documentation/devicetree/bindings/reset/qcom,pdc-global.yaml
···
           - const: "qcom,sc7180-pdc-global"
           - const: "qcom,sdm845-pdc-global"
 
+      - description: on SC7280 SoCs the following compatibles must be specified
+        items:
+          - const: "qcom,sc7280-pdc-global"
+
       - description: on SDM845 SoCs the following compatibles must be specified
         items:
           - const: "qcom,sdm845-pdc-global"
+65
Documentation/devicetree/bindings/reset/renesas,rzg2l-usbphy-ctrl.yaml
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/reset/renesas,rzg2l-usbphy-ctrl.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Renesas RZ/G2L USBPHY Control
+
+maintainers:
+  - Biju Das <biju.das.jz@bp.renesas.com>
+
+description:
+  The RZ/G2L USBPHY Control mainly controls reset and power down of the
+  USB/PHY.
+
+properties:
+  compatible:
+    items:
+      - enum:
+          - renesas,r9a07g044-usbphy-ctrl # RZ/G2{L,LC}
+      - const: renesas,rzg2l-usbphy-ctrl
+
+  reg:
+    maxItems: 1
+
+  clocks:
+    maxItems: 1
+
+  resets:
+    maxItems: 1
+
+  power-domains:
+    maxItems: 1
+
+  '#reset-cells':
+    const: 1
+    description: |
+      The phandle's argument in the reset specifier is the PHY reset associated
+      with the USB port.
+      0 = Port 1 Phy reset
+      1 = Port 2 Phy reset
+
+required:
+  - compatible
+  - reg
+  - clocks
+  - resets
+  - power-domains
+  - '#reset-cells'
+
+additionalProperties: false
+
+examples:
+  - |
+    #include <dt-bindings/clock/r9a07g044-cpg.h>
+
+    phyrst: usbphy-ctrl@11c40000 {
+        compatible = "renesas,r9a07g044-usbphy-ctrl",
+                     "renesas,rzg2l-usbphy-ctrl";
+        reg = <0x11c40000 0x10000>;
+        clocks = <&cpg CPG_MOD R9A07G044_USB_PCLK>;
+        resets = <&cpg R9A07G044_USB_PRESETN>;
+        power-domains = <&cpg>;
+        #reset-cells = <1>;
+    };
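Per the #reset-cells description in the binding above, a consumer selects the per-port PHY reset by index. A hedged sketch of such a consumer (the PHY node itself is hypothetical; only the resets reference follows from the binding):

```dts
usb_phy1: usb-phy@11c50200 {
	/* Hypothetical PHY node; index 0 selects the Port 1 PHY reset
	 * from the phyrst controller defined in the binding's example. */
	resets = <&phyrst 0>;
};
```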
+88
Documentation/devicetree/bindings/reset/socionext,uniphier-glue-reset.yaml
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/reset/socionext,uniphier-glue-reset.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Socionext UniPhier peripheral core reset in glue layer
+
+description: |
+  Some peripheral core reset belongs to its own glue layer. Before using
+  this core reset, it is necessary to control the clocks and resets to
+  enable this layer. These clocks and resets should be described in each
+  property.
+
+maintainers:
+  - Kunihiko Hayashi <hayashi.kunihiko@socionext.com>
+
+properties:
+  compatible:
+    enum:
+      - socionext,uniphier-pro4-usb3-reset
+      - socionext,uniphier-pro5-usb3-reset
+      - socionext,uniphier-pxs2-usb3-reset
+      - socionext,uniphier-ld20-usb3-reset
+      - socionext,uniphier-pxs3-usb3-reset
+      - socionext,uniphier-pro4-ahci-reset
+      - socionext,uniphier-pxs2-ahci-reset
+      - socionext,uniphier-pxs3-ahci-reset
+
+  reg:
+    maxItems: 1
+
+  "#reset-cells":
+    const: 1
+
+  clocks:
+    minItems: 1
+    maxItems: 2
+
+  clock-names:
+    oneOf:
+      - items:          # for Pro4, Pro5
+          - const: gio
+          - const: link
+      - items:          # for others
+          - const: link
+
+  resets:
+    minItems: 1
+    maxItems: 2
+
+  reset-names:
+    oneOf:
+      - items:          # for Pro4, Pro5
+          - const: gio
+          - const: link
+      - items:          # for others
+          - const: link
+
+additionalProperties: false
+
+required:
+  - compatible
+  - reg
+  - "#reset-cells"
+  - clocks
+  - clock-names
+  - resets
+  - reset-names
+
+examples:
+  - |
+    usb-glue@65b00000 {
+        compatible = "simple-mfd";
+        #address-cells = <1>;
+        #size-cells = <1>;
+        ranges = <0 0x65b00000 0x400>;
+
+        usb_rst: reset@0 {
+            compatible = "socionext,uniphier-ld20-usb3-reset";
+            reg = <0x0 0x4>;
+            #reset-cells = <1>;
+            clock-names = "link";
+            clocks = <&sys_clk 14>;
+            reset-names = "link";
+            resets = <&sys_rst 14>;
+        };
+    };
-61
Documentation/devicetree/bindings/reset/uniphier-reset.txt
···
-UniPhier glue reset controller
-
-
-Peripheral core reset in glue layer
------------------------------------
-
-Some peripheral core reset belongs to its own glue layer. Before using
-this core reset, it is necessary to control the clocks and resets to enable
-this layer. These clocks and resets should be described in each property.
-
-Required properties:
-- compatible: Should be
-    "socionext,uniphier-pro4-usb3-reset" - for Pro4 SoC USB3
-    "socionext,uniphier-pro5-usb3-reset" - for Pro5 SoC USB3
-    "socionext,uniphier-pxs2-usb3-reset" - for PXs2 SoC USB3
-    "socionext,uniphier-ld20-usb3-reset" - for LD20 SoC USB3
-    "socionext,uniphier-pxs3-usb3-reset" - for PXs3 SoC USB3
-    "socionext,uniphier-pro4-ahci-reset" - for Pro4 SoC AHCI
-    "socionext,uniphier-pxs2-ahci-reset" - for PXs2 SoC AHCI
-    "socionext,uniphier-pxs3-ahci-reset" - for PXs3 SoC AHCI
-- #reset-cells: Should be 1.
-- reg: Specifies offset and length of the register set for the device.
-- clocks: A list of phandles to the clock gate for the glue layer.
-	According to the clock-names, appropriate clocks are required.
-- clock-names: Should contain
-    "gio", "link" - for Pro4 and Pro5 SoCs
-    "link" - for others
-- resets: A list of phandles to the reset control for the glue layer.
-	According to the reset-names, appropriate resets are required.
-- reset-names: Should contain
-    "gio", "link" - for Pro4 and Pro5 SoCs
-    "link" - for others
-
-Example:
-
-	usb-glue@65b00000 {
-		compatible = "socionext,uniphier-ld20-dwc3-glue",
-			     "simple-mfd";
-		#address-cells = <1>;
-		#size-cells = <1>;
-		ranges = <0 0x65b00000 0x400>;
-
-		usb_rst: reset@0 {
-			compatible = "socionext,uniphier-ld20-usb3-reset";
-			reg = <0x0 0x4>;
-			#reset-cells = <1>;
-			clock-names = "link";
-			clocks = <&sys_clk 14>;
-			reset-names = "link";
-			resets = <&sys_rst 14>;
-		};
-
-		regulator {
-			...
-		};
-
-		phy {
-			...
-		};
-		...
-	};
-87
Documentation/devicetree/bindings/soc/qcom/qcom,aoss-qmp.txt
···
-Qualcomm Always-On Subsystem side channel binding
-
-This binding describes the hardware component responsible for side channel
-requests to the always-on subsystem (AOSS), used for certain power management
-requests that is not handled by the standard RPMh interface. Each client in the
-SoC has it's own block of message RAM and IRQ for communication with the AOSS.
-The protocol used to communicate in the message RAM is known as Qualcomm
-Messaging Protocol (QMP)
-
-The AOSS side channel exposes control over a set of resources, used to control
-a set of debug related clocks and to affect the low power state of resources
-related to the secondary subsystems. These resources are exposed as a set of
-power-domains.
-
-- compatible:
-	Usage: required
-	Value type: <string>
-	Definition: must be one of:
-		    "qcom,sc7180-aoss-qmp"
-		    "qcom,sc7280-aoss-qmp"
-		    "qcom,sdm845-aoss-qmp"
-		    "qcom,sm8150-aoss-qmp"
-		    "qcom,sm8250-aoss-qmp"
-		    "qcom,sm8350-aoss-qmp"
-
-- reg:
-	Usage: required
-	Value type: <prop-encoded-array>
-	Definition: the base address and size of the message RAM for this
-		    client's communication with the AOSS
-
-- interrupts:
-	Usage: required
-	Value type: <prop-encoded-array>
-	Definition: should specify the AOSS message IRQ for this client
-
-- mboxes:
-	Usage: required
-	Value type: <prop-encoded-array>
-	Definition: reference to the mailbox representing the outgoing doorbell
-		    in APCS for this client, as described in mailbox/mailbox.txt
-
-- #clock-cells:
-	Usage: optional
-	Value type: <u32>
-	Definition: must be 0
-		    The single clock represents the QDSS clock.
-
-- #power-domain-cells:
-	Usage: optional
-	Value type: <u32>
-	Definition: must be 1
-		    The provided power-domains are:
-		    CDSP state (0), LPASS state (1), modem state (2), SLPI
-		    state (3), SPSS state (4) and Venus state (5).
-
-= SUBNODES
-The AOSS side channel also provides the controls for three cooling devices,
-these are expressed as subnodes of the QMP node. The name of the node is used
-to identify the resource and must therefor be "cx", "mx" or "ebi".
-
-- #cooling-cells:
-	Usage: optional
-	Value type: <u32>
-	Definition: must be 2
-
-= EXAMPLE
-
-The following example represents the AOSS side-channel message RAM and the
-mechanism exposing the power-domains, as found in SDM845.
-
-	aoss_qmp: qmp@c300000 {
-		compatible = "qcom,sdm845-aoss-qmp";
-		reg = <0x0c300000 0x100000>;
-		interrupts = <GIC_SPI 389 IRQ_TYPE_EDGE_RISING>;
-		mboxes = <&apss_shared 0>;
-
-		#power-domain-cells = <1>;
-
-		cx_cdev: cx {
-			#cooling-cells = <2>;
-		};
-
-		mx_cdev: mx {
-			#cooling-cells = <2>;
-		};
-	};
+114
Documentation/devicetree/bindings/soc/qcom/qcom,aoss-qmp.yaml
···
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/soc/qcom/qcom,aoss-qmp.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Qualcomm Always-On Subsystem side channel binding
+
+maintainers:
+  - Bjorn Andersson <bjorn.andersson@linaro.org>
+
+description:
+  This binding describes the hardware component responsible for side channel
+  requests to the always-on subsystem (AOSS), used for certain power management
+  requests that is not handled by the standard RPMh interface. Each client in the
+  SoC has it's own block of message RAM and IRQ for communication with the AOSS.
+  The protocol used to communicate in the message RAM is known as Qualcomm
+  Messaging Protocol (QMP)
+
+  The AOSS side channel exposes control over a set of resources, used to control
+  a set of debug related clocks and to affect the low power state of resources
+  related to the secondary subsystems. These resources are exposed as a set of
+  power-domains.
+
+properties:
+  compatible:
+    items:
+      - enum:
+          - qcom,sc7180-aoss-qmp
+          - qcom,sc7280-aoss-qmp
+          - qcom,sc8180x-aoss-qmp
+          - qcom,sdm845-aoss-qmp
+          - qcom,sm8150-aoss-qmp
+          - qcom,sm8250-aoss-qmp
+          - qcom,sm8350-aoss-qmp
+      - const: qcom,aoss-qmp
+
+  reg:
+    maxItems: 1
+    description:
+      The base address and size of the message RAM for this client's
+      communication with the AOSS
+
+  interrupts:
+    maxItems: 1
+    description:
+      Should specify the AOSS message IRQ for this client
+
+  mboxes:
+    maxItems: 1
+    description:
+      Reference to the mailbox representing the outgoing doorbell in APCS for
+      this client, as described in mailbox/mailbox.txt
+
+  "#clock-cells":
+    const: 0
+    description:
+      The single clock represents the QDSS clock.
+
+  "#power-domain-cells":
+    const: 1
+    description: |
+      The provided power-domains are:
+      CDSP state (0), LPASS state (1), modem state (2), SLPI
+      state (3), SPSS state (4) and Venus state (5).
+
+required:
+  - compatible
+  - reg
+  - interrupts
+  - mboxes
+  - "#clock-cells"
+
+additionalProperties: false
+
+patternProperties:
+  "^(cx|mx|ebi)$":
+    type: object
+    description:
+      The AOSS side channel also provides the controls for three cooling devices,
+      these are expressed as subnodes of the QMP node. The name of the node is
+      used to identify the resource and must therefor be "cx", "mx" or "ebi".
+
+    properties:
+      "#cooling-cells":
+        const: 2
+
+    required:
+      - "#cooling-cells"
+
+    additionalProperties: false
+
+examples:
+  - |
+    #include <dt-bindings/interrupt-controller/arm-gic.h>
+
+    aoss_qmp: qmp@c300000 {
+        compatible = "qcom,sdm845-aoss-qmp", "qcom,aoss-qmp";
+        reg = <0x0c300000 0x100000>;
+        interrupts = <GIC_SPI 389 IRQ_TYPE_EDGE_RISING>;
+        mboxes = <&apss_shared 0>;
+
+        #clock-cells = <0>;
+        #power-domain-cells = <1>;
+
+        cx_cdev: cx {
+            #cooling-cells = <2>;
+        };
+
+        mx_cdev: mx {
+            #cooling-cells = <2>;
+        };
+    };
+...
+3
Documentation/devicetree/bindings/soc/qcom/qcom,geni-se.yaml
···
   interconnect-names:
     const: qup-core
 
+  iommus:
+    maxItems: 1
+
 required:
   - compatible
   - reg
+1
Documentation/devicetree/bindings/soc/qcom/qcom,smd-rpm.yaml
···
           - qcom,rpm-msm8996
           - qcom,rpm-msm8998
           - qcom,rpm-sdm660
+          - qcom,rpm-sm6115
           - qcom,rpm-sm6125
           - qcom,rpm-qcs404
 
+13 -6
Documentation/devicetree/bindings/soc/rockchip/grf.yaml
···
       - items:
           - enum:
               - rockchip,rk3288-sgrf
-              - rockchip,rv1108-pmugrf
               - rockchip,rv1108-usbgrf
           - const: syscon
       - items:
···
               - rockchip,rk3568-grf
               - rockchip,rk3568-pmugrf
               - rockchip,rv1108-grf
+              - rockchip,rv1108-pmugrf
           - const: syscon
           - const: simple-mfd
 
···
         compatible:
           contains:
             enum:
-              - rockchip,px30-pmugrf
               - rockchip,px30-grf
+              - rockchip,px30-pmugrf
+              - rockchip,rk3188-grf
               - rockchip,rk3228-grf
               - rockchip,rk3288-grf
               - rockchip,rk3328-grf
-              - rockchip,rk3368-pmugrf
               - rockchip,rk3368-grf
-              - rockchip,rk3399-pmugrf
+              - rockchip,rk3368-pmugrf
               - rockchip,rk3399-grf
+              - rockchip,rk3399-pmugrf
+              - rockchip,rk3568-pmugrf
+              - rockchip,rv1108-grf
+              - rockchip,rv1108-pmugrf
 
     then:
       properties:
         io-domains:
-          description:
-            Documentation/devicetree/bindings/power/rockchip-io-domain.txt
+          type: object
+
+          $ref: "/schemas/power/rockchip-io-domain.yaml#"
+
+          unevaluatedProperties: false
 
 examples:
   - |
+28 -13
Documentation/devicetree/bindings/soc/ti/ti,pruss.yaml
···
       - ti,k2g-pruss       # for 66AK2G SoC family
       - ti,am654-icssg     # for K3 AM65x SoC family
       - ti,j721e-icssg     # for K3 J721E SoC family
+      - ti,am642-icssg     # for K3 AM64x SoC family
 
   reg:
     maxItems: 1
···
   dma-ranges:
     maxItems: 1
+
+  dma-coherent: true
 
   power-domains:
     description: |
···
       description: |
         Industrial Ethernet Peripheral to manage/generate Industrial Ethernet
         functions such as time stamping. Each PRUSS has either 1 IEP (on AM335x,
-        AM437x, AM57xx & 66AK2G SoCs) or 2 IEPs (on K3 AM65x & J721E SoCs ). IEP
-        is used for creating PTP clocks and generating PPS signals.
+        AM437x, AM57xx & 66AK2G SoCs) or 2 IEPs (on K3 AM65x, J721E & AM64x SoCs).
+        IEP is used for creating PTP clocks and generating PPS signals.
 
       type: object
···
 # - interrupt-controller
 # - pru
 
-if:
-  properties:
-    compatible:
-      contains:
-        enum:
-          - ti,k2g-pruss
-          - ti,am654-icssg
-          - ti,j721e-icssg
-then:
-  required:
-    - power-domains
+allOf:
+  - if:
+      properties:
+        compatible:
+          contains:
+            enum:
+              - ti,k2g-pruss
+              - ti,am654-icssg
+              - ti,j721e-icssg
+              - ti,am642-icssg
+    then:
+      required:
+        - power-domains
+
+  - if:
+      properties:
+        compatible:
+          contains:
+            enum:
+              - ti,k2g-pruss
+    then:
+      required:
+        - dma-coherent
 
 examples:
   - |
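The ti,pruss.yaml hunk above replaces a single top-level if:/then: with an allOf: list so two independent conditions can coexist: every listed ICSSG/PRUSS variant must declare power-domains, and the 66AK2G instance must additionally be marked dma-coherent. A rough sketch of how those two conditions combine, modeled in plain Python rather than dt-schema (the `missing_required` helper and the node dicts are illustrative stand-ins, not part of the patch):

```python
# Simplified model of the two if:/then: blocks the patch folds into allOf:.
# Compatible strings and required properties are taken from the diff.
NEEDS_POWER_DOMAINS = {
    "ti,k2g-pruss", "ti,am654-icssg", "ti,j721e-icssg", "ti,am642-icssg",
}
NEEDS_DMA_COHERENT = {"ti,k2g-pruss"}

def missing_required(node):
    """Return the conditionally required properties this node lacks."""
    compatibles = set(node.get("compatible", []))
    missing = []
    if compatibles & NEEDS_POWER_DOMAINS and "power-domains" not in node:
        missing.append("power-domains")
    if compatibles & NEEDS_DMA_COHERENT and "dma-coherent" not in node:
        missing.append("dma-coherent")
    return missing

# A 66AK2G PRUSS node is caught by both conditions...
assert missing_required({"compatible": ["ti,k2g-pruss"]}) == \
    ["power-domains", "dma-coherent"]
# ...while an AM64x ICSSG only needs power-domains.
assert missing_required({"compatible": ["ti,am642-icssg"],
                         "power-domains": "<&k3_pds 81>"}) == []
```

Each entry in a dt-schema allOf: is evaluated independently, which is why the patch can add the dma-coherent requirement for ti,k2g-pruss without disturbing the existing power-domains rule.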
+4 -1
MAINTAINERS
···
 M:	Naga Sureshkumar Relli <nagasure@xilinx.com>
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S:	Maintained
-F:	Documentation/devicetree/bindings/mtd/arm,pl353-smc.yaml
+F:	Documentation/devicetree/bindings/memory-controllers/arm,pl353-smc.yaml
 F:	drivers/memory/pl353-smc.c
 
 ARM PRIMECELL CLCD PL110 DRIVER
···
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S:	Maintained
 F:	Documentation/devicetree/bindings/arm/intel-ixp4xx.yaml
+F:	Documentation/devicetree/bindings/bus/intel,ixp4xx-expansion-bus-controller.yaml
 F:	Documentation/devicetree/bindings/gpio/intel,ixp4xx-gpio.txt
 F:	Documentation/devicetree/bindings/interrupt-controller/intel,ixp4xx-interrupt.yaml
 F:	Documentation/devicetree/bindings/timer/intel,ixp4xx-timer.yaml
 F:	arch/arm/mach-ixp4xx/
+F:	drivers/bus/intel-ixp4xx-eb.c
 F:	drivers/clocksource/timer-ixp4xx.c
 F:	drivers/crypto/ixp4xx_crypto.c
 F:	drivers/gpio/gpio-ixp4xx.c
···
 F:	drivers/reset/reset-scmi.c
 F:	include/linux/sc[mp]i_protocol.h
 F:	include/trace/events/scmi.h
+F:	include/uapi/linux/virtio_scmi.h
 
 SYSTEM RESET/SHUTDOWN DRIVERS
 M:	Sebastian Reichel <sre@kernel.org>
+1 -1
arch/arm/boot/dts/imx6q.dtsi
···
 				clocks = <&clks IMX6Q_CLK_ECSPI5>,
 					 <&clks IMX6Q_CLK_ECSPI5>;
 				clock-names = "ipg", "per";
-				dmas = <&sdma 11 8 1>, <&sdma 12 8 2>;
+				dmas = <&sdma 11 7 1>, <&sdma 12 7 2>;
 				dma-names = "rx", "tx";
 				status = "disabled";
 			};
+4 -4
arch/arm/boot/dts/imx6qdl.dtsi
···
 				clocks = <&clks IMX6QDL_CLK_ECSPI1>,
 					 <&clks IMX6QDL_CLK_ECSPI1>;
 				clock-names = "ipg", "per";
-				dmas = <&sdma 3 8 1>, <&sdma 4 8 2>;
+				dmas = <&sdma 3 7 1>, <&sdma 4 7 2>;
 				dma-names = "rx", "tx";
 				status = "disabled";
 			};
···
 				clocks = <&clks IMX6QDL_CLK_ECSPI2>,
 					 <&clks IMX6QDL_CLK_ECSPI2>;
 				clock-names = "ipg", "per";
-				dmas = <&sdma 5 8 1>, <&sdma 6 8 2>;
+				dmas = <&sdma 5 7 1>, <&sdma 6 7 2>;
 				dma-names = "rx", "tx";
 				status = "disabled";
 			};
···
 				clocks = <&clks IMX6QDL_CLK_ECSPI3>,
 					 <&clks IMX6QDL_CLK_ECSPI3>;
 				clock-names = "ipg", "per";
-				dmas = <&sdma 7 8 1>, <&sdma 8 8 2>;
+				dmas = <&sdma 7 7 1>, <&sdma 8 7 2>;
 				dma-names = "rx", "tx";
 				status = "disabled";
 			};
···
 				clocks = <&clks IMX6QDL_CLK_ECSPI4>,
 					 <&clks IMX6QDL_CLK_ECSPI4>;
 				clock-names = "ipg", "per";
-				dmas = <&sdma 9 8 1>, <&sdma 10 8 2>;
+				dmas = <&sdma 9 7 1>, <&sdma 10 7 2>;
 				dma-names = "rx", "tx";
 				status = "disabled";
 			};
-5
arch/arm/mach-omap2/pm34xx.c
···
 #include <linux/delay.h>
 #include <linux/slab.h>
 #include <linux/of.h>
-#include <linux/omap-gpmc.h>
 
 #include <trace/events/power.h>
 
···
 
 	/* Save the Interrupt controller context */
 	omap_intc_save_context();
-	/* Save the GPMC context */
-	omap3_gpmc_save_context();
 	/* Save the system control module context, padconf already save above*/
 	omap3_control_save_context();
 }
···
 {
 	/* Restore the control module context, padconf restored by h/w */
 	omap3_control_restore_context();
-	/* Restore the GPMC context */
-	omap3_gpmc_restore_context();
 	/* Restore the interrupt controller context */
 	omap_intc_restore_context();
 }
+1 -1
arch/arm/mach-tegra/pm.c
···
 	.enter = tegra_suspend_enter,
 };
 
-void __init tegra_init_suspend(void)
+void tegra_pm_init_suspend(void)
 {
 	enum tegra_suspend_mode mode = tegra_pmc_get_suspend_mode();
 
-6
arch/arm/mach-tegra/pm.h
···
 
 extern void (*tegra_tear_down_cpu)(void);
 
-#ifdef CONFIG_PM_SLEEP
-void tegra_init_suspend(void);
-#else
-static inline void tegra_init_suspend(void) {}
-#endif
-
 #endif /* _MACH_TEGRA_PM_H_ */
-2
arch/arm/mach-tegra/tegra.c
···
 
 static void __init tegra_dt_init_late(void)
 {
-	tegra_init_suspend();
-
 	if (IS_ENABLED(CONFIG_ARCH_TEGRA_2x_SOC) &&
 	    of_machine_is_compatible("compal,paz00"))
 		tegra_paz00_wifikill_init();
+183 -77
drivers/ata/pata_ixp4xx_cf.c
···
  */
 
 #include <linux/kernel.h>
+#include <linux/mfd/syscon.h>
 #include <linux/module.h>
 #include <linux/libata.h>
 #include <linux/irq.h>
 #include <linux/platform_device.h>
-#include <linux/platform_data/pata_ixp4xx_cf.h>
+#include <linux/regmap.h>
 #include <scsi/scsi_host.h>
 
 #define DRV_NAME	"pata_ixp4xx_cf"
-#define DRV_VERSION	"0.2"
+#define DRV_VERSION	"1.0"
 
-static int ixp4xx_set_mode(struct ata_link *link, struct ata_device **error)
+struct ixp4xx_pata {
+	struct ata_host *host;
+	struct regmap *rmap;
+	u32 cmd_csreg;
+	void __iomem *cmd;
+	void __iomem *ctl;
+};
+
+#define IXP4XX_EXP_TIMING_STRIDE	0x04
+/* The timings for the chipselect is in bits 29..16 */
+#define IXP4XX_EXP_T1_T5_MASK	GENMASK(29, 16)
+#define IXP4XX_EXP_PIO_0_8	0x0a470000
+#define IXP4XX_EXP_PIO_1_8	0x06430000
+#define IXP4XX_EXP_PIO_2_8	0x02410000
+#define IXP4XX_EXP_PIO_3_8	0x00820000
+#define IXP4XX_EXP_PIO_4_8	0x00400000
+#define IXP4XX_EXP_PIO_0_16	0x29640000
+#define IXP4XX_EXP_PIO_1_16	0x05030000
+#define IXP4XX_EXP_PIO_2_16	0x00b20000
+#define IXP4XX_EXP_PIO_3_16	0x00820000
+#define IXP4XX_EXP_PIO_4_16	0x00400000
+#define IXP4XX_EXP_BW_MASK	(BIT(6)|BIT(0))
+#define IXP4XX_EXP_BYTE_RD16	BIT(6) /* Byte reads on half-word devices */
+#define IXP4XX_EXP_BYTE_EN	BIT(0) /* Use 8bit data bus if set */
+
+static void ixp4xx_set_8bit_timing(struct ixp4xx_pata *ixpp, u8 pio_mode)
 {
-	struct ata_device *dev;
-
-	ata_for_each_dev(dev, link, ENABLED) {
-		ata_dev_info(dev, "configured for PIO0\n");
-		dev->pio_mode = XFER_PIO_0;
-		dev->xfer_mode = XFER_PIO_0;
-		dev->xfer_shift = ATA_SHIFT_PIO;
-		dev->flags |= ATA_DFLAG_PIO;
+	switch (pio_mode) {
+	case XFER_PIO_0:
+		regmap_update_bits(ixpp->rmap, ixpp->cmd_csreg,
+				   IXP4XX_EXP_T1_T5_MASK, IXP4XX_EXP_PIO_0_8);
+		break;
+	case XFER_PIO_1:
+		regmap_update_bits(ixpp->rmap, ixpp->cmd_csreg,
+				   IXP4XX_EXP_T1_T5_MASK, IXP4XX_EXP_PIO_1_8);
+		break;
+	case XFER_PIO_2:
+		regmap_update_bits(ixpp->rmap, ixpp->cmd_csreg,
+				   IXP4XX_EXP_T1_T5_MASK, IXP4XX_EXP_PIO_2_8);
+		break;
+	case XFER_PIO_3:
+		regmap_update_bits(ixpp->rmap, ixpp->cmd_csreg,
+				   IXP4XX_EXP_T1_T5_MASK, IXP4XX_EXP_PIO_3_8);
+		break;
+	case XFER_PIO_4:
+		regmap_update_bits(ixpp->rmap, ixpp->cmd_csreg,
+				   IXP4XX_EXP_T1_T5_MASK, IXP4XX_EXP_PIO_4_8);
+		break;
+	default:
+		break;
 	}
-	return 0;
+	regmap_update_bits(ixpp->rmap, ixpp->cmd_csreg,
+			   IXP4XX_EXP_BW_MASK, IXP4XX_EXP_BYTE_RD16|IXP4XX_EXP_BYTE_EN);
 }
 
+static void ixp4xx_set_16bit_timing(struct ixp4xx_pata *ixpp, u8 pio_mode)
+{
+	switch (pio_mode){
+	case XFER_PIO_0:
+		regmap_update_bits(ixpp->rmap, ixpp->cmd_csreg,
+				   IXP4XX_EXP_T1_T5_MASK, IXP4XX_EXP_PIO_0_16);
+		break;
+	case XFER_PIO_1:
+		regmap_update_bits(ixpp->rmap, ixpp->cmd_csreg,
+				   IXP4XX_EXP_T1_T5_MASK, IXP4XX_EXP_PIO_1_16);
+		break;
+	case XFER_PIO_2:
+		regmap_update_bits(ixpp->rmap, ixpp->cmd_csreg,
+				   IXP4XX_EXP_T1_T5_MASK, IXP4XX_EXP_PIO_2_16);
+		break;
+	case XFER_PIO_3:
+		regmap_update_bits(ixpp->rmap, ixpp->cmd_csreg,
+				   IXP4XX_EXP_T1_T5_MASK, IXP4XX_EXP_PIO_3_16);
+		break;
+	case XFER_PIO_4:
+		regmap_update_bits(ixpp->rmap, ixpp->cmd_csreg,
+				   IXP4XX_EXP_T1_T5_MASK, IXP4XX_EXP_PIO_4_16);
+		break;
+	default:
+		break;
+	}
+	regmap_update_bits(ixpp->rmap, ixpp->cmd_csreg,
+			   IXP4XX_EXP_BW_MASK, IXP4XX_EXP_BYTE_RD16);
+}
+
+/* This sets up the timing on the chipselect CMD accordingly */
+static void ixp4xx_set_piomode(struct ata_port *ap, struct ata_device *adev)
+{
+	struct ixp4xx_pata *ixpp = ap->host->private_data;
+
+	ata_dev_printk(adev, KERN_INFO, "configured for PIO%d 8bit\n",
+		       adev->pio_mode - XFER_PIO_0);
+	ixp4xx_set_8bit_timing(ixpp, adev->pio_mode);
+}
+
+
 static unsigned int ixp4xx_mmio_data_xfer(struct ata_queued_cmd *qc,
-				unsigned char *buf, unsigned int buflen, int rw)
+					  unsigned char *buf, unsigned int buflen, int rw)
 {
 	unsigned int i;
 	unsigned int words = buflen >> 1;
 	u16 *buf16 = (u16 *) buf;
+	struct ata_device *adev = qc->dev;
 	struct ata_port *ap = qc->dev->link->ap;
 	void __iomem *mmio = ap->ioaddr.data_addr;
-	struct ixp4xx_pata_data *data = dev_get_platdata(ap->host->dev);
+	struct ixp4xx_pata *ixpp = ap->host->private_data;
+	unsigned long flags;
+
+	ata_dev_printk(adev, KERN_DEBUG, "%s %d bytes\n", (rw == READ) ? "READ" : "WRITE",
+		       buflen);
+	spin_lock_irqsave(ap->lock, flags);
 
 	/* set the expansion bus in 16bit mode and restore
 	 * 8 bit mode after the transaction.
 	 */
-	*data->cs0_cfg &= ~(0x01);
-	udelay(100);
+	ixp4xx_set_16bit_timing(ixpp, adev->pio_mode);
+	udelay(5);
 
 	/* Transfer multiple of 2 bytes */
 	if (rw == READ)
···
 		words++;
 	}
 
-	udelay(100);
-	*data->cs0_cfg |= 0x01;
+	ixp4xx_set_8bit_timing(ixpp, adev->pio_mode);
+	udelay(5);
+
+	spin_unlock_irqrestore(ap->lock, flags);
 
 	return words << 1;
 }
···
 	.inherits		= &ata_sff_port_ops,
 	.sff_data_xfer		= ixp4xx_mmio_data_xfer,
 	.cable_detect		= ata_cable_40wire,
-	.set_mode		= ixp4xx_set_mode,
+	.set_piomode		= ixp4xx_set_piomode,
+};
+
+static struct ata_port_info ixp4xx_port_info = {
+	.flags		= ATA_FLAG_NO_ATAPI,
+	.pio_mask	= ATA_PIO4,
+	.port_ops	= &ixp4xx_port_ops,
 };
 
 static void ixp4xx_setup_port(struct ata_port *ap,
-			      struct ixp4xx_pata_data *data,
-			      unsigned long raw_cs0, unsigned long raw_cs1)
+			      struct ixp4xx_pata *ixpp,
+			      unsigned long raw_cmd, unsigned long raw_ctl)
 {
 	struct ata_ioports *ioaddr = &ap->ioaddr;
-	unsigned long raw_cmd = raw_cs0;
-	unsigned long raw_ctl = raw_cs1 + 0x06;
 
-	ioaddr->cmd_addr	= data->cs0;
-	ioaddr->altstatus_addr	= data->cs1 + 0x06;
-	ioaddr->ctl_addr	= data->cs1 + 0x06;
+	raw_ctl += 0x06;
+	ioaddr->cmd_addr	= ixpp->cmd;
+	ioaddr->altstatus_addr	= ixpp->ctl + 0x06;
+	ioaddr->ctl_addr	= ixpp->ctl + 0x06;
 
 	ata_sff_std_ports(ioaddr);
 
-#ifndef __ARMEB__
+	if (!IS_ENABLED(CONFIG_CPU_BIG_ENDIAN)) {
+		/* adjust the addresses to handle the address swizzling of the
+		 * ixp4xx in little endian mode.
+		 */
 
-	/* adjust the addresses to handle the address swizzling of the
-	 * ixp4xx in little endian mode.
-	 */
+		*(unsigned long *)&ioaddr->data_addr ^= 0x02;
+		*(unsigned long *)&ioaddr->cmd_addr ^= 0x03;
+		*(unsigned long *)&ioaddr->altstatus_addr ^= 0x03;
+		*(unsigned long *)&ioaddr->ctl_addr ^= 0x03;
+		*(unsigned long *)&ioaddr->error_addr ^= 0x03;
+		*(unsigned long *)&ioaddr->feature_addr ^= 0x03;
+		*(unsigned long *)&ioaddr->nsect_addr ^= 0x03;
+		*(unsigned long *)&ioaddr->lbal_addr ^= 0x03;
+		*(unsigned long *)&ioaddr->lbam_addr ^= 0x03;
+		*(unsigned long *)&ioaddr->lbah_addr ^= 0x03;
+		*(unsigned long *)&ioaddr->device_addr ^= 0x03;
+		*(unsigned long *)&ioaddr->status_addr ^= 0x03;
+		*(unsigned long *)&ioaddr->command_addr ^= 0x03;
 
-	*(unsigned long *)&ioaddr->data_addr ^= 0x02;
-	*(unsigned long *)&ioaddr->cmd_addr ^= 0x03;
-	*(unsigned long *)&ioaddr->altstatus_addr ^= 0x03;
-	*(unsigned long *)&ioaddr->ctl_addr ^= 0x03;
-	*(unsigned long *)&ioaddr->error_addr ^= 0x03;
-	*(unsigned long *)&ioaddr->feature_addr ^= 0x03;
-	*(unsigned long *)&ioaddr->nsect_addr ^= 0x03;
-	*(unsigned long *)&ioaddr->lbal_addr ^= 0x03;
-	*(unsigned long *)&ioaddr->lbam_addr ^= 0x03;
-	*(unsigned long *)&ioaddr->lbah_addr ^= 0x03;
-	*(unsigned long *)&ioaddr->device_addr ^= 0x03;
-	*(unsigned long *)&ioaddr->status_addr ^= 0x03;
-	*(unsigned long *)&ioaddr->command_addr ^= 0x03;
-
-	raw_cmd ^= 0x03;
-	raw_ctl ^= 0x03;
-#endif
+		raw_cmd ^= 0x03;
+		raw_ctl ^= 0x03;
+	}
 
 	ata_port_desc(ap, "cmd 0x%lx ctl 0x%lx", raw_cmd, raw_ctl);
 }
 
 static int ixp4xx_pata_probe(struct platform_device *pdev)
 {
-	struct resource *cs0, *cs1;
-	struct ata_host *host;
-	struct ata_port *ap;
-	struct ixp4xx_pata_data *data = dev_get_platdata(&pdev->dev);
+	struct resource *cmd, *ctl;
+	struct ata_port_info pi = ixp4xx_port_info;
+	const struct ata_port_info *ppi[] = { &pi, NULL };
+	struct device *dev = &pdev->dev;
+	struct device_node *np = dev->of_node;
+	struct ixp4xx_pata *ixpp;
+	u32 csindex;
 	int ret;
 	int irq;
 
-	cs0 = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-	cs1 = platform_get_resource(pdev, IORESOURCE_MEM, 1);
+	cmd = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	ctl = platform_get_resource(pdev, IORESOURCE_MEM, 1);
 
-	if (!cs0 || !cs1)
+	if (!cmd || !ctl)
 		return -EINVAL;
 
-	/* allocate host */
-	host = ata_host_alloc(&pdev->dev, 1);
-	if (!host)
+	ixpp = devm_kzalloc(dev, sizeof(*ixpp), GFP_KERNEL);
+	if (!ixpp)
 		return -ENOMEM;
 
-	/* acquire resources and fill host */
-	ret = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(32));
+	ixpp->rmap = syscon_node_to_regmap(np->parent);
+	if (IS_ERR(ixpp->rmap))
+		return dev_err_probe(dev, PTR_ERR(ixpp->rmap), "no regmap\n");
+	/* Inspect our address to figure out what chipselect the CMD is on */
+	ret = of_property_read_u32_index(np, "reg", 0, &csindex);
+	if (ret)
+		return dev_err_probe(dev, ret, "can't inspect CMD address\n");
+	dev_info(dev, "using CS%d for PIO timing configuration\n", csindex);
+	ixpp->cmd_csreg = csindex * IXP4XX_EXP_TIMING_STRIDE;
+
+	ixpp->host = ata_host_alloc_pinfo(dev, ppi, 1);
+	if (!ixpp->host)
+		return -ENOMEM;
+	ixpp->host->private_data = ixpp;
+
+	ret = dma_set_coherent_mask(dev, DMA_BIT_MASK(32));
 	if (ret)
 		return ret;
 
-	data->cs0 = devm_ioremap(&pdev->dev, cs0->start, 0x1000);
-	data->cs1 = devm_ioremap(&pdev->dev, cs1->start, 0x1000);
-
-	if (!data->cs0 || !data->cs1)
+	ixpp->cmd = devm_ioremap_resource(dev, cmd);
+	ixpp->ctl = devm_ioremap_resource(dev, ctl);
+	if (IS_ERR(ixpp->cmd) || IS_ERR(ixpp->ctl))
 		return -ENOMEM;
 
 	irq = platform_get_irq(pdev, 0);
···
 	else
 		return -EINVAL;
 
-	/* Setup expansion bus chip selects */
-	*data->cs0_cfg = data->cs0_bits;
-	*data->cs1_cfg = data->cs1_bits;
+	/* Just one port to set up */
+	ixp4xx_setup_port(ixpp->host->ports[0], ixpp, cmd->start, ctl->start);
 
-	ap = host->ports[0];
+	ata_print_version_once(dev, DRV_VERSION);
 
-	ap->ops	= &ixp4xx_port_ops;
-	ap->pio_mask = ATA_PIO4;
-	ap->flags |= ATA_FLAG_NO_ATAPI;
-
-	ixp4xx_setup_port(ap, data, cs0->start, cs1->start);
-
-	ata_print_version_once(&pdev->dev, DRV_VERSION);
-
-	/* activate host */
-	return ata_host_activate(host, irq, ata_sff_interrupt, 0, &ixp4xx_sht);
+	return ata_host_activate(ixpp->host, irq, ata_sff_interrupt, 0, &ixp4xx_sht);
 }
+
+static const struct of_device_id ixp4xx_pata_of_match[] = {
+	{ .compatible = "intel,ixp4xx-compact-flash", },
+	{ },
+};
 
 static struct platform_driver ixp4xx_pata_platform_driver = {
 	.driver	 = {
 		.name   = DRV_NAME,
+		.of_match_table = ixp4xx_pata_of_match,
 	},
 	.probe		= ixp4xx_pata_probe,
 	.remove		= ata_platform_remove_one,
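The rewritten pata_ixp4xx_cf.c no longer pokes a bare chip-select config pointer; it performs read-modify-write updates on the expansion-bus timing register through a syscon regmap, touching only the T1..T5 timing field (bits 29..16) when flipping between 8-bit and 16-bit transfers. A userspace Python model of that `regmap_update_bits()` pattern, reusing the mask and timing constants from the diff (the dict-backed register file and its 0xBFFF3C43 reset value are hypothetical stand-ins for the real syscon):

```python
# Model of the read-modify-write that regmap_update_bits() performs on
# the IXP4xx expansion-bus chip-select timing register.
def genmask(hi, lo):
    """Same contiguous bitmask as the kernel's GENMASK(hi, lo)."""
    return ((1 << (hi - lo + 1)) - 1) << lo

IXP4XX_EXP_T1_T5_MASK = genmask(29, 16)   # timing field, bits 29..16
IXP4XX_EXP_PIO_0_8 = 0x0A470000           # 8-bit PIO0 timing from the diff
IXP4XX_EXP_PIO_0_16 = 0x29640000          # 16-bit PIO0 timing from the diff

def regmap_update_bits(regs, offset, mask, val):
    """Read-modify-write: change only the bits covered by mask."""
    regs[offset] = (regs[offset] & ~mask) | (val & mask)

regs = {0x00: 0xBFFF3C43}  # hypothetical boot-time CS0 configuration

# Switch CS0 to 16-bit PIO0 timing for a data transfer...
regmap_update_bits(regs, 0x00, IXP4XX_EXP_T1_T5_MASK, IXP4XX_EXP_PIO_0_16)
assert regs[0x00] & IXP4XX_EXP_T1_T5_MASK == IXP4XX_EXP_PIO_0_16
# ...then back to 8-bit timing; bits outside 29..16 are never disturbed.
regmap_update_bits(regs, 0x00, IXP4XX_EXP_T1_T5_MASK, IXP4XX_EXP_PIO_0_8)
assert regs[0x00] & ~IXP4XX_EXP_T1_T5_MASK == 0xBFFF3C43 & ~IXP4XX_EXP_T1_T5_MASK
```

Keeping the update confined to the masked field is what lets the driver share the register with the expansion-bus controller's own boot-time configuration.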
+11
drivers/bus/Kconfig
···
 	  The WEIM(Wireless External Interface Module) works like a bus.
 	  You can attach many different devices on it, such as NOR, onenand.
 
+config INTEL_IXP4XX_EB
+	bool "Intel IXP4xx expansion bus interface driver"
+	depends on HAS_IOMEM
+	depends on ARCH_IXP4XX || COMPILE_TEST
+	default ARCH_IXP4XX
+	select MFD_SYSCON
+	help
+	  Driver for the Intel IXP4xx expansion bus interface. The driver is
+	  needed to set up various chip select configuration parameters before
+	  devices on the expansion bus can be discovered.
+
 config MIPS_CDMM
 	bool "MIPS Common Device Memory Map (CDMM) Driver"
 	depends on CPU_MIPSR2 || CPU_MIPSR5
+1
drivers/bus/Makefile
···
 obj-$(CONFIG_BT1_APB)		+= bt1-apb.o
 obj-$(CONFIG_BT1_AXI)		+= bt1-axi.o
 obj-$(CONFIG_IMX_WEIM)		+= imx-weim.o
+obj-$(CONFIG_INTEL_IXP4XX_EB)	+= intel-ixp4xx-eb.o
 obj-$(CONFIG_MIPS_CDMM)		+= mips_cdmm.o
 obj-$(CONFIG_MVEBU_MBUS)	+= mvebu-mbus.o
 
+429
drivers/bus/intel-ixp4xx-eb.c
···
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Intel IXP4xx Expansion Bus Controller
+ * Copyright (C) 2021 Linaro Ltd.
+ *
+ * Author: Linus Walleij <linus.walleij@linaro.org>
+ */
+
+#include <linux/bitfield.h>
+#include <linux/bits.h>
+#include <linux/err.h>
+#include <linux/init.h>
+#include <linux/log2.h>
+#include <linux/mfd/syscon.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/of_platform.h>
+#include <linux/platform_device.h>
+#include <linux/regmap.h>
+
+#define IXP4XX_EXP_NUM_CS		8
+
+#define IXP4XX_EXP_TIMING_CS0		0x00
+#define IXP4XX_EXP_TIMING_CS1		0x04
+#define IXP4XX_EXP_TIMING_CS2		0x08
+#define IXP4XX_EXP_TIMING_CS3		0x0c
+#define IXP4XX_EXP_TIMING_CS4		0x10
+#define IXP4XX_EXP_TIMING_CS5		0x14
+#define IXP4XX_EXP_TIMING_CS6		0x18
+#define IXP4XX_EXP_TIMING_CS7		0x1c
+
+/* Bits inside each CS timing register */
+#define IXP4XX_EXP_TIMING_STRIDE	0x04
+#define IXP4XX_EXP_CS_EN		BIT(31)
+#define IXP456_EXP_PAR_EN		BIT(30) /* Only on IXP45x and IXP46x */
+#define IXP4XX_EXP_T1_MASK		GENMASK(29, 28)
+#define IXP4XX_EXP_T1_SHIFT		28
+#define IXP4XX_EXP_T2_MASK		GENMASK(27, 26)
+#define IXP4XX_EXP_T2_SHIFT		26
+#define IXP4XX_EXP_T3_MASK		GENMASK(25, 22)
+#define IXP4XX_EXP_T3_SHIFT		22
+#define IXP4XX_EXP_T4_MASK		GENMASK(21, 20)
+#define IXP4XX_EXP_T4_SHIFT		20
+#define IXP4XX_EXP_T5_MASK		GENMASK(19, 16)
+#define IXP4XX_EXP_T5_SHIFT		16
+#define IXP4XX_EXP_CYC_TYPE_MASK	GENMASK(15, 14)
+#define IXP4XX_EXP_CYC_TYPE_SHIFT	14
+#define IXP4XX_EXP_SIZE_MASK		GENMASK(13, 10)
+#define IXP4XX_EXP_SIZE_SHIFT		10
+#define IXP4XX_EXP_CNFG_0		BIT(9) /* Always zero */
+#define IXP43X_EXP_SYNC_INTEL		BIT(8) /* Only on IXP43x */
+#define IXP43X_EXP_EXP_CHIP		BIT(7) /* Only on IXP43x */
+#define IXP4XX_EXP_BYTE_RD16		BIT(6)
+#define IXP4XX_EXP_HRDY_POL		BIT(5) /* Only on IXP42x */
+#define IXP4XX_EXP_MUX_EN		BIT(4)
+#define IXP4XX_EXP_SPLT_EN		BIT(3)
+#define IXP4XX_EXP_WORD			BIT(2) /* Always zero */
+#define IXP4XX_EXP_WR_EN		BIT(1)
+#define IXP4XX_EXP_BYTE_EN		BIT(0)
+#define IXP42X_RESERVED			(BIT(30)|IXP4XX_EXP_CNFG_0|BIT(8)|BIT(7)|IXP4XX_EXP_WORD)
+#define IXP43X_RESERVED			(BIT(30)|IXP4XX_EXP_CNFG_0|BIT(5)|IXP4XX_EXP_WORD)
+
+#define IXP4XX_EXP_CNFG0		0x20
+#define IXP4XX_EXP_CNFG0_MEM_MAP	BIT(31)
+#define IXP4XX_EXP_CNFG1		0x24
+
+#define IXP4XX_EXP_BOOT_BASE		0x00000000
+#define IXP4XX_EXP_NORMAL_BASE		0x50000000
+#define IXP4XX_EXP_STRIDE		0x01000000
+
+/* Fuses on the IXP43x */
+#define IXP43X_EXP_UNIT_FUSE_RESET	0x28
+#define IXP43X_EXP_FUSE_SPEED_MASK	GENMASK(23, 22)
+
+/* Number of device tree values in "reg" */
+#define IXP4XX_OF_REG_SIZE		3
+
+struct ixp4xx_eb {
+	struct device *dev;
+	struct regmap *rmap;
+	u32 bus_base;
+	bool is_42x;
+	bool is_43x;
+};
+
+struct ixp4xx_exp_tim_prop {
+	const char *prop;
+	u32 max;
+	u32 mask;
+	u16 shift;
+};
+
+static const struct ixp4xx_exp_tim_prop ixp4xx_exp_tim_props[] = {
+	{
+		.prop = "intel,ixp4xx-eb-t1",
+		.max = 3,
+		.mask = IXP4XX_EXP_T1_MASK,
+		.shift = IXP4XX_EXP_T1_SHIFT,
+	},
+	{
+		.prop = "intel,ixp4xx-eb-t2",
+		.max = 3,
+		.mask = IXP4XX_EXP_T2_MASK,
+		.shift = IXP4XX_EXP_T2_SHIFT,
+	},
+	{
+		.prop = "intel,ixp4xx-eb-t3",
+		.max = 15,
+		.mask = IXP4XX_EXP_T3_MASK,
+		.shift = IXP4XX_EXP_T3_SHIFT,
+	},
+	{
+		.prop = "intel,ixp4xx-eb-t4",
+		.max = 3,
+		.mask = IXP4XX_EXP_T4_MASK,
+		.shift = IXP4XX_EXP_T4_SHIFT,
+	},
+	{
+		.prop = "intel,ixp4xx-eb-t5",
+		.max = 15,
+		.mask = IXP4XX_EXP_T5_MASK,
+		.shift = IXP4XX_EXP_T5_SHIFT,
+	},
+	{
+		.prop = "intel,ixp4xx-eb-byte-access-on-halfword",
+		.max = 1,
+		.mask = IXP4XX_EXP_BYTE_RD16,
+	},
+	{
+		.prop = "intel,ixp4xx-eb-hpi-hrdy-pol-high",
+		.max = 1,
+		.mask = IXP4XX_EXP_HRDY_POL,
+	},
+	{
+		.prop = "intel,ixp4xx-eb-mux-address-and-data",
+		.max = 1,
+		.mask = IXP4XX_EXP_MUX_EN,
+	},
+	{
+		.prop = "intel,ixp4xx-eb-ahb-split-transfers",
+		.max = 1,
+		.mask = IXP4XX_EXP_SPLT_EN,
+	},
+	{
+		.prop = "intel,ixp4xx-eb-write-enable",
+		.max = 1,
+		.mask = IXP4XX_EXP_WR_EN,
+	},
+	{
+		.prop = "intel,ixp4xx-eb-byte-access",
+		.max = 1,
+		.mask = IXP4XX_EXP_BYTE_EN,
+	},
+};
+
+static void ixp4xx_exp_setup_chipselect(struct ixp4xx_eb *eb,
+					struct device_node *np,
+					u32 cs_index,
+					u32 cs_size)
+{
+	u32 cs_cfg;
+	u32 val;
+	u32 cur_cssize;
+	u32 cs_order;
+	int ret;
+	int i;
+
+	if (eb->is_42x && (cs_index > 7)) {
+		dev_err(eb->dev,
+			"invalid chipselect %u, we only support 0-7\n",
+			cs_index);
+		return;
+	}
+	if (eb->is_43x && (cs_index > 3)) {
+		dev_err(eb->dev,
+			"invalid chipselect %u, we only support 0-3\n",
+			cs_index);
+		return;
+	}
+
+	/* Several chip selects can be joined into one device */
+	if (cs_size > IXP4XX_EXP_STRIDE)
+		cur_cssize = IXP4XX_EXP_STRIDE;
+	else
+		cur_cssize = cs_size;
+
+	/*
+	 * The following will read/modify/write the configuration for one
+	 * chipselect, attempting to leave the boot defaults in place unless
+	 * something is explicitly defined.
+	 */
+	regmap_read(eb->rmap, IXP4XX_EXP_TIMING_CS0 +
+		    IXP4XX_EXP_TIMING_STRIDE * cs_index, &cs_cfg);
+	dev_info(eb->dev, "CS%d at %#08x, size %#08x, config before: %#08x\n",
+		 cs_index, eb->bus_base + IXP4XX_EXP_STRIDE * cs_index,
+		 cur_cssize, cs_cfg);
+
+	/* Size set-up first align to 2^9 .. 2^24 */
+	cur_cssize = roundup_pow_of_two(cur_cssize);
+	if (cur_cssize < 512)
+		cur_cssize = 512;
+	cs_order = ilog2(cur_cssize);
+	if (cs_order < 9 || cs_order > 24) {
+		dev_err(eb->dev, "illegal size order %d\n", cs_order);
+		return;
+	}
+	dev_dbg(eb->dev, "CS%d size order: %d\n", cs_index, cs_order);
+	cs_cfg &= ~(IXP4XX_EXP_SIZE_MASK);
+	cs_cfg |= ((cs_order - 9) << IXP4XX_EXP_SIZE_SHIFT);
+
+	for (i = 0; i < ARRAY_SIZE(ixp4xx_exp_tim_props); i++) {
+		const struct ixp4xx_exp_tim_prop *ip = &ixp4xx_exp_tim_props[i];
+
+		/* All are regular u32 values */
+		ret = of_property_read_u32(np, ip->prop, &val);
+		if (ret)
+			continue;
+
+		/* Handle bools (single bits) first */
+		if (ip->max == 1) {
+			if (val)
+				cs_cfg |= ip->mask;
+			else
+				cs_cfg &= ~ip->mask;
+			dev_info(eb->dev, "CS%d %s %s\n", cs_index,
+				 val ? "enabled" : "disabled",
+				 ip->prop);
+			continue;
+		}
+
+		if (val > ip->max) {
+			dev_err(eb->dev,
+				"CS%d too high value for %s: %u, capped at %u\n",
+				cs_index, ip->prop, val, ip->max);
+			val = ip->max;
+		}
+		/* This assumes max value fills all the assigned bits (and it does) */
+		cs_cfg &= ~ip->mask;
+		cs_cfg |= (val << ip->shift);
+		dev_info(eb->dev, "CS%d set %s to %u\n", cs_index, ip->prop, val);
+	}
+
+	ret = of_property_read_u32(np, "intel,ixp4xx-eb-cycle-type", &val);
+	if (!ret) {
+		if (val > 3) {
+			dev_err(eb->dev, "illegal cycle type %d\n", val);
+			return;
+		}
+		dev_info(eb->dev, "CS%d set cycle type %d\n", cs_index, val);
+		cs_cfg &= ~IXP4XX_EXP_CYC_TYPE_MASK;
+		cs_cfg |= val << IXP4XX_EXP_CYC_TYPE_SHIFT;
+	}
+
+	if (eb->is_42x)
+		cs_cfg &= ~IXP42X_RESERVED;
+	if (eb->is_43x) {
+		cs_cfg &= ~IXP43X_RESERVED;
+		/*
+		 * This bit for Intel strata flash is currently unused, but let's
+		 * report it if we find one.
262 + */ 263 + if (cs_cfg & IXP43X_EXP_SYNC_INTEL) 264 + dev_info(eb->dev, "claims to be Intel strata flash\n"); 265 + } 266 + cs_cfg |= IXP4XX_EXP_CS_EN; 267 + 268 + regmap_write(eb->rmap, 269 + IXP4XX_EXP_TIMING_CS0 + IXP4XX_EXP_TIMING_STRIDE * cs_index, 270 + cs_cfg); 271 + dev_info(eb->dev, "CS%d wrote %#08x into CS config\n", cs_index, cs_cfg); 272 + 273 + /* 274 + * If several chip selects are joined together into one big 275 + * device area, we call ourselves recursively for each successive 276 + * chip select. For a 32MB flash chip this results in two calls 277 + * for example. 278 + */ 279 + if (cs_size > IXP4XX_EXP_STRIDE) 280 + ixp4xx_exp_setup_chipselect(eb, np, 281 + cs_index + 1, 282 + cs_size - IXP4XX_EXP_STRIDE); 283 + } 284 + 285 + static void ixp4xx_exp_setup_child(struct ixp4xx_eb *eb, 286 + struct device_node *np) 287 + { 288 + u32 cs_sizes[IXP4XX_EXP_NUM_CS]; 289 + int num_regs; 290 + u32 csindex; 291 + u32 cssize; 292 + int ret; 293 + int i; 294 + 295 + num_regs = of_property_count_elems_of_size(np, "reg", IXP4XX_OF_REG_SIZE); 296 + if (num_regs <= 0) 297 + return; 298 + dev_dbg(eb->dev, "child %s has %d register sets\n", 299 + of_node_full_name(np), num_regs); 300 + 301 + for (csindex = 0; csindex < IXP4XX_EXP_NUM_CS; csindex++) 302 + cs_sizes[csindex] = 0; 303 + 304 + for (i = 0; i < num_regs; i++) { 305 + u32 rbase, rsize; 306 + 307 + ret = of_property_read_u32_index(np, "reg", 308 + i * IXP4XX_OF_REG_SIZE, &csindex); 309 + if (ret) 310 + break; 311 + ret = of_property_read_u32_index(np, "reg", 312 + i * IXP4XX_OF_REG_SIZE + 1, &rbase); 313 + if (ret) 314 + break; 315 + ret = of_property_read_u32_index(np, "reg", 316 + i * IXP4XX_OF_REG_SIZE + 2, &rsize); 317 + if (ret) 318 + break; 319 + 320 + if (csindex >= IXP4XX_EXP_NUM_CS) { 321 + dev_err(eb->dev, "illegal CS %d\n", csindex); 322 + continue; 323 + } 324 + /* 325 + * The memory window always starts from CS base so we need to add 326 + * the start and size to get to the size from the 
start of the CS 327 + * base. For example if CS0 is at 0x50000000 and the reg is 328 + * <0 0xe40000 0x40000> the size is e80000. 329 + * 330 + * Roof this if we have several regs setting the same CS. 331 + */ 332 + cssize = rbase + rsize; 333 + dev_dbg(eb->dev, "CS%d size %#08x\n", csindex, cssize); 334 + if (cs_sizes[csindex] < cssize) 335 + cs_sizes[csindex] = cssize; 336 + } 337 + 338 + for (csindex = 0; csindex < IXP4XX_EXP_NUM_CS; csindex++) { 339 + cssize = cs_sizes[csindex]; 340 + if (!cssize) 341 + continue; 342 + /* Just this one, so set it up and return */ 343 + ixp4xx_exp_setup_chipselect(eb, np, csindex, cssize); 344 + } 345 + } 346 + 347 + static int ixp4xx_exp_probe(struct platform_device *pdev) 348 + { 349 + struct device *dev = &pdev->dev; 350 + struct device_node *np = dev->of_node; 351 + struct ixp4xx_eb *eb; 352 + struct device_node *child; 353 + bool have_children = false; 354 + u32 val; 355 + int ret; 356 + 357 + eb = devm_kzalloc(dev, sizeof(*eb), GFP_KERNEL); 358 + if (!eb) 359 + return -ENOMEM; 360 + 361 + eb->dev = dev; 362 + eb->is_42x = of_device_is_compatible(np, "intel,ixp42x-expansion-bus-controller"); 363 + eb->is_43x = of_device_is_compatible(np, "intel,ixp43x-expansion-bus-controller"); 364 + 365 + eb->rmap = syscon_node_to_regmap(np); 366 + if (IS_ERR(eb->rmap)) 367 + return dev_err_probe(dev, PTR_ERR(eb->rmap), "no regmap\n"); 368 + 369 + /* We check that the regmap work only on first read */ 370 + ret = regmap_read(eb->rmap, IXP4XX_EXP_CNFG0, &val); 371 + if (ret) 372 + return dev_err_probe(dev, ret, "cannot read regmap\n"); 373 + if (val & IXP4XX_EXP_CNFG0_MEM_MAP) 374 + eb->bus_base = IXP4XX_EXP_BOOT_BASE; 375 + else 376 + eb->bus_base = IXP4XX_EXP_NORMAL_BASE; 377 + dev_info(dev, "expansion bus at %08x\n", eb->bus_base); 378 + 379 + if (eb->is_43x) { 380 + /* Check some fuses */ 381 + regmap_read(eb->rmap, IXP43X_EXP_UNIT_FUSE_RESET, &val); 382 + switch (FIELD_GET(IXP43x_EXP_FUSE_SPEED_MASK, val)) { 383 + case 0: 384 + 
dev_info(dev, "IXP43x at 533 MHz\n"); 385 + break; 386 + case 1: 387 + dev_info(dev, "IXP43x at 400 MHz\n"); 388 + break; 389 + case 2: 390 + dev_info(dev, "IXP43x at 667 MHz\n"); 391 + break; 392 + default: 393 + dev_info(dev, "IXP43x unknown speed\n"); 394 + break; 395 + } 396 + } 397 + 398 + /* Walk over the child nodes and see what chipselects we use */ 399 + for_each_available_child_of_node(np, child) { 400 + ixp4xx_exp_setup_child(eb, child); 401 + /* We have at least one child */ 402 + have_children = true; 403 + } 404 + 405 + if (have_children) 406 + return of_platform_default_populate(np, NULL, dev); 407 + 408 + return 0; 409 + } 410 + 411 + static const struct of_device_id ixp4xx_exp_of_match[] = { 412 + { .compatible = "intel,ixp42x-expansion-bus-controller", }, 413 + { .compatible = "intel,ixp43x-expansion-bus-controller", }, 414 + { .compatible = "intel,ixp45x-expansion-bus-controller", }, 415 + { .compatible = "intel,ixp46x-expansion-bus-controller", }, 416 + { } 417 + }; 418 + 419 + static struct platform_driver ixp4xx_exp_driver = { 420 + .probe = ixp4xx_exp_probe, 421 + .driver = { 422 + .name = "intel-extbus", 423 + .of_match_table = ixp4xx_exp_of_match, 424 + }, 425 + }; 426 + module_platform_driver(ixp4xx_exp_driver); 427 + MODULE_AUTHOR("Linus Walleij <linus.walleij@linaro.org>"); 428 + MODULE_DESCRIPTION("Intel IXP4xx external bus driver"); 429 + MODULE_LICENSE("GPL");
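The chip-select sizing logic in ixp4xx_exp_setup_chipselect() above rounds the requested span up to a power of two, clamps the order to the 2^9..2^24 range the hardware accepts, and programs order minus 9 into the SIZE field at bits 13:10. A standalone sketch of that arithmetic, with plain-C stand-ins for the kernel's roundup_pow_of_two() and ilog2():

```c
#include <assert.h>
#include <stdint.h>

#define IXP4XX_EXP_SIZE_SHIFT	10

/* Stand-in for the kernel's roundup_pow_of_two() (32-bit). */
static uint32_t roundup_pow2(uint32_t v)
{
	v--;
	v |= v >> 1;
	v |= v >> 2;
	v |= v >> 4;
	v |= v >> 8;
	v |= v >> 16;
	return v + 1;
}

/* Stand-in for the kernel's ilog2(). */
static int ilog2_u32(uint32_t v)
{
	int r = -1;

	while (v) {
		v >>= 1;
		r++;
	}
	return r;
}

/*
 * Mirror of the driver's size computation: returns the value to OR into
 * the SIZE field (bits 13:10), or -1 for an illegal size order.
 */
static int exp_size_field(uint32_t cs_size)
{
	uint32_t sz = roundup_pow2(cs_size);
	int order;

	if (sz < 512)
		sz = 512;
	order = ilog2_u32(sz);
	if (order < 9 || order > 24)
		return -1;
	return (order - 9) << IXP4XX_EXP_SIZE_SHIFT;
}
```

For the 0xe80000 span used as an example in the driver comment, this rounds up to 16 MiB (order 24), the largest single chip select; a 32 MiB device instead spans two chip selects via the recursive call at the end of the function.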
+7 -10
drivers/bus/ti-sysc.c
··· 855 855 } 856 856 857 857 /** 858 - * syc_ioremap - ioremap register space for the interconnect target module 858 + * sysc_ioremap - ioremap register space for the interconnect target module 859 859 * @ddata: device driver data 860 860 * 861 861 * Note that the interconnect target module registers can be anywhere ··· 1446 1446 SYSC_QUIRK_LEGACY_IDLE | SYSC_QUIRK_OPT_CLKS_IN_RESET), 1447 1447 SYSC_QUIRK("sham", 0, 0x100, 0x110, 0x114, 0x40000c03, 0xffffffff, 1448 1448 SYSC_QUIRK_LEGACY_IDLE), 1449 - SYSC_QUIRK("smartreflex", 0, -ENODEV, 0x24, -ENODEV, 0x00000000, 0xffffffff, 1450 - SYSC_QUIRK_LEGACY_IDLE), 1451 - SYSC_QUIRK("smartreflex", 0, -ENODEV, 0x38, -ENODEV, 0x00000000, 0xffffffff, 1452 - SYSC_QUIRK_LEGACY_IDLE), 1453 1449 SYSC_QUIRK("uart", 0, 0x50, 0x54, 0x58, 0x00000046, 0xffffffff, 1454 1450 SYSC_QUIRK_SWSUP_SIDLE | SYSC_QUIRK_LEGACY_IDLE), 1455 1451 SYSC_QUIRK("uart", 0, 0x50, 0x54, 0x58, 0x00000052, 0xffffffff, ··· 1497 1501 SYSC_MODULE_QUIRK_SGX), 1498 1502 SYSC_QUIRK("lcdc", 0, 0, 0x54, -ENODEV, 0x4f201000, 0xffffffff, 1499 1503 SYSC_QUIRK_SWSUP_SIDLE | SYSC_QUIRK_SWSUP_MSTANDBY), 1504 + SYSC_QUIRK("mcasp", 0, 0, 0x4, -ENODEV, 0x44306302, 0xffffffff, 1505 + SYSC_QUIRK_SWSUP_SIDLE), 1500 1506 SYSC_QUIRK("rtc", 0, 0x74, 0x78, -ENODEV, 0x4eb01908, 0xffff00f0, 1501 1507 SYSC_MODULE_QUIRK_RTC_UNLOCK), 1502 1508 SYSC_QUIRK("tptc", 0, 0, 0x10, -ENODEV, 0x40006c00, 0xffffefff, ··· 1555 1557 SYSC_QUIRK("hsi", 0, 0, 0x10, 0x14, 0x50043101, 0xffffffff, 0), 1556 1558 SYSC_QUIRK("iss", 0, 0, 0x10, -ENODEV, 0x40000101, 0xffffffff, 0), 1557 1559 SYSC_QUIRK("keypad", 0x4a31c000, 0, 0x10, 0x14, 0x00000020, 0xffffffff, 0), 1558 - SYSC_QUIRK("mcasp", 0, 0, 0x4, -ENODEV, 0x44306302, 0xffffffff, 0), 1559 1560 SYSC_QUIRK("mcasp", 0, 0, 0x4, -ENODEV, 0x44307b02, 0xffffffff, 0), 1560 1561 SYSC_QUIRK("mcbsp", 0, -ENODEV, 0x8c, -ENODEV, 0, 0, 0), 1561 1562 SYSC_QUIRK("mcspi", 0, 0, 0x10, -ENODEV, 0x40300a0b, 0xffff00ff, 0), ··· 1582 1585 SYSC_QUIRK("sdma", 0, 0, 0x2c, 
0x28, 0x00010900, 0xffffffff, 0), 1583 1586 SYSC_QUIRK("slimbus", 0, 0, 0x10, -ENODEV, 0x40000902, 0xffffffff, 0), 1584 1587 SYSC_QUIRK("slimbus", 0, 0, 0x10, -ENODEV, 0x40002903, 0xffffffff, 0), 1588 + SYSC_QUIRK("smartreflex", 0, -ENODEV, 0x24, -ENODEV, 0x00000000, 0xffffffff, 0), 1589 + SYSC_QUIRK("smartreflex", 0, -ENODEV, 0x38, -ENODEV, 0x00000000, 0xffffffff, 0), 1585 1590 SYSC_QUIRK("spinlock", 0, 0, 0x10, -ENODEV, 0x50020000, 0xffffffff, 0), 1586 1591 SYSC_QUIRK("rng", 0, 0x1fe0, 0x1fe4, -ENODEV, 0x00000020, 0xffffffff, 0), 1587 1592 SYSC_QUIRK("timer", 0, 0, 0x10, 0x14, 0x00000013, 0xffffffff, 0), ··· 3114 3115 goto unprepare; 3115 3116 3116 3117 pm_runtime_enable(ddata->dev); 3117 - error = pm_runtime_get_sync(ddata->dev); 3118 + error = pm_runtime_resume_and_get(ddata->dev); 3118 3119 if (error < 0) { 3119 - pm_runtime_put_noidle(ddata->dev); 3120 3120 pm_runtime_disable(ddata->dev); 3121 3121 goto unprepare; 3122 3122 } ··· 3173 3175 3174 3176 cancel_delayed_work_sync(&ddata->idle_work); 3175 3177 3176 - error = pm_runtime_get_sync(ddata->dev); 3178 + error = pm_runtime_resume_and_get(ddata->dev); 3177 3179 if (error < 0) { 3178 - pm_runtime_put_noidle(ddata->dev); 3179 3180 pm_runtime_disable(ddata->dev); 3180 3181 goto unprepare; 3181 3182 }
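The ti-sysc switch from pm_runtime_get_sync() to pm_runtime_resume_and_get() lets both error paths drop their pm_runtime_put_noidle() call: get_sync() bumps the usage count even when resume fails, while resume_and_get() undoes the bump itself on failure. A toy model of just that usage-count difference (not kernel code; the error value is an arbitrary stand-in):

```c
#include <assert.h>

struct dev_model {
	int usage;	/* models the runtime PM usage count */
	int broken;	/* simulates a resume failure */
};

/* Models pm_runtime_get_sync(): the count is bumped even on error. */
static int get_sync(struct dev_model *d)
{
	d->usage++;
	return d->broken ? -1 : 0;
}

/* Models pm_runtime_resume_and_get(): drops the count itself on error. */
static int resume_and_get(struct dev_model *d)
{
	int ret = get_sync(d);

	if (ret < 0)
		d->usage--;
	return ret;
}
```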
+36 -12
drivers/clocksource/timer-ixp4xx.c
··· 18 18 #include <linux/delay.h> 19 19 #include <linux/of_address.h> 20 20 #include <linux/of_irq.h> 21 + #include <linux/platform_device.h> 21 22 /* Goes away with OF conversion */ 22 23 #include <linux/platform_data/timer-ixp4xx.h> 23 24 ··· 30 29 #define IXP4XX_OSRT1_OFFSET 0x08 /* Timer 1 Reload */ 31 30 #define IXP4XX_OST2_OFFSET 0x0C /* Timer 2 Timestamp */ 32 31 #define IXP4XX_OSRT2_OFFSET 0x10 /* Timer 2 Reload */ 33 - #define IXP4XX_OSWT_OFFSET 0x14 /* Watchdog Timer */ 34 - #define IXP4XX_OSWE_OFFSET 0x18 /* Watchdog Enable */ 35 - #define IXP4XX_OSWK_OFFSET 0x1C /* Watchdog Key */ 36 32 #define IXP4XX_OSST_OFFSET 0x20 /* Timer Status */ 37 33 38 34 /* ··· 43 45 #define IXP4XX_OSST_TIMER_1_PEND 0x00000001 44 46 #define IXP4XX_OSST_TIMER_2_PEND 0x00000002 45 47 #define IXP4XX_OSST_TIMER_TS_PEND 0x00000004 46 - #define IXP4XX_OSST_TIMER_WDOG_PEND 0x00000008 47 - #define IXP4XX_OSST_TIMER_WARM_RESET 0x00000010 48 - 49 - #define IXP4XX_WDT_KEY 0x0000482E 50 - #define IXP4XX_WDT_RESET_ENABLE 0x00000001 51 - #define IXP4XX_WDT_IRQ_ENABLE 0x00000002 52 - #define IXP4XX_WDT_COUNT_ENABLE 0x00000004 48 + /* Remaining registers are for the watchdog and defined in the watchdog driver */ 53 49 54 50 struct ixp4xx_timer { 55 51 void __iomem *base; 56 - unsigned int tick_rate; 57 52 u32 latch; 58 53 struct clock_event_device clkevt; 59 54 #ifdef CONFIG_ARM ··· 172 181 if (!tmr) 173 182 return -ENOMEM; 174 183 tmr->base = base; 175 - tmr->tick_rate = timer_freq; 176 184 177 185 /* 178 186 * The timer register doesn't allow to specify the two least ··· 228 238 229 239 return 0; 230 240 } 241 + 242 + static struct platform_device ixp4xx_watchdog_device = { 243 + .name = "ixp4xx-watchdog", 244 + .id = -1, 245 + }; 246 + 247 + /* 248 + * This probe gets called after the timer is already up and running. The main 249 + * function on this platform is to spawn the watchdog device as a child. 
250 + */ 251 + static int ixp4xx_timer_probe(struct platform_device *pdev) 252 + { 253 + struct device *dev = &pdev->dev; 254 + 255 + /* Pass the base address as platform data and nothing else */ 256 + ixp4xx_watchdog_device.dev.platform_data = local_ixp4xx_timer->base; 257 + ixp4xx_watchdog_device.dev.parent = dev; 258 + return platform_device_register(&ixp4xx_watchdog_device); 259 + } 260 + 261 + static const struct of_device_id ixp4xx_timer_dt_id[] = { 262 + { .compatible = "intel,ixp4xx-timer", }, 263 + { /* sentinel */ }, 264 + }; 265 + 266 + static struct platform_driver ixp4xx_timer_driver = { 267 + .probe = ixp4xx_timer_probe, 268 + .driver = { 269 + .name = "ixp4xx-timer", 270 + .of_match_table = ixp4xx_timer_dt_id, 271 + .suppress_bind_attrs = true, 272 + }, 273 + }; 274 + builtin_platform_driver(ixp4xx_timer_driver); 231 275 232 276 /** 233 277 * ixp4xx_timer_setup() - Timer setup function to be called from boardfiles
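The new ixp4xx_timer_probe() above only hands the timer's register base to a child watchdog device through platform_data. The hand-off pattern, modeled with stand-in structures rather than the kernel's struct platform_device:

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in for struct platform_device; fields trimmed to the hand-off. */
struct pdev_model {
	const char *name;
	void *platform_data;
	const struct pdev_model *parent;
};

static struct pdev_model watchdog_dev = {
	.name = "ixp4xx-watchdog",
};

/* Models the probe: pass the base address as platform data, nothing else. */
static int spawn_watchdog(const struct pdev_model *parent, void *timer_base)
{
	watchdog_dev.platform_data = timer_base;
	watchdog_dev.parent = parent;
	return 0;	/* platform_device_register() in the real driver */
}
```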
+70 -23
drivers/dma/imx-sdma.c
··· 198 198 s32 per_2_firi_addr; 199 199 s32 mcu_2_firi_addr; 200 200 s32 uart_2_per_addr; 201 - s32 uart_2_mcu_addr; 201 + s32 uart_2_mcu_ram_addr; 202 202 s32 per_2_app_addr; 203 203 s32 mcu_2_app_addr; 204 204 s32 per_2_per_addr; 205 205 s32 uartsh_2_per_addr; 206 - s32 uartsh_2_mcu_addr; 206 + s32 uartsh_2_mcu_ram_addr; 207 207 s32 per_2_shp_addr; 208 208 s32 mcu_2_shp_addr; 209 209 s32 ata_2_mcu_addr; ··· 230 230 s32 zcanfd_2_mcu_addr; 231 231 s32 zqspi_2_mcu_addr; 232 232 s32 mcu_2_ecspi_addr; 233 + s32 mcu_2_sai_addr; 234 + s32 sai_2_mcu_addr; 235 + s32 uart_2_mcu_addr; 236 + s32 uartsh_2_mcu_addr; 233 237 /* End of v3 array */ 234 238 s32 mcu_2_zqspi_addr; 235 239 /* End of v4 array */ ··· 437 433 unsigned long watermark_level; 438 434 u32 shp_addr, per_addr; 439 435 enum dma_status status; 440 - bool context_loaded; 441 436 struct imx_dma_data data; 442 437 struct work_struct terminate_worker; 438 + struct list_head terminated; 439 + bool is_ram_script; 443 440 }; 444 441 445 442 #define IMX_DMA_SG_LOOP BIT(0) ··· 481 476 int num_events; 482 477 struct sdma_script_start_addrs *script_addrs; 483 478 bool check_ratio; 479 + /* 480 + * ecspi ERR009165 fixed should be done in sdma script 481 + * and it has been fixed in soc from i.mx6ul. 482 + * please get more information from the below link: 483 + * https://www.nxp.com/docs/en/errata/IMX6DQCE.pdf 484 + */ 485 + bool ecspi_fixed; 484 486 }; 485 487 486 488 struct sdma_engine { ··· 511 499 struct sdma_buffer_descriptor *bd0; 512 500 /* clock ratio for AHB:SDMA core. 
1:1 is 1, 2:1 is 0*/ 513 501 bool clk_ratio; 502 + bool fw_loaded; 514 503 }; 515 504 516 505 static int sdma_config_write(struct dma_chan *chan, ··· 608 595 .script_addrs = &sdma_script_imx6q, 609 596 }; 610 597 598 + static struct sdma_driver_data sdma_imx6ul = { 599 + .chnenbl0 = SDMA_CHNENBL0_IMX35, 600 + .num_events = 48, 601 + .script_addrs = &sdma_script_imx6q, 602 + .ecspi_fixed = true, 603 + }; 604 + 611 605 static struct sdma_script_start_addrs sdma_script_imx7d = { 612 606 .ap_2_ap_addr = 644, 613 607 .uart_2_mcu_addr = 819, ··· 648 628 { .compatible = "fsl,imx31-sdma", .data = &sdma_imx31, }, 649 629 { .compatible = "fsl,imx25-sdma", .data = &sdma_imx25, }, 650 630 { .compatible = "fsl,imx7d-sdma", .data = &sdma_imx7d, }, 631 + { .compatible = "fsl,imx6ul-sdma", .data = &sdma_imx6ul, }, 651 632 { .compatible = "fsl,imx8mq-sdma", .data = &sdma_imx8mq, }, 652 633 { /* sentinel */ } 653 634 }; ··· 940 919 sdmac->pc_to_device = 0; 941 920 sdmac->device_to_device = 0; 942 921 sdmac->pc_to_pc = 0; 922 + sdmac->is_ram_script = false; 943 923 944 924 switch (peripheral_type) { 945 925 case IMX_DMATYPE_MEMORY: ··· 967 945 emi_2_per = sdma->script_addrs->mcu_2_ata_addr; 968 946 break; 969 947 case IMX_DMATYPE_CSPI: 948 + per_2_emi = sdma->script_addrs->app_2_mcu_addr; 949 + 950 + /* Use rom script mcu_2_app if ERR009165 fixed */ 951 + if (sdmac->sdma->drvdata->ecspi_fixed) { 952 + emi_2_per = sdma->script_addrs->mcu_2_app_addr; 953 + } else { 954 + emi_2_per = sdma->script_addrs->mcu_2_ecspi_addr; 955 + sdmac->is_ram_script = true; 956 + } 957 + 958 + break; 970 959 case IMX_DMATYPE_EXT: 971 960 case IMX_DMATYPE_SSI: 972 961 case IMX_DMATYPE_SAI: ··· 987 954 case IMX_DMATYPE_SSI_DUAL: 988 955 per_2_emi = sdma->script_addrs->ssish_2_mcu_addr; 989 956 emi_2_per = sdma->script_addrs->mcu_2_ssish_addr; 957 + sdmac->is_ram_script = true; 990 958 break; 991 959 case IMX_DMATYPE_SSI_SP: 992 960 case IMX_DMATYPE_MMC: ··· 1002 968 per_2_emi = 
sdma->script_addrs->asrc_2_mcu_addr; 1003 969 emi_2_per = sdma->script_addrs->asrc_2_mcu_addr; 1004 970 per_2_per = sdma->script_addrs->per_2_per_addr; 971 + sdmac->is_ram_script = true; 1005 972 break; 1006 973 case IMX_DMATYPE_ASRC_SP: 1007 974 per_2_emi = sdma->script_addrs->shp_2_mcu_addr; ··· 1042 1007 struct sdma_buffer_descriptor *bd0 = sdma->bd0; 1043 1008 int ret; 1044 1009 unsigned long flags; 1045 - 1046 - if (sdmac->context_loaded) 1047 - return 0; 1048 1010 1049 1011 if (sdmac->direction == DMA_DEV_TO_MEM) 1050 1012 load_address = sdmac->pc_from_device; ··· 1085 1053 1086 1054 spin_unlock_irqrestore(&sdma->channel_0_lock, flags); 1087 1055 1088 - sdmac->context_loaded = true; 1089 - 1090 1056 return ret; 1091 1057 } 1092 1058 ··· 1108 1078 { 1109 1079 struct sdma_channel *sdmac = container_of(work, struct sdma_channel, 1110 1080 terminate_worker); 1111 - unsigned long flags; 1112 - LIST_HEAD(head); 1113 - 1114 1081 /* 1115 1082 * According to NXP R&D team a delay of one BD SDMA cost time 1116 1083 * (maximum is 1ms) should be added after disable of the channel ··· 1116 1089 */ 1117 1090 usleep_range(1000, 2000); 1118 1091 1119 - spin_lock_irqsave(&sdmac->vc.lock, flags); 1120 - vchan_get_all_descriptors(&sdmac->vc, &head); 1121 - spin_unlock_irqrestore(&sdmac->vc.lock, flags); 1122 - vchan_dma_desc_free_list(&sdmac->vc, &head); 1123 - sdmac->context_loaded = false; 1092 + vchan_dma_desc_free_list(&sdmac->vc, &sdmac->terminated); 1124 1093 } 1125 1094 1126 1095 static int sdma_terminate_all(struct dma_chan *chan) ··· 1130 1107 1131 1108 if (sdmac->desc) { 1132 1109 vchan_terminate_vdesc(&sdmac->desc->vd); 1110 + /* 1111 + * move out current descriptor into terminated list so that 1112 + * it could be free in sdma_channel_terminate_work alone 1113 + * later without potential involving next descriptor raised 1114 + * up before the last descriptor terminated. 
1115 + */ 1116 + vchan_get_all_descriptors(&sdmac->vc, &sdmac->terminated); 1133 1117 sdmac->desc = NULL; 1134 1118 schedule_work(&sdmac->terminate_worker); 1135 1119 } ··· 1198 1168 static int sdma_config_channel(struct dma_chan *chan) 1199 1169 { 1200 1170 struct sdma_channel *sdmac = to_sdma_chan(chan); 1201 - int ret; 1202 1171 1203 1172 sdma_disable_channel(chan); 1204 1173 ··· 1237 1208 sdmac->watermark_level = 0; /* FIXME: M3_BASE_ADDRESS */ 1238 1209 } 1239 1210 1240 - ret = sdma_load_context(sdmac); 1241 - 1242 - return ret; 1211 + return 0; 1243 1212 } 1244 1213 1245 1214 static int sdma_set_channel_priority(struct sdma_channel *sdmac, ··· 1388 1361 1389 1362 sdmac->event_id0 = 0; 1390 1363 sdmac->event_id1 = 0; 1391 - sdmac->context_loaded = false; 1392 1364 1393 1365 sdma_set_channel_priority(sdmac, 0); 1394 1366 ··· 1399 1373 enum dma_transfer_direction direction, u32 bds) 1400 1374 { 1401 1375 struct sdma_desc *desc; 1376 + 1377 + if (!sdmac->sdma->fw_loaded && sdmac->is_ram_script) { 1378 + dev_warn_once(sdmac->sdma->dev, "sdma firmware not ready!\n"); 1379 + goto err_out; 1380 + } 1402 1381 1403 1382 desc = kzalloc((sizeof(*desc)), GFP_NOWAIT); 1404 1383 if (!desc) ··· 1753 1722 1754 1723 #define SDMA_SCRIPT_ADDRS_ARRAY_SIZE_V1 34 1755 1724 #define SDMA_SCRIPT_ADDRS_ARRAY_SIZE_V2 38 1756 - #define SDMA_SCRIPT_ADDRS_ARRAY_SIZE_V3 41 1757 - #define SDMA_SCRIPT_ADDRS_ARRAY_SIZE_V4 42 1725 + #define SDMA_SCRIPT_ADDRS_ARRAY_SIZE_V3 45 1726 + #define SDMA_SCRIPT_ADDRS_ARRAY_SIZE_V4 46 1758 1727 1759 1728 static void sdma_add_scripts(struct sdma_engine *sdma, 1760 1729 const struct sdma_script_start_addrs *addr) ··· 1778 1747 for (i = 0; i < sdma->script_number; i++) 1779 1748 if (addr_arr[i] > 0) 1780 1749 saddr_arr[i] = addr_arr[i]; 1750 + 1751 + /* 1752 + * get uart_2_mcu_addr/uartsh_2_mcu_addr rom script specially because 1753 + * they are now replaced by uart_2_mcu_ram_addr/uartsh_2_mcu_ram_addr 1754 + * to be compatible with legacy freescale/nxp sdma 
firmware, and they 1755 + * are located in the bottom part of sdma_script_start_addrs which are 1756 + * beyond the SDMA_SCRIPT_ADDRS_ARRAY_SIZE_V1. 1757 + */ 1758 + if (addr->uart_2_mcu_addr) 1759 + sdma->script_addrs->uart_2_mcu_addr = addr->uart_2_mcu_addr; 1760 + if (addr->uartsh_2_mcu_addr) 1761 + sdma->script_addrs->uartsh_2_mcu_addr = addr->uartsh_2_mcu_addr; 1762 + 1781 1763 } 1782 1764 1783 1765 static void sdma_load_firmware(const struct firmware *fw, void *context) ··· 1846 1802 clk_disable(sdma->clk_ahb); 1847 1803 1848 1804 sdma_add_scripts(sdma, addr); 1805 + 1806 + sdma->fw_loaded = true; 1849 1807 1850 1808 dev_info(sdma->dev, "loaded firmware %d.%d\n", 1851 1809 header->version_major, ··· 2132 2086 2133 2087 sdmac->channel = i; 2134 2088 sdmac->vc.desc_free = sdma_desc_free; 2089 + INIT_LIST_HEAD(&sdmac->terminated); 2135 2090 INIT_WORK(&sdmac->terminate_worker, 2136 2091 sdma_channel_terminate_work); 2137 2092 /*
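The ERR009165 handling in the imx-sdma diff boils down to a per-channel script choice: SoCs with the erratum fixed in silicon (i.MX6UL onward) can use the ROM mcu_2_app script, while older parts need the RAM mcu_2_ecspi script and must wait for firmware. A sketch of that selection (the script addresses below are placeholders, not real firmware offsets):

```c
#include <assert.h>
#include <stdbool.h>

struct scripts_model {
	int mcu_2_app_addr;	/* ROM script */
	int mcu_2_ecspi_addr;	/* RAM script, needs loaded firmware */
};

/*
 * Models the ECSPI branch of the script lookup: pick the ROM script when
 * ecspi_fixed is set, otherwise the RAM script, flagging that the channel
 * now depends on firmware being loaded.
 */
static int pick_ecspi_script(const struct scripts_model *s, bool ecspi_fixed,
			     bool *is_ram_script)
{
	if (ecspi_fixed) {
		*is_ram_script = false;
		return s->mcu_2_app_addr;
	}
	*is_ram_script = true;
	return s->mcu_2_ecspi_addr;
}
```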
+2 -34
drivers/firmware/Kconfig
··· 6 6 7 7 menu "Firmware Drivers" 8 8 9 - config ARM_SCMI_PROTOCOL 10 - tristate "ARM System Control and Management Interface (SCMI) Message Protocol" 11 - depends on ARM || ARM64 || COMPILE_TEST 12 - depends on MAILBOX || HAVE_ARM_SMCCC_DISCOVERY 13 - help 14 - ARM System Control and Management Interface (SCMI) protocol is a 15 - set of operating system-independent software interfaces that are 16 - used in system management. SCMI is extensible and currently provides 17 - interfaces for: Discovery and self-description of the interfaces 18 - it supports, Power domain management which is the ability to place 19 - a given device or domain into the various power-saving states that 20 - it supports, Performance management which is the ability to control 21 - the performance of a domain that is composed of compute engines 22 - such as application processors and other accelerators, Clock 23 - management which is the ability to set and inquire rates on platform 24 - managed clocks and Sensor management which is the ability to read 25 - sensor data, and be notified of sensor value. 26 - 27 - This protocol library provides interface for all the client drivers 28 - making use of the features offered by the SCMI. 29 - 30 - config ARM_SCMI_POWER_DOMAIN 31 - tristate "SCMI power domain driver" 32 - depends on ARM_SCMI_PROTOCOL || (COMPILE_TEST && OF) 33 - default y 34 - select PM_GENERIC_DOMAINS if PM 35 - help 36 - This enables support for the SCMI power domains which can be 37 - enabled or disabled via the SCP firmware 38 - 39 - This driver can also be built as a module. If so, the module 40 - will be called scmi_pm_domain. Note this may needed early in boot 41 - before rootfs may be available. 9 + source "drivers/firmware/arm_scmi/Kconfig" 42 10 43 11 config ARM_SCPI_PROTOCOL 44 12 tristate "ARM System Control and Power Interface (SCPI) Message Protocol" ··· 203 235 Say Y here if you want Intel RSU support. 
204 236 205 237 config QCOM_SCM 206 - bool 238 + tristate "Qcom SCM driver" 207 239 depends on ARM || ARM64 208 240 depends on HAVE_ARM_SMCCC 209 241 select RESET_CONTROLLER
+2 -1
drivers/firmware/Makefile
··· 17 17 obj-$(CONFIG_FIRMWARE_MEMMAP) += memmap.o 18 18 obj-$(CONFIG_RASPBERRYPI_FIRMWARE) += raspberrypi.o 19 19 obj-$(CONFIG_FW_CFG_SYSFS) += qemu_fw_cfg.o 20 - obj-$(CONFIG_QCOM_SCM) += qcom_scm.o qcom_scm-smc.o qcom_scm-legacy.o 20 + obj-$(CONFIG_QCOM_SCM) += qcom-scm.o 21 + qcom-scm-objs += qcom_scm.o qcom_scm-smc.o qcom_scm-legacy.o 21 22 obj-$(CONFIG_SYSFB) += sysfb.o 22 23 obj-$(CONFIG_SYSFB_SIMPLEFB) += sysfb_simplefb.o 23 24 obj-$(CONFIG_TI_SCI_PROTOCOL) += ti_sci.o
+95
drivers/firmware/arm_scmi/Kconfig
··· 1 + # SPDX-License-Identifier: GPL-2.0-only 2 + menu "ARM System Control and Management Interface Protocol" 3 + 4 + config ARM_SCMI_PROTOCOL 5 + tristate "ARM System Control and Management Interface (SCMI) Message Protocol" 6 + depends on ARM || ARM64 || COMPILE_TEST 7 + help 8 + ARM System Control and Management Interface (SCMI) protocol is a 9 + set of operating system-independent software interfaces that are 10 + used in system management. SCMI is extensible and currently provides 11 + interfaces for: Discovery and self-description of the interfaces 12 + it supports, Power domain management which is the ability to place 13 + a given device or domain into the various power-saving states that 14 + it supports, Performance management which is the ability to control 15 + the performance of a domain that is composed of compute engines 16 + such as application processors and other accelerators, Clock 17 + management which is the ability to set and inquire rates on platform 18 + managed clocks and Sensor management which is the ability to read 19 + sensor data, and be notified of sensor value. 20 + 21 + This protocol library provides interface for all the client drivers 22 + making use of the features offered by the SCMI. 23 + 24 + if ARM_SCMI_PROTOCOL 25 + 26 + config ARM_SCMI_HAVE_TRANSPORT 27 + bool 28 + help 29 + This declares whether at least one SCMI transport has been configured. 30 + Used to trigger a build bug when trying to build SCMI without any 31 + configured transport. 32 + 33 + config ARM_SCMI_HAVE_SHMEM 34 + bool 35 + help 36 + This declares whether a shared memory based transport for SCMI is 37 + available. 38 + 39 + config ARM_SCMI_HAVE_MSG 40 + bool 41 + help 42 + This declares whether a message passing based transport for SCMI is 43 + available. 
44 + 45 + config ARM_SCMI_TRANSPORT_MAILBOX 46 + bool "SCMI transport based on Mailbox" 47 + depends on MAILBOX 48 + select ARM_SCMI_HAVE_TRANSPORT 49 + select ARM_SCMI_HAVE_SHMEM 50 + default y 51 + help 52 + Enable mailbox based transport for SCMI. 53 + 54 + If you want the ARM SCMI PROTOCOL stack to include support for a 55 + transport based on mailboxes, answer Y. 56 + 57 + config ARM_SCMI_TRANSPORT_SMC 58 + bool "SCMI transport based on SMC" 59 + depends on HAVE_ARM_SMCCC_DISCOVERY 60 + select ARM_SCMI_HAVE_TRANSPORT 61 + select ARM_SCMI_HAVE_SHMEM 62 + default y 63 + help 64 + Enable SMC based transport for SCMI. 65 + 66 + If you want the ARM SCMI PROTOCOL stack to include support for a 67 + transport based on SMC, answer Y. 68 + 69 + config ARM_SCMI_TRANSPORT_VIRTIO 70 + bool "SCMI transport based on VirtIO" 71 + depends on VIRTIO 72 + select ARM_SCMI_HAVE_TRANSPORT 73 + select ARM_SCMI_HAVE_MSG 74 + help 75 + This enables the virtio based transport for SCMI. 76 + 77 + If you want the ARM SCMI PROTOCOL stack to include support for a 78 + transport based on VirtIO, answer Y. 79 + 80 + endif #ARM_SCMI_PROTOCOL 81 + 82 + config ARM_SCMI_POWER_DOMAIN 83 + tristate "SCMI power domain driver" 84 + depends on ARM_SCMI_PROTOCOL || (COMPILE_TEST && OF) 85 + default y 86 + select PM_GENERIC_DOMAINS if PM 87 + help 88 + This enables support for the SCMI power domains which can be 89 + enabled or disabled via the SCP firmware 90 + 91 + This driver can also be built as a module. If so, the module 92 + will be called scmi_pm_domain. Note this may needed early in boot 93 + before rootfs may be available. 94 + 95 + endmenu
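Each transport option in the new Kconfig selects the hidden ARM_SCMI_HAVE_TRANSPORT symbol, so the core can refuse a build with no transport configured at all. The same compile-time-selection idea in plain C, with each enabled option contributing one table entry (the option macros here are illustrative, not the real CONFIG_ names):

```c
#include <assert.h>
#include <string.h>

/* Illustrative stand-ins for CONFIG_ARM_SCMI_TRANSPORT_* switches. */
#define CFG_TRANSPORT_MAILBOX	1
#define CFG_TRANSPORT_SMC	0

struct transport_model {
	const char *name;
};

/* Only enabled transports contribute entries, mirroring the Kconfig selects. */
static const struct transport_model transports[] = {
#if CFG_TRANSPORT_MAILBOX
	{ .name = "mailbox" },
#endif
#if CFG_TRANSPORT_SMC
	{ .name = "smc" },
#endif
};

#define NUM_TRANSPORTS	(sizeof(transports) / sizeof(transports[0]))

/* An empty table here is the analogue of the core's no-transport check. */
```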
+5 -3
drivers/firmware/arm_scmi/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0-only 2 2 scmi-bus-y = bus.o 3 3 scmi-driver-y = driver.o notify.o 4 - scmi-transport-y = shmem.o 5 - scmi-transport-$(CONFIG_MAILBOX) += mailbox.o 6 - scmi-transport-$(CONFIG_HAVE_ARM_SMCCC_DISCOVERY) += smc.o 4 + scmi-transport-$(CONFIG_ARM_SCMI_HAVE_SHMEM) = shmem.o 5 + scmi-transport-$(CONFIG_ARM_SCMI_TRANSPORT_MAILBOX) += mailbox.o 6 + scmi-transport-$(CONFIG_ARM_SCMI_TRANSPORT_SMC) += smc.o 7 + scmi-transport-$(CONFIG_ARM_SCMI_HAVE_MSG) += msg.o 8 + scmi-transport-$(CONFIG_ARM_SCMI_TRANSPORT_VIRTIO) += virtio.o 7 9 scmi-protocols-y = base.o clock.o perf.o power.o reset.o sensors.o system.o voltage.o 8 10 scmi-module-objs := $(scmi-bus-y) $(scmi-driver-y) $(scmi-protocols-y) \ 9 11 $(scmi-transport-y)
+107 -6
drivers/firmware/arm_scmi/common.h
··· 14 14 #include <linux/device.h> 15 15 #include <linux/errno.h> 16 16 #include <linux/kernel.h> 17 + #include <linux/hashtable.h> 18 + #include <linux/list.h> 17 19 #include <linux/module.h> 20 + #include <linux/refcount.h> 18 21 #include <linux/scmi_protocol.h> 22 + #include <linux/spinlock.h> 19 23 #include <linux/types.h> 20 24 21 25 #include <asm/unaligned.h> ··· 69 65 #define MSG_XTRACT_TOKEN(hdr) FIELD_GET(MSG_TOKEN_ID_MASK, (hdr)) 70 66 #define MSG_TOKEN_MAX (MSG_XTRACT_TOKEN(MSG_TOKEN_ID_MASK) + 1) 71 67 68 + /* 69 + * Size of @pending_xfers hashtable included in @scmi_xfers_info; ideally, in 70 + * order to minimize space and collisions, this should equal max_msg, i.e. the 71 + * maximum number of in-flight messages on a specific platform, but such value 72 + * is only available at runtime while kernel hashtables are statically sized: 73 + * pick instead as a fixed static size the maximum number of entries that can 74 + * fit the whole table into one 4k page. 75 + */ 76 + #define SCMI_PENDING_XFERS_HT_ORDER_SZ 9 77 + 72 78 /** 73 79 * struct scmi_msg_hdr - Message(Tx/Rx) header 74 80 * 75 81 * @id: The identifier of the message being sent 76 82 * @protocol_id: The identifier of the protocol used to send @id message 83 + * @type: The SCMI type for this message 77 84 * @seq: The token to identify the message. When a message returns, the 78 85 * platform returns the whole message header unmodified including the 79 86 * token ··· 95 80 struct scmi_msg_hdr { 96 81 u8 id; 97 82 u8 protocol_id; 83 + u8 type; 98 84 u16 seq; 99 85 u32 status; 100 86 bool poll_completion; ··· 105 89 * pack_scmi_header() - packs and returns 32-bit header 106 90 * 107 91 * @hdr: pointer to header containing all the information on message id, 108 - * protocol id and sequence id. 92 + * protocol id, sequence id and type. 109 93 * 110 94 * Return: 32-bit packed message header to be sent to the platform. 
111 95 */ 112 96 static inline u32 pack_scmi_header(struct scmi_msg_hdr *hdr) 113 97 { 114 98 return FIELD_PREP(MSG_ID_MASK, hdr->id) | 99 + FIELD_PREP(MSG_TYPE_MASK, hdr->type) | 115 100 FIELD_PREP(MSG_TOKEN_ID_MASK, hdr->seq) | 116 101 FIELD_PREP(MSG_PROTOCOL_ID_MASK, hdr->protocol_id); 117 102 } ··· 127 110 { 128 111 hdr->id = MSG_XTRACT_ID(msg_hdr); 129 112 hdr->protocol_id = MSG_XTRACT_PROT_ID(msg_hdr); 113 + hdr->type = MSG_XTRACT_TYPE(msg_hdr); 130 114 } 131 115 132 116 /** ··· 152 134 * buffer for the rx path as we use for the tx path. 153 135 * @done: command message transmit completion event 154 136 * @async_done: pointer to delayed response message received event completion 137 + * @pending: True for xfers added to @pending_xfers hashtable 138 + * @node: An hlist_node reference used to store this xfer, alternatively, on 139 + * the free list @free_xfers or in the @pending_xfers hashtable 140 + * @users: A refcount to track the active users for this xfer. 141 + * This is meant to protect against the possibility that, when a command 142 + * transaction times out concurrently with the reception of a valid 143 + * response message, the xfer could be finally put on the TX path, and 144 + * so vanish, while on the RX path scmi_rx_callback() is still 145 + * processing it: in such a case this refcounting will ensure that, even 146 + * though the timed-out transaction will anyway cause the command 147 + * request to be reported as failed by time-out, the underlying xfer 148 + * cannot be discarded and possibly reused until the last one user on 149 + * the RX path has released it. 
150 + * @busy: An atomic flag to ensure exclusive write access to this xfer 151 + * @state: The current state of this transfer, with states transitions deemed 152 + * valid being: 153 + * - SCMI_XFER_SENT_OK -> SCMI_XFER_RESP_OK [ -> SCMI_XFER_DRESP_OK ] 154 + * - SCMI_XFER_SENT_OK -> SCMI_XFER_DRESP_OK 155 + * (Missing synchronous response is assumed OK and ignored) 156 + * @lock: A spinlock to protect state and busy fields. 157 + * @priv: A pointer for transport private usage. 155 158 */ 156 159 struct scmi_xfer { 157 160 int transfer_id; ··· 181 142 struct scmi_msg rx; 182 143 struct completion done; 183 144 struct completion *async_done; 145 + bool pending; 146 + struct hlist_node node; 147 + refcount_t users; 148 + #define SCMI_XFER_FREE 0 149 + #define SCMI_XFER_BUSY 1 150 + atomic_t busy; 151 + #define SCMI_XFER_SENT_OK 0 152 + #define SCMI_XFER_RESP_OK 1 153 + #define SCMI_XFER_DRESP_OK 2 154 + int state; 155 + /* A lock to protect state and busy fields */ 156 + spinlock_t lock; 157 + void *priv; 184 158 }; 159 + 160 + /* 161 + * An helper macro to lookup an xfer from the @pending_xfers hashtable 162 + * using the message sequence number token as a key. 
163 + */ 164 + #define XFER_FIND(__ht, __k) \ 165 + ({ \ 166 + typeof(__k) k_ = __k; \ 167 + struct scmi_xfer *xfer_ = NULL; \ 168 + \ 169 + hash_for_each_possible((__ht), xfer_, node, k_) \ 170 + if (xfer_->hdr.seq == k_) \ 171 + break; \ 172 + xfer_; \ 173 + }) 185 174 186 175 struct scmi_xfer_ops; 187 176 ··· 350 283 /** 351 284 * struct scmi_transport_ops - Structure representing a SCMI transport ops 352 285 * 286 + * @link_supplier: Optional callback to add link to a supplier device 353 287 * @chan_available: Callback to check if channel is available or not 354 288 * @chan_setup: Callback to allocate and setup a channel 355 289 * @chan_free: Callback to free a channel 290 + * @get_max_msg: Optional callback to provide max_msg dynamically 291 + * Returns the maximum number of messages for the channel type 292 + * (tx or rx) that can be pending simultaneously in the system 356 293 * @send_message: Callback to send a message 357 294 * @mark_txdone: Callback to mark tx as done 358 295 * @fetch_response: Callback to fetch response ··· 365 294 * @poll_done: Callback to poll transfer status 366 295 */ 367 296 struct scmi_transport_ops { 297 + int (*link_supplier)(struct device *dev); 368 298 bool (*chan_available)(struct device *dev, int idx); 369 299 int (*chan_setup)(struct scmi_chan_info *cinfo, struct device *dev, 370 300 bool tx); 371 301 int (*chan_free)(int id, void *p, void *data); 302 + unsigned int (*get_max_msg)(struct scmi_chan_info *base_cinfo); 372 303 int (*send_message)(struct scmi_chan_info *cinfo, 373 304 struct scmi_xfer *xfer); 374 305 void (*mark_txdone)(struct scmi_chan_info *cinfo, int ret); ··· 390 317 /** 391 318 * struct scmi_desc - Description of SoC integration 392 319 * 320 + * @transport_init: An optional function that a transport can provide to 321 + * initialize some transport-specific setup during SCMI core 322 + * initialization, so ahead of SCMI core probing. 
323 + * @transport_exit: An optional function that a transport can provide to 324 + * de-initialize some transport-specific setup during SCMI core 325 + * de-initialization, so after SCMI core removal. 393 326 * @ops: Pointer to the transport specific ops structure 394 327 * @max_rx_timeout_ms: Timeout for communication with SoC (in Milliseconds) 395 - * @max_msg: Maximum number of messages that can be pending 396 - * simultaneously in the system 328 + * @max_msg: Maximum number of messages for a channel type (tx or rx) that can 329 + * be pending simultaneously in the system. May be overridden by the 330 + * get_max_msg op. 397 331 * @max_msg_size: Maximum size of data per message that can be handled. 398 332 */ 399 333 struct scmi_desc { 334 + int (*transport_init)(void); 335 + void (*transport_exit)(void); 400 336 const struct scmi_transport_ops *ops; 401 337 int max_rx_timeout_ms; 402 338 int max_msg; 403 339 int max_msg_size; 404 340 }; 405 341 342 + #ifdef CONFIG_ARM_SCMI_TRANSPORT_MAILBOX 406 343 extern const struct scmi_desc scmi_mailbox_desc; 407 - #ifdef CONFIG_HAVE_ARM_SMCCC_DISCOVERY 344 + #endif 345 + #ifdef CONFIG_ARM_SCMI_TRANSPORT_SMC 408 346 extern const struct scmi_desc scmi_smc_desc; 409 347 #endif 348 + #ifdef CONFIG_ARM_SCMI_TRANSPORT_VIRTIO 349 + extern const struct scmi_desc scmi_virtio_desc; 350 + #endif 410 351 411 - void scmi_rx_callback(struct scmi_chan_info *cinfo, u32 msg_hdr); 352 + void scmi_rx_callback(struct scmi_chan_info *cinfo, u32 msg_hdr, void *priv); 412 353 void scmi_free_channel(struct scmi_chan_info *cinfo, struct idr *idr, int id); 413 354 414 355 /* shmem related declarations */ ··· 439 352 bool shmem_poll_done(struct scmi_shared_mem __iomem *shmem, 440 353 struct scmi_xfer *xfer); 441 354 355 + /* declarations for message passing transports */ 356 + struct scmi_msg_payld; 357 + 358 + /* Maximum overhead of message w.r.t. 
struct scmi_desc.max_msg_size */ 359 + #define SCMI_MSG_MAX_PROT_OVERHEAD (2 * sizeof(__le32)) 360 + 361 + size_t msg_response_size(struct scmi_xfer *xfer); 362 + size_t msg_command_size(struct scmi_xfer *xfer); 363 + void msg_tx_prepare(struct scmi_msg_payld *msg, struct scmi_xfer *xfer); 364 + u32 msg_read_header(struct scmi_msg_payld *msg); 365 + void msg_fetch_response(struct scmi_msg_payld *msg, size_t len, 366 + struct scmi_xfer *xfer); 367 + void msg_fetch_notification(struct scmi_msg_payld *msg, size_t len, 368 + size_t max_len, struct scmi_xfer *xfer); 369 + 442 370 void scmi_notification_instance_data_set(const struct scmi_handle *handle, 443 371 void *priv); 444 372 void *scmi_notification_instance_data_get(const struct scmi_handle *handle); 445 - 446 373 #endif /* _SCMI_COMMON_H */
drivers/firmware/arm_scmi/driver.c (+584, -104)
··· 21 21 #include <linux/io.h> 22 22 #include <linux/kernel.h> 23 23 #include <linux/ktime.h> 24 + #include <linux/hashtable.h> 24 25 #include <linux/list.h> 25 26 #include <linux/module.h> 26 27 #include <linux/of_address.h> ··· 68 67 /** 69 68 * struct scmi_xfers_info - Structure to manage transfer information 70 69 * 71 - * @xfer_block: Preallocated Message array 72 70 * @xfer_alloc_table: Bitmap table for allocated messages. 73 71 * Index of this bitmap table is also used for message 74 72 * sequence identifier. 75 73 * @xfer_lock: Protection for message allocation 74 + * @max_msg: Maximum number of messages that can be pending 75 + * @free_xfers: A free list for available to use xfers. It is initialized with 76 + * a number of xfers equal to the maximum allowed in-flight 77 + * messages. 78 + * @pending_xfers: An hashtable, indexed by msg_hdr.seq, used to keep all the 79 + * currently in-flight messages. 76 80 */ 77 81 struct scmi_xfers_info { 78 - struct scmi_xfer *xfer_block; 79 82 unsigned long *xfer_alloc_table; 80 83 spinlock_t xfer_lock; 84 + int max_msg; 85 + struct hlist_head free_xfers; 86 + DECLARE_HASHTABLE(pending_xfers, SCMI_PENDING_XFERS_HT_ORDER_SZ); 81 87 }; 82 88 83 89 /** ··· 180 172 return -EIO; 181 173 } 182 174 183 - /** 184 - * scmi_dump_header_dbg() - Helper to dump a message header. 185 - * 186 - * @dev: Device pointer corresponding to the SCMI entity 187 - * @hdr: pointer to header. 
188 - */ 189 - static inline void scmi_dump_header_dbg(struct device *dev, 190 - struct scmi_msg_hdr *hdr) 191 - { 192 - dev_dbg(dev, "Message ID: %x Sequence ID: %x Protocol: %x\n", 193 - hdr->id, hdr->seq, hdr->protocol_id); 194 - } 195 - 196 175 void scmi_notification_instance_data_set(const struct scmi_handle *handle, 197 176 void *priv) 198 177 { ··· 200 205 } 201 206 202 207 /** 208 + * scmi_xfer_token_set - Reserve and set new token for the xfer at hand 209 + * 210 + * @minfo: Pointer to Tx/Rx Message management info based on channel type 211 + * @xfer: The xfer to act upon 212 + * 213 + * Pick the next unused monotonically increasing token and set it into 214 + * xfer->hdr.seq: picking a monotonically increasing value avoids immediate 215 + * reuse of freshly completed or timed-out xfers, thus mitigating the risk 216 + * of incorrect association of a late and expired xfer with a live in-flight 217 + * transaction, both happening to re-use the same token identifier. 218 + * 219 + * Since platform is NOT required to answer our request in-order we should 220 + * account for a few rare but possible scenarios: 221 + * 222 + * - exactly 'next_token' may be NOT available so pick xfer_id >= next_token 223 + * using find_next_zero_bit() starting from candidate next_token bit 224 + * 225 + * - all tokens ahead upto (MSG_TOKEN_ID_MASK - 1) are used in-flight but we 226 + * are plenty of free tokens at start, so try a second pass using 227 + * find_next_zero_bit() and starting from 0. 228 + * 229 + * X = used in-flight 230 + * 231 + * Normal 232 + * ------ 233 + * 234 + * |- xfer_id picked 235 + * -----------+---------------------------------------------------------- 236 + * | | |X|X|X| | | | | | ... ... ... ... ... ... ... ... ... ... 
...|X|X| 237 + * ---------------------------------------------------------------------- 238 + * ^ 239 + * |- next_token 240 + * 241 + * Out-of-order pending at start 242 + * ----------------------------- 243 + * 244 + * |- xfer_id picked, last_token fixed 245 + * -----+---------------------------------------------------------------- 246 + * |X|X| | | | |X|X| ... ... ... ... ... ... ... ... ... ... ... ...|X| | 247 + * ---------------------------------------------------------------------- 248 + * ^ 249 + * |- next_token 250 + * 251 + * 252 + * Out-of-order pending at end 253 + * --------------------------- 254 + * 255 + * |- xfer_id picked, last_token fixed 256 + * -----+---------------------------------------------------------------- 257 + * |X|X| | | | |X|X| ... ... ... ... ... ... ... ... ... ... |X|X|X||X|X| 258 + * ---------------------------------------------------------------------- 259 + * ^ 260 + * |- next_token 261 + * 262 + * Context: Assumes to be called with @xfer_lock already acquired. 263 + * 264 + * Return: 0 on Success or error 265 + */ 266 + static int scmi_xfer_token_set(struct scmi_xfers_info *minfo, 267 + struct scmi_xfer *xfer) 268 + { 269 + unsigned long xfer_id, next_token; 270 + 271 + /* 272 + * Pick a candidate monotonic token in range [0, MSG_TOKEN_MAX - 1] 273 + * using the pre-allocated transfer_id as a base. 274 + * Note that the global transfer_id is shared across all message types 275 + * so there could be holes in the allocated set of monotonic sequence 276 + * numbers, but that is going to limit the effectiveness of the 277 + * mitigation only in very rare limit conditions. 
278 + */ 279 + next_token = (xfer->transfer_id & (MSG_TOKEN_MAX - 1)); 280 + 281 + /* Pick the next available xfer_id >= next_token */ 282 + xfer_id = find_next_zero_bit(minfo->xfer_alloc_table, 283 + MSG_TOKEN_MAX, next_token); 284 + if (xfer_id == MSG_TOKEN_MAX) { 285 + /* 286 + * After heavily out-of-order responses, there are no free 287 + * tokens ahead, but only at start of xfer_alloc_table so 288 + * try again from the beginning. 289 + */ 290 + xfer_id = find_next_zero_bit(minfo->xfer_alloc_table, 291 + MSG_TOKEN_MAX, 0); 292 + /* 293 + * Something is wrong if we got here since there can be a 294 + * maximum number of (MSG_TOKEN_MAX - 1) in-flight messages 295 + * but we have not found any free token [0, MSG_TOKEN_MAX - 1]. 296 + */ 297 + if (WARN_ON_ONCE(xfer_id == MSG_TOKEN_MAX)) 298 + return -ENOMEM; 299 + } 300 + 301 + /* Update +/- last_token accordingly if we skipped some hole */ 302 + if (xfer_id != next_token) 303 + atomic_add((int)(xfer_id - next_token), &transfer_last_id); 304 + 305 + /* Set in-flight */ 306 + set_bit(xfer_id, minfo->xfer_alloc_table); 307 + xfer->hdr.seq = (u16)xfer_id; 308 + 309 + return 0; 310 + } 311 + 312 + /** 313 + * scmi_xfer_token_clear - Release the token 314 + * 315 + * @minfo: Pointer to Tx/Rx Message management info based on channel type 316 + * @xfer: The xfer to act upon 317 + */ 318 + static inline void scmi_xfer_token_clear(struct scmi_xfers_info *minfo, 319 + struct scmi_xfer *xfer) 320 + { 321 + clear_bit(xfer->hdr.seq, minfo->xfer_alloc_table); 322 + } 323 + 324 + /** 203 325 * scmi_xfer_get() - Allocate one message 204 326 * 205 327 * @handle: Pointer to SCMI entity handle 206 328 * @minfo: Pointer to Tx/Rx Message management info based on channel type 329 + * @set_pending: If true a monotonic token is picked and the xfer is added to 330 + * the pending hash table. 
207 331 * 208 332 * Helper function which is used by various message functions that are 209 333 * exposed to clients of this driver for allocating a message traffic event. 210 334 * 211 - * This function can sleep depending on pending requests already in the system 212 - * for the SCMI entity. Further, this also holds a spinlock to maintain 213 - * integrity of internal data structures. 335 + * Picks an xfer from the free list @free_xfers (if any available) and, if 336 + * required, sets a monotonically increasing token and stores the inflight xfer 337 + * into the @pending_xfers hashtable for later retrieval. 338 + * 339 + * The successfully initialized xfer is refcounted. 340 + * 341 + * Context: Holds @xfer_lock while manipulating @xfer_alloc_table and 342 + * @free_xfers. 214 343 * 215 344 * Return: 0 if all went fine, else corresponding error. 216 345 */ 217 346 static struct scmi_xfer *scmi_xfer_get(const struct scmi_handle *handle, 218 - struct scmi_xfers_info *minfo) 347 + struct scmi_xfers_info *minfo, 348 + bool set_pending) 219 349 { 220 - u16 xfer_id; 350 + int ret; 351 + unsigned long flags; 221 352 struct scmi_xfer *xfer; 222 - unsigned long flags, bit_pos; 223 - struct scmi_info *info = handle_to_scmi_info(handle); 224 353 225 - /* Keep the locked section as small as possible */ 226 354 spin_lock_irqsave(&minfo->xfer_lock, flags); 227 - bit_pos = find_first_zero_bit(minfo->xfer_alloc_table, 228 - info->desc->max_msg); 229 - if (bit_pos == info->desc->max_msg) { 355 + if (hlist_empty(&minfo->free_xfers)) { 230 356 spin_unlock_irqrestore(&minfo->xfer_lock, flags); 231 357 return ERR_PTR(-ENOMEM); 232 358 } 233 - set_bit(bit_pos, minfo->xfer_alloc_table); 234 - spin_unlock_irqrestore(&minfo->xfer_lock, flags); 235 359 236 - xfer_id = bit_pos; 360 + /* grab an xfer from the free_list */ 361 + xfer = hlist_entry(minfo->free_xfers.first, struct scmi_xfer, node); 362 + hlist_del_init(&xfer->node); 237 363 238 - xfer = &minfo->xfer_block[xfer_id]; 239 - 
xfer->hdr.seq = xfer_id; 364 + /* 365 + * Allocate transfer_id early so that can be used also as base for 366 + * monotonic sequence number generation if needed. 367 + */ 240 368 xfer->transfer_id = atomic_inc_return(&transfer_last_id); 369 + 370 + if (set_pending) { 371 + /* Pick and set monotonic token */ 372 + ret = scmi_xfer_token_set(minfo, xfer); 373 + if (!ret) { 374 + hash_add(minfo->pending_xfers, &xfer->node, 375 + xfer->hdr.seq); 376 + xfer->pending = true; 377 + } else { 378 + dev_err(handle->dev, 379 + "Failed to get monotonic token %d\n", ret); 380 + hlist_add_head(&xfer->node, &minfo->free_xfers); 381 + xfer = ERR_PTR(ret); 382 + } 383 + } 384 + 385 + if (!IS_ERR(xfer)) { 386 + refcount_set(&xfer->users, 1); 387 + atomic_set(&xfer->busy, SCMI_XFER_FREE); 388 + } 389 + spin_unlock_irqrestore(&minfo->xfer_lock, flags); 241 390 242 391 return xfer; 243 392 } ··· 392 253 * @minfo: Pointer to Tx/Rx Message management info based on channel type 393 254 * @xfer: message that was reserved by scmi_xfer_get 394 255 * 256 + * After refcount check, possibly release an xfer, clearing the token slot, 257 + * removing xfer from @pending_xfers and putting it back into free_xfers. 258 + * 395 259 * This holds a spinlock to maintain integrity of internal data structures. 396 260 */ 397 261 static void ··· 402 260 { 403 261 unsigned long flags; 404 262 405 - /* 406 - * Keep the locked section as small as possible 407 - * NOTE: we might escape with smp_mb and no lock here.. 408 - * but just be conservative and symmetric. 
409 - */ 410 263 spin_lock_irqsave(&minfo->xfer_lock, flags); 411 - clear_bit(xfer->hdr.seq, minfo->xfer_alloc_table); 264 + if (refcount_dec_and_test(&xfer->users)) { 265 + if (xfer->pending) { 266 + scmi_xfer_token_clear(minfo, xfer); 267 + hash_del(&xfer->node); 268 + xfer->pending = false; 269 + } 270 + hlist_add_head(&xfer->node, &minfo->free_xfers); 271 + } 412 272 spin_unlock_irqrestore(&minfo->xfer_lock, flags); 413 273 } 414 274 415 - static void scmi_handle_notification(struct scmi_chan_info *cinfo, u32 msg_hdr) 275 + /** 276 + * scmi_xfer_lookup_unlocked - Helper to lookup an xfer_id 277 + * 278 + * @minfo: Pointer to Tx/Rx Message management info based on channel type 279 + * @xfer_id: Token ID to lookup in @pending_xfers 280 + * 281 + * Refcounting is untouched. 282 + * 283 + * Context: Assumes to be called with @xfer_lock already acquired. 284 + * 285 + * Return: A valid xfer on Success or error otherwise 286 + */ 287 + static struct scmi_xfer * 288 + scmi_xfer_lookup_unlocked(struct scmi_xfers_info *minfo, u16 xfer_id) 289 + { 290 + struct scmi_xfer *xfer = NULL; 291 + 292 + if (test_bit(xfer_id, minfo->xfer_alloc_table)) 293 + xfer = XFER_FIND(minfo->pending_xfers, xfer_id); 294 + 295 + return xfer ?: ERR_PTR(-EINVAL); 296 + } 297 + 298 + /** 299 + * scmi_msg_response_validate - Validate message type against state of related 300 + * xfer 301 + * 302 + * @cinfo: A reference to the channel descriptor. 
303 + * @msg_type: Message type to check 304 + * @xfer: A reference to the xfer to validate against @msg_type 305 + * 306 + * This function checks if @msg_type is congruent with the current state of 307 + * a pending @xfer; if an asynchronous delayed response is received before the 308 + * related synchronous response (Out-of-Order Delayed Response) the missing 309 + * synchronous response is assumed to be OK and completed, carrying on with the 310 + * Delayed Response: this is done to address the case in which the underlying 311 + * SCMI transport can deliver such out-of-order responses. 312 + * 313 + * Context: Assumes to be called with xfer->lock already acquired. 314 + * 315 + * Return: 0 on Success, error otherwise 316 + */ 317 + static inline int scmi_msg_response_validate(struct scmi_chan_info *cinfo, 318 + u8 msg_type, 319 + struct scmi_xfer *xfer) 320 + { 321 + /* 322 + * Even if a response was indeed expected on this slot at this point, 323 + * a buggy platform could wrongly reply feeding us an unexpected 324 + * delayed response we're not prepared to handle: bail-out safely 325 + * blaming firmware. 326 + */ 327 + if (msg_type == MSG_TYPE_DELAYED_RESP && !xfer->async_done) { 328 + dev_err(cinfo->dev, 329 + "Delayed Response for %d not expected! Buggy F/W ?\n", 330 + xfer->hdr.seq); 331 + return -EINVAL; 332 + } 333 + 334 + switch (xfer->state) { 335 + case SCMI_XFER_SENT_OK: 336 + if (msg_type == MSG_TYPE_DELAYED_RESP) { 337 + /* 338 + * Delayed Response expected but delivered earlier. 339 + * Assume message RESPONSE was OK and skip state. 
340 + */ 341 + xfer->hdr.status = SCMI_SUCCESS; 342 + xfer->state = SCMI_XFER_RESP_OK; 343 + complete(&xfer->done); 344 + dev_warn(cinfo->dev, 345 + "Received valid OoO Delayed Response for %d\n", 346 + xfer->hdr.seq); 347 + } 348 + break; 349 + case SCMI_XFER_RESP_OK: 350 + if (msg_type != MSG_TYPE_DELAYED_RESP) 351 + return -EINVAL; 352 + break; 353 + case SCMI_XFER_DRESP_OK: 354 + /* No further message expected once in SCMI_XFER_DRESP_OK */ 355 + return -EINVAL; 356 + } 357 + 358 + return 0; 359 + } 360 + 361 + /** 362 + * scmi_xfer_state_update - Update xfer state 363 + * 364 + * @xfer: A reference to the xfer to update 365 + * @msg_type: Type of message being processed. 366 + * 367 + * Note that this message is assumed to have been already successfully validated 368 + * by @scmi_msg_response_validate(), so here we just update the state. 369 + * 370 + * Context: Assumes to be called on an xfer exclusively acquired using the 371 + * busy flag. 372 + */ 373 + static inline void scmi_xfer_state_update(struct scmi_xfer *xfer, u8 msg_type) 374 + { 375 + xfer->hdr.type = msg_type; 376 + 377 + /* Unknown command types were already discarded earlier */ 378 + if (xfer->hdr.type == MSG_TYPE_COMMAND) 379 + xfer->state = SCMI_XFER_RESP_OK; 380 + else 381 + xfer->state = SCMI_XFER_DRESP_OK; 382 + } 383 + 384 + static bool scmi_xfer_acquired(struct scmi_xfer *xfer) 385 + { 386 + int ret; 387 + 388 + ret = atomic_cmpxchg(&xfer->busy, SCMI_XFER_FREE, SCMI_XFER_BUSY); 389 + 390 + return ret == SCMI_XFER_FREE; 391 + } 392 + 393 + /** 394 + * scmi_xfer_command_acquire - Helper to lookup and acquire a command xfer 395 + * 396 + * @cinfo: A reference to the channel descriptor. 397 + * @msg_hdr: A message header to use as lookup key 398 + * 399 + * When a valid xfer is found for the sequence number embedded in the provided 400 + * msg_hdr, reference counting is properly updated and exclusive access to this 401 + * xfer is granted till released with @scmi_xfer_command_release. 
402 + * 403 + * Return: A valid @xfer on Success or error otherwise. 404 + */ 405 + static inline struct scmi_xfer * 406 + scmi_xfer_command_acquire(struct scmi_chan_info *cinfo, u32 msg_hdr) 407 + { 408 + int ret; 409 + unsigned long flags; 410 + struct scmi_xfer *xfer; 411 + struct scmi_info *info = handle_to_scmi_info(cinfo->handle); 412 + struct scmi_xfers_info *minfo = &info->tx_minfo; 413 + u8 msg_type = MSG_XTRACT_TYPE(msg_hdr); 414 + u16 xfer_id = MSG_XTRACT_TOKEN(msg_hdr); 415 + 416 + /* Are we even expecting this? */ 417 + spin_lock_irqsave(&minfo->xfer_lock, flags); 418 + xfer = scmi_xfer_lookup_unlocked(minfo, xfer_id); 419 + if (IS_ERR(xfer)) { 420 + dev_err(cinfo->dev, 421 + "Message for %d type %d is not expected!\n", 422 + xfer_id, msg_type); 423 + spin_unlock_irqrestore(&minfo->xfer_lock, flags); 424 + return xfer; 425 + } 426 + refcount_inc(&xfer->users); 427 + spin_unlock_irqrestore(&minfo->xfer_lock, flags); 428 + 429 + spin_lock_irqsave(&xfer->lock, flags); 430 + ret = scmi_msg_response_validate(cinfo, msg_type, xfer); 431 + /* 432 + * If a pending xfer was found which was also in a congruent state with 433 + * the received message, acquire exclusive access to it setting the busy 434 + * flag. 435 + * Spins only on the rare limit condition of concurrent reception of 436 + * RESP and DRESP for the same xfer. 
437 + */ 438 + if (!ret) { 439 + spin_until_cond(scmi_xfer_acquired(xfer)); 440 + scmi_xfer_state_update(xfer, msg_type); 441 + } 442 + spin_unlock_irqrestore(&xfer->lock, flags); 443 + 444 + if (ret) { 445 + dev_err(cinfo->dev, 446 + "Invalid message type:%d for %d - HDR:0x%X state:%d\n", 447 + msg_type, xfer_id, msg_hdr, xfer->state); 448 + /* On error the refcount incremented above has to be dropped */ 449 + __scmi_xfer_put(minfo, xfer); 450 + xfer = ERR_PTR(-EINVAL); 451 + } 452 + 453 + return xfer; 454 + } 455 + 456 + static inline void scmi_xfer_command_release(struct scmi_info *info, 457 + struct scmi_xfer *xfer) 458 + { 459 + atomic_set(&xfer->busy, SCMI_XFER_FREE); 460 + __scmi_xfer_put(&info->tx_minfo, xfer); 461 + } 462 + 463 + static inline void scmi_clear_channel(struct scmi_info *info, 464 + struct scmi_chan_info *cinfo) 465 + { 466 + if (info->desc->ops->clear_channel) 467 + info->desc->ops->clear_channel(cinfo); 468 + } 469 + 470 + static void scmi_handle_notification(struct scmi_chan_info *cinfo, 471 + u32 msg_hdr, void *priv) 416 472 { 417 473 struct scmi_xfer *xfer; 418 474 struct device *dev = cinfo->dev; ··· 619 279 ktime_t ts; 620 280 621 281 ts = ktime_get_boottime(); 622 - xfer = scmi_xfer_get(cinfo->handle, minfo); 282 + xfer = scmi_xfer_get(cinfo->handle, minfo, false); 623 283 if (IS_ERR(xfer)) { 624 284 dev_err(dev, "failed to get free message slot (%ld)\n", 625 285 PTR_ERR(xfer)); 626 - info->desc->ops->clear_channel(cinfo); 286 + scmi_clear_channel(info, cinfo); 627 287 return; 628 288 } 629 289 630 290 unpack_scmi_header(msg_hdr, &xfer->hdr); 631 - scmi_dump_header_dbg(dev, &xfer->hdr); 291 + if (priv) 292 + xfer->priv = priv; 632 293 info->desc->ops->fetch_notification(cinfo, info->desc->max_msg_size, 633 294 xfer); 634 295 scmi_notify(cinfo->handle, xfer->hdr.protocol_id, ··· 641 300 642 301 __scmi_xfer_put(minfo, xfer); 643 302 644 - info->desc->ops->clear_channel(cinfo); 303 + scmi_clear_channel(info, cinfo); 645 304 } 646 305 647 
306 static void scmi_handle_response(struct scmi_chan_info *cinfo, 648 - u16 xfer_id, u8 msg_type) 307 + u32 msg_hdr, void *priv) 649 308 { 650 309 struct scmi_xfer *xfer; 651 - struct device *dev = cinfo->dev; 652 310 struct scmi_info *info = handle_to_scmi_info(cinfo->handle); 653 - struct scmi_xfers_info *minfo = &info->tx_minfo; 654 311 655 - /* Are we even expecting this? */ 656 - if (!test_bit(xfer_id, minfo->xfer_alloc_table)) { 657 - dev_err(dev, "message for %d is not expected!\n", xfer_id); 658 - info->desc->ops->clear_channel(cinfo); 659 - return; 660 - } 661 - 662 - xfer = &minfo->xfer_block[xfer_id]; 663 - /* 664 - * Even if a response was indeed expected on this slot at this point, 665 - * a buggy platform could wrongly reply feeding us an unexpected 666 - * delayed response we're not prepared to handle: bail-out safely 667 - * blaming firmware. 668 - */ 669 - if (unlikely(msg_type == MSG_TYPE_DELAYED_RESP && !xfer->async_done)) { 670 - dev_err(dev, 671 - "Delayed Response for %d not expected! 
Buggy F/W ?\n", 672 - xfer_id); 673 - info->desc->ops->clear_channel(cinfo); 674 - /* It was unexpected, so nobody will clear the xfer if not us */ 675 - __scmi_xfer_put(minfo, xfer); 312 + xfer = scmi_xfer_command_acquire(cinfo, msg_hdr); 313 + if (IS_ERR(xfer)) { 314 + scmi_clear_channel(info, cinfo); 676 315 return; 677 316 } 678 317 679 318 /* rx.len could be shrunk in the sync do_xfer, so reset to maxsz */ 680 - if (msg_type == MSG_TYPE_DELAYED_RESP) 319 + if (xfer->hdr.type == MSG_TYPE_DELAYED_RESP) 681 320 xfer->rx.len = info->desc->max_msg_size; 682 321 683 - scmi_dump_header_dbg(dev, &xfer->hdr); 684 - 322 + if (priv) 323 + xfer->priv = priv; 685 324 info->desc->ops->fetch_response(cinfo, xfer); 686 325 687 326 trace_scmi_rx_done(xfer->transfer_id, xfer->hdr.id, 688 327 xfer->hdr.protocol_id, xfer->hdr.seq, 689 - msg_type); 328 + xfer->hdr.type); 690 329 691 - if (msg_type == MSG_TYPE_DELAYED_RESP) { 692 - info->desc->ops->clear_channel(cinfo); 330 + if (xfer->hdr.type == MSG_TYPE_DELAYED_RESP) { 331 + scmi_clear_channel(info, cinfo); 693 332 complete(xfer->async_done); 694 333 } else { 695 334 complete(&xfer->done); 696 335 } 336 + 337 + scmi_xfer_command_release(info, xfer); 697 338 } 698 339 699 340 /** ··· 683 360 * 684 361 * @cinfo: SCMI channel info 685 362 * @msg_hdr: Message header 363 + * @priv: Transport specific private data. 686 364 * 687 365 * Processes one received message to appropriate transfer information and 688 366 * signals completion of the transfer. ··· 691 367 * NOTE: This function will be invoked in IRQ context, hence should be 692 368 * as optimal as possible. 
693 369 */ 694 - void scmi_rx_callback(struct scmi_chan_info *cinfo, u32 msg_hdr) 370 + void scmi_rx_callback(struct scmi_chan_info *cinfo, u32 msg_hdr, void *priv) 695 371 { 696 - u16 xfer_id = MSG_XTRACT_TOKEN(msg_hdr); 697 372 u8 msg_type = MSG_XTRACT_TYPE(msg_hdr); 698 373 699 374 switch (msg_type) { 700 375 case MSG_TYPE_NOTIFICATION: 701 - scmi_handle_notification(cinfo, msg_hdr); 376 + scmi_handle_notification(cinfo, msg_hdr, priv); 702 377 break; 703 378 case MSG_TYPE_COMMAND: 704 379 case MSG_TYPE_DELAYED_RESP: 705 - scmi_handle_response(cinfo, xfer_id, msg_type); 380 + scmi_handle_response(cinfo, msg_hdr, priv); 706 381 break; 707 382 default: 708 383 WARN_ONCE(1, "received unknown msg_type:%d\n", msg_type); ··· 713 390 * xfer_put() - Release a transmit message 714 391 * 715 392 * @ph: Pointer to SCMI protocol handle 716 - * @xfer: message that was reserved by scmi_xfer_get 393 + * @xfer: message that was reserved by xfer_get_init 717 394 */ 718 395 static void xfer_put(const struct scmi_protocol_handle *ph, 719 396 struct scmi_xfer *xfer) ··· 731 408 { 732 409 struct scmi_info *info = handle_to_scmi_info(cinfo->handle); 733 410 411 + /* 412 + * Poll also on xfer->done so that polling can be forcibly terminated 413 + * in case of out-of-order receptions of delayed responses 414 + */ 734 415 return info->desc->ops->poll_done(cinfo, xfer) || 416 + try_wait_for_completion(&xfer->done) || 735 417 ktime_after(ktime_get(), stop); 736 418 } 737 419 ··· 760 432 struct device *dev = info->dev; 761 433 struct scmi_chan_info *cinfo; 762 434 435 + if (xfer->hdr.poll_completion && !info->desc->ops->poll_done) { 436 + dev_warn_once(dev, 437 + "Polling mode is not supported by transport.\n"); 438 + return -EINVAL; 439 + } 440 + 763 441 /* 764 442 * Initialise protocol id now from protocol handle to avoid it being 765 443 * overridden by mistake (or malice) by the protocol code mangling with ··· 782 448 xfer->hdr.protocol_id, xfer->hdr.seq, 783 449 
xfer->hdr.poll_completion); 784 450 451 + xfer->state = SCMI_XFER_SENT_OK; 452 + /* 453 + * Even though spinlocking is not needed here since no race is possible 454 + * on xfer->state due to the monotonically increasing tokens allocation, 455 + * we must anyway ensure xfer->state initialization is not re-ordered 456 + * after the .send_message() to be sure that on the RX path an early 457 + * ISR calling scmi_rx_callback() cannot see an old stale xfer->state. 458 + */ 459 + smp_mb(); 460 + 785 461 ret = info->desc->ops->send_message(cinfo, xfer); 786 462 if (ret < 0) { 787 463 dev_dbg(dev, "Failed to send message %d\n", ret); ··· 802 458 ktime_t stop = ktime_add_ns(ktime_get(), SCMI_MAX_POLL_TO_NS); 803 459 804 460 spin_until_cond(scmi_xfer_done_no_timeout(cinfo, xfer, stop)); 461 + if (ktime_before(ktime_get(), stop)) { 462 + unsigned long flags; 805 463 806 - if (ktime_before(ktime_get(), stop)) 807 - info->desc->ops->fetch_response(cinfo, xfer); 808 - else 464 + /* 465 + * Do not fetch_response if an out-of-order delayed 466 + * response is being processed. 467 + */ 468 + spin_lock_irqsave(&xfer->lock, flags); 469 + if (xfer->state == SCMI_XFER_SENT_OK) { 470 + info->desc->ops->fetch_response(cinfo, xfer); 471 + xfer->state = SCMI_XFER_RESP_OK; 472 + } 473 + spin_unlock_irqrestore(&xfer->lock, flags); 474 + } else { 809 475 ret = -ETIMEDOUT; 476 + } 810 477 } else { 811 478 /* And we wait for the response. */ 812 479 timeout = msecs_to_jiffies(info->desc->max_rx_timeout_ms); ··· 912 557 tx_size > info->desc->max_msg_size) 913 558 return -ERANGE; 914 559 915 - xfer = scmi_xfer_get(pi->handle, minfo); 560 + xfer = scmi_xfer_get(pi->handle, minfo, true); 916 561 if (IS_ERR(xfer)) { 917 562 ret = PTR_ERR(xfer); 918 563 dev_err(dev, "failed to get free message slot(%d)\n", ret); ··· 921 566 922 567 xfer->tx.len = tx_size; 923 568 xfer->rx.len = rx_size ? 
: info->desc->max_msg_size; 569 + xfer->hdr.type = MSG_TYPE_COMMAND; 924 570 xfer->hdr.id = msg_id; 925 571 xfer->hdr.poll_completion = false; 926 572 ··· 1382 1026 const struct scmi_desc *desc = sinfo->desc; 1383 1027 1384 1028 /* Pre-allocated messages, no more than what hdr.seq can support */ 1385 - if (WARN_ON(!desc->max_msg || desc->max_msg > MSG_TOKEN_MAX)) { 1029 + if (WARN_ON(!info->max_msg || info->max_msg > MSG_TOKEN_MAX)) { 1386 1030 dev_err(dev, 1387 1031 "Invalid maximum messages %d, not in range [1 - %lu]\n", 1388 - desc->max_msg, MSG_TOKEN_MAX); 1032 + info->max_msg, MSG_TOKEN_MAX); 1389 1033 return -EINVAL; 1390 1034 } 1391 1035 1392 - info->xfer_block = devm_kcalloc(dev, desc->max_msg, 1393 - sizeof(*info->xfer_block), GFP_KERNEL); 1394 - if (!info->xfer_block) 1395 - return -ENOMEM; 1036 + hash_init(info->pending_xfers); 1396 1037 1397 - info->xfer_alloc_table = devm_kcalloc(dev, BITS_TO_LONGS(desc->max_msg), 1038 + /* Allocate a bitmask sized to hold MSG_TOKEN_MAX tokens */ 1039 + info->xfer_alloc_table = devm_kcalloc(dev, BITS_TO_LONGS(MSG_TOKEN_MAX), 1398 1040 sizeof(long), GFP_KERNEL); 1399 1041 if (!info->xfer_alloc_table) 1400 1042 return -ENOMEM; 1401 1043 1402 - /* Pre-initialize the buffer pointer to pre-allocated buffers */ 1403 - for (i = 0, xfer = info->xfer_block; i < desc->max_msg; i++, xfer++) { 1044 + /* 1045 + * Preallocate a number of xfers equal to max inflight messages, 1046 + * pre-initialize the buffer pointer to pre-allocated buffers and 1047 + * attach all of them to the free list 1048 + */ 1049 + INIT_HLIST_HEAD(&info->free_xfers); 1050 + for (i = 0; i < info->max_msg; i++) { 1051 + xfer = devm_kzalloc(dev, sizeof(*xfer), GFP_KERNEL); 1052 + if (!xfer) 1053 + return -ENOMEM; 1054 + 1404 1055 xfer->rx.buf = devm_kcalloc(dev, sizeof(u8), desc->max_msg_size, 1405 1056 GFP_KERNEL); 1406 1057 if (!xfer->rx.buf) ··· 1415 1052 1416 1053 xfer->tx.buf = xfer->rx.buf; 1417 1054 init_completion(&xfer->done); 1055 + 
spin_lock_init(&xfer->lock); 1056 + 1057 + /* Add initialized xfer to the free list */ 1058 + hlist_add_head(&xfer->node, &info->free_xfers); 1418 1059 } 1419 1060 1420 1061 spin_lock_init(&info->xfer_lock); ··· 1426 1059 return 0; 1427 1060 } 1428 1061 1062 + static int scmi_channels_max_msg_configure(struct scmi_info *sinfo) 1063 + { 1064 + const struct scmi_desc *desc = sinfo->desc; 1065 + 1066 + if (!desc->ops->get_max_msg) { 1067 + sinfo->tx_minfo.max_msg = desc->max_msg; 1068 + sinfo->rx_minfo.max_msg = desc->max_msg; 1069 + } else { 1070 + struct scmi_chan_info *base_cinfo; 1071 + 1072 + base_cinfo = idr_find(&sinfo->tx_idr, SCMI_PROTOCOL_BASE); 1073 + if (!base_cinfo) 1074 + return -EINVAL; 1075 + sinfo->tx_minfo.max_msg = desc->ops->get_max_msg(base_cinfo); 1076 + 1077 + /* RX channel is optional so can be skipped */ 1078 + base_cinfo = idr_find(&sinfo->rx_idr, SCMI_PROTOCOL_BASE); 1079 + if (base_cinfo) 1080 + sinfo->rx_minfo.max_msg = 1081 + desc->ops->get_max_msg(base_cinfo); 1082 + } 1083 + 1084 + return 0; 1085 + } 1086 + 1429 1087 static int scmi_xfer_info_init(struct scmi_info *sinfo) 1430 1088 { 1431 - int ret = __scmi_xfer_info_init(sinfo, &sinfo->tx_minfo); 1089 + int ret; 1432 1090 1091 + ret = scmi_channels_max_msg_configure(sinfo); 1092 + if (ret) 1093 + return ret; 1094 + 1095 + ret = __scmi_xfer_info_init(sinfo, &sinfo->tx_minfo); 1433 1096 if (!ret && idr_find(&sinfo->rx_idr, SCMI_PROTOCOL_BASE)) 1434 1097 ret = __scmi_xfer_info_init(sinfo, &sinfo->rx_minfo); 1435 1098 ··· 1787 1390 mutex_unlock(&scmi_requested_devices_mtx); 1788 1391 } 1789 1392 1393 + static int scmi_cleanup_txrx_channels(struct scmi_info *info) 1394 + { 1395 + int ret; 1396 + struct idr *idr = &info->tx_idr; 1397 + 1398 + ret = idr_for_each(idr, info->desc->ops->chan_free, idr); 1399 + idr_destroy(&info->tx_idr); 1400 + 1401 + idr = &info->rx_idr; 1402 + ret = idr_for_each(idr, info->desc->ops->chan_free, idr); 1403 + idr_destroy(&info->rx_idr); 1404 + 1405 + return ret; 
1406 + } 1407 + 1790 1408 static int scmi_probe(struct platform_device *pdev) 1791 1409 { 1792 1410 int ret; ··· 1836 1424 handle->devm_protocol_get = scmi_devm_protocol_get; 1837 1425 handle->devm_protocol_put = scmi_devm_protocol_put; 1838 1426 1427 + if (desc->ops->link_supplier) { 1428 + ret = desc->ops->link_supplier(dev); 1429 + if (ret) 1430 + return ret; 1431 + } 1432 + 1839 1433 ret = scmi_txrx_setup(info, dev, SCMI_PROTOCOL_BASE); 1840 1434 if (ret) 1841 1435 return ret; 1842 1436 1843 1437 ret = scmi_xfer_info_init(info); 1844 1438 if (ret) 1845 - return ret; 1439 + goto clear_txrx_setup; 1846 1440 1847 1441 if (scmi_notification_init(handle)) 1848 1442 dev_err(dev, "SCMI Notifications NOT available.\n"); ··· 1861 1443 ret = scmi_protocol_acquire(handle, SCMI_PROTOCOL_BASE); 1862 1444 if (ret) { 1863 1445 dev_err(dev, "unable to communicate with SCMI\n"); 1864 - return ret; 1446 + goto notification_exit; 1865 1447 } 1866 1448 1867 1449 mutex_lock(&scmi_list_mutex); ··· 1900 1482 } 1901 1483 1902 1484 return 0; 1485 + 1486 + notification_exit: 1487 + scmi_notification_exit(&info->handle); 1488 + clear_txrx_setup: 1489 + scmi_cleanup_txrx_channels(info); 1490 + return ret; 1903 1491 } 1904 1492 1905 1493 void scmi_free_channel(struct scmi_chan_info *cinfo, struct idr *idr, int id) ··· 1917 1493 { 1918 1494 int ret = 0, id; 1919 1495 struct scmi_info *info = platform_get_drvdata(pdev); 1920 - struct idr *idr = &info->tx_idr; 1921 1496 struct device_node *child; 1922 1497 1923 1498 mutex_lock(&scmi_list_mutex); ··· 1940 1517 idr_destroy(&info->active_protocols); 1941 1518 1942 1519 /* Safe to free channels since no more users */ 1943 - ret = idr_for_each(idr, info->desc->ops->chan_free, idr); 1944 - idr_destroy(&info->tx_idr); 1945 - 1946 - idr = &info->rx_idr; 1947 - ret = idr_for_each(idr, info->desc->ops->chan_free, idr); 1948 - idr_destroy(&info->rx_idr); 1949 - 1950 - return ret; 1520 + return scmi_cleanup_txrx_channels(info); 1951 1521 } 1952 1522 1953 
1523 static ssize_t protocol_version_show(struct device *dev, ··· 1991 1575 1992 1576 /* Each compatible listed below must have descriptor associated with it */ 1993 1577 static const struct of_device_id scmi_of_match[] = { 1994 - #ifdef CONFIG_MAILBOX 1578 + #ifdef CONFIG_ARM_SCMI_TRANSPORT_MAILBOX 1995 1579 { .compatible = "arm,scmi", .data = &scmi_mailbox_desc }, 1996 1580 #endif 1997 - #ifdef CONFIG_HAVE_ARM_SMCCC_DISCOVERY 1581 + #ifdef CONFIG_ARM_SCMI_TRANSPORT_SMC 1998 1582 { .compatible = "arm,scmi-smc", .data = &scmi_smc_desc}, 1583 + #endif 1584 + #ifdef CONFIG_ARM_SCMI_TRANSPORT_VIRTIO 1585 + { .compatible = "arm,scmi-virtio", .data = &scmi_virtio_desc}, 1999 1586 #endif 2000 1587 { /* Sentinel */ }, 2001 1588 }; ··· 2015 1596 .remove = scmi_remove, 2016 1597 }; 2017 1598 1599 + /** 1600 + * __scmi_transports_setup - Common helper to call transport-specific 1601 + * .init/.exit code if provided. 1602 + * 1603 + * @init: A flag to distinguish between init and exit. 1604 + * 1605 + * Note that, if provided, we invoke .init/.exit functions for all the 1606 + * transports currently compiled in. 1607 + * 1608 + * Return: 0 on Success. 
1609 + */ 1610 + static inline int __scmi_transports_setup(bool init) 1611 + { 1612 + int ret = 0; 1613 + const struct of_device_id *trans; 1614 + 1615 + for (trans = scmi_of_match; trans->data; trans++) { 1616 + const struct scmi_desc *tdesc = trans->data; 1617 + 1618 + if ((init && !tdesc->transport_init) || 1619 + (!init && !tdesc->transport_exit)) 1620 + continue; 1621 + 1622 + if (init) 1623 + ret = tdesc->transport_init(); 1624 + else 1625 + tdesc->transport_exit(); 1626 + 1627 + if (ret) { 1628 + pr_err("SCMI transport %s FAILED initialization!\n", 1629 + trans->compatible); 1630 + break; 1631 + } 1632 + } 1633 + 1634 + return ret; 1635 + } 1636 + 1637 + static int __init scmi_transports_init(void) 1638 + { 1639 + return __scmi_transports_setup(true); 1640 + } 1641 + 1642 + static void __exit scmi_transports_exit(void) 1643 + { 1644 + __scmi_transports_setup(false); 1645 + } 1646 + 2018 1647 static int __init scmi_driver_init(void) 2019 1648 { 1649 + int ret; 1650 + 1651 + /* Bail out if no SCMI transport was configured */ 1652 + if (WARN_ON(!IS_ENABLED(CONFIG_ARM_SCMI_HAVE_TRANSPORT))) 1653 + return -EINVAL; 1654 + 2020 1655 scmi_bus_init(); 1656 + 1657 + /* Initialize any compiled-in transport which provided an init/exit */ 1658 + ret = scmi_transports_init(); 1659 + if (ret) 1660 + return ret; 2021 1661 2022 1662 scmi_base_register(); 2023 1663 ··· 2105 1627 scmi_system_unregister(); 2106 1628 2107 1629 scmi_bus_exit(); 1630 + 1631 + scmi_transports_exit(); 2108 1632 2109 1633 platform_driver_unregister(&scmi_driver); 2110 1634 }
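The driver.c changes above replace the fixed xfer array with a free list plus a hashtable of pending transfers, and size the token bitmap to the full MSG_TOKEN_MAX space so sequence numbers advance monotonically and a freed token is reused as late as possible, which lets the RX path spot stale or out-of-order responses. A minimal user-space sketch of that allocation policy (the `token_pool`/`token_alloc` names are illustrative, not the kernel's):

```c
#include <assert.h>
#include <stdint.h>

#define TOKEN_MAX 1024 /* SCMI sequence numbers are 10 bits wide */

/*
 * Illustrative model of monotonic token allocation: search for a free
 * token starting just past the last one handed out, wrapping around,
 * so freed tokens are recycled as late as possible.
 */
struct token_pool {
	uint8_t used[TOKEN_MAX];
	int next; /* cursor: one past the last allocated token */
};

static int token_alloc(struct token_pool *p)
{
	for (int i = 0; i < TOKEN_MAX; i++) {
		int t = (p->next + i) % TOKEN_MAX;

		if (!p->used[t]) {
			p->used[t] = 1;
			p->next = (t + 1) % TOKEN_MAX;
			return t;
		}
	}
	return -1; /* all tokens in flight */
}

static void token_free(struct token_pool *p, int t)
{
	p->used[t] = 0;
}
```

With this policy, freeing token 0 while the cursor sits at 2 does not hand 0 back out immediately; the next allocation returns 2.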
+1 -1
drivers/firmware/arm_scmi/mailbox.c
··· 43 43 { 44 44 struct scmi_mailbox *smbox = client_to_scmi_mailbox(cl); 45 45 46 - scmi_rx_callback(smbox->cinfo, shmem_read_header(smbox->shmem)); 46 + scmi_rx_callback(smbox->cinfo, shmem_read_header(smbox->shmem), NULL); 47 47 } 48 48 49 49 static bool mailbox_chan_available(struct device *dev, int idx)
+111
drivers/firmware/arm_scmi/msg.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * For transports using message passing. 4 + * 5 + * Derived from shm.c. 6 + * 7 + * Copyright (C) 2019-2021 ARM Ltd. 8 + * Copyright (C) 2020-2021 OpenSynergy GmbH 9 + */ 10 + 11 + #include <linux/types.h> 12 + 13 + #include "common.h" 14 + 15 + /* 16 + * struct scmi_msg_payld - Transport SDU layout 17 + * 18 + * The SCMI specification requires all parameters, message headers, return 19 + * arguments or any protocol data to be expressed in little endian format only. 20 + */ 21 + struct scmi_msg_payld { 22 + __le32 msg_header; 23 + __le32 msg_payload[]; 24 + }; 25 + 26 + /** 27 + * msg_command_size() - Actual size of transport SDU for command. 28 + * 29 + * @xfer: message which core has prepared for sending 30 + * 31 + * Return: transport SDU size. 32 + */ 33 + size_t msg_command_size(struct scmi_xfer *xfer) 34 + { 35 + return sizeof(struct scmi_msg_payld) + xfer->tx.len; 36 + } 37 + 38 + /** 39 + * msg_response_size() - Maximum size of transport SDU for response. 40 + * 41 + * @xfer: message which core has prepared for sending 42 + * 43 + * Return: transport SDU size. 44 + */ 45 + size_t msg_response_size(struct scmi_xfer *xfer) 46 + { 47 + return sizeof(struct scmi_msg_payld) + sizeof(__le32) + xfer->rx.len; 48 + } 49 + 50 + /** 51 + * msg_tx_prepare() - Set up transport SDU for command. 52 + * 53 + * @msg: transport SDU for command 54 + * @xfer: message which is being sent 55 + */ 56 + void msg_tx_prepare(struct scmi_msg_payld *msg, struct scmi_xfer *xfer) 57 + { 58 + msg->msg_header = cpu_to_le32(pack_scmi_header(&xfer->hdr)); 59 + if (xfer->tx.buf) 60 + memcpy(msg->msg_payload, xfer->tx.buf, xfer->tx.len); 61 + } 62 + 63 + /** 64 + * msg_read_header() - Read SCMI header from transport SDU. 
65 + * 66 + * @msg: transport SDU 67 + * 68 + * Return: SCMI header 69 + */ 70 + u32 msg_read_header(struct scmi_msg_payld *msg) 71 + { 72 + return le32_to_cpu(msg->msg_header); 73 + } 74 + 75 + /** 76 + * msg_fetch_response() - Fetch response SCMI payload from transport SDU. 77 + * 78 + * @msg: transport SDU with response 79 + * @len: transport SDU size 80 + * @xfer: message being responded to 81 + */ 82 + void msg_fetch_response(struct scmi_msg_payld *msg, size_t len, 83 + struct scmi_xfer *xfer) 84 + { 85 + size_t prefix_len = sizeof(*msg) + sizeof(msg->msg_payload[0]); 86 + 87 + xfer->hdr.status = le32_to_cpu(msg->msg_payload[0]); 88 + xfer->rx.len = min_t(size_t, xfer->rx.len, 89 + len >= prefix_len ? len - prefix_len : 0); 90 + 91 + /* Take a copy to the rx buffer.. */ 92 + memcpy(xfer->rx.buf, &msg->msg_payload[1], xfer->rx.len); 93 + } 94 + 95 + /** 96 + * msg_fetch_notification() - Fetch notification payload from transport SDU. 97 + * 98 + * @msg: transport SDU with notification 99 + * @len: transport SDU size 100 + * @max_len: maximum SCMI payload size to fetch 101 + * @xfer: notification message 102 + */ 103 + void msg_fetch_notification(struct scmi_msg_payld *msg, size_t len, 104 + size_t max_len, struct scmi_xfer *xfer) 105 + { 106 + xfer->rx.len = min_t(size_t, max_len, 107 + len >= sizeof(*msg) ? len - sizeof(*msg) : 0); 108 + 109 + /* Take a copy to the rx buffer.. */ 110 + memcpy(xfer->rx.buf, msg->msg_payload, xfer->rx.len); 111 + }
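The new msg.c lays the SDU out as a little-endian 32-bit header followed by the payload, with responses carrying an extra 32-bit status word before the data. A user-space model of the size and clamping arithmetic (struct and helper names here are made up; the kernel uses `__le32` fields and `min_t()`):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative stand-in for struct scmi_msg_payld: a 32-bit header
 * (little-endian on the wire) followed by the payload bytes. */
struct msg_payld {
	uint32_t msg_header;
	uint8_t msg_payload[];
};

/* Command SDU size: header + tx payload (cf. msg_command_size()). */
static size_t cmd_size(size_t tx_len)
{
	return sizeof(struct msg_payld) + tx_len;
}

/* Response SDU size: header + 32-bit status + rx payload. */
static size_t resp_size(size_t rx_len)
{
	return sizeof(struct msg_payld) + sizeof(uint32_t) + rx_len;
}

/* Clamp how much payload may be copied out of a received response of
 * sdu_len bytes, mirroring the min_t() logic in msg_fetch_response(). */
static size_t clamp_rx_len(size_t rx_len, size_t sdu_len)
{
	size_t prefix = sizeof(struct msg_payld) + sizeof(uint32_t);
	size_t avail = sdu_len >= prefix ? sdu_len - prefix : 0;

	return rx_len < avail ? rx_len : avail;
}
```

The clamp matters because a misbehaving platform could report an SDU shorter than the header-plus-status prefix; in that case no payload bytes are copied at all.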
+2 -1
drivers/firmware/arm_scmi/smc.c
··· 154 154 if (scmi_info->irq) 155 155 wait_for_completion(&scmi_info->tx_complete); 156 156 157 - scmi_rx_callback(scmi_info->cinfo, shmem_read_header(scmi_info->shmem)); 157 + scmi_rx_callback(scmi_info->cinfo, 158 + shmem_read_header(scmi_info->shmem), NULL); 158 159 159 160 mutex_unlock(&scmi_info->shmem_lock); 160 161
+491
drivers/firmware/arm_scmi/virtio.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Virtio Transport driver for Arm System Control and Management Interface 4 + * (SCMI). 5 + * 6 + * Copyright (C) 2020-2021 OpenSynergy. 7 + * Copyright (C) 2021 ARM Ltd. 8 + */ 9 + 10 + /** 11 + * DOC: Theory of Operation 12 + * 13 + * The scmi-virtio transport implements a driver for the virtio SCMI device. 14 + * 15 + * There is one Tx channel (virtio cmdq, A2P channel) and at most one Rx 16 + * channel (virtio eventq, P2A channel). Each channel is implemented through a 17 + * virtqueue. Access to each virtqueue is protected by spinlocks. 18 + */ 19 + 20 + #include <linux/errno.h> 21 + #include <linux/slab.h> 22 + #include <linux/virtio.h> 23 + #include <linux/virtio_config.h> 24 + 25 + #include <uapi/linux/virtio_ids.h> 26 + #include <uapi/linux/virtio_scmi.h> 27 + 28 + #include "common.h" 29 + 30 + #define VIRTIO_SCMI_MAX_MSG_SIZE 128 /* Value may be increased. */ 31 + #define VIRTIO_SCMI_MAX_PDU_SIZE \ 32 + (VIRTIO_SCMI_MAX_MSG_SIZE + SCMI_MSG_MAX_PROT_OVERHEAD) 33 + #define DESCRIPTORS_PER_TX_MSG 2 34 + 35 + /** 36 + * struct scmi_vio_channel - Transport channel information 37 + * 38 + * @vqueue: Associated virtqueue 39 + * @cinfo: SCMI Tx or Rx channel 40 + * @free_list: List of unused scmi_vio_msg, maintained for Tx channels only 41 + * @is_rx: Whether channel is an Rx channel 42 + * @ready: Whether transport user is ready to hear about channel 43 + * @max_msg: Maximum number of pending messages for this channel. 44 + * @lock: Protects access to all members except ready. 45 + * @ready_lock: Protects access to ready. If required, it must be taken before 46 + * lock. 47 + */ 48 + struct scmi_vio_channel { 49 + struct virtqueue *vqueue; 50 + struct scmi_chan_info *cinfo; 51 + struct list_head free_list; 52 + bool is_rx; 53 + bool ready; 54 + unsigned int max_msg; 55 + /* lock to protect access to all members except ready. */ 56 + spinlock_t lock; 57 + /* lock to protect access to ready flag. 
*/ 58 + spinlock_t ready_lock; 59 + }; 60 + 61 + /** 62 + * struct scmi_vio_msg - Transport PDU information 63 + * 64 + * @request: SDU used for commands 65 + * @input: SDU used for (delayed) responses and notifications 66 + * @list: List which scmi_vio_msg may be part of 67 + * @rx_len: Input SDU size in bytes, once input has been received 68 + */ 69 + struct scmi_vio_msg { 70 + struct scmi_msg_payld *request; 71 + struct scmi_msg_payld *input; 72 + struct list_head list; 73 + unsigned int rx_len; 74 + }; 75 + 76 + /* Only one SCMI VirtIO device can possibly exist */ 77 + static struct virtio_device *scmi_vdev; 78 + 79 + static bool scmi_vio_have_vq_rx(struct virtio_device *vdev) 80 + { 81 + return virtio_has_feature(vdev, VIRTIO_SCMI_F_P2A_CHANNELS); 82 + } 83 + 84 + static int scmi_vio_feed_vq_rx(struct scmi_vio_channel *vioch, 85 + struct scmi_vio_msg *msg) 86 + { 87 + struct scatterlist sg_in; 88 + int rc; 89 + unsigned long flags; 90 + 91 + sg_init_one(&sg_in, msg->input, VIRTIO_SCMI_MAX_PDU_SIZE); 92 + 93 + spin_lock_irqsave(&vioch->lock, flags); 94 + 95 + rc = virtqueue_add_inbuf(vioch->vqueue, &sg_in, 1, msg, GFP_ATOMIC); 96 + if (rc) 97 + dev_err_once(vioch->cinfo->dev, 98 + "failed to add to virtqueue (%d)\n", rc); 99 + else 100 + virtqueue_kick(vioch->vqueue); 101 + 102 + spin_unlock_irqrestore(&vioch->lock, flags); 103 + 104 + return rc; 105 + } 106 + 107 + static void scmi_finalize_message(struct scmi_vio_channel *vioch, 108 + struct scmi_vio_msg *msg) 109 + { 110 + if (vioch->is_rx) { 111 + scmi_vio_feed_vq_rx(vioch, msg); 112 + } else { 113 + unsigned long flags; 114 + 115 + spin_lock_irqsave(&vioch->lock, flags); 116 + list_add(&msg->list, &vioch->free_list); 117 + spin_unlock_irqrestore(&vioch->lock, flags); 118 + } 119 + } 120 + 121 + static void scmi_vio_complete_cb(struct virtqueue *vqueue) 122 + { 123 + unsigned long ready_flags; 124 + unsigned long flags; 125 + unsigned int length; 126 + struct scmi_vio_channel *vioch; 127 + struct 
scmi_vio_msg *msg; 128 + bool cb_enabled = true; 129 + 130 + if (WARN_ON_ONCE(!vqueue->vdev->priv)) 131 + return; 132 + vioch = &((struct scmi_vio_channel *)vqueue->vdev->priv)[vqueue->index]; 133 + 134 + for (;;) { 135 + spin_lock_irqsave(&vioch->ready_lock, ready_flags); 136 + 137 + if (!vioch->ready) { 138 + if (!cb_enabled) 139 + (void)virtqueue_enable_cb(vqueue); 140 + goto unlock_ready_out; 141 + } 142 + 143 + spin_lock_irqsave(&vioch->lock, flags); 144 + if (cb_enabled) { 145 + virtqueue_disable_cb(vqueue); 146 + cb_enabled = false; 147 + } 148 + msg = virtqueue_get_buf(vqueue, &length); 149 + if (!msg) { 150 + if (virtqueue_enable_cb(vqueue)) 151 + goto unlock_out; 152 + cb_enabled = true; 153 + } 154 + spin_unlock_irqrestore(&vioch->lock, flags); 155 + 156 + if (msg) { 157 + msg->rx_len = length; 158 + scmi_rx_callback(vioch->cinfo, 159 + msg_read_header(msg->input), msg); 160 + 161 + scmi_finalize_message(vioch, msg); 162 + } 163 + 164 + spin_unlock_irqrestore(&vioch->ready_lock, ready_flags); 165 + } 166 + 167 + unlock_out: 168 + spin_unlock_irqrestore(&vioch->lock, flags); 169 + unlock_ready_out: 170 + spin_unlock_irqrestore(&vioch->ready_lock, ready_flags); 171 + } 172 + 173 + static const char *const scmi_vio_vqueue_names[] = { "tx", "rx" }; 174 + 175 + static vq_callback_t *scmi_vio_complete_callbacks[] = { 176 + scmi_vio_complete_cb, 177 + scmi_vio_complete_cb 178 + }; 179 + 180 + static unsigned int virtio_get_max_msg(struct scmi_chan_info *base_cinfo) 181 + { 182 + struct scmi_vio_channel *vioch = base_cinfo->transport_info; 183 + 184 + return vioch->max_msg; 185 + } 186 + 187 + static int virtio_link_supplier(struct device *dev) 188 + { 189 + if (!scmi_vdev) { 190 + dev_notice_once(dev, 191 + "Deferring probe after not finding a bound scmi-virtio device\n"); 192 + return -EPROBE_DEFER; 193 + } 194 + 195 + if (!device_link_add(dev, &scmi_vdev->dev, 196 + DL_FLAG_AUTOREMOVE_CONSUMER)) { 197 + dev_err(dev, "Adding link to supplier virtio device 
failed\n"); 198 + return -ECANCELED; 199 + } 200 + 201 + return 0; 202 + } 203 + 204 + static bool virtio_chan_available(struct device *dev, int idx) 205 + { 206 + struct scmi_vio_channel *channels, *vioch = NULL; 207 + 208 + if (WARN_ON_ONCE(!scmi_vdev)) 209 + return false; 210 + 211 + channels = (struct scmi_vio_channel *)scmi_vdev->priv; 212 + 213 + switch (idx) { 214 + case VIRTIO_SCMI_VQ_TX: 215 + vioch = &channels[VIRTIO_SCMI_VQ_TX]; 216 + break; 217 + case VIRTIO_SCMI_VQ_RX: 218 + if (scmi_vio_have_vq_rx(scmi_vdev)) 219 + vioch = &channels[VIRTIO_SCMI_VQ_RX]; 220 + break; 221 + default: 222 + return false; 223 + } 224 + 225 + return vioch && !vioch->cinfo; 226 + } 227 + 228 + static int virtio_chan_setup(struct scmi_chan_info *cinfo, struct device *dev, 229 + bool tx) 230 + { 231 + unsigned long flags; 232 + struct scmi_vio_channel *vioch; 233 + int index = tx ? VIRTIO_SCMI_VQ_TX : VIRTIO_SCMI_VQ_RX; 234 + int i; 235 + 236 + if (!scmi_vdev) 237 + return -EPROBE_DEFER; 238 + 239 + vioch = &((struct scmi_vio_channel *)scmi_vdev->priv)[index]; 240 + 241 + for (i = 0; i < vioch->max_msg; i++) { 242 + struct scmi_vio_msg *msg; 243 + 244 + msg = devm_kzalloc(cinfo->dev, sizeof(*msg), GFP_KERNEL); 245 + if (!msg) 246 + return -ENOMEM; 247 + 248 + if (tx) { 249 + msg->request = devm_kzalloc(cinfo->dev, 250 + VIRTIO_SCMI_MAX_PDU_SIZE, 251 + GFP_KERNEL); 252 + if (!msg->request) 253 + return -ENOMEM; 254 + } 255 + 256 + msg->input = devm_kzalloc(cinfo->dev, VIRTIO_SCMI_MAX_PDU_SIZE, 257 + GFP_KERNEL); 258 + if (!msg->input) 259 + return -ENOMEM; 260 + 261 + if (tx) { 262 + spin_lock_irqsave(&vioch->lock, flags); 263 + list_add_tail(&msg->list, &vioch->free_list); 264 + spin_unlock_irqrestore(&vioch->lock, flags); 265 + } else { 266 + scmi_vio_feed_vq_rx(vioch, msg); 267 + } 268 + } 269 + 270 + spin_lock_irqsave(&vioch->lock, flags); 271 + cinfo->transport_info = vioch; 272 + /* Indirectly setting channel not available any more */ 273 + vioch->cinfo = cinfo; 274 + 
spin_unlock_irqrestore(&vioch->lock, flags); 275 + 276 + spin_lock_irqsave(&vioch->ready_lock, flags); 277 + vioch->ready = true; 278 + spin_unlock_irqrestore(&vioch->ready_lock, flags); 279 + 280 + return 0; 281 + } 282 + 283 + static int virtio_chan_free(int id, void *p, void *data) 284 + { 285 + unsigned long flags; 286 + struct scmi_chan_info *cinfo = p; 287 + struct scmi_vio_channel *vioch = cinfo->transport_info; 288 + 289 + spin_lock_irqsave(&vioch->ready_lock, flags); 290 + vioch->ready = false; 291 + spin_unlock_irqrestore(&vioch->ready_lock, flags); 292 + 293 + scmi_free_channel(cinfo, data, id); 294 + 295 + spin_lock_irqsave(&vioch->lock, flags); 296 + vioch->cinfo = NULL; 297 + spin_unlock_irqrestore(&vioch->lock, flags); 298 + 299 + return 0; 300 + } 301 + 302 + static int virtio_send_message(struct scmi_chan_info *cinfo, 303 + struct scmi_xfer *xfer) 304 + { 305 + struct scmi_vio_channel *vioch = cinfo->transport_info; 306 + struct scatterlist sg_out; 307 + struct scatterlist sg_in; 308 + struct scatterlist *sgs[DESCRIPTORS_PER_TX_MSG] = { &sg_out, &sg_in }; 309 + unsigned long flags; 310 + int rc; 311 + struct scmi_vio_msg *msg; 312 + 313 + spin_lock_irqsave(&vioch->lock, flags); 314 + 315 + if (list_empty(&vioch->free_list)) { 316 + spin_unlock_irqrestore(&vioch->lock, flags); 317 + return -EBUSY; 318 + } 319 + 320 + msg = list_first_entry(&vioch->free_list, typeof(*msg), list); 321 + list_del(&msg->list); 322 + 323 + msg_tx_prepare(msg->request, xfer); 324 + 325 + sg_init_one(&sg_out, msg->request, msg_command_size(xfer)); 326 + sg_init_one(&sg_in, msg->input, msg_response_size(xfer)); 327 + 328 + rc = virtqueue_add_sgs(vioch->vqueue, sgs, 1, 1, msg, GFP_ATOMIC); 329 + if (rc) { 330 + list_add(&msg->list, &vioch->free_list); 331 + dev_err_once(vioch->cinfo->dev, 332 + "%s() failed to add to virtqueue (%d)\n", __func__, 333 + rc); 334 + } else { 335 + virtqueue_kick(vioch->vqueue); 336 + } 337 + 338 + spin_unlock_irqrestore(&vioch->lock, flags); 339 
+ 340 + return rc; 341 + } 342 + 343 + static void virtio_fetch_response(struct scmi_chan_info *cinfo, 344 + struct scmi_xfer *xfer) 345 + { 346 + struct scmi_vio_msg *msg = xfer->priv; 347 + 348 + if (msg) { 349 + msg_fetch_response(msg->input, msg->rx_len, xfer); 350 + xfer->priv = NULL; 351 + } 352 + } 353 + 354 + static void virtio_fetch_notification(struct scmi_chan_info *cinfo, 355 + size_t max_len, struct scmi_xfer *xfer) 356 + { 357 + struct scmi_vio_msg *msg = xfer->priv; 358 + 359 + if (msg) { 360 + msg_fetch_notification(msg->input, msg->rx_len, max_len, xfer); 361 + xfer->priv = NULL; 362 + } 363 + } 364 + 365 + static const struct scmi_transport_ops scmi_virtio_ops = { 366 + .link_supplier = virtio_link_supplier, 367 + .chan_available = virtio_chan_available, 368 + .chan_setup = virtio_chan_setup, 369 + .chan_free = virtio_chan_free, 370 + .get_max_msg = virtio_get_max_msg, 371 + .send_message = virtio_send_message, 372 + .fetch_response = virtio_fetch_response, 373 + .fetch_notification = virtio_fetch_notification, 374 + }; 375 + 376 + static int scmi_vio_probe(struct virtio_device *vdev) 377 + { 378 + struct device *dev = &vdev->dev; 379 + struct scmi_vio_channel *channels; 380 + bool have_vq_rx; 381 + int vq_cnt; 382 + int i; 383 + int ret; 384 + struct virtqueue *vqs[VIRTIO_SCMI_VQ_MAX_CNT]; 385 + 386 + /* Only one SCMI VirtIO device allowed */ 387 + if (scmi_vdev) 388 + return -EINVAL; 389 + 390 + have_vq_rx = scmi_vio_have_vq_rx(vdev); 391 + vq_cnt = have_vq_rx ? 
VIRTIO_SCMI_VQ_MAX_CNT : 1; 392 + 393 + channels = devm_kcalloc(dev, vq_cnt, sizeof(*channels), GFP_KERNEL); 394 + if (!channels) 395 + return -ENOMEM; 396 + 397 + if (have_vq_rx) 398 + channels[VIRTIO_SCMI_VQ_RX].is_rx = true; 399 + 400 + ret = virtio_find_vqs(vdev, vq_cnt, vqs, scmi_vio_complete_callbacks, 401 + scmi_vio_vqueue_names, NULL); 402 + if (ret) { 403 + dev_err(dev, "Failed to get %d virtqueue(s)\n", vq_cnt); 404 + return ret; 405 + } 406 + 407 + for (i = 0; i < vq_cnt; i++) { 408 + unsigned int sz; 409 + 410 + spin_lock_init(&channels[i].lock); 411 + spin_lock_init(&channels[i].ready_lock); 412 + INIT_LIST_HEAD(&channels[i].free_list); 413 + channels[i].vqueue = vqs[i]; 414 + 415 + sz = virtqueue_get_vring_size(channels[i].vqueue); 416 + /* Tx messages need multiple descriptors. */ 417 + if (!channels[i].is_rx) 418 + sz /= DESCRIPTORS_PER_TX_MSG; 419 + 420 + if (sz > MSG_TOKEN_MAX) { 421 + dev_info_once(dev, 422 + "%s virtqueue could hold %d messages. Only %ld allowed to be pending.\n", 423 + channels[i].is_rx ? 
"rx" : "tx", 424 + sz, MSG_TOKEN_MAX); 425 + sz = MSG_TOKEN_MAX; 426 + } 427 + channels[i].max_msg = sz; 428 + } 429 + 430 + vdev->priv = channels; 431 + scmi_vdev = vdev; 432 + 433 + return 0; 434 + } 435 + 436 + static void scmi_vio_remove(struct virtio_device *vdev) 437 + { 438 + vdev->config->reset(vdev); 439 + vdev->config->del_vqs(vdev); 440 + scmi_vdev = NULL; 441 + } 442 + 443 + static int scmi_vio_validate(struct virtio_device *vdev) 444 + { 445 + if (!virtio_has_feature(vdev, VIRTIO_F_VERSION_1)) { 446 + dev_err(&vdev->dev, 447 + "device does not comply with spec version 1.x\n"); 448 + return -EINVAL; 449 + } 450 + 451 + return 0; 452 + } 453 + 454 + static unsigned int features[] = { 455 + VIRTIO_SCMI_F_P2A_CHANNELS, 456 + }; 457 + 458 + static const struct virtio_device_id id_table[] = { 459 + { VIRTIO_ID_SCMI, VIRTIO_DEV_ANY_ID }, 460 + { 0 } 461 + }; 462 + 463 + static struct virtio_driver virtio_scmi_driver = { 464 + .driver.name = "scmi-virtio", 465 + .driver.owner = THIS_MODULE, 466 + .feature_table = features, 467 + .feature_table_size = ARRAY_SIZE(features), 468 + .id_table = id_table, 469 + .probe = scmi_vio_probe, 470 + .remove = scmi_vio_remove, 471 + .validate = scmi_vio_validate, 472 + }; 473 + 474 + static int __init virtio_scmi_init(void) 475 + { 476 + return register_virtio_driver(&virtio_scmi_driver); 477 + } 478 + 479 + static void __exit virtio_scmi_exit(void) 480 + { 481 + unregister_virtio_driver(&virtio_scmi_driver); 482 + } 483 + 484 + const struct scmi_desc scmi_virtio_desc = { 485 + .transport_init = virtio_scmi_init, 486 + .transport_exit = virtio_scmi_exit, 487 + .ops = &scmi_virtio_ops, 488 + .max_rx_timeout_ms = 60000, /* for non-realtime virtio devices */ 489 + .max_msg = 0, /* overridden by virtio_get_max_msg() */ 490 + .max_msg_size = VIRTIO_SCMI_MAX_MSG_SIZE, 491 + };
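In scmi_vio_probe() above, each channel's message limit is derived from its vring size: a Tx message consumes two descriptors (request out, response in), and the result is capped at MSG_TOKEN_MAX because only that many sequence numbers exist. A small sketch of the arithmetic (assuming MSG_TOKEN_MAX is 1024, i.e. 10-bit tokens):

```c
#include <assert.h>
#include <stdbool.h>

#define DESCRIPTORS_PER_TX_MSG 2
#define TOKEN_MAX 1024 /* assumed value of MSG_TOKEN_MAX (10-bit tokens) */

/* How many messages a channel can keep pending, given its vring size. */
static unsigned int vq_max_msg(unsigned int vring_size, bool is_rx)
{
	unsigned int sz = vring_size;

	/* Each Tx message occupies two descriptors: one out, one in. */
	if (!is_rx)
		sz /= DESCRIPTORS_PER_TX_MSG;

	/* No point holding more messages than there are tokens. */
	if (sz > TOKEN_MAX)
		sz = TOKEN_MAX;

	return sz;
}
```

So a 256-entry cmdq yields 128 pending commands, while an oversized vring is simply clamped.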
+6 -2
drivers/firmware/qcom_scm.c
··· 71 71 { .flag = QCOM_SCM_FLAG_WARMBOOT_CPU3 }, 72 72 }; 73 73 74 - static const char *qcom_scm_convention_names[] = { 74 + static const char * const qcom_scm_convention_names[] = { 75 75 [SMC_CONVENTION_UNKNOWN] = "unknown", 76 76 [SMC_CONVENTION_ARM_32] = "smc arm 32", 77 77 [SMC_CONVENTION_ARM_64] = "smc arm 64", ··· 331 331 .owner = ARM_SMCCC_OWNER_SIP, 332 332 }; 333 333 334 - if (!cpus || (cpus && cpumask_empty(cpus))) 334 + if (!cpus || cpumask_empty(cpus)) 335 335 return -EINVAL; 336 336 337 337 for_each_cpu(cpu, cpus) { ··· 1299 1299 { .compatible = "qcom,scm" }, 1300 1300 {} 1301 1301 }; 1302 + MODULE_DEVICE_TABLE(of, qcom_scm_dt_match); 1302 1303 1303 1304 static struct platform_driver qcom_scm_driver = { 1304 1305 .driver = { ··· 1316 1315 return platform_driver_register(&qcom_scm_driver); 1317 1316 } 1318 1317 subsys_initcall(qcom_scm_init); 1318 + 1319 + MODULE_DESCRIPTION("Qualcomm Technologies, Inc. SCM driver"); 1320 + MODULE_LICENSE("GPL v2");
+47 -11
drivers/firmware/tegra/bpmp-debugfs.c
··· 296 296 struct file *file = m->private; 297 297 struct inode *inode = file_inode(file); 298 298 struct tegra_bpmp *bpmp = inode->i_private; 299 - char *databuf = NULL; 300 299 char fnamebuf[256]; 301 300 const char *filename; 302 - uint32_t nbytes = 0; 303 - size_t len; 304 - int err; 305 - 306 - len = seq_get_buf(m, &databuf); 307 - if (!databuf) 308 - return -ENOMEM; 301 + struct mrq_debug_request req = { 302 + .cmd = cpu_to_le32(CMD_DEBUG_READ), 303 + }; 304 + struct mrq_debug_response resp; 305 + struct tegra_bpmp_message msg = { 306 + .mrq = MRQ_DEBUG, 307 + .tx = { 308 + .data = &req, 309 + .size = sizeof(req), 310 + }, 311 + .rx = { 312 + .data = &resp, 313 + .size = sizeof(resp), 314 + }, 315 + }; 316 + uint32_t fd = 0, len = 0; 317 + int remaining, err; 309 318 310 319 filename = get_filename(bpmp, file, fnamebuf, sizeof(fnamebuf)); 311 320 if (!filename) 312 321 return -ENOENT; 313 322 314 - err = mrq_debug_read(bpmp, filename, databuf, len, &nbytes); 315 - if (!err) 316 - seq_commit(m, nbytes); 323 + mutex_lock(&bpmp_debug_lock); 324 + err = mrq_debug_open(bpmp, filename, &fd, &len, 0); 325 + if (err) 326 + goto out; 317 327 328 + req.frd.fd = fd; 329 + remaining = len; 330 + 331 + while (remaining > 0) { 332 + err = tegra_bpmp_transfer(bpmp, &msg); 333 + if (err < 0) { 334 + goto close; 335 + } else if (msg.rx.ret < 0) { 336 + err = -EINVAL; 337 + goto close; 338 + } 339 + 340 + if (resp.frd.readlen > remaining) { 341 + pr_err("%s: read data length invalid\n", __func__); 342 + err = -EINVAL; 343 + goto close; 344 + } 345 + 346 + seq_write(m, resp.frd.data, resp.frd.readlen); 347 + remaining -= resp.frd.readlen; 348 + } 349 + 350 + close: 351 + err = mrq_debug_close(bpmp, fd); 352 + out: 353 + mutex_unlock(&bpmp_debug_lock); 318 354 return err; 319 355 } 320 356
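The rewritten bpmp-debugfs show path streams the file over multiple MRQ_DEBUG transfers instead of a single seq_get_buf() fill, bailing out if the firmware ever reports a read longer than what remains. A hedged model of that loop, with a simulated responder standing in for tegra_bpmp_transfer():

```c
#include <assert.h>

/*
 * Illustrative model of the chunked debugfs read loop: pull `total` bytes
 * in transfers of at most `chunk` bytes, rejecting any reply that claims
 * to have read more than what remains ("read data length invalid").
 */
static int chunked_read(int total, int chunk, int *transfers)
{
	int remaining = total;
	int n = 0;

	while (remaining > 0) {
		/* Simulated firmware reply: how much was actually read. */
		int readlen = chunk < remaining ? chunk : remaining;

		if (readlen > remaining)
			return -1; /* bogus length from the responder */

		remaining -= readlen;
		n++;
	}

	*transfers = n;
	return 0;
}
```

A 100-byte file read in 32-byte chunks therefore takes four transfers, the last one partial.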
+2
drivers/iommu/Kconfig
··· 253 253 config ARM_SMMU 254 254 tristate "ARM Ltd. System MMU (SMMU) Support" 255 255 depends on ARM64 || ARM || (COMPILE_TEST && !GENERIC_ATOMIC64) 256 + depends on QCOM_SCM || !QCOM_SCM #if QCOM_SCM=m this can't be =y 256 257 select IOMMU_API 257 258 select IOMMU_IO_PGTABLE_LPAE 258 259 select ARM_DMA_USE_IOMMU if ARM ··· 383 382 # Note: iommu drivers cannot (yet?) be built as modules 384 383 bool "Qualcomm IOMMU Support" 385 384 depends on ARCH_QCOM || (COMPILE_TEST && !GENERIC_ATOMIC64) 385 + depends on QCOM_SCM=y 386 386 select IOMMU_API 387 387 select IOMMU_IO_PGTABLE_LPAE 388 388 select ARM_DMA_USE_IOMMU
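The new `depends on QCOM_SCM || !QCOM_SCM` line looks like a tautology, but under Kconfig's tristate algebra (n=0, m=1, y=2; `!x` is 2-x, `||` is max) the expression evaluates to m when QCOM_SCM=m, capping ARM_SMMU at m so a built-in SMMU driver can never link against a modular QCOM_SCM, exactly as the inline comment says. A quick model of that evaluation:

```c
#include <assert.h>

/* Kconfig tristate algebra: n=0, m=1, y=2; !x is 2 - x, || is max. */
static int tri_not(int v)
{
	return 2 - v;
}

static int tri_or(int a, int b)
{
	return a > b ? a : b;
}

/* Upper bound "depends on QCOM_SCM || !QCOM_SCM" places on ARM_SMMU. */
static int smmu_upper_bound(int qcom_scm)
{
	return tri_or(qcom_scm, tri_not(qcom_scm));
}
```

With QCOM_SCM=n or =y the bound is y, so the idiom only bites in the modular case.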
+118 -73
drivers/memory/omap-gpmc.c
@@
  * Copyright (C) 2009 Texas Instruments
  * Added OMAP4 support - Santosh Shilimkar <santosh.shilimkar@ti.com>
  */
+#include <linux/cpu_pm.h>
 #include <linux/irq.h>
 #include <linux/kernel.h>
 #include <linux/init.h>
@@
         int irq;
         struct irq_chip irq_chip;
         struct gpio_chip gpio_chip;
+        struct notifier_block nb;
+        struct omap3_gpmc_regs context;
         int nirqs;
+        unsigned int is_suspended:1;
 };
 
 static struct irq_domain *gpmc_irq_domain;
@@
         return 0;
 }
 
+static void omap3_gpmc_save_context(struct gpmc_device *gpmc)
+{
+        struct omap3_gpmc_regs *gpmc_context;
+        int i;
+
+        if (!gpmc || !gpmc_base)
+                return;
+
+        gpmc_context = &gpmc->context;
+
+        gpmc_context->sysconfig = gpmc_read_reg(GPMC_SYSCONFIG);
+        gpmc_context->irqenable = gpmc_read_reg(GPMC_IRQENABLE);
+        gpmc_context->timeout_ctrl = gpmc_read_reg(GPMC_TIMEOUT_CONTROL);
+        gpmc_context->config = gpmc_read_reg(GPMC_CONFIG);
+        gpmc_context->prefetch_config1 = gpmc_read_reg(GPMC_PREFETCH_CONFIG1);
+        gpmc_context->prefetch_config2 = gpmc_read_reg(GPMC_PREFETCH_CONFIG2);
+        gpmc_context->prefetch_control = gpmc_read_reg(GPMC_PREFETCH_CONTROL);
+        for (i = 0; i < gpmc_cs_num; i++) {
+                gpmc_context->cs_context[i].is_valid = gpmc_cs_mem_enabled(i);
+                if (gpmc_context->cs_context[i].is_valid) {
+                        gpmc_context->cs_context[i].config1 =
+                                gpmc_cs_read_reg(i, GPMC_CS_CONFIG1);
+                        gpmc_context->cs_context[i].config2 =
+                                gpmc_cs_read_reg(i, GPMC_CS_CONFIG2);
+                        gpmc_context->cs_context[i].config3 =
+                                gpmc_cs_read_reg(i, GPMC_CS_CONFIG3);
+                        gpmc_context->cs_context[i].config4 =
+                                gpmc_cs_read_reg(i, GPMC_CS_CONFIG4);
+                        gpmc_context->cs_context[i].config5 =
+                                gpmc_cs_read_reg(i, GPMC_CS_CONFIG5);
+                        gpmc_context->cs_context[i].config6 =
+                                gpmc_cs_read_reg(i, GPMC_CS_CONFIG6);
+                        gpmc_context->cs_context[i].config7 =
+                                gpmc_cs_read_reg(i, GPMC_CS_CONFIG7);
+                }
+        }
+}
+
+static void omap3_gpmc_restore_context(struct gpmc_device *gpmc)
+{
+        struct omap3_gpmc_regs *gpmc_context;
+        int i;
+
+        if (!gpmc || !gpmc_base)
+                return;
+
+        gpmc_context = &gpmc->context;
+
+        gpmc_write_reg(GPMC_SYSCONFIG, gpmc_context->sysconfig);
+        gpmc_write_reg(GPMC_IRQENABLE, gpmc_context->irqenable);
+        gpmc_write_reg(GPMC_TIMEOUT_CONTROL, gpmc_context->timeout_ctrl);
+        gpmc_write_reg(GPMC_CONFIG, gpmc_context->config);
+        gpmc_write_reg(GPMC_PREFETCH_CONFIG1, gpmc_context->prefetch_config1);
+        gpmc_write_reg(GPMC_PREFETCH_CONFIG2, gpmc_context->prefetch_config2);
+        gpmc_write_reg(GPMC_PREFETCH_CONTROL, gpmc_context->prefetch_control);
+        for (i = 0; i < gpmc_cs_num; i++) {
+                if (gpmc_context->cs_context[i].is_valid) {
+                        gpmc_cs_write_reg(i, GPMC_CS_CONFIG1,
+                                          gpmc_context->cs_context[i].config1);
+                        gpmc_cs_write_reg(i, GPMC_CS_CONFIG2,
+                                          gpmc_context->cs_context[i].config2);
+                        gpmc_cs_write_reg(i, GPMC_CS_CONFIG3,
+                                          gpmc_context->cs_context[i].config3);
+                        gpmc_cs_write_reg(i, GPMC_CS_CONFIG4,
+                                          gpmc_context->cs_context[i].config4);
+                        gpmc_cs_write_reg(i, GPMC_CS_CONFIG5,
+                                          gpmc_context->cs_context[i].config5);
+                        gpmc_cs_write_reg(i, GPMC_CS_CONFIG6,
+                                          gpmc_context->cs_context[i].config6);
+                        gpmc_cs_write_reg(i, GPMC_CS_CONFIG7,
+                                          gpmc_context->cs_context[i].config7);
+                } else {
+                        gpmc_cs_write_reg(i, GPMC_CS_CONFIG7, 0);
+                }
+        }
+}
+
+static int omap_gpmc_context_notifier(struct notifier_block *nb,
+                                      unsigned long cmd, void *v)
+{
+        struct gpmc_device *gpmc;
+
+        gpmc = container_of(nb, struct gpmc_device, nb);
+        if (gpmc->is_suspended || pm_runtime_suspended(gpmc->dev))
+                return NOTIFY_OK;
+
+        switch (cmd) {
+        case CPU_CLUSTER_PM_ENTER:
+                omap3_gpmc_save_context(gpmc);
+                break;
+        case CPU_CLUSTER_PM_ENTER_FAILED:       /* No need to restore context */
+                break;
+        case CPU_CLUSTER_PM_EXIT:
+                omap3_gpmc_restore_context(gpmc);
+                break;
+        }
+
+        return NOTIFY_OK;
+}
+
 static int gpmc_probe(struct platform_device *pdev)
 {
         int rc;
@@
 
         gpmc_probe_dt_children(pdev);
 
+        gpmc->nb.notifier_call = omap_gpmc_context_notifier;
+        cpu_pm_register_notifier(&gpmc->nb);
+
         return 0;
 
 gpio_init_failed:
@@
 {
         struct gpmc_device *gpmc = platform_get_drvdata(pdev);
 
+        cpu_pm_unregister_notifier(&gpmc->nb);
         gpmc_free_irq(gpmc);
         gpmc_mem_exit();
         pm_runtime_put_sync(&pdev->dev);
@@
 #ifdef CONFIG_PM_SLEEP
 static int gpmc_suspend(struct device *dev)
 {
-        omap3_gpmc_save_context();
+        struct gpmc_device *gpmc = dev_get_drvdata(dev);
+
+        omap3_gpmc_save_context(gpmc);
         pm_runtime_put_sync(dev);
+        gpmc->is_suspended = 1;
+
         return 0;
 }
 
 static int gpmc_resume(struct device *dev)
 {
+        struct gpmc_device *gpmc = dev_get_drvdata(dev);
+
         pm_runtime_get_sync(dev);
-        omap3_gpmc_restore_context();
+        omap3_gpmc_restore_context(gpmc);
+        gpmc->is_suspended = 0;
+
         return 0;
 }
 #endif
@@
         return platform_driver_register(&gpmc_driver);
 }
 postcore_initcall(gpmc_init);
-
-static struct omap3_gpmc_regs gpmc_context;
-
-void omap3_gpmc_save_context(void)
-{
-        int i;
-
-        if (!gpmc_base)
-                return;
-
-        gpmc_context.sysconfig = gpmc_read_reg(GPMC_SYSCONFIG);
-        gpmc_context.irqenable = gpmc_read_reg(GPMC_IRQENABLE);
-        gpmc_context.timeout_ctrl = gpmc_read_reg(GPMC_TIMEOUT_CONTROL);
-        gpmc_context.config = gpmc_read_reg(GPMC_CONFIG);
-        gpmc_context.prefetch_config1 = gpmc_read_reg(GPMC_PREFETCH_CONFIG1);
-        gpmc_context.prefetch_config2 = gpmc_read_reg(GPMC_PREFETCH_CONFIG2);
-        gpmc_context.prefetch_control = gpmc_read_reg(GPMC_PREFETCH_CONTROL);
-        for (i = 0; i < gpmc_cs_num; i++) {
-                gpmc_context.cs_context[i].is_valid = gpmc_cs_mem_enabled(i);
-                if (gpmc_context.cs_context[i].is_valid) {
-                        gpmc_context.cs_context[i].config1 =
-                                gpmc_cs_read_reg(i, GPMC_CS_CONFIG1);
-                        gpmc_context.cs_context[i].config2 =
-                                gpmc_cs_read_reg(i, GPMC_CS_CONFIG2);
-                        gpmc_context.cs_context[i].config3 =
-                                gpmc_cs_read_reg(i, GPMC_CS_CONFIG3);
-                        gpmc_context.cs_context[i].config4 =
-                                gpmc_cs_read_reg(i, GPMC_CS_CONFIG4);
-                        gpmc_context.cs_context[i].config5 =
-                                gpmc_cs_read_reg(i, GPMC_CS_CONFIG5);
-                        gpmc_context.cs_context[i].config6 =
-                                gpmc_cs_read_reg(i, GPMC_CS_CONFIG6);
-                        gpmc_context.cs_context[i].config7 =
-                                gpmc_cs_read_reg(i, GPMC_CS_CONFIG7);
-                }
-        }
-}
-
-void omap3_gpmc_restore_context(void)
-{
-        int i;
-
-        if (!gpmc_base)
-                return;
-
-        gpmc_write_reg(GPMC_SYSCONFIG, gpmc_context.sysconfig);
-        gpmc_write_reg(GPMC_IRQENABLE, gpmc_context.irqenable);
-        gpmc_write_reg(GPMC_TIMEOUT_CONTROL, gpmc_context.timeout_ctrl);
-        gpmc_write_reg(GPMC_CONFIG, gpmc_context.config);
-        gpmc_write_reg(GPMC_PREFETCH_CONFIG1, gpmc_context.prefetch_config1);
-        gpmc_write_reg(GPMC_PREFETCH_CONFIG2, gpmc_context.prefetch_config2);
-        gpmc_write_reg(GPMC_PREFETCH_CONTROL, gpmc_context.prefetch_control);
-        for (i = 0; i < gpmc_cs_num; i++) {
-                if (gpmc_context.cs_context[i].is_valid) {
-                        gpmc_cs_write_reg(i, GPMC_CS_CONFIG1,
-                                          gpmc_context.cs_context[i].config1);
-                        gpmc_cs_write_reg(i, GPMC_CS_CONFIG2,
-                                          gpmc_context.cs_context[i].config2);
-                        gpmc_cs_write_reg(i, GPMC_CS_CONFIG3,
-                                          gpmc_context.cs_context[i].config3);
-                        gpmc_cs_write_reg(i, GPMC_CS_CONFIG4,
-                                          gpmc_context.cs_context[i].config4);
-                        gpmc_cs_write_reg(i, GPMC_CS_CONFIG5,
-                                          gpmc_context.cs_context[i].config5);
-                        gpmc_cs_write_reg(i, GPMC_CS_CONFIG6,
-                                          gpmc_context.cs_context[i].config6);
-                        gpmc_cs_write_reg(i, GPMC_CS_CONFIG7,
-                                          gpmc_context.cs_context[i].config7);
-                }
-        }
-}
+2
drivers/memory/tegra/tegra186.c
@@
         return 0;
 }
 
+#if IS_ENABLED(CONFIG_IOMMU_API)
 static void tegra186_mc_client_sid_override(struct tegra_mc *mc,
                                             const struct tegra_mc_client *client,
                                             unsigned int sid)
@@
                 writel(sid, mc->regs + client->regs.sid.override);
         }
 }
+#endif
 
 static int tegra186_mc_probe_device(struct tegra_mc *mc, struct device *dev)
 {
+1
drivers/net/wireless/ath/ath10k/Kconfig
@@
         tristate "Qualcomm ath10k SNOC support"
         depends on ATH10K
         depends on ARCH_QCOM || COMPILE_TEST
+        depends on QCOM_SCM || !QCOM_SCM # if QCOM_SCM=m this can't be =y
         select QCOM_QMI_HELPERS
         help
           This module adds support for integrated WCN3990 chip connected
+7 -1
drivers/reset/Kconfig
@@
           interfacing with RPi4's co-processor and model these firmware
           initialization routines as reset lines.
 
+config RESET_RZG2L_USBPHY_CTRL
+        tristate "Renesas RZ/G2L USBPHY control driver"
+        depends on ARCH_R9A07G044 || COMPILE_TEST
+        help
+          Support for USBPHY Control found on RZ/G2L family. It mainly
+          controls reset and power down of the USB/PHY.
+
 config RESET_SCMI
         tristate "Reset driver controlled via ARM SCMI interface"
         depends on ARM_SCMI_PROTOCOL || COMPILE_TEST
@@
           - Realtek SoCs
           - RCC reset controller in STM32 MCUs
           - Allwinner SoCs
-          - ZTE's zx2967 family
           - SiFive FU740 SoCs
 
 config RESET_SOCFPGA
+1
drivers/reset/Makefile
@@
 obj-$(CONFIG_RESET_QCOM_AOSS) += reset-qcom-aoss.o
 obj-$(CONFIG_RESET_QCOM_PDC) += reset-qcom-pdc.o
 obj-$(CONFIG_RESET_RASPBERRYPI) += reset-raspberrypi.o
+obj-$(CONFIG_RESET_RZG2L_USBPHY_CTRL) += reset-rzg2l-usbphy-ctrl.o
 obj-$(CONFIG_RESET_SCMI) += reset-scmi.o
 obj-$(CONFIG_RESET_SIMPLE) += reset-simple.o
 obj-$(CONFIG_RESET_SOCFPGA) += reset-socfpga.o
+51 -11
drivers/reset/reset-qcom-pdc.c
@@
 
 #include <dt-bindings/reset/qcom,sdm845-pdc.h>
 
-#define RPMH_PDC_SYNC_RESET     0x100
+#define RPMH_SDM845_PDC_SYNC_RESET      0x100
+#define RPMH_SC7280_PDC_SYNC_RESET      0x1000
 
 struct qcom_pdc_reset_map {
         u8 bit;
 };
 
+struct qcom_pdc_reset_desc {
+        const struct qcom_pdc_reset_map *resets;
+        size_t num_resets;
+        unsigned int offset;
+};
+
 struct qcom_pdc_reset_data {
         struct reset_controller_dev rcdev;
         struct regmap *regmap;
+        const struct qcom_pdc_reset_desc *desc;
 };
 
-static const struct regmap_config sdm845_pdc_regmap_config = {
+static const struct regmap_config pdc_regmap_config = {
         .name = "pdc-reset",
         .reg_bits = 32,
         .reg_stride = 4,
@@
         [PDC_MODEM_SYNC_RESET] = {9},
 };
 
+static const struct qcom_pdc_reset_desc sdm845_pdc_reset_desc = {
+        .resets = sdm845_pdc_resets,
+        .num_resets = ARRAY_SIZE(sdm845_pdc_resets),
+        .offset = RPMH_SDM845_PDC_SYNC_RESET,
+};
+
+static const struct qcom_pdc_reset_map sc7280_pdc_resets[] = {
+        [PDC_APPS_SYNC_RESET] = {0},
+        [PDC_SP_SYNC_RESET] = {1},
+        [PDC_AUDIO_SYNC_RESET] = {2},
+        [PDC_SENSORS_SYNC_RESET] = {3},
+        [PDC_AOP_SYNC_RESET] = {4},
+        [PDC_DEBUG_SYNC_RESET] = {5},
+        [PDC_GPU_SYNC_RESET] = {6},
+        [PDC_DISPLAY_SYNC_RESET] = {7},
+        [PDC_COMPUTE_SYNC_RESET] = {8},
+        [PDC_MODEM_SYNC_RESET] = {9},
+        [PDC_WLAN_RF_SYNC_RESET] = {10},
+        [PDC_WPSS_SYNC_RESET] = {11},
+};
+
+static const struct qcom_pdc_reset_desc sc7280_pdc_reset_desc = {
+        .resets = sc7280_pdc_resets,
+        .num_resets = ARRAY_SIZE(sc7280_pdc_resets),
+        .offset = RPMH_SC7280_PDC_SYNC_RESET,
+};
+
 static inline struct qcom_pdc_reset_data *to_qcom_pdc_reset_data(
                                 struct reset_controller_dev *rcdev)
 {
@@
                                 unsigned long idx)
 {
         struct qcom_pdc_reset_data *data = to_qcom_pdc_reset_data(rcdev);
+        u32 mask = BIT(data->desc->resets[idx].bit);
 
-        return regmap_update_bits(data->regmap, RPMH_PDC_SYNC_RESET,
-                                  BIT(sdm845_pdc_resets[idx].bit),
-                                  BIT(sdm845_pdc_resets[idx].bit));
+        return regmap_update_bits(data->regmap, data->desc->offset, mask, mask);
 }
 
 static int qcom_pdc_control_deassert(struct reset_controller_dev *rcdev,
                                         unsigned long idx)
 {
         struct qcom_pdc_reset_data *data = to_qcom_pdc_reset_data(rcdev);
+        u32 mask = BIT(data->desc->resets[idx].bit);
 
-        return regmap_update_bits(data->regmap, RPMH_PDC_SYNC_RESET,
-                                  BIT(sdm845_pdc_resets[idx].bit), 0);
+        return regmap_update_bits(data->regmap, data->desc->offset, mask, 0);
 }
 
 static const struct reset_control_ops qcom_pdc_reset_ops = {
@@
 
 static int qcom_pdc_reset_probe(struct platform_device *pdev)
 {
+        const struct qcom_pdc_reset_desc *desc;
         struct qcom_pdc_reset_data *data;
         struct device *dev = &pdev->dev;
         void __iomem *base;
         struct resource *res;
 
+        desc = device_get_match_data(&pdev->dev);
+        if (!desc)
+                return -EINVAL;
+
         data = devm_kzalloc(dev, sizeof(*data), GFP_KERNEL);
         if (!data)
                 return -ENOMEM;
 
+        data->desc = desc;
         res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
         base = devm_ioremap_resource(dev, res);
         if (IS_ERR(base))
                 return PTR_ERR(base);
 
-        data->regmap = devm_regmap_init_mmio(dev, base,
-                                             &sdm845_pdc_regmap_config);
+        data->regmap = devm_regmap_init_mmio(dev, base, &pdc_regmap_config);
         if (IS_ERR(data->regmap)) {
                 dev_err(dev, "Unable to initialize regmap\n");
                 return PTR_ERR(data->regmap);
@@
 
         data->rcdev.owner = THIS_MODULE;
         data->rcdev.ops = &qcom_pdc_reset_ops;
-        data->rcdev.nr_resets = ARRAY_SIZE(sdm845_pdc_resets);
+        data->rcdev.nr_resets = desc->num_resets;
         data->rcdev.of_node = dev->of_node;
 
         return devm_reset_controller_register(dev, &data->rcdev);
 }
 
 static const struct of_device_id qcom_pdc_reset_of_match[] = {
-        { .compatible = "qcom,sdm845-pdc-global" },
+        { .compatible = "qcom,sc7280-pdc-global", .data = &sc7280_pdc_reset_desc },
+        { .compatible = "qcom,sdm845-pdc-global", .data = &sdm845_pdc_reset_desc },
         {}
 };
 MODULE_DEVICE_TABLE(of, qcom_pdc_reset_of_match);
+175
drivers/reset/reset-rzg2l-usbphy-ctrl.c
@@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Renesas RZ/G2L USBPHY control driver
+ *
+ * Copyright (C) 2021 Renesas Electronics Corporation
+ */
+
+#include <linux/io.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/platform_device.h>
+#include <linux/pm_runtime.h>
+#include <linux/reset.h>
+#include <linux/reset-controller.h>
+
+#define RESET                   0x000
+
+#define RESET_SEL_PLLRESET      BIT(12)
+#define RESET_PLLRESET          BIT(8)
+
+#define RESET_SEL_P2RESET       BIT(5)
+#define RESET_SEL_P1RESET       BIT(4)
+#define RESET_PHYRST_2          BIT(1)
+#define RESET_PHYRST_1          BIT(0)
+
+#define PHY_RESET_PORT2         (RESET_SEL_P2RESET | RESET_PHYRST_2)
+#define PHY_RESET_PORT1         (RESET_SEL_P1RESET | RESET_PHYRST_1)
+
+#define NUM_PORTS               2
+
+struct rzg2l_usbphy_ctrl_priv {
+        struct reset_controller_dev rcdev;
+        struct reset_control *rstc;
+        void __iomem *base;
+
+        spinlock_t lock;
+};
+
+#define rcdev_to_priv(x)        container_of(x, struct rzg2l_usbphy_ctrl_priv, rcdev)
+
+static int rzg2l_usbphy_ctrl_assert(struct reset_controller_dev *rcdev,
+                                    unsigned long id)
+{
+        struct rzg2l_usbphy_ctrl_priv *priv = rcdev_to_priv(rcdev);
+        u32 port_mask = PHY_RESET_PORT1 | PHY_RESET_PORT2;
+        void __iomem *base = priv->base;
+        unsigned long flags;
+        u32 val;
+
+        spin_lock_irqsave(&priv->lock, flags);
+        val = readl(base + RESET);
+        val |= id ? PHY_RESET_PORT2 : PHY_RESET_PORT1;
+        if (port_mask == (val & port_mask))
+                val |= RESET_PLLRESET;
+        writel(val, base + RESET);
+        spin_unlock_irqrestore(&priv->lock, flags);
+
+        return 0;
+}
+
+static int rzg2l_usbphy_ctrl_deassert(struct reset_controller_dev *rcdev,
+                                      unsigned long id)
+{
+        struct rzg2l_usbphy_ctrl_priv *priv = rcdev_to_priv(rcdev);
+        void __iomem *base = priv->base;
+        unsigned long flags;
+        u32 val;
+
+        spin_lock_irqsave(&priv->lock, flags);
+        val = readl(base + RESET);
+
+        val |= RESET_SEL_PLLRESET;
+        val &= ~(RESET_PLLRESET | (id ? PHY_RESET_PORT2 : PHY_RESET_PORT1));
+        writel(val, base + RESET);
+        spin_unlock_irqrestore(&priv->lock, flags);
+
+        return 0;
+}
+
+static int rzg2l_usbphy_ctrl_status(struct reset_controller_dev *rcdev,
+                                    unsigned long id)
+{
+        struct rzg2l_usbphy_ctrl_priv *priv = rcdev_to_priv(rcdev);
+        u32 port_mask;
+
+        port_mask = id ? PHY_RESET_PORT2 : PHY_RESET_PORT1;
+
+        return !!(readl(priv->base + RESET) & port_mask);
+}
+
+static const struct of_device_id rzg2l_usbphy_ctrl_match_table[] = {
+        { .compatible = "renesas,rzg2l-usbphy-ctrl" },
+        { /* Sentinel */ }
+};
+MODULE_DEVICE_TABLE(of, rzg2l_usbphy_ctrl_match_table);
+
+static const struct reset_control_ops rzg2l_usbphy_ctrl_reset_ops = {
+        .assert = rzg2l_usbphy_ctrl_assert,
+        .deassert = rzg2l_usbphy_ctrl_deassert,
+        .status = rzg2l_usbphy_ctrl_status,
+};
+
+static int rzg2l_usbphy_ctrl_probe(struct platform_device *pdev)
+{
+        struct device *dev = &pdev->dev;
+        struct rzg2l_usbphy_ctrl_priv *priv;
+        unsigned long flags;
+        int error;
+        u32 val;
+
+        priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
+        if (!priv)
+                return -ENOMEM;
+
+        priv->base = devm_platform_ioremap_resource(pdev, 0);
+        if (IS_ERR(priv->base))
+                return PTR_ERR(priv->base);
+
+        priv->rstc = devm_reset_control_get_exclusive(&pdev->dev, NULL);
+        if (IS_ERR(priv->rstc))
+                return dev_err_probe(dev, PTR_ERR(priv->rstc),
+                                     "failed to get reset\n");
+
+        reset_control_deassert(priv->rstc);
+
+        priv->rcdev.ops = &rzg2l_usbphy_ctrl_reset_ops;
+        priv->rcdev.of_reset_n_cells = 1;
+        priv->rcdev.nr_resets = NUM_PORTS;
+        priv->rcdev.of_node = dev->of_node;
+        priv->rcdev.dev = dev;
+
+        error = devm_reset_controller_register(dev, &priv->rcdev);
+        if (error)
+                return error;
+
+        spin_lock_init(&priv->lock);
+        dev_set_drvdata(dev, priv);
+
+        pm_runtime_enable(&pdev->dev);
+        pm_runtime_resume_and_get(&pdev->dev);
+
+        /* put pll and phy into reset state */
+        spin_lock_irqsave(&priv->lock, flags);
+        val = readl(priv->base + RESET);
+        val |= RESET_SEL_PLLRESET | RESET_PLLRESET | PHY_RESET_PORT2 | PHY_RESET_PORT1;
+        writel(val, priv->base + RESET);
+        spin_unlock_irqrestore(&priv->lock, flags);
+
+        return 0;
+}
+
+static int rzg2l_usbphy_ctrl_remove(struct platform_device *pdev)
+{
+        struct rzg2l_usbphy_ctrl_priv *priv = dev_get_drvdata(&pdev->dev);
+
+        pm_runtime_put(&pdev->dev);
+        pm_runtime_disable(&pdev->dev);
+        reset_control_assert(priv->rstc);
+
+        return 0;
+}
+
+static struct platform_driver rzg2l_usbphy_ctrl_driver = {
+        .driver = {
+                .name           = "rzg2l_usbphy_ctrl",
+                .of_match_table = rzg2l_usbphy_ctrl_match_table,
+        },
+        .probe = rzg2l_usbphy_ctrl_probe,
+        .remove = rzg2l_usbphy_ctrl_remove,
+};
+module_platform_driver(rzg2l_usbphy_ctrl_driver);
+
+MODULE_LICENSE("GPL v2");
+MODULE_DESCRIPTION("Renesas RZ/G2L USBPHY Control");
+MODULE_AUTHOR("biju.das.jz@bp.renesas.com>");
+1
drivers/soc/mediatek/mt8173-pm-domains.h
@@
                 .ctl_offs = SPM_MFG_ASYNC_PWR_CON,
                 .sram_pdn_bits = GENMASK(11, 8),
                 .sram_pdn_ack_bits = 0,
+                .caps = MTK_SCPD_DOMAIN_SUPPLY,
         },
         [MT8173_POWER_DOMAIN_MFG_2D] = {
                 .name = "mfg_2d",
+14 -7
drivers/soc/mediatek/mt8183-mmsys.h
@@
 static const struct mtk_mmsys_routes mmsys_mt8183_routing_table[] = {
         {
                 DDP_COMPONENT_OVL0, DDP_COMPONENT_OVL_2L0,
-                MT8183_DISP_OVL0_MOUT_EN, MT8183_OVL0_MOUT_EN_OVL0_2L
+                MT8183_DISP_OVL0_MOUT_EN, MT8183_OVL0_MOUT_EN_OVL0_2L,
+                MT8183_OVL0_MOUT_EN_OVL0_2L
         }, {
                 DDP_COMPONENT_OVL_2L0, DDP_COMPONENT_RDMA0,
-                MT8183_DISP_OVL0_2L_MOUT_EN, MT8183_OVL0_2L_MOUT_EN_DISP_PATH0
+                MT8183_DISP_OVL0_2L_MOUT_EN, MT8183_OVL0_2L_MOUT_EN_DISP_PATH0,
+                MT8183_OVL0_2L_MOUT_EN_DISP_PATH0
         }, {
                 DDP_COMPONENT_OVL_2L1, DDP_COMPONENT_RDMA1,
-                MT8183_DISP_OVL1_2L_MOUT_EN, MT8183_OVL1_2L_MOUT_EN_RDMA1
+                MT8183_DISP_OVL1_2L_MOUT_EN, MT8183_OVL1_2L_MOUT_EN_RDMA1,
+                MT8183_OVL1_2L_MOUT_EN_RDMA1
         }, {
                 DDP_COMPONENT_DITHER, DDP_COMPONENT_DSI0,
-                MT8183_DISP_DITHER0_MOUT_EN, MT8183_DITHER0_MOUT_IN_DSI0
+                MT8183_DISP_DITHER0_MOUT_EN, MT8183_DITHER0_MOUT_IN_DSI0,
+                MT8183_DITHER0_MOUT_IN_DSI0
         }, {
                 DDP_COMPONENT_OVL_2L0, DDP_COMPONENT_RDMA0,
-                MT8183_DISP_PATH0_SEL_IN, MT8183_DISP_PATH0_SEL_IN_OVL0_2L
+                MT8183_DISP_PATH0_SEL_IN, MT8183_DISP_PATH0_SEL_IN_OVL0_2L,
+                MT8183_DISP_PATH0_SEL_IN_OVL0_2L
         }, {
                 DDP_COMPONENT_RDMA1, DDP_COMPONENT_DPI0,
-                MT8183_DISP_DPI0_SEL_IN, MT8183_DPI0_SEL_IN_RDMA1
+                MT8183_DISP_DPI0_SEL_IN, MT8183_DPI0_SEL_IN_RDMA1,
+                MT8183_DPI0_SEL_IN_RDMA1
         }, {
                 DDP_COMPONENT_RDMA0, DDP_COMPONENT_COLOR0,
-                MT8183_DISP_RDMA0_SOUT_SEL_IN, MT8183_RDMA0_SOUT_COLOR0
+                MT8183_DISP_RDMA0_SOUT_SEL_IN, MT8183_RDMA0_SOUT_COLOR0,
+                MT8183_RDMA0_SOUT_COLOR0
         }
 };
 
+60
drivers/soc/mediatek/mt8365-mmsys.h
@@
+/* SPDX-License-Identifier: GPL-2.0-only */
+
+#ifndef __SOC_MEDIATEK_MT8365_MMSYS_H
+#define __SOC_MEDIATEK_MT8365_MMSYS_H
+
+#define MT8365_DISP_REG_CONFIG_DISP_OVL0_MOUT_EN        0xf3c
+#define MT8365_DISP_REG_CONFIG_DISP_RDMA0_SOUT_SEL      0xf4c
+#define MT8365_DISP_REG_CONFIG_DISP_DITHER0_MOUT_EN     0xf50
+#define MT8365_DISP_REG_CONFIG_DISP_RDMA0_SEL_IN        0xf54
+#define MT8365_DISP_REG_CONFIG_DISP_RDMA0_RSZ0_SEL_IN   0xf60
+#define MT8365_DISP_REG_CONFIG_DISP_COLOR0_SEL_IN       0xf64
+#define MT8365_DISP_REG_CONFIG_DISP_DSI0_SEL_IN         0xf68
+
+#define MT8365_RDMA0_SOUT_COLOR0        0x1
+#define MT8365_DITHER_MOUT_EN_DSI0      0x1
+#define MT8365_DSI0_SEL_IN_DITHER       0x1
+#define MT8365_RDMA0_SEL_IN_OVL0        0x0
+#define MT8365_RDMA0_RSZ0_SEL_IN_RDMA0  0x0
+#define MT8365_DISP_COLOR_SEL_IN_COLOR0 0x0
+#define MT8365_OVL0_MOUT_PATH0_SEL      BIT(0)
+
+static const struct mtk_mmsys_routes mt8365_mmsys_routing_table[] = {
+        {
+                DDP_COMPONENT_OVL0, DDP_COMPONENT_RDMA0,
+                MT8365_DISP_REG_CONFIG_DISP_OVL0_MOUT_EN,
+                MT8365_OVL0_MOUT_PATH0_SEL, MT8365_OVL0_MOUT_PATH0_SEL
+        },
+        {
+                DDP_COMPONENT_OVL0, DDP_COMPONENT_RDMA0,
+                MT8365_DISP_REG_CONFIG_DISP_RDMA0_SEL_IN,
+                MT8365_RDMA0_SEL_IN_OVL0, MT8365_RDMA0_SEL_IN_OVL0
+        },
+        {
+                DDP_COMPONENT_RDMA0, DDP_COMPONENT_COLOR0,
+                MT8365_DISP_REG_CONFIG_DISP_RDMA0_SOUT_SEL,
+                MT8365_RDMA0_SOUT_COLOR0, MT8365_RDMA0_SOUT_COLOR0
+        },
+        {
+                DDP_COMPONENT_COLOR0, DDP_COMPONENT_CCORR,
+                MT8365_DISP_REG_CONFIG_DISP_COLOR0_SEL_IN,
+                MT8365_DISP_COLOR_SEL_IN_COLOR0,MT8365_DISP_COLOR_SEL_IN_COLOR0
+        },
+        {
+                DDP_COMPONENT_DITHER, DDP_COMPONENT_DSI0,
+                MT8365_DISP_REG_CONFIG_DISP_DITHER0_MOUT_EN,
+                MT8365_DITHER_MOUT_EN_DSI0, MT8365_DITHER_MOUT_EN_DSI0
+        },
+        {
+                DDP_COMPONENT_DITHER, DDP_COMPONENT_DSI0,
+                MT8365_DISP_REG_CONFIG_DISP_DSI0_SEL_IN,
+                MT8365_DSI0_SEL_IN_DITHER, MT8365_DSI0_SEL_IN_DITHER
+        },
+        {
+                DDP_COMPONENT_RDMA0, DDP_COMPONENT_COLOR0,
+                MT8365_DISP_REG_CONFIG_DISP_RDMA0_RSZ0_SEL_IN,
+                MT8365_RDMA0_RSZ0_SEL_IN_RDMA0, MT8365_RDMA0_RSZ0_SEL_IN_RDMA0
+        },
+};
+
+#endif /* __SOC_MEDIATEK_MT8365_MMSYS_H */
+16 -2
drivers/soc/mediatek/mtk-mmsys.c
@@
 #include "mtk-mmsys.h"
 #include "mt8167-mmsys.h"
 #include "mt8183-mmsys.h"
+#include "mt8365-mmsys.h"
 
 static const struct mtk_mmsys_driver_data mt2701_mmsys_driver_data = {
         .clk_driver = "clk-mt2701-mm",
@@
         .num_routes = ARRAY_SIZE(mmsys_mt8183_routing_table),
 };
 
+static const struct mtk_mmsys_driver_data mt8365_mmsys_driver_data = {
+        .clk_driver = "clk-mt8365-mm",
+        .routes = mt8365_mmsys_routing_table,
+        .num_routes = ARRAY_SIZE(mt8365_mmsys_routing_table),
+};
+
 struct mtk_mmsys {
         void __iomem *regs;
         const struct mtk_mmsys_driver_data *data;
@@
 
         for (i = 0; i < mmsys->data->num_routes; i++)
                 if (cur == routes[i].from_comp && next == routes[i].to_comp) {
-                        reg = readl_relaxed(mmsys->regs + routes[i].addr) | routes[i].val;
+                        reg = readl_relaxed(mmsys->regs + routes[i].addr);
+                        reg &= ~routes[i].mask;
+                        reg |= routes[i].val;
                         writel_relaxed(reg, mmsys->regs + routes[i].addr);
                 }
 }
@@
 
         for (i = 0; i < mmsys->data->num_routes; i++)
                 if (cur == routes[i].from_comp && next == routes[i].to_comp) {
-                        reg = readl_relaxed(mmsys->regs + routes[i].addr) & ~routes[i].val;
+                        reg = readl_relaxed(mmsys->regs + routes[i].addr);
+                        reg &= ~routes[i].mask;
                         writel_relaxed(reg, mmsys->regs + routes[i].addr);
                 }
 }
@@
         {
                 .compatible = "mediatek,mt8183-mmsys",
                 .data = &mt8183_mmsys_driver_data,
+        },
+        {
+                .compatible = "mediatek,mt8365-mmsys",
+                .data = &mt8365_mmsys_driver_data,
         },
         { }
 };
+97 -40
drivers/soc/mediatek/mtk-mmsys.h
··· 35 35 #define RDMA0_SOUT_DSI1 0x1 36 36 #define RDMA0_SOUT_DSI2 0x4 37 37 #define RDMA0_SOUT_DSI3 0x5 38 + #define RDMA0_SOUT_MASK 0x7 38 39 #define RDMA1_SOUT_DPI0 0x2 39 40 #define RDMA1_SOUT_DPI1 0x3 40 41 #define RDMA1_SOUT_DSI1 0x1 41 42 #define RDMA1_SOUT_DSI2 0x4 42 43 #define RDMA1_SOUT_DSI3 0x5 44 + #define RDMA1_SOUT_MASK 0x7 43 45 #define RDMA2_SOUT_DPI0 0x2 44 46 #define RDMA2_SOUT_DPI1 0x3 45 47 #define RDMA2_SOUT_DSI1 0x1 46 48 #define RDMA2_SOUT_DSI2 0x4 47 49 #define RDMA2_SOUT_DSI3 0x5 50 + #define RDMA2_SOUT_MASK 0x7 48 51 #define DPI0_SEL_IN_RDMA1 0x1 49 52 #define DPI0_SEL_IN_RDMA2 0x3 53 + #define DPI0_SEL_IN_MASK 0x3 50 54 #define DPI1_SEL_IN_RDMA1 (0x1 << 8) 51 55 #define DPI1_SEL_IN_RDMA2 (0x3 << 8) 56 + #define DPI1_SEL_IN_MASK (0x3 << 8) 52 57 #define DSI0_SEL_IN_RDMA1 0x1 53 58 #define DSI0_SEL_IN_RDMA2 0x4 59 + #define DSI0_SEL_IN_MASK 0x7 54 60 #define DSI1_SEL_IN_RDMA1 0x1 55 61 #define DSI1_SEL_IN_RDMA2 0x4 62 + #define DSI1_SEL_IN_MASK 0x7 56 63 #define DSI2_SEL_IN_RDMA1 (0x1 << 16) 57 64 #define DSI2_SEL_IN_RDMA2 (0x4 << 16) 65 + #define DSI2_SEL_IN_MASK (0x7 << 16) 58 66 #define DSI3_SEL_IN_RDMA1 (0x1 << 16) 59 67 #define DSI3_SEL_IN_RDMA2 (0x4 << 16) 68 + #define DSI3_SEL_IN_MASK (0x7 << 16) 60 69 #define COLOR1_SEL_IN_OVL1 0x1 61 70 62 71 #define OVL_MOUT_EN_RDMA 0x1 63 72 #define BLS_TO_DSI_RDMA1_TO_DPI1 0x8 64 73 #define BLS_TO_DPI_RDMA1_TO_DSI 0x2 74 + #define BLS_RDMA1_DSI_DPI_MASK 0xf 65 75 #define DSI_SEL_IN_BLS 0x0 66 76 #define DPI_SEL_IN_BLS 0x0 77 + #define DPI_SEL_IN_MASK 0x1 67 78 #define DSI_SEL_IN_RDMA 0x1 79 + #define DSI_SEL_IN_MASK 0x1 68 80 69 81 struct mtk_mmsys_routes { 70 82 u32 from_comp; 71 83 u32 to_comp; 72 84 u32 addr; 85 + u32 mask; 73 86 u32 val; 74 87 }; 75 88 ··· 104 91 static const struct mtk_mmsys_routes mmsys_default_routing_table[] = { 105 92 { 106 93 DDP_COMPONENT_BLS, DDP_COMPONENT_DSI0, 107 - DISP_REG_CONFIG_OUT_SEL, BLS_TO_DSI_RDMA1_TO_DPI1 94 + DISP_REG_CONFIG_OUT_SEL, 
BLS_RDMA1_DSI_DPI_MASK, 95 + BLS_TO_DSI_RDMA1_TO_DPI1 108 96 }, { 109 97 DDP_COMPONENT_BLS, DDP_COMPONENT_DSI0, 110 - DISP_REG_CONFIG_DSI_SEL, DSI_SEL_IN_BLS 98 + DISP_REG_CONFIG_DSI_SEL, DSI_SEL_IN_MASK, 99 + DSI_SEL_IN_BLS 111 100 }, { 112 101 DDP_COMPONENT_BLS, DDP_COMPONENT_DPI0, 113 - DISP_REG_CONFIG_OUT_SEL, BLS_TO_DPI_RDMA1_TO_DSI 102 + DISP_REG_CONFIG_OUT_SEL, BLS_RDMA1_DSI_DPI_MASK, 103 + BLS_TO_DPI_RDMA1_TO_DSI 114 104 }, { 115 105 DDP_COMPONENT_BLS, DDP_COMPONENT_DPI0, 116 - DISP_REG_CONFIG_DSI_SEL, DSI_SEL_IN_RDMA 106 + DISP_REG_CONFIG_DSI_SEL, DSI_SEL_IN_MASK, 107 + DSI_SEL_IN_RDMA 117 108 }, { 118 109 DDP_COMPONENT_BLS, DDP_COMPONENT_DPI0, 119 - DISP_REG_CONFIG_DPI_SEL, DPI_SEL_IN_BLS 110 + DISP_REG_CONFIG_DPI_SEL, DPI_SEL_IN_MASK, 111 + DPI_SEL_IN_BLS 120 112 }, { 121 113 DDP_COMPONENT_GAMMA, DDP_COMPONENT_RDMA1, 122 - DISP_REG_CONFIG_DISP_GAMMA_MOUT_EN, GAMMA_MOUT_EN_RDMA1 114 + DISP_REG_CONFIG_DISP_GAMMA_MOUT_EN, GAMMA_MOUT_EN_RDMA1, 115 + GAMMA_MOUT_EN_RDMA1 123 116 }, { 124 117 DDP_COMPONENT_OD0, DDP_COMPONENT_RDMA0, 125 - DISP_REG_CONFIG_DISP_OD_MOUT_EN, OD_MOUT_EN_RDMA0 118 + DISP_REG_CONFIG_DISP_OD_MOUT_EN, OD_MOUT_EN_RDMA0, 119 + OD_MOUT_EN_RDMA0 126 120 }, { 127 121 DDP_COMPONENT_OD1, DDP_COMPONENT_RDMA1, 128 - DISP_REG_CONFIG_DISP_OD_MOUT_EN, OD1_MOUT_EN_RDMA1 122 + DISP_REG_CONFIG_DISP_OD_MOUT_EN, OD1_MOUT_EN_RDMA1, 123 + OD1_MOUT_EN_RDMA1 129 124 }, { 130 125 DDP_COMPONENT_OVL0, DDP_COMPONENT_COLOR0, 131 - DISP_REG_CONFIG_DISP_OVL0_MOUT_EN, OVL0_MOUT_EN_COLOR0 126 + DISP_REG_CONFIG_DISP_OVL0_MOUT_EN, OVL0_MOUT_EN_COLOR0, 127 + OVL0_MOUT_EN_COLOR0 132 128 }, { 133 129 DDP_COMPONENT_OVL0, DDP_COMPONENT_COLOR0, 134 - DISP_REG_CONFIG_DISP_COLOR0_SEL_IN, COLOR0_SEL_IN_OVL0 130 + DISP_REG_CONFIG_DISP_COLOR0_SEL_IN, COLOR0_SEL_IN_OVL0, 131 + COLOR0_SEL_IN_OVL0 135 132 }, { 136 133 DDP_COMPONENT_OVL0, DDP_COMPONENT_RDMA0, 137 - DISP_REG_CONFIG_DISP_OVL_MOUT_EN, OVL_MOUT_EN_RDMA 134 + DISP_REG_CONFIG_DISP_OVL_MOUT_EN, OVL_MOUT_EN_RDMA, 135 + 
 		OVL_MOUT_EN_RDMA
 	}, {
 		DDP_COMPONENT_OVL1, DDP_COMPONENT_COLOR1,
-		DISP_REG_CONFIG_DISP_OVL1_MOUT_EN, OVL1_MOUT_EN_COLOR1
+		DISP_REG_CONFIG_DISP_OVL1_MOUT_EN, OVL1_MOUT_EN_COLOR1,
+		OVL1_MOUT_EN_COLOR1
 	}, {
 		DDP_COMPONENT_OVL1, DDP_COMPONENT_COLOR1,
-		DISP_REG_CONFIG_DISP_COLOR1_SEL_IN, COLOR1_SEL_IN_OVL1
+		DISP_REG_CONFIG_DISP_COLOR1_SEL_IN, COLOR1_SEL_IN_OVL1,
+		COLOR1_SEL_IN_OVL1
 	}, {
 		DDP_COMPONENT_RDMA0, DDP_COMPONENT_DPI0,
-		DISP_REG_CONFIG_DISP_RDMA0_SOUT_EN, RDMA0_SOUT_DPI0
+		DISP_REG_CONFIG_DISP_RDMA0_SOUT_EN, RDMA0_SOUT_MASK,
+		RDMA0_SOUT_DPI0
 	}, {
 		DDP_COMPONENT_RDMA0, DDP_COMPONENT_DPI1,
-		DISP_REG_CONFIG_DISP_RDMA0_SOUT_EN, RDMA0_SOUT_DPI1
+		DISP_REG_CONFIG_DISP_RDMA0_SOUT_EN, RDMA0_SOUT_MASK,
+		RDMA0_SOUT_DPI1
 	}, {
 		DDP_COMPONENT_RDMA0, DDP_COMPONENT_DSI1,
-		DISP_REG_CONFIG_DISP_RDMA0_SOUT_EN, RDMA0_SOUT_DSI1
+		DISP_REG_CONFIG_DISP_RDMA0_SOUT_EN, RDMA0_SOUT_MASK,
+		RDMA0_SOUT_DSI1
 	}, {
 		DDP_COMPONENT_RDMA0, DDP_COMPONENT_DSI2,
-		DISP_REG_CONFIG_DISP_RDMA0_SOUT_EN, RDMA0_SOUT_DSI2
+		DISP_REG_CONFIG_DISP_RDMA0_SOUT_EN, RDMA0_SOUT_MASK,
+		RDMA0_SOUT_DSI2
 	}, {
 		DDP_COMPONENT_RDMA0, DDP_COMPONENT_DSI3,
-		DISP_REG_CONFIG_DISP_RDMA0_SOUT_EN, RDMA0_SOUT_DSI3
+		DISP_REG_CONFIG_DISP_RDMA0_SOUT_EN, RDMA0_SOUT_MASK,
+		RDMA0_SOUT_DSI3
 	}, {
 		DDP_COMPONENT_RDMA1, DDP_COMPONENT_DPI0,
-		DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN, RDMA1_SOUT_DPI0
+		DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN, RDMA1_SOUT_MASK,
+		RDMA1_SOUT_DPI0
 	}, {
 		DDP_COMPONENT_RDMA1, DDP_COMPONENT_DPI0,
-		DISP_REG_CONFIG_DPI_SEL_IN, DPI0_SEL_IN_RDMA1
+		DISP_REG_CONFIG_DPI_SEL_IN, DPI0_SEL_IN_MASK,
+		DPI0_SEL_IN_RDMA1
 	}, {
 		DDP_COMPONENT_RDMA1, DDP_COMPONENT_DPI1,
-		DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN, RDMA1_SOUT_DPI1
+		DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN, RDMA1_SOUT_MASK,
+		RDMA1_SOUT_DPI1
 	}, {
 		DDP_COMPONENT_RDMA1, DDP_COMPONENT_DPI1,
-		DISP_REG_CONFIG_DPI_SEL_IN, DPI1_SEL_IN_RDMA1
+		DISP_REG_CONFIG_DPI_SEL_IN, DPI1_SEL_IN_MASK,
+		DPI1_SEL_IN_RDMA1
 	}, {
 		DDP_COMPONENT_RDMA1, DDP_COMPONENT_DSI0,
-		DISP_REG_CONFIG_DSIE_SEL_IN, DSI0_SEL_IN_RDMA1
+		DISP_REG_CONFIG_DSIE_SEL_IN, DSI0_SEL_IN_MASK,
+		DSI0_SEL_IN_RDMA1
 	}, {
 		DDP_COMPONENT_RDMA1, DDP_COMPONENT_DSI1,
-		DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN, RDMA1_SOUT_DSI1
+		DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN, RDMA1_SOUT_MASK,
+		RDMA1_SOUT_DSI1
 	}, {
 		DDP_COMPONENT_RDMA1, DDP_COMPONENT_DSI1,
-		DISP_REG_CONFIG_DSIO_SEL_IN, DSI1_SEL_IN_RDMA1
+		DISP_REG_CONFIG_DSIO_SEL_IN, DSI1_SEL_IN_MASK,
+		DSI1_SEL_IN_RDMA1
 	}, {
 		DDP_COMPONENT_RDMA1, DDP_COMPONENT_DSI2,
-		DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN, RDMA1_SOUT_DSI2
+		DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN, RDMA1_SOUT_MASK,
+		RDMA1_SOUT_DSI2
 	}, {
 		DDP_COMPONENT_RDMA1, DDP_COMPONENT_DSI2,
-		DISP_REG_CONFIG_DSIE_SEL_IN, DSI2_SEL_IN_RDMA1
+		DISP_REG_CONFIG_DSIE_SEL_IN, DSI2_SEL_IN_MASK,
+		DSI2_SEL_IN_RDMA1
 	}, {
 		DDP_COMPONENT_RDMA1, DDP_COMPONENT_DSI3,
-		DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN, RDMA1_SOUT_DSI3
+		DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN, RDMA1_SOUT_MASK,
+		RDMA1_SOUT_DSI3
 	}, {
 		DDP_COMPONENT_RDMA1, DDP_COMPONENT_DSI3,
-		DISP_REG_CONFIG_DSIO_SEL_IN, DSI3_SEL_IN_RDMA1
+		DISP_REG_CONFIG_DSIO_SEL_IN, DSI3_SEL_IN_MASK,
+		DSI3_SEL_IN_RDMA1
 	}, {
 		DDP_COMPONENT_RDMA2, DDP_COMPONENT_DPI0,
-		DISP_REG_CONFIG_DISP_RDMA2_SOUT, RDMA2_SOUT_DPI0
+		DISP_REG_CONFIG_DISP_RDMA2_SOUT, RDMA2_SOUT_MASK,
+		RDMA2_SOUT_DPI0
 	}, {
 		DDP_COMPONENT_RDMA2, DDP_COMPONENT_DPI0,
-		DISP_REG_CONFIG_DPI_SEL_IN, DPI0_SEL_IN_RDMA2
+		DISP_REG_CONFIG_DPI_SEL_IN, DPI0_SEL_IN_MASK,
+		DPI0_SEL_IN_RDMA2
 	}, {
 		DDP_COMPONENT_RDMA2, DDP_COMPONENT_DPI1,
-		DISP_REG_CONFIG_DISP_RDMA2_SOUT, RDMA2_SOUT_DPI1
+		DISP_REG_CONFIG_DISP_RDMA2_SOUT, RDMA2_SOUT_MASK,
+		RDMA2_SOUT_DPI1
 	}, {
 		DDP_COMPONENT_RDMA2, DDP_COMPONENT_DPI1,
-		DISP_REG_CONFIG_DPI_SEL_IN, DPI1_SEL_IN_RDMA2
+		DISP_REG_CONFIG_DPI_SEL_IN, DPI1_SEL_IN_MASK,
+		DPI1_SEL_IN_RDMA2
 	}, {
 		DDP_COMPONENT_RDMA2, DDP_COMPONENT_DSI0,
-		DISP_REG_CONFIG_DSIE_SEL_IN, DSI0_SEL_IN_RDMA2
+		DISP_REG_CONFIG_DSIE_SEL_IN, DSI0_SEL_IN_MASK,
+		DSI0_SEL_IN_RDMA2
 	}, {
 		DDP_COMPONENT_RDMA2, DDP_COMPONENT_DSI1,
-		DISP_REG_CONFIG_DISP_RDMA2_SOUT, RDMA2_SOUT_DSI1
+		DISP_REG_CONFIG_DISP_RDMA2_SOUT, RDMA2_SOUT_MASK,
+		RDMA2_SOUT_DSI1
 	}, {
 		DDP_COMPONENT_RDMA2, DDP_COMPONENT_DSI1,
-		DISP_REG_CONFIG_DSIO_SEL_IN, DSI1_SEL_IN_RDMA2
+		DISP_REG_CONFIG_DSIO_SEL_IN, DSI1_SEL_IN_MASK,
+		DSI1_SEL_IN_RDMA2
 	}, {
 		DDP_COMPONENT_RDMA2, DDP_COMPONENT_DSI2,
-		DISP_REG_CONFIG_DISP_RDMA2_SOUT, RDMA2_SOUT_DSI2
+		DISP_REG_CONFIG_DISP_RDMA2_SOUT, RDMA2_SOUT_MASK,
+		RDMA2_SOUT_DSI2
 	}, {
 		DDP_COMPONENT_RDMA2, DDP_COMPONENT_DSI2,
-		DISP_REG_CONFIG_DSIE_SEL_IN, DSI2_SEL_IN_RDMA2
+		DISP_REG_CONFIG_DSIE_SEL_IN, DSI2_SEL_IN_MASK,
+		DSI2_SEL_IN_RDMA2
 	}, {
 		DDP_COMPONENT_RDMA2, DDP_COMPONENT_DSI3,
-		DISP_REG_CONFIG_DISP_RDMA2_SOUT, RDMA2_SOUT_DSI3
+		DISP_REG_CONFIG_DISP_RDMA2_SOUT, RDMA2_SOUT_MASK,
+		RDMA2_SOUT_DSI3
 	}, {
 		DDP_COMPONENT_RDMA2, DDP_COMPONENT_DSI3,
-		DISP_REG_CONFIG_DSIO_SEL_IN, DSI3_SEL_IN_RDMA2
+		DISP_REG_CONFIG_DSIO_SEL_IN, DSI3_SEL_IN_MASK,
+		DSI3_SEL_IN_RDMA2
+	}, {
+		DDP_COMPONENT_UFOE, DDP_COMPONENT_DSI0,
+		DISP_REG_CONFIG_DISP_UFOE_MOUT_EN, UFOE_MOUT_EN_DSI0,
+		UFOE_MOUT_EN_DSI0
 	}
 };
+1 -1
drivers/soc/mediatek/mtk-pm-domains.h
···
 #define BUS_PROT_UPDATE_TOPAXI(_mask)			\
 		BUS_PROT_UPDATE(_mask,			\
 				INFRA_TOPAXI_PROTECTEN,	\
-				INFRA_TOPAXI_PROTECTEN_CLR,	\
+				INFRA_TOPAXI_PROTECTEN,	\
 				INFRA_TOPAXI_PROTECTSTA1)
 
 struct scpsys_bus_prot_data {
+5 -38
drivers/soc/qcom/cpr.c
···
 	return ret;
 }
 
-static int cpr_read_efuse(struct device *dev, const char *cname, u32 *data)
-{
-	struct nvmem_cell *cell;
-	ssize_t len;
-	char *ret;
-	int i;
-
-	*data = 0;
-
-	cell = nvmem_cell_get(dev, cname);
-	if (IS_ERR(cell)) {
-		if (PTR_ERR(cell) != -EPROBE_DEFER)
-			dev_err(dev, "undefined cell %s\n", cname);
-		return PTR_ERR(cell);
-	}
-
-	ret = nvmem_cell_read(cell, &len);
-	nvmem_cell_put(cell);
-	if (IS_ERR(ret)) {
-		dev_err(dev, "can't read cell %s\n", cname);
-		return PTR_ERR(ret);
-	}
-
-	for (i = 0; i < len; i++)
-		*data |= ret[i] << (8 * i);
-
-	kfree(ret);
-	dev_dbg(dev, "efuse read(%s) = %x, bytes %zd\n", cname, *data, len);
-
-	return 0;
-}
-
 static int
 cpr_populate_ring_osc_idx(struct cpr_drv *drv)
 {
···
 	int ret;
 
 	for (; fuse < end; fuse++, fuses++) {
-		ret = cpr_read_efuse(drv->dev, fuses->ring_osc,
-				     &data);
+		ret = nvmem_cell_read_variable_le_u32(drv->dev, fuses->ring_osc, &data);
 		if (ret)
 			return ret;
 		fuse->ring_osc_idx = data;
···
 	u32 bits = 0;
 	int ret;
 
-	ret = cpr_read_efuse(drv->dev, init_v_efuse, &bits);
+	ret = nvmem_cell_read_variable_le_u32(drv->dev, init_v_efuse, &bits);
 	if (ret)
 		return ret;
···
 	}
 
 	/* Populate target quotient by scaling */
-	ret = cpr_read_efuse(drv->dev, fuses->quotient, &fuse->quot);
+	ret = nvmem_cell_read_variable_le_u32(drv->dev, fuses->quotient, &fuse->quot);
 	if (ret)
 		return ret;
···
 	prev_fuse = fuse - 1;
 
 	if (quot_offset) {
-		ret = cpr_read_efuse(drv->dev, quot_offset, &quot_diff);
+		ret = nvmem_cell_read_variable_le_u32(drv->dev, quot_offset, &quot_diff);
 		if (ret)
 			return ret;
···
 	 * initialized after attaching to the power domain,
 	 * since it depends on the CPU's OPP table.
 	 */
-	ret = cpr_read_efuse(dev, "cpr_fuse_revision", &cpr_rev);
+	ret = nvmem_cell_read_variable_le_u32(dev, "cpr_fuse_revision", &cpr_rev);
 	if (ret)
 		return ret;
 
+12 -6
drivers/soc/qcom/mdt_loader.c
···
 	metadata = qcom_mdt_read_metadata(fw, &metadata_len);
 	if (IS_ERR(metadata)) {
 		ret = PTR_ERR(metadata);
+		dev_err(dev, "error %d reading firmware %s metadata\n",
+			ret, fw_name);
 		goto out;
 	}
 
···
 
 		kfree(metadata);
 		if (ret) {
-			dev_err(dev, "invalid firmware metadata\n");
+			/* Invalid firmware metadata */
+			dev_err(dev, "error %d initializing firmware %s\n",
+				ret, fw_name);
 			goto out;
 		}
 	}
···
 		ret = qcom_scm_pas_mem_setup(pas_id, mem_phys,
 					     max_addr - min_addr);
 		if (ret) {
-			dev_err(dev, "unable to setup relocation\n");
+			/* Unable to set up relocation */
+			dev_err(dev, "error %d setting up firmware %s\n",
+				ret, fw_name);
 			goto out;
 		}
 	}
···
 		if (phdr->p_filesz && phdr->p_offset < fw->size) {
 			/* Firmware is large enough to be non-split */
 			if (phdr->p_offset + phdr->p_filesz > fw->size) {
-				dev_err(dev,
-					"failed to load segment %d from truncated file %s\n",
-					i, firmware);
+				dev_err(dev, "file %s segment %d would be truncated\n",
+					fw_name, i);
 				ret = -EINVAL;
 				break;
 			}
···
 			ret = request_firmware_into_buf(&seg_fw, fw_name, dev,
 							ptr, phdr->p_filesz);
 			if (ret) {
-				dev_err(dev, "failed to load %s\n", fw_name);
+				dev_err(dev, "error %d loading %s\n",
+					ret, fw_name);
 				break;
 			}
 
+28 -2
drivers/soc/qcom/qcom-geni-se.c
···
 #define GENI_OUTPUT_CTRL		0x24
 #define GENI_CGC_CTRL			0x28
 #define GENI_CLK_CTRL_RO		0x60
-#define GENI_IF_DISABLE_RO		0x64
 #define GENI_FW_S_REVISION_RO		0x6c
 #define SE_GENI_BYTE_GRAN		0x254
 #define SE_GENI_TX_PACKING_CFG0		0x260
···
 	writel_relaxed(val, se->base + SE_GENI_DMA_MODE_EN);
 }
 
+static void geni_se_select_gpi_mode(struct geni_se *se)
+{
+	u32 val;
+
+	geni_se_irq_clear(se);
+
+	writel(0, se->base + SE_IRQ_EN);
+
+	val = readl(se->base + SE_GENI_S_IRQ_EN);
+	val &= ~S_CMD_DONE_EN;
+	writel(val, se->base + SE_GENI_S_IRQ_EN);
+
+	val = readl(se->base + SE_GENI_M_IRQ_EN);
+	val &= ~(M_CMD_DONE_EN | M_TX_FIFO_WATERMARK_EN |
+		 M_RX_FIFO_WATERMARK_EN | M_RX_FIFO_LAST_EN);
+	writel(val, se->base + SE_GENI_M_IRQ_EN);
+
+	writel(GENI_DMA_MODE_EN, se->base + SE_GENI_DMA_MODE_EN);
+
+	val = readl(se->base + SE_GSI_EVENT_EN);
+	val |= (DMA_RX_EVENT_EN | DMA_TX_EVENT_EN | GENI_M_EVENT_EN | GENI_S_EVENT_EN);
+	writel(val, se->base + SE_GSI_EVENT_EN);
+}
+
 /**
  * geni_se_select_mode() - Select the serial engine transfer mode
  * @se:		Pointer to the concerned serial engine.
···
  */
 void geni_se_select_mode(struct geni_se *se, enum geni_se_xfer_mode mode)
 {
-	WARN_ON(mode != GENI_SE_FIFO && mode != GENI_SE_DMA);
+	WARN_ON(mode != GENI_SE_FIFO && mode != GENI_SE_DMA && mode != GENI_GPI_DMA);
 
 	switch (mode) {
 	case GENI_SE_FIFO:
···
 		break;
 	case GENI_SE_DMA:
 		geni_se_select_dma_mode(se);
+		break;
+	case GENI_GPI_DMA:
+		geni_se_select_gpi_mode(se);
 		break;
 	case GENI_SE_INVALID:
 	default:
+7 -2
drivers/soc/qcom/qcom_aoss.c
···
 static int qmp_cooling_devices_register(struct qmp *qmp)
 {
 	struct device_node *np, *child;
-	int count = QMP_NUM_COOLING_RESOURCES;
+	int count = 0;
 	int ret;
 
 	np = qmp->dev->of_node;
 
-	qmp->cooling_devs = devm_kcalloc(qmp->dev, count,
+	qmp->cooling_devs = devm_kcalloc(qmp->dev, QMP_NUM_COOLING_RESOURCES,
 					 sizeof(*qmp->cooling_devs),
 					 GFP_KERNEL);
 
···
 			goto unroll;
 	}
 
+	if (!count)
+		devm_kfree(qmp->dev, qmp->cooling_devs);
+
 	return 0;
 
 unroll:
 	while (--count >= 0)
 		thermal_cooling_device_unregister
 			(qmp->cooling_devs[count].cdev);
+	devm_kfree(qmp->dev, qmp->cooling_devs);
 
 	return ret;
 }
···
 	{ .compatible = "qcom,sm8150-aoss-qmp", },
 	{ .compatible = "qcom,sm8250-aoss-qmp", },
 	{ .compatible = "qcom,sm8350-aoss-qmp", },
+	{ .compatible = "qcom,aoss-qmp", },
 	{}
 };
 MODULE_DEVICE_TABLE(of, qmp_dt_match);
+2 -3
drivers/soc/qcom/rpmhpd.c
···
 static int rpmhpd_power_off(struct generic_pm_domain *domain)
 {
 	struct rpmhpd *pd = domain_to_rpmhpd(domain);
-	int ret = 0;
+	int ret;
 
 	mutex_lock(&rpmhpd_lock);
 
-	ret = rpmhpd_aggregate_corner(pd, pd->level[0]);
-
+	ret = rpmhpd_aggregate_corner(pd, 0);
 	if (!ret)
 		pd->enabled = false;
 
+28
drivers/soc/qcom/rpmpd.c
···
 	.max_state = RPM_SMD_LEVEL_TURBO,
 };
 
+/* sm4250/6115 RPM Power domains */
+DEFINE_RPMPD_PAIR(sm6115, vddcx, vddcx_ao, RWCX, LEVEL, 0);
+DEFINE_RPMPD_VFL(sm6115, vddcx_vfl, RWCX, 0);
+
+DEFINE_RPMPD_PAIR(sm6115, vddmx, vddmx_ao, RWMX, LEVEL, 0);
+DEFINE_RPMPD_VFL(sm6115, vddmx_vfl, RWMX, 0);
+
+DEFINE_RPMPD_LEVEL(sm6115, vdd_lpi_cx, RWLC, 0);
+DEFINE_RPMPD_LEVEL(sm6115, vdd_lpi_mx, RWLM, 0);
+
+static struct rpmpd *sm6115_rpmpds[] = {
+	[SM6115_VDDCX] = &sm6115_vddcx,
+	[SM6115_VDDCX_AO] = &sm6115_vddcx_ao,
+	[SM6115_VDDCX_VFL] = &sm6115_vddcx_vfl,
+	[SM6115_VDDMX] = &sm6115_vddmx,
+	[SM6115_VDDMX_AO] = &sm6115_vddmx_ao,
+	[SM6115_VDDMX_VFL] = &sm6115_vddmx_vfl,
+	[SM6115_VDD_LPI_CX] = &sm6115_vdd_lpi_cx,
+	[SM6115_VDD_LPI_MX] = &sm6115_vdd_lpi_mx,
+};
+
+static const struct rpmpd_desc sm6115_desc = {
+	.rpmpds = sm6115_rpmpds,
+	.num_pds = ARRAY_SIZE(sm6115_rpmpds),
+	.max_state = RPM_SMD_LEVEL_TURBO_NO_CPR,
+};
+
 static const struct of_device_id rpmpd_match_table[] = {
 	{ .compatible = "qcom,mdm9607-rpmpd", .data = &mdm9607_desc },
 	{ .compatible = "qcom,msm8916-rpmpd", .data = &msm8916_desc },
···
 	{ .compatible = "qcom,msm8998-rpmpd", .data = &msm8998_desc },
 	{ .compatible = "qcom,qcs404-rpmpd", .data = &qcs404_desc },
 	{ .compatible = "qcom,sdm660-rpmpd", .data = &sdm660_desc },
+	{ .compatible = "qcom,sm6115-rpmpd", .data = &sm6115_desc },
 	{ }
 };
 MODULE_DEVICE_TABLE(of, rpmpd_match_table);
+1
drivers/soc/qcom/smd-rpm.c
···
 	{ .compatible = "qcom,rpm-msm8996" },
 	{ .compatible = "qcom,rpm-msm8998" },
 	{ .compatible = "qcom,rpm-sdm660" },
+	{ .compatible = "qcom,rpm-sm6115" },
 	{ .compatible = "qcom,rpm-sm6125" },
 	{ .compatible = "qcom,rpm-qcs404" },
 	{}
+25 -3
drivers/soc/qcom/smsm.c
···
 	DECLARE_BITMAP(irq_enabled, 32);
 	DECLARE_BITMAP(irq_rising, 32);
 	DECLARE_BITMAP(irq_falling, 32);
-	u32 last_value;
+	unsigned long last_value;
 
 	u32 *remote_state;
 	u32 *subscription;
···
 	u32 val;
 
 	val = readl(entry->remote_state);
-	changed = val ^ entry->last_value;
-	entry->last_value = val;
+	changed = val ^ xchg(&entry->last_value, val);
 
 	for_each_set_bit(i, entry->irq_enabled, 32) {
 		if (!(changed & BIT(i)))
···
 	struct qcom_smsm *smsm = entry->smsm;
 	u32 val;
 
+	/* Make sure our last cached state is up-to-date */
+	if (readl(entry->remote_state) & BIT(irq))
+		set_bit(irq, &entry->last_value);
+	else
+		clear_bit(irq, &entry->last_value);
+
 	set_bit(irq, entry->irq_enabled);
 
 	if (entry->subscription) {
···
 	return 0;
 }
 
+static int smsm_get_irqchip_state(struct irq_data *irqd,
+				  enum irqchip_irq_state which, bool *state)
+{
+	struct smsm_entry *entry = irq_data_get_irq_chip_data(irqd);
+	irq_hw_number_t irq = irqd_to_hwirq(irqd);
+	u32 val;
+
+	if (which != IRQCHIP_STATE_LINE_LEVEL)
+		return -EINVAL;
+
+	val = readl(entry->remote_state);
+	*state = !!(val & BIT(irq));
+
+	return 0;
+}
+
 static struct irq_chip smsm_irq_chip = {
 	.name = "smsm",
 	.irq_mask = smsm_mask_irq,
 	.irq_unmask = smsm_unmask_irq,
 	.irq_set_type = smsm_set_irq_type,
+	.irq_get_irqchip_state = smsm_get_irqchip_state,
 };
 
 /**
+2 -2
drivers/soc/qcom/socinfo.c
···
 static int show_image_##type(struct seq_file *seq, void *p)		\
 {									\
 	struct smem_image_version *image_version = seq->private;	\
-	seq_puts(seq, image_version->type);				\
-	seq_putc(seq, '\n');						\
+	if (image_version->type[0] != '\0')				\
+		seq_printf(seq, "%s\n", image_version->type);		\
 	return 0;							\
 }									\
 static int open_image_##type(struct inode *inode, struct file *file)	\
+2
drivers/soc/renesas/Kconfig
···
 	help
 	  This enables support for the Renesas R-Car H3 SoC (revisions 2.0 and
 	  later).
+	  This includes different gradings like R-Car H3e-2G.
 
 config ARCH_R8A77965
 	bool "ARM64 Platform support for R-Car M3-N"
···
 	select SYSC_R8A77961
 	help
 	  This enables support for the Renesas R-Car M3-W+ SoC.
+	  This includes different gradings like R-Car M3e-2G.
 
 config ARCH_R8A77980
 	bool "ARM64 Platform support for R-Car V3H"
+4 -2
drivers/soc/renesas/r8a779a0-sysc.c
···
 	for (i = 0; i < info->num_areas; i++) {
 		const struct r8a779a0_sysc_area *area = &info->areas[i];
 		struct r8a779a0_sysc_pd *pd;
+		size_t n;
 
 		if (!area->name) {
 			/* Skip NULLified area */
 			continue;
 		}
 
-		pd = kzalloc(sizeof(*pd) + strlen(area->name) + 1, GFP_KERNEL);
+		n = strlen(area->name) + 1;
+		pd = kzalloc(sizeof(*pd) + n, GFP_KERNEL);
 		if (!pd) {
 			error = -ENOMEM;
 			goto out_put;
 		}
 
-		strcpy(pd->name, area->name);
+		memcpy(pd->name, area->name, n);
 		pd->genpd.name = pd->name;
 		pd->pdr = area->pdr;
 		pd->flags = area->flags;
+4 -2
drivers/soc/renesas/rcar-sysc.c
···
 	for (i = 0; i < info->num_areas; i++) {
 		const struct rcar_sysc_area *area = &info->areas[i];
 		struct rcar_sysc_pd *pd;
+		size_t n;
 
 		if (!area->name) {
 			/* Skip NULLified area */
 			continue;
 		}
 
-		pd = kzalloc(sizeof(*pd) + strlen(area->name) + 1, GFP_KERNEL);
+		n = strlen(area->name) + 1;
+		pd = kzalloc(sizeof(*pd) + n, GFP_KERNEL);
 		if (!pd) {
 			error = -ENOMEM;
 			goto out_put;
 		}
 
-		strcpy(pd->name, area->name);
+		memcpy(pd->name, area->name, n);
 		pd->genpd.name = pd->name;
 		pd->ch.chan_offs = area->chan_offs;
 		pd->ch.chan_bit = area->chan_bit;
+4
drivers/soc/renesas/renesas-soc.c
···
 #if defined(CONFIG_ARCH_R8A77950) || defined(CONFIG_ARCH_R8A77951)
 	{ .compatible = "renesas,r8a7795", .data = &soc_rcar_h3 },
 #endif
+#ifdef CONFIG_ARCH_R8A77951
+	{ .compatible = "renesas,r8a779m1", .data = &soc_rcar_h3 },
+#endif
 #ifdef CONFIG_ARCH_R8A77960
 	{ .compatible = "renesas,r8a7796", .data = &soc_rcar_m3_w },
 #endif
 #ifdef CONFIG_ARCH_R8A77961
 	{ .compatible = "renesas,r8a77961", .data = &soc_rcar_m3_w },
+	{ .compatible = "renesas,r8a779m3", .data = &soc_rcar_m3_w },
 #endif
 #ifdef CONFIG_ARCH_R8A77965
 	{ .compatible = "renesas,r8a77965", .data = &soc_rcar_m3_n },
+2 -2
drivers/soc/rockchip/Kconfig
···
 #
 
 config ROCKCHIP_GRF
-	bool
-	default y
+	bool "Rockchip General Register Files support" if COMPILE_TEST
+	default y if ARCH_ROCKCHIP
 	help
 	  The General Register Files are a central component providing
 	  special additional settings registers for a lot of soc-components.
+80 -8
drivers/soc/rockchip/io-domain.c
···
 #define RK3399_PMUGRF_CON0_VSEL		BIT(8)
 #define RK3399_PMUGRF_VSEL_SUPPLY_NUM	9
 
+#define RK3568_PMU_GRF_IO_VSEL0		(0x0140)
+#define RK3568_PMU_GRF_IO_VSEL1		(0x0144)
+#define RK3568_PMU_GRF_IO_VSEL2		(0x0148)
+
 struct rockchip_iodomain;
 
-struct rockchip_iodomain_soc_data {
-	int grf_offset;
-	const char *supply_names[MAX_SUPPLIES];
-	void (*init)(struct rockchip_iodomain *iod);
-};
-
 struct rockchip_iodomain_supply {
 	struct rockchip_iodomain *iod;
···
 	int idx;
 };
 
+struct rockchip_iodomain_soc_data {
+	int grf_offset;
+	const char *supply_names[MAX_SUPPLIES];
+	void (*init)(struct rockchip_iodomain *iod);
+	int (*write)(struct rockchip_iodomain_supply *supply, int uV);
+};
+
 struct rockchip_iodomain {
 	struct device *dev;
 	struct regmap *grf;
 	const struct rockchip_iodomain_soc_data *soc_data;
 	struct rockchip_iodomain_supply supplies[MAX_SUPPLIES];
+	int (*write)(struct rockchip_iodomain_supply *supply, int uV);
 };
+
+static int rk3568_iodomain_write(struct rockchip_iodomain_supply *supply, int uV)
+{
+	struct rockchip_iodomain *iod = supply->iod;
+	u32 is_3v3 = uV > MAX_VOLTAGE_1_8;
+	u32 val0, val1;
+	int b;
+
+	switch (supply->idx) {
+	case 0: /* pmuio1 */
+		break;
+	case 1: /* pmuio2 */
+		b = supply->idx;
+		val0 = BIT(16 + b) | (is_3v3 ? 0 : BIT(b));
+		b = supply->idx + 4;
+		val1 = BIT(16 + b) | (is_3v3 ? BIT(b) : 0);
+
+		regmap_write(iod->grf, RK3568_PMU_GRF_IO_VSEL2, val0);
+		regmap_write(iod->grf, RK3568_PMU_GRF_IO_VSEL2, val1);
+		break;
+	case 3: /* vccio2 */
+		break;
+	case 2: /* vccio1 */
+	case 4: /* vccio3 */
+	case 5: /* vccio4 */
+	case 6: /* vccio5 */
+	case 7: /* vccio6 */
+	case 8: /* vccio7 */
+		b = supply->idx - 1;
+		val0 = BIT(16 + b) | (is_3v3 ? 0 : BIT(b));
+		val1 = BIT(16 + b) | (is_3v3 ? BIT(b) : 0);
+
+		regmap_write(iod->grf, RK3568_PMU_GRF_IO_VSEL0, val0);
+		regmap_write(iod->grf, RK3568_PMU_GRF_IO_VSEL1, val1);
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	return 0;
+}
 
 static int rockchip_iodomain_write(struct rockchip_iodomain_supply *supply,
 				   int uV)
···
 		return NOTIFY_BAD;
 	}
 
-	ret = rockchip_iodomain_write(supply, uV);
+	ret = supply->iod->write(supply, uV);
 	if (ret && event == REGULATOR_EVENT_PRE_VOLTAGE_CHANGE)
 		return NOTIFY_BAD;
 
···
 	.init = rk3399_pmu_iodomain_init,
 };
 
+static const struct rockchip_iodomain_soc_data soc_data_rk3568_pmu = {
+	.grf_offset = 0x140,
+	.supply_names = {
+		"pmuio1",
+		"pmuio2",
+		"vccio1",
+		"vccio2",
+		"vccio3",
+		"vccio4",
+		"vccio5",
+		"vccio6",
+		"vccio7",
+	},
+	.write = rk3568_iodomain_write,
+};
+
 static const struct rockchip_iodomain_soc_data soc_data_rv1108 = {
 	.grf_offset = 0x404,
 	.supply_names = {
···
 		.data = &soc_data_rk3399_pmu
 	},
 	{
+		.compatible = "rockchip,rk3568-pmu-io-voltage-domain",
+		.data = &soc_data_rk3568_pmu
+	},
+	{
 		.compatible = "rockchip,rv1108-io-voltage-domain",
 		.data = &soc_data_rv1108
 	},
···
 
 	match = of_match_node(rockchip_iodomain_match, np);
 	iod->soc_data = match->data;
+
+	if (iod->soc_data->write)
+		iod->write = iod->soc_data->write;
+	else
+		iod->write = rockchip_iodomain_write;
 
 	parent = pdev->dev.parent;
 	if (parent && parent->of_node) {
···
 		supply->reg = reg;
 		supply->nb.notifier_call = rockchip_iodomain_notify;
 
-		ret = rockchip_iodomain_write(supply, uV);
+		ret = iod->write(supply, uV);
 		if (ret) {
 			supply->reg = NULL;
 			goto unreg_notify;
+60
drivers/soc/tegra/fuse/fuse-tegra.c
···
 #include <linux/of.h>
 #include <linux/of_address.h>
 #include <linux/platform_device.h>
+#include <linux/pm_runtime.h>
 #include <linux/slab.h>
 #include <linux/sys_soc.h>
 
···
 	platform_set_drvdata(pdev, fuse);
 	fuse->dev = &pdev->dev;
 
+	pm_runtime_enable(&pdev->dev);
+
 	if (fuse->soc->probe) {
 		err = fuse->soc->probe(fuse);
 		if (err < 0)
···
 	return 0;
 
 restore:
+	fuse->clk = NULL;
 	fuse->base = base;
+	pm_runtime_disable(&pdev->dev);
 	return err;
 }
+
+static int __maybe_unused tegra_fuse_runtime_resume(struct device *dev)
+{
+	int err;
+
+	err = clk_prepare_enable(fuse->clk);
+	if (err < 0) {
+		dev_err(dev, "failed to enable FUSE clock: %d\n", err);
+		return err;
+	}
+
+	return 0;
+}
+
+static int __maybe_unused tegra_fuse_runtime_suspend(struct device *dev)
+{
+	clk_disable_unprepare(fuse->clk);
+
+	return 0;
+}
+
+static int __maybe_unused tegra_fuse_suspend(struct device *dev)
+{
+	int ret;
+
+	/*
+	 * Critical for RAM re-repair operation, which must occur on resume
+	 * from LP1 system suspend and as part of CCPLEX cluster switching.
+	 */
+	if (fuse->soc->clk_suspend_on)
+		ret = pm_runtime_resume_and_get(dev);
+	else
+		ret = pm_runtime_force_suspend(dev);
+
+	return ret;
+}
+
+static int __maybe_unused tegra_fuse_resume(struct device *dev)
+{
+	int ret = 0;
+
+	if (fuse->soc->clk_suspend_on)
+		pm_runtime_put(dev);
+	else
+		ret = pm_runtime_force_resume(dev);
+
+	return ret;
+}
+
+static const struct dev_pm_ops tegra_fuse_pm = {
+	SET_RUNTIME_PM_OPS(tegra_fuse_runtime_suspend, tegra_fuse_runtime_resume,
+			   NULL)
+	SET_SYSTEM_SLEEP_PM_OPS(tegra_fuse_suspend, tegra_fuse_resume)
+};
 
 static struct platform_driver tegra_fuse_driver = {
 	.driver = {
 		.name = "tegra-fuse",
 		.of_match_table = tegra_fuse_match,
+		.pm = &tegra_fuse_pm,
 		.suppress_bind_attrs = true,
 	},
 	.probe = tegra_fuse_probe,
+7 -4
drivers/soc/tegra/fuse/fuse-tegra20.c
···
 #include <linux/kobject.h>
 #include <linux/of_device.h>
 #include <linux/platform_device.h>
+#include <linux/pm_runtime.h>
 #include <linux/random.h>
 
 #include <soc/tegra/fuse.h>
···
 	u32 value = 0;
 	int err;
 
+	err = pm_runtime_resume_and_get(fuse->dev);
+	if (err)
+		return err;
+
 	mutex_lock(&fuse->apbdma.lock);
 
 	fuse->apbdma.config.src_addr = fuse->phys + FUSE_BEGIN + offset;
···
 
 	reinit_completion(&fuse->apbdma.wait);
 
-	clk_prepare_enable(fuse->clk);
-
 	dmaengine_submit(dma_desc);
 	dma_async_issue_pending(fuse->apbdma.chan);
 	time_left = wait_for_completion_timeout(&fuse->apbdma.wait,
···
 	else
 		value = *fuse->apbdma.virt;
 
-	clk_disable_unprepare(fuse->clk);
-
 out:
 	mutex_unlock(&fuse->apbdma.lock);
+	pm_runtime_put(fuse->dev);
 	return value;
 }
···
 	.probe = tegra20_fuse_probe,
 	.info = &tegra20_fuse_info,
 	.soc_attr_group = &tegra_soc_attr_group,
+	.clk_suspend_on = false,
 };
+11 -5
drivers/soc/tegra/fuse/fuse-tegra30.c
···
 #include <linux/of_device.h>
 #include <linux/of_address.h>
 #include <linux/platform_device.h>
+#include <linux/pm_runtime.h>
 #include <linux/random.h>
 
 #include <soc/tegra/fuse.h>
···
 	u32 value;
 	int err;
 
-	err = clk_prepare_enable(fuse->clk);
-	if (err < 0) {
-		dev_err(fuse->dev, "failed to enable FUSE clock: %d\n", err);
+	err = pm_runtime_resume_and_get(fuse->dev);
+	if (err)
 		return 0;
-	}
 
 	value = readl_relaxed(fuse->base + FUSE_BEGIN + offset);
 
-	clk_disable_unprepare(fuse->clk);
+	pm_runtime_put(fuse->dev);
 
 	return value;
 }
···
 	.speedo_init = tegra30_init_speedo_data,
 	.info = &tegra30_fuse_info,
 	.soc_attr_group = &tegra_soc_attr_group,
+	.clk_suspend_on = false,
 };
 #endif
···
 	.speedo_init = tegra114_init_speedo_data,
 	.info = &tegra114_fuse_info,
 	.soc_attr_group = &tegra_soc_attr_group,
+	.clk_suspend_on = false,
 };
 #endif
···
 	.lookups = tegra124_fuse_lookups,
 	.num_lookups = ARRAY_SIZE(tegra124_fuse_lookups),
 	.soc_attr_group = &tegra_soc_attr_group,
+	.clk_suspend_on = true,
 };
 #endif
···
 	.lookups = tegra210_fuse_lookups,
 	.num_lookups = ARRAY_SIZE(tegra210_fuse_lookups),
 	.soc_attr_group = &tegra_soc_attr_group,
+	.clk_suspend_on = false,
 };
 #endif
···
 	.lookups = tegra186_fuse_lookups,
 	.num_lookups = ARRAY_SIZE(tegra186_fuse_lookups),
 	.soc_attr_group = &tegra_soc_attr_group,
+	.clk_suspend_on = false,
 };
 #endif
···
 	.lookups = tegra194_fuse_lookups,
 	.num_lookups = ARRAY_SIZE(tegra194_fuse_lookups),
 	.soc_attr_group = &tegra194_soc_attr_group,
+	.clk_suspend_on = false,
 };
 #endif
···
 	.lookups = tegra234_fuse_lookups,
 	.num_lookups = ARRAY_SIZE(tegra234_fuse_lookups),
 	.soc_attr_group = &tegra194_soc_attr_group,
+	.clk_suspend_on = false,
 };
 #endif
+2
drivers/soc/tegra/fuse/fuse.h
···
 	unsigned int num_lookups;
 
 	const struct attribute_group *soc_attr_group;
+
+	bool clk_suspend_on;
 };
 
 struct tegra_fuse {
+13 -1
drivers/soc/tegra/pmc.c
···
 
 static struct tegra_pmc *pmc = &(struct tegra_pmc) {
 	.base = NULL,
-	.suspend_mode = TEGRA_SUSPEND_NONE,
+	.suspend_mode = TEGRA_SUSPEND_NOT_READY,
 };
 
 static inline struct tegra_powergate *
···
 	u32 value, values[2];
 
 	if (of_property_read_u32(np, "nvidia,suspend-mode", &value)) {
+		pmc->suspend_mode = TEGRA_SUSPEND_NONE;
 	} else {
 		switch (value) {
 		case 0:
···
 	return 0;
 }
 
+static void tegra_pmc_reset_suspend_mode(void *data)
+{
+	pmc->suspend_mode = TEGRA_SUSPEND_NOT_READY;
+}
+
 static int tegra_pmc_probe(struct platform_device *pdev)
 {
 	void __iomem *base;
···
 
 	err = tegra_pmc_parse_dt(pmc, pdev->dev.of_node);
 	if (err < 0)
+		return err;
+
+	err = devm_add_action_or_reset(&pdev->dev, tegra_pmc_reset_suspend_mode,
+				       NULL);
+	if (err)
 		return err;
 
 	/* take over the memory region from the early initialization */
···
 
 	tegra_pmc_clock_register(pmc, pdev->dev.of_node);
 	platform_set_drvdata(pdev, pmc);
+	tegra_pm_init_suspend();
 
 	return 0;
 
-1
drivers/soc/tegra/powergate-bpmp.c
···
 #include <linux/platform_device.h>
 #include <linux/pm_domain.h>
 #include <linux/slab.h>
-#include <linux/version.h>
 
 #include <soc/tegra/bpmp.h>
 #include <soc/tegra/bpmp-abi.h>
+1
drivers/soc/ti/pruss.c
···
 	{ .compatible = "ti,k2g-pruss" },
 	{ .compatible = "ti,am654-icssg", .data = &am65x_j721e_pruss_data, },
 	{ .compatible = "ti,j721e-icssg", .data = &am65x_j721e_pruss_data, },
+	{ .compatible = "ti,am642-icssg", .data = &am65x_j721e_pruss_data, },
 	{},
 };
 MODULE_DEVICE_TABLE(of, pruss_of_match);
+22 -30
drivers/soc/ti/smartreflex.c
···
 
 static void sr_set_clk_length(struct omap_sr *sr)
 {
-	struct clk *fck;
 	u32 fclk_speed;
 
 	/* Try interconnect target module fck first if it already exists */
-	fck = clk_get(sr->pdev->dev.parent, "fck");
-	if (IS_ERR(fck)) {
-		fck = clk_get(&sr->pdev->dev, "fck");
-		if (IS_ERR(fck)) {
-			dev_err(&sr->pdev->dev,
-				"%s: unable to get fck for device %s\n",
-				__func__, dev_name(&sr->pdev->dev));
-			return;
-		}
-	}
+	if (IS_ERR(sr->fck))
+		return;
 
-	fclk_speed = clk_get_rate(fck);
-	clk_put(fck);
+	fclk_speed = clk_get_rate(sr->fck);
 
 	switch (fclk_speed) {
 	case 12000000:
···
 	/* errminlimit is opp dependent and hence linked to voltage */
 	sr->err_minlimit = nvalue_row->errminlimit;
 
-	pm_runtime_get_sync(&sr->pdev->dev);
+	clk_enable(sr->fck);
 
 	/* Check if SR is already enabled. If yes do nothing */
 	if (sr_read_reg(sr, SRCONFIG) & SRCONFIG_SRENABLE)
-		return 0;
+		goto out_enabled;
 
 	/* Configure SR */
 	ret = sr_class->configure(sr);
 	if (ret)
-		return ret;
+		goto out_enabled;
 
 	sr_write_reg(sr, NVALUERECIPROCAL, nvalue_row->nvalue);
 
 	/* SRCONFIG - enable SR */
 	sr_modify_reg(sr, SRCONFIG, SRCONFIG_SRENABLE, SRCONFIG_SRENABLE);
+
+out_enabled:
+	sr->enabled = 1;
+
 	return 0;
 }
···
 	}
 
 	/* Check if SR clocks are already disabled. If yes do nothing */
-	if (pm_runtime_suspended(&sr->pdev->dev))
+	if (!sr->enabled)
 		return;
 
 	/*
···
 		}
 	}
 
-	pm_runtime_put_sync_suspend(&sr->pdev->dev);
+	clk_disable(sr->fck);
+	sr->enabled = 0;
 }
 
 /**
···
 
 	irq = platform_get_resource(pdev, IORESOURCE_IRQ, 0);
 
+	sr_info->fck = devm_clk_get(pdev->dev.parent, "fck");
+	if (IS_ERR(sr_info->fck))
+		return PTR_ERR(sr_info->fck);
+	clk_prepare(sr_info->fck);
+
 	pm_runtime_enable(&pdev->dev);
-	pm_runtime_irq_safe(&pdev->dev);
 
 	snprintf(sr_info->name, SMARTREFLEX_NAME_LEN, "%s", pdata->name);
···
 	sr_set_clk_length(sr_info);
 
 	list_add(&sr_info->node, &sr_list);
-
-	ret = pm_runtime_get_sync(&pdev->dev);
-	if (ret < 0) {
-		pm_runtime_put_noidle(&pdev->dev);
-		goto err_list_del;
-	}
 
 	/*
 	 * Call into late init to do initializations that require
···
 
 	}
 
-	pm_runtime_put_sync(&pdev->dev);
-
 	return ret;
 
 err_debugfs:
 	debugfs_remove_recursive(sr_info->dbg_dir);
 err_list_del:
 	list_del(&sr_info->node);
-
-	pm_runtime_put_sync(&pdev->dev);
+	clk_unprepare(sr_info->fck);
 
 	return ret;
 }
···
 static int omap_sr_remove(struct platform_device *pdev)
 {
 	struct omap_sr_data *pdata = pdev->dev.platform_data;
+	struct device *dev = &pdev->dev;
 	struct omap_sr *sr_info;
 
 	if (!pdata) {
···
 	sr_stop_vddautocomp(sr_info);
 	debugfs_remove_recursive(sr_info->dbg_dir);
 
-	pm_runtime_disable(&pdev->dev);
+	pm_runtime_disable(dev);
+	clk_unprepare(sr_info->fck);
 	list_del(&sr_info->node);
 	return 0;
 }
+35 -6
drivers/spi/spi-imx.c
···
 77  77  	bool has_slavemode;
 78  78  	unsigned int fifo_size;
 79  79  	bool dynamic_burst;
     80 +  	/*
     81 +  	 * ERR009165 fixed or not:
     82 +  	 * https://www.nxp.com/docs/en/errata/IMX6DQCE.pdf
     83 +  	 */
     84 +  	bool tx_glitch_fixed;
 80  85  	enum spi_imx_devtype devtype;
 81  86  };
···
627 622  	ctrl |= mx51_ecspi_clkdiv(spi_imx, spi_imx->spi_bus_clk, &clk);
628 623  	spi_imx->spi_bus_clk = clk;
629 624
630   -  	if (spi_imx->usedma)
    625 +  	/*
    626 +  	 * ERR009165: work in XHC mode instead of SMC as PIO on the chips
    627 +  	 * before i.mx6ul.
    628 +  	 */
    629 +  	if (spi_imx->usedma && spi_imx->devtype_data->tx_glitch_fixed)
631 630  		ctrl |= MX51_ECSPI_CTRL_SMC;
    631 +  	else
    632 +  		ctrl &= ~MX51_ECSPI_CTRL_SMC;
632 633
633 634  	writel(ctrl, spi_imx->base + MX51_ECSPI_CTRL);
634 635
···
643 632
644 633  static void mx51_setup_wml(struct spi_imx_data *spi_imx)
645 634  {
    635 +  	u32 tx_wml = 0;
    636 +
    637 +  	if (spi_imx->devtype_data->tx_glitch_fixed)
    638 +  		tx_wml = spi_imx->wml;
646 639  	/*
647 640  	 * Configure the DMA register: setup the watermark
648 641  	 * and enable DMA request.
649 642  	 */
650 643  	writel(MX51_ECSPI_DMA_RX_WML(spi_imx->wml - 1) |
651   -  		MX51_ECSPI_DMA_TX_WML(spi_imx->wml) |
    644 +  		MX51_ECSPI_DMA_TX_WML(tx_wml) |
652 645  		MX51_ECSPI_DMA_RXT_WML(spi_imx->wml) |
653 646  		MX51_ECSPI_DMA_TEDEN | MX51_ECSPI_DMA_RXDEN |
654 647  		MX51_ECSPI_DMA_RXTDEN, spi_imx->base + MX51_ECSPI_DMA);
···
1043 1028  	.devtype = IMX53_ECSPI,
1044 1029  };
1045 1030
     1031 +  static struct spi_imx_devtype_data imx6ul_ecspi_devtype_data = {
     1032 +  	.intctrl = mx51_ecspi_intctrl,
     1033 +  	.prepare_message = mx51_ecspi_prepare_message,
     1034 +  	.prepare_transfer = mx51_ecspi_prepare_transfer,
     1035 +  	.trigger = mx51_ecspi_trigger,
     1036 +  	.rx_available = mx51_ecspi_rx_available,
     1037 +  	.reset = mx51_ecspi_reset,
     1038 +  	.setup_wml = mx51_setup_wml,
     1039 +  	.fifo_size = 64,
     1040 +  	.has_dmamode = true,
     1041 +  	.dynamic_burst = true,
     1042 +  	.has_slavemode = true,
     1043 +  	.tx_glitch_fixed = true,
     1044 +  	.disable = mx51_ecspi_disable,
     1045 +  	.devtype = IMX51_ECSPI,
     1046 +  };
     1047 +
1046 1048  static const struct of_device_id spi_imx_dt_ids[] = {
1047 1049  	{ .compatible = "fsl,imx1-cspi", .data = &imx1_cspi_devtype_data, },
1048 1050  	{ .compatible = "fsl,imx21-cspi", .data = &imx21_cspi_devtype_data, },
···
1068 1036  	{ .compatible = "fsl,imx35-cspi", .data = &imx35_cspi_devtype_data, },
1069 1037  	{ .compatible = "fsl,imx51-ecspi", .data = &imx51_ecspi_devtype_data, },
1070 1038  	{ .compatible = "fsl,imx53-ecspi", .data = &imx53_ecspi_devtype_data, },
     1039 +  	{ .compatible = "fsl,imx6ul-ecspi", .data = &imx6ul_ecspi_devtype_data, },
1071 1040  	{ /* sentinel */ }
1072 1041  };
1073 1042  MODULE_DEVICE_TABLE(of, spi_imx_dt_ids);
···
1281 1248  				     struct spi_master *master)
1282 1249  {
1283 1250  	int ret;
1284   -
1285   -  	/* use pio mode for i.mx6dl chip TKT238285 */
1286   -  	if (of_machine_is_compatible("fsl,imx6dl"))
1287   -  		return 0;
1288 1251
1289 1252  	spi_imx->wml = spi_imx->devtype_data->fifo_size / 2;
1290 1253
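The ERR009165 handling above reduces to setting or clearing one SMC (start-on-TXFIFO-write) bit in the control register, depending on whether DMA is in use and the erratum is fixed on the SoC. A user-space sketch of that decision; the bit position (bit 3) is an assumption made purely so the snippet is self-contained, not taken from this diff:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Assumed bit position for illustration only */
#define MX51_ECSPI_CTRL_SMC (1u << 3)

/* Mirror of the diff's logic: only use SMC mode when DMA is in use
 * AND ERR009165 is fixed (tx_glitch_fixed); otherwise force it off. */
static uint32_t apply_smc(uint32_t ctrl, bool usedma, bool tx_glitch_fixed)
{
	if (usedma && tx_glitch_fixed)
		ctrl |= MX51_ECSPI_CTRL_SMC;
	else
		ctrl &= ~MX51_ECSPI_CTRL_SMC;
	return ctrl;
}
```

The explicit `else` branch matters: before this fix the bit was only ever set, so a stale SMC bit could survive a reconfiguration on an affected chip.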
+1
drivers/watchdog/Kconfig
···
487 487  config IXP4XX_WATCHDOG
488 488  	tristate "IXP4xx Watchdog"
489 489  	depends on ARCH_IXP4XX
    490 +  	select WATCHDOG_CORE
490 491  	help
491 492  	  Say Y here if to include support for the watchdog timer
492 493  	  in the Intel IXP4xx network processors. This driver can
+134 -181
drivers/watchdog/ixp4xx_wdt.c
···
      1 +  // SPDX-License-Identifier: GPL-2.0-only
  1   2  /*
  2   3   * drivers/char/watchdog/ixp4xx_wdt.c
  3   4   *
  4   5   * Watchdog driver for Intel IXP4xx network processors
  5   6   *
  6   7   * Author: Deepak Saxena <dsaxena@plexity.net>
      8 +   * Author: Linus Walleij <linus.walleij@linaro.org>
  7   9   *
  8  10   * Copyright 2004 (c) MontaVista, Software, Inc.
  9  11   * Based on sa1100 driver, Copyright (C) 2000 Oleg Drokin <green@crimea.edu>
 10   -   *
 11   -   * This file is licensed under the terms of the GNU General Public
 12   -   * License version 2. This program is licensed "as is" without any
 13   -   * warranty of any kind, whether express or implied.
 14  12   */
 15  13
 16   -  #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
 17   -
 18  14  #include <linux/module.h>
 19   -  #include <linux/moduleparam.h>
 20  15  #include <linux/types.h>
 21  16  #include <linux/kernel.h>
 22   -  #include <linux/fs.h>
 23   -  #include <linux/miscdevice.h>
 24   -  #include <linux/of.h>
 25  17  #include <linux/watchdog.h>
 26   -  #include <linux/init.h>
 27   -  #include <linux/bitops.h>
 28   -  #include <linux/uaccess.h>
 29   -  #include <mach/hardware.h>
     18 +  #include <linux/bits.h>
     19 +  #include <linux/platform_device.h>
     20 +  #include <linux/clk.h>
     21 +  #include <linux/soc/ixp4xx/cpu.h>
 30  22
 31   -  static bool nowayout = WATCHDOG_NOWAYOUT;
 32   -  static int heartbeat = 60;	/* (secs) Default is 1 minute */
 33   -  static unsigned long wdt_status;
 34   -  static unsigned long boot_status;
 35   -  static DEFINE_SPINLOCK(wdt_lock);
 36   -
 37   -  #define WDT_TICK_RATE (IXP4XX_PERIPHERAL_BUS_CLOCK * 1000000UL)
 38   -
 39   -  #define WDT_IN_USE	0
 40   -  #define WDT_OK_TO_CLOSE	1
 41   -
 42   -  static void wdt_enable(void)
 43   -  {
 44   -  	spin_lock(&wdt_lock);
 45   -  	*IXP4XX_OSWK = IXP4XX_WDT_KEY;
 46   -  	*IXP4XX_OSWE = 0;
 47   -  	*IXP4XX_OSWT = WDT_TICK_RATE * heartbeat;
 48   -  	*IXP4XX_OSWE = IXP4XX_WDT_COUNT_ENABLE | IXP4XX_WDT_RESET_ENABLE;
 49   -  	*IXP4XX_OSWK = 0;
 50   -  	spin_unlock(&wdt_lock);
 51   -  }
 52   -
 53   -  static void wdt_disable(void)
 54   -  {
 55   -  	spin_lock(&wdt_lock);
 56   -  	*IXP4XX_OSWK = IXP4XX_WDT_KEY;
 57   -  	*IXP4XX_OSWE = 0;
 58   -  	*IXP4XX_OSWK = 0;
 59   -  	spin_unlock(&wdt_lock);
 60   -  }
 61   -
 62   -  static int ixp4xx_wdt_open(struct inode *inode, struct file *file)
 63   -  {
 64   -  	if (test_and_set_bit(WDT_IN_USE, &wdt_status))
 65   -  		return -EBUSY;
 66   -
 67   -  	clear_bit(WDT_OK_TO_CLOSE, &wdt_status);
 68   -  	wdt_enable();
 69   -  	return stream_open(inode, file);
 70   -  }
 71   -
 72   -  static ssize_t
 73   -  ixp4xx_wdt_write(struct file *file, const char *data, size_t len, loff_t *ppos)
 74   -  {
 75   -  	if (len) {
 76   -  		if (!nowayout) {
 77   -  			size_t i;
 78   -
 79   -  			clear_bit(WDT_OK_TO_CLOSE, &wdt_status);
 80   -
 81   -  			for (i = 0; i != len; i++) {
 82   -  				char c;
 83   -
 84   -  				if (get_user(c, data + i))
 85   -  					return -EFAULT;
 86   -  				if (c == 'V')
 87   -  					set_bit(WDT_OK_TO_CLOSE, &wdt_status);
 88   -  			}
 89   -  		}
 90   -  		wdt_enable();
 91   -  	}
 92   -  	return len;
 93   -  }
 94   -
 95   -  static const struct watchdog_info ident = {
 96   -  	.options = WDIOF_CARDRESET | WDIOF_MAGICCLOSE |
 97   -  		   WDIOF_SETTIMEOUT | WDIOF_KEEPALIVEPING,
 98   -  	.identity = "IXP4xx Watchdog",
     23 +  struct ixp4xx_wdt {
     24 +  	struct watchdog_device wdd;
     25 +  	void __iomem *base;
     26 +  	unsigned long rate;
 99  27  };
100  28
     29 +  /* Fallback if we do not have a clock for this */
     30 +  #define IXP4XX_TIMER_FREQ	66666000
101  31
102   -  static long ixp4xx_wdt_ioctl(struct file *file, unsigned int cmd,
103   -  			     unsigned long arg)
     32 +  /* Registers after the timer registers */
     33 +  #define IXP4XX_OSWT_OFFSET	0x14 /* Watchdog Timer */
     34 +  #define IXP4XX_OSWE_OFFSET	0x18 /* Watchdog Enable */
     35 +  #define IXP4XX_OSWK_OFFSET	0x1C /* Watchdog Key */
     36 +  #define IXP4XX_OSST_OFFSET	0x20 /* Timer Status */
     37 +
     38 +  #define IXP4XX_OSST_TIMER_WDOG_PEND	0x00000008
     39 +  #define IXP4XX_OSST_TIMER_WARM_RESET	0x00000010
     40 +  #define IXP4XX_WDT_KEY	0x0000482E
     41 +  #define IXP4XX_WDT_RESET_ENABLE	0x00000001
     42 +  #define IXP4XX_WDT_IRQ_ENABLE	0x00000002
     43 +  #define IXP4XX_WDT_COUNT_ENABLE	0x00000004
     44 +
     45 +  static inline
     46 +  struct ixp4xx_wdt *to_ixp4xx_wdt(struct watchdog_device *wdd)
104  47  {
105   -  	int ret = -ENOTTY;
106   -  	int time;
107   -
108   -  	switch (cmd) {
109   -  	case WDIOC_GETSUPPORT:
110   -  		ret = copy_to_user((struct watchdog_info *)arg, &ident,
111   -  				   sizeof(ident)) ? -EFAULT : 0;
112   -  		break;
113   -
114   -  	case WDIOC_GETSTATUS:
115   -  		ret = put_user(0, (int *)arg);
116   -  		break;
117   -
118   -  	case WDIOC_GETBOOTSTATUS:
119   -  		ret = put_user(boot_status, (int *)arg);
120   -  		break;
121   -
122   -  	case WDIOC_KEEPALIVE:
123   -  		wdt_enable();
124   -  		ret = 0;
125   -  		break;
126   -
127   -  	case WDIOC_SETTIMEOUT:
128   -  		ret = get_user(time, (int *)arg);
129   -  		if (ret)
130   -  			break;
131   -
132   -  		if (time <= 0 || time > 60) {
133   -  			ret = -EINVAL;
134   -  			break;
135   -  		}
136   -
137   -  		heartbeat = time;
138   -  		wdt_enable();
139   -  		fallthrough;
140   -
141   -  	case WDIOC_GETTIMEOUT:
142   -  		ret = put_user(heartbeat, (int *)arg);
143   -  		break;
144   -  	}
145   -  	return ret;
     48 +  	return container_of(wdd, struct ixp4xx_wdt, wdd);
146  49  }
147  50
148   -  static int ixp4xx_wdt_release(struct inode *inode, struct file *file)
     51 +  static int ixp4xx_wdt_start(struct watchdog_device *wdd)
149  52  {
150   -  	if (test_bit(WDT_OK_TO_CLOSE, &wdt_status))
151   -  		wdt_disable();
152   -  	else
153   -  		pr_crit("Device closed unexpectedly - timer will not stop\n");
154   -  	clear_bit(WDT_IN_USE, &wdt_status);
155   -  	clear_bit(WDT_OK_TO_CLOSE, &wdt_status);
     53 +  	struct ixp4xx_wdt *iwdt = to_ixp4xx_wdt(wdd);
     54 +
     55 +  	__raw_writel(IXP4XX_WDT_KEY, iwdt->base + IXP4XX_OSWK_OFFSET);
     56 +  	__raw_writel(0, iwdt->base + IXP4XX_OSWE_OFFSET);
     57 +  	__raw_writel(wdd->timeout * iwdt->rate,
     58 +  		     iwdt->base + IXP4XX_OSWT_OFFSET);
     59 +  	__raw_writel(IXP4XX_WDT_COUNT_ENABLE | IXP4XX_WDT_RESET_ENABLE,
     60 +  		     iwdt->base + IXP4XX_OSWE_OFFSET);
     61 +  	__raw_writel(0, iwdt->base + IXP4XX_OSWK_OFFSET);
156  62
157  63  	return 0;
158  64  }
159  65
160   -
161   -  static const struct file_operations ixp4xx_wdt_fops = {
162   -  	.owner = THIS_MODULE,
163   -  	.llseek = no_llseek,
164   -  	.write = ixp4xx_wdt_write,
165   -  	.unlocked_ioctl = ixp4xx_wdt_ioctl,
166   -  	.compat_ioctl = compat_ptr_ioctl,
167   -  	.open = ixp4xx_wdt_open,
168   -  	.release = ixp4xx_wdt_release,
169   -  };
170   -
171   -  static struct miscdevice ixp4xx_wdt_miscdev = {
172   -  	.minor = WATCHDOG_MINOR,
173   -  	.name = "watchdog",
174   -  	.fops = &ixp4xx_wdt_fops,
175   -  };
176   -
177   -  static int __init ixp4xx_wdt_init(void)
     66 +  static int ixp4xx_wdt_stop(struct watchdog_device *wdd)
178  67  {
     68 +  	struct ixp4xx_wdt *iwdt = to_ixp4xx_wdt(wdd);
     69 +
     70 +  	__raw_writel(IXP4XX_WDT_KEY, iwdt->base + IXP4XX_OSWK_OFFSET);
     71 +  	__raw_writel(0, iwdt->base + IXP4XX_OSWE_OFFSET);
     72 +  	__raw_writel(0, iwdt->base + IXP4XX_OSWK_OFFSET);
     73 +
     74 +  	return 0;
     75 +  }
     76 +
     77 +  static int ixp4xx_wdt_set_timeout(struct watchdog_device *wdd,
     78 +  				  unsigned int timeout)
     79 +  {
     80 +  	wdd->timeout = timeout;
     81 +  	if (watchdog_active(wdd))
     82 +  		ixp4xx_wdt_start(wdd);
     83 +
     84 +  	return 0;
     85 +  }
     86 +
     87 +  static const struct watchdog_ops ixp4xx_wdt_ops = {
     88 +  	.start = ixp4xx_wdt_start,
     89 +  	.stop = ixp4xx_wdt_stop,
     90 +  	.set_timeout = ixp4xx_wdt_set_timeout,
     91 +  	.owner = THIS_MODULE,
     92 +  };
     93 +
     94 +  static const struct watchdog_info ixp4xx_wdt_info = {
     95 +  	.options = WDIOF_KEEPALIVEPING
     96 +  		| WDIOF_MAGICCLOSE
     97 +  		| WDIOF_SETTIMEOUT,
     98 +  	.identity = KBUILD_MODNAME,
     99 +  };
    100 +
    101 +  /* Devres-handled clock disablement */
    102 +  static void ixp4xx_clock_action(void *d)
    103 +  {
    104 +  	clk_disable_unprepare(d);
    105 +  }
    106 +
    107 +  static int ixp4xx_wdt_probe(struct platform_device *pdev)
    108 +  {
    109 +  	struct device *dev = &pdev->dev;
    110 +  	struct ixp4xx_wdt *iwdt;
    111 +  	struct clk *clk;
179 112  	int ret;
180 113
181   -  	/*
182   -  	 * FIXME: we bail out on device tree boot but this really needs
183   -  	 * to be fixed in a nicer way: this registers the MDIO bus before
184   -  	 * even matching the driver infrastructure, we should only probe
185   -  	 * detected hardware.
186   -  	 */
187   -  	if (of_have_populated_dt())
188   -  		return -ENODEV;
189 114  	if (!(read_cpuid_id() & 0xf) && !cpu_is_ixp46x()) {
190   -  		pr_err("Rev. A0 IXP42x CPU detected - watchdog disabled\n");
191   -
    115 +  		dev_err(dev, "Rev. A0 IXP42x CPU detected - watchdog disabled\n");
192 116  		return -ENODEV;
193 117  	}
194   -  	boot_status = (*IXP4XX_OSST & IXP4XX_OSST_TIMER_WARM_RESET) ?
195   -  			WDIOF_CARDRESET : 0;
196   -  	ret = misc_register(&ixp4xx_wdt_miscdev);
197   -  	if (ret == 0)
198   -  		pr_info("timer heartbeat %d sec\n", heartbeat);
199   -  	return ret;
    118 +
    119 +  	iwdt = devm_kzalloc(dev, sizeof(*iwdt), GFP_KERNEL);
    120 +  	if (!iwdt)
    121 +  		return -ENOMEM;
    122 +  	iwdt->base = dev->platform_data;
    123 +
    124 +  	/*
    125 +  	 * Retrieve rate from a fixed clock from the device tree if
    126 +  	 * the parent has that, else use the default clock rate.
    127 +  	 */
    128 +  	clk = devm_clk_get(dev->parent, NULL);
    129 +  	if (!IS_ERR(clk)) {
    130 +  		ret = clk_prepare_enable(clk);
    131 +  		if (ret)
    132 +  			return ret;
    133 +  		ret = devm_add_action_or_reset(dev, ixp4xx_clock_action, clk);
    134 +  		if (ret)
    135 +  			return ret;
    136 +  		iwdt->rate = clk_get_rate(clk);
    137 +  	}
    138 +  	if (!iwdt->rate)
    139 +  		iwdt->rate = IXP4XX_TIMER_FREQ;
    140 +
    141 +  	iwdt->wdd.info = &ixp4xx_wdt_info;
    142 +  	iwdt->wdd.ops = &ixp4xx_wdt_ops;
    143 +  	iwdt->wdd.min_timeout = 1;
    144 +  	iwdt->wdd.max_timeout = U32_MAX / iwdt->rate;
    145 +  	iwdt->wdd.parent = dev;
    146 +  	/* Default to 60 seconds */
    147 +  	iwdt->wdd.timeout = 60U;
    148 +  	watchdog_init_timeout(&iwdt->wdd, 0, dev);
    149 +
    150 +  	if (__raw_readl(iwdt->base + IXP4XX_OSST_OFFSET) &
    151 +  	    IXP4XX_OSST_TIMER_WARM_RESET)
    152 +  		iwdt->wdd.bootstatus = WDIOF_CARDRESET;
    153 +
    154 +  	ret = devm_watchdog_register_device(dev, &iwdt->wdd);
    155 +  	if (ret)
    156 +  		return ret;
    157 +
    158 +  	dev_info(dev, "IXP4xx watchdog available\n");
    159 +
    160 +  	return 0;
200 161  }
201 162
202   -  static void __exit ixp4xx_wdt_exit(void)
203   -  {
204   -  	misc_deregister(&ixp4xx_wdt_miscdev);
205   -  }
206   -
207   -
208   -  module_init(ixp4xx_wdt_init);
209   -  module_exit(ixp4xx_wdt_exit);
    163 +  static struct platform_driver ixp4xx_wdt_driver = {
    164 +  	.probe = ixp4xx_wdt_probe,
    165 +  	.driver = {
    166 +  		.name = "ixp4xx-watchdog",
    167 +  	},
    168 +  };
    169 +  module_platform_driver(ixp4xx_wdt_driver);
210 170
211 171  MODULE_AUTHOR("Deepak Saxena <dsaxena@plexity.net>");
212 172  MODULE_DESCRIPTION("IXP4xx Network Processor Watchdog");
213   -
214   -  module_param(heartbeat, int, 0);
215   -  MODULE_PARM_DESC(heartbeat, "Watchdog heartbeat in seconds (default 60s)");
216   -
217   -  module_param(nowayout, bool, 0);
218   -  MODULE_PARM_DESC(nowayout, "Watchdog cannot be stopped once started");
219   -
220 173  MODULE_LICENSE("GPL");
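The converted driver loads the OSWT counter with `timeout * rate` and caps `max_timeout` at `U32_MAX / rate` because the hardware counter is 32 bits wide. A standalone sketch of that arithmetic, using the driver's 66666000 Hz fallback rate (the helper names here are invented for illustration):

```c
#include <assert.h>
#include <stdint.h>

/* Fallback tick rate from the driver when no clock is found */
#define IXP4XX_TIMER_FREQ 66666000UL

/* Counter value written to the watchdog timer register:
 * seconds times the tick rate, truncated to the 32-bit counter width. */
static uint32_t wdt_count(unsigned int timeout_s, unsigned long rate)
{
	return (uint32_t)(timeout_s * rate);
}

/* Largest timeout whose count still fits in 32 bits, which is
 * exactly why the driver sets max_timeout = U32_MAX / rate. */
static unsigned int wdt_max_timeout(unsigned long rate)
{
	return (unsigned int)(UINT32_MAX / rate);
}
```

At the fallback rate the cap works out to 64 seconds, so the default 60-second timeout still fits comfortably in the counter.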
+10
include/dt-bindings/power/qcom-rpmpd.h
···
192 192  #define SDM660_SSCMX	8
193 193  #define SDM660_SSCMX_VFL	9
194 194
    195 +  /* SM6115 Power Domains */
    196 +  #define SM6115_VDDCX		0
    197 +  #define SM6115_VDDCX_AO	1
    198 +  #define SM6115_VDDCX_VFL	2
    199 +  #define SM6115_VDDMX		3
    200 +  #define SM6115_VDDMX_AO	4
    201 +  #define SM6115_VDDMX_VFL	5
    202 +  #define SM6115_VDD_LPI_CX	6
    203 +  #define SM6115_VDD_LPI_MX	7
    204 +
195 205  /* RPM SMD Power Domain performance levels */
196 206  #define RPM_SMD_LEVEL_RETENTION	16
197 207  #define RPM_SMD_LEVEL_RETENTION_PLUS	32
+2
include/dt-bindings/reset/qcom,sdm845-pdc.h
···
16 16  #define PDC_DISPLAY_SYNC_RESET	7
17 17  #define PDC_COMPUTE_SYNC_RESET	8
18 18  #define PDC_MODEM_SYNC_RESET	9
   19 +  #define PDC_WLAN_RF_SYNC_RESET	10
   20 +  #define PDC_WPSS_SYNC_RESET	11
19 21
20 22  #endif
-3
include/linux/omap-gpmc.h
···
81 81  extern void gpmc_read_settings_dt(struct device_node *np,
82 82  				  struct gpmc_settings *p);
83 83
84   -  extern void omap3_gpmc_save_context(void);
85   -  extern void omap3_gpmc_restore_context(void);
86   -
87 84  struct gpmc_timings;
88 85  struct omap_nand_platform_data;
89 86  struct omap_onenand_platform_data;
+2 -2
include/linux/platform_data/pata_ixp4xx_cf.h
···
14 14  	volatile u32 *cs1_cfg;
15 15  	unsigned long cs0_bits;
16 16  	unsigned long cs1_bits;
17   -  	void __iomem *cs0;
18   -  	void __iomem *cs1;
   17 +  	void __iomem *cmd;
   18 +  	void __iomem *ctl;
19 19  };
20 20
21 21  #endif
+2
include/linux/power/smartreflex.h
···
155 155  	struct voltagedomain *voltdm;
156 156  	struct dentry *dbg_dir;
157 157  	unsigned int irq;
    158 +  	struct clk *fck;
158 159  	int srid;
159 160  	int ip_type;
160 161  	int nvalue_count;
···
170 169  	u32 senp_mod;
171 170  	u32 senn_mod;
172 171  	void __iomem *base;
    172 +  	unsigned long enabled:1;
173 173  };
174 174
175 175  /**
+18 -1
include/linux/qcom-geni-se.h
···
  8   8
  9   9  #include <linux/interconnect.h>
 10  10
 11   -  /* Transfer mode supported by GENI Serial Engines */
     11 +  /**
     12 +   * enum geni_se_xfer_mode: Transfer modes supported by Serial Engines
     13 +   *
     14 +   * @GENI_SE_INVALID: Invalid mode
     15 +   * @GENI_SE_FIFO: FIFO mode. Data is transferred with SE FIFO
     16 +   * by programmed IO method
     17 +   * @GENI_SE_DMA: Serial Engine DMA mode. Data is transferred
     18 +   * with SE by DMAengine internal to SE
     19 +   * @GENI_GPI_DMA: GPI DMA mode. Data is transferred using a DMAengine
     20 +   * configured by a firmware residing on a GSI engine. This DMA name is
     21 +   * interchangeably used as GSI or GPI which seem to imply the same DMAengine
     22 +   */
     23 +
 12  24  enum geni_se_xfer_mode {
 13  25  	GENI_SE_INVALID,
 14  26  	GENI_SE_FIFO,
 15  27  	GENI_SE_DMA,
     28 +  	GENI_GPI_DMA,
 16  29  };
 17  30
 18  31  /* Protocols supported by GENI Serial Engines */
···
 76  63  #define SE_GENI_STATUS	0x40
 77  64  #define GENI_SER_M_CLK_CFG	0x48
 78  65  #define GENI_SER_S_CLK_CFG	0x4c
     66 +  #define GENI_IF_DISABLE_RO	0x64
 79  67  #define GENI_FW_REVISION_RO	0x68
 80  68  #define SE_GENI_CLK_SEL	0x7c
 81  69  #define SE_GENI_DMA_MODE_EN	0x258
···
118 104  #define SER_CLK_EN	BIT(0)
119 105  #define CLK_DIV_MSK	GENMASK(15, 4)
120 106  #define CLK_DIV_SHFT	4
    107 +
    108 +  /* GENI_IF_DISABLE_RO fields */
    109 +  #define FIFO_IF_DISABLE	(BIT(0))
121 110
122 111  /* GENI_FW_REVISION_RO fields */
123 112  #define FW_REV_PROTOCOL_MSK	GENMASK(15, 8)
+6
include/soc/tegra/pm.h
···
14 14  	TEGRA_SUSPEND_LP1,	/* CPU voltage off, DRAM self-refresh */
15 15  	TEGRA_SUSPEND_LP0,	/* CPU + core voltage off, DRAM self-refresh */
16 16  	TEGRA_MAX_SUSPEND_MODE,
   17 +  	TEGRA_SUSPEND_NOT_READY,
17 18  };
18 19
19 20  #if defined(CONFIG_PM_SLEEP) && defined(CONFIG_ARM)
···
29 28  void tegra_pm_set_cpu_in_lp2(void);
30 29  int tegra_pm_enter_lp2(void);
31 30  int tegra_pm_park_secondary_cpu(unsigned long cpu);
   31 +  void tegra_pm_init_suspend(void);
32 32  #else
33 33  static inline enum tegra_suspend_mode
34 34  tegra_pm_validate_suspend_mode(enum tegra_suspend_mode mode)
···
62 60  static inline int tegra_pm_park_secondary_cpu(unsigned long cpu)
63 61  {
64 62  	return -ENOTSUPP;
   63 +  }
   64 +
   65 +  static inline void tegra_pm_init_suspend(void)
   66 +  {
65 67  }
66 68
67 69  #endif /* CONFIG_PM_SLEEP */
+1
include/uapi/linux/virtio_ids.h
···
55 55  #define VIRTIO_ID_FS	26 /* virtio filesystem */
56 56  #define VIRTIO_ID_PMEM	27 /* virtio pmem */
57 57  #define VIRTIO_ID_MAC80211_HWSIM	29 /* virtio mac80211-hwsim */
   58 +  #define VIRTIO_ID_SCMI	32 /* virtio SCMI */
58 59  #define VIRTIO_ID_I2C_ADAPTER	34 /* virtio i2c adapter */
59 60  #define VIRTIO_ID_BT	40 /* virtio bluetooth */
60 61
+24
include/uapi/linux/virtio_scmi.h
···
    1 +  /* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) */
    2 +  /*
    3 +   * Copyright (C) 2020-2021 OpenSynergy GmbH
    4 +   * Copyright (C) 2021 ARM Ltd.
    5 +   */
    6 +
    7 +  #ifndef _UAPI_LINUX_VIRTIO_SCMI_H
    8 +  #define _UAPI_LINUX_VIRTIO_SCMI_H
    9 +
   10 +  #include <linux/virtio_types.h>
   11 +
   12 +  /* Device implements some SCMI notifications, or delayed responses. */
   13 +  #define VIRTIO_SCMI_F_P2A_CHANNELS 0
   14 +
   15 +  /* Device implements any SCMI statistics shared memory region */
   16 +  #define VIRTIO_SCMI_F_SHARED_MEMORY 1
   17 +
   18 +  /* Virtqueues */
   19 +
   20 +  #define VIRTIO_SCMI_VQ_TX 0 /* cmdq */
   21 +  #define VIRTIO_SCMI_VQ_RX 1 /* eventq */
   22 +  #define VIRTIO_SCMI_VQ_MAX_CNT 2
   23 +
   24 +  #endif /* _UAPI_LINUX_VIRTIO_SCMI_H */