Linux kernel mirror (for testing): git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'soc-drivers-6.9' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc

Pull ARM SoC driver updates from Arnd Bergmann:
"This is the usual mix of updates for drivers that are used on (mostly
ARM) SoCs with no other top-level subsystem tree, including:

- The SCMI firmware subsystem gains support for version 3.2 of the
specification and updates to the notification code

- Feature updates for Tegra and Qualcomm platforms for added hardware
support

- A number of platforms get soc_device additions for identifying
newly added chips from Renesas, Qualcomm, Mediatek and Google

- Trivial improvements for firmware and memory drivers amongst
others, in particular 'const' annotations throughout multiple
subsystems"

* tag 'soc-drivers-6.9' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc: (96 commits)
tee: make tee_bus_type const
soc: qcom: aoss: add missing kerneldoc for qmp members
soc: qcom: geni-se: drop unused kerneldoc struct geni_wrapper param
soc: qcom: spm: fix building with CONFIG_REGULATOR=n
bus: ti-sysc: constify the struct device_type usage
memory: stm32-fmc2-ebi: keep power domain on
memory: stm32-fmc2-ebi: add MP25 RIF support
memory: stm32-fmc2-ebi: add MP25 support
memory: stm32-fmc2-ebi: check regmap_read return value
dt-bindings: memory-controller: st,stm32: add MP25 support
dt-bindings: bus: imx-weim: convert to YAML
watchdog: s3c2410_wdt: use exynos_get_pmu_regmap_by_phandle() for PMU regs
soc: samsung: exynos-pmu: Add regmap support for SoCs that protect PMU regs
MAINTAINERS: Update SCMI entry with HWMON driver
MAINTAINERS: samsung: gs101: match patches touching Google Tensor SoC
memory: tegra: Fix indentation
memory: tegra: Add BPMP and ICC info for DLA clients
memory: tegra: Correct DLA client names
dt-bindings: memory: renesas,rpc-if: Document R-Car V4M support
firmware: arm_scmi: Update the supported clock protocol version
...

+3170 -540
-58
Documentation/devicetree/bindings/arm/msm/qcom,saw2.txt
-SPM AVS Wrapper 2 (SAW2)
-
-The SAW2 is a wrapper around the Subsystem Power Manager (SPM) and the
-Adaptive Voltage Scaling (AVS) hardware. The SPM is a programmable
-power-controller that transitions a piece of hardware (like a processor or
-subsystem) into and out of low power modes via a direct connection to
-the PMIC. It can also be wired up to interact with other processors in the
-system, notifying them when a low power state is entered or exited.
-
-Multiple revisions of the SAW hardware are supported using these Device Nodes.
-SAW2 revisions differ in the register offset and configuration data. Also, the
-same revision of the SAW in different SoCs may have different configuration
-data due the differences in hardware capabilities. Hence the SoC name, the
-version of the SAW hardware in that SoC and the distinction between cpu (big
-or Little) or cache, may be needed to uniquely identify the SAW register
-configuration and initialization data. The compatible string is used to
-indicate this parameter.
-
-PROPERTIES
-
-- compatible:
-	Usage: required
-	Value type: <string>
-	Definition: Must have
-			"qcom,saw2"
-		    A more specific value could be one of:
-			"qcom,apq8064-saw2-v1.1-cpu"
-			"qcom,msm8226-saw2-v2.1-cpu"
-			"qcom,msm8974-saw2-v2.1-cpu"
-			"qcom,apq8084-saw2-v2.1-cpu"
-
-- reg:
-	Usage: required
-	Value type: <prop-encoded-array>
-	Definition: the first element specifies the base address and size of
-		    the register region. An optional second element specifies
-		    the base address and size of the alias register region.
-
-- regulator:
-	Usage: optional
-	Value type: boolean
-	Definition: Indicates that this SPM device acts as a regulator device
-		    device for the core (CPU or Cache) the SPM is attached
-		    to.
-
-Example 1:
-
-	power-controller@2099000 {
-		compatible = "qcom,saw2";
-		reg = <0x02099000 0x1000>, <0x02009000 0x1000>;
-		regulator;
-	};
-
-Example 2:
-	saw0: power-controller@f9089000 {
-		compatible = "qcom,apq8084-saw2-v2.1-cpu", "qcom,saw2";
-		reg = <0xf9089000 0x1000>, <0xf9009000 0x1000>;
-	};
-117
Documentation/devicetree/bindings/bus/imx-weim.txt
-Device tree bindings for i.MX Wireless External Interface Module (WEIM)
-
-The term "wireless" does not imply that the WEIM is literally an interface
-without wires. It simply means that this module was originally designed for
-wireless and mobile applications that use low-power technology.
-
-The actual devices are instantiated from the child nodes of a WEIM node.
-
-Required properties:
-
- - compatible:		Should contain one of the following:
-			  "fsl,imx1-weim"
-			  "fsl,imx27-weim"
-			  "fsl,imx51-weim"
-			  "fsl,imx50-weim"
-			  "fsl,imx6q-weim"
- - reg:			A resource specifier for the register space
-			(see the example below)
- - clocks:		the clock, see the example below.
- - #address-cells:	Must be set to 2 to allow memory address translation
- - #size-cells:		Must be set to 1 to allow CS address passing
- - ranges:		Must be set up to reflect the memory layout with four
-			integer values for each chip-select line in use:
-
-			   <cs-number> 0 <physical address of mapping> <size>
-
-Optional properties:
-
- - fsl,weim-cs-gpr:	For "fsl,imx50-weim" and "fsl,imx6q-weim" type of
-			devices, it should be the phandle to the system General
-			Purpose Register controller that contains WEIM CS GPR
-			register, e.g. IOMUXC_GPR1 on i.MX6Q. IOMUXC_GPR1[11:0]
-			should be set up as one of the following 4 possible
-			values depending on the CS space configuration.
-
-			IOMUXC_GPR1[11:0]    CS0    CS1    CS2    CS3
-			---------------------------------------------
-				05	    128M     0M     0M     0M
-				033	     64M    64M     0M     0M
-				0113	     64M    32M    32M     0M
-				01111	     32M    32M    32M    32M
-
-			In case that the property is absent, the reset value or
-			what bootloader sets up in IOMUXC_GPR1[11:0] will be
-			used.
-
- - fsl,burst-clk-enable	For "fsl,imx50-weim" and "fsl,imx6q-weim" type of
-			devices, the presence of this property indicates that
-			the weim bus should operate in Burst Clock Mode.
-
- - fsl,continuous-burst-clk	Make Burst Clock to output continuous clock.
-			Without this option Burst Clock will output clock
-			only when necessary. This takes effect only if
-			"fsl,burst-clk-enable" is set.
-
-Timing property for child nodes. It is mandatory, not optional.
-
- - fsl,weim-cs-timing:	The timing array, contains timing values for the
-			child node. We get the CS indexes from the address
-			ranges in the child node's "reg" property.
-			The number of registers depends on the selected chip:
-			For i.MX1, i.MX21 ("fsl,imx1-weim") there are two
-			registers: CSxU, CSxL.
-			For i.MX25, i.MX27, i.MX31 and i.MX35 ("fsl,imx27-weim")
-			there are three registers: CSCRxU, CSCRxL, CSCRxA.
-			For i.MX50, i.MX53 ("fsl,imx50-weim"),
-			i.MX51 ("fsl,imx51-weim") and i.MX6Q ("fsl,imx6q-weim")
-			there are six registers: CSxGCR1, CSxGCR2, CSxRCR1,
-			CSxRCR2, CSxWCR1, CSxWCR2.
-
-Example for an imx6q-sabreauto board, the NOR flash connected to the WEIM:
-
-	weim: weim@21b8000 {
-		compatible = "fsl,imx6q-weim";
-		reg = <0x021b8000 0x4000>;
-		clocks = <&clks 196>;
-		#address-cells = <2>;
-		#size-cells = <1>;
-		ranges = <0 0 0x08000000 0x08000000>;
-		fsl,weim-cs-gpr = <&gpr>;
-
-		nor@0,0 {
-			compatible = "cfi-flash";
-			reg = <0 0 0x02000000>;
-			#address-cells = <1>;
-			#size-cells = <1>;
-			bank-width = <2>;
-			fsl,weim-cs-timing = <0x00620081 0x00000001 0x1c022000
-					0x0000c000 0x1404a38e 0x00000000>;
-		};
-	};
-
-Example for an imx6q-based board, a multi-chipselect device connected to WEIM:
-
-In this case, both chip select 0 and 1 will be configured with the same timing
-array values.
-
-	weim: weim@21b8000 {
-		compatible = "fsl,imx6q-weim";
-		reg = <0x021b8000 0x4000>;
-		clocks = <&clks 196>;
-		#address-cells = <2>;
-		#size-cells = <1>;
-		ranges = <0 0 0x08000000 0x02000000
-			  1 0 0x0a000000 0x02000000
-			  2 0 0x0c000000 0x02000000
-			  3 0 0x0e000000 0x02000000>;
-		fsl,weim-cs-gpr = <&gpr>;
-
-		acme@0 {
-			compatible = "acme,whatever";
-			reg = <0 0 0x100>, <0 0x400000 0x800>,
-				<1 0x400000 0x800>;
-			fsl,weim-cs-timing = <0x024400b1 0x00001010 0x20081100
-					0x00000000 0xa0000240 0x00000000>;
-		};
-	};
+1
Documentation/devicetree/bindings/i2c/i2c-exynos5.yaml
···
           - const: samsung,exynos7-hsi2c
       - items:
           - enum:
+              - google,gs101-hsi2c
               - samsung,exynos850-hsi2c
           - const: samsung,exynosautov9-hsi2c
       - const: samsung,exynos5-hsi2c  # Exynos5250 and Exynos5420
+31
Documentation/devicetree/bindings/memory-controllers/fsl/fsl,imx-weim-peripherals.yaml
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/memory-controllers/fsl/fsl,imx-weim-peripherals.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: i.MX WEIM Bus Peripheral Nodes
+
+maintainers:
+  - Shawn Guo <shawnguo@kernel.org>
+  - Sascha Hauer <s.hauer@pengutronix.de>
+
+description:
+  This binding is meant for the child nodes of the WEIM node. The node
+  represents any device connected to the WEIM bus. It may be a Flash chip,
+  RAM chip or Ethernet controller, etc. These properties are meant for
+  configuring the WEIM settings/timings and will accompany the bindings
+  supported by the respective device.
+
+properties:
+  reg: true
+
+  fsl,weim-cs-timing:
+    $ref: /schemas/types.yaml#/definitions/uint32-array
+    description:
+      Timing values for the child node.
+    minItems: 2
+    maxItems: 6
+
+# the WEIM child will have its own native properties
+additionalProperties: true
+204
Documentation/devicetree/bindings/memory-controllers/fsl/fsl,imx-weim.yaml
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/memory-controllers/fsl/fsl,imx-weim.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: i.MX Wireless External Interface Module (WEIM)
+
+maintainers:
+  - Shawn Guo <shawnguo@kernel.org>
+  - Sascha Hauer <s.hauer@pengutronix.de>
+
+description:
+  The term "wireless" does not imply that the WEIM is literally an interface
+  without wires. It simply means that this module was originally designed for
+  wireless and mobile applications that use low-power technology. The actual
+  devices are instantiated from the child nodes of a WEIM node.
+
+properties:
+  $nodename:
+    pattern: "^memory-controller@[0-9a-f]+$"
+
+  compatible:
+    oneOf:
+      - enum:
+          - fsl,imx1-weim
+          - fsl,imx27-weim
+          - fsl,imx50-weim
+          - fsl,imx51-weim
+          - fsl,imx6q-weim
+      - items:
+          - enum:
+              - fsl,imx31-weim
+              - fsl,imx35-weim
+          - const: fsl,imx27-weim
+      - items:
+          - enum:
+              - fsl,imx6sx-weim
+              - fsl,imx6ul-weim
+          - const: fsl,imx6q-weim
+
+  "#address-cells":
+    const: 2
+
+  "#size-cells":
+    const: 1
+
+  reg:
+    maxItems: 1
+
+  clocks:
+    maxItems: 1
+
+  interrupts:
+    maxItems: 1
+
+  ranges: true
+
+  fsl,weim-cs-gpr:
+    $ref: /schemas/types.yaml#/definitions/phandle
+    description: |
+      Phandle to the system General Purpose Register controller that contains
+      WEIM CS GPR register, e.g. IOMUXC_GPR1 on i.MX6Q. IOMUXC_GPR1[11:0]
+      should be set up as one of the following 4 possible values depending on
+      the CS space configuration.
+
+      IOMUXC_GPR1[11:0]    CS0    CS1    CS2    CS3
+      ---------------------------------------------
+              05          128M     0M     0M     0M
+              033          64M    64M     0M     0M
+              0113         64M    32M    32M     0M
+              01111        32M    32M    32M    32M
+
+      In case that the property is absent, the reset value or what bootloader
+      sets up in IOMUXC_GPR1[11:0] will be used.
+
+  fsl,burst-clk-enable:
+    type: boolean
+    description:
+      The presence of this property indicates that the weim bus should operate
+      in Burst Clock Mode.
+
+  fsl,continuous-burst-clk:
+    type: boolean
+    description:
+      Make Burst Clock to output continuous clock. Without this option Burst
+      Clock will output clock only when necessary.
+
+patternProperties:
+  "^.*@[0-7],[0-9a-f]+$":
+    type: object
+    description: Devices attached to chip selects are represented as subnodes.
+    $ref: fsl,imx-weim-peripherals.yaml
+    additionalProperties: true
+    required:
+      - fsl,weim-cs-timing
+
+required:
+  - compatible
+  - reg
+  - clocks
+  - "#address-cells"
+  - "#size-cells"
+  - ranges
+
+allOf:
+  - if:
+      properties:
+        compatible:
+          not:
+            contains:
+              enum:
+                - fsl,imx50-weim
+                - fsl,imx6q-weim
+    then:
+      properties:
+        fsl,weim-cs-gpr: false
+        fsl,burst-clk-enable: false
+  - if:
+      not:
+        required:
+          - fsl,burst-clk-enable
+    then:
+      properties:
+        fsl,continuous-burst-clk: false
+  - if:
+      properties:
+        compatible:
+          contains:
+            const: fsl,imx1-weim
+    then:
+      patternProperties:
+        "^.*@[0-7],[0-9a-f]+$":
+          properties:
+            fsl,weim-cs-timing:
+              items:
+                items:
+                  - description: CSxU
+                  - description: CSxL
+  - if:
+      properties:
+        compatible:
+          contains:
+            enum:
+              - fsl,imx27-weim
+              - fsl,imx31-weim
+              - fsl,imx35-weim
+    then:
+      patternProperties:
+        "^.*@[0-7],[0-9a-f]+$":
+          properties:
+            fsl,weim-cs-timing:
+              items:
+                items:
+                  - description: CSCRxU
+                  - description: CSCRxL
+                  - description: CSCRxA
+  - if:
+      properties:
+        compatible:
+          contains:
+            enum:
+              - fsl,imx50-weim
+              - fsl,imx51-weim
+              - fsl,imx6q-weim
+              - fsl,imx6sx-weim
+              - fsl,imx6ul-weim
+    then:
+      patternProperties:
+        "^.*@[0-7],[0-9a-f]+$":
+          properties:
+            fsl,weim-cs-timing:
+              items:
+                items:
+                  - description: CSxGCR1
+                  - description: CSxGCR2
+                  - description: CSxRCR1
+                  - description: CSxRCR2
+                  - description: CSxWCR1
+                  - description: CSxWCR2
+
+additionalProperties: false
+
+examples:
+  - |
+    memory-controller@21b8000 {
+      compatible = "fsl,imx6q-weim";
+      reg = <0x021b8000 0x4000>;
+      clocks = <&clks 196>;
+      #address-cells = <2>;
+      #size-cells = <1>;
+      ranges = <0 0 0x08000000 0x08000000>;
+      fsl,weim-cs-gpr = <&gpr>;
+
+      flash@0,0 {
+        compatible = "cfi-flash";
+        reg = <0 0 0x02000000>;
+        #address-cells = <1>;
+        #size-cells = <1>;
+        bank-width = <2>;
+        fsl,weim-cs-timing = <0x00620081 0x00000001 0x1c022000
+                              0x0000c000 0x1404a38e 0x00000000>;
+      };
+    };
+1
Documentation/devicetree/bindings/memory-controllers/mc-peripheral-props.yaml
···
   - $ref: ingenic,nemc-peripherals.yaml#
   - $ref: intel,ixp4xx-expansion-peripheral-props.yaml#
   - $ref: ti,gpmc-child.yaml#
+  - $ref: fsl/fsl,imx-weim-peripherals.yaml
 
 additionalProperties: true
+1 -1
Documentation/devicetree/bindings/memory-controllers/nvidia,tegra20-emc.yaml
···
   "^emc-table@[0-9]+$":
     $ref: "#/$defs/emc-table"
 
-  "^emc-tables@[a-z0-9-]+$":
+  "^emc-tables@[a-f0-9-]+$":
     type: object
     properties:
       reg:
+1
Documentation/devicetree/bindings/memory-controllers/renesas,rpc-if.yaml
···
   - items:
       - enum:
           - renesas,r8a779g0-rpc-if       # R-Car V4H
+          - renesas,r8a779h0-rpc-if       # R-Car V4M
       - const: renesas,rcar-gen4-rpc-if   # a generic R-Car gen4 device
 
   - items:
+6 -1
Documentation/devicetree/bindings/memory-controllers/st,stm32-fmc2-ebi.yaml
···
 properties:
   compatible:
-    const: st,stm32mp1-fmc2-ebi
+    enum:
+      - st,stm32mp1-fmc2-ebi
+      - st,stm32mp25-fmc2-ebi
 
   reg:
     maxItems: 1
···
     maxItems: 1
 
   resets:
+    maxItems: 1
+
+  power-domains:
     maxItems: 1
 
   "#address-cells":
+46
Documentation/devicetree/bindings/soc/qcom/qcom,pbs.yaml
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/soc/qcom/qcom,pbs.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Qualcomm Technologies, Inc. Programmable Boot Sequencer
+
+maintainers:
+  - Anjelique Melendez <quic_amelende@quicinc.com>
+
+description: |
+  The Qualcomm Technologies, Inc. Programmable Boot Sequencer (PBS)
+  supports triggering power up and power down sequences for clients
+  upon request.
+
+properties:
+  compatible:
+    items:
+      - enum:
+          - qcom,pmi632-pbs
+      - const: qcom,pbs
+
+  reg:
+    maxItems: 1
+
+required:
+  - compatible
+  - reg
+
+additionalProperties: false
+
+examples:
+  - |
+    #include <dt-bindings/spmi/spmi.h>
+
+    pmic@0 {
+      reg = <0x0 SPMI_USID>;
+      #address-cells = <1>;
+      #size-cells = <0>;
+
+      pbs@7400 {
+        compatible = "qcom,pmi632-pbs", "qcom,pbs";
+        reg = <0x7400>;
+      };
+    };
+2
Documentation/devicetree/bindings/soc/qcom/qcom,pmic-glink.yaml
···
       - items:
           - enum:
               - qcom,sm8650-pmic-glink
+              - qcom,x1e80100-pmic-glink
           - const: qcom,sm8550-pmic-glink
           - const: qcom,pmic-glink
 
···
         enum:
           - qcom,sm8450-pmic-glink
           - qcom,sm8550-pmic-glink
+          - qcom,x1e80100-pmic-glink
     then:
       properties:
         orientation-gpios: false
+2
Documentation/devicetree/bindings/soc/qcom/qcom,rpm-master-stats.yaml
···
     description: Phandle to an RPM MSG RAM slice containing the master stats
     minItems: 1
     maxItems: 5
+    items:
+      maxItems: 1
 
   qcom,master-names:
     $ref: /schemas/types.yaml#/definitions/string-array
+40 -6
Documentation/devicetree/bindings/soc/qcom/qcom,spm.yaml → Documentation/devicetree/bindings/soc/qcom/qcom,saw2.yaml
 # SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
 %YAML 1.2
 ---
-$id: http://devicetree.org/schemas/soc/qcom/qcom,spm.yaml#
+$id: http://devicetree.org/schemas/soc/qcom/qcom,saw2.yaml#
 $schema: http://devicetree.org/meta-schemas/core.yaml#
 
-title: Qualcomm Subsystem Power Manager
+title: Qualcomm Subsystem Power Manager / SPM AVS Wrapper 2 (SAW2)
 
 maintainers:
   - Andy Gross <agross@kernel.org>
   - Bjorn Andersson <bjorn.andersson@linaro.org>
 
 description: |
-  This binding describes the Qualcomm Subsystem Power Manager, used to control
-  the peripheral logic surrounding the application cores in Qualcomm platforms.
+  The Qualcomm Subsystem Power Manager is used to control the peripheral logic
+  surrounding the application cores in Qualcomm platforms.
+
+  The SAW2 is a wrapper around the Subsystem Power Manager (SPM) and the
+  Adaptive Voltage Scaling (AVS) hardware. The SPM is a programmable
+  power-controller that transitions a piece of hardware (like a processor or
+  subsystem) into and out of low power modes via a direct connection to
+  the PMIC. It can also be wired up to interact with other processors in the
+  system, notifying them when a low power state is entered or exited.
 
 properties:
   compatible:
     items:
       - enum:
+          - qcom,ipq4019-saw2-cpu
+          - qcom,ipq4019-saw2-l2
+          - qcom,ipq8064-saw2-cpu
           - qcom,sdm660-gold-saw2-v4.1-l2
           - qcom,sdm660-silver-saw2-v4.1-l2
           - qcom,msm8998-gold-saw2-v4.1-l2
···
           - qcom,msm8916-saw2-v3.0-cpu
           - qcom,msm8939-saw2-v3.0-cpu
           - qcom,msm8226-saw2-v2.1-cpu
+          - qcom,msm8226-saw2-v2.1-l2
+          - qcom,msm8960-saw2-cpu
           - qcom,msm8974-saw2-v2.1-cpu
+          - qcom,msm8974-saw2-v2.1-l2
           - qcom,msm8976-gold-saw2-v2.3-l2
           - qcom,msm8976-silver-saw2-v2.3-l2
           - qcom,apq8084-saw2-v2.1-cpu
+          - qcom,apq8084-saw2-v2.1-l2
           - qcom,apq8064-saw2-v1.1-cpu
       - const: qcom,saw2
 
   reg:
-    description: Base address and size of the SPM register region
-    maxItems: 1
+    items:
+      - description: Base address and size of the SPM register region
+      - description: Base address and size of the alias register region
+    minItems: 1
+
+  regulator:
+    $ref: /schemas/regulator/regulator.yaml#
+    description: Indicates that this SPM device acts as a regulator device
+      device for the core (CPU or Cache) the SPM is attached to.
 
 required:
   - compatible
···
         reg = <0x17912000 0x1000>;
     };
 
+  - |
+    /*
+     * Example 3: SAW2 with the bundled regulator definition.
+     */
+    power-manager@2089000 {
+      compatible = "qcom,apq8064-saw2-v1.1-cpu", "qcom,saw2";
+      reg = <0x02089000 0x1000>, <0x02009000 0x1000>;
+
+      regulator {
+        regulator-min-microvolt = <850000>;
+        regulator-max-microvolt = <1300000>;
+      };
+    };
 ...
+2
Documentation/devicetree/bindings/soc/samsung/samsung,exynos-sysreg.yaml
···
     compatible:
       contains:
         enum:
+          - google,gs101-peric0-sysreg
+          - google,gs101-peric1-sysreg
           - samsung,exynos850-cmgp-sysreg
           - samsung,exynos850-peri-sysreg
           - samsung,exynos850-sysreg
+2 -2
MAINTAINERS
···
 F:	arch/arm64/boot/dts/exynos/google/
 F:	drivers/clk/samsung/clk-gs101.c
 F:	include/dt-bindings/clock/google,gs101.h
+K:	[gG]oogle.?[tT]ensor
 
 GPD POCKET FAN DRIVER
 M:	Hans de Goede <hdegoede@redhat.com>
···
 F:	drivers/pinctrl/renesas/
 
 PIN CONTROLLER - SAMSUNG
-M:	Tomasz Figa <tomasz.figa@gmail.com>
 M:	Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
 M:	Sylwester Nawrocki <s.nawrocki@samsung.com>
 R:	Alim Akhtar <alim.akhtar@samsung.com>
···
 SAMSUNG SOC CLOCK DRIVERS
 M:	Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
 M:	Sylwester Nawrocki <s.nawrocki@samsung.com>
-M:	Tomasz Figa <tomasz.figa@gmail.com>
 M:	Chanwoo Choi <cw00.choi@samsung.com>
 R:	Alim Akhtar <alim.akhtar@samsung.com>
 L:	linux-samsung-soc@vger.kernel.org
···
 F:	drivers/cpufreq/sc[mp]i-cpufreq.c
 F:	drivers/firmware/arm_scmi/
 F:	drivers/firmware/arm_scpi.c
+F:	drivers/hwmon/scmi-hwmon.c
 F:	drivers/pmdomain/arm/
 F:	drivers/powercap/arm_scmi_powercap.c
 F:	drivers/regulator/scmi-regulator.c
+3 -2
drivers/bus/Kconfig
···
 config TEGRA_ACONNECT
 	tristate "Tegra ACONNECT Bus Driver"
-	depends on ARCH_TEGRA_210_SOC
+	depends on ARCH_TEGRA
 	depends on OF && PM
 	help
 	  Driver for the Tegra ACONNECT bus which is used to interface with
-	  the devices inside the Audio Processing Engine (APE) for Tegra210.
+	  the devices inside the Audio Processing Engine (APE) for
+	  Tegra210 and later.
 
 config TEGRA_GMI
 	tristate "Tegra Generic Memory Interface bus driver"
+2 -2
drivers/bus/sunxi-rsb.c
···
 };
 
 /* bus / slave device related functions */
-static struct bus_type sunxi_rsb_bus;
+static const struct bus_type sunxi_rsb_bus;
 
 static int sunxi_rsb_device_match(struct device *dev, struct device_driver *drv)
 {
···
 	return of_device_uevent_modalias(dev, env);
 }
 
-static struct bus_type sunxi_rsb_bus = {
+static const struct bus_type sunxi_rsb_bus = {
 	.name = RSB_CTRL_NAME,
 	.match = sunxi_rsb_device_match,
 	.probe = sunxi_rsb_device_probe,
+1 -1
drivers/bus/ti-sysc.c
···
 	return 0;
 }
 
-static struct device_type sysc_device_type = {
+static const struct device_type sysc_device_type = {
 };
 
 static struct sysc *sysc_child_to_parent(struct device *dev)
+1 -1
drivers/firmware/arm_ffa/bus.c
···
 };
 ATTRIBUTE_GROUPS(ffa_device_attributes);
 
-struct bus_type ffa_bus_type = {
+const struct bus_type ffa_bus_type = {
 	.name = "arm_ffa",
 	.match = ffa_device_match,
 	.probe = ffa_device_probe,
+23 -3
drivers/firmware/arm_scmi/bus.c
···
 	return ret;
 }
 
+static int scmi_protocol_table_register(const struct scmi_device_id *id_table)
+{
+	int ret = 0;
+	const struct scmi_device_id *entry;
+
+	for (entry = id_table; entry->name && ret == 0; entry++)
+		ret = scmi_protocol_device_request(entry);
+
+	return ret;
+}
+
 /**
  * scmi_protocol_device_unrequest - Helper to unrequest a device
  *
···
 		}
 	}
 	mutex_unlock(&scmi_requested_devices_mtx);
+}
+
+static void
+scmi_protocol_table_unregister(const struct scmi_device_id *id_table)
+{
+	const struct scmi_device_id *entry;
+
+	for (entry = id_table; entry->name; entry++)
+		scmi_protocol_device_unrequest(entry);
 }
 
 static const struct scmi_device_id *
···
 		scmi_drv->remove(scmi_dev);
 }
 
-struct bus_type scmi_bus_type = {
+const struct bus_type scmi_bus_type = {
 	.name =	"scmi_protocol",
 	.match = scmi_dev_match,
 	.probe = scmi_dev_probe,
···
 	if (!driver->probe)
 		return -EINVAL;
 
-	retval = scmi_protocol_device_request(driver->id_table);
+	retval = scmi_protocol_table_register(driver->id_table);
 	if (retval)
 		return retval;
 
···
 void scmi_driver_unregister(struct scmi_driver *driver)
 {
 	driver_unregister(&driver->driver);
-	scmi_protocol_device_unrequest(driver->id_table);
+	scmi_protocol_table_unregister(driver->id_table);
 }
 EXPORT_SYMBOL_GPL(scmi_driver_unregister);
+166 -28
drivers/firmware/arm_scmi/clock.c
···
 #include "notify.h"
 
 /* Updated only after ALL the mandatory features for that version are merged */
-#define SCMI_PROTOCOL_SUPPORTED_VERSION		0x20000
+#define SCMI_PROTOCOL_SUPPORTED_VERSION		0x30000
 
 enum scmi_clock_protocol_cmd {
 	CLOCK_ATTRIBUTES = 0x3,
···
 	CLOCK_POSSIBLE_PARENTS_GET = 0xC,
 	CLOCK_PARENT_SET = 0xD,
 	CLOCK_PARENT_GET = 0xE,
+	CLOCK_GET_PERMISSIONS = 0xF,
 };
+
+#define CLOCK_STATE_CONTROL_ALLOWED	BIT(31)
+#define CLOCK_PARENT_CONTROL_ALLOWED	BIT(30)
+#define CLOCK_RATE_CONTROL_ALLOWED	BIT(29)
 
 enum clk_state {
 	CLK_STATE_DISABLE,
···
 #define SUPPORTS_RATE_CHANGE_REQUESTED_NOTIF(x)	((x) & BIT(30))
 #define SUPPORTS_EXTENDED_NAMES(x)		((x) & BIT(29))
 #define SUPPORTS_PARENT_CLOCK(x)		((x) & BIT(28))
+#define SUPPORTS_EXTENDED_CONFIG(x)		((x) & BIT(27))
+#define SUPPORTS_GET_PERMISSIONS(x)		((x) & BIT(1))
 	u8 name[SCMI_SHORT_NAME_MAX_SIZE];
 	__le32 clock_enable_latency;
 };
···
 	u32 version;
 	int num_clocks;
 	int max_async_req;
+	bool notify_rate_changed_cmd;
+	bool notify_rate_change_requested_cmd;
 	atomic_t cur_async_req;
 	struct scmi_clock_info *clk;
 	int (*clock_config_set)(const struct scmi_protocol_handle *ph,
 				u32 clk_id, enum clk_state state,
-				u8 oem_type, u32 oem_val, bool atomic);
+				enum scmi_clock_oem_config oem_type,
+				u32 oem_val, bool atomic);
 	int (*clock_config_get)(const struct scmi_protocol_handle *ph,
-				u32 clk_id, u8 oem_type, u32 *attributes,
-				bool *enabled, u32 *oem_val, bool atomic);
+				u32 clk_id, enum scmi_clock_oem_config oem_type,
+				u32 *attributes, bool *enabled, u32 *oem_val,
+				bool atomic);
 };
 
 static enum scmi_clock_protocol_cmd evt_2_cmd[] = {
 	CLOCK_RATE_NOTIFY,
 	CLOCK_RATE_CHANGE_REQUESTED_NOTIFY,
 };
+
+static inline struct scmi_clock_info *
+scmi_clock_domain_lookup(struct clock_info *ci, u32 clk_id)
+{
+	if (clk_id >= ci->num_clocks)
+		return ERR_PTR(-EINVAL);
+
+	return ci->clk + clk_id;
+}
 
 static int
 scmi_clock_protocol_attributes_get(const struct scmi_protocol_handle *ph,
···
 	}
 
 	ph->xops->xfer_put(ph, t);
+
+	if (!ret) {
+		if (!ph->hops->protocol_msg_check(ph, CLOCK_RATE_NOTIFY, NULL))
+			ci->notify_rate_changed_cmd = true;
+
+		if (!ph->hops->protocol_msg_check(ph,
+						  CLOCK_RATE_CHANGE_REQUESTED_NOTIFY,
+						  NULL))
+			ci->notify_rate_change_requested_cmd = true;
+	}
+
 	return ret;
 }
···
 	return ret;
 }
 
+static int
+scmi_clock_get_permissions(const struct scmi_protocol_handle *ph, u32 clk_id,
+			   struct scmi_clock_info *clk)
+{
+	struct scmi_xfer *t;
+	u32 perm;
+	int ret;
+
+	ret = ph->xops->xfer_get_init(ph, CLOCK_GET_PERMISSIONS,
+				      sizeof(clk_id), sizeof(perm), &t);
+	if (ret)
+		return ret;
+
+	put_unaligned_le32(clk_id, t->tx.buf);
+
+	ret = ph->xops->do_xfer(ph, t);
+	if (!ret) {
+		perm = get_unaligned_le32(t->rx.buf);
+
+		clk->state_ctrl_forbidden = !(perm & CLOCK_STATE_CONTROL_ALLOWED);
+		clk->rate_ctrl_forbidden = !(perm & CLOCK_RATE_CONTROL_ALLOWED);
+		clk->parent_ctrl_forbidden = !(perm & CLOCK_PARENT_CONTROL_ALLOWED);
+	}
+
+	ph->xops->xfer_put(ph, t);
+
+	return ret;
+}
+
 static int scmi_clock_attributes_get(const struct scmi_protocol_handle *ph,
-				     u32 clk_id, struct scmi_clock_info *clk,
+				     u32 clk_id, struct clock_info *cinfo,
 				     u32 version)
 {
 	int ret;
 	u32 attributes;
 	struct scmi_xfer *t;
 	struct scmi_msg_resp_clock_attributes *attr;
+	struct scmi_clock_info *clk = cinfo->clk + clk_id;
 
 	ret = ph->xops->xfer_get_init(ph, CLOCK_ATTRIBUTES,
 				      sizeof(clk_id), sizeof(*attr), &t);
···
 					    NULL, clk->name,
 					    SCMI_MAX_STR_SIZE);
 
-		if (SUPPORTS_RATE_CHANGED_NOTIF(attributes))
+		if (cinfo->notify_rate_changed_cmd &&
+		    SUPPORTS_RATE_CHANGED_NOTIF(attributes))
 			clk->rate_changed_notifications = true;
-		if (SUPPORTS_RATE_CHANGE_REQUESTED_NOTIF(attributes))
+		if (cinfo->notify_rate_change_requested_cmd &&
+		    SUPPORTS_RATE_CHANGE_REQUESTED_NOTIF(attributes))
 			clk->rate_change_requested_notifications = true;
-		if (SUPPORTS_PARENT_CLOCK(attributes))
-			scmi_clock_possible_parents(ph, clk_id, clk);
+		if (PROTOCOL_REV_MAJOR(version) >= 0x3) {
+			if (SUPPORTS_PARENT_CLOCK(attributes))
+				scmi_clock_possible_parents(ph, clk_id, clk);
+			if (SUPPORTS_GET_PERMISSIONS(attributes))
+				scmi_clock_get_permissions(ph, clk_id, clk);
+			if (SUPPORTS_EXTENDED_CONFIG(attributes))
+				clk->extended_config = true;
+		}
 	}
 
 	return ret;
···
 	struct scmi_xfer *t;
 	struct scmi_clock_set_rate *cfg;
 	struct clock_info *ci = ph->get_priv(ph);
+	struct scmi_clock_info *clk;
+
+	clk = scmi_clock_domain_lookup(ci, clk_id);
+	if (IS_ERR(clk))
+		return PTR_ERR(clk);
+
+	if (clk->rate_ctrl_forbidden)
+		return -EACCES;
 
 	ret = ph->xops->xfer_get_init(ph, CLOCK_RATE_SET, sizeof(*cfg), 0, &t);
 	if (ret)
···
 
 static int
 scmi_clock_config_set(const struct scmi_protocol_handle *ph, u32 clk_id,
-		      enum clk_state state, u8 __unused0, u32 __unused1,
+		      enum clk_state state,
+		      enum scmi_clock_oem_config __unused0, u32 __unused1,
 		      bool atomic)
 {
 	int ret;
···
 	struct clock_info *ci = ph->get_priv(ph);
 	struct scmi_clock_info *clk;
 
-	if (clk_id >= ci->num_clocks)
-		return -EINVAL;
-
-	clk = ci->clk + clk_id;
+	clk = scmi_clock_domain_lookup(ci, clk_id);
+	if (IS_ERR(clk))
+		return PTR_ERR(clk);
 
 	if (parent_id >= clk->num_parents)
 		return -EINVAL;
+
+	if (clk->parent_ctrl_forbidden)
+		return -EACCES;
 
 	ret = ph->xops->xfer_get_init(ph, CLOCK_PARENT_SET,
 				      sizeof(*cfg), 0, &t);
···
 	return ret;
 }
 
-/* For SCMI clock v2.1 and onwards */
+/* For SCMI clock v3.0 and onwards */
 static int
 scmi_clock_config_set_v2(const struct scmi_protocol_handle *ph, u32 clk_id,
-			 enum clk_state state, u8 oem_type, u32 oem_val,
+			 enum clk_state state,
+			 enum scmi_clock_oem_config oem_type, u32 oem_val,
 			 bool atomic)
 {
 	int ret;
···
 			      bool atomic)
 {
 	struct clock_info *ci = ph->get_priv(ph);
+	struct scmi_clock_info *clk;
+
+	clk = scmi_clock_domain_lookup(ci, clk_id);
+	if (IS_ERR(clk))
+		return PTR_ERR(clk);
+
+	if (clk->state_ctrl_forbidden)
+		return -EACCES;
 
 	return ci->clock_config_set(ph, clk_id, CLK_STATE_ENABLE,
 				    NULL_OEM_TYPE, 0, atomic);
···
 			       bool atomic)
 {
 	struct clock_info *ci = ph->get_priv(ph);
+	struct scmi_clock_info *clk;
+
+	clk = scmi_clock_domain_lookup(ci, clk_id);
+	if (IS_ERR(clk))
+		return PTR_ERR(clk);
+
+	if (clk->state_ctrl_forbidden)
+		return -EACCES;
 
 	return ci->clock_config_set(ph, clk_id, CLK_STATE_DISABLE,
 				    NULL_OEM_TYPE, 0, atomic);
 }
 
-/* For SCMI clock v2.1 and onwards */
+/* For SCMI clock v3.0 and onwards */
 static int
 scmi_clock_config_get_v2(const struct scmi_protocol_handle *ph, u32 clk_id,
-			 u8 oem_type, u32 *attributes, bool *enabled,
-			 u32 *oem_val, bool atomic)
+			 enum scmi_clock_oem_config oem_type, u32 *attributes,
+			 bool *enabled, u32 *oem_val, bool atomic)
 {
 	int ret;
 	u32 flags;
···
 
 static int
 scmi_clock_config_get(const struct scmi_protocol_handle *ph, u32 clk_id,
-		      u8 oem_type, u32 *attributes, bool *enabled,
-		      u32 *oem_val, bool atomic)
+		      enum scmi_clock_oem_config oem_type, u32 *attributes,
+		      bool *enabled, u32 *oem_val, bool atomic)
 {
 	int ret;
 	struct scmi_xfer *t;
···
 }
 
 static int scmi_clock_config_oem_set(const struct scmi_protocol_handle *ph,
-				     u32 clk_id, u8 oem_type, u32 oem_val,
-				     bool atomic)
+				     u32 clk_id,
+				     enum scmi_clock_oem_config oem_type,
+				     u32 oem_val, bool atomic)
 {
 	struct clock_info *ci = ph->get_priv(ph);
+	struct scmi_clock_info *clk;
+
+	clk = scmi_clock_domain_lookup(ci, clk_id);
+	if (IS_ERR(clk))
+		return PTR_ERR(clk);
+
+	if (!clk->extended_config)
+		return -EOPNOTSUPP;
 
 	return ci->clock_config_set(ph, clk_id, CLK_STATE_UNCHANGED,
 				    oem_type, oem_val, atomic);
 }
 
 static int scmi_clock_config_oem_get(const struct scmi_protocol_handle *ph,
-				     u32 clk_id, u8 oem_type, u32 *oem_val,
-				     u32 *attributes, bool atomic)
+				     u32 clk_id,
+				     enum scmi_clock_oem_config oem_type,
+				     u32 *oem_val, u32 *attributes, bool atomic)
 {
 	struct clock_info *ci = ph->get_priv(ph);
+	struct scmi_clock_info *clk;
+
+	clk = scmi_clock_domain_lookup(ci, clk_id);
+	if (IS_ERR(clk))
+		return PTR_ERR(clk);
+
+	if (!clk->extended_config)
+		return -EOPNOTSUPP;
 
 	return ci->clock_config_get(ph, clk_id, oem_type, attributes,
 				    NULL, oem_val, atomic);
···
 	struct scmi_clock_info *clk;
 	struct clock_info *ci = ph->get_priv(ph);
 
-	if (clk_id >= ci->num_clocks)
+	clk = scmi_clock_domain_lookup(ci, clk_id);
+	if (IS_ERR(clk))
 		return NULL;
 
-	clk = ci->clk + clk_id;
 	if (!clk->name[0])
 		return NULL;
···
823 .parent_set = scmi_clock_set_parent, 939 824 .parent_get = scmi_clock_get_parent, 940 825 }; 826 + 827 + static bool scmi_clk_notify_supported(const struct scmi_protocol_handle *ph, 828 + u8 evt_id, u32 src_id) 829 + { 830 + bool supported; 831 + struct scmi_clock_info *clk; 832 + struct clock_info *ci = ph->get_priv(ph); 833 + 834 + if (evt_id >= ARRAY_SIZE(evt_2_cmd)) 835 + return false; 836 + 837 + clk = scmi_clock_domain_lookup(ci, src_id); 838 + if (IS_ERR(clk)) 839 + return false; 840 + 841 + if (evt_id == SCMI_EVENT_CLOCK_RATE_CHANGED) 842 + supported = clk->rate_changed_notifications; 843 + else 844 + supported = clk->rate_change_requested_notifications; 845 + 846 + return supported; 847 + } 941 848 942 849 static int scmi_clk_rate_notify(const struct scmi_protocol_handle *ph, 943 850 u32 clk_id, int message_id, bool enable) ··· 1045 908 }; 1046 909 1047 910 static const struct scmi_event_ops clk_event_ops = { 911 + .is_notify_supported = scmi_clk_notify_supported, 1048 912 .get_num_sources = scmi_clk_get_num_sources, 1049 913 .set_notify_enabled = scmi_clk_set_notify_enabled, 1050 914 .fill_custom_report = scmi_clk_fill_custom_report, ··· 1087 949 for (clkid = 0; clkid < cinfo->num_clocks; clkid++) { 1088 950 struct scmi_clock_info *clk = cinfo->clk + clkid; 1089 951 1090 - ret = scmi_clock_attributes_get(ph, clkid, clk, version); 952 + ret = scmi_clock_attributes_get(ph, clkid, cinfo, version); 1091 953 if (!ret) 1092 954 scmi_clock_describe_rates_get(ph, clkid, clk); 1093 955 }
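The clk.c changes above pivot every per-domain operation onto the new scmi_clock_domain_lookup() helper, which folds the bounds check and the error return into a single ERR_PTR-encoded pointer. A minimal userspace sketch of that pattern follows; the ERR_PTR()/IS_ERR()/PTR_ERR() macros are re-implemented here only for illustration, and the struct fields are stand-ins rather than the real kernel definitions:

```c
#include <errno.h>
#include <stdint.h>

/* Userspace stand-ins for the kernel's pointer-error helpers */
#define MAX_ERRNO	4095
#define ERR_PTR(err)	((void *)(long)(err))
#define PTR_ERR(ptr)	((long)(ptr))
#define IS_ERR(ptr)	((unsigned long)(ptr) >= (unsigned long)-MAX_ERRNO)

struct scmi_clock_info { int dummy; };	/* placeholder, not the real layout */

struct clock_info {
	uint32_t num_clocks;
	struct scmi_clock_info *clk;
};

/* Same shape as the new helper: validate the id, encode failure in the pointer */
static struct scmi_clock_info *
scmi_clock_domain_lookup(struct clock_info *ci, uint32_t clk_id)
{
	if (clk_id >= ci->num_clocks)
		return ERR_PTR(-EINVAL);

	return ci->clk + clk_id;
}
```

Callers then test the returned pointer with IS_ERR() instead of repeating the range check, exactly as scmi_clock_rate_set() and scmi_clock_set_parent() now do.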
+1 -1
drivers/firmware/arm_scmi/common.h

···
 void scmi_setup_protocol_implemented(const struct scmi_protocol_handle *ph,
 				     u8 *prot_imp);
 
-extern struct bus_type scmi_bus_type;
+extern const struct bus_type scmi_bus_type;
 
 #define SCMI_BUS_NOTIFY_DEVICE_REQUEST		0
 #define SCMI_BUS_NOTIFY_DEVICE_UNREQUEST	1
+94 -5
drivers/firmware/arm_scmi/driver.c

···
  * @users: A refcount to track effective users of this protocol.
  * @priv: Reference for optional protocol private data.
  * @version: Protocol version supported by the platform as detected at runtime.
+ * @negotiated_version: When the platform supports a newer protocol version,
+ *			the agent will try to negotiate with the platform the
+ *			usage of the newest version known to it, since
+ *			backward compatibility is NOT automatically assured.
+ *			This field is NON-zero when a successful negotiation
+ *			has completed.
  * @ph: An embedded protocol handle that will be passed down to protocol
  *	initialization code to identify this instance.
  *
···
 	refcount_t users;
 	void *priv;
 	unsigned int version;
+	unsigned int negotiated_version;
 	struct scmi_protocol_handle ph;
 };
 
···
 #endif
 }
 
+/**
+ * scmi_protocol_msg_check - Check protocol message attributes
+ *
+ * @ph: A reference to the protocol handle.
+ * @message_id: The ID of the message to check.
+ * @attributes: A parameter to optionally return the retrieved message
+ *		attributes, in case of Success.
+ *
+ * A helper to check protocol message attributes for a specific protocol
+ * and message pair.
+ *
+ * Return: 0 on SUCCESS
+ */
+static int scmi_protocol_msg_check(const struct scmi_protocol_handle *ph,
+				   u32 message_id, u32 *attributes)
+{
+	int ret;
+	struct scmi_xfer *t;
+
+	ret = xfer_get_init(ph, PROTOCOL_MESSAGE_ATTRIBUTES,
+			    sizeof(__le32), 0, &t);
+	if (ret)
+		return ret;
+
+	put_unaligned_le32(message_id, t->tx.buf);
+	ret = do_xfer(ph, t);
+	if (!ret && attributes)
+		*attributes = get_unaligned_le32(t->rx.buf);
+	xfer_put(ph, t);
+
+	return ret;
+}
+
 static const struct scmi_proto_helpers_ops helpers_ops = {
 	.extended_name_get = scmi_common_extended_name_get,
 	.iter_response_init = scmi_iterator_init,
 	.iter_response_run = scmi_iterator_run,
+	.protocol_msg_check = scmi_protocol_msg_check,
 	.fastchannel_init = scmi_common_fastchannel_init,
 	.fastchannel_db_ring = scmi_common_fastchannel_db_ring,
 };
···
 	const struct scmi_protocol_instance *pi = ph_to_pi(ph);
 
 	return pi->handle->version;
+}
+
+/**
+ * scmi_protocol_version_negotiate - Negotiate protocol version
+ *
+ * @ph: A reference to the protocol handle.
+ *
+ * A helper to negotiate a protocol version different from the latest
+ * advertised as supported from the platform: on Success backward
+ * compatibility is assured by the platform.
+ *
+ * Return: 0 on Success
+ */
+static int scmi_protocol_version_negotiate(struct scmi_protocol_handle *ph)
+{
+	int ret;
+	struct scmi_xfer *t;
+	struct scmi_protocol_instance *pi = ph_to_pi(ph);
+
+	/* At first check if NEGOTIATE_PROTOCOL_VERSION is supported ... */
+	ret = scmi_protocol_msg_check(ph, NEGOTIATE_PROTOCOL_VERSION, NULL);
+	if (ret)
+		return ret;
+
+	/* ... then attempt protocol version negotiation */
+	ret = xfer_get_init(ph, NEGOTIATE_PROTOCOL_VERSION,
+			    sizeof(__le32), 0, &t);
+	if (ret)
+		return ret;
+
+	put_unaligned_le32(pi->proto->supported_version, t->tx.buf);
+	ret = do_xfer(ph, t);
+	if (!ret)
+		pi->negotiated_version = pi->proto->supported_version;
+
+	xfer_put(ph, t);
+
+	return ret;
 }
 
 /**
···
 	devres_close_group(handle->dev, pi->gid);
 	dev_dbg(handle->dev, "Initialized protocol: 0x%X\n", pi->proto->id);
 
-	if (pi->version > proto->supported_version)
-		dev_warn(handle->dev,
-			 "Detected UNSUPPORTED higher version 0x%X for protocol 0x%X."
-			 "Backward compatibility is NOT assured.\n",
-			 pi->version, pi->proto->id);
+	if (pi->version > proto->supported_version) {
+		ret = scmi_protocol_version_negotiate(&pi->ph);
+		if (!ret) {
+			dev_info(handle->dev,
+				 "Protocol 0x%X successfully negotiated version 0x%X\n",
+				 proto->id, pi->negotiated_version);
+		} else {
+			dev_warn(handle->dev,
+				 "Detected UNSUPPORTED higher version 0x%X for protocol 0x%X.\n",
+				 pi->version, pi->proto->id);
+			dev_warn(handle->dev,
+				 "Trying version 0x%X. Backward compatibility is NOT assured.\n",
+				 pi->proto->supported_version);
+		}
+	}
 
 	return pi;
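The negotiation flow added to driver.c only triggers when the platform reports a protocol version newer than the agent's latest known one, and falls back to the agent's own version when negotiation fails. This decision logic can be modeled as a small pure function; the names below and the `platform_accepts` flag (standing in for the NEGOTIATE_PROTOCOL_VERSION round-trip) are illustrative, not part of the kernel API:

```c
#include <stdbool.h>
#include <stdint.h>

/* Toy model of the decision made in scmi_alloc_init_protocol_instance() */
struct proto_state {
	uint32_t version;		/* as detected on the platform */
	uint32_t supported_version;	/* newest version known to the agent */
	uint32_t negotiated_version;	/* non-zero after successful negotiation */
};

static uint32_t pick_effective_version(struct proto_state *pi,
				       bool platform_accepts)
{
	if (pi->version <= pi->supported_version)
		return pi->version;	/* no negotiation needed */

	if (platform_accepts) {
		/* Platform agreed to speak the agent's newest version */
		pi->negotiated_version = pi->supported_version;
		return pi->negotiated_version;
	}

	/* Negotiation failed: try the agent's version, compatibility not assured */
	return pi->supported_version;
}
```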
+16 -1
drivers/firmware/arm_scmi/notify.c

···
 #define PROTO_ID_MASK		GENMASK(31, 24)
 #define EVT_ID_MASK		GENMASK(23, 16)
 #define SRC_ID_MASK		GENMASK(15, 0)
+#define NOTIF_UNSUPP		-1
 
 /*
  * Builds an unsigned 32bit key from the given input tuple to be used
···
 
 	pd->ph = ph;
 	for (i = 0; i < ee->num_events; i++, evt++) {
+		int id;
 		struct scmi_registered_event *r_evt;
 
 		r_evt = devm_kzalloc(ni->handle->dev, sizeof(*r_evt),
···
 					       evt->max_report_sz, GFP_KERNEL);
 		if (!r_evt->report)
 			return -ENOMEM;
+
+		for (id = 0; id < r_evt->num_sources; id++)
+			if (ee->ops->is_notify_supported &&
+			    !ee->ops->is_notify_supported(ph, r_evt->evt->id, id))
+				refcount_set(&r_evt->sources[id], NOTIF_UNSUPP);
 
 		pd->registered_events[i] = r_evt;
 		/* Ensure events are updated */
···
 	int ret = 0;
 
 	sid = &r_evt->sources[src_id];
-	if (refcount_read(sid) == 0) {
+	if (refcount_read(sid) == NOTIF_UNSUPP) {
+		dev_dbg(r_evt->proto->ph->dev,
+			"Notification NOT supported - proto_id:%d evt_id:%d src_id:%d",
+			r_evt->proto->id, r_evt->evt->id,
+			src_id);
+		ret = -EOPNOTSUPP;
+	} else if (refcount_read(sid) == 0) {
 		ret = REVT_NOTIFY_ENABLE(r_evt, r_evt->evt->id,
 					 src_id);
 		if (!ret)
···
 	} else {
 		for (; num_sources; src_id++, num_sources--) {
 			sid = &r_evt->sources[src_id];
+			if (refcount_read(sid) == NOTIF_UNSUPP)
+				continue;
 			if (refcount_dec_and_test(sid))
 				REVT_NOTIFY_DISABLE(r_evt,
 						    r_evt->evt->id, src_id);
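The notify.c change overloads each per-source refcount with a sentinel: NOTIF_UNSUPP (-1) marks sources whose notification command the platform does not support, 0 means no listeners yet, and positive values count registered listeners. A simplified sketch of that state machine, with plain ints instead of kernel refcount_t and an illustrative stand-in for -EOPNOTSUPP:

```c
/* Sentinel and error stand-in; names mirror the patch but are illustrative */
#define NOTIF_UNSUPP	-1
#define ERR_NOTSUPP	95	/* stand-in value for EOPNOTSUPP */

/* Mimics the enable path: refuse unsupported sources, count listeners otherwise */
static int enable_source(int *refcnt)
{
	if (*refcnt == NOTIF_UNSUPP)
		return -ERR_NOTSUPP;	/* never enable an unsupported source */

	(*refcnt)++;	/* the first caller would also send the enable command */
	return 0;
}

/* Mimics the disable path: skip unsupported or already-idle sources */
static void disable_source(int *refcnt)
{
	if (*refcnt == NOTIF_UNSUPP || *refcnt == 0)
		return;

	(*refcnt)--;	/* the last caller would also send the disable command */
}
```

Marking unsupported sources once at registration time lets every later enable request fail fast without another firmware round-trip.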
+4
drivers/firmware/arm_scmi/notify.h

···
 
 /**
  * struct scmi_event_ops - Protocol helpers called by the notification core.
+ * @is_notify_supported: Return true if the specified notification for the
+ *			 specified resource (src_id) is supported.
  * @get_num_sources: Returns the number of possible events' sources for this
  *		     protocol
  * @set_notify_enabled: Enable/disable the required evt_id/src_id notifications
···
  *			  process context.
  */
 struct scmi_event_ops {
+	bool (*is_notify_supported)(const struct scmi_protocol_handle *ph,
+				    u8 evt_id, u32 src_id);
 	int (*get_num_sources)(const struct scmi_protocol_handle *ph);
 	int (*set_notify_enabled)(const struct scmi_protocol_handle *ph,
 				  u8 evt_id, u32 src_id, bool enabled);
+4 -2
drivers/firmware/arm_scmi/optee.c

···
  * @rx_len: Response size
  * @mu: Mutex protection on channel access
  * @cinfo: SCMI channel information
- * @shmem: Virtual base address of the shared memory
- * @req: Shared memory protocol handle for SCMI request and synchronous response
+ * @req: union for SCMI interface
+ * @req.shmem: Virtual base address of the shared memory
+ * @req.msg: Shared memory protocol handle for SCMI request and
+ *	     synchronous response
  * @tee_shm: TEE shared memory handle @req or NULL if using IOMEM shmem
  * @link: Reference in agent's channel list
  */
+147 -16
drivers/firmware/arm_scmi/perf.c

···
 	enum scmi_power_scale power_scale;
 	u64 stats_addr;
 	u32 stats_size;
+	bool notify_lvl_cmd;
+	bool notify_lim_cmd;
 	struct perf_dom_info *dom_info;
 };
 
···
 	}
 
 	ph->xops->xfer_put(ph, t);
+
+	if (!ret) {
+		if (!ph->hops->protocol_msg_check(ph, PERF_NOTIFY_LEVEL, NULL))
+			pi->notify_lvl_cmd = true;
+
+		if (!ph->hops->protocol_msg_check(ph, PERF_NOTIFY_LIMITS, NULL))
+			pi->notify_lim_cmd = true;
+	}
+
 	return ret;
 }
 
···
 static int
 scmi_perf_domain_attributes_get(const struct scmi_protocol_handle *ph,
 				struct perf_dom_info *dom_info,
+				bool notify_lim_cmd, bool notify_lvl_cmd,
 				u32 version)
 {
 	int ret;
···
 
 		dom_info->set_limits = SUPPORTS_SET_LIMITS(flags);
 		dom_info->info.set_perf = SUPPORTS_SET_PERF_LVL(flags);
-		dom_info->perf_limit_notify = SUPPORTS_PERF_LIMIT_NOTIFY(flags);
-		dom_info->perf_level_notify = SUPPORTS_PERF_LEVEL_NOTIFY(flags);
+		if (notify_lim_cmd)
+			dom_info->perf_limit_notify =
+				SUPPORTS_PERF_LIMIT_NOTIFY(flags);
+		if (notify_lvl_cmd)
+			dom_info->perf_level_notify =
+				SUPPORTS_PERF_LEVEL_NOTIFY(flags);
 		dom_info->perf_fastchannels = SUPPORTS_PERF_FASTCHANNELS(flags);
 		if (PROTOCOL_REV_MAJOR(version) >= 0x4)
 			dom_info->level_indexing_mode =
···
 			le32_to_cpu(attr->sustained_freq_khz);
 		dom_info->sustained_perf_level =
 			le32_to_cpu(attr->sustained_perf_level);
+		/*
+		 * sustained_freq_khz = mult_factor * sustained_perf_level
+		 * mult_factor must be a non-zero positive integer (not a fraction)
+		 */
 		if (!dom_info->sustained_freq_khz ||
 		    !dom_info->sustained_perf_level ||
-		    dom_info->level_indexing_mode)
+		    dom_info->level_indexing_mode) {
 			/* CPUFreq converts to kHz, hence default 1000 */
 			dom_info->mult_factor = 1000;
-		else
+		} else {
 			dom_info->mult_factor =
 					(dom_info->sustained_freq_khz * 1000UL)
 					/ dom_info->sustained_perf_level;
+			if ((dom_info->sustained_freq_khz * 1000UL) %
+			    dom_info->sustained_perf_level)
+				dev_warn(ph->dev,
+					 "multiplier for domain %d rounded\n",
+					 dom_info->id);
+		}
+		if (!dom_info->mult_factor)
+			dev_warn(ph->dev,
+				 "Wrong sustained perf/frequency(domain %d)\n",
+				 dom_info->id);
+
 		strscpy(dom_info->info.name, attr->name,
 			SCMI_SHORT_NAME_MAX_SIZE);
 	}
···
 					    dom_info->id, NULL, dom_info->info.name,
 					    SCMI_MAX_STR_SIZE);
 
+	xa_init(&dom_info->opps_by_lvl);
 	if (dom_info->level_indexing_mode) {
 		xa_init(&dom_info->opps_by_idx);
-		xa_init(&dom_info->opps_by_lvl);
 		hash_init(dom_info->opps_by_freq);
 	}
 
···
 }
 
 static inline void
-process_response_opp(struct scmi_opp *opp, unsigned int loop_idx,
+process_response_opp(struct device *dev, struct perf_dom_info *dom,
+		     struct scmi_opp *opp, unsigned int loop_idx,
 		     const struct scmi_msg_resp_perf_describe_levels *r)
 {
+	int ret;
+
 	opp->perf = le32_to_cpu(r->opp[loop_idx].perf_val);
 	opp->power = le32_to_cpu(r->opp[loop_idx].power);
 	opp->trans_latency_us =
 		le16_to_cpu(r->opp[loop_idx].transition_latency_us);
+
+	ret = xa_insert(&dom->opps_by_lvl, opp->perf, opp, GFP_KERNEL);
+	if (ret)
+		dev_warn(dev, "Failed to add opps_by_lvl at %d - ret:%d\n",
+			 opp->perf, ret);
 }
 
 static inline void
···
 			struct scmi_opp *opp, unsigned int loop_idx,
 			const struct scmi_msg_resp_perf_describe_levels_v4 *r)
 {
+	int ret;
+
 	opp->perf = le32_to_cpu(r->opp[loop_idx].perf_val);
 	opp->power = le32_to_cpu(r->opp[loop_idx].power);
 	opp->trans_latency_us =
 		le16_to_cpu(r->opp[loop_idx].transition_latency_us);
 
+	ret = xa_insert(&dom->opps_by_lvl, opp->perf, opp, GFP_KERNEL);
+	if (ret)
+		dev_warn(dev, "Failed to add opps_by_lvl at %d - ret:%d\n",
+			 opp->perf, ret);
+
 	/* Note that PERF v4 reports always five 32-bit words */
 	opp->indicative_freq = le32_to_cpu(r->opp[loop_idx].indicative_freq);
 	if (dom->level_indexing_mode) {
-		int ret;
-
 		opp->level_index = le32_to_cpu(r->opp[loop_idx].level_index);
 
 		ret = xa_insert(&dom->opps_by_idx, opp->level_index, opp,
···
 			dev_warn(dev,
 				 "Failed to add opps_by_idx at %d - ret:%d\n",
 				 opp->level_index, ret);
-
-		ret = xa_insert(&dom->opps_by_lvl, opp->perf, opp, GFP_KERNEL);
-		if (ret)
-			dev_warn(dev,
-				 "Failed to add opps_by_lvl at %d - ret:%d\n",
-				 opp->perf, ret);
 
 		hash_add(dom->opps_by_freq, &opp->hash, opp->indicative_freq);
 	}
···
 
 	opp = &p->perf_dom->opp[st->desc_index + st->loop_idx];
 	if (PROTOCOL_REV_MAJOR(p->version) <= 0x3)
-		process_response_opp(opp, st->loop_idx, response);
+		process_response_opp(ph->dev, p->perf_dom, opp, st->loop_idx,
+				     response);
 	else
 		process_response_opp_v4(ph->dev, p->perf_dom, opp, st->loop_idx,
 					response);
···
 	.power_scale_get = scmi_power_scale_get,
 };
 
+static bool scmi_perf_notify_supported(const struct scmi_protocol_handle *ph,
+				       u8 evt_id, u32 src_id)
+{
+	bool supported;
+	struct perf_dom_info *dom;
+
+	if (evt_id >= ARRAY_SIZE(evt_2_cmd))
+		return false;
+
+	dom = scmi_perf_domain_lookup(ph, src_id);
+	if (IS_ERR(dom))
+		return false;
+
+	if (evt_id == SCMI_EVENT_PERFORMANCE_LIMITS_CHANGED)
+		supported = dom->perf_limit_notify;
+	else
+		supported = dom->perf_level_notify;
+
+	return supported;
+}
+
 static int scmi_perf_set_notify_enabled(const struct scmi_protocol_handle *ph,
 					u8 evt_id, u32 src_id, bool enable)
 {
···
 	return ret;
 }
 
+static int
+scmi_perf_xlate_opp_to_freq(struct perf_dom_info *dom,
+			    unsigned int index, unsigned long *freq)
+{
+	struct scmi_opp *opp;
+
+	if (!dom || !freq)
+		return -EINVAL;
+
+	if (!dom->level_indexing_mode) {
+		opp = xa_load(&dom->opps_by_lvl, index);
+		if (!opp)
+			return -ENODEV;
+
+		*freq = opp->perf * dom->mult_factor;
+	} else {
+		opp = xa_load(&dom->opps_by_idx, index);
+		if (!opp)
+			return -ENODEV;
+
+		*freq = opp->indicative_freq * dom->mult_factor;
+	}
+
+	return 0;
+}
+
 static void *scmi_perf_fill_custom_report(const struct scmi_protocol_handle *ph,
 					  u8 evt_id, ktime_t timestamp,
 					  const void *payld, size_t payld_sz,
 					  void *report, u32 *src_id)
 {
+	int ret;
 	void *rep = NULL;
+	struct perf_dom_info *dom;
 
 	switch (evt_id) {
 	case SCMI_EVENT_PERFORMANCE_LIMITS_CHANGED:
 	{
 		const struct scmi_perf_limits_notify_payld *p = payld;
 		struct scmi_perf_limits_report *r = report;
+		unsigned long freq_min, freq_max;
 
 		if (sizeof(*p) != payld_sz)
 			break;
···
 		r->domain_id = le32_to_cpu(p->domain_id);
 		r->range_max = le32_to_cpu(p->range_max);
 		r->range_min = le32_to_cpu(p->range_min);
+		/* Check if the reported domain exists at all */
+		dom = scmi_perf_domain_lookup(ph, r->domain_id);
+		if (IS_ERR(dom))
+			break;
+		/*
+		 * Event will be reported from this point on...
+		 * ...even if, later, xlated frequencies were not retrieved.
+		 */
 		*src_id = r->domain_id;
 		rep = r;
+
+		ret = scmi_perf_xlate_opp_to_freq(dom, r->range_max, &freq_max);
+		if (ret)
+			break;
+
+		ret = scmi_perf_xlate_opp_to_freq(dom, r->range_min, &freq_min);
+		if (ret)
+			break;
+
+		/* Report translated freqs ONLY if both available */
+		r->range_max_freq = freq_max;
+		r->range_min_freq = freq_min;
+
 		break;
 	}
 	case SCMI_EVENT_PERFORMANCE_LEVEL_CHANGED:
 	{
 		const struct scmi_perf_level_notify_payld *p = payld;
 		struct scmi_perf_level_report *r = report;
+		unsigned long freq;
 
 		if (sizeof(*p) != payld_sz)
 			break;
···
 		r->timestamp = timestamp;
 		r->agent_id = le32_to_cpu(p->agent_id);
 		r->domain_id = le32_to_cpu(p->domain_id);
+		/* Report translated freqs ONLY if available */
 		r->performance_level = le32_to_cpu(p->performance_level);
+		/* Check if the reported domain exists at all */
+		dom = scmi_perf_domain_lookup(ph, r->domain_id);
+		if (IS_ERR(dom))
+			break;
+		/*
+		 * Event will be reported from this point on...
+		 * ...even if, later, xlated frequencies were not retrieved.
+		 */
 		*src_id = r->domain_id;
 		rep = r;
+
+		/* Report translated freqs ONLY if available */
+		ret = scmi_perf_xlate_opp_to_freq(dom, r->performance_level,
+						  &freq);
+		if (ret)
+			break;
+
+		r->performance_level_freq = freq;
+
 		break;
 	}
 	default:
···
 };
 
 static const struct scmi_event_ops perf_event_ops = {
+	.is_notify_supported = scmi_perf_notify_supported,
 	.get_num_sources = scmi_perf_get_num_sources,
 	.set_notify_enabled = scmi_perf_set_notify_enabled,
 	.fill_custom_report = scmi_perf_fill_custom_report,
···
 		struct perf_dom_info *dom = pinfo->dom_info + domain;
 
 		dom->id = domain;
-		scmi_perf_domain_attributes_get(ph, dom, version);
+		scmi_perf_domain_attributes_get(ph, dom, pinfo->notify_lim_cmd,
+						pinfo->notify_lvl_cmd, version);
 		scmi_perf_describe_levels_get(ph, dom, version);
 
 		if (dom->perf_fastchannels)
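The perf.c change to scmi_perf_domain_attributes_get() now warns when sustained_freq_khz is not an exact integer multiple of sustained_perf_level, since the truncated quotient silently loses precision. The calculation can be sketched as a standalone function; the `rounded` out-parameter below is an illustrative stand-in for the driver's dev_warn():

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Sketch of the mult_factor computation: sustained_freq_khz should equal
 * mult_factor * sustained_perf_level exactly; otherwise the kernel keeps
 * the truncated quotient and warns that the multiplier was rounded.
 */
static unsigned long compute_mult_factor(uint32_t sustained_freq_khz,
					 uint32_t sustained_perf_level,
					 bool level_indexing_mode,
					 bool *rounded)
{
	*rounded = false;

	/* CPUFreq converts to kHz, hence default 1000 */
	if (!sustained_freq_khz || !sustained_perf_level || level_indexing_mode)
		return 1000;

	if ((sustained_freq_khz * 1000UL) % sustained_perf_level)
		*rounded = true;	/* the driver would dev_warn() here */

	return (sustained_freq_khz * 1000UL) / sustained_perf_level;
}
```

For example, a domain reporting 2000 kHz at sustained level 100 yields an exact multiplier of 20000, while 1000 kHz at level 3 truncates and triggers the rounding warning.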
+27 -3
drivers/firmware/arm_scmi/power.c

···
 
 struct scmi_power_info {
 	u32 version;
+	bool notify_state_change_cmd;
 	int num_domains;
 	u64 stats_addr;
 	u32 stats_size;
···
 	}
 
 	ph->xops->xfer_put(ph, t);
+
+	if (!ret)
+		if (!ph->hops->protocol_msg_check(ph, POWER_STATE_NOTIFY, NULL))
+			pi->notify_state_change_cmd = true;
+
 	return ret;
 }
 
 static int
 scmi_power_domain_attributes_get(const struct scmi_protocol_handle *ph,
 				 u32 domain, struct power_dom_info *dom_info,
-				 u32 version)
+				 u32 version, bool notify_state_change_cmd)
 {
 	int ret;
 	u32 flags;
···
 	if (!ret) {
 		flags = le32_to_cpu(attr->flags);
 
-		dom_info->state_set_notify = SUPPORTS_STATE_SET_NOTIFY(flags);
+		if (notify_state_change_cmd)
+			dom_info->state_set_notify =
+				SUPPORTS_STATE_SET_NOTIFY(flags);
 		dom_info->state_set_async = SUPPORTS_STATE_SET_ASYNC(flags);
 		dom_info->state_set_sync = SUPPORTS_STATE_SET_SYNC(flags);
 		strscpy(dom_info->name, attr->name, SCMI_SHORT_NAME_MAX_SIZE);
···
 	return ret;
 }
 
+static bool scmi_power_notify_supported(const struct scmi_protocol_handle *ph,
+					u8 evt_id, u32 src_id)
+{
+	struct power_dom_info *dom;
+	struct scmi_power_info *pinfo = ph->get_priv(ph);
+
+	if (evt_id != SCMI_EVENT_POWER_STATE_CHANGED ||
+	    src_id >= pinfo->num_domains)
+		return false;
+
+	dom = pinfo->dom_info + src_id;
+	return dom->state_set_notify;
+}
+
 static int scmi_power_set_notify_enabled(const struct scmi_protocol_handle *ph,
 					 u8 evt_id, u32 src_id, bool enable)
 {
···
 };
 
 static const struct scmi_event_ops power_event_ops = {
+	.is_notify_supported = scmi_power_notify_supported,
 	.get_num_sources = scmi_power_get_num_sources,
 	.set_notify_enabled = scmi_power_set_notify_enabled,
 	.fill_custom_report = scmi_power_fill_custom_report,
···
 	for (domain = 0; domain < pinfo->num_domains; domain++) {
 		struct power_dom_info *dom = pinfo->dom_info + domain;
 
-		scmi_power_domain_attributes_get(ph, domain, dom, version);
+		scmi_power_domain_attributes_get(ph, domain, dom, version,
+						 pinfo->notify_state_change_cmd);
 	}
 
 	pinfo->version = version;
+41 -4
drivers/firmware/arm_scmi/powercap.c

···
 struct powercap_info {
 	u32 version;
 	int num_domains;
+	bool notify_cap_cmd;
+	bool notify_measurements_cmd;
 	struct scmi_powercap_state *states;
 	struct scmi_powercap_info *powercaps;
 };
···
 	}
 
 	ph->xops->xfer_put(ph, t);
+
+	if (!ret) {
+		if (!ph->hops->protocol_msg_check(ph,
+						  POWERCAP_CAP_NOTIFY, NULL))
+			pi->notify_cap_cmd = true;
+
+		if (!ph->hops->protocol_msg_check(ph,
+						  POWERCAP_MEASUREMENTS_NOTIFY,
+						  NULL))
+			pi->notify_measurements_cmd = true;
+	}
+
 	return ret;
 }
 
···
 	flags = le32_to_cpu(resp->attributes);
 
 	dom_info->id = domain;
-	dom_info->notify_powercap_cap_change =
-		SUPPORTS_POWERCAP_CAP_CHANGE_NOTIFY(flags);
-	dom_info->notify_powercap_measurement_change =
-		SUPPORTS_POWERCAP_MEASUREMENTS_CHANGE_NOTIFY(flags);
+	if (pinfo->notify_cap_cmd)
+		dom_info->notify_powercap_cap_change =
+			SUPPORTS_POWERCAP_CAP_CHANGE_NOTIFY(flags);
+	if (pinfo->notify_measurements_cmd)
+		dom_info->notify_powercap_measurement_change =
+			SUPPORTS_POWERCAP_MEASUREMENTS_CHANGE_NOTIFY(flags);
 	dom_info->async_powercap_cap_set =
 		SUPPORTS_ASYNC_POWERCAP_CAP_SET(flags);
 	dom_info->powercap_cap_config =
···
 	return ret;
 }
 
+static bool
+scmi_powercap_notify_supported(const struct scmi_protocol_handle *ph,
+			       u8 evt_id, u32 src_id)
+{
+	bool supported = false;
+	const struct scmi_powercap_info *dom_info;
+	struct powercap_info *pi = ph->get_priv(ph);
+
+	if (evt_id >= ARRAY_SIZE(evt_2_cmd) || src_id >= pi->num_domains)
+		return false;
+
+	dom_info = pi->powercaps + src_id;
+	if (evt_id == SCMI_EVENT_POWERCAP_CAP_CHANGED)
+		supported = dom_info->notify_powercap_cap_change;
+	else if (evt_id == SCMI_EVENT_POWERCAP_MEASUREMENTS_CHANGED)
+		supported = dom_info->notify_powercap_measurement_change;
+
+	return supported;
+}
+
 static int
 scmi_powercap_set_notify_enabled(const struct scmi_protocol_handle *ph,
 				 u8 evt_id, u32 src_id, bool enable)
···
 };
 
 static const struct scmi_event_ops powercap_event_ops = {
+	.is_notify_supported = scmi_powercap_notify_supported,
 	.get_num_sources = scmi_powercap_get_num_sources,
 	.set_notify_enabled = scmi_powercap_set_notify_enabled,
 	.fill_custom_report = scmi_powercap_fill_custom_report,
+5
drivers/firmware/arm_scmi/protocols.h

···
 	PROTOCOL_VERSION = 0x0,
 	PROTOCOL_ATTRIBUTES = 0x1,
 	PROTOCOL_MESSAGE_ATTRIBUTES = 0x2,
+	NEGOTIATE_PROTOCOL_VERSION = 0x10,
 };
 
 /**
···
  *			 provided in @ops.
  * @iter_response_run: A common helper to trigger the run of a previously
  *		       initialized iterator.
+ * @protocol_msg_check: A common helper to check if a specific protocol message
+ *			is supported.
  * @fastchannel_init: A common helper used to initialize FC descriptors by
  *		      gathering FC descriptions from the SCMI platform server.
  * @fastchannel_db_ring: A common helper to ring a FC doorbell.
···
 			      unsigned int max_resources, u8 msg_id,
 			      size_t tx_size, void *priv);
 	int (*iter_response_run)(void *iter);
+	int (*protocol_msg_check)(const struct scmi_protocol_handle *ph,
+				  u32 message_id, u32 *attributes);
 	void (*fastchannel_init)(const struct scmi_protocol_handle *ph,
 				 u8 describe_id, u32 message_id,
 				 u32 valid_size, u32 domain,
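The protocol_msg_check helper declared here serializes the message_id into the transmit buffer with put_unaligned_le32() and reads the reply with get_unaligned_le32(), i.e. unaligned little-endian regardless of host byte order. A portable userspace equivalent of those two kernel accessors, written byte-wise so it is both alignment- and endianness-safe (the `_sketch` names are illustrative, not kernel API):

```c
#include <stdint.h>

/* Byte-wise store: safe at any address, always little-endian on the wire */
static void put_unaligned_le32_sketch(uint32_t val, void *p)
{
	uint8_t *b = p;

	b[0] = val & 0xff;
	b[1] = (val >> 8) & 0xff;
	b[2] = (val >> 16) & 0xff;
	b[3] = (val >> 24) & 0xff;
}

/* Byte-wise load: reassemble the little-endian value without aliasing tricks */
static uint32_t get_unaligned_le32_sketch(const void *p)
{
	const uint8_t *b = p;

	return (uint32_t)b[0] | ((uint32_t)b[1] << 8) |
	       ((uint32_t)b[2] << 16) | ((uint32_t)b[3] << 24);
}
```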
+29 -8
drivers/firmware/arm_scmi/reset.c

···
 struct scmi_reset_info {
 	u32 version;
 	int num_domains;
+	bool notify_reset_cmd;
 	struct reset_dom_info *dom_info;
 };
 
···
 	}
 
 	ph->xops->xfer_put(ph, t);
+
+	if (!ret)
+		if (!ph->hops->protocol_msg_check(ph, RESET_NOTIFY, NULL))
+			pi->notify_reset_cmd = true;
+
 	return ret;
 }
 
 static int
 scmi_reset_domain_attributes_get(const struct scmi_protocol_handle *ph,
-				 u32 domain, struct reset_dom_info *dom_info,
-				 u32 version)
+				 struct scmi_reset_info *pinfo,
+				 u32 domain, u32 version)
 {
 	int ret;
 	u32 attributes;
 	struct scmi_xfer *t;
 	struct scmi_msg_resp_reset_domain_attributes *attr;
+	struct reset_dom_info *dom_info = pinfo->dom_info + domain;
 
 	ret = ph->xops->xfer_get_init(ph, RESET_DOMAIN_ATTRIBUTES,
 				      sizeof(domain), sizeof(*attr), &t);
···
 		attributes = le32_to_cpu(attr->attributes);
 
 		dom_info->async_reset = SUPPORTS_ASYNC_RESET(attributes);
-		dom_info->reset_notify = SUPPORTS_NOTIFY_RESET(attributes);
+		if (pinfo->notify_reset_cmd)
+			dom_info->reset_notify =
+				SUPPORTS_NOTIFY_RESET(attributes);
 		dom_info->latency_us = le32_to_cpu(attr->latency);
 		if (dom_info->latency_us == U32_MAX)
 			dom_info->latency_us = 0;
···
 	.deassert = scmi_reset_domain_deassert,
 };
 
+static bool scmi_reset_notify_supported(const struct scmi_protocol_handle *ph,
+					u8 evt_id, u32 src_id)
+{
+	struct reset_dom_info *dom;
+	struct scmi_reset_info *pi = ph->get_priv(ph);
+
+	if (evt_id != SCMI_EVENT_RESET_ISSUED || src_id >= pi->num_domains)
+		return false;
+
+	dom = pi->dom_info + src_id;
+
+	return dom->reset_notify;
+}
+
 static int scmi_reset_notify(const struct scmi_protocol_handle *ph,
 			     u32 domain_id, bool enable)
 {
···
 };
 
 static const struct scmi_event_ops reset_event_ops = {
+	.is_notify_supported = scmi_reset_notify_supported,
 	.get_num_sources = scmi_reset_get_num_sources,
 	.set_notify_enabled = scmi_reset_set_notify_enabled,
 	.fill_custom_report = scmi_reset_fill_custom_report,
···
 	if (!pinfo->dom_info)
 		return -ENOMEM;
 
-	for (domain = 0; domain < pinfo->num_domains; domain++) {
-		struct reset_dom_info *dom = pinfo->dom_info + domain;
-
-		scmi_reset_domain_attributes_get(ph, domain, dom, version);
-	}
+	for (domain = 0; domain < pinfo->num_domains; domain++)
+		scmi_reset_domain_attributes_get(ph, pinfo, domain, version);
 
 	pinfo->version = version;
 	return ph->set_priv(ph, pinfo, version);
+36 -1
drivers/firmware/arm_scmi/sensors.c
···
 
 struct sensors_info {
 	u32 version;
+	bool notify_trip_point_cmd;
+	bool notify_continuos_update_cmd;
 	int num_sensors;
 	int max_requests;
 	u64 reg_addr;
···
 	}
 
 	ph->xops->xfer_put(ph, t);
+
+	if (!ret) {
+		if (!ph->hops->protocol_msg_check(ph,
+						  SENSOR_TRIP_POINT_NOTIFY, NULL))
+			si->notify_trip_point_cmd = true;
+
+		if (!ph->hops->protocol_msg_check(ph,
+						  SENSOR_CONTINUOUS_UPDATE_NOTIFY,
+						  NULL))
+			si->notify_continuos_update_cmd = true;
+	}
+
 	return ret;
 }
···
 	 * Such bitfields are assumed to be zeroed on non
 	 * relevant fw versions...assuming fw not buggy !
 	 */
-	s->update = SUPPORTS_UPDATE_NOTIFY(attrl);
+	if (si->notify_continuos_update_cmd)
+		s->update = SUPPORTS_UPDATE_NOTIFY(attrl);
 	s->timestamped = SUPPORTS_TIMESTAMP(attrl);
 	if (s->timestamped)
 		s->tstamp_scale = S32_EXT(SENSOR_TSTAMP_EXP(attrl));
···
 	.config_set = scmi_sensor_config_set,
 };
 
+static bool scmi_sensor_notify_supported(const struct scmi_protocol_handle *ph,
+					 u8 evt_id, u32 src_id)
+{
+	bool supported = false;
+	const struct scmi_sensor_info *s;
+	struct sensors_info *sinfo = ph->get_priv(ph);
+
+	s = scmi_sensor_info_get(ph, src_id);
+	if (!s)
+		return false;
+
+	if (evt_id == SCMI_EVENT_SENSOR_TRIP_POINT_EVENT)
+		supported = sinfo->notify_trip_point_cmd;
+	else if (evt_id == SCMI_EVENT_SENSOR_UPDATE)
+		supported = s->update;
+
+	return supported;
+}
+
 static int scmi_sensor_set_notify_enabled(const struct scmi_protocol_handle *ph,
 					  u8 evt_id, u32 src_id, bool enable)
 {
···
 };
 
 static const struct scmi_event_ops sensor_event_ops = {
+	.is_notify_supported = scmi_sensor_notify_supported,
 	.get_num_sources = scmi_sensor_get_num_sources,
 	.set_notify_enabled = scmi_sensor_set_notify_enabled,
 	.fill_custom_report = scmi_sensor_fill_custom_report,
+7
drivers/firmware/arm_scmi/smc.c
···
 	struct scmi_chan_info *cinfo = p;
 	struct scmi_smc *scmi_info = cinfo->transport_info;
 
+	/*
+	 * Different protocols might share the same chan info, so a previous
+	 * smc_chan_free call might have already freed the structure.
+	 */
+	if (!scmi_info)
+		return 0;
+
 	/* Ignore any possible further reception on the IRQ path */
 	if (scmi_info->irq > 0)
 		free_irq(scmi_info->irq, scmi_info);
+16
drivers/firmware/arm_scmi/system.c
···
 struct scmi_system_info {
 	u32 version;
 	bool graceful_timeout_supported;
+	bool power_state_notify_cmd;
 };
+
+static bool scmi_system_notify_supported(const struct scmi_protocol_handle *ph,
+					 u8 evt_id, u32 src_id)
+{
+	struct scmi_system_info *pinfo = ph->get_priv(ph);
+
+	if (evt_id != SCMI_EVENT_SYSTEM_POWER_STATE_NOTIFIER)
+		return false;
+
+	return pinfo->power_state_notify_cmd;
+}
 
 static int scmi_system_request_notify(const struct scmi_protocol_handle *ph,
 				      bool enable)
···
 };
 
 static const struct scmi_event_ops system_event_ops = {
+	.is_notify_supported = scmi_system_notify_supported,
 	.set_notify_enabled = scmi_system_set_notify_enabled,
 	.fill_custom_report = scmi_system_fill_custom_report,
 };
···
 	pinfo->version = version;
 	if (PROTOCOL_REV_MAJOR(pinfo->version) >= 0x2)
 		pinfo->graceful_timeout_supported = true;
+
+	if (!ph->hops->protocol_msg_check(ph, SYSTEM_POWER_STATE_NOTIFY, NULL))
+		pinfo->power_state_notify_cmd = true;
 
 	return ph->set_priv(ph, pinfo, version);
 }
+1 -1
drivers/firmware/tegra/bpmp-debugfs.c
···
 
 	root_path_buf = kzalloc(root_path_buf_len, GFP_KERNEL);
 	if (!root_path_buf)
-		goto out;
+		return NULL;
 
 	root_path = dentry_path(bpmp->debugfs_mirror, root_path_buf,
 				root_path_buf_len);
+24 -41
drivers/memory/emif.c
···
 static unsigned long	irq_state;
 static LIST_HEAD(device_list);
 
-#ifdef CONFIG_DEBUG_FS
 static void do_emif_regdump_show(struct seq_file *s, struct emif_data *emif,
 			struct emif_regs *regs)
 {
···
 
 DEFINE_SHOW_ATTRIBUTE(emif_mr4);
 
-static int __init_or_module emif_debugfs_init(struct emif_data *emif)
+static void emif_debugfs_init(struct emif_data *emif)
 {
-	emif->debugfs_root = debugfs_create_dir(dev_name(emif->dev), NULL);
-	debugfs_create_file("regcache_dump", S_IRUGO, emif->debugfs_root, emif,
-			    &emif_regdump_fops);
-	debugfs_create_file("mr4", S_IRUGO, emif->debugfs_root, emif,
-			    &emif_mr4_fops);
-	return 0;
+	if (IS_ENABLED(CONFIG_DEBUG_FS)) {
+		emif->debugfs_root = debugfs_create_dir(dev_name(emif->dev), NULL);
+		debugfs_create_file("regcache_dump", S_IRUGO, emif->debugfs_root, emif,
+				    &emif_regdump_fops);
+		debugfs_create_file("mr4", S_IRUGO, emif->debugfs_root, emif,
+				    &emif_mr4_fops);
+	}
 }
 
-static void __exit emif_debugfs_exit(struct emif_data *emif)
+static void emif_debugfs_exit(struct emif_data *emif)
 {
-	debugfs_remove_recursive(emif->debugfs_root);
-	emif->debugfs_root = NULL;
+	if (IS_ENABLED(CONFIG_DEBUG_FS)) {
+		debugfs_remove_recursive(emif->debugfs_root);
+		emif->debugfs_root = NULL;
+	}
 }
-#else
-static inline int __init_or_module emif_debugfs_init(struct emif_data *emif)
-{
-	return 0;
-}
-
-static inline void __exit emif_debugfs_exit(struct emif_data *emif)
-{
-}
-#endif
 
 /*
  * Get bus width used by EMIF. Note that this may be different from the
···
 	clear_all_interrupts(emif);
 }
 
-static int __init_or_module setup_interrupts(struct emif_data *emif, u32 irq)
+static int setup_interrupts(struct emif_data *emif, u32 irq)
 {
 	u32 interrupts, type;
 	void __iomem *base = emif->base;
···
 
 }
 
-static void __init_or_module emif_onetime_settings(struct emif_data *emif)
+static void emif_onetime_settings(struct emif_data *emif)
 {
 	u32 pwr_mgmt_ctrl, zq, temp_alert_cfg;
 	void __iomem *base = emif->base;
···
 	return valid;
 }
 
-#if defined(CONFIG_OF)
-static void __init_or_module of_get_custom_configs(struct device_node *np_emif,
+static void of_get_custom_configs(struct device_node *np_emif,
 				  struct emif_data *emif)
 {
 	struct emif_custom_configs *cust_cfgs = NULL;
···
 	emif->plat_data->custom_configs = cust_cfgs;
 }
 
-static void __init_or_module of_get_ddr_info(struct device_node *np_emif,
+static void of_get_ddr_info(struct device_node *np_emif,
 			    struct device_node *np_ddr,
 			    struct ddr_device_info *dev_info)
 {
···
 	dev_info->io_width = __fls(io_width) - 1;
 }
 
-static struct emif_data * __init_or_module of_get_memory_device_details(
+static struct emif_data *of_get_memory_device_details(
 		struct device_node *np_emif, struct device *dev)
 {
 	struct emif_data *emif = NULL;
···
 	return emif;
 }
 
-#else
-
-static struct emif_data * __init_or_module of_get_memory_device_details(
-		struct device_node *np_emif, struct device *dev)
-{
-	return NULL;
-}
-#endif
-
-static struct emif_data *__init_or_module get_device_details(
+static struct emif_data *get_device_details(
 		struct platform_device *pdev)
 {
 	u32 size;
···
 	return NULL;
 }
 
-static int __init_or_module emif_probe(struct platform_device *pdev)
+static int emif_probe(struct platform_device *pdev)
 {
 	struct emif_data *emif;
 	int irq, ret;
···
 	return -ENODEV;
 }
 
-static void __exit emif_remove(struct platform_device *pdev)
+static void emif_remove(struct platform_device *pdev)
 {
 	struct emif_data *emif = platform_get_drvdata(pdev);
···
 #endif
 
 static struct platform_driver emif_driver = {
-	.remove_new = __exit_p(emif_remove),
+	.probe = emif_probe,
+	.remove_new = emif_remove,
 	.shutdown = emif_shutdown,
 	.driver = {
 		.name = "emif",
···
 	},
 };
 
-module_platform_driver_probe(emif_driver, emif_probe);
+module_platform_driver(emif_driver);
 
 MODULE_DESCRIPTION("TI EMIF SDRAM Controller Driver");
 MODULE_LICENSE("GPL");
+680 -49
drivers/memory/stm32-fmc2-ebi.c
···
 #include <linux/of_platform.h>
 #include <linux/pinctrl/consumer.h>
 #include <linux/platform_device.h>
+#include <linux/pm_runtime.h>
 #include <linux/regmap.h>
 #include <linux/reset.h>
···
 #define FMC2_BCR(x)		((x) * 0x8 + FMC2_BCR1)
 #define FMC2_BTR(x)		((x) * 0x8 + FMC2_BTR1)
 #define FMC2_PCSCNTR		0x20
+#define FMC2_CFGR		0x20
+#define FMC2_SR		0x84
 #define FMC2_BWTR1		0x104
 #define FMC2_BWTR(x)		((x) * 0x8 + FMC2_BWTR1)
+#define FMC2_SECCFGR		0x300
+#define FMC2_CIDCFGR0		0x30c
+#define FMC2_CIDCFGR(x)	((x) * 0x8 + FMC2_CIDCFGR0)
+#define FMC2_SEMCR0		0x310
+#define FMC2_SEMCR(x)		((x) * 0x8 + FMC2_SEMCR0)
 
 /* Register: FMC2_BCR1 */
 #define FMC2_BCR1_CCLKEN	BIT(20)
···
 #define FMC2_BCR_ASYNCWAIT	BIT(15)
 #define FMC2_BCR_CPSIZE	GENMASK(18, 16)
 #define FMC2_BCR_CBURSTRW	BIT(19)
+#define FMC2_BCR_CSCOUNT	GENMASK(21, 20)
 #define FMC2_BCR_NBLSET	GENMASK(23, 22)
 
 /* Register: FMC2_BTRx/FMC2_BWTRx */
···
 #define FMC2_PCSCNTR_CSCOUNT	GENMASK(15, 0)
 #define FMC2_PCSCNTR_CNTBEN(x)	BIT((x) + 16)
 
+/* Register: FMC2_CFGR */
+#define FMC2_CFGR_CLKDIV	GENMASK(19, 16)
+#define FMC2_CFGR_CCLKEN	BIT(20)
+#define FMC2_CFGR_FMC2EN	BIT(31)
+
+/* Register: FMC2_SR */
+#define FMC2_SR_ISOST		GENMASK(1, 0)
+
+/* Register: FMC2_CIDCFGR */
+#define FMC2_CIDCFGR_CFEN	BIT(0)
+#define FMC2_CIDCFGR_SEMEN	BIT(1)
+#define FMC2_CIDCFGR_SCID	GENMASK(6, 4)
+#define FMC2_CIDCFGR_SEMWLC1	BIT(17)
+
+/* Register: FMC2_SEMCR */
+#define FMC2_SEMCR_SEM_MUTEX	BIT(0)
+#define FMC2_SEMCR_SEMCID	GENMASK(6, 4)
+
 #define FMC2_MAX_EBI_CE	4
 #define FMC2_MAX_BANKS		5
+#define FMC2_MAX_RESOURCES	6
+#define FMC2_CID1		1
 
 #define FMC2_BCR_CPSIZE_0	0x0
 #define FMC2_BCR_CPSIZE_128	0x1
···
 #define FMC2_BCR_MTYP_SRAM	0x0
 #define FMC2_BCR_MTYP_PSRAM	0x1
 #define FMC2_BCR_MTYP_NOR	0x2
+
+#define FMC2_BCR_CSCOUNT_0	0x0
+#define FMC2_BCR_CSCOUNT_1	0x1
+#define FMC2_BCR_CSCOUNT_64	0x2
+#define FMC2_BCR_CSCOUNT_256	0x3
 
 #define FMC2_BXTR_EXTMOD_A	0x0
 #define FMC2_BXTR_EXTMOD_B	0x1
···
 #define FMC2_BTR_CLKDIV_MAX	0xf
 #define FMC2_BTR_DATLAT_MAX	0xf
 #define FMC2_PCSCNTR_CSCOUNT_MAX	0xff
+#define FMC2_CFGR_CLKDIV_MAX	0xf
 
 enum stm32_fmc2_ebi_bank {
 	FMC2_EBI1 = 0,
···
 	FMC2_REG_BCR = 1,
 	FMC2_REG_BTR,
 	FMC2_REG_BWTR,
-	FMC2_REG_PCSCNTR
+	FMC2_REG_PCSCNTR,
+	FMC2_REG_CFGR
 };
 
 enum stm32_fmc2_ebi_transaction_type {
···
 	FMC2_CPSIZE_1024 = 1024
 };
 
+enum stm32_fmc2_ebi_cscount {
+	FMC2_CSCOUNT_0 = 0,
+	FMC2_CSCOUNT_1 = 1,
+	FMC2_CSCOUNT_64 = 64,
+	FMC2_CSCOUNT_256 = 256
+};
+
+struct stm32_fmc2_ebi;
+
+struct stm32_fmc2_ebi_data {
+	const struct stm32_fmc2_prop *child_props;
+	unsigned int nb_child_props;
+	u32 fmc2_enable_reg;
+	u32 fmc2_enable_bit;
+	int (*nwait_used_by_ctrls)(struct stm32_fmc2_ebi *ebi);
+	void (*set_setup)(struct stm32_fmc2_ebi *ebi);
+	int (*save_setup)(struct stm32_fmc2_ebi *ebi);
+	int (*check_rif)(struct stm32_fmc2_ebi *ebi, u32 resource);
+	void (*put_sems)(struct stm32_fmc2_ebi *ebi);
+	void (*get_sems)(struct stm32_fmc2_ebi *ebi);
+};
+
 struct stm32_fmc2_ebi {
 	struct device *dev;
 	struct clk *clk;
 	struct regmap *regmap;
+	const struct stm32_fmc2_ebi_data *data;
 	u8 bank_assigned;
+	u8 sem_taken;
+	bool access_granted;
 
 	u32 bcr[FMC2_MAX_EBI_CE];
 	u32 btr[FMC2_MAX_EBI_CE];
 	u32 bwtr[FMC2_MAX_EBI_CE];
 	u32 pcscntr;
+	u32 cfgr;
 };
 
 /*
···
 				 int cs)
 {
 	u32 bcr;
+	int ret;
 
-	regmap_read(ebi->regmap, FMC2_BCR(cs), &bcr);
+	ret = regmap_read(ebi->regmap, FMC2_BCR(cs), &bcr);
+	if (ret)
+		return ret;
 
 	if (bcr & FMC2_BCR_MTYP)
 		return 0;
···
 				 int cs)
 {
 	u32 bcr, val = FIELD_PREP(FMC2_BCR_MTYP, FMC2_BCR_MTYP_NOR);
+	int ret;
 
-	regmap_read(ebi->regmap, FMC2_BCR(cs), &bcr);
+	ret = regmap_read(ebi->regmap, FMC2_BCR(cs), &bcr);
+	if (ret)
+		return ret;
 
 	if ((bcr & FMC2_BCR_MTYP) == val && bcr & FMC2_BCR_BURSTEN)
 		return 0;
···
 				 int cs)
 {
 	u32 bcr;
+	int ret;
 
-	regmap_read(ebi->regmap, FMC2_BCR(cs), &bcr);
+	ret = regmap_read(ebi->regmap, FMC2_BCR(cs), &bcr);
+	if (ret)
+		return ret;
 
 	if (bcr & FMC2_BCR_BURSTEN)
 		return 0;
···
 	return -EINVAL;
 }
 
+static int stm32_fmc2_ebi_mp25_check_cclk(struct stm32_fmc2_ebi *ebi,
+					  const struct stm32_fmc2_prop *prop,
+					  int cs)
+{
+	if (!ebi->access_granted)
+		return -EACCES;
+
+	return stm32_fmc2_ebi_check_sync_trans(ebi, prop, cs);
+}
+
+static int stm32_fmc2_ebi_mp25_check_clk_period(struct stm32_fmc2_ebi *ebi,
+						const struct stm32_fmc2_prop *prop,
+						int cs)
+{
+	u32 cfgr;
+	int ret;
+
+	ret = regmap_read(ebi->regmap, FMC2_CFGR, &cfgr);
+	if (ret)
+		return ret;
+
+	if (cfgr & FMC2_CFGR_CCLKEN && !ebi->access_granted)
+		return -EACCES;
+
+	return stm32_fmc2_ebi_check_sync_trans(ebi, prop, cs);
+}
+
 static int stm32_fmc2_ebi_check_async_trans(struct stm32_fmc2_ebi *ebi,
 					    const struct stm32_fmc2_prop *prop,
 					    int cs)
 {
 	u32 bcr;
+	int ret;
 
-	regmap_read(ebi->regmap, FMC2_BCR(cs), &bcr);
+	ret = regmap_read(ebi->regmap, FMC2_BCR(cs), &bcr);
+	if (ret)
+		return ret;
 
 	if (!(bcr & FMC2_BCR_BURSTEN) || !(bcr & FMC2_BCR_CBURSTRW))
 		return 0;
···
 				 int cs)
 {
 	u32 bcr, val = FIELD_PREP(FMC2_BCR_MTYP, FMC2_BCR_MTYP_PSRAM);
+	int ret;
 
-	regmap_read(ebi->regmap, FMC2_BCR(cs), &bcr);
+	ret = regmap_read(ebi->regmap, FMC2_BCR(cs), &bcr);
+	if (ret)
+		return ret;
 
 	if ((bcr & FMC2_BCR_MTYP) == val && bcr & FMC2_BCR_BURSTEN)
 		return 0;
···
 				 int cs)
 {
 	u32 bcr, bxtr, val = FIELD_PREP(FMC2_BXTR_ACCMOD, FMC2_BXTR_EXTMOD_D);
+	int ret;
 
-	regmap_read(ebi->regmap, FMC2_BCR(cs), &bcr);
+	ret = regmap_read(ebi->regmap, FMC2_BCR(cs), &bcr);
+	if (ret)
+		return ret;
+
 	if (prop->reg_type == FMC2_REG_BWTR)
-		regmap_read(ebi->regmap, FMC2_BWTR(cs), &bxtr);
+		ret = regmap_read(ebi->regmap, FMC2_BWTR(cs), &bxtr);
 	else
-		regmap_read(ebi->regmap, FMC2_BTR(cs), &bxtr);
+		ret = regmap_read(ebi->regmap, FMC2_BTR(cs), &bxtr);
+	if (ret)
+		return ret;
 
 	if ((!(bcr & FMC2_BCR_BURSTEN) || !(bcr & FMC2_BCR_CBURSTRW)) &&
 	    ((bxtr & FMC2_BXTR_ACCMOD) == val || bcr & FMC2_BCR_MUXEN))
···
 				 int cs)
 {
 	u32 bcr, bcr1;
+	int ret;
 
-	regmap_read(ebi->regmap, FMC2_BCR(cs), &bcr);
-	if (cs)
-		regmap_read(ebi->regmap, FMC2_BCR1, &bcr1);
-	else
+	ret = regmap_read(ebi->regmap, FMC2_BCR(cs), &bcr);
+	if (ret)
+		return ret;
+
+	if (cs) {
+		ret = regmap_read(ebi->regmap, FMC2_BCR1, &bcr1);
+		if (ret)
+			return ret;
+	} else {
 		bcr1 = bcr;
+	}
 
 	if (bcr & FMC2_BCR_BURSTEN && (!cs || !(bcr1 & FMC2_BCR1_CCLKEN)))
 		return 0;
···
 {
 	u32 nb_clk_cycles = stm32_fmc2_ebi_ns_to_clock_cycles(ebi, cs, setup);
 	u32 bcr, btr, clk_period;
+	int ret;
 
-	regmap_read(ebi->regmap, FMC2_BCR1, &bcr);
+	ret = regmap_read(ebi->regmap, FMC2_BCR1, &bcr);
+	if (ret)
+		return ret;
+
 	if (bcr & FMC2_BCR1_CCLKEN || !cs)
-		regmap_read(ebi->regmap, FMC2_BTR1, &btr);
+		ret = regmap_read(ebi->regmap, FMC2_BTR1, &btr);
 	else
-		regmap_read(ebi->regmap, FMC2_BTR(cs), &btr);
+		ret = regmap_read(ebi->regmap, FMC2_BTR(cs), &btr);
+	if (ret)
+		return ret;
 
 	clk_period = FIELD_GET(FMC2_BTR_CLKDIV, btr) + 1;
+
+	return DIV_ROUND_UP(nb_clk_cycles, clk_period);
+}
+
+static u32 stm32_fmc2_ebi_mp25_ns_to_clk_period(struct stm32_fmc2_ebi *ebi,
+						int cs, u32 setup)
+{
+	u32 nb_clk_cycles = stm32_fmc2_ebi_ns_to_clock_cycles(ebi, cs, setup);
+	u32 cfgr, btr, clk_period;
+	int ret;
+
+	ret = regmap_read(ebi->regmap, FMC2_CFGR, &cfgr);
+	if (ret)
+		return ret;
+
+	if (cfgr & FMC2_CFGR_CCLKEN) {
+		clk_period = FIELD_GET(FMC2_CFGR_CLKDIV, cfgr) + 1;
+	} else {
+		ret = regmap_read(ebi->regmap, FMC2_BTR(cs), &btr);
+		if (ret)
+			return ret;
+
+		clk_period = FIELD_GET(FMC2_BTR_CLKDIV, btr) + 1;
+	}
 
 	return DIV_ROUND_UP(nb_clk_cycles, clk_period);
 }
···
 		break;
 	case FMC2_REG_PCSCNTR:
 		*reg = FMC2_PCSCNTR;
+		break;
+	case FMC2_REG_CFGR:
+		*reg = FMC2_CFGR;
 		break;
 	default:
 		return -EINVAL;
···
 	if (ret)
 		return ret;
 
-	regmap_read(ebi->regmap, FMC2_BCR(cs), &bcr);
+	ret = regmap_read(ebi->regmap, FMC2_BCR(cs), &bcr);
+	if (ret)
+		return ret;
+
 	if (prop->reg_type == FMC2_REG_BWTR)
-		regmap_read(ebi->regmap, FMC2_BWTR(cs), &bxtr);
+		ret = regmap_read(ebi->regmap, FMC2_BWTR(cs), &bxtr);
 	else
-		regmap_read(ebi->regmap, FMC2_BTR(cs), &bxtr);
+		ret = regmap_read(ebi->regmap, FMC2_BTR(cs), &bxtr);
+	if (ret)
+		return ret;
 
 	if ((bxtr & FMC2_BXTR_ACCMOD) == val || bcr & FMC2_BCR_MUXEN)
 		val = clamp_val(setup, 1, FMC2_BXTR_ADDSET_MAX);
···
 	return 0;
 }
 
+static int stm32_fmc2_ebi_mp25_set_clk_period(struct stm32_fmc2_ebi *ebi,
+					      const struct stm32_fmc2_prop *prop,
+					      int cs, u32 setup)
+{
+	u32 val, cfgr;
+	int ret;
+
+	ret = regmap_read(ebi->regmap, FMC2_CFGR, &cfgr);
+	if (ret)
+		return ret;
+
+	if (cfgr & FMC2_CFGR_CCLKEN) {
+		val = setup ? clamp_val(setup - 1, 1, FMC2_CFGR_CLKDIV_MAX) : 1;
+		val = FIELD_PREP(FMC2_CFGR_CLKDIV, val);
+		regmap_update_bits(ebi->regmap, FMC2_CFGR, FMC2_CFGR_CLKDIV, val);
+	} else {
+		val = setup ? clamp_val(setup - 1, 1, FMC2_BTR_CLKDIV_MAX) : 1;
+		val = FIELD_PREP(FMC2_BTR_CLKDIV, val);
+		regmap_update_bits(ebi->regmap, FMC2_BTR(cs), FMC2_BTR_CLKDIV, val);
+	}
+
+	return 0;
+}
+
 static int stm32_fmc2_ebi_set_data_latency(struct stm32_fmc2_ebi *ebi,
 					   const struct stm32_fmc2_prop *prop,
 					   int cs, u32 setup)
···
 				 int cs, u32 setup)
 {
 	u32 old_val, new_val, pcscntr;
+	int ret;
 
 	if (setup < 1)
 		return 0;
 
-	regmap_read(ebi->regmap, FMC2_PCSCNTR, &pcscntr);
+	ret = regmap_read(ebi->regmap, FMC2_PCSCNTR, &pcscntr);
+	if (ret)
+		return ret;
 
 	/* Enable counter for the bank */
 	regmap_update_bits(ebi->regmap, FMC2_PCSCNTR,
···
 	new_val = FIELD_PREP(FMC2_PCSCNTR_CSCOUNT, new_val);
 	regmap_update_bits(ebi->regmap, FMC2_PCSCNTR,
 			   FMC2_PCSCNTR_CSCOUNT, new_val);
+
+	return 0;
+}
+
+static int stm32_fmc2_ebi_mp25_set_max_low_pulse(struct stm32_fmc2_ebi *ebi,
+						 const struct stm32_fmc2_prop *prop,
+						 int cs, u32 setup)
+{
+	u32 val;
+
+	if (setup == FMC2_CSCOUNT_0)
+		val = FIELD_PREP(FMC2_BCR_CSCOUNT, FMC2_BCR_CSCOUNT_0);
+	else if (setup == FMC2_CSCOUNT_1)
+		val = FIELD_PREP(FMC2_BCR_CSCOUNT, FMC2_BCR_CSCOUNT_1);
+	else if (setup <= FMC2_CSCOUNT_64)
+		val = FIELD_PREP(FMC2_BCR_CSCOUNT, FMC2_BCR_CSCOUNT_64);
+	else
+		val = FIELD_PREP(FMC2_BCR_CSCOUNT, FMC2_BCR_CSCOUNT_256);
+
+	regmap_update_bits(ebi->regmap, FMC2_BCR(cs),
+			   FMC2_BCR_CSCOUNT, val);
 
 	return 0;
 }
···
 	},
 };
 
+static const struct stm32_fmc2_prop stm32_fmc2_mp25_child_props[] = {
+	/* st,fmc2-ebi-cs-trans-type must be the first property */
+	{
+		.name = "st,fmc2-ebi-cs-transaction-type",
+		.mprop = true,
+		.set = stm32_fmc2_ebi_set_trans_type,
+	},
+	{
+		.name = "st,fmc2-ebi-cs-cclk-enable",
+		.bprop = true,
+		.reg_type = FMC2_REG_CFGR,
+		.reg_mask = FMC2_CFGR_CCLKEN,
+		.check = stm32_fmc2_ebi_mp25_check_cclk,
+		.set = stm32_fmc2_ebi_set_bit_field,
+	},
+	{
+		.name = "st,fmc2-ebi-cs-mux-enable",
+		.bprop = true,
+		.reg_type = FMC2_REG_BCR,
+		.reg_mask = FMC2_BCR_MUXEN,
+		.check = stm32_fmc2_ebi_check_mux,
+		.set = stm32_fmc2_ebi_set_bit_field,
+	},
+	{
+		.name = "st,fmc2-ebi-cs-buswidth",
+		.reset_val = FMC2_BUSWIDTH_16,
+		.set = stm32_fmc2_ebi_set_buswidth,
+	},
+	{
+		.name = "st,fmc2-ebi-cs-waitpol-high",
+		.bprop = true,
+		.reg_type = FMC2_REG_BCR,
+		.reg_mask = FMC2_BCR_WAITPOL,
+		.set = stm32_fmc2_ebi_set_bit_field,
+	},
+	{
+		.name = "st,fmc2-ebi-cs-waitcfg-enable",
+		.bprop = true,
+		.reg_type = FMC2_REG_BCR,
+		.reg_mask = FMC2_BCR_WAITCFG,
+		.check = stm32_fmc2_ebi_check_waitcfg,
+		.set = stm32_fmc2_ebi_set_bit_field,
+	},
+	{
+		.name = "st,fmc2-ebi-cs-wait-enable",
+		.bprop = true,
+		.reg_type = FMC2_REG_BCR,
+		.reg_mask = FMC2_BCR_WAITEN,
+		.check = stm32_fmc2_ebi_check_sync_trans,
+		.set = stm32_fmc2_ebi_set_bit_field,
+	},
+	{
+		.name = "st,fmc2-ebi-cs-asyncwait-enable",
+		.bprop = true,
+		.reg_type = FMC2_REG_BCR,
+		.reg_mask = FMC2_BCR_ASYNCWAIT,
+		.check = stm32_fmc2_ebi_check_async_trans,
+		.set = stm32_fmc2_ebi_set_bit_field,
+	},
+	{
+		.name = "st,fmc2-ebi-cs-cpsize",
+		.check = stm32_fmc2_ebi_check_cpsize,
+		.set = stm32_fmc2_ebi_set_cpsize,
+	},
+	{
+		.name = "st,fmc2-ebi-cs-byte-lane-setup-ns",
+		.calculate = stm32_fmc2_ebi_ns_to_clock_cycles,
+		.set = stm32_fmc2_ebi_set_bl_setup,
+	},
+	{
+		.name = "st,fmc2-ebi-cs-address-setup-ns",
+		.reg_type = FMC2_REG_BTR,
+		.reset_val = FMC2_BXTR_ADDSET_MAX,
+		.check = stm32_fmc2_ebi_check_async_trans,
+		.calculate = stm32_fmc2_ebi_ns_to_clock_cycles,
+		.set = stm32_fmc2_ebi_set_address_setup,
+	},
+	{
+		.name = "st,fmc2-ebi-cs-address-hold-ns",
+		.reg_type = FMC2_REG_BTR,
+		.reset_val = FMC2_BXTR_ADDHLD_MAX,
+		.check = stm32_fmc2_ebi_check_address_hold,
+		.calculate = stm32_fmc2_ebi_ns_to_clock_cycles,
+		.set = stm32_fmc2_ebi_set_address_hold,
+	},
+	{
+		.name = "st,fmc2-ebi-cs-data-setup-ns",
+		.reg_type = FMC2_REG_BTR,
+		.reset_val = FMC2_BXTR_DATAST_MAX,
+		.check = stm32_fmc2_ebi_check_async_trans,
+		.calculate = stm32_fmc2_ebi_ns_to_clock_cycles,
+		.set = stm32_fmc2_ebi_set_data_setup,
+	},
+	{
+		.name = "st,fmc2-ebi-cs-bus-turnaround-ns",
+		.reg_type = FMC2_REG_BTR,
+		.reset_val = FMC2_BXTR_BUSTURN_MAX + 1,
+		.calculate = stm32_fmc2_ebi_ns_to_clock_cycles,
+		.set = stm32_fmc2_ebi_set_bus_turnaround,
+	},
+	{
+		.name = "st,fmc2-ebi-cs-data-hold-ns",
+		.reg_type = FMC2_REG_BTR,
+		.check = stm32_fmc2_ebi_check_async_trans,
+		.calculate = stm32_fmc2_ebi_ns_to_clock_cycles,
+		.set = stm32_fmc2_ebi_set_data_hold,
+	},
+	{
+		.name = "st,fmc2-ebi-cs-clk-period-ns",
+		.reset_val = FMC2_CFGR_CLKDIV_MAX + 1,
+		.check = stm32_fmc2_ebi_mp25_check_clk_period,
+		.calculate = stm32_fmc2_ebi_ns_to_clock_cycles,
+		.set = stm32_fmc2_ebi_mp25_set_clk_period,
+	},
+	{
+		.name = "st,fmc2-ebi-cs-data-latency-ns",
+		.check = stm32_fmc2_ebi_check_sync_trans,
+		.calculate = stm32_fmc2_ebi_mp25_ns_to_clk_period,
+		.set = stm32_fmc2_ebi_set_data_latency,
+	},
+	{
+		.name = "st,fmc2-ebi-cs-write-address-setup-ns",
+		.reg_type = FMC2_REG_BWTR,
+		.reset_val = FMC2_BXTR_ADDSET_MAX,
+		.check = stm32_fmc2_ebi_check_async_trans,
+		.calculate = stm32_fmc2_ebi_ns_to_clock_cycles,
+		.set = stm32_fmc2_ebi_set_address_setup,
+	},
+	{
+		.name = "st,fmc2-ebi-cs-write-address-hold-ns",
+		.reg_type = FMC2_REG_BWTR,
+		.reset_val = FMC2_BXTR_ADDHLD_MAX,
+		.check = stm32_fmc2_ebi_check_address_hold,
+		.calculate = stm32_fmc2_ebi_ns_to_clock_cycles,
+		.set = stm32_fmc2_ebi_set_address_hold,
+	},
+	{
+		.name = "st,fmc2-ebi-cs-write-data-setup-ns",
+		.reg_type = FMC2_REG_BWTR,
+		.reset_val = FMC2_BXTR_DATAST_MAX,
+		.check = stm32_fmc2_ebi_check_async_trans,
+		.calculate = stm32_fmc2_ebi_ns_to_clock_cycles,
+		.set = stm32_fmc2_ebi_set_data_setup,
+	},
+	{
+		.name = "st,fmc2-ebi-cs-write-bus-turnaround-ns",
+		.reg_type = FMC2_REG_BWTR,
+		.reset_val = FMC2_BXTR_BUSTURN_MAX + 1,
+		.calculate = stm32_fmc2_ebi_ns_to_clock_cycles,
+		.set = stm32_fmc2_ebi_set_bus_turnaround,
+	},
+	{
+		.name = "st,fmc2-ebi-cs-write-data-hold-ns",
+		.reg_type = FMC2_REG_BWTR,
+		.check = stm32_fmc2_ebi_check_async_trans,
+		.calculate = stm32_fmc2_ebi_ns_to_clock_cycles,
+		.set = stm32_fmc2_ebi_set_data_hold,
+	},
+	{
+		.name = "st,fmc2-ebi-cs-max-low-pulse-ns",
+		.calculate = stm32_fmc2_ebi_ns_to_clock_cycles,
+		.set = stm32_fmc2_ebi_mp25_set_max_low_pulse,
+	},
+};
+
+static int stm32_fmc2_ebi_mp25_check_rif(struct stm32_fmc2_ebi *ebi, u32 resource)
+{
+	u32 seccfgr, cidcfgr, semcr;
+	int cid, ret;
+
+	if (resource >= FMC2_MAX_RESOURCES)
+		return -EINVAL;
+
+	ret = regmap_read(ebi->regmap, FMC2_SECCFGR, &seccfgr);
+	if (ret)
+		return ret;
+
+	if (seccfgr & BIT(resource)) {
+		if (resource)
+			dev_err(ebi->dev, "resource %d is configured as secure\n",
+				resource);
+
+		return -EACCES;
+	}
+
+	ret = regmap_read(ebi->regmap, FMC2_CIDCFGR(resource), &cidcfgr);
+	if (ret)
+		return ret;
+
+	if (!(cidcfgr & FMC2_CIDCFGR_CFEN))
+		/* CID filtering is turned off: access granted */
+		return 0;
+
+	if (!(cidcfgr & FMC2_CIDCFGR_SEMEN)) {
+		/* Static CID mode */
+		cid = FIELD_GET(FMC2_CIDCFGR_SCID, cidcfgr);
+		if (cid != FMC2_CID1) {
+			if (resource)
+				dev_err(ebi->dev, "static CID%d set for resource %d\n",
+					cid, resource);
+
+			return -EACCES;
+		}
+
+		return 0;
+	}
+
+	/* Pass-list with semaphore mode */
+	if (!(cidcfgr & FMC2_CIDCFGR_SEMWLC1)) {
+		if (resource)
+			dev_err(ebi->dev, "CID1 is block-listed for resource %d\n",
+				resource);
+
+		return -EACCES;
+	}
+
+	ret = regmap_read(ebi->regmap, FMC2_SEMCR(resource), &semcr);
+	if (ret)
+		return ret;
+
+	if (!(semcr & FMC2_SEMCR_SEM_MUTEX)) {
+		regmap_update_bits(ebi->regmap, FMC2_SEMCR(resource),
+				   FMC2_SEMCR_SEM_MUTEX, FMC2_SEMCR_SEM_MUTEX);
+
+		ret = regmap_read(ebi->regmap, FMC2_SEMCR(resource), &semcr);
+		if (ret)
+			return ret;
+	}
+
+	cid = FIELD_GET(FMC2_SEMCR_SEMCID, semcr);
+	if (cid != FMC2_CID1) {
+		if (resource)
+			dev_err(ebi->dev, "resource %d is already used by CID%d\n",
+				resource, cid);
+
+		return -EACCES;
+	}
+
+	ebi->sem_taken |= BIT(resource);
+
+	return 0;
+}
+
+static void stm32_fmc2_ebi_mp25_put_sems(struct stm32_fmc2_ebi *ebi)
+{
+	unsigned int resource;
+
+	for (resource = 0; resource < FMC2_MAX_RESOURCES; resource++) {
+		if (!(ebi->sem_taken & BIT(resource)))
+			continue;
+
+		regmap_update_bits(ebi->regmap, FMC2_SEMCR(resource),
+				   FMC2_SEMCR_SEM_MUTEX, 0);
+	}
+}
+
+static void stm32_fmc2_ebi_mp25_get_sems(struct stm32_fmc2_ebi *ebi)
+{
+	unsigned int resource;
+
+	for (resource = 0; resource < FMC2_MAX_RESOURCES; resource++) {
+		if (!(ebi->sem_taken & BIT(resource)))
+			continue;
+
+		regmap_update_bits(ebi->regmap, FMC2_SEMCR(resource),
+				   FMC2_SEMCR_SEM_MUTEX, FMC2_SEMCR_SEM_MUTEX);
+	}
+}
+
 static int stm32_fmc2_ebi_parse_prop(struct stm32_fmc2_ebi *ebi,
 				     struct device_node *dev_node,
 				     const struct stm32_fmc2_prop *prop,
···
 	regmap_update_bits(ebi->regmap, FMC2_BCR(cs), FMC2_BCR_MBKEN, 0);
 }
 
-static void stm32_fmc2_ebi_save_setup(struct stm32_fmc2_ebi *ebi)
+static int stm32_fmc2_ebi_save_setup(struct stm32_fmc2_ebi *ebi)
 {
 	unsigned int cs;
+	int ret;
 
 	for (cs = 0; cs < FMC2_MAX_EBI_CE; cs++) {
-		regmap_read(ebi->regmap, FMC2_BCR(cs), &ebi->bcr[cs]);
-		regmap_read(ebi->regmap, FMC2_BTR(cs), &ebi->btr[cs]);
-		regmap_read(ebi->regmap, FMC2_BWTR(cs), &ebi->bwtr[cs]);
+		if (!(ebi->bank_assigned & BIT(cs)))
+			continue;
+
+		ret = regmap_read(ebi->regmap, FMC2_BCR(cs), &ebi->bcr[cs]);
+		ret |= regmap_read(ebi->regmap, FMC2_BTR(cs), &ebi->btr[cs]);
+		ret |= regmap_read(ebi->regmap, FMC2_BWTR(cs), &ebi->bwtr[cs]);
+		if (ret)
+			return ret;
 	}
 
-	regmap_read(ebi->regmap, FMC2_PCSCNTR, &ebi->pcscntr);
+	return 0;
+}
+
+static int stm32_fmc2_ebi_mp1_save_setup(struct stm32_fmc2_ebi *ebi)
+{
+	int ret;
+
+	ret = stm32_fmc2_ebi_save_setup(ebi);
+	if (ret)
+		return ret;
+
+	return regmap_read(ebi->regmap, FMC2_PCSCNTR, &ebi->pcscntr);
+}
+
+static int stm32_fmc2_ebi_mp25_save_setup(struct stm32_fmc2_ebi *ebi)
+{
+	int ret;
+
+	ret = stm32_fmc2_ebi_save_setup(ebi);
+	if (ret)
+		return ret;
+
+	if (ebi->access_granted)
+		ret = regmap_read(ebi->regmap, FMC2_CFGR, &ebi->cfgr);
+
+	return ret;
 }
 
 static void stm32_fmc2_ebi_set_setup(struct stm32_fmc2_ebi *ebi)
···
 	unsigned int cs;
 
 	for (cs = 0; cs < FMC2_MAX_EBI_CE; cs++) {
+		if (!(ebi->bank_assigned & BIT(cs)))
+			continue;
+
 		regmap_write(ebi->regmap, FMC2_BCR(cs), ebi->bcr[cs]);
 		regmap_write(ebi->regmap, FMC2_BTR(cs), ebi->btr[cs]);
 		regmap_write(ebi->regmap, FMC2_BWTR(cs), ebi->bwtr[cs]);
 	}
+}
 
+static void stm32_fmc2_ebi_mp1_set_setup(struct stm32_fmc2_ebi *ebi)
+{
+	stm32_fmc2_ebi_set_setup(ebi);
 	regmap_write(ebi->regmap, FMC2_PCSCNTR, ebi->pcscntr);
+}
+
+static void stm32_fmc2_ebi_mp25_set_setup(struct stm32_fmc2_ebi *ebi)
+{
+	stm32_fmc2_ebi_set_setup(ebi);
+
+	if (ebi->access_granted)
+		regmap_write(ebi->regmap, FMC2_CFGR, ebi->cfgr);
 }
 
 static void stm32_fmc2_ebi_disable_banks(struct stm32_fmc2_ebi *ebi)
···
 }
 
 /* NWAIT signal can not be connected to EBI controller and NAND controller */
-static bool stm32_fmc2_ebi_nwait_used_by_ctrls(struct stm32_fmc2_ebi *ebi)
+static int stm32_fmc2_ebi_nwait_used_by_ctrls(struct stm32_fmc2_ebi *ebi)
 {
+	struct device *dev = ebi->dev;
 	unsigned int cs;
 	u32 bcr;
+	int ret;
 
 	for (cs = 0; cs < FMC2_MAX_EBI_CE; cs++) {
 		if (!(ebi->bank_assigned & BIT(cs)))
 			continue;
 
-		regmap_read(ebi->regmap, FMC2_BCR(cs), &bcr);
+		ret = regmap_read(ebi->regmap, FMC2_BCR(cs), &bcr);
+		if (ret)
+			return ret;
+
 		if ((bcr & FMC2_BCR_WAITEN || bcr & FMC2_BCR_ASYNCWAIT) &&
-		    ebi->bank_assigned & BIT(FMC2_NAND))
-			return true;
+		    ebi->bank_assigned & BIT(FMC2_NAND)) {
+			dev_err(dev, "NWAIT signal connected to EBI and NAND controllers\n");
+			return -EINVAL;
+		}
 	}
 
-	return false;
+	return 0;
 }
 
 static void stm32_fmc2_ebi_enable(struct stm32_fmc2_ebi *ebi)
 {
-	regmap_update_bits(ebi->regmap, FMC2_BCR1,
-			   FMC2_BCR1_FMC2EN, FMC2_BCR1_FMC2EN);
+	if (!ebi->access_granted)
+		return;
+
+	regmap_update_bits(ebi->regmap, ebi->data->fmc2_enable_reg,
+			   ebi->data->fmc2_enable_bit,
+			   ebi->data->fmc2_enable_bit);
 }
 
 static void stm32_fmc2_ebi_disable(struct stm32_fmc2_ebi *ebi)
 {
-	regmap_update_bits(ebi->regmap, FMC2_BCR1, FMC2_BCR1_FMC2EN, 0);
+	if (!ebi->access_granted)
+		return;
+
+	regmap_update_bits(ebi->regmap, ebi->data->fmc2_enable_reg,
+			   ebi->data->fmc2_enable_bit, 0);
 }
 
 static int stm32_fmc2_ebi_setup_cs(struct stm32_fmc2_ebi *ebi,
···
 
 	stm32_fmc2_ebi_disable_bank(ebi, cs);
 
-	for (i = 0; i < ARRAY_SIZE(stm32_fmc2_child_props); i++) {
-		const struct stm32_fmc2_prop *p = &stm32_fmc2_child_props[i];
+	for (i = 0; i < ebi->data->nb_child_props; i++) {
+		const struct stm32_fmc2_prop *p = &ebi->data->child_props[i];
 
 		ret = stm32_fmc2_ebi_parse_prop(ebi, dev_node, p, cs);
 		if (ret) {
···
 			return -EINVAL;
 		}
 
+		if (ebi->data->check_rif) {
+			ret = ebi->data->check_rif(ebi, bank + 1);
+			if (ret) {
+				dev_err(dev, "bank access failed: %d\n", bank);
+				of_node_put(child);
+				return ret;
+			}
+		}
+
 		if (bank < FMC2_MAX_EBI_CE) {
 			ret = stm32_fmc2_ebi_setup_cs(ebi, child, bank);
 			if (ret) {
···
 		return -ENODEV;
 	}
 
-	if (stm32_fmc2_ebi_nwait_used_by_ctrls(ebi)) {
-		dev_err(dev, "NWAIT signal connected to EBI and NAND controllers\n");
-		return -EINVAL;
+	if (ebi->data->nwait_used_by_ctrls) {
+		ret = ebi->data->nwait_used_by_ctrls(ebi);
+		if (ret)
+			return ret;
 	}
 
 	stm32_fmc2_ebi_enable(ebi);
···
 		return -ENOMEM;
 
 	ebi->dev = dev;
+	platform_set_drvdata(pdev, ebi);
+
+	ebi->data = of_device_get_match_data(dev);
+	if (!ebi->data)
+		return -EINVAL;
 
 	ebi->regmap = device_node_to_regmap(dev->of_node);
 	if (IS_ERR(ebi->regmap))
···
 	if (PTR_ERR(rstc) == -EPROBE_DEFER)
 		return -EPROBE_DEFER;
 
-	ret = clk_prepare_enable(ebi->clk);
+	ret = devm_pm_runtime_enable(dev);
 	if (ret)
+		return ret;
+
+	ret = pm_runtime_resume_and_get(dev);
+	if (ret < 0)
 		return ret;
 
 	if (!IS_ERR(rstc)) {
···
 		reset_control_deassert(rstc);
 	}
 
+	/* Check if CFGR register can be modified */
+	ebi->access_granted = true;
+	if (ebi->data->check_rif) {
+		ret = ebi->data->check_rif(ebi, 0);
+		if (ret) {
+			u32 sr;
+
+			ebi->access_granted = false;
+
+			ret = regmap_read(ebi->regmap, FMC2_SR, &sr);
+			if (ret)
+				goto err_release;
+
+			/* In case of CFGR is secure, just check that the FMC2 is enabled */
+			if (sr & FMC2_SR_ISOST) {
+				dev_err(dev, "FMC2 is not ready to be used.\n");
+				ret = -EACCES;
+				goto err_release;
+			}
+		}
+	}
+
 	ret = stm32_fmc2_ebi_parse_dt(ebi);
 	if (ret)
 		goto err_release;
 
-	stm32_fmc2_ebi_save_setup(ebi);
-	platform_set_drvdata(pdev, ebi);
+	ret =
ebi->data->save_setup(ebi); 1159 + if (ret) 1160 + goto err_release; 1690 1161 1691 1162 return 0; 1692 1163 1693 1164 err_release: 1694 1165 stm32_fmc2_ebi_disable_banks(ebi); 1695 1166 stm32_fmc2_ebi_disable(ebi); 1696 - clk_disable_unprepare(ebi->clk); 1167 + if (ebi->data->put_sems) 1168 + ebi->data->put_sems(ebi); 1169 + pm_runtime_put_sync_suspend(dev); 1697 1170 1698 1171 return ret; 1699 1172 } ··· 1730 1153 of_platform_depopulate(&pdev->dev); 1731 1154 stm32_fmc2_ebi_disable_banks(ebi); 1732 1155 stm32_fmc2_ebi_disable(ebi); 1156 + if (ebi->data->put_sems) 1157 + ebi->data->put_sems(ebi); 1158 + pm_runtime_put_sync_suspend(&pdev->dev); 1159 + } 1160 + 1161 + static int __maybe_unused stm32_fmc2_ebi_runtime_suspend(struct device *dev) 1162 + { 1163 + struct stm32_fmc2_ebi *ebi = dev_get_drvdata(dev); 1164 + 1733 1165 clk_disable_unprepare(ebi->clk); 1166 + 1167 + return 0; 1168 + } 1169 + 1170 + static int __maybe_unused stm32_fmc2_ebi_runtime_resume(struct device *dev) 1171 + { 1172 + struct stm32_fmc2_ebi *ebi = dev_get_drvdata(dev); 1173 + 1174 + return clk_prepare_enable(ebi->clk); 1734 1175 } 1735 1176 1736 1177 static int __maybe_unused stm32_fmc2_ebi_suspend(struct device *dev) ··· 1756 1161 struct stm32_fmc2_ebi *ebi = dev_get_drvdata(dev); 1757 1162 1758 1163 stm32_fmc2_ebi_disable(ebi); 1759 - clk_disable_unprepare(ebi->clk); 1164 + if (ebi->data->put_sems) 1165 + ebi->data->put_sems(ebi); 1166 + pm_runtime_put_sync_suspend(dev); 1760 1167 pinctrl_pm_select_sleep_state(dev); 1761 1168 1762 1169 return 0; ··· 1771 1174 1772 1175 pinctrl_pm_select_default_state(dev); 1773 1176 1774 - ret = clk_prepare_enable(ebi->clk); 1775 - if (ret) 1177 + ret = pm_runtime_resume_and_get(dev); 1178 + if (ret < 0) 1776 1179 return ret; 1777 1180 1778 - stm32_fmc2_ebi_set_setup(ebi); 1181 + if (ebi->data->get_sems) 1182 + ebi->data->get_sems(ebi); 1183 + ebi->data->set_setup(ebi); 1779 1184 stm32_fmc2_ebi_enable(ebi); 1780 1185 1781 1186 return 0; 1782 1187 } 1783 
1188 1784 - static SIMPLE_DEV_PM_OPS(stm32_fmc2_ebi_pm_ops, stm32_fmc2_ebi_suspend, 1785 - stm32_fmc2_ebi_resume); 1189 + static const struct dev_pm_ops stm32_fmc2_ebi_pm_ops = { 1190 + SET_RUNTIME_PM_OPS(stm32_fmc2_ebi_runtime_suspend, 1191 + stm32_fmc2_ebi_runtime_resume, NULL) 1192 + SET_SYSTEM_SLEEP_PM_OPS(stm32_fmc2_ebi_suspend, stm32_fmc2_ebi_resume) 1193 + }; 1194 + 1195 + static const struct stm32_fmc2_ebi_data stm32_fmc2_ebi_mp1_data = { 1196 + .child_props = stm32_fmc2_child_props, 1197 + .nb_child_props = ARRAY_SIZE(stm32_fmc2_child_props), 1198 + .fmc2_enable_reg = FMC2_BCR1, 1199 + .fmc2_enable_bit = FMC2_BCR1_FMC2EN, 1200 + .nwait_used_by_ctrls = stm32_fmc2_ebi_nwait_used_by_ctrls, 1201 + .set_setup = stm32_fmc2_ebi_mp1_set_setup, 1202 + .save_setup = stm32_fmc2_ebi_mp1_save_setup, 1203 + }; 1204 + 1205 + static const struct stm32_fmc2_ebi_data stm32_fmc2_ebi_mp25_data = { 1206 + .child_props = stm32_fmc2_mp25_child_props, 1207 + .nb_child_props = ARRAY_SIZE(stm32_fmc2_mp25_child_props), 1208 + .fmc2_enable_reg = FMC2_CFGR, 1209 + .fmc2_enable_bit = FMC2_CFGR_FMC2EN, 1210 + .set_setup = stm32_fmc2_ebi_mp25_set_setup, 1211 + .save_setup = stm32_fmc2_ebi_mp25_save_setup, 1212 + .check_rif = stm32_fmc2_ebi_mp25_check_rif, 1213 + .put_sems = stm32_fmc2_ebi_mp25_put_sems, 1214 + .get_sems = stm32_fmc2_ebi_mp25_get_sems, 1215 + }; 1786 1216 1787 1217 static const struct of_device_id stm32_fmc2_ebi_match[] = { 1788 - {.compatible = "st,stm32mp1-fmc2-ebi"}, 1218 + { 1219 + .compatible = "st,stm32mp1-fmc2-ebi", 1220 + .data = &stm32_fmc2_ebi_mp1_data, 1221 + }, 1222 + { 1223 + .compatible = "st,stm32mp25-fmc2-ebi", 1224 + .data = &stm32_fmc2_ebi_mp25_data, 1225 + }, 1789 1226 {} 1790 1227 }; 1791 1228 MODULE_DEVICE_TABLE(of, stm32_fmc2_ebi_match);
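The MP25 changes above replace hard-coded register accesses with a per-variant `stm32_fmc2_ebi_data` structure selected through the `of_device_id` `.data` pointer, with optional hooks (`check_rif`, `put_sems`, `get_sems`) left NULL on MP1. A minimal userspace sketch of that dispatch pattern — struct layout, register offsets and hook behaviour here are illustrative, not the driver's real values:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical per-variant data, mirroring the shape of
 * stm32_fmc2_ebi_data: each SoC supplies its enable register/bit,
 * a mandatory save hook and an optional RIF-check hook. */
struct variant_data {
	const char *compatible;
	unsigned int enable_reg;
	unsigned int enable_bit;
	int (*save_setup)(int *state);  /* mandatory hook */
	int (*check_rif)(int bank);     /* optional: NULL when absent */
};

static int mp1_save(int *state)  { *state = 1;  return 0; }
static int mp25_save(int *state) { *state = 25; return 0; }
static int mp25_rif(int bank)    { return bank == 0 ? 0 : -1; }

static const struct variant_data variants[] = {
	{ "st,stm32mp1-fmc2-ebi",  0x00, 1u << 31, mp1_save,  NULL },
	{ "st,stm32mp25-fmc2-ebi", 0x20, 1u << 31, mp25_save, mp25_rif },
};

/* Stand-in for of_device_get_match_data(): pick the variant by
 * compatible string, as the of_match table does in the driver. */
static const struct variant_data *get_match_data(const char *compat)
{
	size_t i;

	for (i = 0; i < sizeof(variants) / sizeof(variants[0]); i++)
		if (!strcmp(variants[i].compatible, compat))
			return &variants[i];
	return NULL;
}
```

Callers NULL-check the optional hooks before invoking them, exactly as the driver does with `if (ebi->data->check_rif)` above.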
+39 -9
drivers/memory/tegra/tegra234.c
··· 92 92 }, { 93 93 .id = TEGRA234_MEMORY_CLIENT_DLA0RDB, 94 94 .name = "dla0rdb", 95 + .bpmp_id = TEGRA_ICC_BPMP_DLA_0, 96 + .type = TEGRA_ICC_NISO, 95 97 .sid = TEGRA234_SID_NVDLA0, 96 98 .regs = { 97 99 .sid = { ··· 104 102 }, { 105 103 .id = TEGRA234_MEMORY_CLIENT_DLA0RDB1, 106 104 .name = "dla0rdb1", 105 + .bpmp_id = TEGRA_ICC_BPMP_DLA_0, 106 + .type = TEGRA_ICC_NISO, 107 107 .sid = TEGRA234_SID_NVDLA0, 108 108 .regs = { 109 109 .sid = { ··· 116 112 }, { 117 113 .id = TEGRA234_MEMORY_CLIENT_DLA0WRB, 118 114 .name = "dla0wrb", 115 + .bpmp_id = TEGRA_ICC_BPMP_DLA_0, 116 + .type = TEGRA_ICC_NISO, 119 117 .sid = TEGRA234_SID_NVDLA0, 120 118 .regs = { 121 119 .sid = { ··· 127 121 }, 128 122 }, { 129 123 .id = TEGRA234_MEMORY_CLIENT_DLA1RDB, 130 - .name = "dla0rdb", 124 + .name = "dla1rdb", 125 + .bpmp_id = TEGRA_ICC_BPMP_DLA_1, 126 + .type = TEGRA_ICC_NISO, 131 127 .sid = TEGRA234_SID_NVDLA1, 132 128 .regs = { 133 129 .sid = { ··· 415 407 }, 416 408 }, { 417 409 .id = TEGRA234_MEMORY_CLIENT_DLA1RDB1, 418 - .name = "dla0rdb1", 410 + .name = "dla1rdb1", 411 + .bpmp_id = TEGRA_ICC_BPMP_DLA_1, 412 + .type = TEGRA_ICC_NISO, 419 413 .sid = TEGRA234_SID_NVDLA1, 420 414 .regs = { 421 415 .sid = { ··· 427 417 }, 428 418 }, { 429 419 .id = TEGRA234_MEMORY_CLIENT_DLA1WRB, 430 - .name = "dla0wrb", 420 + .name = "dla1wrb", 421 + .bpmp_id = TEGRA_ICC_BPMP_DLA_1, 422 + .type = TEGRA_ICC_NISO, 431 423 .sid = TEGRA234_SID_NVDLA1, 432 424 .regs = { 433 425 .sid = { ··· 551 539 .bpmp_id = TEGRA_ICC_BPMP_NVJPG_0, 552 540 .type = TEGRA_ICC_NISO, 553 541 .sid = TEGRA234_SID_NVJPG, 554 - .regs = { 542 + .regs = { 555 543 .sid = { 556 544 .override = 0x3f8, 557 545 .security = 0x3fc, ··· 672 660 }, { 673 661 .id = TEGRA234_MEMORY_CLIENT_DLA0RDA, 674 662 .name = "dla0rda", 663 + .bpmp_id = TEGRA_ICC_BPMP_DLA_0, 664 + .type = TEGRA_ICC_NISO, 675 665 .sid = TEGRA234_SID_NVDLA0, 676 666 .regs = { 677 667 .sid = { ··· 684 670 }, { 685 671 .id = TEGRA234_MEMORY_CLIENT_DLA0FALRDB, 686 672 .name 
= "dla0falrdb", 673 + .bpmp_id = TEGRA_ICC_BPMP_DLA_0, 674 + .type = TEGRA_ICC_NISO, 687 675 .sid = TEGRA234_SID_NVDLA0, 688 676 .regs = { 689 677 .sid = { ··· 696 680 }, { 697 681 .id = TEGRA234_MEMORY_CLIENT_DLA0WRA, 698 682 .name = "dla0wra", 683 + .bpmp_id = TEGRA_ICC_BPMP_DLA_0, 684 + .type = TEGRA_ICC_NISO, 699 685 .sid = TEGRA234_SID_NVDLA0, 700 686 .regs = { 701 687 .sid = { ··· 708 690 }, { 709 691 .id = TEGRA234_MEMORY_CLIENT_DLA0FALWRB, 710 692 .name = "dla0falwrb", 693 + .bpmp_id = TEGRA_ICC_BPMP_DLA_0, 694 + .type = TEGRA_ICC_NISO, 711 695 .sid = TEGRA234_SID_NVDLA0, 712 696 .regs = { 713 697 .sid = { ··· 719 699 }, 720 700 }, { 721 701 .id = TEGRA234_MEMORY_CLIENT_DLA1RDA, 722 - .name = "dla0rda", 702 + .name = "dla1rda", 703 + .bpmp_id = TEGRA_ICC_BPMP_DLA_1, 704 + .type = TEGRA_ICC_NISO, 723 705 .sid = TEGRA234_SID_NVDLA1, 724 706 .regs = { 725 707 .sid = { ··· 731 709 }, 732 710 }, { 733 711 .id = TEGRA234_MEMORY_CLIENT_DLA1FALRDB, 734 - .name = "dla0falrdb", 712 + .name = "dla1falrdb", 713 + .bpmp_id = TEGRA_ICC_BPMP_DLA_1, 714 + .type = TEGRA_ICC_NISO, 735 715 .sid = TEGRA234_SID_NVDLA1, 736 716 .regs = { 737 717 .sid = { ··· 743 719 }, 744 720 }, { 745 721 .id = TEGRA234_MEMORY_CLIENT_DLA1WRA, 746 - .name = "dla0wra", 722 + .name = "dla1wra", 723 + .bpmp_id = TEGRA_ICC_BPMP_DLA_1, 724 + .type = TEGRA_ICC_NISO, 747 725 .sid = TEGRA234_SID_NVDLA1, 748 726 .regs = { 749 727 .sid = { ··· 755 729 }, 756 730 }, { 757 731 .id = TEGRA234_MEMORY_CLIENT_DLA1FALWRB, 758 - .name = "dla0falwrb", 732 + .name = "dla1falwrb", 733 + .bpmp_id = TEGRA_ICC_BPMP_DLA_1, 734 + .type = TEGRA_ICC_NISO, 759 735 .sid = TEGRA234_SID_NVDLA1, 760 736 .regs = { 761 737 .sid = { ··· 936 908 }, { 937 909 .id = TEGRA234_MEMORY_CLIENT_DLA0RDA1, 938 910 .name = "dla0rda1", 911 + .bpmp_id = TEGRA_ICC_BPMP_DLA_0, 912 + .type = TEGRA_ICC_NISO, 939 913 .sid = TEGRA234_SID_NVDLA0, 940 914 .regs = { 941 915 .sid = { ··· 947 917 }, 948 918 }, { 949 919 .id = 
TEGRA234_MEMORY_CLIENT_DLA1RDA1, 950 - .name = "dla0rda1", 920 + .name = "dla1rda1", 951 921 .sid = TEGRA234_SID_NVDLA1, 952 922 .regs = { 953 923 .sid = {
-1
drivers/pmdomain/qcom/rpmhpd.c
··· 217 217 [SC8280XP_CX] = &cx, 218 218 [SC8280XP_CX_AO] = &cx_ao, 219 219 [SC8280XP_EBI] = &ebi, 220 - [SC8280XP_GFX] = &gfx, 221 220 [SC8280XP_LCX] = &lcx, 222 221 [SC8280XP_LMX] = &lmx, 223 222 [SC8280XP_MMCX] = &mmcx,
+9
drivers/soc/mediatek/Kconfig
··· 68 68 chip process corner, temperatures and other factors. Then DVFS 69 69 driver could apply SVS bank voltage to PMIC/Buck. 70 70 71 + config MTK_SOCINFO 72 + tristate "MediaTek SoC Information" 73 + default y 74 + depends on NVMEM_MTK_EFUSE 75 + help 76 + The MediaTek SoC Information (mtk-socinfo) driver provides 77 + information about the SoC to userspace, including the 78 + manufacturer name, marketing name and SoC name. 79 + 71 80 endmenu
+1
drivers/soc/mediatek/Makefile
··· 7 7 obj-$(CONFIG_MTK_MMSYS) += mtk-mmsys.o 8 8 obj-$(CONFIG_MTK_MMSYS) += mtk-mutex.o 9 9 obj-$(CONFIG_MTK_SVS) += mtk-svs.o 10 + obj-$(CONFIG_MTK_SOCINFO) += mtk-socinfo.o
+191
drivers/soc/mediatek/mtk-socinfo.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Copyright (c) 2023 MediaTek Inc. 4 + */ 5 + #include <linux/kernel.h> 6 + #include <linux/module.h> 7 + #include <linux/of.h> 8 + #include <linux/of_platform.h> 9 + #include <linux/pm_runtime.h> 10 + #include <linux/nvmem-consumer.h> 11 + #include <linux/device.h> 12 + #include <linux/device/bus.h> 13 + #include <linux/debugfs.h> 14 + #include <linux/seq_file.h> 15 + #include <linux/string.h> 16 + #include <linux/sys_soc.h> 17 + #include <linux/slab.h> 18 + #include <linux/platform_device.h> 19 + 20 + #define MTK_SOCINFO_ENTRY(_soc_name, _segment_name, _marketing_name, _cell_data1, _cell_data2) {\ 21 + .soc_name = _soc_name, \ 22 + .segment_name = _segment_name, \ 23 + .marketing_name = _marketing_name, \ 24 + .cell_data = {_cell_data1, _cell_data2} \ 25 + } 26 + #define CELL_NOT_USED (0xFFFFFFFF) 27 + #define MAX_CELLS (2) 28 + 29 + struct mtk_socinfo { 30 + struct device *dev; 31 + struct name_data *name_data; 32 + struct socinfo_data *socinfo_data; 33 + struct soc_device *soc_dev; 34 + }; 35 + 36 + struct socinfo_data { 37 + char *soc_name; 38 + char *segment_name; 39 + char *marketing_name; 40 + u32 cell_data[MAX_CELLS]; 41 + }; 42 + 43 + static const char *cell_names[MAX_CELLS] = {"socinfo-data1", "socinfo-data2"}; 44 + 45 + static struct socinfo_data socinfo_data_table[] = { 46 + MTK_SOCINFO_ENTRY("MT8173", "MT8173V/AC", "MT8173", 0x6CA20004, 0x10000000), 47 + MTK_SOCINFO_ENTRY("MT8183", "MT8183V/AZA", "Kompanio 500", 0x00010043, 0x00000840), 48 + MTK_SOCINFO_ENTRY("MT8183", "MT8183V/AZA", "Kompanio 500", 0x00010043, 0x00000940), 49 + MTK_SOCINFO_ENTRY("MT8186", "MT8186GV/AZA", "Kompanio 520", 0x81861001, CELL_NOT_USED), 50 + MTK_SOCINFO_ENTRY("MT8186T", "MT8186TV/AZA", "Kompanio 528", 0x81862001, CELL_NOT_USED), 51 + MTK_SOCINFO_ENTRY("MT8188", "MT8188GV/AZA", "Kompanio 830", 0x81880000, 0x00000010), 52 + MTK_SOCINFO_ENTRY("MT8188", "MT8188GV/HZA", "Kompanio 830", 0x81880000, 0x00000011), 53 + 
MTK_SOCINFO_ENTRY("MT8192", "MT8192V/AZA", "Kompanio 820", 0x00001100, 0x00040080), 54 + MTK_SOCINFO_ENTRY("MT8192T", "MT8192V/ATZA", "Kompanio 828", 0x00000100, 0x000400C0), 55 + MTK_SOCINFO_ENTRY("MT8195", "MT8195GV/EZA", "Kompanio 1200", 0x81950300, CELL_NOT_USED), 56 + MTK_SOCINFO_ENTRY("MT8195", "MT8195GV/EHZA", "Kompanio 1200", 0x81950304, CELL_NOT_USED), 57 + MTK_SOCINFO_ENTRY("MT8195", "MT8195TV/EZA", "Kompanio 1380", 0x81950400, CELL_NOT_USED), 58 + MTK_SOCINFO_ENTRY("MT8195", "MT8195TV/EHZA", "Kompanio 1380", 0x81950404, CELL_NOT_USED), 59 + }; 60 + 61 + static int mtk_socinfo_create_socinfo_node(struct mtk_socinfo *mtk_socinfop) 62 + { 63 + struct soc_device_attribute *attrs; 64 + static char machine[30] = {0}; 65 + static const char *soc_manufacturer = "MediaTek"; 66 + 67 + attrs = devm_kzalloc(mtk_socinfop->dev, sizeof(*attrs), GFP_KERNEL); 68 + if (!attrs) 69 + return -ENOMEM; 70 + 71 + snprintf(machine, sizeof(machine), "%s (%s)", mtk_socinfop->socinfo_data->marketing_name, 72 + mtk_socinfop->socinfo_data->soc_name); 73 + attrs->family = soc_manufacturer; 74 + attrs->machine = machine; 75 + 76 + mtk_socinfop->soc_dev = soc_device_register(attrs); 77 + if (IS_ERR(mtk_socinfop->soc_dev)) 78 + return PTR_ERR(mtk_socinfop->soc_dev); 79 + 80 + dev_info(mtk_socinfop->dev, "%s %s SoC detected.\n", soc_manufacturer, attrs->machine); 81 + return 0; 82 + } 83 + 84 + static u32 mtk_socinfo_read_cell(struct device *dev, const char *name) 85 + { 86 + struct nvmem_device *nvmemp; 87 + struct device_node *np, *nvmem_node = dev->parent->of_node; 88 + u32 offset; 89 + u32 cell_val = CELL_NOT_USED; 90 + 91 + /* should never fail since the nvmem driver registers this child */ 92 + nvmemp = nvmem_device_find(nvmem_node, device_match_of_node); 93 + if (IS_ERR(nvmemp)) 94 + goto out; 95 + 96 + np = of_get_child_by_name(nvmem_node, name); 97 + if (!np) 98 + goto put_device; 99 + 100 + if (of_property_read_u32_index(np, "reg", 0, &offset)) 101 + goto put_node; 102 + 103 + 
nvmem_device_read(nvmemp, offset, sizeof(cell_val), &cell_val); 104 + 105 + put_node: 106 + of_node_put(np); 107 + put_device: 108 + nvmem_device_put(nvmemp); 109 + out: 110 + return cell_val; 111 + } 112 + 113 + static int mtk_socinfo_get_socinfo_data(struct mtk_socinfo *mtk_socinfop) 114 + { 115 + unsigned int i, j; 116 + unsigned int num_cell_data = 0; 117 + u32 cell_data[MAX_CELLS] = {0}; 118 + bool match_socinfo; 119 + int match_socinfo_index = -1; 120 + 121 + for (i = 0; i < MAX_CELLS; i++) { 122 + cell_data[i] = mtk_socinfo_read_cell(mtk_socinfop->dev, cell_names[i]); 123 + if (cell_data[i] != CELL_NOT_USED) 124 + num_cell_data++; 125 + else 126 + break; 127 + } 128 + 129 + if (!num_cell_data) 130 + return -ENOENT; 131 + 132 + for (i = 0; i < ARRAY_SIZE(socinfo_data_table); i++) { 133 + match_socinfo = true; 134 + for (j = 0; j < num_cell_data; j++) { 135 + if (cell_data[j] != socinfo_data_table[i].cell_data[j]) { 136 + match_socinfo = false; 137 + break; 138 + } 139 + } 140 + if (match_socinfo) { 141 + mtk_socinfop->socinfo_data = &(socinfo_data_table[i]); 142 + match_socinfo_index = i; 143 + break; 144 + } 145 + } 146 + 147 + return match_socinfo_index >= 0 ? 
match_socinfo_index : -ENOENT; 148 + } 149 + 150 + static int mtk_socinfo_probe(struct platform_device *pdev) 151 + { 152 + struct mtk_socinfo *mtk_socinfop; 153 + int ret; 154 + 155 + mtk_socinfop = devm_kzalloc(&pdev->dev, sizeof(*mtk_socinfop), GFP_KERNEL); 156 + if (!mtk_socinfop) 157 + return -ENOMEM; 158 + 159 + mtk_socinfop->dev = &pdev->dev; 160 + 161 + ret = mtk_socinfo_get_socinfo_data(mtk_socinfop); 162 + if (ret < 0) 163 + return dev_err_probe(mtk_socinfop->dev, ret, "Failed to get socinfo data\n"); 164 + 165 + ret = mtk_socinfo_create_socinfo_node(mtk_socinfop); 166 + if (ret) 167 + return dev_err_probe(mtk_socinfop->dev, ret, "Cannot create node\n"); 168 + 169 + platform_set_drvdata(pdev, mtk_socinfop); 170 + return 0; 171 + } 172 + 173 + static void mtk_socinfo_remove(struct platform_device *pdev) 174 + { 175 + struct mtk_socinfo *mtk_socinfop = platform_get_drvdata(pdev); 176 + 177 + soc_device_unregister(mtk_socinfop->soc_dev); 178 + } 179 + 180 + static struct platform_driver mtk_socinfo = { 181 + .probe = mtk_socinfo_probe, 182 + .remove_new = mtk_socinfo_remove, 183 + .driver = { 184 + .name = "mtk-socinfo", 185 + }, 186 + }; 187 + module_platform_driver(mtk_socinfo); 188 + 189 + MODULE_AUTHOR("William-TW LIN <william-tw.lin@mediatek.com>"); 190 + MODULE_DESCRIPTION("MediaTek socinfo driver"); 191 + MODULE_LICENSE("GPL");
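`mtk_socinfo_get_socinfo_data()` identifies the SoC by comparing up to two efuse cell values against `socinfo_data_table`, accepting the first entry whose used cells all match. A trimmed-down sketch of that lookup, with the table abbreviated to two entries from the diff above:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define CELL_NOT_USED 0xFFFFFFFFu
#define MAX_CELLS 2

/* Reduced entry: match efuse cell values against a table, as
 * mtk_socinfo_get_socinfo_data() walks socinfo_data_table. */
struct entry {
	const char *marketing_name;
	unsigned int cell_data[MAX_CELLS];
};

static const struct entry table[] = {
	{ "Kompanio 520", { 0x81861001, CELL_NOT_USED } },
	{ "Kompanio 820", { 0x00001100, 0x00040080 } },
};

/* Return the matching marketing name, or NULL. num_cells is how many
 * cells were actually read from the efuse; single-cell parts are
 * matched on cell 0 alone, as in the driver. */
static const char *match_soc(const unsigned int *cells, size_t num_cells)
{
	size_t i, j;

	for (i = 0; i < sizeof(table) / sizeof(table[0]); i++) {
		int ok = 1;

		for (j = 0; j < num_cells; j++)
			if (cells[j] != table[i].cell_data[j])
				ok = 0;
		if (ok)
			return table[i].marketing_name;
	}
	return NULL;
}
```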
+9
drivers/soc/qcom/Kconfig
··· 268 268 tristate 269 269 select QCOM_SCM 270 270 271 + config QCOM_PBS 272 + tristate "PBS trigger support for Qualcomm Technologies, Inc. PMICs" 273 + depends on SPMI 274 + help 275 + This driver supports configuring software programmable boot sequencer (PBS) 276 + trigger events through PBS RAM on Qualcomm Technologies, Inc. PMICs. 277 + This module provides the APIs to client drivers that want to send 278 + PBS trigger events to the PBS RAM. 279 + 271 280 endmenu
+2
drivers/soc/qcom/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0 2 2 CFLAGS_rpmh-rsc.o := -I$(src) 3 + CFLAGS_qcom_aoss.o := -I$(src) 3 4 obj-$(CONFIG_QCOM_AOSS_QMP) += qcom_aoss.o 4 5 obj-$(CONFIG_QCOM_GENI_SE) += qcom-geni-se.o 5 6 obj-$(CONFIG_QCOM_COMMAND_DB) += cmd-db.o ··· 35 34 obj-$(CONFIG_QCOM_ICC_BWMON) += icc-bwmon.o 36 35 qcom_ice-objs += ice.o 37 36 obj-$(CONFIG_QCOM_INLINE_CRYPTO_ENGINE) += qcom_ice.o 37 + obj-$(CONFIG_QCOM_PBS) += qcom-pbs.o
+1 -1
drivers/soc/qcom/apr.c
··· 399 399 return add_uevent_var(env, "MODALIAS=apr:%s", adev->name); 400 400 } 401 401 402 - struct bus_type aprbus = { 402 + const struct bus_type aprbus = { 403 403 .name = "aprbus", 404 404 .match = apr_device_match, 405 405 .probe = apr_device_probe,
+2
drivers/soc/qcom/llcc-qcom.c
··· 859 859 ret = regmap_read_poll_timeout(drv_data->bcast_regmap, status_reg, 860 860 slice_status, !(slice_status & status), 861 861 0, LLCC_STATUS_READ_DELAY); 862 + if (ret) 863 + return ret; 862 864 863 865 if (drv_data->version >= LLCC_VERSION_4_1_0_0) 864 866 ret = regmap_write(drv_data->bcast_regmap, act_clear_reg,
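The llcc fix above matters because `regmap_read_poll_timeout()` can return an error (e.g. a timeout), which the old code silently discarded before continuing. A toy model of the poll-then-check pattern — the getter function and attempt budget are stand-ins for the regmap call:

```c
#include <assert.h>
#include <errno.h>

/* Toy model of regmap_read_poll_timeout(): poll a getter until the
 * status bit clears or the attempt budget runs out. The fix is about
 * checking this return value before carrying on. */
static int poll_until_clear(unsigned int (*read)(void), unsigned int mask,
			    int max_tries)
{
	while (max_tries-- > 0)
		if (!(read() & mask))
			return 0;
	return -ETIMEDOUT;
}

static unsigned int counter;

/* Status bit stays set for the first two reads, then clears. */
static unsigned int reads_then_clear(void)
{
	return ++counter >= 3 ? 0u : 1u;
}
```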
-1
drivers/soc/qcom/qcom-geni-se.c
··· 89 89 * @base: Base address of this instance of QUP wrapper core 90 90 * @clks: Handle to the primary & optional secondary AHB clocks 91 91 * @num_clks: Count of clocks 92 - * @to_core: Core ICC path 93 92 */ 94 93 struct geni_wrapper { 95 94 struct device *dev;
+236
drivers/soc/qcom/qcom-pbs.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + /* 3 + * Copyright (c) 2023 Qualcomm Innovation Center, Inc. All rights reserved. 4 + */ 5 + 6 + #include <linux/delay.h> 7 + #include <linux/err.h> 8 + #include <linux/module.h> 9 + #include <linux/of.h> 10 + #include <linux/platform_device.h> 11 + #include <linux/of_platform.h> 12 + #include <linux/regmap.h> 13 + #include <linux/spmi.h> 14 + #include <linux/soc/qcom/qcom-pbs.h> 15 + 16 + #define PBS_CLIENT_TRIG_CTL 0x42 17 + #define PBS_CLIENT_SW_TRIG_BIT BIT(7) 18 + #define PBS_CLIENT_SCRATCH1 0x50 19 + #define PBS_CLIENT_SCRATCH2 0x51 20 + #define PBS_CLIENT_SCRATCH2_ERROR 0xFF 21 + 22 + #define RETRIES 2000 23 + #define DELAY 1100 24 + 25 + struct pbs_dev { 26 + struct device *dev; 27 + struct regmap *regmap; 28 + struct mutex lock; 29 + struct device_link *link; 30 + 31 + u32 base; 32 + }; 33 + 34 + static int qcom_pbs_wait_for_ack(struct pbs_dev *pbs, u8 bit_pos) 35 + { 36 + unsigned int val; 37 + int ret; 38 + 39 + ret = regmap_read_poll_timeout(pbs->regmap, pbs->base + PBS_CLIENT_SCRATCH2, 40 + val, val & BIT(bit_pos), DELAY, DELAY * RETRIES); 41 + 42 + if (ret < 0) { 43 + dev_err(pbs->dev, "Timeout for PBS ACK/NACK for bit %u\n", bit_pos); 44 + return -ETIMEDOUT; 45 + } 46 + 47 + if (val == PBS_CLIENT_SCRATCH2_ERROR) { 48 + ret = regmap_write(pbs->regmap, pbs->base + PBS_CLIENT_SCRATCH2, 0); 49 + dev_err(pbs->dev, "NACK from PBS for bit %u\n", bit_pos); 50 + return -EINVAL; 51 + } 52 + 53 + dev_dbg(pbs->dev, "PBS sequence for bit %u executed!\n", bit_pos); 54 + return 0; 55 + } 56 + 57 + /** 58 + * qcom_pbs_trigger_event() - Trigger the PBS RAM sequence 59 + * @pbs: Pointer to PBS device 60 + * @bitmap: bitmap 61 + * 62 + * This function is used to trigger the PBS RAM sequence to be 63 + * executed by the client driver. 64 + * 65 + * The PBS trigger sequence involves 66 + * 1. setting the PBS sequence bit in PBS_CLIENT_SCRATCH1 67 + * 2. Initiating the SW PBS trigger 68 + * 3. 
Checking the equivalent bit in PBS_CLIENT_SCRATCH2 for the 69 + * completion of the sequence. 70 + * 4. If PBS_CLIENT_SCRATCH2 == 0xFF, the PBS sequence failed to execute 71 + * 72 + * Return: 0 on success, < 0 on failure 73 + */ 74 + int qcom_pbs_trigger_event(struct pbs_dev *pbs, u8 bitmap) 75 + { 76 + unsigned int val; 77 + u16 bit_pos; 78 + int ret; 79 + 80 + if (WARN_ON(!bitmap)) 81 + return -EINVAL; 82 + 83 + if (IS_ERR_OR_NULL(pbs)) 84 + return -EINVAL; 85 + 86 + mutex_lock(&pbs->lock); 87 + ret = regmap_read(pbs->regmap, pbs->base + PBS_CLIENT_SCRATCH2, &val); 88 + if (ret < 0) 89 + goto out; 90 + 91 + if (val == PBS_CLIENT_SCRATCH2_ERROR) { 92 + /* PBS error - clear SCRATCH2 register */ 93 + ret = regmap_write(pbs->regmap, pbs->base + PBS_CLIENT_SCRATCH2, 0); 94 + if (ret < 0) 95 + goto out; 96 + } 97 + 98 + for (bit_pos = 0; bit_pos < 8; bit_pos++) { 99 + if (!(bitmap & BIT(bit_pos))) 100 + continue; 101 + 102 + /* Clear the PBS sequence bit position */ 103 + ret = regmap_update_bits(pbs->regmap, pbs->base + PBS_CLIENT_SCRATCH2, 104 + BIT(bit_pos), 0); 105 + if (ret < 0) 106 + goto out_clear_scratch1; 107 + 108 + /* Set the PBS sequence bit position */ 109 + ret = regmap_update_bits(pbs->regmap, pbs->base + PBS_CLIENT_SCRATCH1, 110 + BIT(bit_pos), BIT(bit_pos)); 111 + if (ret < 0) 112 + goto out_clear_scratch1; 113 + 114 + /* Initiate the SW trigger */ 115 + ret = regmap_update_bits(pbs->regmap, pbs->base + PBS_CLIENT_TRIG_CTL, 116 + PBS_CLIENT_SW_TRIG_BIT, PBS_CLIENT_SW_TRIG_BIT); 117 + if (ret < 0) 118 + goto out_clear_scratch1; 119 + 120 + ret = qcom_pbs_wait_for_ack(pbs, bit_pos); 121 + if (ret < 0) 122 + goto out_clear_scratch1; 123 + 124 + /* Clear the PBS sequence bit position */ 125 + regmap_update_bits(pbs->regmap, pbs->base + PBS_CLIENT_SCRATCH1, BIT(bit_pos), 0); 126 + regmap_update_bits(pbs->regmap, pbs->base + PBS_CLIENT_SCRATCH2, BIT(bit_pos), 0); 127 + } 128 + 129 + out_clear_scratch1: 130 + /* Clear all the requested bitmap */ 131 + ret = 
regmap_update_bits(pbs->regmap, pbs->base + PBS_CLIENT_SCRATCH1, bitmap, 0); 132 + 133 + out: 134 + mutex_unlock(&pbs->lock); 135 + 136 + return ret; 137 + } 138 + EXPORT_SYMBOL_GPL(qcom_pbs_trigger_event); 139 + 140 + /** 141 + * get_pbs_client_device() - Get the PBS device used by client 142 + * @dev: Client device 143 + * 144 + * This function is used to get the PBS device that is being 145 + * used by the client. 146 + * 147 + * Return: pbs_dev on success, ERR_PTR on failure 148 + */ 149 + struct pbs_dev *get_pbs_client_device(struct device *dev) 150 + { 151 + struct device_node *pbs_dev_node; 152 + struct platform_device *pdev; 153 + struct pbs_dev *pbs; 154 + 155 + pbs_dev_node = of_parse_phandle(dev->of_node, "qcom,pbs", 0); 156 + if (!pbs_dev_node) { 157 + dev_err(dev, "Missing qcom,pbs property\n"); 158 + return ERR_PTR(-ENODEV); 159 + } 160 + 161 + pdev = of_find_device_by_node(pbs_dev_node); 162 + if (!pdev) { 163 + dev_err(dev, "Unable to find PBS dev_node\n"); 164 + pbs = ERR_PTR(-EPROBE_DEFER); 165 + goto out; 166 + } 167 + 168 + pbs = platform_get_drvdata(pdev); 169 + if (!pbs) { 170 + dev_err(dev, "Cannot get pbs instance from %s\n", dev_name(&pdev->dev)); 171 + platform_device_put(pdev); 172 + pbs = ERR_PTR(-EPROBE_DEFER); 173 + goto out; 174 + } 175 + 176 + pbs->link = device_link_add(dev, &pdev->dev, DL_FLAG_AUTOREMOVE_SUPPLIER); 177 + if (!pbs->link) { 178 + dev_err(&pdev->dev, "Failed to create device link to consumer %s\n", dev_name(dev)); 179 + platform_device_put(pdev); 180 + pbs = ERR_PTR(-EINVAL); 181 + goto out; 182 + } 183 + 184 + out: 185 + of_node_put(pbs_dev_node); 186 + return pbs; 187 + } 188 + EXPORT_SYMBOL_GPL(get_pbs_client_device); 189 + 190 + static int qcom_pbs_probe(struct platform_device *pdev) 191 + { 192 + struct pbs_dev *pbs; 193 + u32 val; 194 + int ret; 195 + 196 + pbs = devm_kzalloc(&pdev->dev, sizeof(*pbs), GFP_KERNEL); 197 + if (!pbs) 198 + return -ENOMEM; 199 + 200 + pbs->dev = &pdev->dev; 201 + pbs->regmap = 
dev_get_regmap(pbs->dev->parent, NULL); 202 + if (!pbs->regmap) { 203 + dev_err(pbs->dev, "Couldn't get parent's regmap\n"); 204 + return -EINVAL; 205 + } 206 + 207 + ret = device_property_read_u32(pbs->dev, "reg", &val); 208 + if (ret < 0) { 209 + dev_err(pbs->dev, "Couldn't find reg, ret = %d\n", ret); 210 + return ret; 211 + } 212 + pbs->base = val; 213 + mutex_init(&pbs->lock); 214 + 215 + platform_set_drvdata(pdev, pbs); 216 + 217 + return 0; 218 + } 219 + 220 + static const struct of_device_id qcom_pbs_match_table[] = { 221 + { .compatible = "qcom,pbs" }, 222 + {} 223 + }; 224 + MODULE_DEVICE_TABLE(of, qcom_pbs_match_table); 225 + 226 + static struct platform_driver qcom_pbs_driver = { 227 + .driver = { 228 + .name = "qcom-pbs", 229 + .of_match_table = qcom_pbs_match_table, 230 + }, 231 + .probe = qcom_pbs_probe, 232 + }; 233 + module_platform_driver(qcom_pbs_driver) 234 + 235 + MODULE_DESCRIPTION("QCOM PBS DRIVER"); 236 + MODULE_LICENSE("GPL");
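The kerneldoc for `qcom_pbs_trigger_event()` describes a four-step handshake: set the request bit in `PBS_CLIENT_SCRATCH1`, pulse the SW trigger, poll the matching bit in `PBS_CLIENT_SCRATCH2`, and treat `0xFF` as a NACK. A self-contained simulation of one bit of that handshake against a fake register file — the firmware model is an assumption and simply acks every request:

```c
#include <assert.h>
#include <stdint.h>

#define PBS_CLIENT_TRIG_CTL    0x42
#define PBS_CLIENT_SW_TRIG_BIT (1u << 7)
#define PBS_CLIENT_SCRATCH1    0x50
#define PBS_CLIENT_SCRATCH2    0x51
#define SCRATCH2_ERROR         0xFF

/* Toy register file standing in for the PMIC regmap. */
static uint8_t regs[0x100];

/* Model a PBS RAM that succeeds: ack every requested bit. */
static void fake_firmware_step(void)
{
	if (regs[PBS_CLIENT_TRIG_CTL] & PBS_CLIENT_SW_TRIG_BIT) {
		regs[PBS_CLIENT_SCRATCH2] |= regs[PBS_CLIENT_SCRATCH1];
		regs[PBS_CLIENT_TRIG_CTL] &= ~PBS_CLIENT_SW_TRIG_BIT;
	}
}

/* Sketch of qcom_pbs_trigger_event() for a single bit: request,
 * trigger, check the ack, then clear the request/ack bits. */
static int pbs_trigger_bit(unsigned int bit_pos)
{
	uint8_t bit = 1u << bit_pos;

	regs[PBS_CLIENT_SCRATCH2] &= ~bit;             /* clear stale ack */
	regs[PBS_CLIENT_SCRATCH1] |= bit;              /* request */
	regs[PBS_CLIENT_TRIG_CTL] |= PBS_CLIENT_SW_TRIG_BIT;

	fake_firmware_step();                          /* poll loop stands in here */

	if (regs[PBS_CLIENT_SCRATCH2] == SCRATCH2_ERROR)
		return -1;                             /* NACK */
	if (!(regs[PBS_CLIENT_SCRATCH2] & bit))
		return -2;                             /* no ack: timeout */

	regs[PBS_CLIENT_SCRATCH1] &= ~bit;             /* clear request */
	regs[PBS_CLIENT_SCRATCH2] &= ~bit;             /* clear ack */
	return 0;
}
```

The real driver additionally serializes callers with `pbs->lock` and bounds the poll with `regmap_read_poll_timeout()`.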
+104 -1
drivers/soc/qcom/qcom_aoss.c
··· 3 3 * Copyright (c) 2019, Linaro Ltd 4 4 */ 5 5 #include <linux/clk-provider.h> 6 + #include <linux/debugfs.h> 6 7 #include <linux/interrupt.h> 7 8 #include <linux/io.h> 8 9 #include <linux/mailbox_client.h> ··· 13 12 #include <linux/thermal.h> 14 13 #include <linux/slab.h> 15 14 #include <linux/soc/qcom/qcom_aoss.h> 15 + 16 + #define CREATE_TRACE_POINTS 17 + #include "trace-aoss.h" 16 18 17 19 #define QMP_DESC_MAGIC 0x0 18 20 #define QMP_DESC_VERSION 0x4 ··· 48 44 49 45 #define QMP_NUM_COOLING_RESOURCES 2 50 46 47 + #define QMP_DEBUGFS_FILES 4 48 + 51 49 static bool qmp_cdev_max_state = 1; 52 50 53 51 struct qmp_cooling_device { ··· 71 65 * @tx_lock: provides synchronization between multiple callers of qmp_send() 72 66 * @qdss_clk: QDSS clock hw struct 73 67 * @cooling_devs: thermal cooling devices 68 + * @debugfs_root: directory for the developer/tester interface 69 + * @debugfs_files: array of individual debugfs entries under debugfs_root 74 70 */ 75 71 struct qmp { 76 72 void __iomem *msgram; ··· 90 82 91 83 struct clk_hw qdss_clk; 92 84 struct qmp_cooling_device *cooling_devs; 85 + struct dentry *debugfs_root; 86 + struct dentry *debugfs_files[QMP_DEBUGFS_FILES]; 93 87 }; 94 88 95 89 static void qmp_kick(struct qmp *qmp) ··· 224 214 * 225 215 * Return: 0 on success, negative errno on failure 226 216 */ 227 - int qmp_send(struct qmp *qmp, const char *fmt, ...) 217 + int __printf(2, 3) qmp_send(struct qmp *qmp, const char *fmt, ...) 
228 218 { 229 219 char buf[QMP_MSG_LEN]; 230 220 long time_left; ··· 244 234 return -EINVAL; 245 235 246 236 mutex_lock(&qmp->tx_lock); 237 + 238 + trace_aoss_send(buf); 247 239 248 240 /* The message RAM only implements 32-bit accesses */ 249 241 __iowrite32_copy(qmp->msgram + qmp->offset + sizeof(u32), ··· 267 255 } else { 268 256 ret = 0; 269 257 } 258 + 259 + trace_aoss_send_done(buf, ret); 270 260 271 261 mutex_unlock(&qmp->tx_lock); 272 262 ··· 489 475 } 490 476 EXPORT_SYMBOL_GPL(qmp_put); 491 477 478 + struct qmp_debugfs_entry { 479 + const char *name; 480 + const char *fmt; 481 + bool is_bool; 482 + const char *true_val; 483 + const char *false_val; 484 + }; 485 + 486 + static const struct qmp_debugfs_entry qmp_debugfs_entries[QMP_DEBUGFS_FILES] = { 487 + { "ddr_frequency_mhz", "{class: ddr, res: fixed, val: %u}", false }, 488 + { "prevent_aoss_sleep", "{class: aoss_slp, res: sleep: %s}", true, "enable", "disable" }, 489 + { "prevent_cx_collapse", "{class: cx_mol, res: cx, val: %s}", true, "mol", "off" }, 490 + { "prevent_ddr_collapse", "{class: ddr_mol, res: ddr, val: %s}", true, "mol", "off" }, 491 + }; 492 + 493 + static ssize_t qmp_debugfs_write(struct file *file, const char __user *user_buf, 494 + size_t count, loff_t *pos) 495 + { 496 + const struct qmp_debugfs_entry *entry = NULL; 497 + struct qmp *qmp = file->private_data; 498 + char buf[QMP_MSG_LEN]; 499 + unsigned int uint_val; 500 + const char *str_val; 501 + bool bool_val; 502 + int ret; 503 + int i; 504 + 505 + for (i = 0; i < ARRAY_SIZE(qmp->debugfs_files); i++) { 506 + if (qmp->debugfs_files[i] == file->f_path.dentry) { 507 + entry = &qmp_debugfs_entries[i]; 508 + break; 509 + } 510 + } 511 + if (WARN_ON(!entry)) 512 + return -EFAULT; 513 + 514 + if (entry->is_bool) { 515 + ret = kstrtobool_from_user(user_buf, count, &bool_val); 516 + if (ret) 517 + return ret; 518 + 519 + str_val = bool_val ? 
entry->true_val : entry->false_val; 520 + 521 + ret = snprintf(buf, sizeof(buf), entry->fmt, str_val); 522 + if (ret >= sizeof(buf)) 523 + return -EINVAL; 524 + } else { 525 + ret = kstrtou32_from_user(user_buf, count, 0, &uint_val); 526 + if (ret) 527 + return ret; 528 + 529 + ret = snprintf(buf, sizeof(buf), entry->fmt, uint_val); 530 + if (ret >= sizeof(buf)) 531 + return -EINVAL; 532 + } 533 + 534 + ret = qmp_send(qmp, buf); 535 + if (ret < 0) 536 + return ret; 537 + 538 + return count; 539 + } 540 + 541 + static const struct file_operations qmp_debugfs_fops = { 542 + .open = simple_open, 543 + .write = qmp_debugfs_write, 544 + }; 545 + 546 + static void qmp_debugfs_create(struct qmp *qmp) 547 + { 548 + const struct qmp_debugfs_entry *entry; 549 + int i; 550 + 551 + qmp->debugfs_root = debugfs_create_dir("qcom_aoss", NULL); 552 + 553 + for (i = 0; i < ARRAY_SIZE(qmp->debugfs_files); i++) { 554 + entry = &qmp_debugfs_entries[i]; 555 + 556 + qmp->debugfs_files[i] = debugfs_create_file(entry->name, 0200, 557 + qmp->debugfs_root, 558 + qmp, 559 + &qmp_debugfs_fops); 560 + } 561 + } 562 + 492 563 static int qmp_probe(struct platform_device *pdev) 493 564 { 494 565 struct qmp *qmp; ··· 622 523 623 524 platform_set_drvdata(pdev, qmp); 624 525 526 + qmp_debugfs_create(qmp); 527 + 625 528 return 0; 626 529 627 530 err_close_qmp: ··· 637 536 static void qmp_remove(struct platform_device *pdev) 638 537 { 639 538 struct qmp *qmp = platform_get_drvdata(pdev); 539 + 540 + debugfs_remove_recursive(qmp->debugfs_root); 640 541 641 542 qmp_qdss_clk_remove(qmp); 642 543 qmp_cooling_devices_remove(qmp);
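`qmp_debugfs_write()` above renders each AOSS request with `snprintf()` into a fixed-size message buffer and rejects truncated output before calling `qmp_send()`. That bounds check in isolation — the `QMP_MSG_LEN` value used here is an assumption for illustration:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

#define QMP_MSG_LEN 64  /* assumption: the real value lives in qcom_aoss.c */

/* Mirror of the formatting done in qmp_debugfs_write(): render one
 * request string and reject anything that would not fit the message
 * RAM buffer. */
static int format_ddr_request(char *buf, unsigned int mhz)
{
	int ret = snprintf(buf, QMP_MSG_LEN,
			   "{class: ddr, res: fixed, val: %u}", mhz);

	return (ret < 0 || ret >= QMP_MSG_LEN) ? -1 : 0;
}
```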
-11
drivers/soc/qcom/smem.c
···
 void *qcom_smem_get(unsigned host, unsigned item, size_t *size)
 {
 	struct smem_partition *part;
-	unsigned long flags;
-	int ret;
 	void *ptr = ERR_PTR(-EPROBE_DEFER);
 
 	if (!__smem)
···
 
 	if (WARN_ON(item >= __smem->item_count))
 		return ERR_PTR(-EINVAL);
-
-	ret = hwspin_lock_timeout_irqsave(__smem->hwlock,
-					  HWSPINLOCK_TIMEOUT,
-					  &flags);
-	if (ret)
-		return ERR_PTR(ret);
 
 	if (host < SMEM_HOST_COUNT && __smem->partitions[host].virt_base) {
 		part = &__smem->partitions[host];
···
 		ptr = qcom_smem_get_global(__smem, item, size);
 	}
 
-	hwspin_unlock_irqrestore(__smem->hwlock, &flags);
-
 	return ptr;
-
 }
 EXPORT_SYMBOL_GPL(qcom_smem_get);
 
+4 -2
drivers/soc/qcom/smp2p.c
···
  * @valid_entries: number of allocated entries
  * @flags:
  * @entries: individual communication entries
- * @name: name of the entry
- * @value: content of the entry
+ * @entries.name: name of the entry
+ * @entries.value: content of the entry
  */
 struct smp2p_smem_item {
 	u32 magic;
···
  *
  * Handle notifications from the remote side to handle newly allocated entries
  * or any changes to the state bits of existing entries.
+ *
+ * Return: %IRQ_HANDLED
  */
 static irqreturn_t qcom_smp2p_intr(int irq, void *data)
 {
+6 -1
drivers/soc/qcom/socinfo.c
···
 	[50] = "PM8350B",
 	[51] = "PMR735A",
 	[52] = "PMR735B",
-	[55] = "PM2250",
+	[55] = "PM4125",
 	[58] = "PM8450",
 	[65] = "PM8010",
 	[69] = "PM8550VS",
···
 	{ qcom_board_id(IPQ9510) },
 	{ qcom_board_id(QRB4210) },
 	{ qcom_board_id(QRB2210) },
+	{ qcom_board_id(SM8475) },
+	{ qcom_board_id(SM8475P) },
 	{ qcom_board_id(SA8775P) },
 	{ qcom_board_id(QRU1000) },
+	{ qcom_board_id(SM8475_2) },
 	{ qcom_board_id(QDU1000) },
 	{ qcom_board_id(SM8650) },
 	{ qcom_board_id(SM4450) },
···
 	{ qcom_board_id(IPQ5322) },
 	{ qcom_board_id(IPQ5312) },
 	{ qcom_board_id(IPQ5302) },
+	{ qcom_board_id(QCS8550) },
+	{ qcom_board_id(QCM8550) },
 	{ qcom_board_id(IPQ5300) },
 };
 
+244 -4
drivers/soc/qcom/spm.c
···
  * SAW power controller driver
  */
 
-#include <linux/kernel.h>
+#include <linux/bitfield.h>
+#include <linux/err.h>
 #include <linux/init.h>
 #include <linux/io.h>
+#include <linux/iopoll.h>
+#include <linux/kernel.h>
+#include <linux/linear_range.h>
 #include <linux/module.h>
-#include <linux/slab.h>
 #include <linux/of.h>
-#include <linux/err.h>
 #include <linux/platform_device.h>
+#include <linux/slab.h>
+#include <linux/smp.h>
+
+#include <linux/regulator/driver.h>
+
 #include <soc/qcom/spm.h>
+
+#define FIELD_SET(current, mask, val) \
+	(((current) & ~(mask)) | FIELD_PREP((mask), (val)))
 
 #define SPM_CTL_INDEX		0x7f
 #define SPM_CTL_INDEX_SHIFT	4
 #define SPM_CTL_EN		BIT(0)
+
+/* These registers might be specific to SPM 1.1 */
+#define SPM_VCTL_VLVL			GENMASK(7, 0)
+#define SPM_PMIC_DATA_0_VLVL		GENMASK(7, 0)
+#define SPM_PMIC_DATA_1_MIN_VSEL	GENMASK(5, 0)
+#define SPM_PMIC_DATA_1_MAX_VSEL	GENMASK(21, 16)
+
+#define SPM_1_1_AVS_CTL_AVS_ENABLED	BIT(27)
+#define SPM_AVS_CTL_MAX_VLVL		GENMASK(22, 17)
+#define SPM_AVS_CTL_MIN_VLVL		GENMASK(15, 10)
 
 enum spm_reg {
 	SPM_REG_CFG,
···
 	SPM_REG_PMIC_DATA_1,
 	SPM_REG_VCTL,
 	SPM_REG_SEQ_ENTRY,
-	SPM_REG_SPM_STS,
+	SPM_REG_STS0,
+	SPM_REG_STS1,
 	SPM_REG_PMIC_STS,
 	SPM_REG_AVS_CTL,
 	SPM_REG_AVS_LIMIT,
+	SPM_REG_RST,
 	SPM_REG_NR,
+};
+
+#define MAX_PMIC_DATA		2
+#define MAX_SEQ_DATA		64
+
+struct spm_reg_data {
+	const u16 *reg_offset;
+	u32 spm_cfg;
+	u32 spm_dly;
+	u32 pmic_dly;
+	u32 pmic_data[MAX_PMIC_DATA];
+	u32 avs_ctl;
+	u32 avs_limit;
+	u8 seq[MAX_SEQ_DATA];
+	u8 start_index[PM_SLEEP_MODE_NR];
+
+	smp_call_func_t set_vdd;
+	/* for now we support only a single range */
+	struct linear_range *range;
+	unsigned int ramp_delay;
+	unsigned int init_uV;
+};
+
+struct spm_driver_data {
+	void __iomem *reg_base;
+	const struct spm_reg_data *reg_data;
+	struct device *dev;
+	unsigned int volt_sel;
+	int reg_cpu;
 };
 
 static const u16 spm_reg_offset_v4_1[SPM_REG_NR] = {
···
 
 static const u16 spm_reg_offset_v1_1[SPM_REG_NR] = {
 	[SPM_REG_CFG]		= 0x08,
+	[SPM_REG_STS0]		= 0x0c,
+	[SPM_REG_STS1]		= 0x10,
+	[SPM_REG_VCTL]		= 0x14,
+	[SPM_REG_AVS_CTL]	= 0x18,
 	[SPM_REG_SPM_CTL]	= 0x20,
 	[SPM_REG_PMIC_DLY]	= 0x24,
 	[SPM_REG_PMIC_DATA_0]	= 0x28,
···
 	[SPM_REG_SEQ_ENTRY]	= 0x80,
 };
 
+static void smp_set_vdd_v1_1(void *data);
+
 /* SPM register data for 8064 */
+static struct linear_range spm_v1_1_regulator_range =
+	REGULATOR_LINEAR_RANGE(700000, 0, 56, 12500);
+
 static const struct spm_reg_data spm_reg_8064_cpu = {
 	.reg_offset = spm_reg_offset_v1_1,
 	.spm_cfg = 0x1F,
···
 		0x10, 0x54, 0x30, 0x0C, 0x24, 0x30, 0x0F },
 	.start_index[PM_SLEEP_MODE_STBY] = 0,
 	.start_index[PM_SLEEP_MODE_SPC] = 2,
+	.set_vdd = smp_set_vdd_v1_1,
+	.range = &spm_v1_1_regulator_range,
+	.init_uV = 1300000,
+	.ramp_delay = 1250,
 };
 
 static inline void spm_register_write(struct spm_driver_data *drv,
···
 	ctl_val |= start_index << SPM_CTL_INDEX_SHIFT;
 	ctl_val |= SPM_CTL_EN;
 	spm_register_write_sync(drv, SPM_REG_SPM_CTL, ctl_val);
+}
+
+static int spm_set_voltage_sel(struct regulator_dev *rdev, unsigned int selector)
+{
+	struct spm_driver_data *drv = rdev_get_drvdata(rdev);
+
+	drv->volt_sel = selector;
+
+	/* Always do the SAW register writes on the corresponding CPU */
+	return smp_call_function_single(drv->reg_cpu, drv->reg_data->set_vdd, drv, true);
+}
+
+static int spm_get_voltage_sel(struct regulator_dev *rdev)
+{
+	struct spm_driver_data *drv = rdev_get_drvdata(rdev);
+
+	return drv->volt_sel;
+}
+
+static const struct regulator_ops spm_reg_ops = {
+	.set_voltage_sel	= spm_set_voltage_sel,
+	.get_voltage_sel	= spm_get_voltage_sel,
+	.list_voltage		= regulator_list_voltage_linear_range,
+	.set_voltage_time_sel	= regulator_set_voltage_time_sel,
+};
+
+static void smp_set_vdd_v1_1(void *data)
+{
+	struct spm_driver_data *drv = data;
+	unsigned int vctl, data0, data1, avs_ctl, sts;
+	unsigned int vlevel, volt_sel;
+	bool avs_enabled;
+
+	volt_sel = drv->volt_sel;
+	vlevel = volt_sel | 0x80; /* band */
+
+	avs_ctl = spm_register_read(drv, SPM_REG_AVS_CTL);
+	vctl = spm_register_read(drv, SPM_REG_VCTL);
+	data0 = spm_register_read(drv, SPM_REG_PMIC_DATA_0);
+	data1 = spm_register_read(drv, SPM_REG_PMIC_DATA_1);
+
+	avs_enabled = avs_ctl & SPM_1_1_AVS_CTL_AVS_ENABLED;
+
+	/* If AVS is enabled, switch it off during the voltage change */
+	if (avs_enabled) {
+		avs_ctl &= ~SPM_1_1_AVS_CTL_AVS_ENABLED;
+		spm_register_write(drv, SPM_REG_AVS_CTL, avs_ctl);
+	}
+
+	/* Kick the state machine back to idle */
+	spm_register_write(drv, SPM_REG_RST, 1);
+
+	vctl = FIELD_SET(vctl, SPM_VCTL_VLVL, vlevel);
+	data0 = FIELD_SET(data0, SPM_PMIC_DATA_0_VLVL, vlevel);
+	data1 = FIELD_SET(data1, SPM_PMIC_DATA_1_MIN_VSEL, volt_sel);
+	data1 = FIELD_SET(data1, SPM_PMIC_DATA_1_MAX_VSEL, volt_sel);
+
+	spm_register_write(drv, SPM_REG_VCTL, vctl);
+	spm_register_write(drv, SPM_REG_PMIC_DATA_0, data0);
+	spm_register_write(drv, SPM_REG_PMIC_DATA_1, data1);
+
+	if (read_poll_timeout_atomic(spm_register_read,
+				     sts, sts == vlevel,
+				     1, 200, false,
+				     drv, SPM_REG_STS1)) {
+		dev_err_ratelimited(drv->dev, "timeout setting the voltage (%x %x)!\n", sts, vlevel);
+		goto enable_avs;
+	}
+
+	if (avs_enabled) {
+		unsigned int max_avs = volt_sel;
+		unsigned int min_avs = max(max_avs, 4U) - 4;
+
+		avs_ctl = FIELD_SET(avs_ctl, SPM_AVS_CTL_MIN_VLVL, min_avs);
+		avs_ctl = FIELD_SET(avs_ctl, SPM_AVS_CTL_MAX_VLVL, max_avs);
+		spm_register_write(drv, SPM_REG_AVS_CTL, avs_ctl);
+	}
+
+enable_avs:
+	if (avs_enabled) {
+		avs_ctl |= SPM_1_1_AVS_CTL_AVS_ENABLED;
+		spm_register_write(drv, SPM_REG_AVS_CTL, avs_ctl);
+	}
+}
+
+static int spm_get_cpu(struct device *dev)
+{
+	int cpu;
+	bool found;
+
+	for_each_possible_cpu(cpu) {
+		struct device_node *cpu_node, *saw_node;
+
+		cpu_node = of_cpu_device_node_get(cpu);
+		if (!cpu_node)
+			continue;
+
+		saw_node = of_parse_phandle(cpu_node, "qcom,saw", 0);
+		found = (saw_node == dev->of_node);
+		of_node_put(saw_node);
+		of_node_put(cpu_node);
+
+		if (found)
+			return cpu;
+	}
+
+	/* L2 SPM is not bound to any CPU, voltage setting is not supported */
+
+	return -EOPNOTSUPP;
+}
+
+static int spm_register_regulator(struct device *dev, struct spm_driver_data *drv)
+{
+	struct regulator_config config = {
+		.dev = dev,
+		.driver_data = drv,
+	};
+	struct regulator_desc *rdesc;
+	struct regulator_dev *rdev;
+	int ret;
+	bool found;
+
+	if (!drv->reg_data->set_vdd)
+		return 0;
+
+	rdesc = devm_kzalloc(dev, sizeof(*rdesc), GFP_KERNEL);
+	if (!rdesc)
+		return -ENOMEM;
+
+	rdesc->name = "spm";
+	rdesc->of_match = of_match_ptr("regulator");
+	rdesc->type = REGULATOR_VOLTAGE;
+	rdesc->owner = THIS_MODULE;
+	rdesc->ops = &spm_reg_ops;
+
+	rdesc->linear_ranges = drv->reg_data->range;
+	rdesc->n_linear_ranges = 1;
+	rdesc->n_voltages = rdesc->linear_ranges[rdesc->n_linear_ranges - 1].max_sel + 1;
+	rdesc->ramp_delay = drv->reg_data->ramp_delay;
+
+	ret = spm_get_cpu(dev);
+	if (ret < 0)
+		return ret;
+
+	drv->reg_cpu = ret;
+	dev_dbg(dev, "SAW2 bound to CPU %d\n", drv->reg_cpu);
+
+	/*
+	 * Program initial voltage, otherwise registration will also try
+	 * setting the voltage, which might result in undervolting the CPU.
+	 */
+	drv->volt_sel = DIV_ROUND_UP(drv->reg_data->init_uV - rdesc->min_uV,
+				     rdesc->uV_step);
+	ret = linear_range_get_selector_high(drv->reg_data->range,
+					     drv->reg_data->init_uV,
+					     &drv->volt_sel,
+					     &found);
+	if (ret) {
+		dev_err(dev, "Initial uV value out of bounds\n");
+		return ret;
+	}
+
+	/* Always do the SAW register writes on the corresponding CPU */
+	smp_call_function_single(drv->reg_cpu, drv->reg_data->set_vdd, drv, true);
+
+	rdev = devm_regulator_register(dev, rdesc, &config);
+	if (IS_ERR(rdev)) {
+		dev_err(dev, "failed to register regulator\n");
+		return PTR_ERR(rdev);
+	}
+
+	return 0;
 }
 
 static const struct of_device_id spm_match_table[] = {
···
 		return -ENODEV;
 
 	drv->reg_data = match_id->data;
+	drv->dev = &pdev->dev;
 	platform_set_drvdata(pdev, drv);
 
 	/* Write the SPM sequences first.. */
···
 	/* Set up Standby as the default low power mode */
 	if (drv->reg_data->reg_offset[SPM_REG_SPM_CTL])
 		spm_set_low_power_mode(drv, PM_SLEEP_MODE_STBY);
+
+	if (IS_ENABLED(CONFIG_REGULATOR))
+		return spm_register_regulator(&pdev->dev, drv);
 
 	return 0;
 }
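The new FIELD_SET() macro above is a masked clear followed by FIELD_PREP() from `<linux/bitfield.h>`: the field selected by the mask is zeroed, then the new value is shifted to the mask's lowest set bit. A standalone userspace sketch of the same bit manipulation (the `field_set` function here is an illustrative reimplementation, assuming a non-zero mask):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Userspace analogue of FIELD_SET(): clear the masked field in 'cur',
 * then place 'val' at the mask's lowest set bit, exactly what
 * FIELD_PREP() does in the kernel. Mask must be non-zero.
 */
static uint32_t field_set(uint32_t cur, uint32_t mask, uint32_t val)
{
	unsigned int shift = __builtin_ctz(mask);	/* lowest set bit of mask */

	return (cur & ~mask) | ((val << shift) & mask);
}
```

With the 8064 range above (700000 µV base, 12500 µV step), the 1300000 µV init_uV maps to selector 48, so smp_set_vdd_v1_1() would write vlevel 0xB0 (selector 48 with the 0x80 band bit) into the VCTL field and selector 48 into the min/max VSEL fields of PMIC_DATA_1.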
+48
drivers/soc/qcom/trace-aoss.h
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2024 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+#undef TRACE_SYSTEM
+#define TRACE_SYSTEM qcom_aoss
+
+#if !defined(_TRACE_QCOM_AOSS_H) || defined(TRACE_HEADER_MULTI_READ)
+#define _TRACE_QCOM_AOSS_H
+
+#include <linux/tracepoint.h>
+
+TRACE_EVENT(aoss_send,
+	TP_PROTO(const char *msg),
+	TP_ARGS(msg),
+	TP_STRUCT__entry(
+		__string(msg, msg)
+	),
+	TP_fast_assign(
+		__assign_str(msg, msg);
+	),
+	TP_printk("%s", __get_str(msg))
+);
+
+TRACE_EVENT(aoss_send_done,
+	TP_PROTO(const char *msg, int ret),
+	TP_ARGS(msg, ret),
+	TP_STRUCT__entry(
+		__string(msg, msg)
+		__field(int, ret)
+	),
+	TP_fast_assign(
+		__assign_str(msg, msg);
+		__entry->ret = ret;
+	),
+	TP_printk("%s: %d", __get_str(msg), __entry->ret)
+);
+
+#endif /* _TRACE_QCOM_AOSS_H */
+
+#undef TRACE_INCLUDE_PATH
+#define TRACE_INCLUDE_PATH .
+
+#undef TRACE_INCLUDE_FILE
+#define TRACE_INCLUDE_FILE trace-aoss
+
+#include <trace/define_trace.h>
+14 -3
drivers/soc/renesas/Kconfig
···
 	select SYS_SUPPORTS_SH_CMT
 	select SYS_SUPPORTS_SH_TMU
 
+config ARCH_RCAR_GEN4
+	bool
+	select ARCH_RCAR_GEN3
+
 config ARCH_RMOBILE
 	bool
 	select PM
···
 
 config ARCH_R8A779F0
 	bool "ARM64 Platform support for R-Car S4-8"
-	select ARCH_RCAR_GEN3
+	select ARCH_RCAR_GEN4
 	select SYSC_R8A779F0
 	help
 	  This enables support for the Renesas R-Car S4-8 SoC.
···
 
 config ARCH_R8A779A0
 	bool "ARM64 Platform support for R-Car V3U"
-	select ARCH_RCAR_GEN3
+	select ARCH_RCAR_GEN4
 	select SYSC_R8A779A0
 	help
 	  This enables support for the Renesas R-Car V3U SoC.
 
 config ARCH_R8A779G0
 	bool "ARM64 Platform support for R-Car V4H"
-	select ARCH_RCAR_GEN3
+	select ARCH_RCAR_GEN4
 	select SYSC_R8A779G0
 	help
 	  This enables support for the Renesas R-Car V4H SoC.
+
+config ARCH_R8A779H0
+	bool "ARM64 Platform support for R-Car V4M"
+	select ARCH_RCAR_GEN4
+	select SYSC_R8A779H0
+	help
+	  This enables support for the Renesas R-Car V4M SoC.
 
 config ARCH_R8A774C0
 	bool "ARM64 Platform support for RZ/G2E"
+1
drivers/soc/renesas/rcar-rst.c
···
 	{ .compatible = "renesas,r8a779a0-rst", .data = &rcar_rst_v3u },
 	{ .compatible = "renesas,r8a779f0-rst", .data = &rcar_rst_gen4 },
 	{ .compatible = "renesas,r8a779g0-rst", .data = &rcar_rst_gen4 },
+	{ .compatible = "renesas,r8a779h0-rst", .data = &rcar_rst_gen4 },
 	{ /* sentinel */ }
 };
 
+8
drivers/soc/renesas/renesas-soc.c
···
 	.id	= 0x5c,
 };
 
+static const struct renesas_soc soc_rcar_v4m __initconst __maybe_unused = {
+	.family	= &fam_rcar_gen4,
+	.id	= 0x5d,
+};
+
 static const struct renesas_soc soc_shmobile_ag5 __initconst __maybe_unused = {
 	.family	= &fam_shmobile,
 	.id	= 0x37,
···
 #endif
 #ifdef CONFIG_ARCH_R8A779G0
 	{ .compatible = "renesas,r8a779g0", .data = &soc_rcar_v4h },
+#endif
+#ifdef CONFIG_ARCH_R8A779H0
+	{ .compatible = "renesas,r8a779h0", .data = &soc_rcar_v4m },
 #endif
 #ifdef CONFIG_ARCH_R9A07G043
 #ifdef CONFIG_RISCV
+1
drivers/soc/samsung/Kconfig
···
 	depends on ARCH_EXYNOS || ((ARM || ARM64) && COMPILE_TEST)
 	select EXYNOS_PMU_ARM_DRIVERS if ARM && ARCH_EXYNOS
 	select MFD_CORE
+	select REGMAP_MMIO
 
 # There is no need to enable these drivers for ARMv8
 config EXYNOS_PMU_ARM_DRIVERS
+233 -2
drivers/soc/samsung/exynos-pmu.c
···
 //
 // Exynos - CPU PMU(Power Management Unit) support
 
+#include <linux/arm-smccc.h>
 #include <linux/of.h>
 #include <linux/of_address.h>
 #include <linux/mfd/core.h>
···
 #include <linux/of_platform.h>
 #include <linux/platform_device.h>
 #include <linux/delay.h>
+#include <linux/regmap.h>
 
 #include <linux/soc/samsung/exynos-regs-pmu.h>
 #include <linux/soc/samsung/exynos-pmu.h>
 
 #include "exynos-pmu.h"
 
+#define PMUALIVE_MASK			GENMASK(13, 0)
+#define TENSOR_SET_BITS			(BIT(15) | BIT(14))
+#define TENSOR_CLR_BITS			BIT(15)
+#define TENSOR_SMC_PMU_SEC_REG		0x82000504
+#define TENSOR_PMUREG_READ		0
+#define TENSOR_PMUREG_WRITE		1
+#define TENSOR_PMUREG_RMW		2
+
 struct exynos_pmu_context {
 	struct device *dev;
 	const struct exynos_pmu_data *pmu_data;
+	struct regmap *pmureg;
 };
 
 void __iomem *pmu_base_addr;
 static struct exynos_pmu_context *pmu_context;
+/* forward declaration */
+static struct platform_driver exynos_pmu_driver;
+
+/*
+ * Tensor SoCs are configured so that PMU_ALIVE registers can only be written
+ * from EL3, but are still read accessible. As Linux needs to write some of
+ * these registers, the following functions are provided and exposed via
+ * regmap.
+ *
+ * Note: This SMC interface is known to be implemented on gs101 and derivative
+ * SoCs.
+ */
+
+/* Write to a protected PMU register. */
+static int tensor_sec_reg_write(void *context, unsigned int reg,
+				unsigned int val)
+{
+	struct arm_smccc_res res;
+	unsigned long pmu_base = (unsigned long)context;
+
+	arm_smccc_smc(TENSOR_SMC_PMU_SEC_REG, pmu_base + reg,
+		      TENSOR_PMUREG_WRITE, val, 0, 0, 0, 0, &res);
+
+	/* returns -EINVAL if access isn't allowed or 0 */
+	if (res.a0)
+		pr_warn("%s(): SMC failed: %d\n", __func__, (int)res.a0);
+
+	return (int)res.a0;
+}
+
+/* Read/Modify/Write a protected PMU register. */
+static int tensor_sec_reg_rmw(void *context, unsigned int reg,
+			      unsigned int mask, unsigned int val)
+{
+	struct arm_smccc_res res;
+	unsigned long pmu_base = (unsigned long)context;
+
+	arm_smccc_smc(TENSOR_SMC_PMU_SEC_REG, pmu_base + reg,
+		      TENSOR_PMUREG_RMW, mask, val, 0, 0, 0, &res);
+
+	/* returns -EINVAL if access isn't allowed or 0 */
+	if (res.a0)
+		pr_warn("%s(): SMC failed: %d\n", __func__, (int)res.a0);
+
+	return (int)res.a0;
+}
+
+/*
+ * Read a protected PMU register. All PMU registers can be read by Linux.
+ * Note: The SMC read register is not used, as only registers that can be
+ * written are readable via SMC.
+ */
+static int tensor_sec_reg_read(void *context, unsigned int reg,
+			       unsigned int *val)
+{
+	*val = pmu_raw_readl(reg);
+	return 0;
+}
+
+/*
+ * For SoCs that have set/clear bit hardware this function can be used when
+ * the PMU register will be accessed by multiple masters.
+ *
+ * For example, to set bits 13:8 in PMU reg offset 0x3e80
+ * tensor_set_bits_atomic(ctx, 0x3e80, 0x3f00, 0x3f00);
+ *
+ * Set bit 8, and clear bits 13:9 PMU reg offset 0x3e80
+ * tensor_set_bits_atomic(0x3e80, 0x100, 0x3f00);
+ */
+static int tensor_set_bits_atomic(void *ctx, unsigned int offset, u32 val,
+				  u32 mask)
+{
+	int ret;
+	unsigned int i;
+
+	for (i = 0; i < 32; i++) {
+		if (!(mask & BIT(i)))
+			continue;
+
+		offset &= ~TENSOR_SET_BITS;
+
+		if (val & BIT(i))
+			offset |= TENSOR_SET_BITS;
+		else
+			offset |= TENSOR_CLR_BITS;
+
+		ret = tensor_sec_reg_write(ctx, offset, i);
+		if (ret)
+			return ret;
+	}
+	return ret;
+}
+
+static int tensor_sec_update_bits(void *ctx, unsigned int reg,
+				  unsigned int mask, unsigned int val)
+{
+	/*
+	 * Use atomic operations for PMU_ALIVE registers (offset 0~0x3FFF)
+	 * as the target registers can be accessed by multiple masters.
+	 */
+	if (reg > PMUALIVE_MASK)
+		return tensor_sec_reg_rmw(ctx, reg, mask, val);
+
+	return tensor_set_bits_atomic(ctx, reg, val, mask);
+}
 
 void pmu_raw_writel(u32 val, u32 offset)
 {
···
 #define exynos_pmu_data_arm_ptr(data)	NULL
 #endif
 
+static const struct regmap_config regmap_smccfg = {
+	.name = "pmu_regs",
+	.reg_bits = 32,
+	.reg_stride = 4,
+	.val_bits = 32,
+	.fast_io = true,
+	.use_single_read = true,
+	.use_single_write = true,
+	.reg_read = tensor_sec_reg_read,
+	.reg_write = tensor_sec_reg_write,
+	.reg_update_bits = tensor_sec_update_bits,
+};
+
+static const struct regmap_config regmap_mmiocfg = {
+	.name = "pmu_regs",
+	.reg_bits = 32,
+	.reg_stride = 4,
+	.val_bits = 32,
+	.fast_io = true,
+	.use_single_read = true,
+	.use_single_write = true,
+};
+
+static const struct exynos_pmu_data gs101_pmu_data = {
+	.pmu_secure = true
+};
+
 /*
  * PMU platform driver and devicetree bindings.
  */
 static const struct of_device_id exynos_pmu_of_device_ids[] = {
 	{
+		.compatible = "google,gs101-pmu",
+		.data = &gs101_pmu_data,
+	}, {
 		.compatible = "samsung,exynos3250-pmu",
 		.data = exynos_pmu_data_arm_ptr(exynos3250_pmu_data),
 	}, {
···
 	{ .name = "exynos-clkout", },
 };
 
+/**
+ * exynos_get_pmu_regmap() - Obtain pmureg regmap
+ *
+ * Find the pmureg regmap previously configured in probe() and return regmap
+ * pointer.
+ *
+ * Return: A pointer to regmap if found or ERR_PTR error value.
+ */
 struct regmap *exynos_get_pmu_regmap(void)
 {
 	struct device_node *np = of_find_matching_node(NULL,
 			exynos_pmu_of_device_ids);
 	if (np)
-		return syscon_node_to_regmap(np);
+		return exynos_get_pmu_regmap_by_phandle(np, NULL);
 	return ERR_PTR(-ENODEV);
 }
 EXPORT_SYMBOL_GPL(exynos_get_pmu_regmap);
 
+/**
+ * exynos_get_pmu_regmap_by_phandle() - Obtain pmureg regmap via phandle
+ * @np: Device node holding PMU phandle property
+ * @propname: Name of property holding phandle value
+ *
+ * Find the pmureg regmap previously configured in probe() and return regmap
+ * pointer.
+ *
+ * Return: A pointer to regmap if found or ERR_PTR error value.
+ */
+struct regmap *exynos_get_pmu_regmap_by_phandle(struct device_node *np,
+						const char *propname)
+{
+	struct exynos_pmu_context *ctx;
+	struct device_node *pmu_np;
+	struct device *dev;
+
+	if (propname)
+		pmu_np = of_parse_phandle(np, propname, 0);
+	else
+		pmu_np = np;
+
+	if (!pmu_np)
+		return ERR_PTR(-ENODEV);
+
+	/*
+	 * Determine if exynos-pmu device has probed and therefore regmap
+	 * has been created and can be returned to the caller. Otherwise we
+	 * return -EPROBE_DEFER.
+	 */
+	dev = driver_find_device_by_of_node(&exynos_pmu_driver.driver,
+					    (void *)pmu_np);
+
+	if (propname)
+		of_node_put(pmu_np);
+
+	if (!dev)
+		return ERR_PTR(-EPROBE_DEFER);
+
+	ctx = dev_get_drvdata(dev);
+
+	return ctx->pmureg;
+}
+EXPORT_SYMBOL_GPL(exynos_get_pmu_regmap_by_phandle);
+
 static int exynos_pmu_probe(struct platform_device *pdev)
 {
 	struct device *dev = &pdev->dev;
+	struct regmap_config pmu_regmcfg;
+	struct regmap *regmap;
+	struct resource *res;
 	int ret;
 
 	pmu_base_addr = devm_platform_ioremap_resource(pdev, 0);
···
 			   GFP_KERNEL);
 	if (!pmu_context)
 		return -ENOMEM;
-	pmu_context->dev = dev;
+
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	if (!res)
+		return -ENODEV;
+
 	pmu_context->pmu_data = of_device_get_match_data(dev);
+
+	/* For SoCs that secure PMU register writes use custom regmap */
+	if (pmu_context->pmu_data && pmu_context->pmu_data->pmu_secure) {
+		pmu_regmcfg = regmap_smccfg;
+		pmu_regmcfg.max_register = resource_size(res) -
+					   pmu_regmcfg.reg_stride;
+		/* Need physical address for SMC call */
+		regmap = devm_regmap_init(dev, NULL,
+					  (void *)(uintptr_t)res->start,
+					  &pmu_regmcfg);
+	} else {
+		/* All other SoCs use a MMIO regmap */
+		pmu_regmcfg = regmap_mmiocfg;
+		pmu_regmcfg.max_register = resource_size(res) -
+					   pmu_regmcfg.reg_stride;
+		regmap = devm_regmap_init_mmio(dev, pmu_base_addr,
+					       &pmu_regmcfg);
+	}
+
+	if (IS_ERR(regmap))
+		return dev_err_probe(&pdev->dev, PTR_ERR(regmap),
+				     "regmap init failed\n");
+
+	pmu_context->pmureg = regmap;
+	pmu_context->dev = dev;
 
 	if (pmu_context->pmu_data && pmu_context->pmu_data->pmu_init)
 		pmu_context->pmu_data->pmu_init();
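tensor_set_bits_atomic() above issues one SMC write per bit: bits 15:14 of the register offset select the hardware set/clear operation, and the written value is the bit index. A userspace model of that encoding, collecting the (offset, index) pairs instead of issuing SMCs (the `encode_bits` and `pmu_write` names are illustrative, not part of the driver):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define TENSOR_SET_BITS (3u << 14)	/* BIT(15) | BIT(14) */
#define TENSOR_CLR_BITS (1u << 15)	/* BIT(15) */

struct pmu_write { uint32_t offset; uint32_t index; };

/*
 * For each bit selected by 'mask', emit one write: the offset encodes
 * set vs. clear in bits 15:14, and the value written is the bit index.
 */
static size_t encode_bits(uint32_t offset, uint32_t val, uint32_t mask,
			  struct pmu_write *out)
{
	size_t n = 0;

	for (unsigned int i = 0; i < 32; i++) {
		if (!(mask & (1u << i)))
			continue;

		uint32_t enc = offset & ~TENSOR_SET_BITS;

		enc |= (val & (1u << i)) ? TENSOR_SET_BITS : TENSOR_CLR_BITS;
		out[n].offset = enc;
		out[n].index = i;
		n++;
	}
	return n;
}
```

Using the comment's second example (set bit 8, clear bits 13:9 of offset 0x3e80), this yields six writes: index 8 to offset 0xfe80, then indices 9 through 13 to offset 0xbe80.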
+1
drivers/soc/samsung/exynos-pmu.h
···
 struct exynos_pmu_data {
 	const struct exynos_pmu_conf *pmu_config;
 	const struct exynos_pmu_conf *pmu_config_extra;
+	bool pmu_secure;
 
 	void (*pmu_init)(void);
 	void (*powerdown_conf)(enum sys_powerdown);
+5
drivers/soc/tegra/Kconfig
···
 	help
 	  Enable support for the NVIDIA Tegra234 SoC.
 
+config ARCH_TEGRA_241_SOC
+	bool "NVIDIA Tegra241 SoC"
+	help
+	  Enable support for the NVIDIA Tegra241 SoC.
+
 endif
 endif
 
+89 -29
drivers/soc/tegra/fuse/fuse-tegra.c
···
  * Copyright (c) 2013-2023, NVIDIA CORPORATION. All rights reserved.
  */
 
+#include <linux/acpi.h>
 #include <linux/clk.h>
 #include <linux/device.h>
 #include <linux/kobject.h>
 #include <linux/init.h>
 #include <linux/io.h>
+#include <linux/mod_devicetable.h>
 #include <linux/nvmem-consumer.h>
 #include <linux/nvmem-provider.h>
 #include <linux/of.h>
···
 	fuse->clk = NULL;
 }
 
+static void tegra_fuse_print_sku_info(struct tegra_sku_info *tegra_sku_info)
+{
+	pr_info("Tegra Revision: %s SKU: %d CPU Process: %d SoC Process: %d\n",
+		tegra_revision_name[tegra_sku_info->revision],
+		tegra_sku_info->sku_id, tegra_sku_info->cpu_process_id,
+		tegra_sku_info->soc_process_id);
+	pr_debug("Tegra CPU Speedo ID %d, SoC Speedo ID %d\n",
+		 tegra_sku_info->cpu_speedo_id, tegra_sku_info->soc_speedo_id);
+}
+
+static int tegra_fuse_add_lookups(struct tegra_fuse *fuse)
+{
+	fuse->lookups = kmemdup_array(fuse->soc->lookups, sizeof(*fuse->lookups),
+				      fuse->soc->num_lookups, GFP_KERNEL);
+	if (!fuse->lookups)
+		return -ENOMEM;
+
+	nvmem_add_cell_lookups(fuse->lookups, fuse->soc->num_lookups);
+
+	return 0;
+}
+
 static int tegra_fuse_probe(struct platform_device *pdev)
 {
 	void __iomem *base = fuse->base;
···
 		return PTR_ERR(fuse->base);
 	fuse->phys = res->start;
 
-	fuse->clk = devm_clk_get(&pdev->dev, "fuse");
-	if (IS_ERR(fuse->clk)) {
-		if (PTR_ERR(fuse->clk) != -EPROBE_DEFER)
-			dev_err(&pdev->dev, "failed to get FUSE clock: %ld",
-				PTR_ERR(fuse->clk));
-
-		return PTR_ERR(fuse->clk);
+	/* Initialize the soc data and lookups if using ACPI boot. */
+	if (is_acpi_node(dev_fwnode(&pdev->dev)) && !fuse->soc) {
+		u8 chip;
+
+		tegra_acpi_init_apbmisc();
+
+		chip = tegra_get_chip_id();
+		switch (chip) {
+#if defined(CONFIG_ARCH_TEGRA_194_SOC)
+		case TEGRA194:
+			fuse->soc = &tegra194_fuse_soc;
+			break;
+#endif
+#if defined(CONFIG_ARCH_TEGRA_234_SOC)
+		case TEGRA234:
+			fuse->soc = &tegra234_fuse_soc;
+			break;
+#endif
+#if defined(CONFIG_ARCH_TEGRA_241_SOC)
+		case TEGRA241:
+			fuse->soc = &tegra241_fuse_soc;
+			break;
+#endif
+		default:
+			return dev_err_probe(&pdev->dev, -EINVAL, "Unsupported SoC: %02x\n", chip);
+		}
+
+		fuse->soc->init(fuse);
+		tegra_fuse_print_sku_info(&tegra_sku_info);
+		tegra_soc_device_register();
+
+		err = tegra_fuse_add_lookups(fuse);
+		if (err)
+			return dev_err_probe(&pdev->dev, err, "failed to add FUSE lookups\n");
 	}
+
+	fuse->clk = devm_clk_get_optional(&pdev->dev, "fuse");
+	if (IS_ERR(fuse->clk))
+		return dev_err_probe(&pdev->dev, PTR_ERR(fuse->clk), "failed to get FUSE clock\n");
 
 	platform_set_drvdata(pdev, fuse);
 	fuse->dev = &pdev->dev;
···
 	}
 
 	fuse->rst = devm_reset_control_get_optional(&pdev->dev, "fuse");
-	if (IS_ERR(fuse->rst)) {
-		err = PTR_ERR(fuse->rst);
-		dev_err(&pdev->dev, "failed to get FUSE reset: %pe\n",
-			fuse->rst);
-		return err;
-	}
+	if (IS_ERR(fuse->rst))
+		return dev_err_probe(&pdev->dev, PTR_ERR(fuse->rst), "failed to get FUSE reset\n");
 
 	/*
 	 * FUSE clock is enabled at a boot time, hence this resume/suspend
···
 	SET_SYSTEM_SLEEP_PM_OPS(tegra_fuse_suspend, tegra_fuse_resume)
 };
 
+static const struct acpi_device_id tegra_fuse_acpi_match[] = {
+	{ "NVDA200F" },
+	{ /* sentinel */ }
+};
+MODULE_DEVICE_TABLE(acpi, tegra_fuse_acpi_match);
+
 static struct platform_driver tegra_fuse_driver = {
 	.driver = {
 		.name = "tegra-fuse",
 		.of_match_table = tegra_fuse_match,
+		.acpi_match_table = tegra_fuse_acpi_match,
 		.pm = &tegra_fuse_pm,
 		.suppress_bind_attrs = true,
 	},
···
 
 int tegra_fuse_readl(unsigned long offset, u32 *value)
 {
-	if (!fuse->read || !fuse->clk)
+	if (!fuse->dev)
+		return -EPROBE_DEFER;
+
+	/*
+	 * Wait for fuse->clk to be initialized if device-tree boot is used.
+	 */
+	if (is_of_node(dev_fwnode(fuse->dev)) && !fuse->clk)
+		return -EPROBE_DEFER;
+
+	if (!fuse->read)
 		return -EPROBE_DEFER;
 
 	if (IS_ERR(fuse->clk))
···
 };
 
 #if IS_ENABLED(CONFIG_ARCH_TEGRA_194_SOC) || \
-    IS_ENABLED(CONFIG_ARCH_TEGRA_234_SOC)
+    IS_ENABLED(CONFIG_ARCH_TEGRA_234_SOC) || \
+    IS_ENABLED(CONFIG_ARCH_TEGRA_241_SOC)
 static ssize_t platform_show(struct device *dev, struct device_attribute *attr,
 			     char *buf)
 {
···
 };
 #endif
 
-struct device * __init tegra_soc_device_register(void)
+struct device *tegra_soc_device_register(void)
 {
 	struct soc_device_attribute *attr;
 	struct soc_device *dev;
···
 	const struct of_device_id *match;
 	struct device_node *np;
 	struct resource regs;
+	int err;
 
 	tegra_init_apbmisc();
 
···
 
 	fuse->soc->init(fuse);
 
-	pr_info("Tegra Revision: %s SKU: %d CPU Process: %d SoC Process: %d\n",
-		tegra_revision_name[tegra_sku_info.revision],
-		tegra_sku_info.sku_id, tegra_sku_info.cpu_process_id,
-		tegra_sku_info.soc_process_id);
-	pr_debug("Tegra CPU Speedo ID %d, SoC Speedo ID %d\n",
-		 tegra_sku_info.cpu_speedo_id, tegra_sku_info.soc_speedo_id);
+	tegra_fuse_print_sku_info(&tegra_sku_info);
 
-	if (fuse->soc->lookups) {
-		size_t size = sizeof(*fuse->lookups) * fuse->soc->num_lookups;
+	err = tegra_fuse_add_lookups(fuse);
+	if (err)
+		pr_err("failed to add FUSE lookups\n");
 
-		fuse->lookups = kmemdup(fuse->soc->lookups, size, GFP_KERNEL);
-		if (fuse->lookups)
-			nvmem_add_cell_lookups(fuse->lookups, fuse->soc->num_lookups);
-	}
-
-	return 0;
+	return err;
 }
 early_initcall(tegra_init_fuse);
 
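The refactor above replaces the open-coded `kmemdup(src, sizeof(*e) * n, ...)` with `kmemdup_array(src, sizeof(*e), n, ...)`, whose advantage is that the element-size multiplication is checked for overflow before any copy happens. A userspace analogue of that guard (the `memdup_array` helper is illustrative, not the kernel function):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/*
 * Duplicate an array of 'n' elements of 'elem' bytes each, refusing
 * the copy if elem * n would overflow size_t, mirroring the intent
 * of the kernel's kmemdup_array() over plain kmemdup().
 */
static void *memdup_array(const void *src, size_t elem, size_t n)
{
	void *p;

	if (elem && n > (size_t)-1 / elem)
		return NULL;	/* size multiplication would overflow */

	p = malloc(elem * n);
	if (p)
		memcpy(p, src, elem * n);
	return p;
}
```

With the open-coded multiplication, an attacker-influenced count could wrap to a small allocation followed by an oversized copy; the checked helper fails cleanly instead.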
+22 -1
drivers/soc/tegra/fuse/fuse-tegra30.c
···
     defined(CONFIG_ARCH_TEGRA_210_SOC) || \
     defined(CONFIG_ARCH_TEGRA_186_SOC) || \
     defined(CONFIG_ARCH_TEGRA_194_SOC) || \
-    defined(CONFIG_ARCH_TEGRA_234_SOC)
+    defined(CONFIG_ARCH_TEGRA_234_SOC) || \
+    defined(CONFIG_ARCH_TEGRA_241_SOC)
 static u32 tegra30_fuse_read_early(struct tegra_fuse *fuse, unsigned int offset)
 {
 	if (WARN_ON(!fuse->base))
···
 	.num_keepouts = ARRAY_SIZE(tegra234_fuse_keepouts),
 	.soc_attr_group = &tegra194_soc_attr_group,
 	.clk_suspend_on = false,
+};
+#endif
+
+#if defined(CONFIG_ARCH_TEGRA_241_SOC)
+static const struct tegra_fuse_info tegra241_fuse_info = {
+	.read = tegra30_fuse_read,
+	.size = 0x16008,
+	.spare = 0xcf0,
+};
+
+static const struct nvmem_keepout tegra241_fuse_keepouts[] = {
+	{ .start = 0xc, .end = 0x1600c }
+};
+
+const struct tegra_fuse_soc tegra241_fuse_soc = {
+	.init = tegra30_fuse_init,
+	.info = &tegra241_fuse_info,
+	.keepouts = tegra241_fuse_keepouts,
+	.num_keepouts = ARRAY_SIZE(tegra241_fuse_keepouts),
+	.soc_attr_group = &tegra194_soc_attr_group,
 };
 #endif
+7 -1
drivers/soc/tegra/fuse/fuse.h
···
 
 void tegra_init_revision(void);
 void tegra_init_apbmisc(void);
+void tegra_acpi_init_apbmisc(void);
 
 u32 __init tegra_fuse_read_spare(unsigned int spare);
 u32 __init tegra_fuse_read_early(unsigned int offset);
···
 #endif
 
 #if IS_ENABLED(CONFIG_ARCH_TEGRA_194_SOC) || \
-    IS_ENABLED(CONFIG_ARCH_TEGRA_234_SOC)
+    IS_ENABLED(CONFIG_ARCH_TEGRA_234_SOC) || \
+    IS_ENABLED(CONFIG_ARCH_TEGRA_241_SOC)
 extern const struct attribute_group tegra194_soc_attr_group;
 #endif
 
···
 
 #ifdef CONFIG_ARCH_TEGRA_234_SOC
 extern const struct tegra_fuse_soc tegra234_fuse_soc;
+#endif
+
+#ifdef CONFIG_ARCH_TEGRA_241_SOC
+extern const struct tegra_fuse_soc tegra241_fuse_soc;
 #endif
 
 #endif
+94 -16
drivers/soc/tegra/fuse/tegra-apbmisc.c
···
  * Copyright (c) 2014-2023, NVIDIA CORPORATION. All rights reserved.
  */
 
+#include <linux/acpi.h>
 #include <linux/export.h>
 #include <linux/io.h>
 #include <linux/kernel.h>
+#include <linux/mod_devicetable.h>
 #include <linux/of.h>
 #include <linux/of_address.h>
 
···
 	switch (tegra_get_chip_id()) {
 	case TEGRA194:
 	case TEGRA234:
+	case TEGRA241:
 	case TEGRA264:
 		if (tegra_get_platform() == 0)
 			return true;
···
 	tegra_sku_info.platform = tegra_get_platform();
 }
 
-void __init tegra_init_apbmisc(void)
+static void tegra_init_apbmisc_resources(struct resource *apbmisc,
+					 struct resource *straps)
 {
 	void __iomem *strapping_base;
+
+	apbmisc_base = ioremap(apbmisc->start, resource_size(apbmisc));
+	if (apbmisc_base)
+		chipid = readl_relaxed(apbmisc_base + 4);
+	else
+		pr_err("failed to map APBMISC registers\n");
+
+	strapping_base = ioremap(straps->start, resource_size(straps));
+	if (strapping_base) {
+		strapping = readl_relaxed(strapping_base);
+		iounmap(strapping_base);
+	} else {
+		pr_err("failed to map strapping options registers\n");
+	}
+}
+
+/**
+ * tegra_init_apbmisc - Initializes Tegra APBMISC and Strapping registers.
+ *
+ * This is called during early init as some of the old 32-bit ARM code needs
+ * information from the APBMISC registers very early during boot.
+ */
+void __init tegra_init_apbmisc(void)
+{
 	struct resource apbmisc, straps;
 	struct device_node *np;
 
···
 		}
 	}
 
-	apbmisc_base = ioremap(apbmisc.start, resource_size(&apbmisc));
-	if (!apbmisc_base) {
-		pr_err("failed to map APBMISC registers\n");
-	} else {
-		chipid = readl_relaxed(apbmisc_base + 4);
-	}
-
-	strapping_base = ioremap(straps.start, resource_size(&straps));
-	if (!strapping_base) {
-		pr_err("failed to map strapping options registers\n");
-	} else {
-		strapping = readl_relaxed(strapping_base);
-		iounmap(strapping_base);
-	}
-
+	tegra_init_apbmisc_resources(&apbmisc, &straps);
 	long_ram_code = of_property_read_bool(np, "nvidia,long-ram-code");
 
 put:
 	of_node_put(np);
 }
+
+#ifdef CONFIG_ACPI
+static const struct acpi_device_id apbmisc_acpi_match[] = {
+	{ "NVDA2010" },
+	{ /* sentinel */ }
+};
+
+void tegra_acpi_init_apbmisc(void)
+{
+	struct resource *resources[2] = { NULL };
+	struct resource_entry *rentry;
+	struct acpi_device *adev = NULL;
+	struct list_head resource_list;
+	int rcount = 0;
+	int ret;
+
+	adev = acpi_dev_get_first_match_dev(apbmisc_acpi_match[0].id, NULL, -1);
+	if (!adev)
+		return;
+
+	INIT_LIST_HEAD(&resource_list);
+
+	ret = acpi_dev_get_memory_resources(adev, &resource_list);
+	if (ret < 0) {
+		pr_err("failed to get APBMISC memory resources");
+		goto out_put_acpi_dev;
+	}
+
+	/*
+	 * Get required memory resources.
+	 *
+	 * resources[0]: apbmisc.
+	 * resources[1]: straps.
+	 */
+	resource_list_for_each_entry(rentry, &resource_list) {
+		if (rcount >= ARRAY_SIZE(resources))
+			break;
+
+		resources[rcount++] = rentry->res;
+	}
+
+	if (!resources[0]) {
+		pr_err("failed to get APBMISC registers\n");
+		goto out_free_resource_list;
+	}
+
+	if (!resources[1]) {
+		pr_err("failed to get strapping options registers\n");
+		goto out_free_resource_list;
+	}
+
+	tegra_init_apbmisc_resources(resources[0], resources[1]);
+
+out_free_resource_list:
+	acpi_dev_free_resource_list(&resource_list);
+
+out_put_acpi_dev:
+	acpi_dev_put(adev);
+}
+#else
+void tegra_acpi_init_apbmisc(void)
+{
+}
+#endif
+39 -48
drivers/soc/tegra/pmc.c
···
  * drivers/soc/tegra/pmc.c
  *
  * Copyright (c) 2010 Google, Inc
- * Copyright (c) 2018-2023, NVIDIA CORPORATION. All rights reserved.
+ * Copyright (c) 2018-2024, NVIDIA CORPORATION. All rights reserved.
  *
  * Author:
  *	Colin Cross <ccross@google.com>
···
 	bool has_blink_output;
 	bool has_usb_sleepwalk;
 	bool supports_core_domain;
+	bool has_single_mmio_aperture;
 };
 
 /**
···
 	return TEGRA_IO_PAD_VOLTAGE_3V3;
 }
 
-/**
- * tegra_io_rail_power_on() - enable power to I/O rail
- * @id: Tegra I/O pad ID for which to enable power
- *
- * See also: tegra_io_pad_power_enable()
- */
-int tegra_io_rail_power_on(unsigned int id)
-{
-	return tegra_io_pad_power_enable(id);
-}
-EXPORT_SYMBOL(tegra_io_rail_power_on);
-
-/**
- * tegra_io_rail_power_off() - disable power to I/O rail
- * @id: Tegra I/O pad ID for which to disable power
- *
- * See also: tegra_io_pad_power_disable()
- */
-int tegra_io_rail_power_off(unsigned int id)
-{
-	return tegra_io_pad_power_disable(id);
-}
-EXPORT_SYMBOL(tegra_io_rail_power_off);
-
 #ifdef CONFIG_PM_SLEEP
 enum tegra_suspend_mode tegra_pmc_get_suspend_mode(void)
 {
···
 	if (IS_ERR(base))
 		return PTR_ERR(base);
 
-	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "wake");
-	if (res) {
+	if (pmc->soc->has_single_mmio_aperture) {
+		pmc->wake = base;
+		pmc->aotag = base;
+		pmc->scratch = base;
+	} else {
+		res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
+						   "wake");
 		pmc->wake = devm_ioremap_resource(&pdev->dev, res);
 		if (IS_ERR(pmc->wake))
 			return PTR_ERR(pmc->wake);
-	} else {
-		pmc->wake = base;
-	}
 
-	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "aotag");
-	if (res) {
+		res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
+						   "aotag");
 		pmc->aotag = devm_ioremap_resource(&pdev->dev, res);
 		if (IS_ERR(pmc->aotag))
 			return PTR_ERR(pmc->aotag);
-	} else {
-		pmc->aotag = base;
-	}
 
-	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "scratch");
-	if (res) {
-		pmc->scratch = devm_ioremap_resource(&pdev->dev, res);
-		if (IS_ERR(pmc->scratch))
-			return PTR_ERR(pmc->scratch);
-	} else {
-		pmc->scratch = base;
+		/* "scratch" is an optional aperture */
+		res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
+						   "scratch");
+		if (res) {
+			pmc->scratch = devm_ioremap_resource(&pdev->dev, res);
+			if (IS_ERR(pmc->scratch))
+				return PTR_ERR(pmc->scratch);
+		} else {
+			pmc->scratch = NULL;
+		}
 	}
 
 	pmc->clk = devm_clk_get_optional(&pdev->dev, "pclk");
···
 	 * PMC should be last resort for restarting since it soft-resets
 	 * CPU without resetting everything else.
 	 */
-	err = devm_register_reboot_notifier(&pdev->dev,
-					    &tegra_pmc_reboot_notifier);
-	if (err) {
-		dev_err(&pdev->dev, "unable to register reboot notifier, %d\n",
-			err);
-		return err;
+	if (pmc->scratch) {
+		err = devm_register_reboot_notifier(&pdev->dev,
+						    &tegra_pmc_reboot_notifier);
+		if (err) {
+			dev_err(&pdev->dev,
+				"unable to register reboot notifier, %d\n",
+				err);
+			return err;
+		}
 	}
 
 	err = devm_register_sys_off_handler(&pdev->dev,
···
 	.num_pmc_clks = 0,
 	.has_blink_output = true,
 	.has_usb_sleepwalk = true,
+	.has_single_mmio_aperture = true,
 };
 
 static const char * const tegra30_powergates[] = {
···
 	.num_pmc_clks = ARRAY_SIZE(tegra_pmc_clks_data),
 	.has_blink_output = true,
 	.has_usb_sleepwalk = true,
+	.has_single_mmio_aperture = true,
 };
 
 static const char * const tegra114_powergates[] = {
···
 	.num_pmc_clks = ARRAY_SIZE(tegra_pmc_clks_data),
 	.has_blink_output = true,
 	.has_usb_sleepwalk = true,
+	.has_single_mmio_aperture = true,
 };
 
 static const char * const tegra124_powergates[] = {
···
 	.num_pmc_clks = ARRAY_SIZE(tegra_pmc_clks_data),
 	.has_blink_output = true,
 	.has_usb_sleepwalk = true,
+	.has_single_mmio_aperture = true,
 };
 
 static const char * const tegra210_powergates[] = {
···
 	.num_pmc_clks = ARRAY_SIZE(tegra_pmc_clks_data),
 	.has_blink_output = true,
 	.has_usb_sleepwalk = true,
+	.has_single_mmio_aperture = true,
 };
 
 static const struct tegra_io_pad_soc tegra186_io_pads[] = {
···
 	.num_pmc_clks = 0,
 	.has_blink_output = false,
 	.has_usb_sleepwalk = false,
+	.has_single_mmio_aperture = false,
 };
 
 static const struct tegra_io_pad_soc tegra194_io_pads[] = {
···
 	.num_pmc_clks = 0,
 	.has_blink_output = false,
 	.has_usb_sleepwalk = false,
+	.has_single_mmio_aperture = false,
 };
 
 static const struct tegra_io_pad_soc tegra234_io_pads[] = {
···
 };
 
 static const struct tegra_wake_event tegra234_wake_events[] = {
+	TEGRA_WAKE_GPIO("sd-wake", 8, 0, TEGRA234_MAIN_GPIO(G, 7)),
 	TEGRA_WAKE_IRQ("pmu", 24, 209),
 	TEGRA_WAKE_GPIO("power", 29, 1, TEGRA234_AON_GPIO(EE, 4)),
 	TEGRA_WAKE_GPIO("mgbe", 56, 0, TEGRA234_MAIN_GPIO(Y, 3)),
···
 	.pmc_clks_data = NULL,
 	.num_pmc_clks = 0,
 	.has_blink_output = false,
+	.has_single_mmio_aperture = false,
 };
 
 static const struct of_device_id tegra_pmc_match[] = {
+1 -1
drivers/staging/fieldbus/Documentation/devicetree/bindings/fieldbus/arcx,anybus-controller.txt
···
 -----------------
 
 This example places the bridge on top of the i.MX WEIM parallel bus, see:
-Documentation/devicetree/bindings/bus/imx-weim.txt
+Documentation/devicetree/bindings/memory-controllers/fsl/fsl,imx-weim.yaml
 
 &weim {
 	controller@0,0 {
+1 -1
drivers/tee/tee_core.c
···
 	return add_uevent_var(env, "MODALIAS=tee:%pUb", dev_id);
 }
 
-struct bus_type tee_bus_type = {
+const struct bus_type tee_bus_type = {
 	.name = "tee",
 	.match = tee_client_device_match,
 	.uevent = tee_client_device_uevent,
-1
drivers/watchdog/Kconfig
···
 	tristate "S3C6410/S5Pv210/Exynos Watchdog"
 	depends on ARCH_S3C64XX || ARCH_S5PV210 || ARCH_EXYNOS || COMPILE_TEST
 	select WATCHDOG_CORE
-	select MFD_SYSCON if ARCH_EXYNOS
 	help
 	  Watchdog timer block in the Samsung S3C64xx, S5Pv210 and Exynos
 	  SoCs. This will reboot the system when the timer expires with
+4 -4
drivers/watchdog/s3c2410_wdt.c
···
 #include <linux/slab.h>
 #include <linux/err.h>
 #include <linux/of.h>
-#include <linux/mfd/syscon.h>
 #include <linux/regmap.h>
 #include <linux/delay.h>
+#include <linux/soc/samsung/exynos-pmu.h>
 
 #define S3C2410_WTCON		0x00
 #define S3C2410_WTDAT		0x04
···
 		return ret;
 
 	if (wdt->drv_data->quirks & QUIRKS_HAVE_PMUREG) {
-		wdt->pmureg = syscon_regmap_lookup_by_phandle(dev->of_node,
-						"samsung,syscon-phandle");
+		wdt->pmureg = exynos_get_pmu_regmap_by_phandle(dev->of_node,
+						"samsung,syscon-phandle");
 		if (IS_ERR(wdt->pmureg))
 			return dev_err_probe(dev, PTR_ERR(wdt->pmureg),
-					     "syscon regmap lookup failed.\n");
+					     "PMU regmap lookup failed.\n");
 	}
 
 	wdt_irq = platform_get_irq(pdev, 0);
+5
include/dt-bindings/arm/qcom,ids.h
···
 #define QCOM_ID_IPQ9510		521
 #define QCOM_ID_QRB4210		523
 #define QCOM_ID_QRB2210		524
+#define QCOM_ID_SM8475		530
+#define QCOM_ID_SM8475P		531
 #define QCOM_ID_SA8775P		534
 #define QCOM_ID_QRU1000		539
+#define QCOM_ID_SM8475_2	540
 #define QCOM_ID_QDU1000		545
 #define QCOM_ID_SM8650		557
 #define QCOM_ID_SM4450		568
···
 #define QCOM_ID_IPQ5322		593
 #define QCOM_ID_IPQ5312		594
 #define QCOM_ID_IPQ5302		595
+#define QCOM_ID_QCS8550		603
+#define QCOM_ID_QCM8550		604
 #define QCOM_ID_IPQ5300		624
 
 /*
+1 -1
include/linux/arm_ffa.h
···
 #define module_ffa_driver(__ffa_driver)	\
 	module_driver(__ffa_driver, ffa_register, ffa_unregister)
 
-extern struct bus_type ffa_bus_type;
+extern const struct bus_type ffa_bus_type;
 
 /* FFA transport related */
 struct ffa_partition_info {
+18 -3
include/linux/scmi_protocol.h
···
 	bool rate_discrete;
 	bool rate_changed_notifications;
 	bool rate_change_requested_notifications;
+	bool state_ctrl_forbidden;
+	bool rate_ctrl_forbidden;
+	bool parent_ctrl_forbidden;
+	bool extended_config;
 	union {
 		struct {
 			int num_rates;
···
 struct scmi_handle;
 struct scmi_device;
 struct scmi_protocol_handle;
+
+enum scmi_clock_oem_config {
+	SCMI_CLOCK_CFG_DUTY_CYCLE = 0x1,
+	SCMI_CLOCK_CFG_PHASE,
+	SCMI_CLOCK_CFG_OEM_START = 0x80,
+	SCMI_CLOCK_CFG_OEM_END = 0xFF,
+};
 
 /**
  * struct scmi_clk_proto_ops - represents the various operations provided
···
 	int (*state_get)(const struct scmi_protocol_handle *ph, u32 clk_id,
 			 bool *enabled, bool atomic);
 	int (*config_oem_get)(const struct scmi_protocol_handle *ph, u32 clk_id,
-			      u8 oem_type, u32 *oem_val, u32 *attributes,
-			      bool atomic);
+			      enum scmi_clock_oem_config oem_type,
+			      u32 *oem_val, u32 *attributes, bool atomic);
 	int (*config_oem_set)(const struct scmi_protocol_handle *ph, u32 clk_id,
-			      u8 oem_type, u32 oem_val, bool atomic);
+			      enum scmi_clock_oem_config oem_type,
+			      u32 oem_val, bool atomic);
 	int (*parent_get)(const struct scmi_protocol_handle *ph, u32 clk_id, u32 *parent_id);
 	int (*parent_set)(const struct scmi_protocol_handle *ph, u32 clk_id, u32 parent_id);
 };
···
 	unsigned int domain_id;
 	unsigned int range_max;
 	unsigned int range_min;
+	unsigned long range_max_freq;
+	unsigned long range_min_freq;
 };
 
 struct scmi_perf_level_report {
···
 	unsigned int agent_id;
 	unsigned int domain_id;
 	unsigned int performance_level;
+	unsigned long performance_level_freq;
 };
 
 struct scmi_sensor_trip_point_report {
+1 -1
include/linux/soc/qcom/apr.h
···
 #include <dt-bindings/soc/qcom,apr.h>
 #include <dt-bindings/soc/qcom,gpr.h>
 
-extern struct bus_type aprbus;
+extern const struct bus_type aprbus;
 
 #define APR_HDR_LEN(hdr_len) ((hdr_len)/4)
 
+30
include/linux/soc/qcom/qcom-pbs.h
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (c) 2023 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+#ifndef _QCOM_PBS_H
+#define _QCOM_PBS_H
+
+#include <linux/errno.h>
+#include <linux/types.h>
+
+struct device_node;
+struct pbs_dev;
+
+#if IS_ENABLED(CONFIG_QCOM_PBS)
+int qcom_pbs_trigger_event(struct pbs_dev *pbs, u8 bitmap);
+struct pbs_dev *get_pbs_client_device(struct device *client_dev);
+#else
+static inline int qcom_pbs_trigger_event(struct pbs_dev *pbs, u8 bitmap)
+{
+	return -ENODEV;
+}
+
+static inline struct pbs_dev *get_pbs_client_device(struct device *client_dev)
+{
+	return ERR_PTR(-ENODEV);
+}
+#endif
+
+#endif
+10 -1
include/linux/soc/samsung/exynos-pmu.h
···
 #define __LINUX_SOC_EXYNOS_PMU_H
 
 struct regmap;
+struct device_node;
 
 enum sys_powerdown {
 	SYS_AFTR,
···
 
 extern void exynos_sys_powerdown_conf(enum sys_powerdown mode);
 #ifdef CONFIG_EXYNOS_PMU
-extern struct regmap *exynos_get_pmu_regmap(void);
+struct regmap *exynos_get_pmu_regmap(void);
+struct regmap *exynos_get_pmu_regmap_by_phandle(struct device_node *np,
+						const char *propname);
 #else
 static inline struct regmap *exynos_get_pmu_regmap(void)
+{
+	return ERR_PTR(-ENODEV);
+}
+
+static inline struct regmap *exynos_get_pmu_regmap_by_phandle(struct device_node *np,
+							      const char *propname)
 {
 	return ERR_PTR(-ENODEV);
 }
+1
include/linux/string.h
···
 extern void *kmemdup(const void *src, size_t len, gfp_t gfp) __realloc_size(2);
 extern void *kvmemdup(const void *src, size_t len, gfp_t gfp) __realloc_size(2);
 extern char *kmemdup_nul(const char *s, size_t len, gfp_t gfp);
+extern void *kmemdup_array(const void *src, size_t element_size, size_t count, gfp_t gfp);
 
 extern char **argv_split(gfp_t gfp, const char *str, int *argcp);
 extern void argv_free(char **argv);
+1 -1
include/linux/tee_drv.h
···
 	}
 }
 
-extern struct bus_type tee_bus_type;
+extern const struct bus_type tee_bus_type;
 
 /**
  * struct tee_client_device - tee based device
+1 -1
include/soc/qcom/qcom-spmi-pmic.h
···
 #define PMK8350_SUBTYPE		0x2f
 #define PMR735B_SUBTYPE		0x34
 #define PM6350_SUBTYPE		0x36
-#define PM2250_SUBTYPE		0x37
+#define PM4125_SUBTYPE		0x37
 
 #define PMI8998_FAB_ID_SMIC	0x11
 #define PMI8998_FAB_ID_GF	0x30
+1 -22
include/soc/qcom/spm.h
···
 #ifndef __SPM_H__
 #define __SPM_H__
 
-#include <linux/cpuidle.h>
-
-#define MAX_PMIC_DATA		2
-#define MAX_SEQ_DATA		64
-
 enum pm_sleep_mode {
 	PM_SLEEP_MODE_STBY,
 	PM_SLEEP_MODE_RET,
···
 	PM_SLEEP_MODE_NR,
 };
 
-struct spm_reg_data {
-	const u16 *reg_offset;
-	u32 spm_cfg;
-	u32 spm_dly;
-	u32 pmic_dly;
-	u32 pmic_data[MAX_PMIC_DATA];
-	u32 avs_ctl;
-	u32 avs_limit;
-	u8 seq[MAX_SEQ_DATA];
-	u8 start_index[PM_SLEEP_MODE_NR];
-};
-
-struct spm_driver_data {
-	void __iomem *reg_base;
-	const struct spm_reg_data *reg_data;
-};
-
+struct spm_driver_data;
 void spm_set_low_power_mode(struct spm_driver_data *drv,
 			    enum pm_sleep_mode mode);
 
+1
include/soc/tegra/fuse.h
···
 #define TEGRA186		0x18
 #define TEGRA194		0x19
 #define TEGRA234		0x23
+#define TEGRA241		0x24
 #define TEGRA264		0x26
 
 #define TEGRA_FUSE_SKU_CALIB_0	0xf0
-18
include/soc/tegra/pmc.h
···
 	TEGRA_IO_PAD_AO_HV,
 };
 
-/* deprecated, use TEGRA_IO_PAD_{HDMI,LVDS} instead */
-#define TEGRA_IO_RAIL_HDMI	TEGRA_IO_PAD_HDMI
-#define TEGRA_IO_RAIL_LVDS	TEGRA_IO_PAD_LVDS
-
 #ifdef CONFIG_SOC_TEGRA_PMC
 int tegra_powergate_power_on(unsigned int id);
 int tegra_powergate_power_off(unsigned int id);
···
 
 int tegra_io_pad_power_enable(enum tegra_io_pad id);
 int tegra_io_pad_power_disable(enum tegra_io_pad id);
-
-/* deprecated, use tegra_io_pad_power_{enable,disable}() instead */
-int tegra_io_rail_power_on(unsigned int id);
-int tegra_io_rail_power_off(unsigned int id);
 
 void tegra_pmc_set_suspend_mode(enum tegra_suspend_mode mode);
 void tegra_pmc_enter_suspend_mode(enum tegra_suspend_mode mode);
···
 }
 
 static inline int tegra_io_pad_get_voltage(enum tegra_io_pad id)
-{
-	return -ENOSYS;
-}
-
-static inline int tegra_io_rail_power_on(unsigned int id)
-{
-	return -ENOSYS;
-}
-
-static inline int tegra_io_rail_power_off(unsigned int id)
 {
 	return -ENOSYS;
 }
+17
mm/util.c
···
 EXPORT_SYMBOL(kmemdup);
 
 /**
+ * kmemdup_array - duplicate a given array.
+ *
+ * @src: array to duplicate.
+ * @element_size: size of each element of array.
+ * @count: number of elements to duplicate from array.
+ * @gfp: GFP mask to use.
+ *
+ * Return: duplicated array of @src or %NULL in case of error,
+ * result is physically contiguous. Use kfree() to free.
+ */
+void *kmemdup_array(const void *src, size_t element_size, size_t count, gfp_t gfp)
+{
+	return kmemdup(src, size_mul(element_size, count), gfp);
+}
+EXPORT_SYMBOL(kmemdup_array);
+
+/**
  * kvmemdup - duplicate region of memory
  *
  * @src: memory region to duplicate