Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'drivers-5.16' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc

Pull ARM SoC driver updates from Arnd Bergmann:
"These are all the driver updates for SoC specific drivers. There are a
couple of subsystems with individual maintainers picking up their
patches here:

- The reset controller subsystem adds support for a few new SoC
variants to existing drivers, along with other minor improvements

- The OP-TEE subsystem gets a driver for the ARM FF-A transport

- The memory controller subsystem has improvements for Tegra,
Mediatek, Renesas, Freescale and Broadcom specific drivers.

- The tegra cpuidle driver changes get merged through this tree this
time. There are only minor changes, but they depend on other tegra
driver updates here.

- The ep93xx platform finally moves to using the drivers/clk/
subsystem, moving the code out of arch/arm in the process. This
depends on a small sound driver change that is included here as
well.

- There are some minor updates for Qualcomm and Tegra specific
firmware drivers.

The other driver updates are mainly for drivers/soc, which contains a
mixture of vendor specific drivers that don't really fit elsewhere:

- Mediatek drivers gain more support for MT8192, with new support for
hw-mutex and mmsys routing, plus support for reset lines in the
mmsys driver.

- Qualcomm gains a new "sleep stats" driver, and support for the
"Generic Packet Router" in the APR driver.

- There is a new user interface for routing the UARTS on ASpeed BMCs,
something that apparently nobody else has needed so far.

- More drivers can now be built as loadable modules, in particular
for Broadcom and Samsung platforms.

- Lots of improvements to the TI sysc driver for better
suspend/resume support

Finally, there are lots of minor cleanups and new device IDs for
amlogic, renesas, tegra, qualcomm, mediatek, samsung, imx,
layerscape, allwinner, broadcom, and omap"

* tag 'drivers-5.16' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc: (179 commits)
optee: Fix spelling mistake "reclain" -> "reclaim"
Revert "firmware: qcom: scm: Add support for MC boot address API"
qcom: spm: allow compile-testing
firmware: arm_ffa: Remove unused 'compat_version' variable
soc: samsung: exynos-chipid: add exynosautov9 SoC support
firmware: qcom: scm: Don't break compile test on non-ARM platforms
soc: qcom: smp2p: Add of_node_put() before goto
soc: qcom: apr: Add of_node_put() before return
soc: qcom: qcom_stats: Fix client votes offset
soc: qcom: rpmhpd: fix sm8350_mxc's peer domain
dt-bindings: arm: cpus: Document qcom,msm8916-smp enable-method
ARM: qcom: Add qcom,msm8916-smp enable-method identical to MSM8226
firmware: qcom: scm: Add support for MC boot address API
soc: qcom: spm: Add 8916 SPM register data
dt-bindings: soc: qcom: spm: Document qcom,msm8916-saw2-v3.0-cpu
soc: qcom: socinfo: Add PM8150C and SMB2351 models
firmware: qcom_scm: Fix error retval in __qcom_scm_is_call_available()
soc: aspeed: Add UART routing support
soc: fsl: dpio: rename the enqueue descriptor variable
soc: fsl: dpio: use an explicit NULL instead of 0
...

+8007 -3011
+27
Documentation/ABI/testing/sysfs-driver-aspeed-uart-routing
···
 1 + What:		/sys/bus/platform/drivers/aspeed-uart-routing/*/uart*
 2 + Date:		September 2021
 3 + Contact:	Oskar Senft <osk@google.com>
 4 + 		Chia-Wei Wang <chiawei_wang@aspeedtech.com>
 5 + Description:	Selects the RX source of the UARTx device.
 6 +
 7 + 		When read, each file shows the list of available options with currently
 8 + 		selected option marked by brackets "[]". The list of available options
 9 + 		depends on the selected file.
10 +
11 + 		e.g.
12 + 		cat /sys/bus/platform/drivers/aspeed-uart-routing/*.uart_routing/uart1
13 + 		[io1] io2 io3 io4 uart2 uart3 uart4 io6
14 +
15 + 		In this case, UART1 gets its input from IO1 (physical serial port 1).
16 +
17 + Users:		OpenBMC. Proposed changes should be mailed to
18 + 		openbmc@lists.ozlabs.org
19 +
20 + What:		/sys/bus/platform/drivers/aspeed-uart-routing/*/io*
21 + Date:		September 2021
22 + Contact:	Oskar Senft <osk@google.com>
23 + 		Chia-Wei Wang <chiawei_wang@aspeedtech.com>
24 + Description:	Selects the RX source of IOx serial port. The current selection
25 + 		will be marked by brackets "[]".
26 + Users:		OpenBMC. Proposed changes should be mailed to
27 + 		openbmc@lists.ozlabs.org
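The ABI above reports the active route as the bracketed entry in a space-separated option list. That format is easy to pick apart with ordinary shell tools; a small sketch (the `current_route` helper name is ours, and the sample string mirrors the ABI example rather than live hardware):

```shell
# Extract the currently selected option from an aspeed-uart-routing
# attribute. The attribute lists all options and marks the active one
# with brackets, e.g. "[io1] io2 io3 io4 uart2 uart3 uart4 io6".
current_route() {
    # grep -o pulls out the "[...]" token; tr strips the brackets
    grep -o '\[[^]]*\]' | tr -d '[]'
}

echo '[io1] io2 io3 io4 uart2 uart3 uart4 io6' | current_route   # -> io1
```

On a real BMC the same helper would be fed from the sysfs file (`current_route < .../uart_routing/uart1`), and writing one of the listed option names back to the file selects that route.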
+5 -1
Documentation/devicetree/bindings/arm/cpus.yaml
···
211 211           - qcom,gcc-msm8660
212 212           - qcom,kpss-acc-v1
213 213           - qcom,kpss-acc-v2
    214 +         - qcom,msm8226-smp
    215 +         # Only valid on ARM 32-bit, see above for ARM v8 64-bit
    216 +         - qcom,msm8916-smp
214 217           - renesas,apmu
215 218           - renesas,r9a06g032-smp
216 219           - rockchip,rk3036-smp
···
300 297         Specifies the ACC* node associated with this CPU.
301 298
302 299         Required for systems that have an "enable-method" property
303     -       value of "qcom,kpss-acc-v1" or "qcom,kpss-acc-v2"
    300 +       value of "qcom,kpss-acc-v1", "qcom,kpss-acc-v2", "qcom,msm8226-smp" or
    301 +       "qcom,msm8916-smp".
304 302
305 303         * arm/msm/qcom,kpss-acc.txt
306 304
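Under the updated binding, a 32-bit MSM8916 CPU node would pair the new enable-method with the SAW phandle the binding now requires. A hedged sketch (node names, the CPU compatible, and the `&saw0` label are illustrative, not taken from a real board file):

```
cpus {
	#address-cells = <1>;
	#size-cells = <0>;

	cpu@0 {
		device_type = "cpu";
		compatible = "arm,cortex-a53";
		reg = <0x0>;
		enable-method = "qcom,msm8916-smp";
		/* required for the qcom,msm8916-smp enable-method */
		qcom,saw = <&saw0>;
	};
};
```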
+3 -2
Documentation/devicetree/bindings/arm/samsung/exynos-chipid.yaml
···
11 11
12 12   properties:
13 13     compatible:
14    -     items:
15    -       - const: samsung,exynos4210-chipid
   14 +     enum:
   15 +       - samsung,exynos4210-chipid
   16 +       - samsung,exynos850-chipid
16 17
17 18     reg:
18 19       maxItems: 1
Documentation/devicetree/bindings/ddr/lpddr2-timings.txt Documentation/devicetree/bindings/memory-controllers/ddr/lpddr2-timings.txt
-102
Documentation/devicetree/bindings/ddr/lpddr2.txt
··· 1 - * LPDDR2 SDRAM memories compliant to JEDEC JESD209-2 2 - 3 - Required properties: 4 - - compatible : Should be one of - "jedec,lpddr2-nvm", "jedec,lpddr2-s2", 5 - "jedec,lpddr2-s4" 6 - 7 - "ti,jedec-lpddr2-s2" should be listed if the memory part is LPDDR2-S2 type 8 - 9 - "ti,jedec-lpddr2-s4" should be listed if the memory part is LPDDR2-S4 type 10 - 11 - "ti,jedec-lpddr2-nvm" should be listed if the memory part is LPDDR2-NVM type 12 - 13 - - density : <u32> representing density in Mb (Mega bits) 14 - 15 - - io-width : <u32> representing bus width. Possible values are 8, 16, and 32 16 - 17 - Optional properties: 18 - 19 - The following optional properties represent the minimum value of some AC 20 - timing parameters of the DDR device in terms of number of clock cycles. 21 - These values shall be obtained from the device data-sheet. 22 - - tRRD-min-tck 23 - - tWTR-min-tck 24 - - tXP-min-tck 25 - - tRTP-min-tck 26 - - tCKE-min-tck 27 - - tRPab-min-tck 28 - - tRCD-min-tck 29 - - tWR-min-tck 30 - - tRASmin-min-tck 31 - - tCKESR-min-tck 32 - - tFAW-min-tck 33 - 34 - Child nodes: 35 - - The lpddr2 node may have one or more child nodes of type "lpddr2-timings". 36 - "lpddr2-timings" provides AC timing parameters of the device for 37 - a given speed-bin. The user may provide the timings for as many 38 - speed-bins as is required. 
Please see Documentation/devicetree/ 39 - bindings/ddr/lpddr2-timings.txt for more information on "lpddr2-timings" 40 - 41 - Example: 42 - 43 - elpida_ECB240ABACN : lpddr2 { 44 - compatible = "Elpida,ECB240ABACN","jedec,lpddr2-s4"; 45 - density = <2048>; 46 - io-width = <32>; 47 - 48 - tRPab-min-tck = <3>; 49 - tRCD-min-tck = <3>; 50 - tWR-min-tck = <3>; 51 - tRASmin-min-tck = <3>; 52 - tRRD-min-tck = <2>; 53 - tWTR-min-tck = <2>; 54 - tXP-min-tck = <2>; 55 - tRTP-min-tck = <2>; 56 - tCKE-min-tck = <3>; 57 - tCKESR-min-tck = <3>; 58 - tFAW-min-tck = <8>; 59 - 60 - timings_elpida_ECB240ABACN_400mhz: lpddr2-timings@0 { 61 - compatible = "jedec,lpddr2-timings"; 62 - min-freq = <10000000>; 63 - max-freq = <400000000>; 64 - tRPab = <21000>; 65 - tRCD = <18000>; 66 - tWR = <15000>; 67 - tRAS-min = <42000>; 68 - tRRD = <10000>; 69 - tWTR = <7500>; 70 - tXP = <7500>; 71 - tRTP = <7500>; 72 - tCKESR = <15000>; 73 - tDQSCK-max = <5500>; 74 - tFAW = <50000>; 75 - tZQCS = <90000>; 76 - tZQCL = <360000>; 77 - tZQinit = <1000000>; 78 - tRAS-max-ns = <70000>; 79 - }; 80 - 81 - timings_elpida_ECB240ABACN_200mhz: lpddr2-timings@1 { 82 - compatible = "jedec,lpddr2-timings"; 83 - min-freq = <10000000>; 84 - max-freq = <200000000>; 85 - tRPab = <21000>; 86 - tRCD = <18000>; 87 - tWR = <15000>; 88 - tRAS-min = <42000>; 89 - tRRD = <10000>; 90 - tWTR = <10000>; 91 - tXP = <7500>; 92 - tRTP = <7500>; 93 - tCKESR = <15000>; 94 - tDQSCK-max = <5500>; 95 - tFAW = <50000>; 96 - tZQCS = <90000>; 97 - tZQCL = <360000>; 98 - tZQinit = <1000000>; 99 - tRAS-max-ns = <70000>; 100 - }; 101 - 102 - }
Documentation/devicetree/bindings/ddr/lpddr3-timings.txt Documentation/devicetree/bindings/memory-controllers/ddr/lpddr3-timings.txt
+3 -2
Documentation/devicetree/bindings/ddr/lpddr3.txt Documentation/devicetree/bindings/memory-controllers/ddr/lpddr3.txt
···
43 43   Child nodes:
44 44   - The lpddr3 node may have one or more child nodes of type "lpddr3-timings".
45 45     "lpddr3-timings" provides AC timing parameters of the device for
46    -    a given speed-bin. Please see Documentation/devicetree/
47    -    bindings/ddr/lpddr3-timings.txt for more information on "lpddr3-timings"
   46 +    a given speed-bin. Please see
   47 +    Documentation/devicetree/bindings/memory-controllers/ddr/lpddr3-timings.txt
   48 +    for more information on "lpddr3-timings"
48 49
49 50   Example:
50 51
-1
Documentation/devicetree/bindings/display/msm/dp-controller.yaml
···
102 102       - |
103 103         #include <dt-bindings/interrupt-controller/arm-gic.h>
104 104         #include <dt-bindings/clock/qcom,dispcc-sc7180.h>
105     -       #include <dt-bindings/power/qcom-aoss-qmp.h>
106 105         #include <dt-bindings/power/qcom-rpmpd.h>
107 106
108 107         displayport-controller@ae90000 {
+3 -1
Documentation/devicetree/bindings/firmware/qcom,scm.txt
···
13 13   	* "qcom,scm-ipq806x"
14 14   	* "qcom,scm-ipq8074"
15 15   	* "qcom,scm-mdm9607"
   16 +  	* "qcom,scm-msm8226"
16 17   	* "qcom,scm-msm8660"
17 18   	* "qcom,scm-msm8916"
   19 +  	* "qcom,scm-msm8953"
18 20   	* "qcom,scm-msm8960"
19 21   	* "qcom,scm-msm8974"
20 22   	* "qcom,scm-msm8994"
···
35 33    * core clock required for "qcom,scm-apq8064", "qcom,scm-msm8660" and
36 34      "qcom,scm-msm8960"
37 35    * core, iface and bus clocks required for "qcom,scm-apq8084",
38    -    "qcom,scm-msm8916" and "qcom,scm-msm8974"
   36 +    "qcom,scm-msm8916", "qcom,scm-msm8953" and "qcom,scm-msm8974"
39 37   - clock-names: Must contain "core" for the core clock, "iface" for the interface
40 38     clock and "bus" for the bus clock per the requirements of the compatible.
41 39   - qcom,dload-mode: phandle to the TCSR hardware block and offset of the
+223
Documentation/devicetree/bindings/memory-controllers/ddr/jedec,lpddr2.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/memory-controllers/ddr/jedec,lpddr2.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: LPDDR2 SDRAM compliant to JEDEC JESD209-2 8 + 9 + maintainers: 10 + - Krzysztof Kozlowski <krzysztof.kozlowski@canonical.com> 11 + 12 + properties: 13 + compatible: 14 + oneOf: 15 + - items: 16 + - enum: 17 + - elpida,ECB240ABACN 18 + - elpida,B8132B2PB-6D-F 19 + - enum: 20 + - jedec,lpddr2-s4 21 + - items: 22 + - enum: 23 + - jedec,lpddr2-s2 24 + - items: 25 + - enum: 26 + - jedec,lpddr2-nvm 27 + 28 + revision-id1: 29 + $ref: /schemas/types.yaml#/definitions/uint32 30 + maximum: 255 31 + description: | 32 + Revision 1 value of SDRAM chip. Obtained from device datasheet. 33 + 34 + revision-id2: 35 + $ref: /schemas/types.yaml#/definitions/uint32 36 + maximum: 255 37 + description: | 38 + Revision 2 value of SDRAM chip. Obtained from device datasheet. 39 + 40 + density: 41 + $ref: /schemas/types.yaml#/definitions/uint32 42 + description: | 43 + Density in megabits of SDRAM chip. Obtained from device datasheet. 44 + enum: 45 + - 64 46 + - 128 47 + - 256 48 + - 512 49 + - 1024 50 + - 2048 51 + - 4096 52 + - 8192 53 + - 16384 54 + - 32768 55 + 56 + io-width: 57 + $ref: /schemas/types.yaml#/definitions/uint32 58 + description: | 59 + IO bus width in bits of SDRAM chip. Obtained from device datasheet. 60 + enum: 61 + - 32 62 + - 16 63 + - 8 64 + 65 + tRRD-min-tck: 66 + $ref: /schemas/types.yaml#/definitions/uint32 67 + maximum: 16 68 + description: | 69 + Active bank a to active bank b in terms of number of clock cycles. 70 + Obtained from device datasheet. 71 + 72 + tWTR-min-tck: 73 + $ref: /schemas/types.yaml#/definitions/uint32 74 + maximum: 16 75 + description: | 76 + Internal WRITE-to-READ command delay in terms of number of clock cycles. 77 + Obtained from device datasheet. 
78 + 79 + tXP-min-tck: 80 + $ref: /schemas/types.yaml#/definitions/uint32 81 + maximum: 16 82 + description: | 83 + Exit power-down to next valid command delay in terms of number of clock 84 + cycles. Obtained from device datasheet. 85 + 86 + tRTP-min-tck: 87 + $ref: /schemas/types.yaml#/definitions/uint32 88 + maximum: 16 89 + description: | 90 + Internal READ to PRECHARGE command delay in terms of number of clock 91 + cycles. Obtained from device datasheet. 92 + 93 + tCKE-min-tck: 94 + $ref: /schemas/types.yaml#/definitions/uint32 95 + maximum: 16 96 + description: | 97 + CKE minimum pulse width (HIGH and LOW pulse width) in terms of number 98 + of clock cycles. Obtained from device datasheet. 99 + 100 + tRPab-min-tck: 101 + $ref: /schemas/types.yaml#/definitions/uint32 102 + maximum: 16 103 + description: | 104 + Row precharge time (all banks) in terms of number of clock cycles. 105 + Obtained from device datasheet. 106 + 107 + tRCD-min-tck: 108 + $ref: /schemas/types.yaml#/definitions/uint32 109 + maximum: 16 110 + description: | 111 + RAS-to-CAS delay in terms of number of clock cycles. Obtained from 112 + device datasheet. 113 + 114 + tWR-min-tck: 115 + $ref: /schemas/types.yaml#/definitions/uint32 116 + maximum: 16 117 + description: | 118 + WRITE recovery time in terms of number of clock cycles. Obtained from 119 + device datasheet. 120 + 121 + tRASmin-min-tck: 122 + $ref: /schemas/types.yaml#/definitions/uint32 123 + maximum: 16 124 + description: | 125 + Row active time in terms of number of clock cycles. Obtained from device 126 + datasheet. 127 + 128 + tCKESR-min-tck: 129 + $ref: /schemas/types.yaml#/definitions/uint32 130 + maximum: 16 131 + description: | 132 + CKE minimum pulse width during SELF REFRESH (low pulse width during 133 + SELF REFRESH) in terms of number of clock cycles. Obtained from device 134 + datasheet. 
135 + 136 + tFAW-min-tck: 137 + $ref: /schemas/types.yaml#/definitions/uint32 138 + maximum: 16 139 + description: | 140 + Four-bank activate window in terms of number of clock cycles. Obtained 141 + from device datasheet. 142 + 143 + patternProperties: 144 + "^lpddr2-timings": 145 + type: object 146 + description: | 147 + The lpddr2 node may have one or more child nodes of type "lpddr2-timings". 148 + "lpddr2-timings" provides AC timing parameters of the device for 149 + a given speed-bin. The user may provide the timings for as many 150 + speed-bins as is required. Please see Documentation/devicetree/ 151 + bindings/memory-controllers/ddr/lpddr2-timings.txt for more information 152 + on "lpddr2-timings". 153 + 154 + required: 155 + - compatible 156 + - density 157 + - io-width 158 + 159 + additionalProperties: false 160 + 161 + examples: 162 + - | 163 + elpida_ECB240ABACN: lpddr2 { 164 + compatible = "elpida,ECB240ABACN", "jedec,lpddr2-s4"; 165 + density = <2048>; 166 + io-width = <32>; 167 + revision-id1 = <1>; 168 + revision-id2 = <0>; 169 + 170 + tRPab-min-tck = <3>; 171 + tRCD-min-tck = <3>; 172 + tWR-min-tck = <3>; 173 + tRASmin-min-tck = <3>; 174 + tRRD-min-tck = <2>; 175 + tWTR-min-tck = <2>; 176 + tXP-min-tck = <2>; 177 + tRTP-min-tck = <2>; 178 + tCKE-min-tck = <3>; 179 + tCKESR-min-tck = <3>; 180 + tFAW-min-tck = <8>; 181 + 182 + timings_elpida_ECB240ABACN_400mhz: lpddr2-timings0 { 183 + compatible = "jedec,lpddr2-timings"; 184 + min-freq = <10000000>; 185 + max-freq = <400000000>; 186 + tRPab = <21000>; 187 + tRCD = <18000>; 188 + tWR = <15000>; 189 + tRAS-min = <42000>; 190 + tRRD = <10000>; 191 + tWTR = <7500>; 192 + tXP = <7500>; 193 + tRTP = <7500>; 194 + tCKESR = <15000>; 195 + tDQSCK-max = <5500>; 196 + tFAW = <50000>; 197 + tZQCS = <90000>; 198 + tZQCL = <360000>; 199 + tZQinit = <1000000>; 200 + tRAS-max-ns = <70000>; 201 + }; 202 + 203 + timings_elpida_ECB240ABACN_200mhz: lpddr2-timings1 { 204 + compatible = "jedec,lpddr2-timings"; 205 + 
min-freq = <10000000>; 206 + max-freq = <200000000>; 207 + tRPab = <21000>; 208 + tRCD = <18000>; 209 + tWR = <15000>; 210 + tRAS-min = <42000>; 211 + tRRD = <10000>; 212 + tWTR = <10000>; 213 + tXP = <7500>; 214 + tRTP = <7500>; 215 + tCKESR = <15000>; 216 + tDQSCK-max = <5500>; 217 + tFAW = <50000>; 218 + tZQCS = <90000>; 219 + tZQCL = <360000>; 220 + tZQinit = <1000000>; 221 + tRAS-max-ns = <70000>; 222 + }; 223 + };
+33 -1
Documentation/devicetree/bindings/memory-controllers/mediatek,smi-common.yaml
··· 16 16 MediaTek SMI have two generations of HW architecture, here is the list 17 17 which generation the SoCs use: 18 18 generation 1: mt2701 and mt7623. 19 - generation 2: mt2712, mt6779, mt8167, mt8173, mt8183 and mt8192. 19 + generation 2: mt2712, mt6779, mt8167, mt8173, mt8183, mt8192 and mt8195. 20 20 21 21 There's slight differences between the two SMI, for generation 2, the 22 22 register which control the iommu port is at each larb's register base. But ··· 36 36 - mediatek,mt8173-smi-common 37 37 - mediatek,mt8183-smi-common 38 38 - mediatek,mt8192-smi-common 39 + - mediatek,mt8195-smi-common-vdo 40 + - mediatek,mt8195-smi-common-vpp 41 + - mediatek,mt8195-smi-sub-common 39 42 40 43 - description: for mt7623 41 44 items: ··· 68 65 minItems: 2 69 66 maxItems: 4 70 67 68 + mediatek,smi: 69 + $ref: /schemas/types.yaml#/definitions/phandle 70 + description: a phandle to the smi-common node above. Only for sub-common. 71 + 71 72 required: 72 73 - compatible 73 74 - reg ··· 98 91 - const: smi 99 92 - const: async 100 93 94 + - if: # only for sub common 95 + properties: 96 + compatible: 97 + contains: 98 + enum: 99 + - mediatek,mt8195-smi-sub-common 100 + then: 101 + required: 102 + - mediatek,smi 103 + properties: 104 + clock: 105 + items: 106 + minItems: 3 107 + maxItems: 3 108 + clock-names: 109 + items: 110 + - const: apb 111 + - const: smi 112 + - const: gals0 113 + else: 114 + properties: 115 + mediatek,smi: false 116 + 101 117 - if: # for gen2 HW that have gals 102 118 properties: 103 119 compatible: ··· 128 98 - mediatek,mt6779-smi-common 129 99 - mediatek,mt8183-smi-common 130 100 - mediatek,mt8192-smi-common 101 + - mediatek,mt8195-smi-common-vdo 102 + - mediatek,mt8195-smi-common-vpp 131 103 132 104 then: 133 105 properties:
+3
Documentation/devicetree/bindings/memory-controllers/mediatek,smi-larb.yaml
··· 24 24 - mediatek,mt8173-smi-larb 25 25 - mediatek,mt8183-smi-larb 26 26 - mediatek,mt8192-smi-larb 27 + - mediatek,mt8195-smi-larb 27 28 28 29 - description: for mt7623 29 30 items: ··· 75 74 compatible: 76 75 enum: 77 76 - mediatek,mt8183-smi-larb 77 + - mediatek,mt8195-smi-larb 78 78 79 79 then: 80 80 properties: ··· 110 108 - mediatek,mt6779-smi-larb 111 109 - mediatek,mt8167-smi-larb 112 110 - mediatek,mt8192-smi-larb 111 + - mediatek,mt8195-smi-larb 113 112 114 113 then: 115 114 required:
+21 -2
Documentation/devicetree/bindings/memory-controllers/nvidia,tegra20-emc.yaml
··· 164 164 "#size-cells": 165 165 const: 0 166 166 167 + lpddr2: 168 + $ref: "ddr/jedec,lpddr2.yaml#" 169 + type: object 170 + 167 171 patternProperties: 168 172 "^emc-table@[0-9]+$": 169 173 $ref: "#/$defs/emc-table" 170 174 171 - required: 172 - - nvidia,ram-code 175 + oneOf: 176 + - required: 177 + - nvidia,ram-code 178 + 179 + - required: 180 + - lpddr2 173 181 174 182 additionalProperties: false 175 183 ··· 233 225 0x00000000 0x00000000 0x00000083 0xf0440303 234 226 0x007fe010 0x00001414 0x00000000 0x00000000 235 227 0x00000000 0x00000000 0x00000000 0x00000000>; 228 + }; 229 + }; 230 + 231 + emc-tables@1 { 232 + reg = <1>; 233 + 234 + lpddr2 { 235 + compatible = "elpida,B8132B2PB-6D-F", "jedec,lpddr2-s4"; 236 + revision-id1 = <1>; 237 + density = <2048>; 238 + io-width = <16>; 236 239 }; 237 240 }; 238 241 };
+1
Documentation/devicetree/bindings/memory-controllers/renesas,rpc-if.yaml
···
33 33         - renesas,r8a77970-rpc-if       # R-Car V3M
34 34         - renesas,r8a77980-rpc-if       # R-Car V3H
35 35         - renesas,r8a77995-rpc-if       # R-Car D3
   36 +       - renesas,r8a779a0-rpc-if       # R-Car V3U
36 37       - const: renesas,rcar-gen3-rpc-if # a generic R-Car gen3 or RZ/G2 device
37 38
38 39     reg:
+2 -1
Documentation/devicetree/bindings/memory-controllers/samsung,exynos5422-dmc.yaml
···
51 51       $ref: '/schemas/types.yaml#/definitions/phandle'
52 52       description: |
53 53         phandle of the connected DRAM memory device. For more information please
54    -       refer to documentation file: Documentation/devicetree/bindings/ddr/lpddr3.txt
   54 +       refer to documentation file:
   55 +       Documentation/devicetree/bindings/memory-controllers/ddr/lpddr3.txt
55 56
56 57     operating-points-v2: true
57 58
+2
Documentation/devicetree/bindings/power/qcom,rpmpd.yaml
···
19 19         - qcom,mdm9607-rpmpd
20 20         - qcom,msm8916-rpmpd
21 21         - qcom,msm8939-rpmpd
   22 +       - qcom,msm8953-rpmpd
22 23         - qcom,msm8976-rpmpd
23 24         - qcom,msm8994-rpmpd
24 25         - qcom,msm8996-rpmpd
···
32 31         - qcom,sdm845-rpmhpd
33 32         - qcom,sdx55-rpmhpd
34 33         - qcom,sm6115-rpmpd
   34 +       - qcom,sm6350-rpmhpd
35 35         - qcom,sm8150-rpmhpd
36 36         - qcom,sm8250-rpmhpd
37 37         - qcom,sm8350-rpmhpd
+3 -1
Documentation/devicetree/bindings/reset/microchip,rst.yaml
···
20 20       pattern: "^reset-controller@[0-9a-f]+$"
21 21
22 22     compatible:
23    -     const: microchip,sparx5-switch-reset
   23 +     enum:
   24 +       - microchip,sparx5-switch-reset
   25 +       - microchip,lan966x-switch-reset
24 26
25 27     reg:
26 28       items:
+1
Documentation/devicetree/bindings/reset/socionext,uniphier-glue-reset.yaml
···
23 23         - socionext,uniphier-pxs2-usb3-reset
24 24         - socionext,uniphier-ld20-usb3-reset
25 25         - socionext,uniphier-pxs3-usb3-reset
   26 +       - socionext,uniphier-nx1-usb3-reset
26 27         - socionext,uniphier-pro4-ahci-reset
27 28         - socionext,uniphier-pxs2-ahci-reset
28 29         - socionext,uniphier-pxs3-ahci-reset
+3
Documentation/devicetree/bindings/reset/socionext,uniphier-reset.yaml
··· 23 23 - socionext,uniphier-ld11-reset 24 24 - socionext,uniphier-ld20-reset 25 25 - socionext,uniphier-pxs3-reset 26 + - socionext,uniphier-nx1-reset 26 27 - description: Media I/O (MIO) reset, SD reset 27 28 enum: 28 29 - socionext,uniphier-ld4-mio-reset ··· 35 34 - socionext,uniphier-ld11-sd-reset 36 35 - socionext,uniphier-ld20-sd-reset 37 36 - socionext,uniphier-pxs3-sd-reset 37 + - socionext,uniphier-nx1-sd-reset 38 38 - description: Peripheral reset 39 39 enum: 40 40 - socionext,uniphier-ld4-peri-reset ··· 46 44 - socionext,uniphier-ld11-peri-reset 47 45 - socionext,uniphier-ld20-peri-reset 48 46 - socionext,uniphier-pxs3-peri-reset 47 + - socionext,uniphier-nx1-peri-reset 49 48 - description: Analog signal amplifier reset 50 49 enum: 51 50 - socionext,uniphier-ld11-adamv-reset
+2 -10
Documentation/devicetree/bindings/soc/qcom/qcom,aoss-qmp.yaml
··· 19 19 20 20 The AOSS side channel exposes control over a set of resources, used to control 21 21 a set of debug related clocks and to affect the low power state of resources 22 - related to the secondary subsystems. These resources are exposed as a set of 23 - power-domains. 22 + related to the secondary subsystems. 24 23 25 24 properties: 26 25 compatible: ··· 29 30 - qcom,sc7280-aoss-qmp 30 31 - qcom,sc8180x-aoss-qmp 31 32 - qcom,sdm845-aoss-qmp 33 + - qcom,sm6350-aoss-qmp 32 34 - qcom,sm8150-aoss-qmp 33 35 - qcom,sm8250-aoss-qmp 34 36 - qcom,sm8350-aoss-qmp ··· 56 56 const: 0 57 57 description: 58 58 The single clock represents the QDSS clock. 59 - 60 - "#power-domain-cells": 61 - const: 1 62 - description: | 63 - The provided power-domains are: 64 - CDSP state (0), LPASS state (1), modem state (2), SLPI 65 - state (3), SPSS state (4) and Venus state (5). 66 59 67 60 required: 68 61 - compatible ··· 94 101 mboxes = <&apss_shared 0>; 95 102 96 103 #clock-cells = <0>; 97 - #power-domain-cells = <1>; 98 104 99 105 cx_cdev: cx { 100 106 #cooling-cells = <2>;
+3
Documentation/devicetree/bindings/soc/qcom/qcom,smd-rpm.yaml
··· 34 34 - qcom,rpm-ipq6018 35 35 - qcom,rpm-msm8226 36 36 - qcom,rpm-msm8916 37 + - qcom,rpm-msm8953 37 38 - qcom,rpm-msm8974 38 39 - qcom,rpm-msm8976 39 40 - qcom,rpm-msm8996 ··· 42 41 - qcom,rpm-sdm660 43 42 - qcom,rpm-sm6115 44 43 - qcom,rpm-sm6125 44 + - qcom,rpm-qcm2290 45 45 - qcom,rpm-qcs404 46 46 47 47 qcom,smd-channels: ··· 59 57 - qcom,rpm-apq8084 60 58 - qcom,rpm-msm8916 61 59 - qcom,rpm-msm8974 60 + - qcom,rpm-msm8953 62 61 then: 63 62 required: 64 63 - qcom,smd-channels
+30 -4
Documentation/devicetree/bindings/soc/qcom/qcom,smem.yaml
··· 10 10 - Andy Gross <agross@kernel.org> 11 11 - Bjorn Andersson <bjorn.andersson@linaro.org> 12 12 13 - description: | 14 - This binding describes the Qualcomm Shared Memory Manager, used to share data 15 - between various subsystems and OSes in Qualcomm platforms. 13 + description: 14 + This binding describes the Qualcomm Shared Memory Manager, a region of 15 + reserved-memory used to share data between various subsystems and OSes in 16 + Qualcomm platforms. 16 17 17 18 properties: 18 19 compatible: 19 20 const: qcom,smem 21 + 22 + reg: 23 + maxItems: 1 20 24 21 25 memory-region: 22 26 maxItems: 1 ··· 33 29 $ref: /schemas/types.yaml#/definitions/phandle 34 30 description: handle to RPM message memory resource 35 31 32 + no-map: true 33 + 36 34 required: 37 35 - compatible 38 - - memory-region 39 36 - hwlocks 37 + 38 + oneOf: 39 + - required: 40 + - reg 41 + - no-map 42 + - required: 43 + - memory-region 40 44 41 45 additionalProperties: false 42 46 43 47 examples: 48 + - | 49 + reserved-memory { 50 + #address-cells = <1>; 51 + #size-cells = <1>; 52 + ranges; 53 + 54 + smem@fa00000 { 55 + compatible = "qcom,smem"; 56 + reg = <0xfa00000 0x200000>; 57 + no-map; 58 + 59 + hwlocks = <&tcsr_mutex 3>; 60 + }; 61 + }; 44 62 - | 45 63 reserved-memory { 46 64 #address-cells = <1>;
+81
Documentation/devicetree/bindings/soc/qcom/qcom,spm.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: "http://devicetree.org/schemas/soc/qcom/qcom,spm.yaml#" 5 + $schema: "http://devicetree.org/meta-schemas/core.yaml#" 6 + 7 + title: Qualcomm Subsystem Power Manager binding 8 + 9 + maintainers: 10 + - Andy Gross <agross@kernel.org> 11 + - Bjorn Andersson <bjorn.andersson@linaro.org> 12 + 13 + description: | 14 + This binding describes the Qualcomm Subsystem Power Manager, used to control 15 + the peripheral logic surrounding the application cores in Qualcomm platforms. 16 + 17 + properties: 18 + compatible: 19 + items: 20 + - enum: 21 + - qcom,sdm660-gold-saw2-v4.1-l2 22 + - qcom,sdm660-silver-saw2-v4.1-l2 23 + - qcom,msm8998-gold-saw2-v4.1-l2 24 + - qcom,msm8998-silver-saw2-v4.1-l2 25 + - qcom,msm8916-saw2-v3.0-cpu 26 + - qcom,msm8226-saw2-v2.1-cpu 27 + - qcom,msm8974-saw2-v2.1-cpu 28 + - qcom,apq8084-saw2-v2.1-cpu 29 + - qcom,apq8064-saw2-v1.1-cpu 30 + - const: qcom,saw2 31 + 32 + reg: 33 + description: Base address and size of the SPM register region 34 + maxItems: 1 35 + 36 + required: 37 + - compatible 38 + - reg 39 + 40 + additionalProperties: false 41 + 42 + examples: 43 + - | 44 + 45 + /* Example 1: SoC using SAW2 and kpss-acc-v2 CPUIdle */ 46 + cpus { 47 + #address-cells = <1>; 48 + #size-cells = <0>; 49 + 50 + cpu@0 { 51 + compatible = "qcom,kryo"; 52 + device_type = "cpu"; 53 + enable-method = "qcom,kpss-acc-v2"; 54 + qcom,saw = <&saw0>; 55 + reg = <0x0>; 56 + operating-points-v2 = <&cpu_opp_table>; 57 + }; 58 + }; 59 + 60 + saw0: power-manager@f9089000 { 61 + compatible = "qcom,msm8974-saw2-v2.1-cpu", "qcom,saw2"; 62 + reg = <0xf9089000 0x1000>; 63 + }; 64 + 65 + - | 66 + 67 + /* 68 + * Example 2: New-gen multi cluster SoC using SAW only for L2; 69 + * This does not require any cpuidle driver, nor any cpu phandle. 
70 + */ 71 + power-manager@17812000 { 72 + compatible = "qcom,msm8998-gold-saw2-v4.1-l2", "qcom,saw2"; 73 + reg = <0x17812000 0x1000>; 74 + }; 75 + 76 + power-manager@17912000 { 77 + compatible = "qcom,msm8998-silver-saw2-v4.1-l2", "qcom,saw2"; 78 + reg = <0x17912000 0x1000>; 79 + }; 80 + 81 + ...
+47
Documentation/devicetree/bindings/soc/qcom/qcom-stats.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/soc/qcom/qcom-stats.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: Qualcomm Technologies, Inc. (QTI) Stats bindings 8 + 9 + maintainers: 10 + - Maulik Shah <mkshah@codeaurora.org> 11 + 12 + description: 13 + Always On Processor/Resource Power Manager maintains statistics of the SoC 14 + sleep modes involving powering down of the rails and oscillator clock. 15 + 16 + Statistics includes SoC sleep mode type, number of times low power mode were 17 + entered, time of last entry, time of last exit and accumulated sleep duration. 18 + 19 + properties: 20 + compatible: 21 + enum: 22 + - qcom,rpmh-stats 23 + - qcom,rpm-stats 24 + 25 + reg: 26 + maxItems: 1 27 + 28 + required: 29 + - compatible 30 + - reg 31 + 32 + additionalProperties: false 33 + 34 + examples: 35 + # Example of rpmh sleep stats 36 + - | 37 + sram@c3f0000 { 38 + compatible = "qcom,rpmh-stats"; 39 + reg = <0x0c3f0000 0x400>; 40 + }; 41 + # Example of rpm sleep stats 42 + - | 43 + sram@4690000 { 44 + compatible = "qcom,rpm-stats"; 45 + reg = <0x04690000 0x10000>; 46 + }; 47 + ...
+4 -1
Documentation/devicetree/bindings/sram/sram.yaml
···
 31  31         - amlogic,meson-gxbb-sram
 32  32         - arm,juno-sram-ns
 33  33         - atmel,sama5d2-securam
     34 +       - qcom,rpm-msg-ram
 34  35         - rockchip,rk3288-pmu-sram
 35  36
 36  37     reg:
···
136 135       properties:
137 136         compatible:
138 137           contains:
139     -           const: rockchip,rk3288-pmu-sram
    138 +           enum:
    139 +             - qcom,rpm-msg-ram
    140 +             - rockchip,rk3288-pmu-sram
140 141
141 142     else:
142 143       required:
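With `qcom,rpm-msg-ram` added to the exception list above, such a node needs only `compatible` and `reg` (no clocks). A minimal sketch, with an illustrative address and size rather than values from a real SoC dtsi:

```
sram@60000 {
	compatible = "qcom,rpm-msg-ram";
	reg = <0x60000 0x6000>;
};
```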
+2
Documentation/devicetree/bindings/vendor-prefixes.yaml
···
359 359     description: Shenzhen Elida Technology Co., Ltd.
360 360   "^elimo,.*":
361 361     description: Elimo Engineering Ltd.
    362 + "^elpida,.*":
    363 +   description: Elpida Memory, Inc.
362 364   "^embest,.*":
363 365     description: Shenzhen Embest Technology Co., Ltd.
364 366   "^emlid,.*":
+8
MAINTAINERS
···
11992 11992   S:	Maintained
11993 11993   F:	drivers/char/hw_random/mtk-rng.c
11994 11994
      11995 + MEDIATEK SMI DRIVER
      11996 + M:	Yong Wu <yong.wu@mediatek.com>
      11997 + L:	linux-mediatek@lists.infradead.org (moderated for non-subscribers)
      11998 + S:	Supported
      11999 + F:	Documentation/devicetree/bindings/memory-controllers/mediatek,smi*
      12000 + F:	drivers/memory/mtk-smi.c
      12001 + F:	include/soc/mediatek/smi.h
      12002 +
11995 12003   MEDIATEK SWITCH DRIVER
11996 12004   M:	Sean Wang <sean.wang@mediatek.com>
11997 12005   M:	Landen Chao <Landen.Chao@mediatek.com>
+1 -2
arch/arm/Kconfig
···
351 351   	select CLKSRC_MMIO
352 352   	select CPU_ARM920T
353 353   	select GPIOLIB
354     -  	select HAVE_LEGACY_CLK
    354 +  	select COMMON_CLK
355 355   	help
356 356   	  This enables support for the Cirrus EP93xx series of CPUs.
357 357
···
480 480   	select GPIOLIB
481 481   	select GENERIC_IRQ_MULTI_HANDLER
482 482   	select HAVE_S3C2410_I2C if I2C
483     -  	select HAVE_S3C_RTC if RTC_CLASS
484 483   	select NEED_MACH_IO_H
485 484   	select S3C2410_WATCHDOG
486 485   	select SAMSUNG_ATAGS
+593 -452
arch/arm/mach-ep93xx/clock.c
···
  #include <linux/io.h>
  #include <linux/spinlock.h>
  #include <linux/clkdev.h>
+ #include <linux/clk-provider.h>
  #include <linux/soc/cirrus/ep93xx.h>

  #include "hardware.h"
···

  #include "soc.h"

- struct clk {
-         struct clk *parent;
-         unsigned long rate;
-         int users;
-         int sw_locked;
-         void __iomem *enable_reg;
-         u32 enable_mask;
-
-         unsigned long (*get_rate)(struct clk *clk);
-         int (*set_rate)(struct clk *clk, unsigned long rate);
- };
-
-
- static unsigned long get_uart_rate(struct clk *clk);
-
- static int set_keytchclk_rate(struct clk *clk, unsigned long rate);
- static int set_div_rate(struct clk *clk, unsigned long rate);
- static int set_i2s_sclk_rate(struct clk *clk, unsigned long rate);
- static int set_i2s_lrclk_rate(struct clk *clk, unsigned long rate);
-
- static struct clk clk_xtali = {
-         .rate = EP93XX_EXT_CLK_RATE,
- };
- static struct clk clk_uart1 = {
-         .parent = &clk_xtali,
-         .sw_locked = 1,
-         .enable_reg = EP93XX_SYSCON_DEVCFG,
-         .enable_mask = EP93XX_SYSCON_DEVCFG_U1EN,
-         .get_rate = get_uart_rate,
- };
- static struct clk clk_uart2 = {
-         .parent = &clk_xtali,
-         .sw_locked = 1,
-         .enable_reg = EP93XX_SYSCON_DEVCFG,
-         .enable_mask = EP93XX_SYSCON_DEVCFG_U2EN,
-         .get_rate = get_uart_rate,
- };
- static struct clk clk_uart3 = {
-         .parent = &clk_xtali,
-         .sw_locked = 1,
-         .enable_reg = EP93XX_SYSCON_DEVCFG,
-         .enable_mask = EP93XX_SYSCON_DEVCFG_U3EN,
-         .get_rate = get_uart_rate,
- };
- static struct clk clk_pll1 = {
-         .parent = &clk_xtali,
- };
- static struct clk clk_f = {
-         .parent = &clk_pll1,
- };
- static struct clk clk_h = {
-         .parent = &clk_pll1,
- };
- static struct clk clk_p = {
-         .parent = &clk_pll1,
- };
- static struct clk clk_pll2 = {
-         .parent = &clk_xtali,
- };
- static struct clk clk_usb_host = {
-         .parent = &clk_pll2,
-         .enable_reg = EP93XX_SYSCON_PWRCNT,
-         .enable_mask = EP93XX_SYSCON_PWRCNT_USH_EN,
- };
- static struct clk clk_keypad = {
-         .parent = &clk_xtali,
-         .sw_locked = 1,
-         .enable_reg = EP93XX_SYSCON_KEYTCHCLKDIV,
-         .enable_mask = EP93XX_SYSCON_KEYTCHCLKDIV_KEN,
-         .set_rate = set_keytchclk_rate,
- };
- static struct clk clk_adc = {
-         .parent = &clk_xtali,
-         .sw_locked = 1,
-         .enable_reg = EP93XX_SYSCON_KEYTCHCLKDIV,
-         .enable_mask = EP93XX_SYSCON_KEYTCHCLKDIV_TSEN,
-         .set_rate = set_keytchclk_rate,
- };
- static struct clk clk_spi = {
-         .parent = &clk_xtali,
-         .rate = EP93XX_EXT_CLK_RATE,
- };
- static struct clk clk_pwm = {
-         .parent = &clk_xtali,
-         .rate = EP93XX_EXT_CLK_RATE,
- };
-
- static struct clk clk_video = {
-         .sw_locked = 1,
-         .enable_reg = EP93XX_SYSCON_VIDCLKDIV,
-         .enable_mask = EP93XX_SYSCON_CLKDIV_ENABLE,
-         .set_rate = set_div_rate,
- };
-
- static struct clk clk_i2s_mclk = {
-         .sw_locked = 1,
-         .enable_reg = EP93XX_SYSCON_I2SCLKDIV,
-         .enable_mask = EP93XX_SYSCON_CLKDIV_ENABLE,
-         .set_rate = set_div_rate,
- };
-
- static struct clk clk_i2s_sclk = {
-         .sw_locked = 1,
-         .parent = &clk_i2s_mclk,
-         .enable_reg = EP93XX_SYSCON_I2SCLKDIV,
-         .enable_mask = EP93XX_SYSCON_I2SCLKDIV_SENA,
-         .set_rate = set_i2s_sclk_rate,
- };
-
- static struct clk clk_i2s_lrclk = {
-         .sw_locked = 1,
-         .parent = &clk_i2s_sclk,
-         .enable_reg = EP93XX_SYSCON_I2SCLKDIV,
-         .enable_mask = EP93XX_SYSCON_I2SCLKDIV_SENA,
-         .set_rate = set_i2s_lrclk_rate,
- };
-
- /* DMA Clocks */
- static struct clk clk_m2p0 = {
-         .parent = &clk_h,
-         .enable_reg = EP93XX_SYSCON_PWRCNT,
-         .enable_mask = EP93XX_SYSCON_PWRCNT_DMA_M2P0,
- };
- static struct clk clk_m2p1 = {
-         .parent = &clk_h,
-         .enable_reg = EP93XX_SYSCON_PWRCNT,
-         .enable_mask = EP93XX_SYSCON_PWRCNT_DMA_M2P1,
- };
- static struct clk clk_m2p2 = {
-         .parent = &clk_h,
-         .enable_reg = EP93XX_SYSCON_PWRCNT,
-         .enable_mask = EP93XX_SYSCON_PWRCNT_DMA_M2P2,
- };
- static struct clk clk_m2p3 = {
-         .parent = &clk_h,
-         .enable_reg = EP93XX_SYSCON_PWRCNT,
-         .enable_mask = EP93XX_SYSCON_PWRCNT_DMA_M2P3,
- };
- static struct clk clk_m2p4 = {
-         .parent = &clk_h,
-         .enable_reg = EP93XX_SYSCON_PWRCNT,
-         .enable_mask = EP93XX_SYSCON_PWRCNT_DMA_M2P4,
- };
- static struct clk clk_m2p5 = {
-         .parent = &clk_h,
-         .enable_reg = EP93XX_SYSCON_PWRCNT,
-         .enable_mask = EP93XX_SYSCON_PWRCNT_DMA_M2P5,
- };
- static struct clk clk_m2p6 = {
-         .parent = &clk_h,
-         .enable_reg = EP93XX_SYSCON_PWRCNT,
-         .enable_mask = EP93XX_SYSCON_PWRCNT_DMA_M2P6,
- };
- static struct clk clk_m2p7 = {
-         .parent = &clk_h,
-         .enable_reg = EP93XX_SYSCON_PWRCNT,
-         .enable_mask = EP93XX_SYSCON_PWRCNT_DMA_M2P7,
- };
- static struct clk clk_m2p8 = {
-         .parent = &clk_h,
-         .enable_reg = EP93XX_SYSCON_PWRCNT,
-         .enable_mask = EP93XX_SYSCON_PWRCNT_DMA_M2P8,
- };
- static struct clk clk_m2p9 = {
-         .parent = &clk_h,
-         .enable_reg = EP93XX_SYSCON_PWRCNT,
-         .enable_mask = EP93XX_SYSCON_PWRCNT_DMA_M2P9,
- };
- static struct clk clk_m2m0 = {
-         .parent = &clk_h,
-         .enable_reg = EP93XX_SYSCON_PWRCNT,
-         .enable_mask = EP93XX_SYSCON_PWRCNT_DMA_M2M0,
- };
- static struct clk clk_m2m1 = {
-         .parent = &clk_h,
-         .enable_reg = EP93XX_SYSCON_PWRCNT,
-         .enable_mask = EP93XX_SYSCON_PWRCNT_DMA_M2M1,
- };
-
- #define INIT_CK(dev,con,ck) \
-         { .dev_id = dev, .con_id = con, .clk = ck }
-
- static struct clk_lookup clocks[] = {
-         INIT_CK(NULL, "xtali", &clk_xtali),
-         INIT_CK("apb:uart1", NULL, &clk_uart1),
-         INIT_CK("apb:uart2", NULL, &clk_uart2),
-         INIT_CK("apb:uart3", NULL, &clk_uart3),
-         INIT_CK(NULL, "pll1", &clk_pll1),
-         INIT_CK(NULL, "fclk", &clk_f),
-         INIT_CK(NULL, "hclk", &clk_h),
-         INIT_CK(NULL, "apb_pclk", &clk_p),
-         INIT_CK(NULL, "pll2", &clk_pll2),
-         INIT_CK("ohci-platform", NULL, &clk_usb_host),
-         INIT_CK("ep93xx-keypad", NULL, &clk_keypad),
-         INIT_CK("ep93xx-adc", NULL, &clk_adc),
-         INIT_CK("ep93xx-fb", NULL, &clk_video),
-         INIT_CK("ep93xx-spi.0", NULL, &clk_spi),
-         INIT_CK("ep93xx-i2s", "mclk", &clk_i2s_mclk),
-         INIT_CK("ep93xx-i2s", "sclk", &clk_i2s_sclk),
-         INIT_CK("ep93xx-i2s", "lrclk", &clk_i2s_lrclk),
-         INIT_CK(NULL, "pwm_clk", &clk_pwm),
-         INIT_CK(NULL, "m2p0", &clk_m2p0),
-         INIT_CK(NULL, "m2p1", &clk_m2p1),
-         INIT_CK(NULL, "m2p2", &clk_m2p2),
-         INIT_CK(NULL, "m2p3", &clk_m2p3),
-         INIT_CK(NULL, "m2p4", &clk_m2p4),
-         INIT_CK(NULL, "m2p5", &clk_m2p5),
-         INIT_CK(NULL, "m2p6", &clk_m2p6),
-         INIT_CK(NULL, "m2p7", &clk_m2p7),
-         INIT_CK(NULL, "m2p8", &clk_m2p8),
-         INIT_CK(NULL, "m2p9", &clk_m2p9),
-         INIT_CK(NULL, "m2m0", &clk_m2m0),
-         INIT_CK(NULL, "m2m1", &clk_m2m1),
- };
-
  static DEFINE_SPINLOCK(clk_lock);

- static void __clk_enable(struct clk *clk)
+ static char fclk_divisors[] = { 1, 2, 4, 8, 16, 1, 1, 1 };
+ static char hclk_divisors[] = { 1, 2, 4, 5, 6, 8, 16, 32 };
+ static char pclk_divisors[] = { 1, 2, 4, 8 };
+
+ static char adc_divisors[] = { 16, 4 };
+ static char sclk_divisors[] = { 2, 4 };
+ static char lrclk_divisors[] = { 32, 64, 128 };
+
+ static const char * const mux_parents[] = {
+         "xtali",
+         "pll1",
+         "pll2"
+ };
+
+ /*
+  * PLL rate = 14.7456 MHz * (X1FBD + 1) * (X2FBD + 1) / (X2IPD + 1) / 2^PS
+  */
+ static unsigned long calc_pll_rate(unsigned long long rate, u32 config_word)
  {
-         if (!clk->users++) {
-                 if (clk->parent)
-                         __clk_enable(clk->parent);
+         int i;

-                 if (clk->enable_reg) {
-                         u32 v;
+         rate *= ((config_word >> 11) & 0x1f) + 1; /* X1FBD */
+         rate *= ((config_word >> 5) & 0x3f) + 1; /* X2FBD */
+         do_div(rate, (config_word & 0x1f) + 1); /* X2IPD */
+         for (i = 0; i < ((config_word >> 16) & 3); i++) /* PS */
+                 rate >>= 1;

-                         v = __raw_readl(clk->enable_reg);
-                         v |= clk->enable_mask;
-                         if (clk->sw_locked)
-                                 ep93xx_syscon_swlocked_write(v, clk->enable_reg);
-                         else
-                                 __raw_writel(v, clk->enable_reg);
-                 }
-         }
+         return (unsigned long)rate;
  }

- int clk_enable(struct clk *clk)
+ struct clk_psc {
+         struct clk_hw hw;
+         void __iomem *reg;
+         u8 bit_idx;
+         u32 mask;
+         u8 shift;
+         u8 width;
+         char *div;
+         u8 num_div;
+         spinlock_t *lock;
+ };
+
+ #define to_clk_psc(_hw) container_of(_hw, struct clk_psc, hw)
+
+ static int ep93xx_clk_is_enabled(struct clk_hw *hw)
  {
-         unsigned long flags;
+         struct clk_psc *psc = to_clk_psc(hw);
+         u32 val = readl(psc->reg);

-         if (!clk)
-                 return -EINVAL;
-
-         spin_lock_irqsave(&clk_lock, flags);
-         __clk_enable(clk);
-         spin_unlock_irqrestore(&clk_lock, flags);
-
-         return 0;
- }
- EXPORT_SYMBOL(clk_enable);
-
- static void __clk_disable(struct clk *clk)
- {
-         if (!--clk->users) {
-                 if (clk->enable_reg) {
-                         u32 v;
-
-                         v = __raw_readl(clk->enable_reg);
-                         v &= ~clk->enable_mask;
-                         if (clk->sw_locked)
-                                 ep93xx_syscon_swlocked_write(v, clk->enable_reg);
-                         else
-                                 __raw_writel(v, clk->enable_reg);
-                 }
-
-                 if (clk->parent)
-                         __clk_disable(clk->parent);
-         }
+         return (val & BIT(psc->bit_idx)) ? 1 : 0;
  }

- void clk_disable(struct clk *clk)
+ static int ep93xx_clk_enable(struct clk_hw *hw)
  {
-         unsigned long flags;
-
-         if (!clk)
-                 return;
-
-         spin_lock_irqsave(&clk_lock, flags);
-         __clk_disable(clk);
-         spin_unlock_irqrestore(&clk_lock, flags);
- }
- EXPORT_SYMBOL(clk_disable);
-
- static unsigned long get_uart_rate(struct clk *clk)
- {
-         unsigned long rate = clk_get_rate(clk->parent);
-         u32 value;
-
-         value = __raw_readl(EP93XX_SYSCON_PWRCNT);
-         if (value & EP93XX_SYSCON_PWRCNT_UARTBAUD)
-                 return rate;
-         else
-                 return rate / 2;
- }
-
- unsigned long clk_get_rate(struct clk *clk)
- {
-         if (clk->get_rate)
-                 return clk->get_rate(clk);
-
-         return clk->rate;
- }
- EXPORT_SYMBOL(clk_get_rate);
-
- static int set_keytchclk_rate(struct clk *clk, unsigned long rate)
- {
+         struct clk_psc *psc = to_clk_psc(hw);
+         unsigned long flags = 0;
          u32 val;
-         u32 div_bit;

-         val = __raw_readl(clk->enable_reg);
+         if (psc->lock)
+                 spin_lock_irqsave(psc->lock, flags);

-         /*
-          * The Key Matrix and ADC clocks are configured using the same
-          * System Controller register. The clock used will be either
-          * 1/4 or 1/16 the external clock rate depending on the
-          * EP93XX_SYSCON_KEYTCHCLKDIV_KDIV/EP93XX_SYSCON_KEYTCHCLKDIV_ADIV
-          * bit being set or cleared.
-          */
-         div_bit = clk->enable_mask >> 15;
+         val = __raw_readl(psc->reg);
+         val |= BIT(psc->bit_idx);

-         if (rate == EP93XX_KEYTCHCLK_DIV4)
-                 val |= div_bit;
-         else if (rate == EP93XX_KEYTCHCLK_DIV16)
-                 val &= ~div_bit;
-         else
-                 return -EINVAL;
+         ep93xx_syscon_swlocked_write(val, psc->reg);

-         ep93xx_syscon_swlocked_write(val, clk->enable_reg);
-         clk->rate = rate;
+         if (psc->lock)
+                 spin_unlock_irqrestore(psc->lock, flags);
+
          return 0;
  }

- static int calc_clk_div(struct clk *clk, unsigned long rate,
-                         int *psel, int *esel, int *pdiv, int *div)
+ static void ep93xx_clk_disable(struct clk_hw *hw)
  {
-         struct clk *mclk;
-         unsigned long max_rate, actual_rate, mclk_rate, rate_err = -1;
-         int i, found = 0, __div = 0, __pdiv = 0;
+         struct clk_psc *psc = to_clk_psc(hw);
+         unsigned long flags = 0;
+         u32 val;

-         /* Don't exceed the maximum rate */
-         max_rate = max3(clk_pll1.rate / 4, clk_pll2.rate / 4, clk_xtali.rate / 4);
-         rate = min(rate, max_rate);
+         if (psc->lock)
+                 spin_lock_irqsave(psc->lock, flags);
+
+         val = __raw_readl(psc->reg);
+         val &= ~BIT(psc->bit_idx);
+
+         ep93xx_syscon_swlocked_write(val, psc->reg);
+
+         if (psc->lock)
+                 spin_unlock_irqrestore(psc->lock, flags);
+ }
+
+ static const struct clk_ops clk_ep93xx_gate_ops = {
+         .enable = ep93xx_clk_enable,
+         .disable = ep93xx_clk_disable,
+         .is_enabled = ep93xx_clk_is_enabled,
+ };
+
+ static struct clk_hw *ep93xx_clk_register_gate(const char *name,
+                                 const char *parent_name,
+                                 void __iomem *reg,
+                                 u8 bit_idx)
+ {
+         struct clk_init_data init;
+         struct clk_psc *psc;
+         struct clk *clk;
+
+         psc = kzalloc(sizeof(*psc), GFP_KERNEL);
+         if (!psc)
+                 return ERR_PTR(-ENOMEM);
+
+         init.name = name;
+         init.ops = &clk_ep93xx_gate_ops;
+         init.flags = CLK_SET_RATE_PARENT;
+         init.parent_names = (parent_name ? &parent_name : NULL);
+         init.num_parents = (parent_name ? 1 : 0);
+
+         psc->reg = reg;
+         psc->bit_idx = bit_idx;
+         psc->hw.init = &init;
+         psc->lock = &clk_lock;
+
+         clk = clk_register(NULL, &psc->hw);
+         if (IS_ERR(clk))
+                 kfree(psc);
+
+         return &psc->hw;
+ }
+
+ static u8 ep93xx_mux_get_parent(struct clk_hw *hw)
+ {
+         struct clk_psc *psc = to_clk_psc(hw);
+         u32 val = __raw_readl(psc->reg);
+
+         if (!(val & EP93XX_SYSCON_CLKDIV_ESEL))
+                 return 0;
+
+         if (!(val & EP93XX_SYSCON_CLKDIV_PSEL))
+                 return 1;
+
+         return 2;
+ }
+
+ static int ep93xx_mux_set_parent_lock(struct clk_hw *hw, u8 index)
+ {
+         struct clk_psc *psc = to_clk_psc(hw);
+         unsigned long flags = 0;
+         u32 val;
+
+         if (index >= ARRAY_SIZE(mux_parents))
+                 return -EINVAL;
+
+         if (psc->lock)
+                 spin_lock_irqsave(psc->lock, flags);
+
+         val = __raw_readl(psc->reg);
+         val &= ~(EP93XX_SYSCON_CLKDIV_ESEL | EP93XX_SYSCON_CLKDIV_PSEL);
+
+         if (index != 0) {
+                 val |= EP93XX_SYSCON_CLKDIV_ESEL;
+                 val |= (index - 1) ? EP93XX_SYSCON_CLKDIV_PSEL : 0;
+         }
+
+         ep93xx_syscon_swlocked_write(val, psc->reg);
+
+         if (psc->lock)
+                 spin_unlock_irqrestore(psc->lock, flags);
+
+         return 0;
+ }
+
+ static bool is_best(unsigned long rate, unsigned long now,
+                 unsigned long best)
+ {
+         return abs(rate - now) < abs(rate - best);
+ }
+
+ static int ep93xx_mux_determine_rate(struct clk_hw *hw,
+                 struct clk_rate_request *req)
+ {
+         unsigned long rate = req->rate;
+         struct clk *best_parent = 0;
+         unsigned long __parent_rate;
+         unsigned long best_rate = 0, actual_rate, mclk_rate;
+         unsigned long best_parent_rate;
+         int __div = 0, __pdiv = 0;
+         int i;

          /*
           * Try the two pll's and the external clock
···
           * http://be-a-maverick.com/en/pubs/appNote/AN269REV1.pdf
           *
           */
-         for (i = 0; i < 3; i++) {
-                 if (i == 0)
-                         mclk = &clk_xtali;
-                 else if (i == 1)
-                         mclk = &clk_pll1;
-                 else
-                         mclk = &clk_pll2;
-                 mclk_rate = mclk->rate * 2;
+         for (i = 0; i < ARRAY_SIZE(mux_parents); i++) {
+                 struct clk *parent = clk_get_sys(mux_parents[i], NULL);
+
+                 __parent_rate = clk_get_rate(parent);
+                 mclk_rate = __parent_rate * 2;

                  /* Try each predivider value */
                  for (__pdiv = 4; __pdiv <= 6; __pdiv++) {
···
                                  continue;

                          actual_rate = mclk_rate / (__pdiv * __div);
-
-                         if (!found || abs(actual_rate - rate) < rate_err) {
-                                 *pdiv = __pdiv - 3;
-                                 *div = __div;
-                                 *psel = (i == 2);
-                                 *esel = (i != 0);
-                                 clk->parent = mclk;
-                                 clk->rate = actual_rate;
-                                 rate_err = abs(actual_rate - rate);
-                                 found = 1;
+                         if (is_best(rate, actual_rate, best_rate)) {
+                                 best_rate = actual_rate;
+                                 best_parent_rate = __parent_rate;
+                                 best_parent = parent;
                          }
                  }
          }

-         if (!found)
+         if (!best_parent)
                  return -EINVAL;
+
+         req->best_parent_rate = best_parent_rate;
+         req->best_parent_hw = __clk_get_hw(best_parent);
+         req->rate = best_rate;

          return 0;
  }

- static int set_div_rate(struct clk *clk, unsigned long rate)
+ static unsigned long ep93xx_ddiv_recalc_rate(struct clk_hw *hw,
+                 unsigned long parent_rate)
  {
-         int err, psel = 0, esel = 0, pdiv = 0, div = 0;
+         struct clk_psc *psc = to_clk_psc(hw);
+         unsigned long rate = 0;
+         u32 val = __raw_readl(psc->reg);
+         int __pdiv = ((val >> EP93XX_SYSCON_CLKDIV_PDIV_SHIFT) & 0x03);
+         int __div = val & 0x7f;
+
+         if (__div > 0)
+                 rate = (parent_rate * 2) / ((__pdiv + 3) * __div);
+
+         return rate;
+ }
+
+ static int ep93xx_ddiv_set_rate(struct clk_hw *hw, unsigned long rate,
+                 unsigned long parent_rate)
+ {
+         struct clk_psc *psc = to_clk_psc(hw);
+         int pdiv = 0, div = 0;
+         unsigned long best_rate = 0, actual_rate, mclk_rate;
+         int __div = 0, __pdiv = 0;
          u32 val;

-         err = calc_clk_div(clk, rate, &psel, &esel, &pdiv, &div);
-         if (err)
-                 return err;
+         mclk_rate = parent_rate * 2;

-         /* Clear the esel, psel, pdiv and div bits */
-         val = __raw_readl(clk->enable_reg);
-         val &= ~0x7fff;
+         for (__pdiv = 4; __pdiv <= 6; __pdiv++) {
+                 __div = mclk_rate / (rate * __pdiv);
+                 if (__div < 2 || __div > 127)
+                         continue;

-         /* Set the new esel, psel, pdiv and div bits for the new clock rate */
-         val |= (esel ? EP93XX_SYSCON_CLKDIV_ESEL : 0) |
-                 (psel ? EP93XX_SYSCON_CLKDIV_PSEL : 0) |
-                 (pdiv << EP93XX_SYSCON_CLKDIV_PDIV_SHIFT) | div;
-         ep93xx_syscon_swlocked_write(val, clk->enable_reg);
-         return 0;
- }
+                 actual_rate = mclk_rate / (__pdiv * __div);
+                 if (is_best(rate, actual_rate, best_rate)) {
+                         pdiv = __pdiv - 3;
+                         div = __div;
+                         best_rate = actual_rate;
+                 }
+         }

- static int set_i2s_sclk_rate(struct clk *clk, unsigned long rate)
- {
-         unsigned val = __raw_readl(clk->enable_reg);
-
-         if (rate == clk_i2s_mclk.rate / 2)
-                 ep93xx_syscon_swlocked_write(val & ~EP93XX_I2SCLKDIV_SDIV,
-                                 clk->enable_reg);
-         else if (rate == clk_i2s_mclk.rate / 4)
-                 ep93xx_syscon_swlocked_write(val | EP93XX_I2SCLKDIV_SDIV,
-                                 clk->enable_reg);
-         else
+         if (!best_rate)
                  return -EINVAL;

-         clk_i2s_sclk.rate = rate;
+         val = __raw_readl(psc->reg);
+
+         /* Clear old dividers */
+         val &= ~0x37f;
+
+         /* Set the new pdiv and div bits for the new clock rate */
+         val |= (pdiv << EP93XX_SYSCON_CLKDIV_PDIV_SHIFT) | div;
+         ep93xx_syscon_swlocked_write(val, psc->reg);
+
          return 0;
  }

- static int set_i2s_lrclk_rate(struct clk *clk, unsigned long rate)
- {
-         unsigned val = __raw_readl(clk->enable_reg) &
-                 ~EP93XX_I2SCLKDIV_LRDIV_MASK;
-
-         if (rate == clk_i2s_sclk.rate / 32)
-                 ep93xx_syscon_swlocked_write(val | EP93XX_I2SCLKDIV_LRDIV32,
-                                 clk->enable_reg);
-         else if (rate == clk_i2s_sclk.rate / 64)
-                 ep93xx_syscon_swlocked_write(val | EP93XX_I2SCLKDIV_LRDIV64,
-                                 clk->enable_reg);
-         else if (rate == clk_i2s_sclk.rate / 128)
-                 ep93xx_syscon_swlocked_write(val | EP93XX_I2SCLKDIV_LRDIV128,
-                                 clk->enable_reg);
-         else
-                 return -EINVAL;
+ static const struct clk_ops clk_ddiv_ops = {
+         .enable = ep93xx_clk_enable,
+         .disable = ep93xx_clk_disable,
+         .is_enabled = ep93xx_clk_is_enabled,
+         .get_parent = ep93xx_mux_get_parent,
+         .set_parent = ep93xx_mux_set_parent_lock,
+         .determine_rate = ep93xx_mux_determine_rate,
+         .recalc_rate = ep93xx_ddiv_recalc_rate,
+         .set_rate = ep93xx_ddiv_set_rate,
+ };

-         clk_i2s_lrclk.rate = rate;
-         return 0;
+ static struct clk_hw *clk_hw_register_ddiv(const char *name,
+                                 void __iomem *reg,
+                                 u8 bit_idx)
+ {
+         struct clk_init_data init;
+         struct clk_psc *psc;
+         struct clk *clk;
+
+         psc = kzalloc(sizeof(*psc), GFP_KERNEL);
+         if (!psc)
+                 return ERR_PTR(-ENOMEM);
+
+         init.name = name;
+         init.ops = &clk_ddiv_ops;
+         init.flags = 0;
+         init.parent_names = mux_parents;
+         init.num_parents = ARRAY_SIZE(mux_parents);
+
+         psc->reg = reg;
+         psc->bit_idx = bit_idx;
+         psc->lock = &clk_lock;
+         psc->hw.init = &init;
+
+         clk = clk_register(NULL, &psc->hw);
+         if (IS_ERR(clk))
+                 kfree(psc);
+
+         return &psc->hw;
  }

- int clk_set_rate(struct clk *clk, unsigned long rate)
+ static unsigned long ep93xx_div_recalc_rate(struct clk_hw *hw,
+                 unsigned long parent_rate)
  {
-         if (clk->set_rate)
-                 return clk->set_rate(clk, rate);
+         struct clk_psc *psc = to_clk_psc(hw);
+         u32 val = __raw_readl(psc->reg);
+         u8 index = (val & psc->mask) >> psc->shift;

-         return -EINVAL;
+         if (index > psc->num_div)
+                 return 0;
+
+         return DIV_ROUND_UP_ULL(parent_rate, psc->div[index]);
  }
- EXPORT_SYMBOL(clk_set_rate);

- long clk_round_rate(struct clk *clk, unsigned long rate)
+ static long ep93xx_div_round_rate(struct clk_hw *hw, unsigned long rate,
+                 unsigned long *parent_rate)
  {
-         WARN_ON(clk);
-         return 0;
- }
- EXPORT_SYMBOL(clk_round_rate);
-
- int clk_set_parent(struct clk *clk, struct clk *parent)
- {
-         WARN_ON(clk);
-         return 0;
- }
- EXPORT_SYMBOL(clk_set_parent);
-
- struct clk *clk_get_parent(struct clk *clk)
- {
-         return clk->parent;
- }
- EXPORT_SYMBOL(clk_get_parent);
-
-
- static char fclk_divisors[] = { 1, 2, 4, 8, 16, 1, 1, 1 };
- static char hclk_divisors[] = { 1, 2, 4, 5, 6, 8, 16, 32 };
- static char pclk_divisors[] = { 1, 2, 4, 8 };
-
- /*
-  * PLL rate = 14.7456 MHz * (X1FBD + 1) * (X2FBD + 1) / (X2IPD + 1) / 2^PS
-  */
- static unsigned long calc_pll_rate(u32 config_word)
- {
-         unsigned long long rate;
+         struct clk_psc *psc = to_clk_psc(hw);
+         unsigned long best = 0, now, maxdiv;
          int i;

-         rate = clk_xtali.rate;
-         rate *= ((config_word >> 11) & 0x1f) + 1; /* X1FBD */
-         rate *= ((config_word >> 5) & 0x3f) + 1; /* X2FBD */
-         do_div(rate, (config_word & 0x1f) + 1); /* X2IPD */
-         for (i = 0; i < ((config_word >> 16) & 3); i++) /* PS */
-                 rate >>= 1;
+         maxdiv = psc->div[psc->num_div - 1];

-         return (unsigned long)rate;
+         for (i = 0; i < psc->num_div; i++) {
+                 if ((rate * psc->div[i]) == *parent_rate)
+                         return DIV_ROUND_UP_ULL((u64)*parent_rate, psc->div[i]);
+
+                 now = DIV_ROUND_UP_ULL((u64)*parent_rate, psc->div[i]);
+
+                 if (is_best(rate, now, best))
+                         best = now;
+         }
+
+         if (!best)
+                 best = DIV_ROUND_UP_ULL(*parent_rate, maxdiv);
+
+         return best;
  }
+
+ static int ep93xx_div_set_rate(struct clk_hw *hw, unsigned long rate,
+                 unsigned long parent_rate)
+ {
+         struct clk_psc *psc = to_clk_psc(hw);
+         u32 val = __raw_readl(psc->reg) & ~psc->mask;
+         int i;
+
+         for (i = 0; i < psc->num_div; i++)
+                 if (rate == parent_rate / psc->div[i]) {
+                         val |= i << psc->shift;
+                         break;
+                 }
+
+         if (i == psc->num_div)
+                 return -EINVAL;
+
+         ep93xx_syscon_swlocked_write(val, psc->reg);
+
+         return 0;
+ }
+
+ static const struct clk_ops ep93xx_div_ops = {
+         .enable = ep93xx_clk_enable,
+         .disable = ep93xx_clk_disable,
+         .is_enabled = ep93xx_clk_is_enabled,
+         .recalc_rate = ep93xx_div_recalc_rate,
+         .round_rate = ep93xx_div_round_rate,
+         .set_rate = ep93xx_div_set_rate,
+ };
+
+ static struct clk_hw *clk_hw_register_div(const char *name,
+                                 const char *parent_name,
+                                 void __iomem *reg,
+                                 u8 enable_bit,
+                                 u8 shift,
+                                 u8 width,
+                                 char *clk_divisors,
+                                 u8 num_div)
+ {
+         struct clk_init_data init;
+         struct clk_psc *psc;
+         struct clk *clk;
+
+         psc = kzalloc(sizeof(*psc), GFP_KERNEL);
+         if (!psc)
+                 return ERR_PTR(-ENOMEM);
+
+         init.name = name;
+         init.ops = &ep93xx_div_ops;
+         init.flags = 0;
+         init.parent_names = (parent_name ? &parent_name : NULL);
+         init.num_parents = 1;
+
+         psc->reg = reg;
+         psc->bit_idx = enable_bit;
+         psc->mask = GENMASK(shift + width - 1, shift);
+         psc->shift = shift;
+         psc->div = clk_divisors;
+         psc->num_div = num_div;
+         psc->lock = &clk_lock;
+         psc->hw.init = &init;
+
+         clk = clk_register(NULL, &psc->hw);
+         if (IS_ERR(clk))
+                 kfree(psc);
+
+         return &psc->hw;
+ }
+
+ struct ep93xx_gate {
+         unsigned int bit;
+         const char *dev_id;
+         const char *con_id;
+ };
+
+ static struct ep93xx_gate ep93xx_uarts[] = {
+         {EP93XX_SYSCON_DEVCFG_U1EN, "apb:uart1", NULL},
+         {EP93XX_SYSCON_DEVCFG_U2EN, "apb:uart2", NULL},
+         {EP93XX_SYSCON_DEVCFG_U3EN, "apb:uart3", NULL},
+ };
+
+ static void __init ep93xx_uart_clock_init(void)
+ {
+         unsigned int i;
+         struct clk_hw *hw;
+         u32 value;
+         unsigned int clk_uart_div;
+
+         value = __raw_readl(EP93XX_SYSCON_PWRCNT);
+         if (value & EP93XX_SYSCON_PWRCNT_UARTBAUD)
+                 clk_uart_div = 1;
+         else
+                 clk_uart_div = 2;
+
+         hw = clk_hw_register_fixed_factor(NULL, "uart", "xtali", 0, 1, clk_uart_div);
+
+         /* parenting uart gate clocks to uart clock */
+         for (i = 0; i < ARRAY_SIZE(ep93xx_uarts); i++) {
+                 hw = ep93xx_clk_register_gate(ep93xx_uarts[i].dev_id,
+                                         "uart",
+                                         EP93XX_SYSCON_DEVCFG,
+                                         ep93xx_uarts[i].bit);
+
+                 clk_hw_register_clkdev(hw, NULL, ep93xx_uarts[i].dev_id);
+         }
+ }
+
+ static struct ep93xx_gate ep93xx_dmas[] = {
+         {EP93XX_SYSCON_PWRCNT_DMA_M2P0, NULL, "m2p0"},
+         {EP93XX_SYSCON_PWRCNT_DMA_M2P1, NULL, "m2p1"},
+         {EP93XX_SYSCON_PWRCNT_DMA_M2P2, NULL, "m2p2"},
+         {EP93XX_SYSCON_PWRCNT_DMA_M2P3, NULL, "m2p3"},
+         {EP93XX_SYSCON_PWRCNT_DMA_M2P4, NULL, "m2p4"},
+         {EP93XX_SYSCON_PWRCNT_DMA_M2P5, NULL, "m2p5"},
+         {EP93XX_SYSCON_PWRCNT_DMA_M2P6, NULL, "m2p6"},
+         {EP93XX_SYSCON_PWRCNT_DMA_M2P7, NULL, "m2p7"},
+         {EP93XX_SYSCON_PWRCNT_DMA_M2P8, NULL, "m2p8"},
+         {EP93XX_SYSCON_PWRCNT_DMA_M2P9, NULL, "m2p9"},
+         {EP93XX_SYSCON_PWRCNT_DMA_M2M0, NULL, "m2m0"},
+         {EP93XX_SYSCON_PWRCNT_DMA_M2M1, NULL, "m2m1"},
+ };

  static void __init ep93xx_dma_clock_init(void)
  {
-         clk_m2p0.rate = clk_h.rate;
-         clk_m2p1.rate = clk_h.rate;
-         clk_m2p2.rate = clk_h.rate;
-         clk_m2p3.rate = clk_h.rate;
-         clk_m2p4.rate = clk_h.rate;
-         clk_m2p5.rate = clk_h.rate;
-         clk_m2p6.rate = clk_h.rate;
-         clk_m2p7.rate = clk_h.rate;
-         clk_m2p8.rate = clk_h.rate;
-         clk_m2p9.rate = clk_h.rate;
-         clk_m2m0.rate = clk_h.rate;
-         clk_m2m1.rate = clk_h.rate;
+         unsigned int i;
+         struct clk_hw *hw;
+         int ret;
+
+         for (i = 0; i < ARRAY_SIZE(ep93xx_dmas); i++) {
+                 hw = clk_hw_register_gate(NULL, ep93xx_dmas[i].con_id,
+                                         "hclk", 0,
+                                         EP93XX_SYSCON_PWRCNT,
+                                         ep93xx_dmas[i].bit,
+                                         0,
+                                         &clk_lock);
+
+                 ret = clk_hw_register_clkdev(hw, ep93xx_dmas[i].con_id, NULL);
+                 if (ret)
+                         pr_err("%s: failed to register lookup %s\n",
+                                 __func__, ep93xx_dmas[i].con_id);
+         }
  }

  static int __init ep93xx_clock_init(void)
  {
          u32 value;
+         struct clk_hw *hw;
+         unsigned long clk_pll1_rate;
+         unsigned long clk_f_rate;
+         unsigned long clk_h_rate;
+         unsigned long clk_p_rate;
+         unsigned long clk_pll2_rate;
+         unsigned int clk_f_div;
+         unsigned int clk_h_div;
+         unsigned int clk_p_div;
+         unsigned int clk_usb_div;
+         unsigned long clk_spi_div;
+
+         hw = clk_hw_register_fixed_rate(NULL, "xtali", NULL, 0, EP93XX_EXT_CLK_RATE);
+         clk_hw_register_clkdev(hw, NULL, "xtali");

          /* Determine the bootloader configured pll1 rate */
          value = __raw_readl(EP93XX_SYSCON_CLKSET1);
          if (!(value & EP93XX_SYSCON_CLKSET1_NBYP1))
-                 clk_pll1.rate = clk_xtali.rate;
+                 clk_pll1_rate = EP93XX_EXT_CLK_RATE;
          else
-                 clk_pll1.rate = calc_pll_rate(value);
+                 clk_pll1_rate = calc_pll_rate(EP93XX_EXT_CLK_RATE, value);
+
+         hw = clk_hw_register_fixed_rate(NULL, "pll1", "xtali", 0, clk_pll1_rate);
+         clk_hw_register_clkdev(hw, NULL, "pll1");

          /* Initialize the pll1 derived clocks */
-         clk_f.rate = clk_pll1.rate / fclk_divisors[(value >> 25) & 0x7];
-         clk_h.rate = clk_pll1.rate / hclk_divisors[(value >> 20) & 0x7];
-         clk_p.rate = clk_h.rate / pclk_divisors[(value >> 18) & 0x3];
+         clk_f_div = fclk_divisors[(value >> 25) & 0x7];
+         clk_h_div = hclk_divisors[(value >> 20) & 0x7];
+         clk_p_div = pclk_divisors[(value >> 18) & 0x3];
+
+         hw = clk_hw_register_fixed_factor(NULL, "fclk", "pll1", 0, 1, clk_f_div);
+         clk_f_rate = clk_get_rate(hw->clk);
+         hw = clk_hw_register_fixed_factor(NULL, "hclk", "pll1", 0, 1, clk_h_div);
+         clk_h_rate = clk_get_rate(hw->clk);
+         hw = clk_hw_register_fixed_factor(NULL, "pclk", "hclk", 0, 1, clk_p_div);
+         clk_p_rate = clk_get_rate(hw->clk);
+
+         clk_hw_register_clkdev(hw, "apb_pclk", NULL);
+
          ep93xx_dma_clock_init();

          /* Determine the bootloader configured pll2 rate */
          value = __raw_readl(EP93XX_SYSCON_CLKSET2);
          if (!(value & EP93XX_SYSCON_CLKSET2_NBYP2))
-                 clk_pll2.rate = clk_xtali.rate;
+                 clk_pll2_rate = EP93XX_EXT_CLK_RATE;
          else if (value & EP93XX_SYSCON_CLKSET2_PLL2_EN)
-                 clk_pll2.rate = calc_pll_rate(value);
+                 clk_pll2_rate = calc_pll_rate(EP93XX_EXT_CLK_RATE, value);
          else
-                 clk_pll2.rate = 0;
+                 clk_pll2_rate = 0;
+
+         hw = clk_hw_register_fixed_rate(NULL, "pll2", "xtali", 0, clk_pll2_rate);
+         clk_hw_register_clkdev(hw, NULL, "pll2");

          /* Initialize the pll2 derived clocks */
-         clk_usb_host.rate = clk_pll2.rate / (((value >> 28) & 0xf) + 1);
+         /*
+          * These four bits set the divide ratio between the PLL2
+          * output and the USB clock.
+          * 0000 - Divide by 1
+          * 0001 - Divide by 2
+          * 0010 - Divide by 3
+          * 0011 - Divide by 4
+          * 0100 - Divide by 5
+          * 0101 - Divide by 6
+          * 0110 - Divide by 7
+          * 0111 - Divide by 8
+          * 1000 - Divide by 9
+          * 1001 - Divide by 10
+          * 1010 - Divide by 11
+          * 1011 - Divide by 12
+          * 1100 - Divide by 13
+          * 1101 - Divide by 14
+          * 1110 - Divide by 15
+          * 1111 - Divide by 1
+          * On power-on-reset these bits are reset to 0000b.
+          */
+         clk_usb_div = (((value >> 28) & 0xf) + 1);
+         hw = clk_hw_register_fixed_factor(NULL, "usb_clk", "pll2", 0, 1, clk_usb_div);
+         hw = clk_hw_register_gate(NULL, "ohci-platform",
+                                 "usb_clk", 0,
+                                 EP93XX_SYSCON_PWRCNT,
+                                 EP93XX_SYSCON_PWRCNT_USH_EN,
+                                 0,
+                                 &clk_lock);
+         clk_hw_register_clkdev(hw, NULL, "ohci-platform");

          /*
           * EP93xx SSP clock rate was doubled in version E2. For more information
           * see:
           * http://www.cirrus.com/en/pubs/appNote/AN273REV4.pdf
           */
+         clk_spi_div = 1;
          if (ep93xx_chip_revision() < EP93XX_CHIP_REV_E2)
-                 clk_spi.rate /= 2;
+                 clk_spi_div = 2;
+         hw = clk_hw_register_fixed_factor(NULL, "ep93xx-spi.0", "xtali", 0, 1, clk_spi_div);
+         clk_hw_register_clkdev(hw, NULL, "ep93xx-spi.0");
+
+         /* pwm clock */
+         hw = clk_hw_register_fixed_factor(NULL, "pwm_clk", "xtali", 0, 1, 1);
+         clk_hw_register_clkdev(hw, "pwm_clk", NULL);

          pr_info("PLL1 running at %ld MHz, PLL2 at %ld MHz\n",
-                 clk_pll1.rate / 1000000, clk_pll2.rate / 1000000);
+                 clk_pll1_rate / 1000000, clk_pll2_rate / 1000000);
          pr_info("FCLK %ld MHz, HCLK %ld MHz, PCLK %ld MHz\n",
-                 clk_f.rate / 1000000, clk_h.rate / 1000000,
-                 clk_p.rate / 1000000);
+                 clk_f_rate / 1000000, clk_h_rate / 1000000,
+                 clk_p_rate / 1000000);

-         clkdev_add_table(clocks, ARRAY_SIZE(clocks));
+         ep93xx_uart_clock_init();
+
+         /* touchscreen/adc clock */
+         hw = clk_hw_register_div("ep93xx-adc",
+                                 "xtali",
+                                 EP93XX_SYSCON_KEYTCHCLKDIV,
+                                 EP93XX_SYSCON_KEYTCHCLKDIV_TSEN,
+                                 EP93XX_SYSCON_KEYTCHCLKDIV_ADIV,
+                                 1,
+                                 adc_divisors,
+                                 ARRAY_SIZE(adc_divisors));
+
+         clk_hw_register_clkdev(hw, NULL, "ep93xx-adc");
+
+         /* keypad clock */
+         hw = clk_hw_register_div("ep93xx-keypad",
+                                 "xtali",
+                                 EP93XX_SYSCON_KEYTCHCLKDIV,
+                                 EP93XX_SYSCON_KEYTCHCLKDIV_KEN,
+                                 EP93XX_SYSCON_KEYTCHCLKDIV_KDIV,
+                                 1,
+                                 adc_divisors,
+                                 ARRAY_SIZE(adc_divisors));
+
+         clk_hw_register_clkdev(hw, NULL, "ep93xx-keypad");
+
+         /* On reset PDIV and VDIV is set to zero, while PDIV zero
+          * means clock disable, VDIV shouldn't be zero.
+          * So i set both dividers to minimum.
+          */
+         /* ENA - Enable CLK divider. */
+         /* PDIV - 00 - Disable clock */
+         /* VDIV - at least 2 */
+         /* Check and enable video clk registers */
+         value = __raw_readl(EP93XX_SYSCON_VIDCLKDIV);
+         value |= (1 << EP93XX_SYSCON_CLKDIV_PDIV_SHIFT) | 2;
+         ep93xx_syscon_swlocked_write(value, EP93XX_SYSCON_VIDCLKDIV);
+
+         /* check and enable i2s clk registers */
+         value = __raw_readl(EP93XX_SYSCON_I2SCLKDIV);
+         value |= (1 << EP93XX_SYSCON_CLKDIV_PDIV_SHIFT) | 2;
+         ep93xx_syscon_swlocked_write(value, EP93XX_SYSCON_I2SCLKDIV);
+
+         /* video clk */
+         hw = clk_hw_register_ddiv("ep93xx-fb",
+                                 EP93XX_SYSCON_VIDCLKDIV,
+                                 EP93XX_SYSCON_CLKDIV_ENABLE);
+
+         clk_hw_register_clkdev(hw, NULL, "ep93xx-fb");
+
+         /* i2s clk */
+         hw = clk_hw_register_ddiv("mclk",
+                                 EP93XX_SYSCON_I2SCLKDIV,
+                                 EP93XX_SYSCON_CLKDIV_ENABLE);
+
+         clk_hw_register_clkdev(hw, "mclk", "ep93xx-i2s");
+
+         /* i2s sclk */
+ #define EP93XX_I2SCLKDIV_SDIV_SHIFT 16
+ #define EP93XX_I2SCLKDIV_SDIV_WIDTH 1
+         hw = clk_hw_register_div("sclk",
+                                 "mclk",
+                                 EP93XX_SYSCON_I2SCLKDIV,
+                                 EP93XX_SYSCON_I2SCLKDIV_SENA,
+                                 EP93XX_I2SCLKDIV_SDIV_SHIFT,
+                                 EP93XX_I2SCLKDIV_SDIV_WIDTH,
+                                 sclk_divisors,
+                                 ARRAY_SIZE(sclk_divisors));
+
+         clk_hw_register_clkdev(hw, "sclk", "ep93xx-i2s");
+
+         /* i2s lrclk */
+ #define EP93XX_I2SCLKDIV_LRDIV32_SHIFT 17
+ #define EP93XX_I2SCLKDIV_LRDIV32_WIDTH 3
+         hw = clk_hw_register_div("lrclk",
+                                 "sclk",
+                                 EP93XX_SYSCON_I2SCLKDIV,
+                                 EP93XX_SYSCON_I2SCLKDIV_SENA,
+                                 EP93XX_I2SCLKDIV_LRDIV32_SHIFT,
+                                 EP93XX_I2SCLKDIV_LRDIV32_WIDTH,
+                                 lrclk_divisors,
+                                 ARRAY_SIZE(lrclk_divisors));
+
+         clk_hw_register_clkdev(hw, "lrclk", "ep93xx-i2s");
+
          return 0;
  }
  postcore_initcall(ep93xx_clock_init);
+1 -1
arch/arm/mach-ep93xx/core.c
···
214 214 return PTR_ERR(ep93xx_ohci_host_clock);
215 215 }
216 216 
217 - return clk_enable(ep93xx_ohci_host_clock);
217 + return clk_prepare_enable(ep93xx_ohci_host_clock);
218 218 }
219 219 
220 220 static void ep93xx_ohci_power_off(struct platform_device *pdev)

+21 -21
arch/arm/mach-ep93xx/soc.h
··· 111 111 #define EP93XX_SYSCON_PWRCNT EP93XX_SYSCON_REG(0x04) 112 112 #define EP93XX_SYSCON_PWRCNT_FIR_EN (1<<31) 113 113 #define EP93XX_SYSCON_PWRCNT_UARTBAUD (1<<29) 114 - #define EP93XX_SYSCON_PWRCNT_USH_EN (1<<28) 115 - #define EP93XX_SYSCON_PWRCNT_DMA_M2M1 (1<<27) 116 - #define EP93XX_SYSCON_PWRCNT_DMA_M2M0 (1<<26) 117 - #define EP93XX_SYSCON_PWRCNT_DMA_M2P8 (1<<25) 118 - #define EP93XX_SYSCON_PWRCNT_DMA_M2P9 (1<<24) 119 - #define EP93XX_SYSCON_PWRCNT_DMA_M2P6 (1<<23) 120 - #define EP93XX_SYSCON_PWRCNT_DMA_M2P7 (1<<22) 121 - #define EP93XX_SYSCON_PWRCNT_DMA_M2P4 (1<<21) 122 - #define EP93XX_SYSCON_PWRCNT_DMA_M2P5 (1<<20) 123 - #define EP93XX_SYSCON_PWRCNT_DMA_M2P2 (1<<19) 124 - #define EP93XX_SYSCON_PWRCNT_DMA_M2P3 (1<<18) 125 - #define EP93XX_SYSCON_PWRCNT_DMA_M2P0 (1<<17) 126 - #define EP93XX_SYSCON_PWRCNT_DMA_M2P1 (1<<16) 114 + #define EP93XX_SYSCON_PWRCNT_USH_EN 28 115 + #define EP93XX_SYSCON_PWRCNT_DMA_M2M1 27 116 + #define EP93XX_SYSCON_PWRCNT_DMA_M2M0 26 117 + #define EP93XX_SYSCON_PWRCNT_DMA_M2P8 25 118 + #define EP93XX_SYSCON_PWRCNT_DMA_M2P9 24 119 + #define EP93XX_SYSCON_PWRCNT_DMA_M2P6 23 120 + #define EP93XX_SYSCON_PWRCNT_DMA_M2P7 22 121 + #define EP93XX_SYSCON_PWRCNT_DMA_M2P4 21 122 + #define EP93XX_SYSCON_PWRCNT_DMA_M2P5 20 123 + #define EP93XX_SYSCON_PWRCNT_DMA_M2P2 19 124 + #define EP93XX_SYSCON_PWRCNT_DMA_M2P3 18 125 + #define EP93XX_SYSCON_PWRCNT_DMA_M2P0 17 126 + #define EP93XX_SYSCON_PWRCNT_DMA_M2P1 16 127 127 #define EP93XX_SYSCON_HALT EP93XX_SYSCON_REG(0x08) 128 128 #define EP93XX_SYSCON_STANDBY EP93XX_SYSCON_REG(0x0c) 129 129 #define EP93XX_SYSCON_CLKSET1 EP93XX_SYSCON_REG(0x20) ··· 139 139 #define EP93XX_SYSCON_DEVCFG_GONK (1<<27) 140 140 #define EP93XX_SYSCON_DEVCFG_TONG (1<<26) 141 141 #define EP93XX_SYSCON_DEVCFG_MONG (1<<25) 142 - #define EP93XX_SYSCON_DEVCFG_U3EN (1<<24) 142 + #define EP93XX_SYSCON_DEVCFG_U3EN 24 143 143 #define EP93XX_SYSCON_DEVCFG_CPENA (1<<23) 144 144 #define EP93XX_SYSCON_DEVCFG_A2ONG (1<<22) 145 145 #define 
EP93XX_SYSCON_DEVCFG_A1ONG (1<<21) 146 - #define EP93XX_SYSCON_DEVCFG_U2EN (1<<20) 146 + #define EP93XX_SYSCON_DEVCFG_U2EN 20 147 147 #define EP93XX_SYSCON_DEVCFG_EXVC (1<<19) 148 - #define EP93XX_SYSCON_DEVCFG_U1EN (1<<18) 148 + #define EP93XX_SYSCON_DEVCFG_U1EN 18 149 149 #define EP93XX_SYSCON_DEVCFG_TIN (1<<17) 150 150 #define EP93XX_SYSCON_DEVCFG_HC3IN (1<<15) 151 151 #define EP93XX_SYSCON_DEVCFG_HC3EN (1<<14) ··· 163 163 #define EP93XX_SYSCON_DEVCFG_KEYS (1<<1) 164 164 #define EP93XX_SYSCON_DEVCFG_SHENA (1<<0) 165 165 #define EP93XX_SYSCON_VIDCLKDIV EP93XX_SYSCON_REG(0x84) 166 - #define EP93XX_SYSCON_CLKDIV_ENABLE (1<<15) 166 + #define EP93XX_SYSCON_CLKDIV_ENABLE 15 167 167 #define EP93XX_SYSCON_CLKDIV_ESEL (1<<14) 168 168 #define EP93XX_SYSCON_CLKDIV_PSEL (1<<13) 169 169 #define EP93XX_SYSCON_CLKDIV_PDIV_SHIFT 8 170 170 #define EP93XX_SYSCON_I2SCLKDIV EP93XX_SYSCON_REG(0x8c) 171 - #define EP93XX_SYSCON_I2SCLKDIV_SENA (1<<31) 171 + #define EP93XX_SYSCON_I2SCLKDIV_SENA 31 172 172 #define EP93XX_SYSCON_I2SCLKDIV_ORIDE (1<<29) 173 173 #define EP93XX_SYSCON_I2SCLKDIV_SPOL (1<<19) 174 174 #define EP93XX_I2SCLKDIV_SDIV (1 << 16) ··· 177 177 #define EP93XX_I2SCLKDIV_LRDIV128 (2 << 17) 178 178 #define EP93XX_I2SCLKDIV_LRDIV_MASK (3 << 17) 179 179 #define EP93XX_SYSCON_KEYTCHCLKDIV EP93XX_SYSCON_REG(0x90) 180 - #define EP93XX_SYSCON_KEYTCHCLKDIV_TSEN (1<<31) 181 - #define EP93XX_SYSCON_KEYTCHCLKDIV_ADIV (1<<16) 182 - #define EP93XX_SYSCON_KEYTCHCLKDIV_KEN (1<<15) 180 + #define EP93XX_SYSCON_KEYTCHCLKDIV_TSEN 31 181 + #define EP93XX_SYSCON_KEYTCHCLKDIV_ADIV 16 182 + #define EP93XX_SYSCON_KEYTCHCLKDIV_KEN 15 183 183 #define EP93XX_SYSCON_KEYTCHCLKDIV_KDIV (1<<0) 184 184 #define EP93XX_SYSCON_SYSCFG EP93XX_SYSCON_REG(0x9c) 185 185 #define EP93XX_SYSCON_SYSCFG_REV_MASK (0xf0000000)
-2
arch/arm/mach-exynos/Kconfig
···
13 13 select ARM_GIC
14 14 select EXYNOS_IRQ_COMBINER
15 15 select COMMON_CLK_SAMSUNG
16 - select EXYNOS_CHIPID
17 16 select EXYNOS_THERMAL
18 17 select EXYNOS_PMU
19 18 select EXYNOS_SROM
···
21 22 select HAVE_ARM_ARCH_TIMER if ARCH_EXYNOS5
22 23 select HAVE_ARM_SCU if SMP
23 24 select HAVE_S3C2410_I2C if I2C
24 - select HAVE_S3C_RTC if RTC_CLASS
25 25 select PINCTRL
26 26 select PINCTRL_EXYNOS
27 27 select PM_GENERIC_DOMAINS if PM
+72
arch/arm/mach-qcom/platsmp.c
··· 29 29 #define COREPOR_RST BIT(5) 30 30 #define CORE_RST BIT(4) 31 31 #define L2DT_SLP BIT(3) 32 + #define CORE_MEM_CLAMP BIT(1) 32 33 #define CLAMP BIT(0) 33 34 34 35 #define APC_PWR_GATE_CTL 0x14 ··· 74 73 iounmap(base); 75 74 76 75 return 0; 76 + } 77 + 78 + static int cortex_a7_release_secondary(unsigned int cpu) 79 + { 80 + int ret = 0; 81 + void __iomem *reg; 82 + struct device_node *cpu_node, *acc_node; 83 + u32 reg_val; 84 + 85 + cpu_node = of_get_cpu_node(cpu, NULL); 86 + if (!cpu_node) 87 + return -ENODEV; 88 + 89 + acc_node = of_parse_phandle(cpu_node, "qcom,acc", 0); 90 + if (!acc_node) { 91 + ret = -ENODEV; 92 + goto out_acc; 93 + } 94 + 95 + reg = of_iomap(acc_node, 0); 96 + if (!reg) { 97 + ret = -ENOMEM; 98 + goto out_acc_map; 99 + } 100 + 101 + /* Put the CPU into reset. */ 102 + reg_val = CORE_RST | COREPOR_RST | CLAMP | CORE_MEM_CLAMP; 103 + writel(reg_val, reg + APCS_CPU_PWR_CTL); 104 + 105 + /* Turn on the BHS and set the BHS_CNT to 16 XO clock cycles */ 106 + writel(BHS_EN | (0x10 << BHS_CNT_SHIFT), reg + APC_PWR_GATE_CTL); 107 + /* Wait for the BHS to settle */ 108 + udelay(2); 109 + 110 + reg_val &= ~CORE_MEM_CLAMP; 111 + writel(reg_val, reg + APCS_CPU_PWR_CTL); 112 + reg_val |= L2DT_SLP; 113 + writel(reg_val, reg + APCS_CPU_PWR_CTL); 114 + udelay(2); 115 + 116 + reg_val = (reg_val | BIT(17)) & ~CLAMP; 117 + writel(reg_val, reg + APCS_CPU_PWR_CTL); 118 + udelay(2); 119 + 120 + /* Release CPU out of reset and bring it to life. 
*/ 121 + reg_val &= ~(CORE_RST | COREPOR_RST); 122 + writel(reg_val, reg + APCS_CPU_PWR_CTL); 123 + reg_val |= CORE_PWRD_UP; 124 + writel(reg_val, reg + APCS_CPU_PWR_CTL); 125 + 126 + iounmap(reg); 127 + out_acc_map: 128 + of_node_put(acc_node); 129 + out_acc: 130 + of_node_put(cpu_node); 131 + return ret; 77 132 } 78 133 79 134 static int kpssv1_release_secondary(unsigned int cpu) ··· 338 281 return qcom_boot_secondary(cpu, scss_release_secondary); 339 282 } 340 283 284 + static int cortex_a7_boot_secondary(unsigned int cpu, struct task_struct *idle) 285 + { 286 + return qcom_boot_secondary(cpu, cortex_a7_release_secondary); 287 + } 288 + 341 289 static int kpssv1_boot_secondary(unsigned int cpu, struct task_struct *idle) 342 290 { 343 291 return qcom_boot_secondary(cpu, kpssv1_release_secondary); ··· 376 314 #endif 377 315 }; 378 316 CPU_METHOD_OF_DECLARE(qcom_smp, "qcom,gcc-msm8660", &smp_msm8660_ops); 317 + 318 + static const struct smp_operations qcom_smp_cortex_a7_ops __initconst = { 319 + .smp_prepare_cpus = qcom_smp_prepare_cpus, 320 + .smp_boot_secondary = cortex_a7_boot_secondary, 321 + #ifdef CONFIG_HOTPLUG_CPU 322 + .cpu_die = qcom_cpu_die, 323 + #endif 324 + }; 325 + CPU_METHOD_OF_DECLARE(qcom_smp_msm8226, "qcom,msm8226-smp", &qcom_smp_cortex_a7_ops); 326 + CPU_METHOD_OF_DECLARE(qcom_smp_msm8916, "qcom,msm8916-smp", &qcom_smp_cortex_a7_ops); 379 327 380 328 static const struct smp_operations qcom_smp_kpssv1_ops __initconst = { 381 329 .smp_prepare_cpus = qcom_smp_prepare_cpus,
-1
arch/arm/mach-s5pv210/Kconfig
···
13 13 select COMMON_CLK_SAMSUNG
14 14 select GPIOLIB
15 15 select HAVE_S3C2410_I2C if I2C
16 - select HAVE_S3C_RTC if RTC_CLASS
17 16 select PINCTRL
18 17 select PINCTRL_EXYNOS
19 18 select SOC_SAMSUNG
-2
arch/arm64/Kconfig.platforms
···
89 89 config ARCH_EXYNOS
90 90 bool "ARMv8 based Samsung Exynos SoC family"
91 91 select COMMON_CLK_SAMSUNG
92 - select EXYNOS_CHIPID
93 92 select EXYNOS_PM_DOMAINS if PM_GENERIC_DOMAINS
94 93 select EXYNOS_PMU
95 - select HAVE_S3C_RTC if RTC_CLASS
96 94 select PINCTRL
97 95 select PINCTRL_EXYNOS
98 96 select PM_GENERIC_DOMAINS if PM
+1 -1
drivers/bus/Kconfig
···
30 30 found on the ARM Integrator AP (Application Platform)
31 31 
32 32 config BRCMSTB_GISB_ARB
33 - bool "Broadcom STB GISB bus arbiter"
33 + tristate "Broadcom STB GISB bus arbiter"
34 34 depends on ARM || ARM64 || MIPS
35 35 default ARCH_BRCMSTB || BMIPS_GENERIC
36 36 help
+6 -1
drivers/bus/brcmstb_gisb.c
···
1 1 // SPDX-License-Identifier: GPL-2.0-only
2 2 /*
3 - * Copyright (C) 2014-2017 Broadcom
3 + * Copyright (C) 2014-2021 Broadcom
4 4 */
5 5 
6 6 #include <linux/init.h>
···
536 536 .name = "brcm-gisb-arb",
537 537 .of_match_table = brcmstb_gisb_arb_of_match,
538 538 .pm = &brcmstb_gisb_arb_pm_ops,
539 + .suppress_bind_attrs = true,
539 540 },
540 541 };
541 542 
···
547 546 }
548 547 
549 548 module_init(brcm_gisb_driver_init);
549 + 
550 + MODULE_AUTHOR("Broadcom");
551 + MODULE_DESCRIPTION("Broadcom STB GISB arbiter driver");
552 + MODULE_LICENSE("GPL v2");
+3 -4
drivers/bus/sun50i-de2.c
···
15 15 int ret;
16 16 
17 17 ret = sunxi_sram_claim(&pdev->dev);
18 - if (ret) {
19 - dev_err(&pdev->dev, "Error couldn't map SRAM to device\n");
20 - return ret;
21 - }
18 + if (ret)
19 + return dev_err_probe(&pdev->dev, ret,
20 + "Couldn't map SRAM to device\n");
22 21 
23 22 of_platform_populate(np, NULL, NULL, &pdev->dev);
24 23 
+241 -35
drivers/bus/ti-sysc.c
··· 6 6 #include <linux/io.h> 7 7 #include <linux/clk.h> 8 8 #include <linux/clkdev.h> 9 + #include <linux/cpu_pm.h> 9 10 #include <linux/delay.h> 10 11 #include <linux/list.h> 11 12 #include <linux/module.h> ··· 18 17 #include <linux/of_platform.h> 19 18 #include <linux/slab.h> 20 19 #include <linux/sys_soc.h> 20 + #include <linux/timekeeping.h> 21 21 #include <linux/iopoll.h> 22 22 23 23 #include <linux/platform_data/ti-sysc.h> ··· 53 51 struct list_head node; 54 52 }; 55 53 54 + struct sysc_module { 55 + struct sysc *ddata; 56 + struct list_head node; 57 + }; 58 + 56 59 struct sysc_soc_info { 57 60 unsigned long general_purpose:1; 58 61 enum sysc_soc soc; 59 - struct mutex list_lock; /* disabled modules list lock */ 62 + struct mutex list_lock; /* disabled and restored modules list lock */ 60 63 struct list_head disabled_modules; 64 + struct list_head restored_modules; 65 + struct notifier_block nb; 61 66 }; 62 67 63 68 enum sysc_clocks { ··· 140 131 struct ti_sysc_cookie cookie; 141 132 const char *name; 142 133 u32 revision; 134 + u32 sysconfig; 143 135 unsigned int reserved:1; 144 136 unsigned int enabled:1; 145 137 unsigned int needs_resume:1; ··· 157 147 158 148 static void sysc_parse_dts_quirks(struct sysc *ddata, struct device_node *np, 159 149 bool is_child); 150 + static int sysc_reset(struct sysc *ddata); 160 151 161 152 static void sysc_write(struct sysc *ddata, int offset, u32 value) 162 153 { ··· 234 223 return sysc_read(ddata, offset); 235 224 } 236 225 237 - /* Poll on reset status */ 238 - static int sysc_wait_softreset(struct sysc *ddata) 226 + static int sysc_poll_reset_sysstatus(struct sysc *ddata) 239 227 { 240 - u32 sysc_mask, syss_done, rstval; 241 - int syss_offset, error = 0; 242 - 243 - if (ddata->cap->regbits->srst_shift < 0) 244 - return 0; 245 - 246 - syss_offset = ddata->offsets[SYSC_SYSSTATUS]; 247 - sysc_mask = BIT(ddata->cap->regbits->srst_shift); 228 + int error, retries; 229 + u32 syss_done, rstval; 248 230 249 231 if 
(ddata->cfg.quirks & SYSS_QUIRK_RESETDONE_INVERTED) 250 232 syss_done = 0; 251 233 else 252 234 syss_done = ddata->cfg.syss_mask; 253 235 254 - if (syss_offset >= 0) { 236 + if (likely(!timekeeping_suspended)) { 255 237 error = readx_poll_timeout_atomic(sysc_read_sysstatus, ddata, 256 238 rstval, (rstval & ddata->cfg.syss_mask) == 257 239 syss_done, 100, MAX_MODULE_SOFTRESET_WAIT); 240 + } else { 241 + retries = MAX_MODULE_SOFTRESET_WAIT; 242 + while (retries--) { 243 + rstval = sysc_read_sysstatus(ddata); 244 + if ((rstval & ddata->cfg.syss_mask) == syss_done) 245 + return 0; 246 + udelay(2); /* Account for udelay flakeyness */ 247 + } 248 + error = -ETIMEDOUT; 249 + } 258 250 259 - } else if (ddata->cfg.quirks & SYSC_QUIRK_RESET_STATUS) { 251 + return error; 252 + } 253 + 254 + static int sysc_poll_reset_sysconfig(struct sysc *ddata) 255 + { 256 + int error, retries; 257 + u32 sysc_mask, rstval; 258 + 259 + sysc_mask = BIT(ddata->cap->regbits->srst_shift); 260 + 261 + if (likely(!timekeeping_suspended)) { 260 262 error = readx_poll_timeout_atomic(sysc_read_sysconfig, ddata, 261 263 rstval, !(rstval & sysc_mask), 262 264 100, MAX_MODULE_SOFTRESET_WAIT); 265 + } else { 266 + retries = MAX_MODULE_SOFTRESET_WAIT; 267 + while (retries--) { 268 + rstval = sysc_read_sysconfig(ddata); 269 + if (!(rstval & sysc_mask)) 270 + return 0; 271 + udelay(2); /* Account for udelay flakeyness */ 272 + } 273 + error = -ETIMEDOUT; 263 274 } 275 + 276 + return error; 277 + } 278 + 279 + /* Poll on reset status */ 280 + static int sysc_wait_softreset(struct sysc *ddata) 281 + { 282 + int syss_offset, error = 0; 283 + 284 + if (ddata->cap->regbits->srst_shift < 0) 285 + return 0; 286 + 287 + syss_offset = ddata->offsets[SYSC_SYSSTATUS]; 288 + 289 + if (syss_offset >= 0) 290 + error = sysc_poll_reset_sysstatus(ddata); 291 + else if (ddata->cfg.quirks & SYSC_QUIRK_RESET_STATUS) 292 + error = sysc_poll_reset_sysconfig(ddata); 264 293 265 294 return error; 266 295 } ··· 1145 1094 best_mode 
= fls(ddata->cfg.midlemodes) - 1; 1146 1095 if (best_mode > SYSC_IDLE_MASK) { 1147 1096 dev_err(dev, "%s: invalid midlemode\n", __func__); 1148 - return -EINVAL; 1097 + error = -EINVAL; 1098 + goto save_context; 1149 1099 } 1150 1100 1151 1101 if (ddata->cfg.quirks & SYSC_QUIRK_SWSUP_MSTANDBY) ··· 1164 1112 sysc_write_sysconfig(ddata, reg); 1165 1113 } 1166 1114 1167 - /* Flush posted write */ 1168 - sysc_read(ddata, ddata->offsets[SYSC_SYSCONFIG]); 1115 + error = 0; 1116 + 1117 + save_context: 1118 + /* Save context and flush posted write */ 1119 + ddata->sysconfig = sysc_read(ddata, ddata->offsets[SYSC_SYSCONFIG]); 1169 1120 1170 1121 if (ddata->module_enable_quirk) 1171 1122 ddata->module_enable_quirk(ddata); 1172 1123 1173 - return 0; 1124 + return error; 1174 1125 } 1175 1126 1176 1127 static int sysc_best_idle_mode(u32 idlemodes, u32 *best_mode) ··· 1230 1175 set_sidle: 1231 1176 /* Set SIDLE mode */ 1232 1177 idlemodes = ddata->cfg.sidlemodes; 1233 - if (!idlemodes || regbits->sidle_shift < 0) 1234 - return 0; 1178 + if (!idlemodes || regbits->sidle_shift < 0) { 1179 + ret = 0; 1180 + goto save_context; 1181 + } 1235 1182 1236 1183 if (ddata->cfg.quirks & SYSC_QUIRK_SWSUP_SIDLE) { 1237 1184 best_mode = SYSC_IDLE_FORCE; ··· 1241 1184 ret = sysc_best_idle_mode(idlemodes, &best_mode); 1242 1185 if (ret) { 1243 1186 dev_err(dev, "%s: invalid sidlemode\n", __func__); 1244 - return ret; 1187 + ret = -EINVAL; 1188 + goto save_context; 1245 1189 } 1246 1190 } 1247 1191 ··· 1253 1195 reg |= 1 << regbits->autoidle_shift; 1254 1196 sysc_write_sysconfig(ddata, reg); 1255 1197 1256 - /* Flush posted write */ 1257 - sysc_read(ddata, ddata->offsets[SYSC_SYSCONFIG]); 1198 + ret = 0; 1258 1199 1259 - return 0; 1200 + save_context: 1201 + /* Save context and flush posted write */ 1202 + ddata->sysconfig = sysc_read(ddata, ddata->offsets[SYSC_SYSCONFIG]); 1203 + 1204 + return ret; 1260 1205 } 1261 1206 1262 1207 static int __maybe_unused sysc_runtime_suspend_legacy(struct 
device *dev, ··· 1397 1336 return error; 1398 1337 } 1399 1338 1339 + /* 1340 + * Checks if device context was lost. Assumes the sysconfig register value 1341 + * after lost context is different from the configured value. Only works for 1342 + * enabled devices. 1343 + * 1344 + * Eventually we may want to also add support to using the context lost 1345 + * registers that some SoCs have. 1346 + */ 1347 + static int sysc_check_context(struct sysc *ddata) 1348 + { 1349 + u32 reg; 1350 + 1351 + if (!ddata->enabled) 1352 + return -ENODATA; 1353 + 1354 + reg = sysc_read(ddata, ddata->offsets[SYSC_SYSCONFIG]); 1355 + if (reg == ddata->sysconfig) 1356 + return 0; 1357 + 1358 + return -EACCES; 1359 + } 1360 + 1400 1361 static int sysc_reinit_module(struct sysc *ddata, bool leave_enabled) 1401 1362 { 1402 1363 struct device *dev = ddata->dev; 1403 1364 int error; 1404 1365 1405 - /* Disable target module if it is enabled */ 1406 1366 if (ddata->enabled) { 1367 + /* Nothing to do if enabled and context not lost */ 1368 + error = sysc_check_context(ddata); 1369 + if (!error) 1370 + return 0; 1371 + 1372 + /* Disable target module if it is enabled */ 1407 1373 error = sysc_runtime_suspend(dev); 1408 1374 if (error) 1409 1375 dev_warn(dev, "reinit suspend failed: %i\n", error); ··· 1440 1352 error = sysc_runtime_resume(dev); 1441 1353 if (error) 1442 1354 dev_warn(dev, "reinit resume failed: %i\n", error); 1355 + 1356 + /* Some modules like am335x gpmc need reset and restore of sysconfig */ 1357 + if (ddata->cfg.quirks & SYSC_QUIRK_RESET_ON_CTX_LOST) { 1358 + error = sysc_reset(ddata); 1359 + if (error) 1360 + dev_warn(dev, "reinit reset failed: %i\n", error); 1361 + 1362 + sysc_write_sysconfig(ddata, ddata->sysconfig); 1363 + } 1443 1364 1444 1365 if (leave_enabled) 1445 1366 return error; ··· 1539 1442 1540 1443 static const struct sysc_revision_quirk sysc_revision_quirks[] = { 1541 1444 /* These drivers need to be fixed to not use pm_runtime_irq_safe() */ 1542 - 
SYSC_QUIRK("gpio", 0, 0, 0x10, 0x114, 0x50600801, 0xffff00ff, 1543 - SYSC_QUIRK_LEGACY_IDLE | SYSC_QUIRK_OPT_CLKS_IN_RESET), 1544 - SYSC_QUIRK("sham", 0, 0x100, 0x110, 0x114, 0x40000c03, 0xffffffff, 1545 - SYSC_QUIRK_LEGACY_IDLE), 1546 1445 SYSC_QUIRK("uart", 0, 0x50, 0x54, 0x58, 0x00000046, 0xffffffff, 1547 1446 SYSC_QUIRK_SWSUP_SIDLE | SYSC_QUIRK_LEGACY_IDLE), 1548 1447 SYSC_QUIRK("uart", 0, 0x50, 0x54, 0x58, 0x00000052, 0xffffffff, ··· 1572 1479 SYSC_QUIRK_CLKDM_NOAUTO), 1573 1480 SYSC_QUIRK("dwc3", 0x488c0000, 0, 0x10, -ENODEV, 0x500a0200, 0xffffffff, 1574 1481 SYSC_QUIRK_CLKDM_NOAUTO), 1482 + SYSC_QUIRK("gpio", 0, 0, 0x10, 0x114, 0x50600801, 0xffff00ff, 1483 + SYSC_QUIRK_OPT_CLKS_IN_RESET), 1575 1484 SYSC_QUIRK("gpmc", 0, 0, 0x10, 0x14, 0x00000060, 0xffffffff, 1485 + SYSC_QUIRK_REINIT_ON_CTX_LOST | SYSC_QUIRK_RESET_ON_CTX_LOST | 1576 1486 SYSC_QUIRK_GPMC_DEBUG), 1577 1487 SYSC_QUIRK("hdmi", 0, 0, 0x10, -ENODEV, 0x50030200, 0xffffffff, 1578 1488 SYSC_QUIRK_OPT_CLKS_NEEDED), ··· 1611 1515 SYSC_QUIRK("usb_host_hs", 0, 0, 0x10, -ENODEV, 0x50700101, 0xffffffff, 1612 1516 SYSC_QUIRK_SWSUP_SIDLE | SYSC_QUIRK_SWSUP_MSTANDBY), 1613 1517 SYSC_QUIRK("usb_otg_hs", 0, 0x400, 0x404, 0x408, 0x00000050, 1614 - 0xffffffff, SYSC_QUIRK_SWSUP_SIDLE | SYSC_QUIRK_SWSUP_MSTANDBY), 1518 + 0xffffffff, SYSC_QUIRK_SWSUP_SIDLE | SYSC_QUIRK_SWSUP_MSTANDBY | 1519 + SYSC_MODULE_QUIRK_OTG), 1615 1520 SYSC_QUIRK("usb_otg_hs", 0, 0, 0x10, -ENODEV, 0x4ea2080d, 0xffffffff, 1616 1521 SYSC_QUIRK_SWSUP_SIDLE | SYSC_QUIRK_SWSUP_MSTANDBY | 1617 - SYSC_QUIRK_REINIT_ON_RESUME), 1522 + SYSC_QUIRK_REINIT_ON_CTX_LOST), 1618 1523 SYSC_QUIRK("wdt", 0, 0, 0x10, 0x14, 0x502a0500, 0xfffff0f0, 1619 1524 SYSC_MODULE_QUIRK_WDT), 1620 1525 /* PRUSS on am3, am4 and am5 */ ··· 1680 1583 SYSC_QUIRK("sdio", 0, 0, 0x10, -ENODEV, 0x40202301, 0xffff0ff0, 0), 1681 1584 SYSC_QUIRK("sdio", 0, 0x2fc, 0x110, 0x114, 0x31010000, 0xffffffff, 0), 1682 1585 SYSC_QUIRK("sdma", 0, 0, 0x2c, 0x28, 0x00010900, 0xffffffff, 0), 1586 + 
SYSC_QUIRK("sham", 0, 0x100, 0x110, 0x114, 0x40000c03, 0xffffffff, 0), 1683 1587 SYSC_QUIRK("slimbus", 0, 0, 0x10, -ENODEV, 0x40000902, 0xffffffff, 0), 1684 1588 SYSC_QUIRK("slimbus", 0, 0, 0x10, -ENODEV, 0x40002903, 0xffffffff, 0), 1685 1589 SYSC_QUIRK("smartreflex", 0, -ENODEV, 0x24, -ENODEV, 0x00000000, 0xffffffff, 0), ··· 1972 1874 sysc_quirk_rtc(ddata, true); 1973 1875 } 1974 1876 1877 + /* OTG omap2430 glue layer up to omap4 needs OTG_FORCESTDBY configured */ 1878 + static void sysc_module_enable_quirk_otg(struct sysc *ddata) 1879 + { 1880 + int offset = 0x414; /* OTG_FORCESTDBY */ 1881 + 1882 + sysc_write(ddata, offset, 0); 1883 + } 1884 + 1885 + static void sysc_module_disable_quirk_otg(struct sysc *ddata) 1886 + { 1887 + int offset = 0x414; /* OTG_FORCESTDBY */ 1888 + u32 val = BIT(0); /* ENABLEFORCE */ 1889 + 1890 + sysc_write(ddata, offset, val); 1891 + } 1892 + 1975 1893 /* 36xx SGX needs a quirk for to bypass OCP IPG interrupt logic */ 1976 1894 static void sysc_module_enable_quirk_sgx(struct sysc *ddata) 1977 1895 { ··· 2068 1954 ddata->module_lock_quirk = sysc_module_lock_quirk_rtc; 2069 1955 2070 1956 return; 1957 + } 1958 + 1959 + if (ddata->cfg.quirks & SYSC_MODULE_QUIRK_OTG) { 1960 + ddata->module_enable_quirk = sysc_module_enable_quirk_otg; 1961 + ddata->module_disable_quirk = sysc_module_disable_quirk_otg; 2071 1962 } 2072 1963 2073 1964 if (ddata->cfg.quirks & SYSC_MODULE_QUIRK_SGX) ··· 2519 2400 sysc_child_resume_noirq) 2520 2401 } 2521 2402 }; 2403 + 2404 + /* Caller needs to take list_lock if ever used outside of cpu_pm */ 2405 + static void sysc_reinit_modules(struct sysc_soc_info *soc) 2406 + { 2407 + struct sysc_module *module; 2408 + struct list_head *pos; 2409 + struct sysc *ddata; 2410 + 2411 + list_for_each(pos, &sysc_soc->restored_modules) { 2412 + module = list_entry(pos, struct sysc_module, node); 2413 + ddata = module->ddata; 2414 + sysc_reinit_module(ddata, ddata->enabled); 2415 + } 2416 + } 2417 + 2418 + /** 2419 + * 
sysc_context_notifier - optionally reset and restore module after idle 2420 + * @nb: notifier block 2421 + * @cmd: unused 2422 + * @v: unused 2423 + * 2424 + * Some interconnect target modules need to be restored, or reset and restored 2425 + * on CPU_PM CPU_PM_CLUSTER_EXIT notifier. This is needed at least for am335x 2426 + * OTG and GPMC target modules even if the modules are unused. 2427 + */ 2428 + static int sysc_context_notifier(struct notifier_block *nb, unsigned long cmd, 2429 + void *v) 2430 + { 2431 + struct sysc_soc_info *soc; 2432 + 2433 + soc = container_of(nb, struct sysc_soc_info, nb); 2434 + 2435 + switch (cmd) { 2436 + case CPU_CLUSTER_PM_ENTER: 2437 + break; 2438 + case CPU_CLUSTER_PM_ENTER_FAILED: /* No need to restore context */ 2439 + break; 2440 + case CPU_CLUSTER_PM_EXIT: 2441 + sysc_reinit_modules(soc); 2442 + break; 2443 + } 2444 + 2445 + return NOTIFY_OK; 2446 + } 2447 + 2448 + /** 2449 + * sysc_add_restored - optionally add reset and restore quirk hanlling 2450 + * @ddata: device data 2451 + */ 2452 + static void sysc_add_restored(struct sysc *ddata) 2453 + { 2454 + struct sysc_module *restored_module; 2455 + 2456 + restored_module = kzalloc(sizeof(*restored_module), GFP_KERNEL); 2457 + if (!restored_module) 2458 + return; 2459 + 2460 + restored_module->ddata = ddata; 2461 + 2462 + mutex_lock(&sysc_soc->list_lock); 2463 + 2464 + list_add(&restored_module->node, &sysc_soc->restored_modules); 2465 + 2466 + if (sysc_soc->nb.notifier_call) 2467 + goto out_unlock; 2468 + 2469 + sysc_soc->nb.notifier_call = sysc_context_notifier; 2470 + cpu_pm_register_notifier(&sysc_soc->nb); 2471 + 2472 + out_unlock: 2473 + mutex_unlock(&sysc_soc->list_lock); 2474 + } 2522 2475 2523 2476 /** 2524 2477 * sysc_legacy_idle_quirk - handle children in omap_device compatible way ··· 3091 2900 } 3092 2901 3093 2902 /* 3094 - * One time init to detect the booted SoC and disable unavailable features. 
2903 + * One time init to detect the booted SoC, disable unavailable features 2904 + * and initialize list for optional cpu_pm notifier. 2905 + * 3095 2906 * Note that we initialize static data shared across all ti-sysc instances 3096 2907 * so ddata is only used for SoC type. This can be called from module_init 3097 2908 * once we no longer need to rely on platform data. 3098 2909 */ 3099 - static int sysc_init_soc(struct sysc *ddata) 2910 + static int sysc_init_static_data(struct sysc *ddata) 3100 2911 { 3101 2912 const struct soc_device_attribute *match; 3102 2913 struct ti_sysc_platform_data *pdata; ··· 3114 2921 3115 2922 mutex_init(&sysc_soc->list_lock); 3116 2923 INIT_LIST_HEAD(&sysc_soc->disabled_modules); 2924 + INIT_LIST_HEAD(&sysc_soc->restored_modules); 3117 2925 sysc_soc->general_purpose = true; 3118 2926 3119 2927 pdata = dev_get_platdata(ddata->dev); ··· 3179 2985 return 0; 3180 2986 } 3181 2987 3182 - static void sysc_cleanup_soc(void) 2988 + static void sysc_cleanup_static_data(void) 3183 2989 { 2990 + struct sysc_module *restored_module; 3184 2991 struct sysc_address *disabled_module; 3185 2992 struct list_head *pos, *tmp; 3186 2993 3187 2994 if (!sysc_soc) 3188 2995 return; 3189 2996 2997 + if (sysc_soc->nb.notifier_call) 2998 + cpu_pm_unregister_notifier(&sysc_soc->nb); 2999 + 3190 3000 mutex_lock(&sysc_soc->list_lock); 3001 + list_for_each_safe(pos, tmp, &sysc_soc->restored_modules) { 3002 + restored_module = list_entry(pos, struct sysc_module, node); 3003 + list_del(pos); 3004 + kfree(restored_module); 3005 + } 3191 3006 list_for_each_safe(pos, tmp, &sysc_soc->disabled_modules) { 3192 3007 disabled_module = list_entry(pos, struct sysc_address, node); 3193 3008 list_del(pos); ··· 3264 3061 ddata->dev = &pdev->dev; 3265 3062 platform_set_drvdata(pdev, ddata); 3266 3063 3267 - error = sysc_init_soc(ddata); 3064 + error = sysc_init_static_data(ddata); 3268 3065 if (error) 3269 3066 return error; 3270 3067 ··· 3362 3159 pm_runtime_put(&pdev->dev); 
3363 3160 } 3364 3161 3162 + if (ddata->cfg.quirks & SYSC_QUIRK_REINIT_ON_CTX_LOST) 3163 + sysc_add_restored(ddata); 3164 + 3365 3165 return 0; 3366 3166 3367 3167 err: ··· 3446 3240 { 3447 3241 bus_unregister_notifier(&platform_bus_type, &sysc_nb); 3448 3242 platform_driver_unregister(&sysc_driver); 3449 - sysc_cleanup_soc(); 3243 + sysc_cleanup_static_data(); 3450 3244 } 3451 3245 module_exit(sysc_exit); 3452 3246
+2 -1
drivers/cpuidle/Kconfig.arm
···
99 99 
100 100 config ARM_TEGRA_CPUIDLE
101 101 bool "CPU Idle Driver for NVIDIA Tegra SoCs"
102 - depends on ARCH_TEGRA && !ARM64
102 + depends on (ARCH_TEGRA || COMPILE_TEST) && !ARM64 && MMU
103 103 select ARCH_NEEDS_CPU_IDLE_COUPLED if SMP
104 104 select ARM_CPU_SUSPEND
105 105 help
···
112 112 select CPU_IDLE_MULTIPLE_DRIVERS
113 113 select DT_IDLE_STATES
114 114 select QCOM_SCM
115 + select QCOM_SPM
115 116 help
116 117 Select this to enable cpuidle for Qualcomm processors.
117 118 The Subsystem Power Manager (SPM) controls low power modes for the
+72 -254
drivers/cpuidle/cpuidle-qcom-spm.c
··· 18 18 #include <linux/cpuidle.h> 19 19 #include <linux/cpu_pm.h> 20 20 #include <linux/qcom_scm.h> 21 + #include <soc/qcom/spm.h> 21 22 22 23 #include <asm/proc-fns.h> 23 24 #include <asm/suspend.h> 24 25 25 26 #include "dt_idle_states.h" 26 27 27 - #define MAX_PMIC_DATA 2 28 - #define MAX_SEQ_DATA 64 29 - #define SPM_CTL_INDEX 0x7f 30 - #define SPM_CTL_INDEX_SHIFT 4 31 - #define SPM_CTL_EN BIT(0) 32 - 33 - enum pm_sleep_mode { 34 - PM_SLEEP_MODE_STBY, 35 - PM_SLEEP_MODE_RET, 36 - PM_SLEEP_MODE_SPC, 37 - PM_SLEEP_MODE_PC, 38 - PM_SLEEP_MODE_NR, 39 - }; 40 - 41 - enum spm_reg { 42 - SPM_REG_CFG, 43 - SPM_REG_SPM_CTL, 44 - SPM_REG_DLY, 45 - SPM_REG_PMIC_DLY, 46 - SPM_REG_PMIC_DATA_0, 47 - SPM_REG_PMIC_DATA_1, 48 - SPM_REG_VCTL, 49 - SPM_REG_SEQ_ENTRY, 50 - SPM_REG_SPM_STS, 51 - SPM_REG_PMIC_STS, 52 - SPM_REG_NR, 53 - }; 54 - 55 - struct spm_reg_data { 56 - const u8 *reg_offset; 57 - u32 spm_cfg; 58 - u32 spm_dly; 59 - u32 pmic_dly; 60 - u32 pmic_data[MAX_PMIC_DATA]; 61 - u8 seq[MAX_SEQ_DATA]; 62 - u8 start_index[PM_SLEEP_MODE_NR]; 63 - }; 64 - 65 - struct spm_driver_data { 28 + struct cpuidle_qcom_spm_data { 66 29 struct cpuidle_driver cpuidle_driver; 67 - void __iomem *reg_base; 68 - const struct spm_reg_data *reg_data; 30 + struct spm_driver_data *spm; 69 31 }; 70 - 71 - static const u8 spm_reg_offset_v2_1[SPM_REG_NR] = { 72 - [SPM_REG_CFG] = 0x08, 73 - [SPM_REG_SPM_CTL] = 0x30, 74 - [SPM_REG_DLY] = 0x34, 75 - [SPM_REG_SEQ_ENTRY] = 0x80, 76 - }; 77 - 78 - /* SPM register data for 8974, 8084 */ 79 - static const struct spm_reg_data spm_reg_8974_8084_cpu = { 80 - .reg_offset = spm_reg_offset_v2_1, 81 - .spm_cfg = 0x1, 82 - .spm_dly = 0x3C102800, 83 - .seq = { 0x03, 0x0B, 0x0F, 0x00, 0x20, 0x80, 0x10, 0xE8, 0x5B, 0x03, 84 - 0x3B, 0xE8, 0x5B, 0x82, 0x10, 0x0B, 0x30, 0x06, 0x26, 0x30, 85 - 0x0F }, 86 - .start_index[PM_SLEEP_MODE_STBY] = 0, 87 - .start_index[PM_SLEEP_MODE_SPC] = 3, 88 - }; 89 - 90 - /* SPM register data for 8226 */ 91 - static const struct 
spm_reg_data spm_reg_8226_cpu = { 92 - .reg_offset = spm_reg_offset_v2_1, 93 - .spm_cfg = 0x0, 94 - .spm_dly = 0x3C102800, 95 - .seq = { 0x60, 0x03, 0x60, 0x0B, 0x0F, 0x20, 0x10, 0x80, 0x30, 0x90, 96 - 0x5B, 0x60, 0x03, 0x60, 0x3B, 0x76, 0x76, 0x0B, 0x94, 0x5B, 97 - 0x80, 0x10, 0x26, 0x30, 0x0F }, 98 - .start_index[PM_SLEEP_MODE_STBY] = 0, 99 - .start_index[PM_SLEEP_MODE_SPC] = 5, 100 - }; 101 - 102 - static const u8 spm_reg_offset_v1_1[SPM_REG_NR] = { 103 - [SPM_REG_CFG] = 0x08, 104 - [SPM_REG_SPM_CTL] = 0x20, 105 - [SPM_REG_PMIC_DLY] = 0x24, 106 - [SPM_REG_PMIC_DATA_0] = 0x28, 107 - [SPM_REG_PMIC_DATA_1] = 0x2C, 108 - [SPM_REG_SEQ_ENTRY] = 0x80, 109 - }; 110 - 111 - /* SPM register data for 8064 */ 112 - static const struct spm_reg_data spm_reg_8064_cpu = { 113 - .reg_offset = spm_reg_offset_v1_1, 114 - .spm_cfg = 0x1F, 115 - .pmic_dly = 0x02020004, 116 - .pmic_data[0] = 0x0084009C, 117 - .pmic_data[1] = 0x00A4001C, 118 - .seq = { 0x03, 0x0F, 0x00, 0x24, 0x54, 0x10, 0x09, 0x03, 0x01, 119 - 0x10, 0x54, 0x30, 0x0C, 0x24, 0x30, 0x0F }, 120 - .start_index[PM_SLEEP_MODE_STBY] = 0, 121 - .start_index[PM_SLEEP_MODE_SPC] = 2, 122 - }; 123 - 124 - static inline void spm_register_write(struct spm_driver_data *drv, 125 - enum spm_reg reg, u32 val) 126 - { 127 - if (drv->reg_data->reg_offset[reg]) 128 - writel_relaxed(val, drv->reg_base + 129 - drv->reg_data->reg_offset[reg]); 130 - } 131 - 132 - /* Ensure a guaranteed write, before return */ 133 - static inline void spm_register_write_sync(struct spm_driver_data *drv, 134 - enum spm_reg reg, u32 val) 135 - { 136 - u32 ret; 137 - 138 - if (!drv->reg_data->reg_offset[reg]) 139 - return; 140 - 141 - do { 142 - writel_relaxed(val, drv->reg_base + 143 - drv->reg_data->reg_offset[reg]); 144 - ret = readl_relaxed(drv->reg_base + 145 - drv->reg_data->reg_offset[reg]); 146 - if (ret == val) 147 - break; 148 - cpu_relax(); 149 - } while (1); 150 - } 151 - 152 - static inline u32 spm_register_read(struct spm_driver_data *drv, 153 - 
enum spm_reg reg) 154 - { 155 - return readl_relaxed(drv->reg_base + drv->reg_data->reg_offset[reg]); 156 - } 157 - 158 - static void spm_set_low_power_mode(struct spm_driver_data *drv, 159 - enum pm_sleep_mode mode) 160 - { 161 - u32 start_index; 162 - u32 ctl_val; 163 - 164 - start_index = drv->reg_data->start_index[mode]; 165 - 166 - ctl_val = spm_register_read(drv, SPM_REG_SPM_CTL); 167 - ctl_val &= ~(SPM_CTL_INDEX << SPM_CTL_INDEX_SHIFT); 168 - ctl_val |= start_index << SPM_CTL_INDEX_SHIFT; 169 - ctl_val |= SPM_CTL_EN; 170 - spm_register_write_sync(drv, SPM_REG_SPM_CTL, ctl_val); 171 - } 172 32 173 33 static int qcom_pm_collapse(unsigned long int unused) 174 34 { ··· 61 201 static int spm_enter_idle_state(struct cpuidle_device *dev, 62 202 struct cpuidle_driver *drv, int idx) 63 203 { 64 - struct spm_driver_data *data = container_of(drv, struct spm_driver_data, 65 - cpuidle_driver); 204 + struct cpuidle_qcom_spm_data *data = container_of(drv, struct cpuidle_qcom_spm_data, 205 + cpuidle_driver); 66 206 67 - return CPU_PM_CPU_IDLE_ENTER_PARAM(qcom_cpu_spc, idx, data); 207 + return CPU_PM_CPU_IDLE_ENTER_PARAM(qcom_cpu_spc, idx, data->spm); 68 208 } 69 209 70 210 static struct cpuidle_driver qcom_spm_idle_driver = { ··· 85 225 { }, 86 226 }; 87 227 88 - static int spm_cpuidle_init(struct cpuidle_driver *drv, int cpu) 228 + static int spm_cpuidle_register(struct device *cpuidle_dev, int cpu) 89 229 { 230 + struct platform_device *pdev = NULL; 231 + struct device_node *cpu_node, *saw_node; 232 + struct cpuidle_qcom_spm_data *data = NULL; 90 233 int ret; 91 234 92 - memcpy(drv, &qcom_spm_idle_driver, sizeof(*drv)); 93 - drv->cpumask = (struct cpumask *)cpumask_of(cpu); 235 + cpu_node = of_cpu_device_node_get(cpu); 236 + if (!cpu_node) 237 + return -ENODEV; 94 238 95 - /* Parse idle states from device tree */ 96 - ret = dt_init_idle_driver(drv, qcom_idle_state_match, 1); 239 + saw_node = of_parse_phandle(cpu_node, "qcom,saw", 0); 240 + if (!saw_node) 241 + return 
-ENODEV; 242 + 243 + pdev = of_find_device_by_node(saw_node); 244 + of_node_put(saw_node); 245 + of_node_put(cpu_node); 246 + if (!pdev) 247 + return -ENODEV; 248 + 249 + data = devm_kzalloc(cpuidle_dev, sizeof(*data), GFP_KERNEL); 250 + if (!data) 251 + return -ENOMEM; 252 + 253 + data->spm = dev_get_drvdata(&pdev->dev); 254 + if (!data->spm) 255 + return -EINVAL; 256 + 257 + data->cpuidle_driver = qcom_spm_idle_driver; 258 + data->cpuidle_driver.cpumask = (struct cpumask *)cpumask_of(cpu); 259 + 260 + ret = dt_init_idle_driver(&data->cpuidle_driver, 261 + qcom_idle_state_match, 1); 97 262 if (ret <= 0) 98 263 return ret ? : -ENODEV; 99 264 100 - /* We have atleast one power down mode */ 101 - return qcom_scm_set_warm_boot_addr(cpu_resume_arm, drv->cpumask); 265 + ret = qcom_scm_set_warm_boot_addr(cpu_resume_arm, cpumask_of(cpu)); 266 + if (ret) 267 + return ret; 268 + 269 + return cpuidle_register(&data->cpuidle_driver, NULL); 102 270 } 103 271 104 - static struct spm_driver_data *spm_get_drv(struct platform_device *pdev, 105 - int *spm_cpu) 272 + static int spm_cpuidle_drv_probe(struct platform_device *pdev) 106 273 { 107 - struct spm_driver_data *drv = NULL; 108 - struct device_node *cpu_node, *saw_node; 109 - int cpu; 110 - bool found = 0; 111 - 112 - for_each_possible_cpu(cpu) { 113 - cpu_node = of_cpu_device_node_get(cpu); 114 - if (!cpu_node) 115 - continue; 116 - saw_node = of_parse_phandle(cpu_node, "qcom,saw", 0); 117 - found = (saw_node == pdev->dev.of_node); 118 - of_node_put(saw_node); 119 - of_node_put(cpu_node); 120 - if (found) 121 - break; 122 - } 123 - 124 - if (found) { 125 - drv = devm_kzalloc(&pdev->dev, sizeof(*drv), GFP_KERNEL); 126 - if (drv) 127 - *spm_cpu = cpu; 128 - } 129 - 130 - return drv; 131 - } 132 - 133 - static const struct of_device_id spm_match_table[] = { 134 - { .compatible = "qcom,msm8226-saw2-v2.1-cpu", 135 - .data = &spm_reg_8226_cpu }, 136 - { .compatible = "qcom,msm8974-saw2-v2.1-cpu", 137 - .data = 
&spm_reg_8974_8084_cpu }, 138 - { .compatible = "qcom,apq8084-saw2-v2.1-cpu", 139 - .data = &spm_reg_8974_8084_cpu }, 140 - { .compatible = "qcom,apq8064-saw2-v1.1-cpu", 141 - .data = &spm_reg_8064_cpu }, 142 - { }, 143 - }; 144 - 145 - static int spm_dev_probe(struct platform_device *pdev) 146 - { 147 - struct spm_driver_data *drv; 148 - struct resource *res; 149 - const struct of_device_id *match_id; 150 - void __iomem *addr; 151 274 int cpu, ret; 152 275 153 276 if (!qcom_scm_is_available()) 154 277 return -EPROBE_DEFER; 155 278 156 - drv = spm_get_drv(pdev, &cpu); 157 - if (!drv) 158 - return -EINVAL; 159 - platform_set_drvdata(pdev, drv); 279 + for_each_possible_cpu(cpu) { 280 + ret = spm_cpuidle_register(&pdev->dev, cpu); 281 + if (ret && ret != -ENODEV) { 282 + dev_err(&pdev->dev, 283 + "Cannot register for CPU%d: %d\n", cpu, ret); 284 + } 285 + } 160 286 161 - res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 162 - drv->reg_base = devm_ioremap_resource(&pdev->dev, res); 163 - if (IS_ERR(drv->reg_base)) 164 - return PTR_ERR(drv->reg_base); 165 - 166 - match_id = of_match_node(spm_match_table, pdev->dev.of_node); 167 - if (!match_id) 168 - return -ENODEV; 169 - 170 - drv->reg_data = match_id->data; 171 - 172 - ret = spm_cpuidle_init(&drv->cpuidle_driver, cpu); 173 - if (ret) 174 - return ret; 175 - 176 - /* Write the SPM sequences first.. */ 177 - addr = drv->reg_base + drv->reg_data->reg_offset[SPM_REG_SEQ_ENTRY]; 178 - __iowrite32_copy(addr, drv->reg_data->seq, 179 - ARRAY_SIZE(drv->reg_data->seq) / 4); 180 - 181 - /* 182 - * ..and then the control registers. 183 - * On some SoC if the control registers are written first and if the 184 - * CPU was held in reset, the reset signal could trigger the SPM state 185 - * machine, before the sequences are completely written. 
186 - */ 187 - spm_register_write(drv, SPM_REG_CFG, drv->reg_data->spm_cfg); 188 - spm_register_write(drv, SPM_REG_DLY, drv->reg_data->spm_dly); 189 - spm_register_write(drv, SPM_REG_PMIC_DLY, drv->reg_data->pmic_dly); 190 - spm_register_write(drv, SPM_REG_PMIC_DATA_0, 191 - drv->reg_data->pmic_data[0]); 192 - spm_register_write(drv, SPM_REG_PMIC_DATA_1, 193 - drv->reg_data->pmic_data[1]); 194 - 195 - /* Set up Standby as the default low power mode */ 196 - spm_set_low_power_mode(drv, PM_SLEEP_MODE_STBY); 197 - 198 - return cpuidle_register(&drv->cpuidle_driver, NULL); 199 - } 200 - 201 - static int spm_dev_remove(struct platform_device *pdev) 202 - { 203 - struct spm_driver_data *drv = platform_get_drvdata(pdev); 204 - 205 - cpuidle_unregister(&drv->cpuidle_driver); 206 287 return 0; 207 288 } 208 289 209 - static struct platform_driver spm_driver = { 210 - .probe = spm_dev_probe, 211 - .remove = spm_dev_remove, 290 + static struct platform_driver spm_cpuidle_driver = { 291 + .probe = spm_cpuidle_drv_probe, 212 292 .driver = { 213 - .name = "saw", 214 - .of_match_table = spm_match_table, 293 + .name = "qcom-spm-cpuidle", 294 + .suppress_bind_attrs = true, 215 295 }, 216 296 }; 217 297 218 - builtin_platform_driver(spm_driver); 298 + static int __init qcom_spm_cpuidle_init(void) 299 + { 300 + struct platform_device *pdev; 301 + int ret; 302 + 303 + ret = platform_driver_register(&spm_cpuidle_driver); 304 + if (ret) 305 + return ret; 306 + 307 + pdev = platform_device_register_simple("qcom-spm-cpuidle", 308 + -1, NULL, 0); 309 + if (IS_ERR(pdev)) { 310 + platform_driver_unregister(&spm_cpuidle_driver); 311 + return PTR_ERR(pdev); 312 + } 313 + 314 + return 0; 315 + } 316 + device_initcall(qcom_spm_cpuidle_init);
+3
drivers/cpuidle/cpuidle-tegra.c
··· 337 337 338 338 static int tegra_cpuidle_probe(struct platform_device *pdev) 339 339 { 340 + if (tegra_pmc_get_suspend_mode() == TEGRA_SUSPEND_NOT_READY) 341 + return -EPROBE_DEFER; 342 + 340 343 /* LP2 could be disabled in device-tree */ 341 344 if (tegra_pmc_get_suspend_mode() < TEGRA_SUSPEND_LP2) 342 345 tegra_cpuidle_disable_state(TEGRA_CC6);
+48 -5
drivers/firmware/arm_ffa/driver.c
··· 167 167 168 168 static struct ffa_drv_info *drv_info; 169 169 170 + /* 171 + * The driver must be able to support all the versions from the earliest 172 + * supported FFA_MIN_VERSION to the latest supported FFA_DRIVER_VERSION. 173 + * The specification states that if firmware supports an FFA implementation 174 + * that is incompatible with and at a greater version number than specified 175 + * by the caller (FFA_DRIVER_VERSION passed as parameter to FFA_VERSION), 176 + * it must return the NOT_SUPPORTED error code. 177 + */ 178 + static u32 ffa_compatible_version_find(u32 version) 179 + { 180 + u16 major = MAJOR_VERSION(version), minor = MINOR_VERSION(version); 181 + u16 drv_major = MAJOR_VERSION(FFA_DRIVER_VERSION); 182 + u16 drv_minor = MINOR_VERSION(FFA_DRIVER_VERSION); 183 + 184 + if ((major < drv_major) || (major == drv_major && minor <= drv_minor)) 185 + return version; 186 + 187 + pr_info("Firmware version higher than driver version, downgrading\n"); 188 + return FFA_DRIVER_VERSION; 189 + } 190 + 170 191 static int ffa_version_check(u32 *version) 171 192 { 172 193 ffa_value_t ver; ··· 201 180 return -EOPNOTSUPP; 202 181 } 203 182 204 - if (ver.a0 < FFA_MIN_VERSION || ver.a0 > FFA_DRIVER_VERSION) { 205 - pr_err("Incompatible version %d.%d found\n", 206 - MAJOR_VERSION(ver.a0), MINOR_VERSION(ver.a0)); 183 + if (ver.a0 < FFA_MIN_VERSION) { 184 + pr_err("Incompatible v%d.%d! 
Earliest supported v%d.%d\n", 185 + MAJOR_VERSION(ver.a0), MINOR_VERSION(ver.a0), 186 + MAJOR_VERSION(FFA_MIN_VERSION), 187 + MINOR_VERSION(FFA_MIN_VERSION)); 207 188 return -EINVAL; 208 189 } 209 190 210 - *version = ver.a0; 211 - pr_info("Version %d.%d found\n", MAJOR_VERSION(ver.a0), 191 + pr_info("Driver version %d.%d\n", MAJOR_VERSION(FFA_DRIVER_VERSION), 192 + MINOR_VERSION(FFA_DRIVER_VERSION)); 193 + pr_info("Firmware version %d.%d found\n", MAJOR_VERSION(ver.a0), 212 194 MINOR_VERSION(ver.a0)); 195 + *version = ffa_compatible_version_find(ver.a0); 196 + 213 197 return 0; 214 198 } 215 199 ··· 612 586 return ffa_memory_ops(FFA_FN_NATIVE(MEM_SHARE), args); 613 587 } 614 588 589 + static int 590 + ffa_memory_lend(struct ffa_device *dev, struct ffa_mem_ops_args *args) 591 + { 592 + /* Note that upon a successful MEM_LEND request the caller 593 + * must ensure that the memory region specified is not accessed 594 + * until a successful MEM_RECLAIM call has been made. 595 + * On systems with a hypervisor present this will be enforced, 596 + * however on systems without a hypervisor the responsibility 597 + * falls to the calling kernel driver to prevent access. 598 + */ 599 + if (dev->mode_32bit) 600 + return ffa_memory_ops(FFA_MEM_LEND, args); 601 + 602 + return ffa_memory_ops(FFA_FN_NATIVE(MEM_LEND), args); 603 + } 604 + 615 605 static const struct ffa_dev_ops ffa_ops = { 616 606 .api_version_get = ffa_api_version_get, 617 607 .partition_info_get = ffa_partition_info_get, ··· 635 593 .sync_send_receive = ffa_sync_send_receive, 636 594 .memory_reclaim = ffa_memory_reclaim, 637 595 .memory_share = ffa_memory_share, 596 + .memory_lend = ffa_memory_lend, 638 597 }; 639 598 640 599 const struct ffa_dev_ops *ffa_dev_ops_get(struct ffa_device *dev)
+5 -1
drivers/firmware/qcom_scm.c
··· 252 252 break; 253 253 default: 254 254 pr_err("Unknown SMC convention being used\n"); 255 - return -EINVAL; 255 + return false; 256 256 } 257 257 258 258 ret = qcom_scm_call(dev, &desc, &res); ··· 1345 1345 { .compatible = "qcom,scm-msm8660", .data = (void *) SCM_HAS_CORE_CLK }, 1346 1346 { .compatible = "qcom,scm-msm8960", .data = (void *) SCM_HAS_CORE_CLK }, 1347 1347 { .compatible = "qcom,scm-msm8916", .data = (void *)(SCM_HAS_CORE_CLK | 1348 + SCM_HAS_IFACE_CLK | 1349 + SCM_HAS_BUS_CLK) 1350 + }, 1351 + { .compatible = "qcom,scm-msm8953", .data = (void *)(SCM_HAS_CORE_CLK | 1348 1352 SCM_HAS_IFACE_CLK | 1349 1353 SCM_HAS_BUS_CLK) 1350 1354 },
+17 -9
drivers/firmware/tegra/bpmp-debugfs.c
··· 74 74 static const char *get_filename(struct tegra_bpmp *bpmp, 75 75 const struct file *file, char *buf, int size) 76 76 { 77 - char root_path_buf[512]; 78 - const char *root_path; 79 - const char *filename; 77 + const char *root_path, *filename = NULL; 78 + char *root_path_buf; 80 79 size_t root_len; 80 + 81 + root_path_buf = kzalloc(512, GFP_KERNEL); 82 + if (!root_path_buf) 83 + goto out; 81 84 82 85 root_path = dentry_path(bpmp->debugfs_mirror, root_path_buf, 83 86 512); 84 87 if (IS_ERR(root_path)) 85 - return NULL; 88 + goto out; 86 89 87 90 root_len = strlen(root_path); 88 91 89 92 filename = dentry_path(file->f_path.dentry, buf, size); 90 - if (IS_ERR(filename)) 91 - return NULL; 93 + if (IS_ERR(filename)) { 94 + filename = NULL; 95 + goto out; 96 + } 92 97 93 - if (strlen(filename) < root_len || 94 - strncmp(filename, root_path, root_len)) 95 - return NULL; 98 + if (strlen(filename) < root_len || strncmp(filename, root_path, root_len)) { 99 + filename = NULL; 100 + goto out; 101 + } 96 102 97 103 filename += root_len; 98 104 105 + out: 106 + kfree(root_path_buf); 99 107 return filename; 100 108 } 101 109
+2 -5
drivers/firmware/tegra/bpmp-tegra210.c
··· 162 162 { 163 163 struct platform_device *pdev = to_platform_device(bpmp->dev); 164 164 struct tegra210_bpmp *priv; 165 - struct resource *res; 166 165 unsigned int i; 167 166 int err; 168 167 ··· 171 172 172 173 bpmp->priv = priv; 173 174 174 - res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 175 - priv->atomics = devm_ioremap_resource(&pdev->dev, res); 175 + priv->atomics = devm_platform_ioremap_resource(pdev, 0); 176 176 if (IS_ERR(priv->atomics)) 177 177 return PTR_ERR(priv->atomics); 178 178 179 - res = platform_get_resource(pdev, IORESOURCE_MEM, 1); 180 - priv->arb_sema = devm_ioremap_resource(&pdev->dev, res); 179 + priv->arb_sema = devm_platform_ioremap_resource(pdev, 1); 181 180 if (IS_ERR(priv->arb_sema)) 182 181 return PTR_ERR(priv->arb_sema); 183 182
+4 -1
drivers/gpu/drm/mediatek/mtk_dsi.c
··· 11 11 #include <linux/of_platform.h> 12 12 #include <linux/phy/phy.h> 13 13 #include <linux/platform_device.h> 14 + #include <linux/reset.h> 14 15 15 16 #include <video/mipi_display.h> 16 17 #include <video/videomode.h> ··· 981 980 struct mtk_dsi *dsi = dev_get_drvdata(dev); 982 981 983 982 ret = mtk_dsi_encoder_init(drm, dsi); 983 + if (ret) 984 + return ret; 984 985 985 - return ret; 986 + return device_reset_optional(dev); 986 987 } 987 988 988 989 static void mtk_dsi_unbind(struct device *dev, struct device *master,
+3 -2
drivers/memory/Kconfig
··· 55 55 SRAMs, ATA devices, etc. 56 56 57 57 config BRCMSTB_DPFE 58 - bool "Broadcom STB DPFE driver" if COMPILE_TEST 59 - default y if ARCH_BRCMSTB 58 + tristate "Broadcom STB DPFE driver" 59 + default ARCH_BRCMSTB 60 60 depends on ARCH_BRCMSTB || COMPILE_TEST 61 61 help 62 62 This driver provides access to the DPFE interface of Broadcom ··· 210 210 tristate "Renesas RPC-IF driver" 211 211 depends on ARCH_RENESAS || COMPILE_TEST 212 212 select REGMAP_MMIO 213 + select RESET_CONTROLLER 213 214 help 214 215 This supports Renesas R-Car Gen3 or RZ/G2 RPC-IF which provides 215 216 either SPI host or HyperFlash. You'll have to select individual
+6 -7
drivers/memory/fsl_ifc.c
··· 263 263 264 264 ret = fsl_ifc_ctrl_init(fsl_ifc_ctrl_dev); 265 265 if (ret < 0) 266 - goto err; 266 + goto err_unmap_nandirq; 267 267 268 268 init_waitqueue_head(&fsl_ifc_ctrl_dev->nand_wait); 269 269 ··· 272 272 if (ret != 0) { 273 273 dev_err(&dev->dev, "failed to install irq (%d)\n", 274 274 fsl_ifc_ctrl_dev->irq); 275 - goto err_irq; 275 + goto err_unmap_nandirq; 276 276 } 277 277 278 278 if (fsl_ifc_ctrl_dev->nand_irq) { ··· 281 281 if (ret != 0) { 282 282 dev_err(&dev->dev, "failed to install irq (%d)\n", 283 283 fsl_ifc_ctrl_dev->nand_irq); 284 - goto err_nandirq; 284 + goto err_free_irq; 285 285 } 286 286 } 287 287 288 288 return 0; 289 289 290 - err_nandirq: 291 - free_irq(fsl_ifc_ctrl_dev->nand_irq, fsl_ifc_ctrl_dev); 292 - irq_dispose_mapping(fsl_ifc_ctrl_dev->nand_irq); 293 - err_irq: 290 + err_free_irq: 294 291 free_irq(fsl_ifc_ctrl_dev->irq, fsl_ifc_ctrl_dev); 292 + err_unmap_nandirq: 293 + irq_dispose_mapping(fsl_ifc_ctrl_dev->nand_irq); 295 294 irq_dispose_mapping(fsl_ifc_ctrl_dev->irq); 296 295 err: 297 296 iounmap(fsl_ifc_ctrl_dev->gregs);
+47
drivers/memory/jedec_ddr.h
··· 112 112 #define NUM_DDR_ADDR_TABLE_ENTRIES 11 113 113 #define NUM_DDR_TIMING_TABLE_ENTRIES 4 114 114 115 + #define LPDDR2_MANID_SAMSUNG 1 116 + #define LPDDR2_MANID_QIMONDA 2 117 + #define LPDDR2_MANID_ELPIDA 3 118 + #define LPDDR2_MANID_ETRON 4 119 + #define LPDDR2_MANID_NANYA 5 120 + #define LPDDR2_MANID_HYNIX 6 121 + #define LPDDR2_MANID_MOSEL 7 122 + #define LPDDR2_MANID_WINBOND 8 123 + #define LPDDR2_MANID_ESMT 9 124 + #define LPDDR2_MANID_SPANSION 11 125 + #define LPDDR2_MANID_SST 12 126 + #define LPDDR2_MANID_ZMOS 13 127 + #define LPDDR2_MANID_INTEL 14 128 + #define LPDDR2_MANID_NUMONYX 254 129 + #define LPDDR2_MANID_MICRON 255 130 + 131 + #define LPDDR2_TYPE_S4 0 132 + #define LPDDR2_TYPE_S2 1 133 + #define LPDDR2_TYPE_NVM 2 134 + 115 135 /* Structure for DDR addressing info from the JEDEC spec */ 116 136 struct lpddr2_addressing { 117 137 u32 num_banks; ··· 189 169 extern const struct lpddr2_timings 190 170 lpddr2_jedec_timings[NUM_DDR_TIMING_TABLE_ENTRIES]; 191 171 extern const struct lpddr2_min_tck lpddr2_jedec_min_tck; 172 + 173 + /* Structure of MR8 */ 174 + union lpddr2_basic_config4 { 175 + u32 value; 176 + 177 + struct { 178 + unsigned int arch_type : 2; 179 + unsigned int density : 4; 180 + unsigned int io_width : 2; 181 + } __packed; 182 + }; 183 + 184 + /* 185 + * Structure for information about LPDDR2 chip. All parameters are 186 + * matching raw values of standard mode register bitfields or set to 187 + * -ENOENT if info unavailable. 188 + */ 189 + struct lpddr2_info { 190 + int arch_type; 191 + int density; 192 + int io_width; 193 + int manufacturer_id; 194 + int revision_id1; 195 + int revision_id2; 196 + }; 197 + 198 + const char *lpddr2_jedec_manufacturer(unsigned int manufacturer_id); 192 199 193 200 /* 194 201 * Structure for timings for LPDDR3 based on LPDDR2 plus additional fields.
+41
drivers/memory/jedec_ddr_data.c
··· 131 131 .tFAW = 8 132 132 }; 133 133 EXPORT_SYMBOL_GPL(lpddr2_jedec_min_tck); 134 + 135 + const char *lpddr2_jedec_manufacturer(unsigned int manufacturer_id) 136 + { 137 + switch (manufacturer_id) { 138 + case LPDDR2_MANID_SAMSUNG: 139 + return "Samsung"; 140 + case LPDDR2_MANID_QIMONDA: 141 + return "Qimonda"; 142 + case LPDDR2_MANID_ELPIDA: 143 + return "Elpida"; 144 + case LPDDR2_MANID_ETRON: 145 + return "Etron"; 146 + case LPDDR2_MANID_NANYA: 147 + return "Nanya"; 148 + case LPDDR2_MANID_HYNIX: 149 + return "Hynix"; 150 + case LPDDR2_MANID_MOSEL: 151 + return "Mosel"; 152 + case LPDDR2_MANID_WINBOND: 153 + return "Winbond"; 154 + case LPDDR2_MANID_ESMT: 155 + return "ESMT"; 156 + case LPDDR2_MANID_SPANSION: 157 + return "Spansion"; 158 + case LPDDR2_MANID_SST: 159 + return "SST"; 160 + case LPDDR2_MANID_ZMOS: 161 + return "ZMOS"; 162 + case LPDDR2_MANID_INTEL: 163 + return "Intel"; 164 + case LPDDR2_MANID_NUMONYX: 165 + return "Numonyx"; 166 + case LPDDR2_MANID_MICRON: 167 + return "Micron"; 168 + default: 169 + break; 170 + } 171 + 172 + return "invalid"; 173 + } 174 + EXPORT_SYMBOL_GPL(lpddr2_jedec_manufacturer);
+349 -247
drivers/memory/mtk-smi.c
··· 17 17 #include <dt-bindings/memory/mt2701-larb-port.h> 18 18 #include <dt-bindings/memory/mtk-memory-port.h> 19 19 20 - /* mt8173 */ 21 - #define SMI_LARB_MMU_EN 0xf00 20 + /* SMI COMMON */ 21 + #define SMI_L1LEN 0x100 22 22 23 - /* mt8167 */ 24 - #define MT8167_SMI_LARB_MMU_EN 0xfc0 23 + #define SMI_BUS_SEL 0x220 24 + #define SMI_BUS_LARB_SHIFT(larbid) ((larbid) << 1) 25 + /* All are MMU0 by default. Only specialize mmu1 here. */ 26 + #define F_MMU1_LARB(larbid) (0x1 << SMI_BUS_LARB_SHIFT(larbid)) 25 27 26 - /* mt2701 */ 28 + #define SMI_M4U_TH 0x234 29 + #define SMI_FIFO_TH1 0x238 30 + #define SMI_FIFO_TH2 0x23c 31 + #define SMI_DCM 0x300 32 + #define SMI_DUMMY 0x444 33 + 34 + /* SMI LARB */ 35 + #define SMI_LARB_CMD_THRT_CON 0x24 36 + #define SMI_LARB_THRT_RD_NU_LMT_MSK GENMASK(7, 4) 37 + #define SMI_LARB_THRT_RD_NU_LMT (5 << 4) 38 + 39 + #define SMI_LARB_SW_FLAG 0x40 40 + #define SMI_LARB_SW_FLAG_1 0x1 41 + 42 + #define SMI_LARB_OSTDL_PORT 0x200 43 + #define SMI_LARB_OSTDL_PORTx(id) (SMI_LARB_OSTDL_PORT + (((id) & 0x1f) << 2)) 44 + 45 + /* Below are the mmu enable registers; they differ across SoCs */ 46 + /* gen1: mt2701 */ 27 47 #define REG_SMI_SECUR_CON_BASE 0x5c0 28 48 29 49 /* every register control 8 port, register offset 0x4 */ ··· 61 41 /* mt2701 domain should be set to 3 */ 62 42 #define SMI_SECUR_CON_VAL_DOMAIN(id) (0x3 << ((((id) & 0x7) << 2) + 1)) 63 43 64 - /* mt2712 */ 65 - #define SMI_LARB_NONSEC_CON(id) (0x380 + ((id) * 4)) 66 - #define F_MMU_EN BIT(0) 67 - #define BANK_SEL(id) ({ \ 44 + /* gen2: */ 45 + /* mt8167 */ 46 + #define MT8167_SMI_LARB_MMU_EN 0xfc0 47 + 48 + /* mt8173 */ 49 + #define MT8173_SMI_LARB_MMU_EN 0xf00 50 + 51 + /* general */ 52 + #define SMI_LARB_NONSEC_CON(id) (0x380 + ((id) * 4)) 53 + #define F_MMU_EN BIT(0) 54 + #define BANK_SEL(id) ({ \ 68 55 u32 _id = (id) & 0x3; \ 69 56 (_id << 8 | _id << 10 | _id << 12 | _id << 14); \ 70 57 }) 71 58 72 - /* SMI COMMON */ 73 - #define SMI_BUS_SEL 0x220 74 - #define 
SMI_BUS_LARB_SHIFT(larbid) ((larbid) << 1) 75 - /* All are MMU0 defaultly. Only specialize mmu1 here. */ 76 - #define F_MMU1_LARB(larbid) (0x1 << SMI_BUS_LARB_SHIFT(larbid)) 59 + #define SMI_COMMON_INIT_REGS_NR 6 60 + #define SMI_LARB_PORT_NR_MAX 32 77 61 78 - enum mtk_smi_gen { 79 - MTK_SMI_GEN1, 80 - MTK_SMI_GEN2 62 + #define MTK_SMI_FLAG_THRT_UPDATE BIT(0) 63 + #define MTK_SMI_FLAG_SW_FLAG BIT(1) 64 + #define MTK_SMI_CAPS(flags, _x) (!!((flags) & (_x))) 65 + 66 + struct mtk_smi_reg_pair { 67 + unsigned int offset; 68 + u32 value; 81 69 }; 82 70 71 + enum mtk_smi_type { 72 + MTK_SMI_GEN1, 73 + MTK_SMI_GEN2, /* gen2 smi common */ 74 + MTK_SMI_GEN2_SUB_COMM, /* gen2 smi sub common */ 75 + }; 76 + 77 + #define MTK_SMI_CLK_NR_MAX 4 78 + 79 + /* larbs: Require apb/smi clocks while gals is optional. */ 80 + static const char * const mtk_smi_larb_clks[] = {"apb", "smi", "gals"}; 81 + #define MTK_SMI_LARB_REQ_CLK_NR 2 82 + #define MTK_SMI_LARB_OPT_CLK_NR 1 83 + 84 + /* 85 + * common: Require these four clocks in has_gals case. Otherwise, only apb/smi are required. 86 + * sub common: Require apb/smi/gals0 clocks in has_gals case. Otherwise, only apb/smi are required. 
87 + */ 88 + static const char * const mtk_smi_common_clks[] = {"apb", "smi", "gals0", "gals1"}; 89 + #define MTK_SMI_COM_REQ_CLK_NR 2 90 + #define MTK_SMI_COM_GALS_REQ_CLK_NR MTK_SMI_CLK_NR_MAX 91 + #define MTK_SMI_SUB_COM_GALS_REQ_CLK_NR 3 92 + 83 93 struct mtk_smi_common_plat { 84 - enum mtk_smi_gen gen; 85 - bool has_gals; 86 - u32 bus_sel; /* Balance some larbs to enter mmu0 or mmu1 */ 94 + enum mtk_smi_type type; 95 + bool has_gals; 96 + u32 bus_sel; /* Balance some larbs to enter mmu0 or mmu1 */ 97 + 98 + const struct mtk_smi_reg_pair *init; 87 99 }; 88 100 89 101 struct mtk_smi_larb_gen { 90 102 int port_in_larb[MTK_LARB_NR_MAX + 1]; 91 103 void (*config_port)(struct device *dev); 92 104 unsigned int larb_direct_to_common_mask; 93 - bool has_gals; 105 + unsigned int flags_general; 106 + const u8 (*ostd)[SMI_LARB_PORT_NR_MAX]; 94 107 }; 95 108 96 109 struct mtk_smi { 97 110 struct device *dev; 98 - struct clk *clk_apb, *clk_smi; 99 - struct clk *clk_gals0, *clk_gals1; 111 + unsigned int clk_num; 112 + struct clk_bulk_data clks[MTK_SMI_CLK_NR_MAX]; 100 113 struct clk *clk_async; /*only needed by mt2701*/ 101 114 union { 102 115 void __iomem *smi_ao_base; /* only for gen1 */ 103 116 void __iomem *base; /* only for gen2 */ 104 117 }; 118 + struct device *smi_common_dev; /* for sub common */ 105 119 const struct mtk_smi_common_plat *plat; 106 120 }; 107 121 108 122 struct mtk_smi_larb { /* larb: local arbiter */ 109 123 struct mtk_smi smi; 110 124 void __iomem *base; 111 - struct device *smi_common_dev; 125 + struct device *smi_common_dev; /* common or sub-common dev */ 112 126 const struct mtk_smi_larb_gen *larb_gen; 113 127 int larbid; 114 128 u32 *mmu; 115 129 unsigned char *bank; 116 130 }; 117 - 118 - static int mtk_smi_clk_enable(const struct mtk_smi *smi) 119 - { 120 - int ret; 121 - 122 - ret = clk_prepare_enable(smi->clk_apb); 123 - if (ret) 124 - return ret; 125 - 126 - ret = clk_prepare_enable(smi->clk_smi); 127 - if (ret) 128 - goto err_disable_apb; 
129 - 130 - ret = clk_prepare_enable(smi->clk_gals0); 131 - if (ret) 132 - goto err_disable_smi; 133 - 134 - ret = clk_prepare_enable(smi->clk_gals1); 135 - if (ret) 136 - goto err_disable_gals0; 137 - 138 - return 0; 139 - 140 - err_disable_gals0: 141 - clk_disable_unprepare(smi->clk_gals0); 142 - err_disable_smi: 143 - clk_disable_unprepare(smi->clk_smi); 144 - err_disable_apb: 145 - clk_disable_unprepare(smi->clk_apb); 146 - return ret; 147 - } 148 - 149 - static void mtk_smi_clk_disable(const struct mtk_smi *smi) 150 - { 151 - clk_disable_unprepare(smi->clk_gals1); 152 - clk_disable_unprepare(smi->clk_gals0); 153 - clk_disable_unprepare(smi->clk_smi); 154 - clk_disable_unprepare(smi->clk_apb); 155 - } 156 131 157 132 int mtk_smi_larb_get(struct device *larbdev) 158 133 { ··· 181 166 return -ENODEV; 182 167 } 183 168 184 - static void mtk_smi_larb_config_port_gen2_general(struct device *dev) 169 + static void 170 + mtk_smi_larb_unbind(struct device *dev, struct device *master, void *data) 185 171 { 186 - struct mtk_smi_larb *larb = dev_get_drvdata(dev); 187 - u32 reg; 188 - int i; 189 - 190 - if (BIT(larb->larbid) & larb->larb_gen->larb_direct_to_common_mask) 191 - return; 192 - 193 - for_each_set_bit(i, (unsigned long *)larb->mmu, 32) { 194 - reg = readl_relaxed(larb->base + SMI_LARB_NONSEC_CON(i)); 195 - reg |= F_MMU_EN; 196 - reg |= BANK_SEL(larb->bank[i]); 197 - writel(reg, larb->base + SMI_LARB_NONSEC_CON(i)); 198 - } 172 + /* Do nothing as the iommu is always enabled. 
*/ 199 173 } 200 174 201 - static void mtk_smi_larb_config_port_mt8173(struct device *dev) 202 - { 203 - struct mtk_smi_larb *larb = dev_get_drvdata(dev); 204 - 205 - writel(*larb->mmu, larb->base + SMI_LARB_MMU_EN); 206 - } 207 - 208 - static void mtk_smi_larb_config_port_mt8167(struct device *dev) 209 - { 210 - struct mtk_smi_larb *larb = dev_get_drvdata(dev); 211 - 212 - writel(*larb->mmu, larb->base + MT8167_SMI_LARB_MMU_EN); 213 - } 175 + static const struct component_ops mtk_smi_larb_component_ops = { 176 + .bind = mtk_smi_larb_bind, 177 + .unbind = mtk_smi_larb_unbind, 178 + }; 214 179 215 180 static void mtk_smi_larb_config_port_gen1(struct device *dev) 216 181 { ··· 223 228 } 224 229 } 225 230 226 - static void 227 - mtk_smi_larb_unbind(struct device *dev, struct device *master, void *data) 231 + static void mtk_smi_larb_config_port_mt8167(struct device *dev) 228 232 { 229 - /* Do nothing as the iommu is always enabled. */ 233 + struct mtk_smi_larb *larb = dev_get_drvdata(dev); 234 + 235 + writel(*larb->mmu, larb->base + MT8167_SMI_LARB_MMU_EN); 230 236 } 231 237 232 - static const struct component_ops mtk_smi_larb_component_ops = { 233 - .bind = mtk_smi_larb_bind, 234 - .unbind = mtk_smi_larb_unbind, 235 - }; 238 + static void mtk_smi_larb_config_port_mt8173(struct device *dev) 239 + { 240 + struct mtk_smi_larb *larb = dev_get_drvdata(dev); 236 241 237 - static const struct mtk_smi_larb_gen mtk_smi_larb_mt8173 = { 238 - /* mt8173 do not need the port in larb */ 239 - .config_port = mtk_smi_larb_config_port_mt8173, 240 - }; 242 + writel(*larb->mmu, larb->base + MT8173_SMI_LARB_MMU_EN); 243 + } 241 244 242 - static const struct mtk_smi_larb_gen mtk_smi_larb_mt8167 = { 243 - /* mt8167 do not need the port in larb */ 244 - .config_port = mtk_smi_larb_config_port_mt8167, 245 + static void mtk_smi_larb_config_port_gen2_general(struct device *dev) 246 + { 247 + struct mtk_smi_larb *larb = dev_get_drvdata(dev); 248 + u32 reg, flags_general = 
larb->larb_gen->flags_general; 249 + const u8 *larbostd = larb->larb_gen->ostd[larb->larbid]; 250 + int i; 251 + 252 + if (BIT(larb->larbid) & larb->larb_gen->larb_direct_to_common_mask) 253 + return; 254 + 255 + if (MTK_SMI_CAPS(flags_general, MTK_SMI_FLAG_THRT_UPDATE)) { 256 + reg = readl_relaxed(larb->base + SMI_LARB_CMD_THRT_CON); 257 + reg &= ~SMI_LARB_THRT_RD_NU_LMT_MSK; 258 + reg |= SMI_LARB_THRT_RD_NU_LMT; 259 + writel_relaxed(reg, larb->base + SMI_LARB_CMD_THRT_CON); 260 + } 261 + 262 + if (MTK_SMI_CAPS(flags_general, MTK_SMI_FLAG_SW_FLAG)) 263 + writel_relaxed(SMI_LARB_SW_FLAG_1, larb->base + SMI_LARB_SW_FLAG); 264 + 265 + for (i = 0; i < SMI_LARB_PORT_NR_MAX && larbostd && !!larbostd[i]; i++) 266 + writel_relaxed(larbostd[i], larb->base + SMI_LARB_OSTDL_PORTx(i)); 267 + 268 + for_each_set_bit(i, (unsigned long *)larb->mmu, 32) { 269 + reg = readl_relaxed(larb->base + SMI_LARB_NONSEC_CON(i)); 270 + reg |= F_MMU_EN; 271 + reg |= BANK_SEL(larb->bank[i]); 272 + writel(reg, larb->base + SMI_LARB_NONSEC_CON(i)); 273 + } 274 + } 275 + 276 + static const u8 mtk_smi_larb_mt8195_ostd[][SMI_LARB_PORT_NR_MAX] = { 277 + [0] = {0x0a, 0xc, 0x22, 0x22, 0x01, 0x0a,}, /* larb0 */ 278 + [1] = {0x0a, 0xc, 0x22, 0x22, 0x01, 0x0a,}, /* larb1 */ 279 + [2] = {0x12, 0x12, 0x12, 0x12, 0x0a,}, /* ... 
*/ 280 + [3] = {0x12, 0x12, 0x12, 0x12, 0x28, 0x28, 0x0a,}, 281 + [4] = {0x06, 0x01, 0x17, 0x06, 0x0a,}, 282 + [5] = {0x06, 0x01, 0x17, 0x06, 0x06, 0x01, 0x06, 0x0a,}, 283 + [6] = {0x06, 0x01, 0x06, 0x0a,}, 284 + [7] = {0x0c, 0x0c, 0x12,}, 285 + [8] = {0x0c, 0x0c, 0x12,}, 286 + [9] = {0x0a, 0x08, 0x04, 0x06, 0x01, 0x01, 0x10, 0x18, 0x11, 0x0a, 287 + 0x08, 0x04, 0x11, 0x06, 0x02, 0x06, 0x01, 0x11, 0x11, 0x06,}, 288 + [10] = {0x18, 0x08, 0x01, 0x01, 0x20, 0x12, 0x18, 0x06, 0x05, 0x10, 289 + 0x08, 0x08, 0x10, 0x08, 0x08, 0x18, 0x0c, 0x09, 0x0b, 0x0d, 290 + 0x0d, 0x06, 0x10, 0x10,}, 291 + [11] = {0x0e, 0x0e, 0x0e, 0x0e, 0x0e, 0x0e, 0x01, 0x01, 0x01, 0x01,}, 292 + [12] = {0x09, 0x09, 0x05, 0x05, 0x0c, 0x18, 0x02, 0x02, 0x04, 0x02,}, 293 + [13] = {0x02, 0x02, 0x12, 0x12, 0x02, 0x02, 0x02, 0x02, 0x08, 0x01,}, 294 + [14] = {0x12, 0x12, 0x02, 0x02, 0x02, 0x02, 0x16, 0x01, 0x16, 0x01, 295 + 0x01, 0x02, 0x02, 0x08, 0x02,}, 296 + [15] = {}, 297 + [16] = {0x28, 0x02, 0x02, 0x12, 0x02, 0x12, 0x10, 0x02, 0x02, 0x0a, 298 + 0x12, 0x02, 0x0a, 0x16, 0x02, 0x04,}, 299 + [17] = {0x1a, 0x0e, 0x0a, 0x0a, 0x0c, 0x0e, 0x10,}, 300 + [18] = {0x12, 0x06, 0x12, 0x06,}, 301 + [19] = {0x01, 0x04, 0x01, 0x01, 0x01, 0x01, 0x01, 0x04, 0x04, 0x01, 302 + 0x01, 0x01, 0x04, 0x0a, 0x06, 0x01, 0x01, 0x01, 0x0a, 0x06, 303 + 0x01, 0x01, 0x05, 0x03, 0x03, 0x04, 0x01,}, 304 + [20] = {0x01, 0x04, 0x01, 0x01, 0x01, 0x01, 0x01, 0x04, 0x04, 0x01, 305 + 0x01, 0x01, 0x04, 0x0a, 0x06, 0x01, 0x01, 0x01, 0x0a, 0x06, 306 + 0x01, 0x01, 0x05, 0x03, 0x03, 0x04, 0x01,}, 307 + [21] = {0x28, 0x19, 0x0c, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x04,}, 308 + [22] = {0x28, 0x19, 0x0c, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x04,}, 309 + [23] = {0x18, 0x01,}, 310 + [24] = {0x01, 0x01, 0x04, 0x01, 0x01, 0x01, 0x01, 0x01, 0x04, 0x01, 311 + 0x01, 0x01,}, 312 + [25] = {0x02, 0x02, 0x02, 0x28, 0x16, 0x02, 0x02, 0x02, 0x12, 0x16, 313 + 0x02, 0x01,}, 314 + [26] = {0x02, 0x02, 0x02, 0x28, 0x16, 0x02, 0x02, 0x02, 0x12, 0x16, 315 + 0x02, 
+		0x01,},
+	[27] = {0x02, 0x02, 0x02, 0x28, 0x16, 0x02, 0x02, 0x02, 0x12, 0x16,
+		0x02, 0x01,},
+	[28] = {0x1a, 0x0e, 0x0a, 0x0a, 0x0c, 0x0e, 0x10,},
 };
 
 static const struct mtk_smi_larb_gen mtk_smi_larb_mt2701 = {
···
 	/* DUMMY | IPU0 | IPU1 | CCU | MDLA */
 };
 
+static const struct mtk_smi_larb_gen mtk_smi_larb_mt8167 = {
+	/* mt8167 do not need the port in larb */
+	.config_port = mtk_smi_larb_config_port_mt8167,
+};
+
+static const struct mtk_smi_larb_gen mtk_smi_larb_mt8173 = {
+	/* mt8173 do not need the port in larb */
+	.config_port = mtk_smi_larb_config_port_mt8173,
+};
+
 static const struct mtk_smi_larb_gen mtk_smi_larb_mt8183 = {
-	.has_gals = true,
 	.config_port = mtk_smi_larb_config_port_gen2_general,
 	.larb_direct_to_common_mask = BIT(2) | BIT(3) | BIT(7),
 	/* IPU0 | IPU1 | CCU */
···
 	.config_port = mtk_smi_larb_config_port_gen2_general,
 };
 
+static const struct mtk_smi_larb_gen mtk_smi_larb_mt8195 = {
+	.config_port = mtk_smi_larb_config_port_gen2_general,
+	.flags_general = MTK_SMI_FLAG_THRT_UPDATE | MTK_SMI_FLAG_SW_FLAG,
+	.ostd = mtk_smi_larb_mt8195_ostd,
+};
+
 static const struct of_device_id mtk_smi_larb_of_ids[] = {
-	{
-		.compatible = "mediatek,mt8167-smi-larb",
-		.data = &mtk_smi_larb_mt8167
-	},
-	{
-		.compatible = "mediatek,mt8173-smi-larb",
-		.data = &mtk_smi_larb_mt8173
-	},
-	{
-		.compatible = "mediatek,mt2701-smi-larb",
-		.data = &mtk_smi_larb_mt2701
-	},
-	{
-		.compatible = "mediatek,mt2712-smi-larb",
-		.data = &mtk_smi_larb_mt2712
-	},
-	{
-		.compatible = "mediatek,mt6779-smi-larb",
-		.data = &mtk_smi_larb_mt6779
-	},
-	{
-		.compatible = "mediatek,mt8183-smi-larb",
-		.data = &mtk_smi_larb_mt8183
-	},
-	{
-		.compatible = "mediatek,mt8192-smi-larb",
-		.data = &mtk_smi_larb_mt8192
-	},
+	{.compatible = "mediatek,mt2701-smi-larb", .data = &mtk_smi_larb_mt2701},
+	{.compatible = "mediatek,mt2712-smi-larb", .data = &mtk_smi_larb_mt2712},
+	{.compatible = "mediatek,mt6779-smi-larb", .data = &mtk_smi_larb_mt6779},
+	{.compatible = "mediatek,mt8167-smi-larb", .data = &mtk_smi_larb_mt8167},
+	{.compatible = "mediatek,mt8173-smi-larb", .data = &mtk_smi_larb_mt8173},
+	{.compatible = "mediatek,mt8183-smi-larb", .data = &mtk_smi_larb_mt8183},
+	{.compatible = "mediatek,mt8192-smi-larb", .data = &mtk_smi_larb_mt8192},
+	{.compatible = "mediatek,mt8195-smi-larb", .data = &mtk_smi_larb_mt8195},
 	{}
 };
+
+static int mtk_smi_device_link_common(struct device *dev, struct device **com_dev)
+{
+	struct platform_device *smi_com_pdev;
+	struct device_node *smi_com_node;
+	struct device *smi_com_dev;
+	struct device_link *link;
+
+	smi_com_node = of_parse_phandle(dev->of_node, "mediatek,smi", 0);
+	if (!smi_com_node)
+		return -EINVAL;
+
+	smi_com_pdev = of_find_device_by_node(smi_com_node);
+	of_node_put(smi_com_node);
+	if (smi_com_pdev) {
+		/* smi common is the supplier, Make sure it is ready before */
+		if (!platform_get_drvdata(smi_com_pdev))
+			return -EPROBE_DEFER;
+		smi_com_dev = &smi_com_pdev->dev;
+		link = device_link_add(dev, smi_com_dev,
+				       DL_FLAG_PM_RUNTIME | DL_FLAG_STATELESS);
+		if (!link) {
+			dev_err(dev, "Unable to link smi-common dev\n");
+			return -ENODEV;
+		}
+		*com_dev = smi_com_dev;
+	} else {
+		dev_err(dev, "Failed to get the smi_common device\n");
+		return -EINVAL;
+	}
+	return 0;
+}
+
+static int mtk_smi_dts_clk_init(struct device *dev, struct mtk_smi *smi,
+				const char * const clks[],
+				unsigned int clk_nr_required,
+				unsigned int clk_nr_optional)
+{
+	int i, ret;
+
+	for (i = 0; i < clk_nr_required; i++)
+		smi->clks[i].id = clks[i];
+	ret = devm_clk_bulk_get(dev, clk_nr_required, smi->clks);
+	if (ret)
+		return ret;
+
+	for (i = clk_nr_required; i < clk_nr_required + clk_nr_optional; i++)
+		smi->clks[i].id = clks[i];
+	ret = devm_clk_bulk_get_optional(dev, clk_nr_optional,
+					 smi->clks + clk_nr_required);
+	smi->clk_num = clk_nr_required + clk_nr_optional;
+	return ret;
+}
 
 static int mtk_smi_larb_probe(struct platform_device *pdev)
 {
 	struct mtk_smi_larb *larb;
-	struct resource *res;
 	struct device *dev = &pdev->dev;
-	struct device_node *smi_node;
-	struct platform_device *smi_pdev;
-	struct device_link *link;
+	int ret;
 
 	larb = devm_kzalloc(dev, sizeof(*larb), GFP_KERNEL);
 	if (!larb)
 		return -ENOMEM;
 
 	larb->larb_gen = of_device_get_match_data(dev);
-	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-	larb->base = devm_ioremap_resource(dev, res);
+	larb->base = devm_platform_ioremap_resource(pdev, 0);
 	if (IS_ERR(larb->base))
 		return PTR_ERR(larb->base);
 
-	larb->smi.clk_apb = devm_clk_get(dev, "apb");
-	if (IS_ERR(larb->smi.clk_apb))
-		return PTR_ERR(larb->smi.clk_apb);
+	ret = mtk_smi_dts_clk_init(dev, &larb->smi, mtk_smi_larb_clks,
+				   MTK_SMI_LARB_REQ_CLK_NR, MTK_SMI_LARB_OPT_CLK_NR);
+	if (ret)
+		return ret;
 
-	larb->smi.clk_smi = devm_clk_get(dev, "smi");
-	if (IS_ERR(larb->smi.clk_smi))
-		return PTR_ERR(larb->smi.clk_smi);
-
-	if (larb->larb_gen->has_gals) {
-		/* The larbs may still haven't gals even if the SoC support.*/
-		larb->smi.clk_gals0 = devm_clk_get(dev, "gals");
-		if (PTR_ERR(larb->smi.clk_gals0) == -ENOENT)
-			larb->smi.clk_gals0 = NULL;
-		else if (IS_ERR(larb->smi.clk_gals0))
-			return PTR_ERR(larb->smi.clk_gals0);
-	}
 	larb->smi.dev = dev;
 
-	smi_node = of_parse_phandle(dev->of_node, "mediatek,smi", 0);
-	if (!smi_node)
-		return -EINVAL;
-
-	smi_pdev = of_find_device_by_node(smi_node);
-	of_node_put(smi_node);
-	if (smi_pdev) {
-		if (!platform_get_drvdata(smi_pdev))
-			return -EPROBE_DEFER;
-		larb->smi_common_dev = &smi_pdev->dev;
-		link = device_link_add(dev, larb->smi_common_dev,
-				       DL_FLAG_PM_RUNTIME | DL_FLAG_STATELESS);
-		if (!link) {
-			dev_err(dev, "Unable to link smi-common dev\n");
-			return -ENODEV;
-		}
-	} else {
-		dev_err(dev, "Failed to get the smi_common device\n");
-		return -EINVAL;
-	}
+	ret = mtk_smi_device_link_common(dev, &larb->smi_common_dev);
+	if (ret < 0)
+		return ret;
 
 	pm_runtime_enable(dev);
 	platform_set_drvdata(pdev, larb);
-	return component_add(dev, &mtk_smi_larb_component_ops);
+	ret = component_add(dev, &mtk_smi_larb_component_ops);
+	if (ret)
+		goto err_pm_disable;
+	return 0;
+
+err_pm_disable:
+	pm_runtime_disable(dev);
+	device_link_remove(dev, larb->smi_common_dev);
+	return ret;
 }
 
 static int mtk_smi_larb_remove(struct platform_device *pdev)
···
 	const struct mtk_smi_larb_gen *larb_gen = larb->larb_gen;
 	int ret;
 
-	ret = mtk_smi_clk_enable(&larb->smi);
-	if (ret < 0) {
-		dev_err(dev, "Failed to enable clock(%d).\n", ret);
+	ret = clk_bulk_prepare_enable(larb->smi.clk_num, larb->smi.clks);
+	if (ret < 0)
 		return ret;
-	}
 
 	/* Configure the basic setting for this larb */
 	larb_gen->config_port(dev);
···
 {
 	struct mtk_smi_larb *larb = dev_get_drvdata(dev);
 
-	mtk_smi_clk_disable(&larb->smi);
+	clk_bulk_disable_unprepare(larb->smi.clk_num, larb->smi.clks);
 	return 0;
 }
 
···
 	}
 };
 
+static const struct mtk_smi_reg_pair mtk_smi_common_mt8195_init[SMI_COMMON_INIT_REGS_NR] = {
+	{SMI_L1LEN, 0xb},
+	{SMI_M4U_TH, 0xe100e10},
+	{SMI_FIFO_TH1, 0x506090a},
+	{SMI_FIFO_TH2, 0x506090a},
+	{SMI_DCM, 0x4f1},
+	{SMI_DUMMY, 0x1},
+};
+
 static const struct mtk_smi_common_plat mtk_smi_common_gen1 = {
-	.gen = MTK_SMI_GEN1,
+	.type = MTK_SMI_GEN1,
 };
 
 static const struct mtk_smi_common_plat mtk_smi_common_gen2 = {
-	.gen = MTK_SMI_GEN2,
+	.type = MTK_SMI_GEN2,
 };
 
 static const struct mtk_smi_common_plat mtk_smi_common_mt6779 = {
-	.gen = MTK_SMI_GEN2,
-	.has_gals = true,
-	.bus_sel = F_MMU1_LARB(1) | F_MMU1_LARB(2) | F_MMU1_LARB(4) |
-		   F_MMU1_LARB(5) | F_MMU1_LARB(6) | F_MMU1_LARB(7),
+	.type = MTK_SMI_GEN2,
+	.has_gals = true,
+	.bus_sel = F_MMU1_LARB(1) | F_MMU1_LARB(2) | F_MMU1_LARB(4) |
+		   F_MMU1_LARB(5) | F_MMU1_LARB(6) | F_MMU1_LARB(7),
 };
 
 static const struct mtk_smi_common_plat mtk_smi_common_mt8183 = {
-	.gen = MTK_SMI_GEN2,
+	.type = MTK_SMI_GEN2,
 	.has_gals = true,
 	.bus_sel = F_MMU1_LARB(1) | F_MMU1_LARB(2) | F_MMU1_LARB(5) |
 		   F_MMU1_LARB(7),
 };
 
 static const struct mtk_smi_common_plat mtk_smi_common_mt8192 = {
-	.gen = MTK_SMI_GEN2,
+	.type = MTK_SMI_GEN2,
 	.has_gals = true,
 	.bus_sel = F_MMU1_LARB(1) | F_MMU1_LARB(2) | F_MMU1_LARB(5) |
 		   F_MMU1_LARB(6),
 };
 
+static const struct mtk_smi_common_plat mtk_smi_common_mt8195_vdo = {
+	.type = MTK_SMI_GEN2,
+	.has_gals = true,
+	.bus_sel = F_MMU1_LARB(1) | F_MMU1_LARB(3) | F_MMU1_LARB(5) |
+		   F_MMU1_LARB(7),
+	.init = mtk_smi_common_mt8195_init,
+};
+
+static const struct mtk_smi_common_plat mtk_smi_common_mt8195_vpp = {
+	.type = MTK_SMI_GEN2,
+	.has_gals = true,
+	.bus_sel = F_MMU1_LARB(1) | F_MMU1_LARB(2) | F_MMU1_LARB(7),
+	.init = mtk_smi_common_mt8195_init,
+};
+
+static const struct mtk_smi_common_plat mtk_smi_sub_common_mt8195 = {
+	.type = MTK_SMI_GEN2_SUB_COMM,
+	.has_gals = true,
+};
+
 static const struct of_device_id mtk_smi_common_of_ids[] = {
-	{
-		.compatible = "mediatek,mt8173-smi-common",
-		.data = &mtk_smi_common_gen2,
-	},
-	{
-		.compatible = "mediatek,mt8167-smi-common",
-		.data = &mtk_smi_common_gen2,
-	},
-	{
-		.compatible = "mediatek,mt2701-smi-common",
-		.data = &mtk_smi_common_gen1,
-	},
-	{
-		.compatible = "mediatek,mt2712-smi-common",
-		.data = &mtk_smi_common_gen2,
-	},
-	{
-		.compatible = "mediatek,mt6779-smi-common",
-		.data = &mtk_smi_common_mt6779,
-	},
-	{
-		.compatible = "mediatek,mt8183-smi-common",
-		.data = &mtk_smi_common_mt8183,
-	},
-	{
-		.compatible = "mediatek,mt8192-smi-common",
-		.data = &mtk_smi_common_mt8192,
-	},
+	{.compatible = "mediatek,mt2701-smi-common", .data = &mtk_smi_common_gen1},
+	{.compatible = "mediatek,mt2712-smi-common", .data = &mtk_smi_common_gen2},
+	{.compatible = "mediatek,mt6779-smi-common", .data = &mtk_smi_common_mt6779},
+	{.compatible = "mediatek,mt8167-smi-common", .data = &mtk_smi_common_gen2},
+	{.compatible = "mediatek,mt8173-smi-common", .data = &mtk_smi_common_gen2},
+	{.compatible = "mediatek,mt8183-smi-common", .data = &mtk_smi_common_mt8183},
+	{.compatible = "mediatek,mt8192-smi-common", .data = &mtk_smi_common_mt8192},
+	{.compatible = "mediatek,mt8195-smi-common-vdo", .data = &mtk_smi_common_mt8195_vdo},
+	{.compatible = "mediatek,mt8195-smi-common-vpp", .data = &mtk_smi_common_mt8195_vpp},
+	{.compatible = "mediatek,mt8195-smi-sub-common", .data = &mtk_smi_sub_common_mt8195},
 	{}
 };
 
···
 {
 	struct device *dev = &pdev->dev;
 	struct mtk_smi *common;
-	struct resource *res;
-	int ret;
+	int ret, clk_required = MTK_SMI_COM_REQ_CLK_NR;
 
 	common = devm_kzalloc(dev, sizeof(*common), GFP_KERNEL);
 	if (!common)
···
 	common->dev = dev;
 	common->plat = of_device_get_match_data(dev);
 
-	common->clk_apb = devm_clk_get(dev, "apb");
-	if (IS_ERR(common->clk_apb))
-		return PTR_ERR(common->clk_apb);
-
-	common->clk_smi = devm_clk_get(dev, "smi");
-	if (IS_ERR(common->clk_smi))
-		return PTR_ERR(common->clk_smi);
-
 	if (common->plat->has_gals) {
-		common->clk_gals0 = devm_clk_get(dev, "gals0");
-		if (IS_ERR(common->clk_gals0))
-			return PTR_ERR(common->clk_gals0);
-
-		common->clk_gals1 = devm_clk_get(dev, "gals1");
-		if (IS_ERR(common->clk_gals1))
-			return PTR_ERR(common->clk_gals1);
+		if (common->plat->type == MTK_SMI_GEN2)
+			clk_required = MTK_SMI_COM_GALS_REQ_CLK_NR;
+		else if (common->plat->type == MTK_SMI_GEN2_SUB_COMM)
+			clk_required = MTK_SMI_SUB_COM_GALS_REQ_CLK_NR;
 	}
+	ret = mtk_smi_dts_clk_init(dev, common, mtk_smi_common_clks, clk_required, 0);
+	if (ret)
+		return ret;
 
 	/*
 	 * for mtk smi gen 1, we need to get the ao(always on) base to config
···
 	 * clock into emi clock domain, but for mtk smi gen2, there's no smi ao
	 * base.
 	 */
-	if (common->plat->gen == MTK_SMI_GEN1) {
-		res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-		common->smi_ao_base = devm_ioremap_resource(dev, res);
+	if (common->plat->type == MTK_SMI_GEN1) {
+		common->smi_ao_base = devm_platform_ioremap_resource(pdev, 0);
 		if (IS_ERR(common->smi_ao_base))
 			return PTR_ERR(common->smi_ao_base);
 
···
 		if (ret)
 			return ret;
 	} else {
-		res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-		common->base = devm_ioremap_resource(dev, res);
+		common->base = devm_platform_ioremap_resource(pdev, 0);
 		if (IS_ERR(common->base))
 			return PTR_ERR(common->base);
 	}
+
+	/* link its smi-common if this is smi-sub-common */
+	if (common->plat->type == MTK_SMI_GEN2_SUB_COMM) {
+		ret = mtk_smi_device_link_common(dev, &common->smi_common_dev);
+		if (ret < 0)
+			return ret;
+	}
+
 	pm_runtime_enable(dev);
 	platform_set_drvdata(pdev, common);
 	return 0;
···
 
 static int mtk_smi_common_remove(struct platform_device *pdev)
 {
+	struct mtk_smi *common = dev_get_drvdata(&pdev->dev);
+
+	if (common->plat->type == MTK_SMI_GEN2_SUB_COMM)
+		device_link_remove(&pdev->dev, common->smi_common_dev);
 	pm_runtime_disable(&pdev->dev);
 	return 0;
 }
···
 static int __maybe_unused mtk_smi_common_resume(struct device *dev)
 {
 	struct mtk_smi *common = dev_get_drvdata(dev);
-	u32 bus_sel = common->plat->bus_sel;
-	int ret;
+	const struct mtk_smi_reg_pair *init = common->plat->init;
+	u32 bus_sel = common->plat->bus_sel; /* default is 0 */
+	int ret, i;
 
-	ret = mtk_smi_clk_enable(common);
-	if (ret) {
-		dev_err(common->dev, "Failed to enable clock(%d).\n", ret);
+	ret = clk_bulk_prepare_enable(common->clk_num, common->clks);
+	if (ret)
 		return ret;
-	}
 
-	if (common->plat->gen == MTK_SMI_GEN2 && bus_sel)
-		writel(bus_sel, common->base + SMI_BUS_SEL);
+	if (common->plat->type != MTK_SMI_GEN2)
+		return 0;
+
+	for (i = 0; i < SMI_COMMON_INIT_REGS_NR && init && init[i].offset; i++)
+		writel_relaxed(init[i].value, common->base + init[i].offset);
+
+	writel(bus_sel, common->base + SMI_BUS_SEL);
 	return 0;
 }
···
 {
 	struct mtk_smi *common = dev_get_drvdata(dev);
 
-	mtk_smi_clk_disable(common);
+	clk_bulk_disable_unprepare(common->clk_num, common->clks);
 	return 0;
 }
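Aside: the compact `mtk_smi_larb_of_ids` table above is the standard kernel pattern of a sentinel-terminated compatible-string table that `of_device_get_match_data()` resolves to per-SoC data. A minimal userspace sketch of that lookup (the struct names and data here are illustrative stand-ins, not the kernel's):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical stand-ins for the per-SoC platform data, not the kernel structs. */
struct larb_gen { const char *name; };

static const struct larb_gen mt2701 = { "mt2701" };
static const struct larb_gen mt8195 = { "mt8195" };

/* Same shape as a kernel of_device_id table: compatible string -> driver data. */
struct of_id { const char *compatible; const void *data; };

static const struct of_id larb_of_ids[] = {
	{ .compatible = "mediatek,mt2701-smi-larb", .data = &mt2701 },
	{ .compatible = "mediatek,mt8195-smi-larb", .data = &mt8195 },
	{ } /* sentinel, like the kernel table's terminating {} entry */
};

/* Linear scan until the empty sentinel, which is effectively what the
 * OF match code does with the device's compatible property. */
static const void *match_data(const struct of_id *ids, const char *compatible)
{
	for (; ids->compatible; ids++)
		if (!strcmp(ids->compatible, compatible))
			return ids->data;
	return NULL;
}
```

The sentinel entry is what lets the scan terminate without a separate length, which is why the patch keeps the `{}` at the end of the reordered table.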
+87
drivers/memory/of_memory.c
···
 	return NULL;
 }
 EXPORT_SYMBOL(of_lpddr3_get_ddr_timings);
+
+/**
+ * of_lpddr2_get_info() - extracts information about the lpddr2 chip.
+ * @np: Pointer to device tree node containing lpddr2 info
+ * @dev: Device requesting info
+ *
+ * Populates lpddr2_info structure by extracting data from device
+ * tree node. Returns pointer to populated structure. If error
+ * happened while populating, returns NULL. If property is missing
+ * in a device-tree, then the corresponding value is set to -ENOENT.
+ */
+const struct lpddr2_info
+*of_lpddr2_get_info(struct device_node *np, struct device *dev)
+{
+	struct lpddr2_info *ret_info, info = {};
+	struct property *prop;
+	const char *cp;
+	int err;
+
+	err = of_property_read_u32(np, "revision-id1", &info.revision_id1);
+	if (err)
+		info.revision_id1 = -ENOENT;
+
+	err = of_property_read_u32(np, "revision-id2", &info.revision_id2);
+	if (err)
+		info.revision_id2 = -ENOENT;
+
+	err = of_property_read_u32(np, "io-width", &info.io_width);
+	if (err)
+		return NULL;
+
+	info.io_width = 32 / info.io_width - 1;
+
+	err = of_property_read_u32(np, "density", &info.density);
+	if (err)
+		return NULL;
+
+	info.density = ffs(info.density) - 7;
+
+	if (of_device_is_compatible(np, "jedec,lpddr2-s4"))
+		info.arch_type = LPDDR2_TYPE_S4;
+	else if (of_device_is_compatible(np, "jedec,lpddr2-s2"))
+		info.arch_type = LPDDR2_TYPE_S2;
+	else if (of_device_is_compatible(np, "jedec,lpddr2-nvm"))
+		info.arch_type = LPDDR2_TYPE_NVM;
+	else
+		return NULL;
+
+	prop = of_find_property(np, "compatible", NULL);
+	for (cp = of_prop_next_string(prop, NULL); cp;
+	     cp = of_prop_next_string(prop, cp)) {
+
+#define OF_LPDDR2_VENDOR_CMP(compat, ID) \
+		if (!of_compat_cmp(cp, compat ",", strlen(compat ","))) { \
+			info.manufacturer_id = LPDDR2_MANID_##ID; \
+			break; \
+		}
+
+		OF_LPDDR2_VENDOR_CMP("samsung", SAMSUNG)
+		OF_LPDDR2_VENDOR_CMP("qimonda", QIMONDA)
+		OF_LPDDR2_VENDOR_CMP("elpida", ELPIDA)
+		OF_LPDDR2_VENDOR_CMP("etron", ETRON)
+		OF_LPDDR2_VENDOR_CMP("nanya", NANYA)
+		OF_LPDDR2_VENDOR_CMP("hynix", HYNIX)
+		OF_LPDDR2_VENDOR_CMP("mosel", MOSEL)
+		OF_LPDDR2_VENDOR_CMP("winbond", WINBOND)
+		OF_LPDDR2_VENDOR_CMP("esmt", ESMT)
+		OF_LPDDR2_VENDOR_CMP("spansion", SPANSION)
+		OF_LPDDR2_VENDOR_CMP("sst", SST)
+		OF_LPDDR2_VENDOR_CMP("zmos", ZMOS)
+		OF_LPDDR2_VENDOR_CMP("intel", INTEL)
+		OF_LPDDR2_VENDOR_CMP("numonyx", NUMONYX)
+		OF_LPDDR2_VENDOR_CMP("micron", MICRON)
+
+#undef OF_LPDDR2_VENDOR_CMP
+	}
+
+	if (!info.manufacturer_id)
+		info.manufacturer_id = -ENOENT;
+
+	ret_info = devm_kzalloc(dev, sizeof(*ret_info), GFP_KERNEL);
+	if (ret_info)
+		*ret_info = info;
+
+	return ret_info;
+}
+EXPORT_SYMBOL(of_lpddr2_get_info);
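Aside: `of_lpddr2_get_info()` above normalizes DT properties into JEDEC MR8-style fields: io-width in bits becomes `32/width - 1`, and density in Mbit becomes `ffs(density) - 7` (64 Mbit maps to 0, 128 to 1, and so on). A small userspace sketch of that encode/decode round trip, assuming the x32/x16 cases this merge exercises (the decode shifts mirror the Tegra20 EMC print further below):

```c
#include <assert.h>

/* Userspace stand-in for the kernel's ffs(): 1-based index of the
 * lowest set bit, 0 when the argument is 0. */
static int my_ffs(unsigned int x)
{
	int n = 1;

	if (!x)
		return 0;
	while (!(x & 1)) {
		x >>= 1;
		n++;
	}
	return n;
}

/* Encode as of_lpddr2_get_info() does from DT "io-width" / "density". */
static unsigned int encode_io_width(unsigned int bits)  { return 32 / bits - 1; }
static unsigned int encode_density(unsigned int mbits)  { return my_ffs(mbits) - 7; }

/* Decode the same fields back with shifts, as the EMC info print does. */
static unsigned int decode_io_width(unsigned int field) { return 32 >> field; }
static unsigned int decode_density(unsigned int field)  { return 64 << field; }
```

So a 1024 Mbit, x16 part encodes as density field 4 and io-width field 1, and the shifts recover the original values.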
+9
drivers/memory/of_memory.h
···
 const struct lpddr3_timings *
 of_lpddr3_get_ddr_timings(struct device_node *np_ddr,
 		struct device *dev, u32 device_type, u32 *nr_frequencies);
+
+const struct lpddr2_info *of_lpddr2_get_info(struct device_node *np,
+					     struct device *dev);
 #else
 static inline const struct lpddr2_min_tck
 	*of_get_min_tck(struct device_node *np, struct device *dev)
···
 static inline const struct lpddr3_timings
 	*of_lpddr3_get_ddr_timings(struct device_node *np_ddr,
 		struct device *dev, u32 device_type, u32 *nr_frequencies)
+{
+	return NULL;
+}
+
+static inline const struct lpddr2_info
+	*of_lpddr2_get_info(struct device_node *np, struct device *dev)
 {
 	return NULL;
 }
+123 -36
drivers/memory/renesas-rpc-if.c
···
 	.n_yes_ranges = ARRAY_SIZE(rpcif_volatile_ranges),
 };
 
+
+/*
+ * Custom accessor functions to ensure SMRDR0 and SMWDR0 are always accessed
+ * with proper width. Requires SMENR_SPIDE to be correctly set before!
+ */
+static int rpcif_reg_read(void *context, unsigned int reg, unsigned int *val)
+{
+	struct rpcif *rpc = context;
+
+	if (reg == RPCIF_SMRDR0 || reg == RPCIF_SMWDR0) {
+		u32 spide = readl(rpc->base + RPCIF_SMENR) & RPCIF_SMENR_SPIDE(0xF);
+
+		if (spide == 0x8) {
+			*val = readb(rpc->base + reg);
+			return 0;
+		} else if (spide == 0xC) {
+			*val = readw(rpc->base + reg);
+			return 0;
+		} else if (spide != 0xF) {
+			return -EILSEQ;
+		}
+	}
+
+	*val = readl(rpc->base + reg);
+	return 0;
+}
+
+static int rpcif_reg_write(void *context, unsigned int reg, unsigned int val)
+{
+	struct rpcif *rpc = context;
+
+	if (reg == RPCIF_SMRDR0 || reg == RPCIF_SMWDR0) {
+		u32 spide = readl(rpc->base + RPCIF_SMENR) & RPCIF_SMENR_SPIDE(0xF);
+
+		if (spide == 0x8) {
+			writeb(val, rpc->base + reg);
+			return 0;
+		} else if (spide == 0xC) {
+			writew(val, rpc->base + reg);
+			return 0;
+		} else if (spide != 0xF) {
+			return -EILSEQ;
+		}
+	}
+
+	writel(val, rpc->base + reg);
+	return 0;
+}
+
 static const struct regmap_config rpcif_regmap_config = {
 	.reg_bits = 32,
 	.val_bits = 32,
 	.reg_stride = 4,
+	.reg_read = rpcif_reg_read,
+	.reg_write = rpcif_reg_write,
 	.fast_io = true,
 	.max_register = RPCIF_PHYINT,
 	.volatile_table = &rpcif_volatile_table,
···
 {
 	struct platform_device *pdev = to_platform_device(dev);
 	struct resource *res;
-	void __iomem *base;
 
 	rpc->dev = dev;
 
 	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "regs");
-	base = devm_ioremap_resource(&pdev->dev, res);
-	if (IS_ERR(base))
-		return PTR_ERR(base);
+	rpc->base = devm_ioremap_resource(&pdev->dev, res);
+	if (IS_ERR(rpc->base))
+		return PTR_ERR(rpc->base);
 
-	rpc->regmap = devm_regmap_init_mmio(&pdev->dev, base,
-					    &rpcif_regmap_config);
+	rpc->regmap = devm_regmap_init(&pdev->dev, NULL, rpc, &rpcif_regmap_config);
 	if (IS_ERR(rpc->regmap)) {
 		dev_err(&pdev->dev,
 			"failed to init regmap for rpcif, error %ld\n",
···
 		nbytes = op->data.nbytes;
 		rpc->xferlen = nbytes;
 
-		rpc->enable |= RPCIF_SMENR_SPIDE(rpcif_bits_set(rpc, nbytes)) |
-			RPCIF_SMENR_SPIDB(rpcif_bit_size(op->data.buswidth));
+		rpc->enable |= RPCIF_SMENR_SPIDB(rpcif_bit_size(op->data.buswidth));
 	}
 }
 EXPORT_SYMBOL(rpcif_prepare);
 
 int rpcif_manual_xfer(struct rpcif *rpc)
 {
-	u32 smenr, smcr, pos = 0, max = 4;
+	u32 smenr, smcr, pos = 0, max = rpc->bus_size == 2 ? 8 : 4;
 	int ret = 0;
-
-	if (rpc->bus_size == 2)
-		max = 8;
 
 	pm_runtime_get_sync(rpc->dev);
 
···
 	regmap_write(rpc->regmap, RPCIF_SMOPR, rpc->option);
 	regmap_write(rpc->regmap, RPCIF_SMDMCR, rpc->dummy);
 	regmap_write(rpc->regmap, RPCIF_SMDRENR, rpc->ddr);
+	regmap_write(rpc->regmap, RPCIF_SMADR, rpc->smadr);
 	smenr = rpc->enable;
 
 	switch (rpc->dir) {
 	case RPCIF_DATA_OUT:
 		while (pos < rpc->xferlen) {
-			u32 nbytes = rpc->xferlen - pos;
-			u32 data[2];
+			u32 bytes_left = rpc->xferlen - pos;
+			u32 nbytes, data[2];
 
 			smcr = rpc->smcr | RPCIF_SMCR_SPIE;
-			if (nbytes > max) {
-				nbytes = max;
+
+			/* nbytes may only be 1, 2, 4, or 8 */
+			nbytes = bytes_left >= max ? max : (1 << ilog2(bytes_left));
+			if (bytes_left > nbytes)
 				smcr |= RPCIF_SMCR_SSLKP;
-			}
+
+			smenr |= RPCIF_SMENR_SPIDE(rpcif_bits_set(rpc, nbytes));
+			regmap_write(rpc->regmap, RPCIF_SMENR, smenr);
 
 			memcpy(data, rpc->buffer + pos, nbytes);
-			if (nbytes > 4) {
+			if (nbytes == 8) {
 				regmap_write(rpc->regmap, RPCIF_SMWDR1,
 					     data[0]);
 				regmap_write(rpc->regmap, RPCIF_SMWDR0,
 					     data[1]);
-			} else if (nbytes > 2) {
+			} else {
 				regmap_write(rpc->regmap, RPCIF_SMWDR0,
 					     data[0]);
-			} else {
-				regmap_write(rpc->regmap, RPCIF_SMWDR0,
-					     data[0] << 16);
 			}
 
-			regmap_write(rpc->regmap, RPCIF_SMADR,
-				     rpc->smadr + pos);
-			regmap_write(rpc->regmap, RPCIF_SMENR, smenr);
 			regmap_write(rpc->regmap, RPCIF_SMCR, smcr);
 			ret = wait_msg_xfer_end(rpc);
 			if (ret)
···
 			break;
 		}
 		while (pos < rpc->xferlen) {
-			u32 nbytes = rpc->xferlen - pos;
-			u32 data[2];
+			u32 bytes_left = rpc->xferlen - pos;
+			u32 nbytes, data[2];
 
-			if (nbytes > max)
-				nbytes = max;
+			/* nbytes may only be 1, 2, 4, or 8 */
+			nbytes = bytes_left >= max ? max : (1 << ilog2(bytes_left));
 
 			regmap_write(rpc->regmap, RPCIF_SMADR,
 				     rpc->smadr + pos);
+			smenr &= ~RPCIF_SMENR_SPIDE(0xF);
+			smenr |= RPCIF_SMENR_SPIDE(rpcif_bits_set(rpc, nbytes));
 			regmap_write(rpc->regmap, RPCIF_SMENR, smenr);
 			regmap_write(rpc->regmap, RPCIF_SMCR,
 				     rpc->smcr | RPCIF_SMCR_SPIE);
···
 			if (ret)
 				goto err_out;
 
-			if (nbytes > 4) {
+			if (nbytes == 8) {
 				regmap_read(rpc->regmap, RPCIF_SMRDR1,
 					    &data[0]);
 				regmap_read(rpc->regmap, RPCIF_SMRDR0,
 					    &data[1]);
-			} else if (nbytes > 2) {
+			} else {
 				regmap_read(rpc->regmap, RPCIF_SMRDR0,
 					    &data[0]);
-			} else {
-				regmap_read(rpc->regmap, RPCIF_SMRDR0,
-					    &data[0]);
-				data[0] >>= 16;
 			}
 			memcpy(rpc->buffer + pos, data, nbytes);
 
···
 }
 EXPORT_SYMBOL(rpcif_manual_xfer);
 
+static void memcpy_fromio_readw(void *to,
+				const void __iomem *from,
+				size_t count)
+{
+	const int maxw = (IS_ENABLED(CONFIG_64BIT)) ? 8 : 4;
+	u8 buf[2];
+
+	if (count && ((unsigned long)from & 1)) {
+		*(u16 *)buf = __raw_readw((void __iomem *)((unsigned long)from & ~1));
+		*(u8 *)to = buf[1];
+		from++;
+		to++;
+		count--;
+	}
+	while (count >= 2 && !IS_ALIGNED((unsigned long)from, maxw)) {
+		*(u16 *)to = __raw_readw(from);
+		from += 2;
+		to += 2;
+		count -= 2;
+	}
+	while (count >= maxw) {
+#ifdef CONFIG_64BIT
+		*(u64 *)to = __raw_readq(from);
+#else
+		*(u32 *)to = __raw_readl(from);
+#endif
+		from += maxw;
+		to += maxw;
+		count -= maxw;
+	}
+	while (count >= 2) {
+		*(u16 *)to = __raw_readw(from);
+		from += 2;
+		to += 2;
+		count -= 2;
+	}
+	if (count) {
+		*(u16 *)buf = __raw_readw(from);
+		*(u8 *)to = buf[0];
+	}
+}
+
 ssize_t rpcif_dirmap_read(struct rpcif *rpc, u64 offs, size_t len, void *buf)
 {
 	loff_t from = offs & (RPCIF_DIRMAP_SIZE - 1);
···
 	regmap_write(rpc->regmap, RPCIF_DRDMCR, rpc->dummy);
 	regmap_write(rpc->regmap, RPCIF_DRDRENR, rpc->ddr);
 
-	memcpy_fromio(buf, rpc->dirmap + from, len);
+	if (rpc->bus_size == 2)
+		memcpy_fromio_readw(buf, rpc->dirmap + from, len);
+	else
+		memcpy_fromio(buf, rpc->dirmap + from, len);
 
 	pm_runtime_put(rpc->dev);
 
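Aside: the reworked `rpcif_manual_xfer()` above splits a transfer into chunks whose sizes are powers of two no larger than `max` (4 for a 1-byte bus, 8 for a 2-byte bus), since SPIDE can only express those widths. A userspace sketch of just that chunking rule, with `ilog2` replaced by a plain loop:

```c
#include <assert.h>

/* Userspace stand-in for the kernel's ilog2(): floor(log2(n)), n > 0. */
static unsigned int ilog2_u32(unsigned int n)
{
	unsigned int r = 0;

	while (n >>= 1)
		r++;
	return r;
}

/* Same rule as the patch: take `max` bytes while enough remain, otherwise
 * the largest power of two that fits in the remainder (1, 2, or 4). */
static unsigned int next_chunk(unsigned int bytes_left, unsigned int max)
{
	return bytes_left >= max ? max : (1u << ilog2_u32(bytes_left));
}
```

A 7-byte transfer on a 1-byte bus therefore goes out as 4 + 2 + 1, each chunk getting its own SPIDE value, which is why the patch moves the SPIDE update into the loop instead of setting it once in `rpcif_prepare()`.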
+7 -6
drivers/memory/samsung/Kconfig
···
 	depends on DEVFREQ_GOV_SIMPLE_ONDEMAND
 	depends on (PM_DEVFREQ && PM_DEVFREQ_EVENT)
 	help
-	  This adds driver for Exynos5422 DMC (Dynamic Memory Controller).
-	  The driver provides support for Dynamic Voltage and Frequency Scaling in
-	  DMC and DRAM. It also supports changing timings of DRAM running with
-	  different frequency. The timings are calculated based on DT memory
-	  information.
+	  This adds driver for Samsung Exynos5422 SoC DMC (Dynamic Memory
+	  Controller). The driver provides support for Dynamic Voltage and
+	  Frequency Scaling in DMC and DRAM. It also supports changing timings
+	  of DRAM running with different frequency. The timings are calculated
+	  based on DT memory information.
+	  If unsure, say Y on devices with Samsung Exynos SoCs.
 
 config EXYNOS_SROM
 	bool "Exynos SROM controller driver" if COMPILE_TEST
···
 	  during suspend. If however appropriate device tree configuration
 	  is provided, the driver enables support for external memory
 	  or external devices.
-	  If unsure, say Y on devices with Samsung Exynos SocS.
+	  If unsure, say Y on devices with Samsung Exynos SoCs.
 
 endif
+1
drivers/memory/tegra/Kconfig
···
 	depends on ARCH_TEGRA_2x_SOC || COMPILE_TEST
 	select DEVFREQ_GOV_SIMPLE_ONDEMAND
 	select PM_DEVFREQ
+	select DDR
 	help
 	  This driver is for the External Memory Controller (EMC) found on
 	  Tegra20 chips. The EMC controls the external DRAM on the board.
+12 -13
drivers/memory/tegra/mc.c
···
 		return ERR_PTR(-EPROBE_DEFER);
 	}
 
-	err = devm_add_action(dev, tegra_mc_devm_action_put_device, mc);
-	if (err) {
-		put_device(mc->dev);
+	err = devm_add_action_or_reset(dev, tegra_mc_devm_action_put_device, mc);
+	if (err)
 		return ERR_PTR(err);
-	}
 
 	return mc;
 }
···
 		goto remove_nodes;
 	}
 
-	/*
-	 * MC driver is registered too early, so early that generic driver
-	 * syncing doesn't work for the MC. But it doesn't really matter
-	 * since syncing works for the EMC drivers, hence we can sync the
-	 * MC driver by ourselves and then EMC will complete syncing of
-	 * the whole ICC state.
-	 */
-	icc_sync_state(mc->dev);
-
 	return 0;
 
 remove_nodes:
···
 	return 0;
 }
 
+static void tegra_mc_sync_state(struct device *dev)
+{
+	struct tegra_mc *mc = dev_get_drvdata(dev);
+
+	/* check whether ICC provider is registered */
+	if (mc->provider.dev == dev)
+		icc_sync_state(dev);
+}
+
 static const struct dev_pm_ops tegra_mc_pm_ops = {
 	SET_SYSTEM_SLEEP_PM_OPS(tegra_mc_suspend, tegra_mc_resume)
 };
···
 		.of_match_table = tegra_mc_of_match,
 		.pm = &tegra_mc_pm_ops,
 		.suppress_bind_attrs = true,
+		.sync_state = tegra_mc_sync_state,
 	},
 	.prevent_deferred_probe = true,
 	.probe = tegra_mc_probe,
+5
drivers/memory/tegra/tegra186-emc.c
···
 		dev_err(&pdev->dev, "failed to EMC DVFS pairs: %d\n", err);
 		goto put_bpmp;
 	}
+	if (msg.rx.ret < 0) {
+		err = -EINVAL;
+		dev_err(&pdev->dev, "EMC DVFS MRQ failed: %d (BPMP error code)\n", msg.rx.ret);
+		goto put_bpmp;
+	}
 
 	emc->debugfs.min_rate = ULONG_MAX;
 	emc->debugfs.max_rate = 0;
+186 -14
drivers/memory/tegra/tegra20-emc.c
···
  * Author: Dmitry Osipenko <digetx@gmail.com>
  */
 
+#include <linux/bitfield.h>
 #include <linux/clk.h>
 #include <linux/clk/tegra.h>
 #include <linux/debugfs.h>
···
 #include <soc/tegra/common.h>
 #include <soc/tegra/fuse.h>
 
+#include "../jedec_ddr.h"
+#include "../of_memory.h"
+
 #include "mc.h"
 
 #define EMC_INTSTATUS				0x000
 #define EMC_INTMASK				0x004
 #define EMC_DBG					0x008
+#define EMC_ADR_CFG_0				0x010
 #define EMC_TIMING_CONTROL			0x028
 #define EMC_RC					0x02c
 #define EMC_RFC					0x030
···
 #define EMC_QUSE_EXTRA				0x0ac
 #define EMC_ODT_WRITE				0x0b0
 #define EMC_ODT_READ				0x0b4
+#define EMC_MRR					0x0ec
 #define EMC_FBIO_CFG5				0x104
 #define EMC_FBIO_CFG6				0x114
 #define EMC_STAT_CONTROL			0x160
···
 
 #define EMC_REFRESH_OVERFLOW_INT		BIT(3)
 #define EMC_CLKCHANGE_COMPLETE_INT		BIT(4)
+#define EMC_MRR_DIVLD_INT			BIT(5)
 
 #define EMC_DBG_READ_MUX_ASSEMBLY		BIT(0)
 #define EMC_DBG_WRITE_MUX_ACTIVE		BIT(1)
···
 #define EMC_DBG_CFG_PRIORITY			BIT(24)
 
 #define EMC_FBIO_CFG5_DRAM_WIDTH_X16		BIT(4)
+#define EMC_FBIO_CFG5_DRAM_TYPE			GENMASK(1, 0)
+
+#define EMC_MRR_DEV_SELECTN			GENMASK(31, 30)
+#define EMC_MRR_MRR_MA				GENMASK(23, 16)
+#define EMC_MRR_MRR_DATA			GENMASK(15, 0)
+
+#define EMC_ADR_CFG_0_EMEM_NUMDEV		GENMASK(25, 24)
 
 #define EMC_PWR_GATHER_CLEAR			(1 << 8)
 #define EMC_PWR_GATHER_DISABLE			(2 << 8)
 #define EMC_PWR_GATHER_ENABLE			(3 << 8)
+
+enum emc_dram_type {
+	DRAM_TYPE_RESERVED,
+	DRAM_TYPE_DDR1,
+	DRAM_TYPE_LPDDR2,
+	DRAM_TYPE_DDR2,
+};
 
 static const u16 emc_timing_registers[] = {
 	EMC_RC,
···
 	struct mutex rate_lock;
 
 	struct devfreq_simple_ondemand_data ondemand_data;
+
+	/* memory chip identity information */
+	union lpddr2_basic_config4 basic_conf4;
+	unsigned int manufacturer_id;
+	unsigned int revision_id1;
+	unsigned int revision_id2;
+
+	bool mrr_error;
 };
 
 static irqreturn_t tegra_emc_isr(int irq, void *data)
···
 	if (!emc->timings)
 		return -ENOMEM;
 
-	emc->num_timings = child_count;
 	timing = emc->timings;
 
 	for_each_child_of_node(node, child) {
+		if (of_node_name_eq(child, "lpddr2"))
+			continue;
+
 		err = load_one_timing_from_dt(emc, timing++, child);
 		if (err) {
 			of_node_put(child);
 			return err;
 		}
+
+		emc->num_timings++;
 	}
 
 	sort(emc->timings, emc->num_timings, sizeof(*timing), cmp_timings,
···
 }
 
 static struct device_node *
-tegra_emc_find_node_by_ram_code(struct device *dev)
+tegra_emc_find_node_by_ram_code(struct tegra_emc *emc)
 {
+	struct device *dev = emc->dev;
 	struct device_node *np;
 	u32 value, ram_code;
 	int err;
+
+	if (emc->mrr_error) {
+		dev_warn(dev, "memory timings skipped due to MRR error\n");
+		return NULL;
+	}
 
 	if (of_get_child_count(dev->of_node) == 0) {
 		dev_info_once(dev, "device-tree doesn't have memory timings\n");
···
 	     np = of_find_node_by_name(np, "emc-tables")) {
 		err = of_property_read_u32(np, "nvidia,ram-code", &value);
 		if (err || value != ram_code) {
-			of_node_put(np);
-			continue;
+			struct device_node *lpddr2_np;
+			bool cfg_mismatches = false;
+
+			lpddr2_np = of_find_node_by_name(np, "lpddr2");
+			if (lpddr2_np) {
+				const struct lpddr2_info *info;
+
+				info = of_lpddr2_get_info(lpddr2_np, dev);
+				if (info) {
+					if (info->manufacturer_id >= 0 &&
+					    info->manufacturer_id != emc->manufacturer_id)
+						cfg_mismatches = true;
+
+					if (info->revision_id1 >= 0 &&
+					    info->revision_id1 != emc->revision_id1)
+						cfg_mismatches = true;
+
+					if (info->revision_id2 >= 0 &&
+					    info->revision_id2 != emc->revision_id2)
+						cfg_mismatches = true;
+
+					if (info->density != emc->basic_conf4.density)
+						cfg_mismatches = true;
+
+					if (info->io_width != emc->basic_conf4.io_width)
+						cfg_mismatches = true;
+
+					if (info->arch_type != emc->basic_conf4.arch_type)
+						cfg_mismatches = true;
+				} else {
+					dev_err(dev, "failed to parse %pOF\n", lpddr2_np);
+					cfg_mismatches = true;
+				}
+
+				of_node_put(lpddr2_np);
+			} else {
+				cfg_mismatches = true;
+			}
+
+			if (cfg_mismatches) {
+				of_node_put(np);
+				continue;
+			}
 		}
 
 		return np;
···
 	return NULL;
 }
 
+static int emc_read_lpddr_mode_register(struct tegra_emc *emc,
+					unsigned int emem_dev,
+					unsigned int register_addr,
+					unsigned int *register_data)
+{
+	u32 memory_dev = emem_dev + 1;
+	u32 val, mr_mask = 0xff;
+	int err;
+
+	/* clear data-valid interrupt status */
+	writel_relaxed(EMC_MRR_DIVLD_INT, emc->regs + EMC_INTSTATUS);
+
+	/* issue mode register read request */
+	val = FIELD_PREP(EMC_MRR_DEV_SELECTN, memory_dev);
+	val |= FIELD_PREP(EMC_MRR_MRR_MA, register_addr);
+
+	writel_relaxed(val, emc->regs + EMC_MRR);
+
+	/* wait for the LPDDR2 data-valid interrupt */
+	err = readl_relaxed_poll_timeout_atomic(emc->regs + EMC_INTSTATUS, val,
+						val & EMC_MRR_DIVLD_INT,
+						1, 100);
+	if (err) {
+		dev_err(emc->dev, "mode register %u read failed: %d\n",
+			register_addr, err);
+		emc->mrr_error = true;
+		return err;
+	}
+
+	/* read out mode register data */
+	val = readl_relaxed(emc->regs + EMC_MRR);
+	*register_data = FIELD_GET(EMC_MRR_MRR_DATA, val) & mr_mask;
+
+	return 0;
+}
+
+static void emc_read_lpddr_sdram_info(struct tegra_emc *emc,
+				      unsigned int emem_dev,
+				      bool print_out)
+{
+	/* these registers are standard for all LPDDR JEDEC memory chips */
+	emc_read_lpddr_mode_register(emc, emem_dev, 5, &emc->manufacturer_id);
+	emc_read_lpddr_mode_register(emc, emem_dev, 6, &emc->revision_id1);
+	emc_read_lpddr_mode_register(emc, emem_dev, 7, &emc->revision_id2);
+	emc_read_lpddr_mode_register(emc, emem_dev, 8, &emc->basic_conf4.value);
+
+	if (!print_out)
+		return;
+
+	dev_info(emc->dev, "SDRAM[dev%u]: manufacturer: 0x%x (%s) rev1: 0x%x rev2: 0x%x prefetch: S%u density: %uMbit iowidth: %ubit\n",
+		 emem_dev, emc->manufacturer_id,
+		 lpddr2_jedec_manufacturer(emc->manufacturer_id),
+		 emc->revision_id1, emc->revision_id2,
+		 4 >> emc->basic_conf4.arch_type,
+		 64 << emc->basic_conf4.density,
+		 32 >> emc->basic_conf4.io_width);
+}
+
 static int emc_setup_hw(struct tegra_emc *emc)
 {
+	u32 emc_cfg, emc_dbg, emc_fbio, emc_adr_cfg;
 	u32 intmask = EMC_REFRESH_OVERFLOW_INT;
-	u32 emc_cfg, emc_dbg, emc_fbio;
+	static bool print_sdram_info_once;
+	enum emc_dram_type dram_type;
+	const char *dram_type_str;
+	unsigned int emem_numdev;
 
 	emc_cfg = readl_relaxed(emc->regs + EMC_CFG_2);
 
···
 	else
 		emc->dram_bus_width = 32;
 
-	dev_info_once(emc->dev, "%ubit DRAM bus\n", emc->dram_bus_width);
+	dram_type = FIELD_GET(EMC_FBIO_CFG5_DRAM_TYPE, emc_fbio);
+
+	switch (dram_type) {
+	case DRAM_TYPE_RESERVED:
+		dram_type_str = "INVALID";
+		break;
+	case DRAM_TYPE_DDR1:
+		dram_type_str = "DDR1";
+		break;
+	case DRAM_TYPE_LPDDR2:
+		dram_type_str = "LPDDR2";
+		break;
+	case DRAM_TYPE_DDR2:
+		dram_type_str = "DDR2";
+		break;
+	}
+
+	emc_adr_cfg = readl_relaxed(emc->regs + EMC_ADR_CFG_0);
+	emem_numdev = FIELD_GET(EMC_ADR_CFG_0_EMEM_NUMDEV, emc_adr_cfg) + 1;
+
+	dev_info_once(emc->dev, "%ubit DRAM bus, %u %s %s attached\n",
+		      emc->dram_bus_width, emem_numdev, dram_type_str,
+		      emem_numdev == 2 ? "devices" : "device");
+
+	if (dram_type == DRAM_TYPE_LPDDR2) {
+		while (emem_numdev--)
+			emc_read_lpddr_sdram_info(emc, emem_numdev,
+						  !print_sdram_info_once);
+		print_sdram_info_once = true;
+	}
 
 	return 0;
 }
···
 	emc->clk_nb.notifier_call = tegra_emc_clk_change_notify;
 	emc->dev = &pdev->dev;
 
-	np = tegra_emc_find_node_by_ram_code(&pdev->dev);
-	if (np) {
-		err = tegra_emc_load_timings_from_dt(emc, np);
-		of_node_put(np);
-		if (err)
-			return err;
-	}
-
 	emc->regs = devm_platform_ioremap_resource(pdev, 0);
 	if (IS_ERR(emc->regs))
 		return PTR_ERR(emc->regs);
···
 	err = emc_setup_hw(emc);
 	if (err)
 		return err;
+
+	np = tegra_emc_find_node_by_ram_code(emc);
+	if (np) {
+		err = tegra_emc_load_timings_from_dt(emc, np);
+		of_node_put(np);
+		if (err)
+			return err;
+	}
 
 	err = devm_request_irq(&pdev->dev, irq, tegra_emc_isr, 0,
 			       dev_name(&pdev->dev), emc);
···
 
 MODULE_AUTHOR("Dmitry Osipenko <digetx@gmail.com>");
 MODULE_DESCRIPTION("NVIDIA Tegra20 EMC driver");
+MODULE_SOFTDEP("pre: governor_simpleondemand");
 MODULE_LICENSE("GPL v2");
+1 -1
drivers/memory/tegra/tegra210-emc-cc-r21021.c
···
 static u32 tegra210_emc_r21021_periodic_compensation(struct tegra210_emc *emc)
 {
 	u32 emc_cfg, emc_cfg_o, emc_cfg_update, del, value;
-	u32 list[] = {
+	static const u32 list[] = {
 		EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_0,
 		EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_1,
 		EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_2,
+3 -3
drivers/memory/tegra/tegra210-emc-core.c
···
 	return 0;
 }
 
-DEFINE_SIMPLE_ATTRIBUTE(tegra210_emc_debug_min_rate_fops,
+DEFINE_DEBUGFS_ATTRIBUTE(tegra210_emc_debug_min_rate_fops,
 			tegra210_emc_debug_min_rate_get,
 			tegra210_emc_debug_min_rate_set, "%llu\n");
···
 	return 0;
 }
 
-DEFINE_SIMPLE_ATTRIBUTE(tegra210_emc_debug_max_rate_fops,
+DEFINE_DEBUGFS_ATTRIBUTE(tegra210_emc_debug_max_rate_fops,
 			tegra210_emc_debug_max_rate_get,
 			tegra210_emc_debug_max_rate_set, "%llu\n");
···
 	return 0;
 }
 
-DEFINE_SIMPLE_ATTRIBUTE(tegra210_emc_debug_temperature_fops,
+DEFINE_DEBUGFS_ATTRIBUTE(tegra210_emc_debug_temperature_fops,
 			tegra210_emc_debug_temperature_get,
 			tegra210_emc_debug_temperature_set, "%llu\n");
 
+2 -2
drivers/memory/tegra/tegra30-emc.c
···
 	return 0;
 }
 
-DEFINE_SIMPLE_ATTRIBUTE(tegra_emc_debug_min_rate_fops,
+DEFINE_DEBUGFS_ATTRIBUTE(tegra_emc_debug_min_rate_fops,
 			tegra_emc_debug_min_rate_get,
 			tegra_emc_debug_min_rate_set, "%llu\n");
···
 	return 0;
 }
 
-DEFINE_SIMPLE_ATTRIBUTE(tegra_emc_debug_max_rate_fops,
+DEFINE_DEBUGFS_ATTRIBUTE(tegra_emc_debug_max_rate_fops,
 			tegra_emc_debug_max_rate_get,
 			tegra_emc_debug_max_rate_set, "%llu\n");
 
+1
drivers/of/platform.c
···
 static const struct of_device_id reserved_mem_matches[] = {
 	{ .compatible = "qcom,rmtfs-mem" },
 	{ .compatible = "qcom,cmd-db" },
+	{ .compatible = "qcom,smem" },
 	{ .compatible = "ramoops" },
 	{ .compatible = "nvmem-rmem" },
 	{}
+2 -2
drivers/reset/Kconfig
···
 	  a SUN_TOP_CTRL_SW_INIT style controller.
 
 config RESET_BRCMSTB_RESCAL
-	bool "Broadcom STB RESCAL reset controller"
+	tristate "Broadcom STB RESCAL reset controller"
 	depends on HAS_IOMEM
 	depends on ARCH_BRCMSTB || COMPILE_TEST
 	default ARCH_BRCMSTB
···
 
 config RESET_MCHP_SPARX5
 	bool "Microchip Sparx5 reset driver"
-	depends on ARCH_SPARX5 || COMPILE_TEST
+	depends on ARCH_SPARX5 || SOC_LAN966 || COMPILE_TEST
 	default y if SPARX5_SWITCH
 	select MFD_SYSCON
 	help
+32 -8
drivers/reset/reset-microchip-sparx5.c
···
 #include <linux/regmap.h>
 #include <linux/reset-controller.h>
 
-#define PROTECT_REG	0x84
-#define PROTECT_BIT	BIT(10)
-#define SOFT_RESET_REG	0x00
-#define SOFT_RESET_BIT	BIT(1)
+struct reset_props {
+	u32 protect_reg;
+	u32 protect_bit;
+	u32 reset_reg;
+	u32 reset_bit;
+};
 
 struct mchp_reset_context {
 	struct regmap *cpu_ctrl;
 	struct regmap *gcb_ctrl;
 	struct reset_controller_dev rcdev;
+	const struct reset_props *props;
 };
 
 static struct regmap_config sparx5_reset_regmap_config = {
···
 	u32 val;
 
 	/* Make sure the core is PROTECTED from reset */
-	regmap_update_bits(ctx->cpu_ctrl, PROTECT_REG, PROTECT_BIT, PROTECT_BIT);
+	regmap_update_bits(ctx->cpu_ctrl, ctx->props->protect_reg,
+			   ctx->props->protect_bit, ctx->props->protect_bit);
 
 	/* Start soft reset */
-	regmap_write(ctx->gcb_ctrl, SOFT_RESET_REG, SOFT_RESET_BIT);
+	regmap_write(ctx->gcb_ctrl, ctx->props->reset_reg,
+		     ctx->props->reset_bit);
 
 	/* Wait for soft reset done */
-	return regmap_read_poll_timeout(ctx->gcb_ctrl, SOFT_RESET_REG, val,
-					(val & SOFT_RESET_BIT) == 0,
+	return regmap_read_poll_timeout(ctx->gcb_ctrl, ctx->props->reset_reg, val,
+					(val & ctx->props->reset_bit) == 0,
 					1, 100);
 }
···
 	ctx->rcdev.nr_resets = 1;
 	ctx->rcdev.ops = &sparx5_reset_ops;
 	ctx->rcdev.of_node = dn;
+	ctx->props = device_get_match_data(&pdev->dev);
 
 	return devm_reset_controller_register(&pdev->dev, &ctx->rcdev);
 }
 
+static const struct reset_props reset_props_sparx5 = {
+	.protect_reg = 0x84,
+	.protect_bit = BIT(10),
+	.reset_reg = 0x0,
+	.reset_bit = BIT(1),
+};
+
+static const struct reset_props reset_props_lan966x = {
+	.protect_reg = 0x88,
+	.protect_bit = BIT(5),
+	.reset_reg = 0x0,
+	.reset_bit = BIT(1),
+};
+
 static const struct of_device_id mchp_sparx5_reset_of_match[] = {
 	{
 		.compatible = "microchip,sparx5-switch-reset",
+		.data = &reset_props_sparx5,
+	}, {
+		.compatible = "microchip,lan966x-switch-reset",
+		.data = &reset_props_lan966x,
 	},
 	{ }
 };
+4
drivers/reset/reset-uniphier-glue.c
···
 		.data = &uniphier_pxs2_data,
 	},
 	{
+		.compatible = "socionext,uniphier-nx1-usb3-reset",
+		.data = &uniphier_pxs2_data,
+	},
+	{
 		.compatible = "socionext,uniphier-pro4-ahci-reset",
 		.data = &uniphier_pro4_data,
 	},
+27
drivers/reset/reset-uniphier.c
···
 	UNIPHIER_RESETX(28, 0x200c, 7),		/* SATA0 */
 	UNIPHIER_RESETX(29, 0x200c, 8),		/* SATA1 */
 	UNIPHIER_RESETX(30, 0x200c, 21),	/* SATA-PHY */
+	UNIPHIER_RESETX(40, 0x2008, 0),		/* AIO */
+	UNIPHIER_RESETX(42, 0x2010, 2),		/* EXIV */
+	UNIPHIER_RESET_END,
+};
+
+static const struct uniphier_reset_data uniphier_nx1_sys_reset_data[] = {
+	UNIPHIER_RESETX(4, 0x2008, 8),		/* eMMC */
+	UNIPHIER_RESETX(6, 0x200c, 0),		/* Ether */
+	UNIPHIER_RESETX(12, 0x200c, 16),	/* USB30 link */
+	UNIPHIER_RESETX(16, 0x200c, 24),	/* USB30-PHY0 */
+	UNIPHIER_RESETX(17, 0x200c, 25),	/* USB30-PHY1 */
+	UNIPHIER_RESETX(18, 0x200c, 26),	/* USB30-PHY2 */
+	UNIPHIER_RESETX(24, 0x200c, 8),		/* PCIe */
+	UNIPHIER_RESETX(52, 0x2010, 0),		/* VOC */
+	UNIPHIER_RESETX(58, 0x2010, 8),		/* HDMI-Tx */
 	UNIPHIER_RESET_END,
 };
···
 		.compatible = "socionext,uniphier-pxs3-reset",
 		.data = uniphier_pxs3_sys_reset_data,
 	},
+	{
+		.compatible = "socionext,uniphier-nx1-reset",
+		.data = uniphier_nx1_sys_reset_data,
+	},
 	/* Media I/O reset, SD reset */
 	{
 		.compatible = "socionext,uniphier-ld4-mio-reset",
···
 		.compatible = "socionext,uniphier-pxs3-sd-reset",
 		.data = uniphier_pro5_sd_reset_data,
 	},
+	{
+		.compatible = "socionext,uniphier-nx1-sd-reset",
+		.data = uniphier_pro5_sd_reset_data,
+	},
 	/* Peripheral reset */
 	{
 		.compatible = "socionext,uniphier-ld4-peri-reset",
···
 	},
 	{
 		.compatible = "socionext,uniphier-pxs3-peri-reset",
+		.data = uniphier_pro4_peri_reset_data,
+	},
+	{
+		.compatible = "socionext,uniphier-nx1-peri-reset",
 		.data = uniphier_pro4_peri_reset_data,
 	},
 	/* Analog signal amplifiers reset */
+2 -8
drivers/rtc/Kconfig
···
 	  This driver can also be built as a module, if so, module
 	  will be called rtc-omap.
 
-config HAVE_S3C_RTC
-	bool
-	help
-	  This will include RTC support for Samsung SoCs. If
-	  you want to include RTC support for any machine, kindly
-	  select this in the respective mach-XXXX/Kconfig file.
-
 config RTC_DRV_S3C
 	tristate "Samsung S3C series SoC RTC"
-	depends on ARCH_S3C64XX || HAVE_S3C_RTC || COMPILE_TEST
+	depends on ARCH_EXYNOS || ARCH_S3C64XX || ARCH_S3C24XX || ARCH_S5PV210 || \
+		   COMPILE_TEST
 	help
 	  RTC (Realtime Clock) driver for the clock inbuilt into the
 	  Samsung S3C24XX series of SoCs. This can provide periodic
+1 -3
drivers/soc/amlogic/meson-canvas.c
···
 
 static int meson_canvas_probe(struct platform_device *pdev)
 {
-	struct resource *res;
 	struct meson_canvas *canvas;
 	struct device *dev = &pdev->dev;
···
 	if (!canvas)
 		return -ENOMEM;
 
-	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-	canvas->reg_base = devm_ioremap_resource(dev, res);
+	canvas->reg_base = devm_platform_ioremap_resource(pdev, 0);
 	if (IS_ERR(canvas->reg_base))
 		return PTR_ERR(canvas->reg_base);
 
+1 -3
drivers/soc/amlogic/meson-clk-measure.c
···
 {
 	const struct meson_msr_id *match_data;
 	struct meson_msr *priv;
-	struct resource *res;
 	struct dentry *root, *clks;
 	void __iomem *base;
 	int i;
···
 
 	memcpy(priv->msr_table, match_data, sizeof(priv->msr_table));
 
-	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-	base = devm_ioremap_resource(&pdev->dev, res);
+	base = devm_platform_ioremap_resource(pdev, 0);
 	if (IS_ERR(base))
 		return PTR_ERR(base);
 
+1
drivers/soc/amlogic/meson-gx-socinfo.c
···
 	{ "A113X", 0x25, 0x37, 0xff },
 	{ "A113D", 0x25, 0x22, 0xff },
 	{ "S905D2", 0x28, 0x10, 0xf0 },
+	{ "S905Y2", 0x28, 0x30, 0xf0 },
 	{ "S905X2", 0x28, 0x40, 0xf0 },
 	{ "A311D", 0x29, 0x10, 0xf0 },
 	{ "S922X", 0x29, 0x40, 0xf0 },
+10
drivers/soc/aspeed/Kconfig
···
 	  allows the BMC to listen on and save the data written by
 	  the host to an arbitrary LPC I/O port.
 
+config ASPEED_UART_ROUTING
+	tristate "ASPEED uart routing control"
+	select REGMAP
+	select MFD_SYSCON
+	default ARCH_ASPEED
+	help
+	  Provides a driver to control the UART routing paths, allowing
+	  users to perform runtime configuration of the RX muxes among
+	  the UART controllers and I/O pins.
+
 config ASPEED_P2A_CTRL
 	tristate "ASPEED P2A (VGA MMIO to BMC) bridge control"
 	select REGMAP
+5 -4
drivers/soc/aspeed/Makefile
 # SPDX-License-Identifier: GPL-2.0-only
-obj-$(CONFIG_ASPEED_LPC_CTRL)	+= aspeed-lpc-ctrl.o
-obj-$(CONFIG_ASPEED_LPC_SNOOP)	+= aspeed-lpc-snoop.o
-obj-$(CONFIG_ASPEED_P2A_CTRL)	+= aspeed-p2a-ctrl.o
-obj-$(CONFIG_ASPEED_SOCINFO)	+= aspeed-socinfo.o
+obj-$(CONFIG_ASPEED_LPC_CTRL)		+= aspeed-lpc-ctrl.o
+obj-$(CONFIG_ASPEED_LPC_SNOOP)		+= aspeed-lpc-snoop.o
+obj-$(CONFIG_ASPEED_UART_ROUTING)	+= aspeed-uart-routing.o
+obj-$(CONFIG_ASPEED_P2A_CTRL)		+= aspeed-p2a-ctrl.o
+obj-$(CONFIG_ASPEED_SOCINFO)		+= aspeed-socinfo.o
+603
drivers/soc/aspeed/aspeed-uart-routing.c
···
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * Copyright (c) 2018 Google LLC
+ * Copyright (c) 2021 Aspeed Technology Inc.
+ */
+#include <linux/device.h>
+#include <linux/module.h>
+#include <linux/of_device.h>
+#include <linux/of_platform.h>
+#include <linux/mfd/syscon.h>
+#include <linux/regmap.h>
+#include <linux/platform_device.h>
+
+/* register offsets */
+#define HICR9	0x98
+#define HICRA	0x9c
+
+/* attributes options */
+#define UART_ROUTING_IO1	"io1"
+#define UART_ROUTING_IO2	"io2"
+#define UART_ROUTING_IO3	"io3"
+#define UART_ROUTING_IO4	"io4"
+#define UART_ROUTING_IO5	"io5"
+#define UART_ROUTING_IO6	"io6"
+#define UART_ROUTING_IO10	"io10"
+#define UART_ROUTING_UART1	"uart1"
+#define UART_ROUTING_UART2	"uart2"
+#define UART_ROUTING_UART3	"uart3"
+#define UART_ROUTING_UART4	"uart4"
+#define UART_ROUTING_UART5	"uart5"
+#define UART_ROUTING_UART6	"uart6"
+#define UART_ROUTING_UART10	"uart10"
+#define UART_ROUTING_RES	"reserved"
+
+struct aspeed_uart_routing {
+	struct regmap *map;
+	struct attribute_group const *attr_grp;
+};
+
+struct aspeed_uart_routing_selector {
+	struct device_attribute	dev_attr;
+	uint8_t reg;
+	uint8_t mask;
+	uint8_t shift;
+	const char *const options[];
+};
+
+#define to_routing_selector(_dev_attr)					\
+	container_of(_dev_attr, struct aspeed_uart_routing_selector, dev_attr)
+
+static ssize_t aspeed_uart_routing_show(struct device *dev,
+					struct device_attribute *attr,
+					char *buf);
+
+static ssize_t aspeed_uart_routing_store(struct device *dev,
+					 struct device_attribute *attr,
+					 const char *buf, size_t count);
+
+#define ROUTING_ATTR(_name) {					\
+	.attr = {.name = _name,					\
+		 .mode = VERIFY_OCTAL_PERMISSIONS(0644) },	\
+	.show = aspeed_uart_routing_show,			\
+	.store = aspeed_uart_routing_store,			\
+}
+
+/* routing selector for AST25xx */
+static struct aspeed_uart_routing_selector ast2500_io6_sel = {
+	.dev_attr = ROUTING_ATTR(UART_ROUTING_IO6),
+	.reg = HICR9,
+	.shift = 8,
+	.mask = 0xf,
+	.options = {
+		UART_ROUTING_UART1,
+		UART_ROUTING_UART2,
+		UART_ROUTING_UART3,
+		UART_ROUTING_UART4,
+		UART_ROUTING_UART5,
+		UART_ROUTING_IO1,
+		UART_ROUTING_IO2,
+		UART_ROUTING_IO3,
+		UART_ROUTING_IO4,
+		UART_ROUTING_IO5,
+		NULL,
+	},
+};
+
+static struct aspeed_uart_routing_selector ast2500_uart5_sel = {
+	.dev_attr = ROUTING_ATTR(UART_ROUTING_UART5),
+	.reg = HICRA,
+	.shift = 28,
+	.mask = 0xf,
+	.options = {
+		UART_ROUTING_IO5,
+		UART_ROUTING_IO1,
+		UART_ROUTING_IO2,
+		UART_ROUTING_IO3,
+		UART_ROUTING_IO4,
+		UART_ROUTING_UART1,
+		UART_ROUTING_UART2,
+		UART_ROUTING_UART3,
+		UART_ROUTING_UART4,
+		UART_ROUTING_IO6,
+		NULL,
+	},
+};
+
+static struct aspeed_uart_routing_selector ast2500_uart4_sel = {
+	.dev_attr = ROUTING_ATTR(UART_ROUTING_UART4),
+	.reg = HICRA,
+	.shift = 25,
+	.mask = 0x7,
+	.options = {
+		UART_ROUTING_IO4,
+		UART_ROUTING_IO1,
+		UART_ROUTING_IO2,
+		UART_ROUTING_IO3,
+		UART_ROUTING_UART1,
+		UART_ROUTING_UART2,
+		UART_ROUTING_UART3,
+		UART_ROUTING_IO6,
+		NULL,
+	},
+};
+
+static struct aspeed_uart_routing_selector ast2500_uart3_sel = {
+	.dev_attr = ROUTING_ATTR(UART_ROUTING_UART3),
+	.reg = HICRA,
+	.shift = 22,
+	.mask = 0x7,
+	.options = {
+		UART_ROUTING_IO3,
+		UART_ROUTING_IO4,
+		UART_ROUTING_IO1,
+		UART_ROUTING_IO2,
+		UART_ROUTING_UART4,
+		UART_ROUTING_UART1,
+		UART_ROUTING_UART2,
+		UART_ROUTING_IO6,
+		NULL,
+	},
+};
+
+static struct aspeed_uart_routing_selector ast2500_uart2_sel = {
+	.dev_attr = ROUTING_ATTR(UART_ROUTING_UART2),
+	.reg = HICRA,
+	.shift = 19,
+	.mask = 0x7,
+	.options = {
+		UART_ROUTING_IO2,
+		UART_ROUTING_IO3,
+		UART_ROUTING_IO4,
+		UART_ROUTING_IO1,
+		UART_ROUTING_UART3,
+		UART_ROUTING_UART4,
+		UART_ROUTING_UART1,
+		UART_ROUTING_IO6,
+		NULL,
+	},
+};
+
+static struct aspeed_uart_routing_selector ast2500_uart1_sel = {
+	.dev_attr = ROUTING_ATTR(UART_ROUTING_UART1),
+	.reg = HICRA,
+	.shift = 16,
+	.mask = 0x7,
+	.options = {
+		UART_ROUTING_IO1,
+		UART_ROUTING_IO2,
+		UART_ROUTING_IO3,
+		UART_ROUTING_IO4,
+		UART_ROUTING_UART2,
+		UART_ROUTING_UART3,
+		UART_ROUTING_UART4,
+		UART_ROUTING_IO6,
+		NULL,
+	},
+};
+
+static struct aspeed_uart_routing_selector ast2500_io5_sel = {
+	.dev_attr = ROUTING_ATTR(UART_ROUTING_IO5),
+	.reg = HICRA,
+	.shift = 12,
+	.mask = 0x7,
+	.options = {
+		UART_ROUTING_UART5,
+		UART_ROUTING_UART1,
+		UART_ROUTING_UART2,
+		UART_ROUTING_UART3,
+		UART_ROUTING_UART4,
+		UART_ROUTING_IO1,
+		UART_ROUTING_IO3,
+		UART_ROUTING_IO6,
+		NULL,
+	},
+};
+
+static struct aspeed_uart_routing_selector ast2500_io4_sel = {
+	.dev_attr = ROUTING_ATTR(UART_ROUTING_IO4),
+	.reg = HICRA,
+	.shift = 9,
+	.mask = 0x7,
+	.options = {
+		UART_ROUTING_UART4,
+		UART_ROUTING_UART5,
+		UART_ROUTING_UART1,
+		UART_ROUTING_UART2,
+		UART_ROUTING_UART3,
+		UART_ROUTING_IO1,
+		UART_ROUTING_IO2,
+		UART_ROUTING_IO6,
+		NULL,
+	},
+};
+
+static struct aspeed_uart_routing_selector ast2500_io3_sel = {
+	.dev_attr = ROUTING_ATTR(UART_ROUTING_IO3),
+	.reg = HICRA,
+	.shift = 6,
+	.mask = 0x7,
+	.options = {
+		UART_ROUTING_UART3,
+		UART_ROUTING_UART4,
+		UART_ROUTING_UART5,
+		UART_ROUTING_UART1,
+		UART_ROUTING_UART2,
+		UART_ROUTING_IO1,
+		UART_ROUTING_IO2,
+		UART_ROUTING_IO6,
+		NULL,
+	},
+};
+
+static struct aspeed_uart_routing_selector ast2500_io2_sel = {
+	.dev_attr = ROUTING_ATTR(UART_ROUTING_IO2),
+	.reg = HICRA,
+	.shift = 3,
+	.mask = 0x7,
+	.options = {
+		UART_ROUTING_UART2,
+		UART_ROUTING_UART3,
+		UART_ROUTING_UART4,
+		UART_ROUTING_UART5,
+		UART_ROUTING_UART1,
+		UART_ROUTING_IO3,
+		UART_ROUTING_IO4,
+		UART_ROUTING_IO6,
+		NULL,
+	},
+};
+
+static struct aspeed_uart_routing_selector ast2500_io1_sel = {
+	.dev_attr = ROUTING_ATTR(UART_ROUTING_IO1),
+	.reg = HICRA,
+	.shift = 0,
+	.mask = 0x7,
+	.options = {
+		UART_ROUTING_UART1,
+		UART_ROUTING_UART2,
+		UART_ROUTING_UART3,
+		UART_ROUTING_UART4,
+		UART_ROUTING_UART5,
+		UART_ROUTING_IO3,
+		UART_ROUTING_IO4,
+		UART_ROUTING_IO6,
+		NULL,
+	},
+};
+
+static struct attribute *ast2500_uart_routing_attrs[] = {
+	&ast2500_io6_sel.dev_attr.attr,
+	&ast2500_uart5_sel.dev_attr.attr,
+	&ast2500_uart4_sel.dev_attr.attr,
+	&ast2500_uart3_sel.dev_attr.attr,
+	&ast2500_uart2_sel.dev_attr.attr,
+	&ast2500_uart1_sel.dev_attr.attr,
+	&ast2500_io5_sel.dev_attr.attr,
+	&ast2500_io4_sel.dev_attr.attr,
+	&ast2500_io3_sel.dev_attr.attr,
+	&ast2500_io2_sel.dev_attr.attr,
+	&ast2500_io1_sel.dev_attr.attr,
+	NULL,
+};
+
+static const struct attribute_group ast2500_uart_routing_attr_group = {
+	.attrs = ast2500_uart_routing_attrs,
+};
+
+/* routing selector for AST26xx */
+static struct aspeed_uart_routing_selector ast2600_uart10_sel = {
+	.dev_attr = ROUTING_ATTR(UART_ROUTING_UART10),
+	.reg = HICR9,
+	.shift = 12,
+	.mask = 0xf,
+	.options = {
+		UART_ROUTING_IO10,
+		UART_ROUTING_IO1,
+		UART_ROUTING_IO2,
+		UART_ROUTING_IO3,
+		UART_ROUTING_IO4,
+		UART_ROUTING_RES,
+		UART_ROUTING_UART1,
+		UART_ROUTING_UART2,
+		UART_ROUTING_UART3,
+		UART_ROUTING_UART4,
+		NULL,
+	},
+};
+
+static struct aspeed_uart_routing_selector ast2600_io10_sel = {
+	.dev_attr = ROUTING_ATTR(UART_ROUTING_IO10),
+	.reg = HICR9,
+	.shift = 8,
+	.mask = 0xf,
+	.options = {
+		UART_ROUTING_UART1,
+		UART_ROUTING_UART2,
+		UART_ROUTING_UART3,
+		UART_ROUTING_UART4,
+		UART_ROUTING_RES,
+		UART_ROUTING_IO1,
+		UART_ROUTING_IO2,
+		UART_ROUTING_IO3,
+		UART_ROUTING_IO4,
+		UART_ROUTING_RES,
+		UART_ROUTING_UART10,
+		NULL,
+	},
+};
+
+static struct aspeed_uart_routing_selector ast2600_uart4_sel = {
+	.dev_attr = ROUTING_ATTR(UART_ROUTING_UART4),
+	.reg = HICRA,
+	.shift = 25,
+	.mask = 0x7,
+	.options = {
+		UART_ROUTING_IO4,
+		UART_ROUTING_IO1,
+		UART_ROUTING_IO2,
+		UART_ROUTING_IO3,
+		UART_ROUTING_UART1,
+		UART_ROUTING_UART2,
+		UART_ROUTING_UART3,
+		UART_ROUTING_IO10,
+		NULL,
+	},
+};
+
+static struct aspeed_uart_routing_selector ast2600_uart3_sel = {
+	.dev_attr = ROUTING_ATTR(UART_ROUTING_UART3),
+	.reg = HICRA,
+	.shift = 22,
+	.mask = 0x7,
+	.options = {
+		UART_ROUTING_IO3,
+		UART_ROUTING_IO4,
+		UART_ROUTING_IO1,
+		UART_ROUTING_IO2,
+		UART_ROUTING_UART4,
+		UART_ROUTING_UART1,
+		UART_ROUTING_UART2,
+		UART_ROUTING_IO10,
+		NULL,
+	},
+};
+
+static struct aspeed_uart_routing_selector ast2600_uart2_sel = {
+	.dev_attr = ROUTING_ATTR(UART_ROUTING_UART2),
+	.reg = HICRA,
+	.shift = 19,
+	.mask = 0x7,
+	.options = {
+		UART_ROUTING_IO2,
+		UART_ROUTING_IO3,
+		UART_ROUTING_IO4,
+		UART_ROUTING_IO1,
+		UART_ROUTING_UART3,
+		UART_ROUTING_UART4,
+		UART_ROUTING_UART1,
+		UART_ROUTING_IO10,
+		NULL,
+	},
+};
+
+static struct aspeed_uart_routing_selector ast2600_uart1_sel = {
+	.dev_attr = ROUTING_ATTR(UART_ROUTING_UART1),
+	.reg = HICRA,
+	.shift = 16,
+	.mask = 0x7,
+	.options = {
+		UART_ROUTING_IO1,
+		UART_ROUTING_IO2,
+		UART_ROUTING_IO3,
+		UART_ROUTING_IO4,
+		UART_ROUTING_UART2,
+		UART_ROUTING_UART3,
+		UART_ROUTING_UART4,
+		UART_ROUTING_IO10,
+		NULL,
+	},
+};
+
+static struct aspeed_uart_routing_selector ast2600_io4_sel = {
+	.dev_attr = ROUTING_ATTR(UART_ROUTING_IO4),
+	.reg = HICRA,
+	.shift = 9,
+	.mask = 0x7,
+	.options = {
+		UART_ROUTING_UART4,
+		UART_ROUTING_UART10,
+		UART_ROUTING_UART1,
+		UART_ROUTING_UART2,
+		UART_ROUTING_UART3,
+		UART_ROUTING_IO1,
+		UART_ROUTING_IO2,
+		UART_ROUTING_IO10,
+		NULL,
+	},
+};
+
+static struct aspeed_uart_routing_selector ast2600_io3_sel = {
+	.dev_attr = ROUTING_ATTR(UART_ROUTING_IO3),
+	.reg = HICRA,
+	.shift = 6,
+	.mask = 0x7,
+	.options = {
+		UART_ROUTING_UART3,
+		UART_ROUTING_UART4,
+		UART_ROUTING_UART10,
+		UART_ROUTING_UART1,
+		UART_ROUTING_UART2,
+		UART_ROUTING_IO1,
+		UART_ROUTING_IO2,
+		UART_ROUTING_IO10,
+		NULL,
+	},
+};
+
+static struct aspeed_uart_routing_selector ast2600_io2_sel = {
+	.dev_attr = ROUTING_ATTR(UART_ROUTING_IO2),
+	.reg = HICRA,
+	.shift = 3,
+	.mask = 0x7,
+	.options = {
+		UART_ROUTING_UART2,
+		UART_ROUTING_UART3,
+		UART_ROUTING_UART4,
+		UART_ROUTING_UART10,
+		UART_ROUTING_UART1,
+		UART_ROUTING_IO3,
+		UART_ROUTING_IO4,
+		UART_ROUTING_IO10,
+		NULL,
+	},
+};
+
+static struct aspeed_uart_routing_selector ast2600_io1_sel = {
+	.dev_attr = ROUTING_ATTR(UART_ROUTING_IO1),
+	.reg = HICRA,
+	.shift = 0,
+	.mask = 0x7,
+	.options = {
+		UART_ROUTING_UART1,
+		UART_ROUTING_UART2,
+		UART_ROUTING_UART3,
+		UART_ROUTING_UART4,
+		UART_ROUTING_UART10,
+		UART_ROUTING_IO3,
+		UART_ROUTING_IO4,
+		UART_ROUTING_IO10,
+		NULL,
+	},
+};
+
+static struct attribute *ast2600_uart_routing_attrs[] = {
+	&ast2600_uart10_sel.dev_attr.attr,
+	&ast2600_io10_sel.dev_attr.attr,
+	&ast2600_uart4_sel.dev_attr.attr,
+	&ast2600_uart3_sel.dev_attr.attr,
+	&ast2600_uart2_sel.dev_attr.attr,
+	&ast2600_uart1_sel.dev_attr.attr,
+	&ast2600_io4_sel.dev_attr.attr,
+	&ast2600_io3_sel.dev_attr.attr,
+	&ast2600_io2_sel.dev_attr.attr,
+	&ast2600_io1_sel.dev_attr.attr,
+	NULL,
+};
+
+static const struct attribute_group ast2600_uart_routing_attr_group = {
+	.attrs = ast2600_uart_routing_attrs,
+};
+
+static ssize_t aspeed_uart_routing_show(struct device *dev,
+					struct device_attribute *attr,
+					char *buf)
+{
+	struct aspeed_uart_routing *uart_routing = dev_get_drvdata(dev);
+	struct aspeed_uart_routing_selector *sel = to_routing_selector(attr);
+	int val, pos, len;
+
+	regmap_read(uart_routing->map, sel->reg, &val);
+	val = (val >> sel->shift) & sel->mask;
+
+	len = 0;
+	for (pos = 0; sel->options[pos] != NULL; ++pos) {
+		if (pos == val)
+			len += sysfs_emit_at(buf, len, "[%s] ", sel->options[pos]);
+		else
+			len += sysfs_emit_at(buf, len, "%s ", sel->options[pos]);
+	}
+
+	if (val >= pos)
+		len += sysfs_emit_at(buf, len, "[unknown(%d)]", val);
+
+	len += sysfs_emit_at(buf, len, "\n");
+
+	return len;
+}
+
+static ssize_t aspeed_uart_routing_store(struct device *dev,
+					 struct device_attribute *attr,
+					 const char *buf, size_t count)
+{
+	struct aspeed_uart_routing *uart_routing = dev_get_drvdata(dev);
+	struct aspeed_uart_routing_selector *sel = to_routing_selector(attr);
+	int val;
+
+	val = match_string(sel->options, -1, buf);
+	if (val < 0) {
+		dev_err(dev, "invalid value \"%s\"\n", buf);
+		return -EINVAL;
+	}
+
+	regmap_update_bits(uart_routing->map, sel->reg,
+			   (sel->mask << sel->shift),
+			   (val & sel->mask) << sel->shift);
+
+	return count;
+}
+
+static int aspeed_uart_routing_probe(struct platform_device *pdev)
+{
+	int rc;
+	struct device *dev = &pdev->dev;
+	struct aspeed_uart_routing *uart_routing;
+
+	uart_routing = devm_kzalloc(&pdev->dev, sizeof(*uart_routing), GFP_KERNEL);
+	if (!uart_routing)
+		return -ENOMEM;
+
+	uart_routing->map = syscon_node_to_regmap(dev->parent->of_node);
+	if (IS_ERR(uart_routing->map)) {
+		dev_err(dev, "cannot get regmap\n");
+		return PTR_ERR(uart_routing->map);
+	}
+
+	uart_routing->attr_grp = of_device_get_match_data(dev);
+
+	rc = sysfs_create_group(&dev->kobj, uart_routing->attr_grp);
+	if (rc < 0)
+		return rc;
+
+	dev_set_drvdata(dev, uart_routing);
+
+	dev_info(dev, "module loaded\n");
+
+	return 0;
+}
+
+static int aspeed_uart_routing_remove(struct platform_device *pdev)
+{
+	struct device *dev = &pdev->dev;
+	struct aspeed_uart_routing *uart_routing = platform_get_drvdata(pdev);
+
+	sysfs_remove_group(&dev->kobj, uart_routing->attr_grp);
+
+	return 0;
+}
+
+static const struct of_device_id aspeed_uart_routing_table[] = {
+	{ .compatible = "aspeed,ast2400-uart-routing",
+	  .data = &ast2500_uart_routing_attr_group },
+	{ .compatible = "aspeed,ast2500-uart-routing",
+	  .data = &ast2500_uart_routing_attr_group },
+	{ .compatible = "aspeed,ast2600-uart-routing",
+	  .data = &ast2600_uart_routing_attr_group },
+	{ },
+};
+
+static struct platform_driver aspeed_uart_routing_driver = {
+	.driver = {
+		.name = "aspeed-uart-routing",
+		.of_match_table = aspeed_uart_routing_table,
+	},
+	.probe = aspeed_uart_routing_probe,
+	.remove = aspeed_uart_routing_remove,
+};
+
+module_platform_driver(aspeed_uart_routing_driver);
+
+MODULE_AUTHOR("Oskar Senft <osk@google.com>");
+MODULE_AUTHOR("Chia-Wei Wang <chiawei_wang@aspeedtech.com>");
+MODULE_LICENSE("GPL v2");
+MODULE_DESCRIPTION("Driver to configure Aspeed UART routing");
+1 -3
drivers/soc/bcm/bcm63xx/bcm-pmb.c
···
 	struct device *dev = &pdev->dev;
 	const struct bcm_pmb_pd_data *table;
 	const struct bcm_pmb_pd_data *e;
-	struct resource *res;
 	struct bcm_pmb *pmb;
 	int max_id;
 	int err;
···
 
 	pmb->dev = dev;
 
-	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-	pmb->base = devm_ioremap_resource(&pdev->dev, res);
+	pmb->base = devm_platform_ioremap_resource(pdev, 0);
 	if (IS_ERR(pmb->base))
 		return PTR_ERR(pmb->base);
 
+1 -3
drivers/soc/bcm/bcm63xx/bcm63xx-power.c
···
 {
 	struct device *dev = &pdev->dev;
 	struct device_node *np = dev->of_node;
-	struct resource *res;
 	const struct bcm63xx_power_data *entry, *table;
 	struct bcm63xx_power *power;
 	unsigned int ndom;
···
 	if (!power)
 		return -ENOMEM;
 
-	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-	power->base = devm_ioremap_resource(&pdev->dev, res);
+	power->base = devm_platform_ioremap_resource(pdev, 0);
 	if (IS_ERR(power->base))
 		return PTR_ERR(power->base);
 
+2
drivers/soc/bcm/brcmstb/biuctrl.c
···
 
 static const u32 a72_b53_mach_compat[] = {
 	0x7211,
+	0x72113,
+	0x72116,
 	0x7216,
 	0x72164,
 	0x72165,
+1 -1
drivers/soc/fsl/dpio/dpio-service.c
···
 	qbman_eq_desc_set_no_orp(&ed, 0);
 	qbman_eq_desc_set_fq(&ed, fqid);
 
-	return qbman_swp_enqueue_multiple(d->swp, &ed, fd, 0, nb);
+	return qbman_swp_enqueue_multiple(d->swp, &ed, fd, NULL, nb);
 }
 EXPORT_SYMBOL(dpaa2_io_service_enqueue_multiple_fq);
 
+4 -4
drivers/soc/fsl/dpio/qbman-portal.c
··· 693 693 p = (s->addr_cena + QBMAN_CENA_SWP_EQCR(eqcr_pi & half_mask)); 694 694 p[0] = cl[0] | s->eqcr.pi_vb; 695 695 if (flags && (flags[i] & QBMAN_ENQUEUE_FLAG_DCA)) { 696 - struct qbman_eq_desc *d = (struct qbman_eq_desc *)p; 696 + struct qbman_eq_desc *eq_desc = (struct qbman_eq_desc *)p; 697 697 698 - d->dca = (1 << QB_ENQUEUE_CMD_DCA_EN_SHIFT) | 698 + eq_desc->dca = (1 << QB_ENQUEUE_CMD_DCA_EN_SHIFT) | 699 699 ((flags[i]) & QBMAN_EQCR_DCA_IDXMASK); 700 700 } 701 701 eqcr_pi++; ··· 775 775 p = (s->addr_cena + QBMAN_CENA_SWP_EQCR(eqcr_pi & half_mask)); 776 776 p[0] = cl[0] | s->eqcr.pi_vb; 777 777 if (flags && (flags[i] & QBMAN_ENQUEUE_FLAG_DCA)) { 778 - struct qbman_eq_desc *d = (struct qbman_eq_desc *)p; 778 + struct qbman_eq_desc *eq_desc = (struct qbman_eq_desc *)p; 779 779 780 - d->dca = (1 << QB_ENQUEUE_CMD_DCA_EN_SHIFT) | 780 + eq_desc->dca = (1 << QB_ENQUEUE_CMD_DCA_EN_SHIFT) | 781 781 ((flags[i]) & QBMAN_EQCR_DCA_IDXMASK); 782 782 } 783 783 eqcr_pi++;
+1 -3
drivers/soc/fsl/guts.c
··· 140 140 { 141 141 struct device_node *np = pdev->dev.of_node; 142 142 struct device *dev = &pdev->dev; 143 - struct resource *res; 144 143 const struct fsl_soc_die_attr *soc_die; 145 144 const char *machine; 146 145 u32 svr; ··· 151 152 152 153 guts->little_endian = of_property_read_bool(np, "little-endian"); 153 154 154 - res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 155 - guts->regs = devm_ioremap_resource(dev, res); 155 + guts->regs = devm_platform_ioremap_resource(pdev, 0); 156 156 if (IS_ERR(guts->regs)) 157 157 return PTR_ERR(guts->regs); 158 158
+1 -6
drivers/soc/fsl/rcpm.c
··· 146 146 static int rcpm_probe(struct platform_device *pdev) 147 147 { 148 148 struct device *dev = &pdev->dev; 149 - struct resource *r; 150 149 struct rcpm *rcpm; 151 150 int ret; 152 151 ··· 153 154 if (!rcpm) 154 155 return -ENOMEM; 155 156 156 - r = platform_get_resource(pdev, IORESOURCE_MEM, 0); 157 - if (!r) 158 - return -ENODEV; 159 - 160 - rcpm->ippdexpcr_base = devm_ioremap_resource(&pdev->dev, r); 157 + rcpm->ippdexpcr_base = devm_platform_ioremap_resource(pdev, 0); 161 158 if (IS_ERR(rcpm->ippdexpcr_base)) { 162 159 ret = PTR_ERR(rcpm->ippdexpcr_base); 163 160 return ret;
+1
drivers/soc/imx/Kconfig
··· 6 6 depends on ARCH_MXC || (COMPILE_TEST && OF) 7 7 depends on PM 8 8 select PM_GENERIC_DOMAINS 9 + select REGMAP_MMIO 9 10 default y if SOC_IMX7D 10 11 11 12 config SOC_IMX8M
+1
drivers/soc/imx/Makefile
··· 5 5 obj-$(CONFIG_HAVE_IMX_GPC) += gpc.o 6 6 obj-$(CONFIG_IMX_GPCV2_PM_DOMAINS) += gpcv2.o 7 7 obj-$(CONFIG_SOC_IMX8M) += soc-imx8m.o 8 + obj-$(CONFIG_SOC_IMX8M) += imx8m-blk-ctrl.o
+93 -41
drivers/soc/imx/gpcv2.c
··· 192 192 struct clk_bulk_data *clks; 193 193 int num_clks; 194 194 195 - unsigned int pgc; 195 + unsigned long pgc; 196 196 197 197 const struct { 198 198 u32 pxx; ··· 202 202 } bits; 203 203 204 204 const int voltage; 205 + const bool keep_clocks; 205 206 struct device *dev; 206 207 }; 207 208 ··· 221 220 static int imx_pgc_power_up(struct generic_pm_domain *genpd) 222 221 { 223 222 struct imx_pgc_domain *domain = to_imx_pgc_domain(genpd); 224 - u32 reg_val; 223 + u32 reg_val, pgc; 225 224 int ret; 226 225 227 226 ret = pm_runtime_get_sync(domain->dev); ··· 245 244 goto out_regulator_disable; 246 245 } 247 246 247 + reset_control_assert(domain->reset); 248 + 248 249 if (domain->bits.pxx) { 249 250 /* request the domain to power up */ 250 251 regmap_update_bits(domain->regmap, GPC_PU_PGC_SW_PUP_REQ, ··· 265 262 } 266 263 267 264 /* disable power control */ 268 - regmap_clear_bits(domain->regmap, GPC_PGC_CTRL(domain->pgc), 269 - GPC_PGC_CTRL_PCR); 265 + for_each_set_bit(pgc, &domain->pgc, 32) { 266 + regmap_clear_bits(domain->regmap, GPC_PGC_CTRL(pgc), 267 + GPC_PGC_CTRL_PCR); 268 + } 270 269 } 271 - 272 - reset_control_assert(domain->reset); 273 270 274 271 /* delay for reset to propagate */ 275 272 udelay(5); ··· 296 293 } 297 294 298 295 /* Disable reset clocks for all devices in the domain */ 299 - clk_bulk_disable_unprepare(domain->num_clks, domain->clks); 296 + if (!domain->keep_clocks) 297 + clk_bulk_disable_unprepare(domain->num_clks, domain->clks); 300 298 301 299 return 0; 302 300 ··· 315 311 static int imx_pgc_power_down(struct generic_pm_domain *genpd) 316 312 { 317 313 struct imx_pgc_domain *domain = to_imx_pgc_domain(genpd); 318 - u32 reg_val; 314 + u32 reg_val, pgc; 319 315 int ret; 320 316 321 317 /* Enable reset clocks for all devices in the domain */ 322 - ret = clk_bulk_prepare_enable(domain->num_clks, domain->clks); 323 - if (ret) { 324 - dev_err(domain->dev, "failed to enable reset clocks\n"); 325 - return ret; 318 + if (!domain->keep_clocks) 
{ 319 + ret = clk_bulk_prepare_enable(domain->num_clks, domain->clks); 320 + if (ret) { 321 + dev_err(domain->dev, "failed to enable reset clocks\n"); 322 + return ret; 323 + } 326 324 } 327 325 328 326 /* request the ADB400 to power down */ ··· 344 338 345 339 if (domain->bits.pxx) { 346 340 /* enable power control */ 347 - regmap_update_bits(domain->regmap, GPC_PGC_CTRL(domain->pgc), 348 - GPC_PGC_CTRL_PCR, GPC_PGC_CTRL_PCR); 341 + for_each_set_bit(pgc, &domain->pgc, 32) { 342 + regmap_update_bits(domain->regmap, GPC_PGC_CTRL(pgc), 343 + GPC_PGC_CTRL_PCR, GPC_PGC_CTRL_PCR); 344 + } 349 345 350 346 /* request the domain to power down */ 351 347 regmap_update_bits(domain->regmap, GPC_PU_PGC_SW_PDN_REQ, ··· 397 389 .map = IMX7_MIPI_PHY_A_CORE_DOMAIN, 398 390 }, 399 391 .voltage = 1000000, 400 - .pgc = IMX7_PGC_MIPI, 392 + .pgc = BIT(IMX7_PGC_MIPI), 401 393 }, 402 394 403 395 [IMX7_POWER_DOMAIN_PCIE_PHY] = { ··· 409 401 .map = IMX7_PCIE_PHY_A_CORE_DOMAIN, 410 402 }, 411 403 .voltage = 1000000, 412 - .pgc = IMX7_PGC_PCIE, 404 + .pgc = BIT(IMX7_PGC_PCIE), 413 405 }, 414 406 415 407 [IMX7_POWER_DOMAIN_USB_HSIC_PHY] = { ··· 421 413 .map = IMX7_USB_HSIC_PHY_A_CORE_DOMAIN, 422 414 }, 423 415 .voltage = 1200000, 424 - .pgc = IMX7_PGC_USB_HSIC, 416 + .pgc = BIT(IMX7_PGC_USB_HSIC), 425 417 }, 426 418 }; 427 419 ··· 456 448 .pxx = IMX8M_MIPI_SW_Pxx_REQ, 457 449 .map = IMX8M_MIPI_A53_DOMAIN, 458 450 }, 459 - .pgc = IMX8M_PGC_MIPI, 451 + .pgc = BIT(IMX8M_PGC_MIPI), 460 452 }, 461 453 462 454 [IMX8M_POWER_DOMAIN_PCIE1] = { ··· 467 459 .pxx = IMX8M_PCIE1_SW_Pxx_REQ, 468 460 .map = IMX8M_PCIE1_A53_DOMAIN, 469 461 }, 470 - .pgc = IMX8M_PGC_PCIE1, 462 + .pgc = BIT(IMX8M_PGC_PCIE1), 471 463 }, 472 464 473 465 [IMX8M_POWER_DOMAIN_USB_OTG1] = { ··· 478 470 .pxx = IMX8M_OTG1_SW_Pxx_REQ, 479 471 .map = IMX8M_OTG1_A53_DOMAIN, 480 472 }, 481 - .pgc = IMX8M_PGC_OTG1, 473 + .pgc = BIT(IMX8M_PGC_OTG1), 482 474 }, 483 475 484 476 [IMX8M_POWER_DOMAIN_USB_OTG2] = { ··· 489 481 .pxx = 
IMX8M_OTG2_SW_Pxx_REQ, 490 482 .map = IMX8M_OTG2_A53_DOMAIN, 491 483 }, 492 - .pgc = IMX8M_PGC_OTG2, 484 + .pgc = BIT(IMX8M_PGC_OTG2), 493 485 }, 494 486 495 487 [IMX8M_POWER_DOMAIN_DDR1] = { ··· 500 492 .pxx = IMX8M_DDR1_SW_Pxx_REQ, 501 493 .map = IMX8M_DDR2_A53_DOMAIN, 502 494 }, 503 - .pgc = IMX8M_PGC_DDR1, 495 + .pgc = BIT(IMX8M_PGC_DDR1), 504 496 }, 505 497 506 498 [IMX8M_POWER_DOMAIN_GPU] = { ··· 513 505 .hskreq = IMX8M_GPU_HSK_PWRDNREQN, 514 506 .hskack = IMX8M_GPU_HSK_PWRDNACKN, 515 507 }, 516 - .pgc = IMX8M_PGC_GPU, 508 + .pgc = BIT(IMX8M_PGC_GPU), 517 509 }, 518 510 519 511 [IMX8M_POWER_DOMAIN_VPU] = { ··· 526 518 .hskreq = IMX8M_VPU_HSK_PWRDNREQN, 527 519 .hskack = IMX8M_VPU_HSK_PWRDNACKN, 528 520 }, 529 - .pgc = IMX8M_PGC_VPU, 521 + .pgc = BIT(IMX8M_PGC_VPU), 522 + .keep_clocks = true, 530 523 }, 531 524 532 525 [IMX8M_POWER_DOMAIN_DISP] = { ··· 540 531 .hskreq = IMX8M_DISP_HSK_PWRDNREQN, 541 532 .hskack = IMX8M_DISP_HSK_PWRDNACKN, 542 533 }, 543 - .pgc = IMX8M_PGC_DISP, 534 + .pgc = BIT(IMX8M_PGC_DISP), 544 535 }, 545 536 546 537 [IMX8M_POWER_DOMAIN_MIPI_CSI1] = { ··· 551 542 .pxx = IMX8M_MIPI_CSI1_SW_Pxx_REQ, 552 543 .map = IMX8M_MIPI_CSI1_A53_DOMAIN, 553 544 }, 554 - .pgc = IMX8M_PGC_MIPI_CSI1, 545 + .pgc = BIT(IMX8M_PGC_MIPI_CSI1), 555 546 }, 556 547 557 548 [IMX8M_POWER_DOMAIN_MIPI_CSI2] = { ··· 562 553 .pxx = IMX8M_MIPI_CSI2_SW_Pxx_REQ, 563 554 .map = IMX8M_MIPI_CSI2_A53_DOMAIN, 564 555 }, 565 - .pgc = IMX8M_PGC_MIPI_CSI2, 556 + .pgc = BIT(IMX8M_PGC_MIPI_CSI2), 566 557 }, 567 558 568 559 [IMX8M_POWER_DOMAIN_PCIE2] = { ··· 573 564 .pxx = IMX8M_PCIE2_SW_Pxx_REQ, 574 565 .map = IMX8M_PCIE2_A53_DOMAIN, 575 566 }, 576 - .pgc = IMX8M_PGC_PCIE2, 567 + .pgc = BIT(IMX8M_PGC_PCIE2), 577 568 }, 578 569 }; 579 570 ··· 626 617 .hskreq = IMX8MM_HSIO_HSK_PWRDNREQN, 627 618 .hskack = IMX8MM_HSIO_HSK_PWRDNACKN, 628 619 }, 620 + .keep_clocks = true, 629 621 }, 630 622 631 623 [IMX8MM_POWER_DOMAIN_PCIE] = { ··· 637 627 .pxx = IMX8MM_PCIE_SW_Pxx_REQ, 638 628 .map = 
IMX8MM_PCIE_A53_DOMAIN, 639 629 }, 640 - .pgc = IMX8MM_PGC_PCIE, 630 + .pgc = BIT(IMX8MM_PGC_PCIE), 641 631 }, 642 632 643 633 [IMX8MM_POWER_DOMAIN_OTG1] = { ··· 648 638 .pxx = IMX8MM_OTG1_SW_Pxx_REQ, 649 639 .map = IMX8MM_OTG1_A53_DOMAIN, 650 640 }, 651 - .pgc = IMX8MM_PGC_OTG1, 641 + .pgc = BIT(IMX8MM_PGC_OTG1), 652 642 }, 653 643 654 644 [IMX8MM_POWER_DOMAIN_OTG2] = { ··· 659 649 .pxx = IMX8MM_OTG2_SW_Pxx_REQ, 660 650 .map = IMX8MM_OTG2_A53_DOMAIN, 661 651 }, 662 - .pgc = IMX8MM_PGC_OTG2, 652 + .pgc = BIT(IMX8MM_PGC_OTG2), 663 653 }, 664 654 665 655 [IMX8MM_POWER_DOMAIN_GPUMIX] = { ··· 672 662 .hskreq = IMX8MM_GPUMIX_HSK_PWRDNREQN, 673 663 .hskack = IMX8MM_GPUMIX_HSK_PWRDNACKN, 674 664 }, 675 - .pgc = IMX8MM_PGC_GPUMIX, 665 + .pgc = BIT(IMX8MM_PGC_GPUMIX), 666 + .keep_clocks = true, 676 667 }, 677 668 678 669 [IMX8MM_POWER_DOMAIN_GPU] = { ··· 686 675 .hskreq = IMX8MM_GPU_HSK_PWRDNREQN, 687 676 .hskack = IMX8MM_GPU_HSK_PWRDNACKN, 688 677 }, 689 - .pgc = IMX8MM_PGC_GPU2D, 678 + .pgc = BIT(IMX8MM_PGC_GPU2D) | BIT(IMX8MM_PGC_GPU3D), 690 679 }, 691 680 692 681 [IMX8MM_POWER_DOMAIN_VPUMIX] = { ··· 699 688 .hskreq = IMX8MM_VPUMIX_HSK_PWRDNREQN, 700 689 .hskack = IMX8MM_VPUMIX_HSK_PWRDNACKN, 701 690 }, 702 - .pgc = IMX8MM_PGC_VPUMIX, 691 + .pgc = BIT(IMX8MM_PGC_VPUMIX), 692 + .keep_clocks = true, 703 693 }, 704 694 705 695 [IMX8MM_POWER_DOMAIN_VPUG1] = { ··· 711 699 .pxx = IMX8MM_VPUG1_SW_Pxx_REQ, 712 700 .map = IMX8MM_VPUG1_A53_DOMAIN, 713 701 }, 714 - .pgc = IMX8MM_PGC_VPUG1, 702 + .pgc = BIT(IMX8MM_PGC_VPUG1), 715 703 }, 716 704 717 705 [IMX8MM_POWER_DOMAIN_VPUG2] = { ··· 722 710 .pxx = IMX8MM_VPUG2_SW_Pxx_REQ, 723 711 .map = IMX8MM_VPUG2_A53_DOMAIN, 724 712 }, 725 - .pgc = IMX8MM_PGC_VPUG2, 713 + .pgc = BIT(IMX8MM_PGC_VPUG2), 726 714 }, 727 715 728 716 [IMX8MM_POWER_DOMAIN_VPUH1] = { ··· 733 721 .pxx = IMX8MM_VPUH1_SW_Pxx_REQ, 734 722 .map = IMX8MM_VPUH1_A53_DOMAIN, 735 723 }, 736 - .pgc = IMX8MM_PGC_VPUH1, 724 + .pgc = BIT(IMX8MM_PGC_VPUH1), 737 725 }, 738 726 739 
727 [IMX8MM_POWER_DOMAIN_DISPMIX] = { ··· 746 734 .hskreq = IMX8MM_DISPMIX_HSK_PWRDNREQN, 747 735 .hskack = IMX8MM_DISPMIX_HSK_PWRDNACKN, 748 736 }, 749 - .pgc = IMX8MM_PGC_DISPMIX, 737 + .pgc = BIT(IMX8MM_PGC_DISPMIX), 738 + .keep_clocks = true, 750 739 }, 751 740 752 741 [IMX8MM_POWER_DOMAIN_MIPI] = { ··· 758 745 .pxx = IMX8MM_MIPI_SW_Pxx_REQ, 759 746 .map = IMX8MM_MIPI_A53_DOMAIN, 760 747 }, 761 - .pgc = IMX8MM_PGC_MIPI, 748 + .pgc = BIT(IMX8MM_PGC_MIPI), 762 749 }, 763 750 }; 764 751 ··· 815 802 .hskreq = IMX8MN_HSIO_HSK_PWRDNREQN, 816 803 .hskack = IMX8MN_HSIO_HSK_PWRDNACKN, 817 804 }, 805 + .keep_clocks = true, 818 806 }, 819 807 820 808 [IMX8MN_POWER_DOMAIN_OTG1] = { ··· 826 812 .pxx = IMX8MN_OTG1_SW_Pxx_REQ, 827 813 .map = IMX8MN_OTG1_A53_DOMAIN, 828 814 }, 829 - .pgc = IMX8MN_PGC_OTG1, 815 + .pgc = BIT(IMX8MN_PGC_OTG1), 830 816 }, 831 817 832 818 [IMX8MN_POWER_DOMAIN_GPUMIX] = { ··· 839 825 .hskreq = IMX8MN_GPUMIX_HSK_PWRDNREQN, 840 826 .hskack = IMX8MN_GPUMIX_HSK_PWRDNACKN, 841 827 }, 842 - .pgc = IMX8MN_PGC_GPUMIX, 828 + .pgc = BIT(IMX8MN_PGC_GPUMIX), 843 829 }, 844 830 }; 845 831 ··· 908 894 goto out_domain_unmap; 909 895 } 910 896 897 + if (IS_ENABLED(CONFIG_LOCKDEP) && 898 + of_property_read_bool(domain->dev->of_node, "power-domains")) 899 + lockdep_set_subclass(&domain->genpd.mlock, 1); 900 + 911 901 ret = of_genpd_add_provider_simple(domain->dev->of_node, 912 902 &domain->genpd); 913 903 if (ret) { ··· 948 930 return 0; 949 931 } 950 932 933 + #ifdef CONFIG_PM_SLEEP 934 + static int imx_pgc_domain_suspend(struct device *dev) 935 + { 936 + int ret; 937 + 938 + /* 939 + * This may look strange, but is done so the generic PM_SLEEP code 940 + * can power down our domain and more importantly power it up again 941 + * after resume, without tripping over our usage of runtime PM to 942 + * power up/down the nested domains. 
943 + */ 944 + ret = pm_runtime_get_sync(dev); 945 + if (ret < 0) { 946 + pm_runtime_put_noidle(dev); 947 + return ret; 948 + } 949 + 950 + return 0; 951 + } 952 + 953 + static int imx_pgc_domain_resume(struct device *dev) 954 + { 955 + return pm_runtime_put(dev); 956 + } 957 + #endif 958 + 959 + static const struct dev_pm_ops imx_pgc_domain_pm_ops = { 960 + SET_SYSTEM_SLEEP_PM_OPS(imx_pgc_domain_suspend, imx_pgc_domain_resume) 961 + }; 962 + 951 963 static const struct platform_device_id imx_pgc_domain_id[] = { 952 964 { "imx-pgc-domain", }, 953 965 { }, ··· 986 938 static struct platform_driver imx_pgc_domain_driver = { 987 939 .driver = { 988 940 .name = "imx-pgc", 941 + .pm = &imx_pgc_domain_pm_ops, 989 942 }, 990 943 .probe = imx_pgc_domain_probe, 991 944 .remove = imx_pgc_domain_remove, ··· 1034 985 struct platform_device *pd_pdev; 1035 986 struct imx_pgc_domain *domain; 1036 987 u32 domain_index; 988 + 989 + if (!of_device_is_available(np)) 990 + continue; 1037 991 1038 992 ret = of_property_read_u32(np, "reg", &domain_index); 1039 993 if (ret) {
+523
drivers/soc/imx/imx8m-blk-ctrl.c
··· 1 + // SPDX-License-Identifier: GPL-2.0+ 2 + 3 + /* 4 + * Copyright 2021 Pengutronix, Lucas Stach <kernel@pengutronix.de> 5 + */ 6 + 7 + #include <linux/device.h> 8 + #include <linux/module.h> 9 + #include <linux/of_device.h> 10 + #include <linux/platform_device.h> 11 + #include <linux/pm_domain.h> 12 + #include <linux/pm_runtime.h> 13 + #include <linux/regmap.h> 14 + #include <linux/clk.h> 15 + 16 + #include <dt-bindings/power/imx8mm-power.h> 17 + 18 + #define BLK_SFT_RSTN 0x0 19 + #define BLK_CLK_EN 0x4 20 + 21 + struct imx8m_blk_ctrl_domain; 22 + 23 + struct imx8m_blk_ctrl { 24 + struct device *dev; 25 + struct notifier_block power_nb; 26 + struct device *bus_power_dev; 27 + struct regmap *regmap; 28 + struct imx8m_blk_ctrl_domain *domains; 29 + struct genpd_onecell_data onecell_data; 30 + }; 31 + 32 + struct imx8m_blk_ctrl_domain_data { 33 + const char *name; 34 + const char * const *clk_names; 35 + int num_clks; 36 + const char *gpc_name; 37 + u32 rst_mask; 38 + u32 clk_mask; 39 + }; 40 + 41 + #define DOMAIN_MAX_CLKS 3 42 + 43 + struct imx8m_blk_ctrl_domain { 44 + struct generic_pm_domain genpd; 45 + const struct imx8m_blk_ctrl_domain_data *data; 46 + struct clk_bulk_data clks[DOMAIN_MAX_CLKS]; 47 + struct device *power_dev; 48 + struct imx8m_blk_ctrl *bc; 49 + }; 50 + 51 + struct imx8m_blk_ctrl_data { 52 + int max_reg; 53 + notifier_fn_t power_notifier_fn; 54 + const struct imx8m_blk_ctrl_domain_data *domains; 55 + int num_domains; 56 + }; 57 + 58 + static inline struct imx8m_blk_ctrl_domain * 59 + to_imx8m_blk_ctrl_domain(struct generic_pm_domain *genpd) 60 + { 61 + return container_of(genpd, struct imx8m_blk_ctrl_domain, genpd); 62 + } 63 + 64 + static int imx8m_blk_ctrl_power_on(struct generic_pm_domain *genpd) 65 + { 66 + struct imx8m_blk_ctrl_domain *domain = to_imx8m_blk_ctrl_domain(genpd); 67 + const struct imx8m_blk_ctrl_domain_data *data = domain->data; 68 + struct imx8m_blk_ctrl *bc = domain->bc; 69 + int ret; 70 + 71 + /* make sure bus domain 
is awake */ 72 + ret = pm_runtime_get_sync(bc->bus_power_dev); 73 + if (ret < 0) { 74 + pm_runtime_put_noidle(bc->bus_power_dev); 75 + dev_err(bc->dev, "failed to power up bus domain\n"); 76 + return ret; 77 + } 78 + 79 + /* put devices into reset */ 80 + regmap_clear_bits(bc->regmap, BLK_SFT_RSTN, data->rst_mask); 81 + 82 + /* enable upstream and blk-ctrl clocks to allow reset to propagate */ 83 + ret = clk_bulk_prepare_enable(data->num_clks, domain->clks); 84 + if (ret) { 85 + dev_err(bc->dev, "failed to enable clocks\n"); 86 + goto bus_put; 87 + } 88 + regmap_set_bits(bc->regmap, BLK_CLK_EN, data->clk_mask); 89 + 90 + /* power up upstream GPC domain */ 91 + ret = pm_runtime_get_sync(domain->power_dev); 92 + if (ret < 0) { 93 + dev_err(bc->dev, "failed to power up peripheral domain\n"); 94 + goto clk_disable; 95 + } 96 + 97 + /* wait for reset to propagate */ 98 + udelay(5); 99 + 100 + /* release reset */ 101 + regmap_set_bits(bc->regmap, BLK_SFT_RSTN, data->rst_mask); 102 + 103 + /* disable upstream clocks */ 104 + clk_bulk_disable_unprepare(data->num_clks, domain->clks); 105 + 106 + return 0; 107 + 108 + clk_disable: 109 + clk_bulk_disable_unprepare(data->num_clks, domain->clks); 110 + bus_put: 111 + pm_runtime_put(bc->bus_power_dev); 112 + 113 + return ret; 114 + } 115 + 116 + static int imx8m_blk_ctrl_power_off(struct generic_pm_domain *genpd) 117 + { 118 + struct imx8m_blk_ctrl_domain *domain = to_imx8m_blk_ctrl_domain(genpd); 119 + const struct imx8m_blk_ctrl_domain_data *data = domain->data; 120 + struct imx8m_blk_ctrl *bc = domain->bc; 121 + 122 + /* put devices into reset and disable clocks */ 123 + regmap_clear_bits(bc->regmap, BLK_SFT_RSTN, data->rst_mask); 124 + regmap_clear_bits(bc->regmap, BLK_CLK_EN, data->clk_mask); 125 + 126 + /* power down upstream GPC domain */ 127 + pm_runtime_put(domain->power_dev); 128 + 129 + /* allow bus domain to suspend */ 130 + pm_runtime_put(bc->bus_power_dev); 131 + 132 + return 0; 133 + } 134 + 135 + static struct 
generic_pm_domain * 136 + imx8m_blk_ctrl_xlate(struct of_phandle_args *args, void *data) 137 + { 138 + struct genpd_onecell_data *onecell_data = data; 139 + unsigned int index = args->args[0]; 140 + 141 + if (args->args_count != 1 || 142 + index >= onecell_data->num_domains) 143 + return ERR_PTR(-EINVAL); 144 + 145 + return onecell_data->domains[index]; 146 + } 147 + 148 + static struct lock_class_key blk_ctrl_genpd_lock_class; 149 + 150 + static int imx8m_blk_ctrl_probe(struct platform_device *pdev) 151 + { 152 + const struct imx8m_blk_ctrl_data *bc_data; 153 + struct device *dev = &pdev->dev; 154 + struct imx8m_blk_ctrl *bc; 155 + void __iomem *base; 156 + int i, ret; 157 + 158 + struct regmap_config regmap_config = { 159 + .reg_bits = 32, 160 + .val_bits = 32, 161 + .reg_stride = 4, 162 + }; 163 + 164 + bc = devm_kzalloc(dev, sizeof(*bc), GFP_KERNEL); 165 + if (!bc) 166 + return -ENOMEM; 167 + 168 + bc->dev = dev; 169 + 170 + bc_data = of_device_get_match_data(dev); 171 + 172 + base = devm_platform_ioremap_resource(pdev, 0); 173 + if (IS_ERR(base)) 174 + return PTR_ERR(base); 175 + 176 + regmap_config.max_register = bc_data->max_reg; 177 + bc->regmap = devm_regmap_init_mmio(dev, base, &regmap_config); 178 + if (IS_ERR(bc->regmap)) 179 + return dev_err_probe(dev, PTR_ERR(bc->regmap), 180 + "failed to init regmap\n"); 181 + 182 + bc->domains = devm_kcalloc(dev, bc_data->num_domains, 183 + sizeof(struct imx8m_blk_ctrl_domain), 184 + GFP_KERNEL); 185 + if (!bc->domains) 186 + return -ENOMEM; 187 + 188 + bc->onecell_data.num_domains = bc_data->num_domains; 189 + bc->onecell_data.xlate = imx8m_blk_ctrl_xlate; 190 + bc->onecell_data.domains = 191 + devm_kcalloc(dev, bc_data->num_domains, 192 + sizeof(struct generic_pm_domain *), GFP_KERNEL); 193 + if (!bc->onecell_data.domains) 194 + return -ENOMEM; 195 + 196 + bc->bus_power_dev = genpd_dev_pm_attach_by_name(dev, "bus"); 197 + if (IS_ERR(bc->bus_power_dev)) 198 + return dev_err_probe(dev, PTR_ERR(bc->bus_power_dev), 
199 + "failed to attach power domain\n"); 200 + 201 + for (i = 0; i < bc_data->num_domains; i++) { 202 + const struct imx8m_blk_ctrl_domain_data *data = &bc_data->domains[i]; 203 + struct imx8m_blk_ctrl_domain *domain = &bc->domains[i]; 204 + int j; 205 + 206 + domain->data = data; 207 + 208 + for (j = 0; j < data->num_clks; j++) 209 + domain->clks[j].id = data->clk_names[j]; 210 + 211 + ret = devm_clk_bulk_get(dev, data->num_clks, domain->clks); 212 + if (ret) { 213 + dev_err_probe(dev, ret, "failed to get clock\n"); 214 + goto cleanup_pds; 215 + } 216 + 217 + domain->power_dev = 218 + dev_pm_domain_attach_by_name(dev, data->gpc_name); 219 + if (IS_ERR(domain->power_dev)) { 220 + dev_err_probe(dev, PTR_ERR(domain->power_dev), 221 + "failed to attach power domain\n"); 222 + ret = PTR_ERR(domain->power_dev); 223 + goto cleanup_pds; 224 + } 225 + 226 + domain->genpd.name = data->name; 227 + domain->genpd.power_on = imx8m_blk_ctrl_power_on; 228 + domain->genpd.power_off = imx8m_blk_ctrl_power_off; 229 + domain->bc = bc; 230 + 231 + ret = pm_genpd_init(&domain->genpd, NULL, true); 232 + if (ret) { 233 + dev_err_probe(dev, ret, "failed to init power domain\n"); 234 + dev_pm_domain_detach(domain->power_dev, true); 235 + goto cleanup_pds; 236 + } 237 + 238 + /* 239 + * We use runtime PM to trigger power on/off of the upstream GPC 240 + * domain, as a strict hierarchical parent/child power domain 241 + * setup doesn't allow us to meet the sequencing requirements. 242 + * This means we have nested locking of genpd locks, without the 243 + * nesting being visible at the genpd level, so we need a 244 + * separate lock class to make lockdep aware of the fact that 245 + * this are separate domain locks that can be nested without a 246 + * self-deadlock. 
247 + */ 248 + lockdep_set_class(&domain->genpd.mlock, 249 + &blk_ctrl_genpd_lock_class); 250 + 251 + bc->onecell_data.domains[i] = &domain->genpd; 252 + } 253 + 254 + ret = of_genpd_add_provider_onecell(dev->of_node, &bc->onecell_data); 255 + if (ret) { 256 + dev_err_probe(dev, ret, "failed to add power domain provider\n"); 257 + goto cleanup_pds; 258 + } 259 + 260 + bc->power_nb.notifier_call = bc_data->power_notifier_fn; 261 + ret = dev_pm_genpd_add_notifier(bc->bus_power_dev, &bc->power_nb); 262 + if (ret) { 263 + dev_err_probe(dev, ret, "failed to add power notifier\n"); 264 + goto cleanup_provider; 265 + } 266 + 267 + dev_set_drvdata(dev, bc); 268 + 269 + return 0; 270 + 271 + cleanup_provider: 272 + of_genpd_del_provider(dev->of_node); 273 + cleanup_pds: 274 + for (i--; i >= 0; i--) { 275 + pm_genpd_remove(&bc->domains[i].genpd); 276 + dev_pm_domain_detach(bc->domains[i].power_dev, true); 277 + } 278 + 279 + dev_pm_domain_detach(bc->bus_power_dev, true); 280 + 281 + return ret; 282 + } 283 + 284 + static int imx8m_blk_ctrl_remove(struct platform_device *pdev) 285 + { 286 + struct imx8m_blk_ctrl *bc = dev_get_drvdata(&pdev->dev); 287 + int i; 288 + 289 + of_genpd_del_provider(pdev->dev.of_node); 290 + 291 + for (i = 0; bc->onecell_data.num_domains; i++) { 292 + struct imx8m_blk_ctrl_domain *domain = &bc->domains[i]; 293 + 294 + pm_genpd_remove(&domain->genpd); 295 + dev_pm_domain_detach(domain->power_dev, true); 296 + } 297 + 298 + dev_pm_genpd_remove_notifier(bc->bus_power_dev); 299 + 300 + dev_pm_domain_detach(bc->bus_power_dev, true); 301 + 302 + return 0; 303 + } 304 + 305 + #ifdef CONFIG_PM_SLEEP 306 + static int imx8m_blk_ctrl_suspend(struct device *dev) 307 + { 308 + struct imx8m_blk_ctrl *bc = dev_get_drvdata(dev); 309 + int ret, i; 310 + 311 + /* 312 + * This may look strange, but is done so the generic PM_SLEEP code 313 + * can power down our domains and more importantly power them up again 314 + * after resume, without tripping over our usage of 
runtime PM to 315 + * control the upstream GPC domains. Things happen in the right order 316 + * in the system suspend/resume paths due to the device parent/child 317 + * hierarchy. 318 + */ 319 + ret = pm_runtime_get_sync(bc->bus_power_dev); 320 + if (ret < 0) { 321 + pm_runtime_put_noidle(bc->bus_power_dev); 322 + return ret; 323 + } 324 + 325 + for (i = 0; i < bc->onecell_data.num_domains; i++) { 326 + struct imx8m_blk_ctrl_domain *domain = &bc->domains[i]; 327 + 328 + ret = pm_runtime_get_sync(domain->power_dev); 329 + if (ret < 0) { 330 + pm_runtime_put_noidle(domain->power_dev); 331 + goto out_fail; 332 + } 333 + } 334 + 335 + return 0; 336 + 337 + out_fail: 338 + for (i--; i >= 0; i--) 339 + pm_runtime_put(bc->domains[i].power_dev); 340 + 341 + pm_runtime_put(bc->bus_power_dev); 342 + 343 + return ret; 344 + } 345 + 346 + static int imx8m_blk_ctrl_resume(struct device *dev) 347 + { 348 + struct imx8m_blk_ctrl *bc = dev_get_drvdata(dev); 349 + int i; 350 + 351 + for (i = 0; i < bc->onecell_data.num_domains; i++) 352 + pm_runtime_put(bc->domains[i].power_dev); 353 + 354 + pm_runtime_put(bc->bus_power_dev); 355 + 356 + return 0; 357 + } 358 + #endif 359 + 360 + static const struct dev_pm_ops imx8m_blk_ctrl_pm_ops = { 361 + SET_SYSTEM_SLEEP_PM_OPS(imx8m_blk_ctrl_suspend, imx8m_blk_ctrl_resume) 362 + }; 363 + 364 + static int imx8mm_vpu_power_notifier(struct notifier_block *nb, 365 + unsigned long action, void *data) 366 + { 367 + struct imx8m_blk_ctrl *bc = container_of(nb, struct imx8m_blk_ctrl, 368 + power_nb); 369 + 370 + if (action != GENPD_NOTIFY_ON && action != GENPD_NOTIFY_PRE_OFF) 371 + return NOTIFY_OK; 372 + 373 + /* 374 + * The ADB in the VPUMIX domain has no separate reset and clock 375 + * enable bits, but is ungated together with the VPU clocks. To 376 + * allow the handshake with the GPC to progress we put the VPUs 377 + * in reset and ungate the clocks. 
378 + */ 379 + regmap_clear_bits(bc->regmap, BLK_SFT_RSTN, BIT(0) | BIT(1) | BIT(2)); 380 + regmap_set_bits(bc->regmap, BLK_CLK_EN, BIT(0) | BIT(1) | BIT(2)); 381 + 382 + if (action == GENPD_NOTIFY_ON) { 383 + /* 384 + * On power up we have no software backchannel to the GPC to 385 + * wait for the ADB handshake to happen, so we just delay for a 386 + * bit. On power down the GPC driver waits for the handshake. 387 + */ 388 + udelay(5); 389 + 390 + /* set "fuse" bits to enable the VPUs */ 391 + regmap_set_bits(bc->regmap, 0x8, 0xffffffff); 392 + regmap_set_bits(bc->regmap, 0xc, 0xffffffff); 393 + regmap_set_bits(bc->regmap, 0x10, 0xffffffff); 394 + regmap_set_bits(bc->regmap, 0x14, 0xffffffff); 395 + } 396 + 397 + return NOTIFY_OK; 398 + } 399 + 400 + static const struct imx8m_blk_ctrl_domain_data imx8mm_vpu_blk_ctl_domain_data[] = { 401 + [IMX8MM_VPUBLK_PD_G1] = { 402 + .name = "vpublk-g1", 403 + .clk_names = (const char *[]){ "g1", }, 404 + .num_clks = 1, 405 + .gpc_name = "g1", 406 + .rst_mask = BIT(1), 407 + .clk_mask = BIT(1), 408 + }, 409 + [IMX8MM_VPUBLK_PD_G2] = { 410 + .name = "vpublk-g2", 411 + .clk_names = (const char *[]){ "g2", }, 412 + .num_clks = 1, 413 + .gpc_name = "g2", 414 + .rst_mask = BIT(0), 415 + .clk_mask = BIT(0), 416 + }, 417 + [IMX8MM_VPUBLK_PD_H1] = { 418 + .name = "vpublk-h1", 419 + .clk_names = (const char *[]){ "h1", }, 420 + .num_clks = 1, 421 + .gpc_name = "h1", 422 + .rst_mask = BIT(2), 423 + .clk_mask = BIT(2), 424 + }, 425 + }; 426 + 427 + static const struct imx8m_blk_ctrl_data imx8mm_vpu_blk_ctl_dev_data = { 428 + .max_reg = 0x18, 429 + .power_notifier_fn = imx8mm_vpu_power_notifier, 430 + .domains = imx8mm_vpu_blk_ctl_domain_data, 431 + .num_domains = ARRAY_SIZE(imx8mm_vpu_blk_ctl_domain_data), 432 + }; 433 + 434 + static int imx8mm_disp_power_notifier(struct notifier_block *nb, 435 + unsigned long action, void *data) 436 + { 437 + struct imx8m_blk_ctrl *bc = container_of(nb, struct imx8m_blk_ctrl, 438 + power_nb); 439 + 440 + 
if (action != GENPD_NOTIFY_ON && action != GENPD_NOTIFY_PRE_OFF) 441 + return NOTIFY_OK; 442 + 443 + /* Enable bus clock and deassert bus reset */ 444 + regmap_set_bits(bc->regmap, BLK_CLK_EN, BIT(12)); 445 + regmap_set_bits(bc->regmap, BLK_SFT_RSTN, BIT(6)); 446 + 447 + /* 448 + * On power up we have no software backchannel to the GPC to 449 + * wait for the ADB handshake to happen, so we just delay for a 450 + * bit. On power down the GPC driver waits for the handshake. 451 + */ 452 + if (action == GENPD_NOTIFY_ON) 453 + udelay(5); 454 + 455 + 456 + return NOTIFY_OK; 457 + } 458 + 459 + static const struct imx8m_blk_ctrl_domain_data imx8mm_disp_blk_ctl_domain_data[] = { 460 + [IMX8MM_DISPBLK_PD_CSI_BRIDGE] = { 461 + .name = "dispblk-csi-bridge", 462 + .clk_names = (const char *[]){ "csi-bridge-axi", "csi-bridge-apb", 463 + "csi-bridge-core", }, 464 + .num_clks = 3, 465 + .gpc_name = "csi-bridge", 466 + .rst_mask = BIT(0) | BIT(1) | BIT(2), 467 + .clk_mask = BIT(0) | BIT(1) | BIT(2) | BIT(3) | BIT(4) | BIT(5), 468 + }, 469 + [IMX8MM_DISPBLK_PD_LCDIF] = { 470 + .name = "dispblk-lcdif", 471 + .clk_names = (const char *[]){ "lcdif-axi", "lcdif-apb", "lcdif-pix", }, 472 + .num_clks = 3, 473 + .gpc_name = "lcdif", 474 + .clk_mask = BIT(6) | BIT(7), 475 + }, 476 + [IMX8MM_DISPBLK_PD_MIPI_DSI] = { 477 + .name = "dispblk-mipi-dsi", 478 + .clk_names = (const char *[]){ "dsi-pclk", "dsi-ref", }, 479 + .num_clks = 2, 480 + .gpc_name = "mipi-dsi", 481 + .rst_mask = BIT(5), 482 + .clk_mask = BIT(8) | BIT(9), 483 + }, 484 + [IMX8MM_DISPBLK_PD_MIPI_CSI] = { 485 + .name = "dispblk-mipi-csi", 486 + .clk_names = (const char *[]){ "csi-aclk", "csi-pclk" }, 487 + .num_clks = 2, 488 + .gpc_name = "mipi-csi", 489 + .rst_mask = BIT(3) | BIT(4), 490 + .clk_mask = BIT(10) | BIT(11), 491 + }, 492 + }; 493 + 494 + static const struct imx8m_blk_ctrl_data imx8mm_disp_blk_ctl_dev_data = { 495 + .max_reg = 0x2c, 496 + .power_notifier_fn = imx8mm_disp_power_notifier, 497 + .domains = 
imx8mm_disp_blk_ctl_domain_data, 498 + .num_domains = ARRAY_SIZE(imx8mm_disp_blk_ctl_domain_data), 499 + }; 500 + 501 + static const struct of_device_id imx8m_blk_ctrl_of_match[] = { 502 + { 503 + .compatible = "fsl,imx8mm-vpu-blk-ctrl", 504 + .data = &imx8mm_vpu_blk_ctl_dev_data 505 + }, { 506 + .compatible = "fsl,imx8mm-disp-blk-ctrl", 507 + .data = &imx8mm_disp_blk_ctl_dev_data 508 + } ,{ 509 + /* Sentinel */ 510 + } 511 + }; 512 + MODULE_DEVICE_TABLE(of, imx8m_blk_ctrl_of_match); 513 + 514 + static struct platform_driver imx8m_blk_ctrl_driver = { 515 + .probe = imx8m_blk_ctrl_probe, 516 + .remove = imx8m_blk_ctrl_remove, 517 + .driver = { 518 + .name = "imx8m-blk-ctrl", 519 + .pm = &imx8m_blk_ctrl_pm_ops, 520 + .of_match_table = imx8m_blk_ctrl_of_match, 521 + }, 522 + }; 523 + module_platform_driver(imx8m_blk_ctrl_driver);
+76
drivers/soc/mediatek/mt8192-mmsys.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0-only */ 2 + 3 + #ifndef __SOC_MEDIATEK_MT8192_MMSYS_H 4 + #define __SOC_MEDIATEK_MT8192_MMSYS_H 5 + 6 + #define MT8192_MMSYS_OVL_MOUT_EN 0xf04 7 + #define MT8192_DISP_OVL1_2L_MOUT_EN 0xf08 8 + #define MT8192_DISP_OVL0_2L_MOUT_EN 0xf18 9 + #define MT8192_DISP_OVL0_MOUT_EN 0xf1c 10 + #define MT8192_DISP_RDMA0_SEL_IN 0xf2c 11 + #define MT8192_DISP_RDMA0_SOUT_SEL 0xf30 12 + #define MT8192_DISP_CCORR0_SOUT_SEL 0xf34 13 + #define MT8192_DISP_AAL0_SEL_IN 0xf38 14 + #define MT8192_DISP_DITHER0_MOUT_EN 0xf3c 15 + #define MT8192_DISP_DSI0_SEL_IN 0xf40 16 + #define MT8192_DISP_OVL2_2L_MOUT_EN 0xf4c 17 + 18 + #define MT8192_DISP_OVL0_GO_BLEND BIT(0) 19 + #define MT8192_DITHER0_MOUT_IN_DSI0 BIT(0) 20 + #define MT8192_OVL0_MOUT_EN_DISP_RDMA0 BIT(0) 21 + #define MT8192_OVL2_2L_MOUT_EN_RDMA4 BIT(0) 22 + #define MT8192_DISP_OVL0_GO_BG BIT(1) 23 + #define MT8192_DISP_OVL0_2L_GO_BLEND BIT(2) 24 + #define MT8192_DISP_OVL0_2L_GO_BG BIT(3) 25 + #define MT8192_OVL1_2L_MOUT_EN_RDMA1 BIT(4) 26 + #define MT8192_OVL0_MOUT_EN_OVL0_2L BIT(4) 27 + #define MT8192_RDMA0_SEL_IN_OVL0_2L 0x3 28 + #define MT8192_RDMA0_SOUT_COLOR0 0x1 29 + #define MT8192_CCORR0_SOUT_AAL0 0x1 30 + #define MT8192_AAL0_SEL_IN_CCORR0 0x1 31 + #define MT8192_DSI0_SEL_IN_DITHER0 0x1 32 + 33 + static const struct mtk_mmsys_routes mmsys_mt8192_routing_table[] = { 34 + { 35 + DDP_COMPONENT_OVL_2L0, DDP_COMPONENT_RDMA0, 36 + MT8192_DISP_OVL0_2L_MOUT_EN, MT8192_OVL0_MOUT_EN_DISP_RDMA0, 37 + MT8192_OVL0_MOUT_EN_DISP_RDMA0 38 + }, { 39 + DDP_COMPONENT_OVL_2L2, DDP_COMPONENT_RDMA4, 40 + MT8192_DISP_OVL2_2L_MOUT_EN, MT8192_OVL2_2L_MOUT_EN_RDMA4, 41 + MT8192_OVL2_2L_MOUT_EN_RDMA4 42 + }, { 43 + DDP_COMPONENT_DITHER, DDP_COMPONENT_DSI0, 44 + MT8192_DISP_DITHER0_MOUT_EN, MT8192_DITHER0_MOUT_IN_DSI0, 45 + MT8192_DITHER0_MOUT_IN_DSI0 46 + }, { 47 + DDP_COMPONENT_OVL_2L0, DDP_COMPONENT_RDMA0, 48 + MT8192_DISP_RDMA0_SEL_IN, MT8192_RDMA0_SEL_IN_OVL0_2L, 49 + MT8192_RDMA0_SEL_IN_OVL0_2L 50 + }, { 
51 + DDP_COMPONENT_CCORR, DDP_COMPONENT_AAL0, 52 + MT8192_DISP_AAL0_SEL_IN, MT8192_AAL0_SEL_IN_CCORR0, 53 + MT8192_AAL0_SEL_IN_CCORR0 54 + }, { 55 + DDP_COMPONENT_DITHER, DDP_COMPONENT_DSI0, 56 + MT8192_DISP_DSI0_SEL_IN, MT8192_DSI0_SEL_IN_DITHER0 57 + }, { 58 + DDP_COMPONENT_RDMA0, DDP_COMPONENT_COLOR0, 59 + MT8192_DISP_RDMA0_SOUT_SEL, MT8192_RDMA0_SOUT_COLOR0, 60 + MT8192_RDMA0_SOUT_COLOR0 61 + }, { 62 + DDP_COMPONENT_CCORR, DDP_COMPONENT_AAL0, 63 + MT8192_DISP_CCORR0_SOUT_SEL, MT8192_CCORR0_SOUT_AAL0, 64 + MT8192_CCORR0_SOUT_AAL0 65 + }, { 66 + DDP_COMPONENT_OVL0, DDP_COMPONENT_OVL_2L0, 67 + MT8192_MMSYS_OVL_MOUT_EN, MT8192_DISP_OVL0_GO_BG, 68 + MT8192_DISP_OVL0_GO_BG 69 + }, { 70 + DDP_COMPONENT_OVL_2L0, DDP_COMPONENT_RDMA0, 71 + MT8192_MMSYS_OVL_MOUT_EN, MT8192_DISP_OVL0_2L_GO_BLEND, 72 + MT8192_DISP_OVL0_2L_GO_BLEND 73 + } 74 + }; 75 + 76 + #endif /* __SOC_MEDIATEK_MT8192_MMSYS_H */
+79
drivers/soc/mediatek/mtk-mmsys.c
··· 4 4 * Author: James Liao <jamesjj.liao@mediatek.com> 5 5 */ 6 6 7 + #include <linux/delay.h> 7 8 #include <linux/device.h> 8 9 #include <linux/io.h> 9 10 #include <linux/of_device.h> 10 11 #include <linux/platform_device.h> 12 + #include <linux/reset-controller.h> 11 13 #include <linux/soc/mediatek/mtk-mmsys.h> 12 14 13 15 #include "mtk-mmsys.h" 14 16 #include "mt8167-mmsys.h" 15 17 #include "mt8183-mmsys.h" 18 + #include "mt8192-mmsys.h" 16 19 #include "mt8365-mmsys.h" 17 20 18 21 static const struct mtk_mmsys_driver_data mt2701_mmsys_driver_data = { ··· 56 53 .num_routes = ARRAY_SIZE(mmsys_mt8183_routing_table), 57 54 }; 58 55 56 + static const struct mtk_mmsys_driver_data mt8192_mmsys_driver_data = { 57 + .clk_driver = "clk-mt8192-mm", 58 + .routes = mmsys_mt8192_routing_table, 59 + .num_routes = ARRAY_SIZE(mmsys_mt8192_routing_table), 60 + }; 61 + 59 62 static const struct mtk_mmsys_driver_data mt8365_mmsys_driver_data = { 60 63 .clk_driver = "clk-mt8365-mm", 61 64 .routes = mt8365_mmsys_routing_table, ··· 71 62 struct mtk_mmsys { 72 63 void __iomem *regs; 73 64 const struct mtk_mmsys_driver_data *data; 65 + spinlock_t lock; /* protects mmsys_sw_rst_b reg */ 66 + struct reset_controller_dev rcdev; 74 67 }; 75 68 76 69 void mtk_mmsys_ddp_connect(struct device *dev, ··· 112 101 } 113 102 EXPORT_SYMBOL_GPL(mtk_mmsys_ddp_disconnect); 114 103 104 + static int mtk_mmsys_reset_update(struct reset_controller_dev *rcdev, unsigned long id, 105 + bool assert) 106 + { 107 + struct mtk_mmsys *mmsys = container_of(rcdev, struct mtk_mmsys, rcdev); 108 + unsigned long flags; 109 + u32 reg; 110 + 111 + spin_lock_irqsave(&mmsys->lock, flags); 112 + 113 + reg = readl_relaxed(mmsys->regs + MMSYS_SW0_RST_B); 114 + 115 + if (assert) 116 + reg &= ~BIT(id); 117 + else 118 + reg |= BIT(id); 119 + 120 + writel_relaxed(reg, mmsys->regs + MMSYS_SW0_RST_B); 121 + 122 + spin_unlock_irqrestore(&mmsys->lock, flags); 123 + 124 + return 0; 125 + } 126 + 127 + static int 
mtk_mmsys_reset_assert(struct reset_controller_dev *rcdev, unsigned long id) 128 + { 129 + return mtk_mmsys_reset_update(rcdev, id, true); 130 + } 131 + 132 + static int mtk_mmsys_reset_deassert(struct reset_controller_dev *rcdev, unsigned long id) 133 + { 134 + return mtk_mmsys_reset_update(rcdev, id, false); 135 + } 136 + 137 + static int mtk_mmsys_reset(struct reset_controller_dev *rcdev, unsigned long id) 138 + { 139 + int ret; 140 + 141 + ret = mtk_mmsys_reset_assert(rcdev, id); 142 + if (ret) 143 + return ret; 144 + 145 + usleep_range(1000, 1100); 146 + 147 + return mtk_mmsys_reset_deassert(rcdev, id); 148 + } 149 + 150 + static const struct reset_control_ops mtk_mmsys_reset_ops = { 151 + .assert = mtk_mmsys_reset_assert, 152 + .deassert = mtk_mmsys_reset_deassert, 153 + .reset = mtk_mmsys_reset, 154 + }; 155 + 115 156 static int mtk_mmsys_probe(struct platform_device *pdev) 116 157 { 117 158 struct device *dev = &pdev->dev; ··· 180 117 if (IS_ERR(mmsys->regs)) { 181 118 ret = PTR_ERR(mmsys->regs); 182 119 dev_err(dev, "Failed to ioremap mmsys registers: %d\n", ret); 120 + return ret; 121 + } 122 + 123 + spin_lock_init(&mmsys->lock); 124 + 125 + mmsys->rcdev.owner = THIS_MODULE; 126 + mmsys->rcdev.nr_resets = 32; 127 + mmsys->rcdev.ops = &mtk_mmsys_reset_ops; 128 + mmsys->rcdev.of_node = pdev->dev.of_node; 129 + ret = devm_reset_controller_register(&pdev->dev, &mmsys->rcdev); 130 + if (ret) { 131 + dev_err(&pdev->dev, "Couldn't register mmsys reset controller: %d\n", ret); 183 132 return ret; 184 133 } 185 134 ··· 241 166 { 242 167 .compatible = "mediatek,mt8183-mmsys", 243 168 .data = &mt8183_mmsys_driver_data, 169 + }, 170 + { 171 + .compatible = "mediatek,mt8192-mmsys", 172 + .data = &mt8192_mmsys_driver_data, 244 173 }, 245 174 { 246 175 .compatible = "mediatek,mt8365-mmsys",
+2
drivers/soc/mediatek/mtk-mmsys.h
··· 78 78 #define DSI_SEL_IN_RDMA 0x1 79 79 #define DSI_SEL_IN_MASK 0x1 80 80 81 + #define MMSYS_SW0_RST_B 0x140 82 + 81 83 struct mtk_mmsys_routes { 82 84 u32 from_comp; 83 85 u32 to_comp;
+35
drivers/soc/mediatek/mtk-mutex.c
··· 39 39 #define MT8167_MUTEX_MOD_DISP_DITHER 15 40 40 #define MT8167_MUTEX_MOD_DISP_UFOE 16 41 41 42 + #define MT8192_MUTEX_MOD_DISP_OVL0 0 43 + #define MT8192_MUTEX_MOD_DISP_OVL0_2L 1 44 + #define MT8192_MUTEX_MOD_DISP_RDMA0 2 45 + #define MT8192_MUTEX_MOD_DISP_COLOR0 4 46 + #define MT8192_MUTEX_MOD_DISP_CCORR0 5 47 + #define MT8192_MUTEX_MOD_DISP_AAL0 6 48 + #define MT8192_MUTEX_MOD_DISP_GAMMA0 7 49 + #define MT8192_MUTEX_MOD_DISP_POSTMASK0 8 50 + #define MT8192_MUTEX_MOD_DISP_DITHER0 9 51 + #define MT8192_MUTEX_MOD_DISP_OVL2_2L 16 52 + #define MT8192_MUTEX_MOD_DISP_RDMA4 17 53 + 42 54 #define MT8183_MUTEX_MOD_DISP_RDMA0 0 43 55 #define MT8183_MUTEX_MOD_DISP_RDMA1 1 44 56 #define MT8183_MUTEX_MOD_DISP_OVL0 9 ··· 226 214 [DDP_COMPONENT_WDMA0] = MT8183_MUTEX_MOD_DISP_WDMA0, 227 215 }; 228 216 217 + static const unsigned int mt8192_mutex_mod[DDP_COMPONENT_ID_MAX] = { 218 + [DDP_COMPONENT_AAL0] = MT8192_MUTEX_MOD_DISP_AAL0, 219 + [DDP_COMPONENT_CCORR] = MT8192_MUTEX_MOD_DISP_CCORR0, 220 + [DDP_COMPONENT_COLOR0] = MT8192_MUTEX_MOD_DISP_COLOR0, 221 + [DDP_COMPONENT_DITHER] = MT8192_MUTEX_MOD_DISP_DITHER0, 222 + [DDP_COMPONENT_GAMMA] = MT8192_MUTEX_MOD_DISP_GAMMA0, 223 + [DDP_COMPONENT_POSTMASK0] = MT8192_MUTEX_MOD_DISP_POSTMASK0, 224 + [DDP_COMPONENT_OVL0] = MT8192_MUTEX_MOD_DISP_OVL0, 225 + [DDP_COMPONENT_OVL_2L0] = MT8192_MUTEX_MOD_DISP_OVL0_2L, 226 + [DDP_COMPONENT_OVL_2L2] = MT8192_MUTEX_MOD_DISP_OVL2_2L, 227 + [DDP_COMPONENT_RDMA0] = MT8192_MUTEX_MOD_DISP_RDMA0, 228 + [DDP_COMPONENT_RDMA4] = MT8192_MUTEX_MOD_DISP_RDMA4, 229 + }; 230 + 229 231 static const unsigned int mt2712_mutex_sof[MUTEX_SOF_DSI3 + 1] = { 230 232 [MUTEX_SOF_SINGLE_MODE] = MUTEX_SOF_SINGLE_MODE, 231 233 [MUTEX_SOF_DSI0] = MUTEX_SOF_DSI0, ··· 299 273 .mutex_mod_reg = MT8183_MUTEX0_MOD0, 300 274 .mutex_sof_reg = MT8183_MUTEX0_SOF0, 301 275 .no_clk = true, 276 + }; 277 + 278 + static const struct mtk_mutex_data mt8192_mutex_driver_data = { 279 + .mutex_mod = mt8192_mutex_mod, 280 + .mutex_sof = 
mt8183_mutex_sof, 281 + .mutex_mod_reg = MT8183_MUTEX0_MOD0, 282 + .mutex_sof_reg = MT8183_MUTEX0_SOF0, 302 283 }; 303 284 304 285 struct mtk_mutex *mtk_mutex_get(struct device *dev) ··· 540 507 .data = &mt8173_mutex_driver_data}, 541 508 { .compatible = "mediatek,mt8183-disp-mutex", 542 509 .data = &mt8183_mutex_driver_data}, 510 + { .compatible = "mediatek,mt8192-disp-mutex", 511 + .data = &mt8192_mutex_driver_data}, 543 512 {}, 544 513 }; 545 514 MODULE_DEVICE_TABLE(of, mutex_driver_dt_match);
+19
drivers/soc/qcom/Kconfig
··· 190 190 Say yes here to support the Qualcomm socinfo driver, providing 191 191 information about the SoC to user space. 192 192 193 + config QCOM_SPM 194 + tristate "Qualcomm Subsystem Power Manager (SPM)" 195 + depends on ARCH_QCOM || COMPILE_TEST 196 + select QCOM_SCM 197 + help 198 + Enable the support for the Qualcomm Subsystem Power Manager, used 199 + to manage cores, L2 low power modes and to configure the internal 200 + Adaptive Voltage Scaler parameters, where supported. 201 + 202 + config QCOM_STATS 203 + tristate "Qualcomm Technologies, Inc. (QTI) Sleep stats driver" 204 + depends on (ARCH_QCOM && DEBUG_FS) || COMPILE_TEST 205 + depends on QCOM_SMEM 206 + help 207 + Qualcomm Technologies, Inc. (QTI) Sleep stats driver to read 208 + the shared memory exported by the remote processor related to 209 + various SoC level low power modes statistics and export to debugfs 210 + interface. 211 + 193 212 config QCOM_WCNSS_CTRL 194 213 tristate "Qualcomm WCNSS control driver" 195 214 depends on ARCH_QCOM || COMPILE_TEST
+2
drivers/soc/qcom/Makefile
··· 20 20 obj-$(CONFIG_QCOM_SMP2P) += smp2p.o 21 21 obj-$(CONFIG_QCOM_SMSM) += smsm.o 22 22 obj-$(CONFIG_QCOM_SOCINFO) += socinfo.o 23 + obj-$(CONFIG_QCOM_SPM) += spm.o 24 + obj-$(CONFIG_QCOM_STATS) += qcom_stats.o 23 25 obj-$(CONFIG_QCOM_WCNSS_CTRL) += wcnss_ctrl.o 24 26 obj-$(CONFIG_QCOM_APR) += apr.o 25 27 obj-$(CONFIG_QCOM_LLCC) += llcc-qcom.o
+2
drivers/soc/qcom/apr.c
··· 492 492 1, &service_path); 493 493 if (ret < 0) { 494 494 dev_err(dev, "pdr service path missing: %d\n", ret); 495 + of_node_put(node); 495 496 return ret; 496 497 } 497 498 498 499 pds = pdr_add_lookup(apr->pdr, service_name, service_path); 499 500 if (IS_ERR(pds) && PTR_ERR(pds) != -EALREADY) { 500 501 dev_err(dev, "pdr add lookup failed: %ld\n", PTR_ERR(pds)); 502 + of_node_put(node); 501 503 return PTR_ERR(pds); 502 504 } 503 505 }
+1 -3
drivers/soc/qcom/cpr.c
··· 1614 1614 1615 1615 static int cpr_probe(struct platform_device *pdev) 1616 1616 { 1617 - struct resource *res; 1618 1617 struct device *dev = &pdev->dev; 1619 1618 struct cpr_drv *drv; 1620 1619 int irq, ret; ··· 1647 1648 if (IS_ERR(drv->tcsr)) 1648 1649 return PTR_ERR(drv->tcsr); 1649 1650 1650 - res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 1651 - drv->base = devm_ioremap_resource(dev, res); 1651 + drv->base = devm_platform_ioremap_resource(pdev, 0); 1652 1652 if (IS_ERR(drv->base)) 1653 1653 return PTR_ERR(drv->base); 1654 1654
+17 -1
drivers/soc/qcom/llcc-qcom.c
··· 115 115 { LLCC_CMPT, 10, 768, 1, 1, 0x3f, 0x0, 0, 0, 0, 1, 0, 0}, 116 116 { LLCC_GPUHTW, 11, 256, 1, 1, 0x3f, 0x0, 0, 0, 0, 1, 0, 0}, 117 117 { LLCC_GPU, 12, 512, 1, 0, 0x3f, 0x0, 0, 0, 0, 1, 0, 0}, 118 - { LLCC_MMUHWT, 13, 256, 1, 1, 0x3f, 0x0, 0, 0, 0, 1, 1, 0}, 118 + { LLCC_MMUHWT, 13, 256, 1, 1, 0x3f, 0x0, 0, 0, 0, 0, 1, 0}, 119 119 { LLCC_MDMPNG, 21, 768, 0, 1, 0x3f, 0x0, 0, 0, 0, 1, 0, 0}, 120 120 { LLCC_WLHW, 24, 256, 1, 1, 0x3f, 0x0, 0, 0, 0, 1, 0, 0}, 121 121 { LLCC_MODPE, 29, 64, 1, 1, 0x3f, 0x0, 0, 0, 0, 1, 0, 0}, ··· 140 140 { LLCC_MDMHPFX, 20, 1024, 2, 1, 0x0, 0xf00, 0, 0, 1, 1, 0 }, 141 141 { LLCC_MDMPNG, 21, 1024, 0, 1, 0x1e, 0x0, 0, 0, 1, 1, 0 }, 142 142 { LLCC_AUDHW, 22, 1024, 1, 1, 0xffc, 0x2, 0, 0, 1, 1, 0 }, 143 + }; 144 + 145 + static const struct llcc_slice_config sm6350_data[] = { 146 + { LLCC_CPUSS, 1, 768, 1, 0, 0xFFF, 0x0, 0, 0, 0, 0, 1, 1 }, 147 + { LLCC_MDM, 8, 512, 2, 0, 0xFFF, 0x0, 0, 0, 0, 0, 1, 0 }, 148 + { LLCC_GPUHTW, 11, 256, 1, 0, 0xFFF, 0x0, 0, 0, 0, 0, 1, 0 }, 149 + { LLCC_GPU, 12, 512, 1, 0, 0xFFF, 0x0, 0, 0, 0, 0, 1, 0 }, 150 + { LLCC_MDMPNG, 21, 768, 0, 1, 0xFFF, 0x0, 0, 0, 0, 0, 1, 0 }, 151 + { LLCC_NPU, 23, 768, 1, 0, 0xFFF, 0x0, 0, 0, 0, 0, 1, 0 }, 152 + { LLCC_MODPE, 29, 64, 1, 1, 0xFFF, 0x0, 0, 0, 0, 0, 1, 0 }, 143 153 }; 144 154 145 155 static const struct llcc_slice_config sm8150_data[] = { ··· 211 201 .sct_data = sdm845_data, 212 202 .size = ARRAY_SIZE(sdm845_data), 213 203 .need_llcc_cfg = false, 204 + }; 205 + 206 + static const struct qcom_llcc_config sm6350_cfg = { 207 + .sct_data = sm6350_data, 208 + .size = ARRAY_SIZE(sm6350_data), 214 209 }; 215 210 216 211 static const struct qcom_llcc_config sm8150_cfg = { ··· 641 626 { .compatible = "qcom,sc7180-llcc", .data = &sc7180_cfg }, 642 627 { .compatible = "qcom,sc7280-llcc", .data = &sc7280_cfg }, 643 628 { .compatible = "qcom,sdm845-llcc", .data = &sdm845_cfg }, 629 + { .compatible = "qcom,sm6350-llcc", .data = &sm6350_cfg }, 644 630 { .compatible = 
"qcom,sm8150-llcc", .data = &sm8150_cfg }, 645 631 { .compatible = "qcom,sm8250-llcc", .data = &sm8250_cfg }, 646 632 { }
+1 -3
drivers/soc/qcom/ocmem.c
··· 300 300 struct device *dev = &pdev->dev; 301 301 unsigned long reg, region_size; 302 302 int i, j, ret, num_banks; 303 - struct resource *res; 304 303 struct ocmem *ocmem; 305 304 306 305 if (!qcom_scm_is_available()) ··· 320 321 return ret; 321 322 } 322 323 323 - res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "ctrl"); 324 - ocmem->mmio = devm_ioremap_resource(&pdev->dev, res); 324 + ocmem->mmio = devm_platform_ioremap_resource_byname(pdev, "ctrl"); 325 325 if (IS_ERR(ocmem->mmio)) { 326 326 dev_err(&pdev->dev, "Failed to ioremap ocmem_ctrl resource\n"); 327 327 return PTR_ERR(ocmem->mmio);
+6 -6
drivers/soc/qcom/pdr_interface.c
··· 131 131 return ret; 132 132 133 133 req.enable = enable; 134 - strcpy(req.service_path, pds->service_path); 134 + strscpy(req.service_path, pds->service_path, sizeof(req.service_path)); 135 135 136 136 ret = qmi_send_request(&pdr->notifier_hdl, &pds->addr, 137 137 &txn, SERVREG_REGISTER_LISTENER_REQ, ··· 257 257 return ret; 258 258 259 259 req.transaction_id = tid; 260 - strcpy(req.service_path, pds->service_path); 260 + strscpy(req.service_path, pds->service_path, sizeof(req.service_path)); 261 261 262 262 ret = qmi_send_request(&pdr->notifier_hdl, &pds->addr, 263 263 &txn, SERVREG_SET_ACK_REQ, ··· 406 406 return -ENOMEM; 407 407 408 408 /* Prepare req message */ 409 - strcpy(req.service_name, pds->service_name); 409 + strscpy(req.service_name, pds->service_name, sizeof(req.service_name)); 410 410 req.domain_offset_valid = true; 411 411 req.domain_offset = 0; 412 412 ··· 531 531 return ERR_PTR(-ENOMEM); 532 532 533 533 pds->service = SERVREG_NOTIFIER_SERVICE; 534 - strcpy(pds->service_name, service_name); 535 - strcpy(pds->service_path, service_path); 534 + strscpy(pds->service_name, service_name, sizeof(pds->service_name)); 535 + strscpy(pds->service_path, service_path, sizeof(pds->service_path)); 536 536 pds->need_locator_lookup = true; 537 537 538 538 mutex_lock(&pdr->list_lock); ··· 587 587 break; 588 588 589 589 /* Prepare req message */ 590 - strcpy(req.service_path, pds->service_path); 590 + strscpy(req.service_path, pds->service_path, sizeof(req.service_path)); 591 591 addr = pds->addr; 592 592 break; 593 593 }
+1 -3
drivers/soc/qcom/qcom-geni-se.c
··· 871 871 static int geni_se_probe(struct platform_device *pdev) 872 872 { 873 873 struct device *dev = &pdev->dev; 874 - struct resource *res; 875 874 struct geni_wrapper *wrapper; 876 875 int ret; 877 876 ··· 879 880 return -ENOMEM; 880 881 881 882 wrapper->dev = dev; 882 - res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 883 - wrapper->base = devm_ioremap_resource(dev, res); 883 + wrapper->base = devm_platform_ioremap_resource(pdev, 0); 884 884 if (IS_ERR(wrapper->base)) 885 885 return PTR_ERR(wrapper->base); 886 886
+54 -111
drivers/soc/qcom/qcom_aoss.c
··· 2 2 /* 3 3 * Copyright (c) 2019, Linaro Ltd 4 4 */ 5 - #include <dt-bindings/power/qcom-aoss-qmp.h> 6 5 #include <linux/clk-provider.h> 7 6 #include <linux/interrupt.h> 8 7 #include <linux/io.h> 9 8 #include <linux/mailbox_client.h> 10 9 #include <linux/module.h> 10 + #include <linux/of_platform.h> 11 11 #include <linux/platform_device.h> 12 - #include <linux/pm_domain.h> 13 12 #include <linux/thermal.h> 14 13 #include <linux/slab.h> 14 + #include <linux/soc/qcom/qcom_aoss.h> 15 15 16 16 #define QMP_DESC_MAGIC 0x0 17 17 #define QMP_DESC_VERSION 0x4 ··· 64 64 * @event: wait_queue for synchronization with the IRQ 65 65 * @tx_lock: provides synchronization between multiple callers of qmp_send() 66 66 * @qdss_clk: QDSS clock hw struct 67 - * @pd_data: genpd data 68 67 * @cooling_devs: thermal cooling devices 69 68 */ 70 69 struct qmp { ··· 81 82 struct mutex tx_lock; 82 83 83 84 struct clk_hw qdss_clk; 84 - struct genpd_onecell_data pd_data; 85 85 struct qmp_cooling_device *cooling_devs; 86 86 }; 87 - 88 - struct qmp_pd { 89 - struct qmp *qmp; 90 - struct generic_pm_domain pd; 91 - }; 92 - 93 - #define to_qmp_pd_resource(res) container_of(res, struct qmp_pd, pd) 94 87 95 88 static void qmp_kick(struct qmp *qmp) 96 89 { ··· 214 223 * 215 224 * Return: 0 on success, negative errno on failure 216 225 */ 217 - static int qmp_send(struct qmp *qmp, const void *data, size_t len) 226 + int qmp_send(struct qmp *qmp, const void *data, size_t len) 218 227 { 219 228 long time_left; 220 229 int ret; 230 + 231 + if (WARN_ON(IS_ERR_OR_NULL(qmp) || !data)) 232 + return -EINVAL; 221 233 222 234 if (WARN_ON(len + sizeof(u32) > qmp->size)) 223 235 return -EINVAL; ··· 255 261 256 262 return ret; 257 263 } 264 + EXPORT_SYMBOL(qmp_send); 258 265 259 266 static int qmp_qdss_clk_prepare(struct clk_hw *hw) 260 267 { ··· 307 312 { 308 313 of_clk_del_provider(qmp->dev->of_node); 309 314 clk_hw_unregister(&qmp->qdss_clk); 310 - } 311 - 312 - static int qmp_pd_power_toggle(struct qmp_pd *res, 
bool enable) 313 - { 314 - char buf[QMP_MSG_LEN] = {}; 315 - 316 - snprintf(buf, sizeof(buf), 317 - "{class: image, res: load_state, name: %s, val: %s}", 318 - res->pd.name, enable ? "on" : "off"); 319 - return qmp_send(res->qmp, buf, sizeof(buf)); 320 - } 321 - 322 - static int qmp_pd_power_on(struct generic_pm_domain *domain) 323 - { 324 - return qmp_pd_power_toggle(to_qmp_pd_resource(domain), true); 325 - } 326 - 327 - static int qmp_pd_power_off(struct generic_pm_domain *domain) 328 - { 329 - return qmp_pd_power_toggle(to_qmp_pd_resource(domain), false); 330 - } 331 - 332 - static const char * const sdm845_resources[] = { 333 - [AOSS_QMP_LS_CDSP] = "cdsp", 334 - [AOSS_QMP_LS_LPASS] = "adsp", 335 - [AOSS_QMP_LS_MODEM] = "modem", 336 - [AOSS_QMP_LS_SLPI] = "slpi", 337 - [AOSS_QMP_LS_SPSS] = "spss", 338 - [AOSS_QMP_LS_VENUS] = "venus", 339 - }; 340 - 341 - static int qmp_pd_add(struct qmp *qmp) 342 - { 343 - struct genpd_onecell_data *data = &qmp->pd_data; 344 - struct device *dev = qmp->dev; 345 - struct qmp_pd *res; 346 - size_t num = ARRAY_SIZE(sdm845_resources); 347 - int ret; 348 - int i; 349 - 350 - res = devm_kcalloc(dev, num, sizeof(*res), GFP_KERNEL); 351 - if (!res) 352 - return -ENOMEM; 353 - 354 - data->domains = devm_kcalloc(dev, num, sizeof(*data->domains), 355 - GFP_KERNEL); 356 - if (!data->domains) 357 - return -ENOMEM; 358 - 359 - for (i = 0; i < num; i++) { 360 - res[i].qmp = qmp; 361 - res[i].pd.name = sdm845_resources[i]; 362 - res[i].pd.power_on = qmp_pd_power_on; 363 - res[i].pd.power_off = qmp_pd_power_off; 364 - 365 - ret = pm_genpd_init(&res[i].pd, NULL, true); 366 - if (ret < 0) { 367 - dev_err(dev, "failed to init genpd\n"); 368 - goto unroll_genpds; 369 - } 370 - 371 - data->domains[i] = &res[i].pd; 372 - } 373 - 374 - data->num_domains = i; 375 - 376 - ret = of_genpd_add_provider_onecell(dev->of_node, data); 377 - if (ret < 0) 378 - goto unroll_genpds; 379 - 380 - return 0; 381 - 382 - unroll_genpds: 383 - for (i--; i >= 0; i--) 384 - 
pm_genpd_remove(data->domains[i]); 385 - 386 - return ret; 387 - } 388 - 389 - static void qmp_pd_remove(struct qmp *qmp) 390 - { 391 - struct genpd_onecell_data *data = &qmp->pd_data; 392 - struct device *dev = qmp->dev; 393 - int i; 394 - 395 - of_genpd_del_provider(dev->of_node); 396 - 397 - for (i = 0; i < data->num_domains; i++) 398 - pm_genpd_remove(data->domains[i]); 399 315 } 400 316 401 317 static int qmp_cdev_get_max_state(struct thermal_cooling_device *cdev, ··· 425 519 thermal_cooling_device_unregister(qmp->cooling_devs[i].cdev); 426 520 } 427 521 522 + /** 523 + * qmp_get() - get a qmp handle from a device 524 + * @dev: client device pointer 525 + * 526 + * Return: handle to qmp device on success, ERR_PTR() on failure 527 + */ 528 + struct qmp *qmp_get(struct device *dev) 529 + { 530 + struct platform_device *pdev; 531 + struct device_node *np; 532 + struct qmp *qmp; 533 + 534 + if (!dev || !dev->of_node) 535 + return ERR_PTR(-EINVAL); 536 + 537 + np = of_parse_phandle(dev->of_node, "qcom,qmp", 0); 538 + if (!np) 539 + return ERR_PTR(-ENODEV); 540 + 541 + pdev = of_find_device_by_node(np); 542 + of_node_put(np); 543 + if (!pdev) 544 + return ERR_PTR(-EINVAL); 545 + 546 + qmp = platform_get_drvdata(pdev); 547 + 548 + return qmp ? 
qmp : ERR_PTR(-EPROBE_DEFER); 549 + } 550 + EXPORT_SYMBOL(qmp_get); 551 + 552 + /** 553 + * qmp_put() - release a qmp handle 554 + * @qmp: qmp handle obtained from qmp_get() 555 + */ 556 + void qmp_put(struct qmp *qmp) 557 + { 558 + /* 559 + * Match get_device() inside of_find_device_by_node() in 560 + * qmp_get() 561 + */ 562 + if (!IS_ERR_OR_NULL(qmp)) 563 + put_device(qmp->dev); 564 + } 565 + EXPORT_SYMBOL(qmp_put); 566 + 428 567 static int qmp_probe(struct platform_device *pdev) 429 568 { 430 - struct resource *res; 431 569 struct qmp *qmp; 432 570 int irq; 433 571 int ret; ··· 484 534 init_waitqueue_head(&qmp->event); 485 535 mutex_init(&qmp->tx_lock); 486 536 487 - res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 488 - qmp->msgram = devm_ioremap_resource(&pdev->dev, res); 537 + qmp->msgram = devm_platform_ioremap_resource(pdev, 0); 489 538 if (IS_ERR(qmp->msgram)) 490 539 return PTR_ERR(qmp->msgram); 491 540 ··· 512 563 if (ret) 513 564 goto err_close_qmp; 514 565 515 - ret = qmp_pd_add(qmp); 516 - if (ret) 517 - goto err_remove_qdss_clk; 518 - 519 566 ret = qmp_cooling_devices_register(qmp); 520 567 if (ret) 521 568 dev_err(&pdev->dev, "failed to register aoss cooling devices\n"); ··· 520 575 521 576 return 0; 522 577 523 - err_remove_qdss_clk: 524 - qmp_qdss_clk_remove(qmp); 525 578 err_close_qmp: 526 579 qmp_close(qmp); 527 580 err_free_mbox: ··· 533 590 struct qmp *qmp = platform_get_drvdata(pdev); 534 591 535 592 qmp_qdss_clk_remove(qmp); 536 - qmp_pd_remove(qmp); 537 593 qmp_cooling_devices_remove(qmp); 538 594 539 595 qmp_close(qmp); ··· 557 615 .driver = { 558 616 .name = "qcom_aoss_qmp", 559 617 .of_match_table = qmp_dt_match, 618 + .suppress_bind_attrs = true, 560 619 }, 561 620 .probe = qmp_probe, 562 621 .remove = qmp_remove,
+1 -3
drivers/soc/qcom/qcom_gsbi.c
··· 127 127 struct device_node *node = pdev->dev.of_node; 128 128 struct device_node *tcsr_node; 129 129 const struct of_device_id *match; 130 - struct resource *res; 131 130 void __iomem *base; 132 131 struct gsbi_info *gsbi; 133 132 int i, ret; ··· 138 139 if (!gsbi) 139 140 return -ENOMEM; 140 141 141 - res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 142 - base = devm_ioremap_resource(&pdev->dev, res); 142 + base = devm_platform_ioremap_resource(pdev, 0); 143 143 if (IS_ERR(base)) 144 144 return PTR_ERR(base); 145 145
+277
drivers/soc/qcom/qcom_stats.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + /* 3 + * Copyright (c) 2011-2021, The Linux Foundation. All rights reserved. 4 + */ 5 + 6 + #include <linux/debugfs.h> 7 + #include <linux/device.h> 8 + #include <linux/io.h> 9 + #include <linux/module.h> 10 + #include <linux/of.h> 11 + #include <linux/platform_device.h> 12 + #include <linux/seq_file.h> 13 + 14 + #include <linux/soc/qcom/smem.h> 15 + #include <clocksource/arm_arch_timer.h> 16 + 17 + #define RPM_DYNAMIC_ADDR 0x14 18 + #define RPM_DYNAMIC_ADDR_MASK 0xFFFF 19 + 20 + #define STAT_TYPE_OFFSET 0x0 21 + #define COUNT_OFFSET 0x4 22 + #define LAST_ENTERED_AT_OFFSET 0x8 23 + #define LAST_EXITED_AT_OFFSET 0x10 24 + #define ACCUMULATED_OFFSET 0x18 25 + #define CLIENT_VOTES_OFFSET 0x20 26 + 27 + struct subsystem_data { 28 + const char *name; 29 + u32 smem_item; 30 + u32 pid; 31 + }; 32 + 33 + static const struct subsystem_data subsystems[] = { 34 + { "modem", 605, 1 }, 35 + { "wpss", 605, 13 }, 36 + { "adsp", 606, 2 }, 37 + { "cdsp", 607, 5 }, 38 + { "slpi", 608, 3 }, 39 + { "gpu", 609, 0 }, 40 + { "display", 610, 0 }, 41 + { "adsp_island", 613, 2 }, 42 + { "slpi_island", 613, 3 }, 43 + }; 44 + 45 + struct stats_config { 46 + size_t stats_offset; 47 + size_t num_records; 48 + bool appended_stats_avail; 49 + bool dynamic_offset; 50 + bool subsystem_stats_in_smem; 51 + }; 52 + 53 + struct stats_data { 54 + bool appended_stats_avail; 55 + void __iomem *base; 56 + }; 57 + 58 + struct sleep_stats { 59 + u32 stat_type; 60 + u32 count; 61 + u64 last_entered_at; 62 + u64 last_exited_at; 63 + u64 accumulated; 64 + }; 65 + 66 + struct appended_stats { 67 + u32 client_votes; 68 + u32 reserved[3]; 69 + }; 70 + 71 + static void qcom_print_stats(struct seq_file *s, const struct sleep_stats *stat) 72 + { 73 + u64 accumulated = stat->accumulated; 74 + /* 75 + * If a subsystem is in sleep when reading the sleep stats adjust 76 + * the accumulated sleep duration to show actual sleep time. 
77 + */ 78 + if (stat->last_entered_at > stat->last_exited_at) 79 + accumulated += arch_timer_read_counter() - stat->last_entered_at; 80 + 81 + seq_printf(s, "Count: %u\n", stat->count); 82 + seq_printf(s, "Last Entered At: %llu\n", stat->last_entered_at); 83 + seq_printf(s, "Last Exited At: %llu\n", stat->last_exited_at); 84 + seq_printf(s, "Accumulated Duration: %llu\n", accumulated); 85 + } 86 + 87 + static int qcom_subsystem_sleep_stats_show(struct seq_file *s, void *unused) 88 + { 89 + struct subsystem_data *subsystem = s->private; 90 + struct sleep_stats *stat; 91 + 92 + /* Items are allocated lazily, so lookup pointer each time */ 93 + stat = qcom_smem_get(subsystem->pid, subsystem->smem_item, NULL); 94 + if (IS_ERR(stat)) 95 + return -EIO; 96 + 97 + qcom_print_stats(s, stat); 98 + 99 + return 0; 100 + } 101 + 102 + static int qcom_soc_sleep_stats_show(struct seq_file *s, void *unused) 103 + { 104 + struct stats_data *d = s->private; 105 + void __iomem *reg = d->base; 106 + struct sleep_stats stat; 107 + 108 + memcpy_fromio(&stat, reg, sizeof(stat)); 109 + qcom_print_stats(s, &stat); 110 + 111 + if (d->appended_stats_avail) { 112 + struct appended_stats votes; 113 + 114 + memcpy_fromio(&votes, reg + CLIENT_VOTES_OFFSET, sizeof(votes)); 115 + seq_printf(s, "Client Votes: %#x\n", votes.client_votes); 116 + } 117 + 118 + return 0; 119 + } 120 + 121 + DEFINE_SHOW_ATTRIBUTE(qcom_soc_sleep_stats); 122 + DEFINE_SHOW_ATTRIBUTE(qcom_subsystem_sleep_stats); 123 + 124 + static void qcom_create_soc_sleep_stat_files(struct dentry *root, void __iomem *reg, 125 + struct stats_data *d, 126 + const struct stats_config *config) 127 + { 128 + char stat_type[sizeof(u32) + 1] = {0}; 129 + size_t stats_offset = config->stats_offset; 130 + u32 offset = 0, type; 131 + int i, j; 132 + 133 + /* 134 + * On RPM targets, stats offset location is dynamic and changes from target 135 + * to target and sometimes from build to build for same target. 
136 + * 137 + * In such cases the dynamic address is present at 0x14 offset from base 138 + * address in devicetree. The last 16bits indicates the stats_offset. 139 + */ 140 + if (config->dynamic_offset) { 141 + stats_offset = readl(reg + RPM_DYNAMIC_ADDR); 142 + stats_offset &= RPM_DYNAMIC_ADDR_MASK; 143 + } 144 + 145 + for (i = 0; i < config->num_records; i++) { 146 + d[i].base = reg + offset + stats_offset; 147 + 148 + /* 149 + * Read the low power mode name and create debugfs file for it. 150 + * The names read could be of below, 151 + * (may change depending on low power mode supported). 152 + * For rpmh-sleep-stats: "aosd", "cxsd" and "ddr". 153 + * For rpm-sleep-stats: "vmin" and "vlow". 154 + */ 155 + type = readl(d[i].base); 156 + for (j = 0; j < sizeof(u32); j++) { 157 + stat_type[j] = type & 0xff; 158 + type = type >> 8; 159 + } 160 + strim(stat_type); 161 + debugfs_create_file(stat_type, 0400, root, &d[i], 162 + &qcom_soc_sleep_stats_fops); 163 + 164 + offset += sizeof(struct sleep_stats); 165 + if (d[i].appended_stats_avail) 166 + offset += sizeof(struct appended_stats); 167 + } 168 + } 169 + 170 + static void qcom_create_subsystem_stat_files(struct dentry *root, 171 + const struct stats_config *config) 172 + { 173 + const struct sleep_stats *stat; 174 + int i; 175 + 176 + if (!config->subsystem_stats_in_smem) 177 + return; 178 + 179 + for (i = 0; i < ARRAY_SIZE(subsystems); i++) { 180 + stat = qcom_smem_get(subsystems[i].pid, subsystems[i].smem_item, NULL); 181 + if (IS_ERR(stat)) 182 + continue; 183 + 184 + debugfs_create_file(subsystems[i].name, 0400, root, (void *)&subsystems[i], 185 + &qcom_subsystem_sleep_stats_fops); 186 + } 187 + } 188 + 189 + static int qcom_stats_probe(struct platform_device *pdev) 190 + { 191 + void __iomem *reg; 192 + struct dentry *root; 193 + const struct stats_config *config; 194 + struct stats_data *d; 195 + int i; 196 + 197 + config = device_get_match_data(&pdev->dev); 198 + if (!config) 199 + return -ENODEV; 200 + 201 
+ reg = devm_platform_get_and_ioremap_resource(pdev, 0, NULL); 202 + if (IS_ERR(reg)) 203 + return -ENOMEM; 204 + 205 + d = devm_kcalloc(&pdev->dev, config->num_records, 206 + sizeof(*d), GFP_KERNEL); 207 + if (!d) 208 + return -ENOMEM; 209 + 210 + for (i = 0; i < config->num_records; i++) 211 + d[i].appended_stats_avail = config->appended_stats_avail; 212 + 213 + root = debugfs_create_dir("qcom_stats", NULL); 214 + 215 + qcom_create_subsystem_stat_files(root, config); 216 + qcom_create_soc_sleep_stat_files(root, reg, d, config); 217 + 218 + platform_set_drvdata(pdev, root); 219 + 220 + return 0; 221 + } 222 + 223 + static int qcom_stats_remove(struct platform_device *pdev) 224 + { 225 + struct dentry *root = platform_get_drvdata(pdev); 226 + 227 + debugfs_remove_recursive(root); 228 + 229 + return 0; 230 + } 231 + 232 + static const struct stats_config rpm_data = { 233 + .stats_offset = 0, 234 + .num_records = 2, 235 + .appended_stats_avail = true, 236 + .dynamic_offset = true, 237 + .subsystem_stats_in_smem = false, 238 + }; 239 + 240 + static const struct stats_config rpmh_data = { 241 + .stats_offset = 0x48, 242 + .num_records = 3, 243 + .appended_stats_avail = false, 244 + .dynamic_offset = false, 245 + .subsystem_stats_in_smem = true, 246 + }; 247 + 248 + static const struct of_device_id qcom_stats_table[] = { 249 + { .compatible = "qcom,rpm-stats", .data = &rpm_data }, 250 + { .compatible = "qcom,rpmh-stats", .data = &rpmh_data }, 251 + { } 252 + }; 253 + MODULE_DEVICE_TABLE(of, qcom_stats_table); 254 + 255 + static struct platform_driver qcom_stats = { 256 + .probe = qcom_stats_probe, 257 + .remove = qcom_stats_remove, 258 + .driver = { 259 + .name = "qcom_stats", 260 + .of_match_table = qcom_stats_table, 261 + }, 262 + }; 263 + 264 + static int __init qcom_stats_init(void) 265 + { 266 + return platform_driver_register(&qcom_stats); 267 + } 268 + late_initcall(qcom_stats_init); 269 + 270 + static void __exit qcom_stats_exit(void) 271 + { 272 + 
platform_driver_unregister(&qcom_stats); 273 + } 274 + module_exit(qcom_stats_exit) 275 + 276 + MODULE_DESCRIPTION("Qualcomm Technologies, Inc. (QTI) Stats driver"); 277 + MODULE_LICENSE("GPL v2");
+1 -3
drivers/soc/qcom/rpmh-rsc.c
··· 910 910 { 911 911 struct device_node *dn = pdev->dev.of_node; 912 912 struct rsc_drv *drv; 913 - struct resource *res; 914 913 char drv_id[10] = {0}; 915 914 int ret, irq; 916 915 u32 solver_config; ··· 940 941 drv->name = dev_name(&pdev->dev); 941 942 942 943 snprintf(drv_id, ARRAY_SIZE(drv_id), "drv-%d", drv->id); 943 - res = platform_get_resource_byname(pdev, IORESOURCE_MEM, drv_id); 944 - base = devm_ioremap_resource(&pdev->dev, res); 944 + base = devm_platform_ioremap_resource_byname(pdev, drv_id); 945 945 if (IS_ERR(base)) 946 946 return PTR_ERR(base); 947 947
+31 -5
drivers/soc/qcom/rpmhpd.c
··· 30 30 * @active_only: True if it represents an Active only peer 31 31 * @corner: current corner 32 32 * @active_corner: current active corner 33 + * @enable_corner: lowest non-zero corner 33 34 * @level: An array of level (vlvl) to corner (hlvl) mappings 34 35 * derived from cmd-db 35 36 * @level_count: Number of levels supported by the power domain. max ··· 48 47 const bool active_only; 49 48 unsigned int corner; 50 49 unsigned int active_corner; 50 + unsigned int enable_corner; 51 51 u32 level[RPMH_ARC_MAX_LEVELS]; 52 52 size_t level_count; 53 53 bool enabled; ··· 149 147 .num_pds = ARRAY_SIZE(sdx55_rpmhpds), 150 148 }; 151 149 150 + /* SM6350 RPMH powerdomains */ 151 + static struct rpmhpd *sm6350_rpmhpds[] = { 152 + [SM6350_CX] = &sdm845_cx, 153 + [SM6350_GFX] = &sdm845_gfx, 154 + [SM6350_LCX] = &sdm845_lcx, 155 + [SM6350_LMX] = &sdm845_lmx, 156 + [SM6350_MSS] = &sdm845_mss, 157 + [SM6350_MX] = &sdm845_mx, 158 + }; 159 + 160 + static const struct rpmhpd_desc sm6350_desc = { 161 + .rpmhpds = sm6350_rpmhpds, 162 + .num_pds = ARRAY_SIZE(sm6350_rpmhpds), 163 + }; 164 + 152 165 /* SM8150 RPMH powerdomains */ 153 166 154 167 static struct rpmhpd sm8150_mmcx_ao; ··· 221 204 static struct rpmhpd sm8350_mxc_ao; 222 205 static struct rpmhpd sm8350_mxc = { 223 206 .pd = { .name = "mxc", }, 224 - .peer = &sm8150_mmcx_ao, 207 + .peer = &sm8350_mxc_ao, 225 208 .res_name = "mxc.lvl", 226 209 }; 227 210 ··· 314 297 { .compatible = "qcom,sc8180x-rpmhpd", .data = &sc8180x_desc }, 315 298 { .compatible = "qcom,sdm845-rpmhpd", .data = &sdm845_desc }, 316 299 { .compatible = "qcom,sdx55-rpmhpd", .data = &sdx55_desc}, 300 + { .compatible = "qcom,sm6350-rpmhpd", .data = &sm6350_desc }, 317 301 { .compatible = "qcom,sm8150-rpmhpd", .data = &sm8150_desc }, 318 302 { .compatible = "qcom,sm8250-rpmhpd", .data = &sm8250_desc }, 319 303 { .compatible = "qcom,sm8350-rpmhpd", .data = &sm8350_desc }, ··· 403 385 static int rpmhpd_power_on(struct generic_pm_domain *domain) 404 386 { 405 
387 struct rpmhpd *pd = domain_to_rpmhpd(domain); 406 - int ret = 0; 388 + unsigned int corner; 389 + int ret; 407 390 408 391 mutex_lock(&rpmhpd_lock); 409 392 410 - if (pd->corner) 411 - ret = rpmhpd_aggregate_corner(pd, pd->corner); 412 - 393 + corner = max(pd->corner, pd->enable_corner); 394 + ret = rpmhpd_aggregate_corner(pd, corner); 413 395 if (!ret) 414 396 pd->enabled = true; 415 397 ··· 454 436 i--; 455 437 456 438 if (pd->enabled) { 439 + /* Ensure that the domain isn't turned off */ 440 + if (i < pd->enable_corner) 441 + i = pd->enable_corner; 442 + 457 443 ret = rpmhpd_aggregate_corner(pd, i); 458 444 if (ret) 459 445 goto out; ··· 493 471 494 472 for (i = 0; i < rpmhpd->level_count; i++) { 495 473 rpmhpd->level[i] = buf[i]; 474 + 475 + /* Remember the first corner with a non-zero level */ 476 + if (!rpmhpd->level[rpmhpd->enable_corner] && rpmhpd->level[i]) 477 + rpmhpd->enable_corner = i; 496 478 497 479 /* 498 480 * The AUX data may be zero padded. These 0 valued entries at
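The `enable_corner` bookkeeping in the rpmhpd.c hunk above is compact enough to mis-read; here is a hedged stand-alone sketch (plain C, the names `find_enable_corner` and `clamped_corner` are mine, not the driver's API) of how the lowest non-zero corner is remembered and then used as a floor when powering the domain on:

```c
#include <assert.h>
#include <stddef.h>

/* Model of the level-scan in rpmhpd_update_level_mapping(): remember the
 * first corner whose level (vlvl) is non-zero, since corner 0 commonly
 * maps to level 0, which would leave the domain off. */
static size_t find_enable_corner(const unsigned int *level, size_t count)
{
	size_t enable_corner = 0;
	size_t i;

	for (i = 0; i < count; i++)
		if (!level[enable_corner] && level[i])
			enable_corner = i;
	return enable_corner;
}

/* Model of the new rpmhpd_power_on() path: never request a corner below
 * the lowest one that actually enables the domain. */
static size_t clamped_corner(size_t requested, size_t enable_corner)
{
	return requested > enable_corner ? requested : enable_corner;
}
```

With a level table like `{0, 48, 64, 128}`, corner 1 becomes the enable corner, and a power-on request for corner 0 is raised to 1 while higher requests pass through unchanged.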
+24
drivers/soc/qcom/rpmpd.c
··· 185 185 .max_state = MAX_CORNER_RPMPD_STATE, 186 186 }; 187 187 188 + /* msm8953 RPM Power Domains */ 189 + DEFINE_RPMPD_PAIR(msm8953, vddmd, vddmd_ao, SMPA, LEVEL, 1); 190 + DEFINE_RPMPD_PAIR(msm8953, vddcx, vddcx_ao, SMPA, LEVEL, 2); 191 + DEFINE_RPMPD_PAIR(msm8953, vddmx, vddmx_ao, SMPA, LEVEL, 7); 192 + 193 + DEFINE_RPMPD_VFL(msm8953, vddcx_vfl, SMPA, 2); 194 + 195 + static struct rpmpd *msm8953_rpmpds[] = { 196 + [MSM8953_VDDMD] = &msm8953_vddmd, 197 + [MSM8953_VDDMD_AO] = &msm8953_vddmd_ao, 198 + [MSM8953_VDDCX] = &msm8953_vddcx, 199 + [MSM8953_VDDCX_AO] = &msm8953_vddcx_ao, 200 + [MSM8953_VDDCX_VFL] = &msm8953_vddcx_vfl, 201 + [MSM8953_VDDMX] = &msm8953_vddmx, 202 + [MSM8953_VDDMX_AO] = &msm8953_vddmx_ao, 203 + }; 204 + 205 + static const struct rpmpd_desc msm8953_desc = { 206 + .rpmpds = msm8953_rpmpds, 207 + .num_pds = ARRAY_SIZE(msm8953_rpmpds), 208 + .max_state = RPM_SMD_LEVEL_TURBO, 209 + }; 210 + 188 211 /* msm8976 RPM Power Domains */ 189 212 DEFINE_RPMPD_PAIR(msm8976, vddcx, vddcx_ao, SMPA, LEVEL, 2); 190 213 DEFINE_RPMPD_PAIR(msm8976, vddmx, vddmx_ao, SMPA, LEVEL, 6); ··· 400 377 { .compatible = "qcom,mdm9607-rpmpd", .data = &mdm9607_desc }, 401 378 { .compatible = "qcom,msm8916-rpmpd", .data = &msm8916_desc }, 402 379 { .compatible = "qcom,msm8939-rpmpd", .data = &msm8939_desc }, 380 + { .compatible = "qcom,msm8953-rpmpd", .data = &msm8953_desc }, 403 381 { .compatible = "qcom,msm8976-rpmpd", .data = &msm8976_desc }, 404 382 { .compatible = "qcom,msm8994-rpmpd", .data = &msm8994_desc }, 405 383 { .compatible = "qcom,msm8996-rpmpd", .data = &msm8996_desc },
+2
drivers/soc/qcom/smd-rpm.c
··· 236 236 { .compatible = "qcom,rpm-msm8226" }, 237 237 { .compatible = "qcom,rpm-msm8916" }, 238 238 { .compatible = "qcom,rpm-msm8936" }, 239 + { .compatible = "qcom,rpm-msm8953" }, 239 240 { .compatible = "qcom,rpm-msm8974" }, 240 241 { .compatible = "qcom,rpm-msm8976" }, 241 242 { .compatible = "qcom,rpm-msm8994" }, ··· 245 244 { .compatible = "qcom,rpm-sdm660" }, 246 245 { .compatible = "qcom,rpm-sm6115" }, 247 246 { .compatible = "qcom,rpm-sm6125" }, 247 + { .compatible = "qcom,rpm-qcm2290" }, 248 248 { .compatible = "qcom,rpm-qcs404" }, 249 249 {} 250 250 };
+39 -18
drivers/soc/qcom/smem.c
··· 9 9 #include <linux/module.h> 10 10 #include <linux/of.h> 11 11 #include <linux/of_address.h> 12 + #include <linux/of_reserved_mem.h> 12 13 #include <linux/platform_device.h> 13 14 #include <linux/sizes.h> 14 15 #include <linux/slab.h> ··· 241 240 * @size: size of the memory region 242 241 */ 243 242 struct smem_region { 244 - u32 aux_base; 243 + phys_addr_t aux_base; 245 244 void __iomem *virt_base; 246 245 size_t size; 247 246 }; ··· 500 499 for (i = 0; i < smem->num_regions; i++) { 501 500 region = &smem->regions[i]; 502 501 503 - if (region->aux_base == aux_base || !aux_base) { 502 + if ((u32)region->aux_base == aux_base || !aux_base) { 504 503 if (size != NULL) 505 504 *size = le32_to_cpu(entry->size); 506 505 return region->virt_base + le32_to_cpu(entry->offset); ··· 665 664 if (p < region->virt_base + region->size) { 666 665 u64 offset = p - region->virt_base; 667 666 668 - return (phys_addr_t)region->aux_base + offset; 667 + return region->aux_base + offset; 669 668 } 670 669 } 671 670 ··· 864 863 return 0; 865 864 } 866 865 867 - static int qcom_smem_map_memory(struct qcom_smem *smem, struct device *dev, 868 - const char *name, int i) 866 + static int qcom_smem_resolve_mem(struct qcom_smem *smem, const char *name, 867 + struct smem_region *region) 869 868 { 869 + struct device *dev = smem->dev; 870 870 struct device_node *np; 871 871 struct resource r; 872 - resource_size_t size; 873 872 int ret; 874 873 875 874 np = of_parse_phandle(dev->of_node, name, 0); ··· 882 881 of_node_put(np); 883 882 if (ret) 884 883 return ret; 885 - size = resource_size(&r); 886 884 887 - smem->regions[i].virt_base = devm_ioremap_wc(dev, r.start, size); 888 - if (!smem->regions[i].virt_base) 889 - return -ENOMEM; 890 - smem->regions[i].aux_base = (u32)r.start; 891 - smem->regions[i].size = size; 885 + region->aux_base = r.start; 886 + region->size = resource_size(&r); 892 887 893 888 return 0; 894 889 } ··· 892 895 static int qcom_smem_probe(struct platform_device *pdev) 
893 896 { 894 897 struct smem_header *header; 898 + struct reserved_mem *rmem; 895 899 struct qcom_smem *smem; 896 900 size_t array_size; 897 901 int num_regions; 898 902 int hwlock_id; 899 903 u32 version; 900 904 int ret; 905 + int i; 901 906 902 907 num_regions = 1; 903 908 if (of_find_property(pdev->dev.of_node, "qcom,rpm-msg-ram", NULL)) ··· 913 914 smem->dev = &pdev->dev; 914 915 smem->num_regions = num_regions; 915 916 916 - ret = qcom_smem_map_memory(smem, &pdev->dev, "memory-region", 0); 917 - if (ret) 918 - return ret; 917 + rmem = of_reserved_mem_lookup(pdev->dev.of_node); 918 + if (rmem) { 919 + smem->regions[0].aux_base = rmem->base; 920 + smem->regions[0].size = rmem->size; 921 + } else { 922 + /* 923 + * Fall back to the memory-region reference, if we're not a 924 + * reserved-memory node. 925 + */ 926 + ret = qcom_smem_resolve_mem(smem, "memory-region", &smem->regions[0]); 927 + if (ret) 928 + return ret; 929 + } 919 930 920 - if (num_regions > 1 && (ret = qcom_smem_map_memory(smem, &pdev->dev, 921 - "qcom,rpm-msg-ram", 1))) 922 - return ret; 931 + if (num_regions > 1) { 932 + ret = qcom_smem_resolve_mem(smem, "qcom,rpm-msg-ram", &smem->regions[1]); 933 + if (ret) 934 + return ret; 935 + } 936 + 937 + for (i = 0; i < num_regions; i++) { 938 + smem->regions[i].virt_base = devm_ioremap_wc(&pdev->dev, 939 + smem->regions[i].aux_base, 940 + smem->regions[i].size); 941 + if (!smem->regions[i].virt_base) { 942 + dev_err(&pdev->dev, "failed to remap %pa\n", &smem->regions[i].aux_base); 943 + return -ENOMEM; 944 + } 945 + } 923 946 924 947 header = smem->regions[0].virt_base; 925 948 if (le32_to_cpu(header->initialized) != 1 ||
+126 -28
drivers/soc/qcom/smp2p.c
··· 14 14 #include <linux/mfd/syscon.h> 15 15 #include <linux/module.h> 16 16 #include <linux/platform_device.h> 17 + #include <linux/pm_wakeirq.h> 17 18 #include <linux/regmap.h> 18 19 #include <linux/soc/qcom/smem.h> 19 20 #include <linux/soc/qcom/smem_state.h> ··· 41 40 #define SMP2P_MAX_ENTRY_NAME 16 42 41 43 42 #define SMP2P_FEATURE_SSR_ACK 0x1 43 + #define SMP2P_FLAGS_RESTART_DONE_BIT 0 44 + #define SMP2P_FLAGS_RESTART_ACK_BIT 1 44 45 45 46 #define SMP2P_MAGIC 0x504d5324 47 + #define SMP2P_ALL_FEATURES SMP2P_FEATURE_SSR_ACK 46 48 47 49 /** 48 50 * struct smp2p_smem_item - in memory communication structure ··· 139 135 140 136 unsigned valid_entries; 141 137 138 + bool ssr_ack_enabled; 139 + bool ssr_ack; 140 + bool negotiation_done; 141 + 142 142 unsigned local_pid; 143 143 unsigned remote_pid; 144 144 ··· 170 162 } 171 163 } 172 164 173 - /** 174 - * qcom_smp2p_intr() - interrupt handler for incoming notifications 175 - * @irq: unused 176 - * @data: smp2p driver context 177 - * 178 - * Handle notifications from the remote side to handle newly allocated entries 179 - * or any changes to the state bits of existing entries. 
180 - */ 181 - static irqreturn_t qcom_smp2p_intr(int irq, void *data) 165 + static bool qcom_smp2p_check_ssr(struct qcom_smp2p *smp2p) 166 + { 167 + struct smp2p_smem_item *in = smp2p->in; 168 + bool restart; 169 + 170 + if (!smp2p->ssr_ack_enabled) 171 + return false; 172 + 173 + restart = in->flags & BIT(SMP2P_FLAGS_RESTART_DONE_BIT); 174 + 175 + return restart != smp2p->ssr_ack; 176 + } 177 + 178 + static void qcom_smp2p_do_ssr_ack(struct qcom_smp2p *smp2p) 179 + { 180 + struct smp2p_smem_item *out = smp2p->out; 181 + u32 val; 182 + 183 + smp2p->ssr_ack = !smp2p->ssr_ack; 184 + 185 + val = out->flags & ~BIT(SMP2P_FLAGS_RESTART_ACK_BIT); 186 + if (smp2p->ssr_ack) 187 + val |= BIT(SMP2P_FLAGS_RESTART_ACK_BIT); 188 + out->flags = val; 189 + 190 + qcom_smp2p_kick(smp2p); 191 + } 192 + 193 + static void qcom_smp2p_negotiate(struct qcom_smp2p *smp2p) 194 + { 195 + struct smp2p_smem_item *out = smp2p->out; 196 + struct smp2p_smem_item *in = smp2p->in; 197 + 198 + if (in->version == out->version) { 199 + out->features &= in->features; 200 + 201 + if (out->features & SMP2P_FEATURE_SSR_ACK) 202 + smp2p->ssr_ack_enabled = true; 203 + 204 + smp2p->negotiation_done = true; 205 + } 206 + } 207 + 208 + static void qcom_smp2p_notify_in(struct qcom_smp2p *smp2p) 182 209 { 183 210 struct smp2p_smem_item *in; 184 211 struct smp2p_entry *entry; 185 - struct qcom_smp2p *smp2p = data; 186 - unsigned smem_id = smp2p->smem_items[SMP2P_INBOUND]; 187 - unsigned pid = smp2p->remote_pid; 188 - size_t size; 189 212 int irq_pin; 190 213 u32 status; 191 214 char buf[SMP2P_MAX_ENTRY_NAME]; ··· 224 185 int i; 225 186 226 187 in = smp2p->in; 227 - 228 - /* Acquire smem item, if not already found */ 229 - if (!in) { 230 - in = qcom_smem_get(pid, smem_id, &size); 231 - if (IS_ERR(in)) { 232 - dev_err(smp2p->dev, 233 - "Unable to acquire remote smp2p item\n"); 234 - return IRQ_HANDLED; 235 - } 236 - 237 - smp2p->in = in; 238 - } 239 188 240 189 /* Match newly created entries */ 241 190 for (i = 
smp2p->valid_entries; i < in->valid_entries; i++) { ··· 263 236 } 264 237 } 265 238 } 239 + } 266 240 241 + /** 242 + * qcom_smp2p_intr() - interrupt handler for incoming notifications 243 + * @irq: unused 244 + * @data: smp2p driver context 245 + * 246 + * Handle notifications from the remote side to handle newly allocated entries 247 + * or any changes to the state bits of existing entries. 248 + */ 249 + static irqreturn_t qcom_smp2p_intr(int irq, void *data) 250 + { 251 + struct smp2p_smem_item *in; 252 + struct qcom_smp2p *smp2p = data; 253 + unsigned int smem_id = smp2p->smem_items[SMP2P_INBOUND]; 254 + unsigned int pid = smp2p->remote_pid; 255 + bool ack_restart; 256 + size_t size; 257 + 258 + in = smp2p->in; 259 + 260 + /* Acquire smem item, if not already found */ 261 + if (!in) { 262 + in = qcom_smem_get(pid, smem_id, &size); 263 + if (IS_ERR(in)) { 264 + dev_err(smp2p->dev, 265 + "Unable to acquire remote smp2p item\n"); 266 + goto out; 267 + } 268 + 269 + smp2p->in = in; 270 + } 271 + 272 + if (!smp2p->negotiation_done) 273 + qcom_smp2p_negotiate(smp2p); 274 + 275 + if (smp2p->negotiation_done) { 276 + ack_restart = qcom_smp2p_check_ssr(smp2p); 277 + qcom_smp2p_notify_in(smp2p); 278 + 279 + if (ack_restart) 280 + qcom_smp2p_do_ssr_ack(smp2p); 281 + } 282 + 283 + out: 267 284 return IRQ_HANDLED; 268 285 } 269 286 ··· 463 392 out->remote_pid = smp2p->remote_pid; 464 393 out->total_entries = SMP2P_MAX_ENTRY; 465 394 out->valid_entries = 0; 395 + out->features = SMP2P_ALL_FEATURES; 466 396 467 397 /* 468 398 * Make sure the rest of the header is written before we validate the ··· 573 501 entry = devm_kzalloc(&pdev->dev, sizeof(*entry), GFP_KERNEL); 574 502 if (!entry) { 575 503 ret = -ENOMEM; 504 + of_node_put(node); 576 505 goto unwind_interfaces; 577 506 } 578 507 ··· 581 508 spin_lock_init(&entry->lock); 582 509 583 510 ret = of_property_read_string(node, "qcom,entry-name", &entry->name); 584 - if (ret < 0) 511 + if (ret < 0) { 512 + of_node_put(node); 
585 513 goto unwind_interfaces; 514 + } 586 515 587 516 if (of_property_read_bool(node, "interrupt-controller")) { 588 517 ret = qcom_smp2p_inbound_entry(smp2p, entry, node); 589 - if (ret < 0) 518 + if (ret < 0) { 519 + of_node_put(node); 590 520 goto unwind_interfaces; 521 + } 591 522 592 523 list_add(&entry->node, &smp2p->inbound); 593 524 } else { 594 525 ret = qcom_smp2p_outbound_entry(smp2p, entry, node); 595 - if (ret < 0) 526 + if (ret < 0) { 527 + of_node_put(node); 596 528 goto unwind_interfaces; 529 + } 597 530 598 531 list_add(&entry->node, &smp2p->outbound); 599 532 } ··· 617 538 goto unwind_interfaces; 618 539 } 619 540 541 + /* 542 + * Treat the smp2p interrupt as a wakeup source, but keep it disabled 543 + * by default. User space can decide to enable it depending on its 544 + * use cases. For example, if remoteproc crashes and the device wants 545 + * to handle it immediately (e.g. to not miss phone calls), it can 546 + * enable the wakeup source from user space, while other devices which 547 + * do not have a proper autosleep feature may want to handle it with 548 + * other wakeup events (e.g. the power button) instead of waking up immediately. 549 + */ 550 + device_set_wakeup_capable(&pdev->dev, true); 551 + 552 + ret = dev_pm_set_wake_irq(&pdev->dev, irq); 553 + if (ret) 554 + goto set_wake_irq_fail; 620 555 621 556 return 0; 557 + 558 + set_wake_irq_fail: 559 + dev_pm_clear_wake_irq(&pdev->dev); 622 560 623 561 unwind_interfaces: 624 562 list_for_each_entry(entry, &smp2p->inbound, node) ··· 660 564 { 661 565 struct qcom_smp2p *smp2p = platform_get_drvdata(pdev); 662 566 struct smp2p_entry *entry; 567 + 568 + dev_pm_clear_wake_irq(&pdev->dev); 663 569 664 570 list_for_each_entry(entry, &smp2p->inbound, node) 665 571 irq_domain_remove(entry->domain);
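The feature negotiation and restart-ack handshake added to smp2p.c above are easy to lose in the diff; this is a hedged stand-alone model (plain C, struct and function names are mine, not the driver's) of the SSR_ACK protocol: both sides intersect their advertised features, and an ack is owed whenever the remote's RESTART_DONE bit stops matching our recorded ack state:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define SMP2P_FEATURE_SSR_ACK        0x1
#define SMP2P_FLAGS_RESTART_DONE_BIT 0
#define SMP2P_FLAGS_RESTART_ACK_BIT  1

struct smp2p_model {
	uint32_t in_features, out_features;
	uint32_t in_flags, out_flags;
	bool ssr_ack_enabled, ssr_ack;
};

/* Both sides advertise features; only the intersection is used. */
static void negotiate(struct smp2p_model *m)
{
	m->out_features &= m->in_features;
	if (m->out_features & SMP2P_FEATURE_SSR_ACK)
		m->ssr_ack_enabled = true;
}

/* An ack is due whenever the remote's RESTART_DONE bit no longer
 * matches our recorded ack state. */
static bool ack_due(const struct smp2p_model *m)
{
	if (!m->ssr_ack_enabled)
		return false;
	return !!(m->in_flags & (1u << SMP2P_FLAGS_RESTART_DONE_BIT)) != m->ssr_ack;
}

/* Toggle our state and mirror it into the outbound RESTART_ACK bit. */
static void do_ack(struct smp2p_model *m)
{
	m->ssr_ack = !m->ssr_ack;
	m->out_flags &= ~(1u << SMP2P_FLAGS_RESTART_ACK_BIT);
	if (m->ssr_ack)
		m->out_flags |= 1u << SMP2P_FLAGS_RESTART_ACK_BIT;
}
```

Tracking the ack as a toggled bit rather than a counter means a lost interrupt is harmless: the next interrupt re-evaluates the mismatch and acks then.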
+16 -2
drivers/soc/qcom/socinfo.c
··· 87 87 [15] = "PM8901", 88 88 [16] = "PM8950/PM8027", 89 89 [17] = "PMI8950/ISL9519", 90 - [18] = "PM8921", 91 - [19] = "PM8018", 90 + [18] = "PMK8001/PM8921", 91 + [19] = "PMI8996/PM8018", 92 92 [20] = "PM8998/PM8015", 93 93 [21] = "PMI8998/PM8014", 94 94 [22] = "PM8821", ··· 102 102 [32] = "PM8150B", 103 103 [33] = "PMK8002", 104 104 [36] = "PM8009", 105 + [38] = "PM8150C", 106 + [41] = "SMB2351", 105 107 }; 106 108 #endif /* CONFIG_DEBUG_FS */ 107 109 ··· 283 281 { 319, "APQ8098" }, 284 282 { 321, "SDM845" }, 285 283 { 322, "MDM9206" }, 284 + { 323, "IPQ8074" }, 286 285 { 324, "SDA660" }, 287 286 { 325, "SDM658" }, 288 287 { 326, "SDA658" }, 289 288 { 327, "SDA630" }, 290 289 { 338, "SDM450" }, 291 290 { 341, "SDA845" }, 291 + { 342, "IPQ8072" }, 292 + { 343, "IPQ8076" }, 293 + { 344, "IPQ8078" }, 292 294 { 345, "SDM636" }, 293 295 { 346, "SDA636" }, 294 296 { 349, "SDM632" }, 295 297 { 350, "SDA632" }, 296 298 { 351, "SDA450" }, 297 299 { 356, "SM8250" }, 300 + { 375, "IPQ8070" }, 301 + { 376, "IPQ8071" }, 302 + { 389, "IPQ8072A" }, 303 + { 390, "IPQ8074A" }, 304 + { 391, "IPQ8076A" }, 305 + { 392, "IPQ8078A" }, 298 306 { 394, "SM6125" }, 307 + { 395, "IPQ8070A" }, 308 + { 396, "IPQ8071A" }, 299 309 { 402, "IPQ6018" }, 300 310 { 403, "IPQ6028" }, 301 311 { 421, "IPQ6000" },
+279
drivers/soc/qcom/spm.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + /* 3 + * Copyright (c) 2011-2014, The Linux Foundation. All rights reserved. 4 + * Copyright (c) 2014,2015, Linaro Ltd. 5 + * 6 + * SAW power controller driver 7 + */ 8 + 9 + #include <linux/kernel.h> 10 + #include <linux/init.h> 11 + #include <linux/io.h> 12 + #include <linux/module.h> 13 + #include <linux/slab.h> 14 + #include <linux/of.h> 15 + #include <linux/of_address.h> 16 + #include <linux/of_device.h> 17 + #include <linux/err.h> 18 + #include <linux/platform_device.h> 19 + #include <soc/qcom/spm.h> 20 + 21 + #define SPM_CTL_INDEX 0x7f 22 + #define SPM_CTL_INDEX_SHIFT 4 23 + #define SPM_CTL_EN BIT(0) 24 + 25 + enum spm_reg { 26 + SPM_REG_CFG, 27 + SPM_REG_SPM_CTL, 28 + SPM_REG_DLY, 29 + SPM_REG_PMIC_DLY, 30 + SPM_REG_PMIC_DATA_0, 31 + SPM_REG_PMIC_DATA_1, 32 + SPM_REG_VCTL, 33 + SPM_REG_SEQ_ENTRY, 34 + SPM_REG_SPM_STS, 35 + SPM_REG_PMIC_STS, 36 + SPM_REG_AVS_CTL, 37 + SPM_REG_AVS_LIMIT, 38 + SPM_REG_NR, 39 + }; 40 + 41 + static const u16 spm_reg_offset_v4_1[SPM_REG_NR] = { 42 + [SPM_REG_AVS_CTL] = 0x904, 43 + [SPM_REG_AVS_LIMIT] = 0x908, 44 + }; 45 + 46 + static const struct spm_reg_data spm_reg_660_gold_l2 = { 47 + .reg_offset = spm_reg_offset_v4_1, 48 + .avs_ctl = 0x1010031, 49 + .avs_limit = 0x4580458, 50 + }; 51 + 52 + static const struct spm_reg_data spm_reg_660_silver_l2 = { 53 + .reg_offset = spm_reg_offset_v4_1, 54 + .avs_ctl = 0x101c031, 55 + .avs_limit = 0x4580458, 56 + }; 57 + 58 + static const struct spm_reg_data spm_reg_8998_gold_l2 = { 59 + .reg_offset = spm_reg_offset_v4_1, 60 + .avs_ctl = 0x1010031, 61 + .avs_limit = 0x4700470, 62 + }; 63 + 64 + static const struct spm_reg_data spm_reg_8998_silver_l2 = { 65 + .reg_offset = spm_reg_offset_v4_1, 66 + .avs_ctl = 0x1010031, 67 + .avs_limit = 0x4200420, 68 + }; 69 + 70 + static const u16 spm_reg_offset_v3_0[SPM_REG_NR] = { 71 + [SPM_REG_CFG] = 0x08, 72 + [SPM_REG_SPM_CTL] = 0x30, 73 + [SPM_REG_DLY] = 0x34, 74 + [SPM_REG_SEQ_ENTRY] = 0x400, 75 + 
}; 76 + 77 + /* SPM register data for 8916 */ 78 + static const struct spm_reg_data spm_reg_8916_cpu = { 79 + .reg_offset = spm_reg_offset_v3_0, 80 + .spm_cfg = 0x1, 81 + .spm_dly = 0x3C102800, 82 + .seq = { 0x60, 0x03, 0x60, 0x0B, 0x0F, 0x20, 0x10, 0x80, 0x30, 0x90, 83 + 0x5B, 0x60, 0x03, 0x60, 0x3B, 0x76, 0x76, 0x0B, 0x94, 0x5B, 84 + 0x80, 0x10, 0x26, 0x30, 0x0F }, 85 + .start_index[PM_SLEEP_MODE_STBY] = 0, 86 + .start_index[PM_SLEEP_MODE_SPC] = 5, 87 + }; 88 + 89 + static const u16 spm_reg_offset_v2_1[SPM_REG_NR] = { 90 + [SPM_REG_CFG] = 0x08, 91 + [SPM_REG_SPM_CTL] = 0x30, 92 + [SPM_REG_DLY] = 0x34, 93 + [SPM_REG_SEQ_ENTRY] = 0x80, 94 + }; 95 + 96 + /* SPM register data for 8974, 8084 */ 97 + static const struct spm_reg_data spm_reg_8974_8084_cpu = { 98 + .reg_offset = spm_reg_offset_v2_1, 99 + .spm_cfg = 0x1, 100 + .spm_dly = 0x3C102800, 101 + .seq = { 0x03, 0x0B, 0x0F, 0x00, 0x20, 0x80, 0x10, 0xE8, 0x5B, 0x03, 102 + 0x3B, 0xE8, 0x5B, 0x82, 0x10, 0x0B, 0x30, 0x06, 0x26, 0x30, 103 + 0x0F }, 104 + .start_index[PM_SLEEP_MODE_STBY] = 0, 105 + .start_index[PM_SLEEP_MODE_SPC] = 3, 106 + }; 107 + 108 + /* SPM register data for 8226 */ 109 + static const struct spm_reg_data spm_reg_8226_cpu = { 110 + .reg_offset = spm_reg_offset_v2_1, 111 + .spm_cfg = 0x0, 112 + .spm_dly = 0x3C102800, 113 + .seq = { 0x60, 0x03, 0x60, 0x0B, 0x0F, 0x20, 0x10, 0x80, 0x30, 0x90, 114 + 0x5B, 0x60, 0x03, 0x60, 0x3B, 0x76, 0x76, 0x0B, 0x94, 0x5B, 115 + 0x80, 0x10, 0x26, 0x30, 0x0F }, 116 + .start_index[PM_SLEEP_MODE_STBY] = 0, 117 + .start_index[PM_SLEEP_MODE_SPC] = 5, 118 + }; 119 + 120 + static const u16 spm_reg_offset_v1_1[SPM_REG_NR] = { 121 + [SPM_REG_CFG] = 0x08, 122 + [SPM_REG_SPM_CTL] = 0x20, 123 + [SPM_REG_PMIC_DLY] = 0x24, 124 + [SPM_REG_PMIC_DATA_0] = 0x28, 125 + [SPM_REG_PMIC_DATA_1] = 0x2C, 126 + [SPM_REG_SEQ_ENTRY] = 0x80, 127 + }; 128 + 129 + /* SPM register data for 8064 */ 130 + static const struct spm_reg_data spm_reg_8064_cpu = { 131 + .reg_offset = spm_reg_offset_v1_1, 
132 + .spm_cfg = 0x1F, 133 + .pmic_dly = 0x02020004, 134 + .pmic_data[0] = 0x0084009C, 135 + .pmic_data[1] = 0x00A4001C, 136 + .seq = { 0x03, 0x0F, 0x00, 0x24, 0x54, 0x10, 0x09, 0x03, 0x01, 137 + 0x10, 0x54, 0x30, 0x0C, 0x24, 0x30, 0x0F }, 138 + .start_index[PM_SLEEP_MODE_STBY] = 0, 139 + .start_index[PM_SLEEP_MODE_SPC] = 2, 140 + }; 141 + 142 + static inline void spm_register_write(struct spm_driver_data *drv, 143 + enum spm_reg reg, u32 val) 144 + { 145 + if (drv->reg_data->reg_offset[reg]) 146 + writel_relaxed(val, drv->reg_base + 147 + drv->reg_data->reg_offset[reg]); 148 + } 149 + 150 + /* Ensure a guaranteed write, before return */ 151 + static inline void spm_register_write_sync(struct spm_driver_data *drv, 152 + enum spm_reg reg, u32 val) 153 + { 154 + u32 ret; 155 + 156 + if (!drv->reg_data->reg_offset[reg]) 157 + return; 158 + 159 + do { 160 + writel_relaxed(val, drv->reg_base + 161 + drv->reg_data->reg_offset[reg]); 162 + ret = readl_relaxed(drv->reg_base + 163 + drv->reg_data->reg_offset[reg]); 164 + if (ret == val) 165 + break; 166 + cpu_relax(); 167 + } while (1); 168 + } 169 + 170 + static inline u32 spm_register_read(struct spm_driver_data *drv, 171 + enum spm_reg reg) 172 + { 173 + return readl_relaxed(drv->reg_base + drv->reg_data->reg_offset[reg]); 174 + } 175 + 176 + void spm_set_low_power_mode(struct spm_driver_data *drv, 177 + enum pm_sleep_mode mode) 178 + { 179 + u32 start_index; 180 + u32 ctl_val; 181 + 182 + start_index = drv->reg_data->start_index[mode]; 183 + 184 + ctl_val = spm_register_read(drv, SPM_REG_SPM_CTL); 185 + ctl_val &= ~(SPM_CTL_INDEX << SPM_CTL_INDEX_SHIFT); 186 + ctl_val |= start_index << SPM_CTL_INDEX_SHIFT; 187 + ctl_val |= SPM_CTL_EN; 188 + spm_register_write_sync(drv, SPM_REG_SPM_CTL, ctl_val); 189 + } 190 + 191 + static const struct of_device_id spm_match_table[] = { 192 + { .compatible = "qcom,sdm660-gold-saw2-v4.1-l2", 193 + .data = &spm_reg_660_gold_l2 }, 194 + { .compatible = "qcom,sdm660-silver-saw2-v4.1-l2", 195 
+ .data = &spm_reg_660_silver_l2 }, 196 + { .compatible = "qcom,msm8226-saw2-v2.1-cpu", 197 + .data = &spm_reg_8226_cpu }, 198 + { .compatible = "qcom,msm8916-saw2-v3.0-cpu", 199 + .data = &spm_reg_8916_cpu }, 200 + { .compatible = "qcom,msm8974-saw2-v2.1-cpu", 201 + .data = &spm_reg_8974_8084_cpu }, 202 + { .compatible = "qcom,msm8998-gold-saw2-v4.1-l2", 203 + .data = &spm_reg_8998_gold_l2 }, 204 + { .compatible = "qcom,msm8998-silver-saw2-v4.1-l2", 205 + .data = &spm_reg_8998_silver_l2 }, 206 + { .compatible = "qcom,apq8084-saw2-v2.1-cpu", 207 + .data = &spm_reg_8974_8084_cpu }, 208 + { .compatible = "qcom,apq8064-saw2-v1.1-cpu", 209 + .data = &spm_reg_8064_cpu }, 210 + { }, 211 + }; 212 + MODULE_DEVICE_TABLE(of, spm_match_table); 213 + 214 + static int spm_dev_probe(struct platform_device *pdev) 215 + { 216 + const struct of_device_id *match_id; 217 + struct spm_driver_data *drv; 218 + struct resource *res; 219 + void __iomem *addr; 220 + 221 + drv = devm_kzalloc(&pdev->dev, sizeof(*drv), GFP_KERNEL); 222 + if (!drv) 223 + return -ENOMEM; 224 + 225 + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 226 + drv->reg_base = devm_ioremap_resource(&pdev->dev, res); 227 + if (IS_ERR(drv->reg_base)) 228 + return PTR_ERR(drv->reg_base); 229 + 230 + match_id = of_match_node(spm_match_table, pdev->dev.of_node); 231 + if (!match_id) 232 + return -ENODEV; 233 + 234 + drv->reg_data = match_id->data; 235 + platform_set_drvdata(pdev, drv); 236 + 237 + /* Write the SPM sequences first.. */ 238 + addr = drv->reg_base + drv->reg_data->reg_offset[SPM_REG_SEQ_ENTRY]; 239 + __iowrite32_copy(addr, drv->reg_data->seq, 240 + ARRAY_SIZE(drv->reg_data->seq) / 4); 241 + 242 + /* 243 + * ..and then the control registers. 244 + * On some SoCs, if the control registers are written first and if the 245 + * CPU was held in reset, the reset signal could trigger the SPM state 246 + * machine before the sequences are completely written.
247 + */ 248 + spm_register_write(drv, SPM_REG_AVS_CTL, drv->reg_data->avs_ctl); 249 + spm_register_write(drv, SPM_REG_AVS_LIMIT, drv->reg_data->avs_limit); 250 + spm_register_write(drv, SPM_REG_CFG, drv->reg_data->spm_cfg); 251 + spm_register_write(drv, SPM_REG_DLY, drv->reg_data->spm_dly); 252 + spm_register_write(drv, SPM_REG_PMIC_DLY, drv->reg_data->pmic_dly); 253 + spm_register_write(drv, SPM_REG_PMIC_DATA_0, 254 + drv->reg_data->pmic_data[0]); 255 + spm_register_write(drv, SPM_REG_PMIC_DATA_1, 256 + drv->reg_data->pmic_data[1]); 257 + 258 + /* Set up Standby as the default low power mode */ 259 + if (drv->reg_data->reg_offset[SPM_REG_SPM_CTL]) 260 + spm_set_low_power_mode(drv, PM_SLEEP_MODE_STBY); 261 + 262 + return 0; 263 + } 264 + 265 + static struct platform_driver spm_driver = { 266 + .probe = spm_dev_probe, 267 + .driver = { 268 + .name = "qcom_spm", 269 + .of_match_table = spm_match_table, 270 + }, 271 + }; 272 + 273 + static int __init qcom_spm_init(void) 274 + { 275 + return platform_driver_register(&spm_driver); 276 + } 277 + arch_initcall(qcom_spm_init); 278 + 279 + MODULE_LICENSE("GPL v2");
+5 -2
drivers/soc/renesas/Kconfig
··· 186 186 select SYSC_R8A77995 187 187 help 188 188 This enables support for the Renesas R-Car D3 SoC. 189 + This includes different gradings like R-Car D3e. 189 190 190 191 config ARCH_R8A77990 191 192 bool "ARM64 Platform support for R-Car E3" ··· 194 193 select SYSC_R8A77990 195 194 help 196 195 This enables support for the Renesas R-Car E3 SoC. 196 + This includes different gradings like R-Car E3e. 197 197 198 198 config ARCH_R8A77950 199 199 bool "ARM64 Platform support for R-Car H3 ES1.x" ··· 210 208 help 211 209 This enables support for the Renesas R-Car H3 SoC (revisions 2.0 and 212 210 later). 213 - This includes different gradings like R-Car H3e-2G. 211 + This includes different gradings like R-Car H3e, H3e-2G, and H3Ne. 214 212 215 213 config ARCH_R8A77965 216 214 bool "ARM64 Platform support for R-Car M3-N" ··· 218 216 select SYSC_R8A77965 219 217 help 220 218 This enables support for the Renesas R-Car M3-N SoC. 219 + This includes different gradings like R-Car M3Ne and M3Ne-2G. 221 220 222 221 config ARCH_R8A77960 223 222 bool "ARM64 Platform support for R-Car M3-W" ··· 233 230 select SYSC_R8A77961 234 231 help 235 232 This enables support for the Renesas R-Car M3-W+ SoC. 236 - This includes different gradings like R-Car M3e-2G. 233 + This includes different gradings like R-Car M3e and M3e-2G. 237 234 238 235 config ARCH_R8A77980 239 236 bool "ARM64 Platform support for R-Car V3H"
+7
drivers/soc/renesas/renesas-soc.c
··· 285 285 { .compatible = "renesas,r8a7795", .data = &soc_rcar_h3 }, 286 286 #endif 287 287 #ifdef CONFIG_ARCH_R8A77951 288 + { .compatible = "renesas,r8a779m0", .data = &soc_rcar_h3 }, 288 289 { .compatible = "renesas,r8a779m1", .data = &soc_rcar_h3 }, 290 + { .compatible = "renesas,r8a779m8", .data = &soc_rcar_h3 }, 289 291 #endif 290 292 #ifdef CONFIG_ARCH_R8A77960 291 293 { .compatible = "renesas,r8a7796", .data = &soc_rcar_m3_w }, 292 294 #endif 293 295 #ifdef CONFIG_ARCH_R8A77961 294 296 { .compatible = "renesas,r8a77961", .data = &soc_rcar_m3_w }, 297 + { .compatible = "renesas,r8a779m2", .data = &soc_rcar_m3_w }, 295 298 { .compatible = "renesas,r8a779m3", .data = &soc_rcar_m3_w }, 296 299 #endif 297 300 #ifdef CONFIG_ARCH_R8A77965 298 301 { .compatible = "renesas,r8a77965", .data = &soc_rcar_m3_n }, 302 + { .compatible = "renesas,r8a779m4", .data = &soc_rcar_m3_n }, 303 + { .compatible = "renesas,r8a779m5", .data = &soc_rcar_m3_n }, 299 304 #endif 300 305 #ifdef CONFIG_ARCH_R8A77970 301 306 { .compatible = "renesas,r8a77970", .data = &soc_rcar_v3m }, ··· 310 305 #endif 311 306 #ifdef CONFIG_ARCH_R8A77990 312 307 { .compatible = "renesas,r8a77990", .data = &soc_rcar_e3 }, 308 + { .compatible = "renesas,r8a779m6", .data = &soc_rcar_e3 }, 313 309 #endif 314 310 #ifdef CONFIG_ARCH_R8A77995 315 311 { .compatible = "renesas,r8a77995", .data = &soc_rcar_d3 }, 312 + { .compatible = "renesas,r8a779m7", .data = &soc_rcar_d3 }, 316 313 #endif 317 314 #ifdef CONFIG_ARCH_R8A779A0 318 315 { .compatible = "renesas,r8a779a0", .data = &soc_rcar_v3u },
+4 -1
drivers/soc/samsung/Kconfig
··· 13 13 depends on EXYNOS_CHIPID 14 14 15 15 config EXYNOS_CHIPID 16 - bool "Exynos ChipID controller and ASV driver" if COMPILE_TEST 16 + tristate "Exynos ChipID controller and ASV driver" 17 17 depends on ARCH_EXYNOS || COMPILE_TEST 18 + default ARCH_EXYNOS 18 19 select EXYNOS_ASV_ARM if ARM && ARCH_EXYNOS 19 20 select MFD_SYSCON 20 21 select SOC_BUS 21 22 help 22 23 Support for Samsung Exynos SoC ChipID and Adaptive Supply Voltage. 24 + This driver can also be built as a module (exynos_chipid). 23 25 24 26 config EXYNOS_PMU 25 27 bool "Exynos PMU controller driver" if COMPILE_TEST 26 28 depends on ARCH_EXYNOS || ((ARM || ARM64) && COMPILE_TEST) 27 29 select EXYNOS_PMU_ARM_DRIVERS if ARM && ARCH_EXYNOS 30 + select MFD_CORE 28 31 29 32 # There is no need to enable these drivers for ARMv8 30 33 config EXYNOS_PMU_ARM_DRIVERS
+2 -1
drivers/soc/samsung/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0 2 2 3 3 obj-$(CONFIG_EXYNOS_ASV_ARM) += exynos5422-asv.o 4 + obj-$(CONFIG_EXYNOS_CHIPID) += exynos_chipid.o 5 + exynos_chipid-y += exynos-chipid.o exynos-asv.o 4 6 5 - obj-$(CONFIG_EXYNOS_CHIPID) += exynos-chipid.o exynos-asv.o 6 7 obj-$(CONFIG_EXYNOS_PMU) += exynos-pmu.o 7 8 8 9 obj-$(CONFIG_EXYNOS_PMU_ARM_DRIVERS) += exynos3250-pmu.o exynos4-pmu.o \
+81 -15
drivers/soc/samsung/exynos-chipid.c
··· 15 15 #include <linux/device.h> 16 16 #include <linux/errno.h> 17 17 #include <linux/mfd/syscon.h> 18 + #include <linux/module.h> 18 19 #include <linux/of.h> 20 + #include <linux/of_device.h> 19 21 #include <linux/platform_device.h> 20 22 #include <linux/regmap.h> 21 23 #include <linux/slab.h> ··· 25 23 #include <linux/sys_soc.h> 26 24 27 25 #include "exynos-asv.h" 26 + 27 + struct exynos_chipid_variant { 28 + unsigned int rev_reg; /* revision register offset */ 29 + unsigned int main_rev_shift; /* main revision offset in rev_reg */ 30 + unsigned int sub_rev_shift; /* sub revision offset in rev_reg */ 31 + }; 32 + 33 + struct exynos_chipid_info { 34 + u32 product_id; 35 + u32 revision; 36 + }; 28 37 29 38 static const struct exynos_soc_id { 30 39 const char *name; ··· 55 42 { "EXYNOS5440", 0xE5440000 }, 56 43 { "EXYNOS5800", 0xE5422000 }, 57 44 { "EXYNOS7420", 0xE7420000 }, 45 + { "EXYNOS850", 0xE3830000 }, 46 + { "EXYNOSAUTOV9", 0xAAA80000 }, 58 47 }; 59 48 60 49 static const char *product_id_to_soc_id(unsigned int product_id) ··· 64 49 int i; 65 50 66 51 for (i = 0; i < ARRAY_SIZE(soc_ids); i++) 67 - if ((product_id & EXYNOS_MASK) == soc_ids[i].id) 52 + if (product_id == soc_ids[i].id) 68 53 return soc_ids[i].name; 69 54 return NULL; 70 55 } 71 56 57 + static int exynos_chipid_get_chipid_info(struct regmap *regmap, 58 + const struct exynos_chipid_variant *data, 59 + struct exynos_chipid_info *soc_info) 60 + { 61 + int ret; 62 + unsigned int val, main_rev, sub_rev; 63 + 64 + ret = regmap_read(regmap, EXYNOS_CHIPID_REG_PRO_ID, &val); 65 + if (ret < 0) 66 + return ret; 67 + soc_info->product_id = val & EXYNOS_MASK; 68 + 69 + if (data->rev_reg != EXYNOS_CHIPID_REG_PRO_ID) { 70 + ret = regmap_read(regmap, data->rev_reg, &val); 71 + if (ret < 0) 72 + return ret; 73 + } 74 + main_rev = (val >> data->main_rev_shift) & EXYNOS_REV_PART_MASK; 75 + sub_rev = (val >> data->sub_rev_shift) & EXYNOS_REV_PART_MASK; 76 + soc_info->revision = (main_rev << EXYNOS_REV_PART_SHIFT) 
| sub_rev; 77 + 78 + return 0; 79 + } 80 + 72 81 static int exynos_chipid_probe(struct platform_device *pdev) 73 82 { 83 + const struct exynos_chipid_variant *drv_data; 84 + struct exynos_chipid_info soc_info; 74 85 struct soc_device_attribute *soc_dev_attr; 75 86 struct soc_device *soc_dev; 76 87 struct device_node *root; 77 88 struct regmap *regmap; 78 - u32 product_id; 79 - u32 revision; 80 89 int ret; 90 + 91 + drv_data = of_device_get_match_data(&pdev->dev); 92 + if (!drv_data) 93 + return -EINVAL; 81 94 82 95 regmap = device_node_to_regmap(pdev->dev.of_node); 83 96 if (IS_ERR(regmap)) 84 97 return PTR_ERR(regmap); 85 98 86 - ret = regmap_read(regmap, EXYNOS_CHIPID_REG_PRO_ID, &product_id); 99 + ret = exynos_chipid_get_chipid_info(regmap, drv_data, &soc_info); 87 100 if (ret < 0) 88 101 return ret; 89 - 90 - revision = product_id & EXYNOS_REV_MASK; 91 102 92 103 soc_dev_attr = devm_kzalloc(&pdev->dev, sizeof(*soc_dev_attr), 93 104 GFP_KERNEL); ··· 127 86 of_node_put(root); 128 87 129 88 soc_dev_attr->revision = devm_kasprintf(&pdev->dev, GFP_KERNEL, 130 - "%x", revision); 131 - soc_dev_attr->soc_id = product_id_to_soc_id(product_id); 89 + "%x", soc_info.revision); 90 + soc_dev_attr->soc_id = product_id_to_soc_id(soc_info.product_id); 132 91 if (!soc_dev_attr->soc_id) { 133 92 pr_err("Unknown SoC\n"); 134 93 return -ENODEV; ··· 145 104 146 105 platform_set_drvdata(pdev, soc_dev); 147 106 148 - dev_info(soc_device_to_device(soc_dev), 149 - "Exynos: CPU[%s] PRO_ID[0x%x] REV[0x%x] Detected\n", 150 - soc_dev_attr->soc_id, product_id, revision); 107 + dev_info(&pdev->dev, "Exynos: CPU[%s] PRO_ID[0x%x] REV[0x%x] Detected\n", 108 + soc_dev_attr->soc_id, soc_info.product_id, soc_info.revision); 151 109 152 110 return 0; 153 111 ··· 165 125 return 0; 166 126 } 167 127 168 - static const struct of_device_id exynos_chipid_of_device_ids[] = { 169 - { .compatible = "samsung,exynos4210-chipid" }, 170 - {} 128 + static const struct exynos_chipid_variant 
exynos4210_chipid_drv_data = { 129 + .rev_reg = 0x0, 130 + .main_rev_shift = 4, 131 + .sub_rev_shift = 0, 171 132 }; 133 + 134 + static const struct exynos_chipid_variant exynos850_chipid_drv_data = { 135 + .rev_reg = 0x10, 136 + .main_rev_shift = 20, 137 + .sub_rev_shift = 16, 138 + }; 139 + 140 + static const struct of_device_id exynos_chipid_of_device_ids[] = { 141 + { 142 + .compatible = "samsung,exynos4210-chipid", 143 + .data = &exynos4210_chipid_drv_data, 144 + }, { 145 + .compatible = "samsung,exynos850-chipid", 146 + .data = &exynos850_chipid_drv_data, 147 + }, 148 + { } 149 + }; 150 + MODULE_DEVICE_TABLE(of, exynos_chipid_of_device_ids); 172 151 173 152 static struct platform_driver exynos_chipid_driver = { 174 153 .driver = { ··· 197 138 .probe = exynos_chipid_probe, 198 139 .remove = exynos_chipid_remove, 199 140 }; 200 - builtin_platform_driver(exynos_chipid_driver); 141 + module_platform_driver(exynos_chipid_driver); 142 + 143 + MODULE_DESCRIPTION("Samsung Exynos ChipID controller and ASV driver"); 144 + MODULE_AUTHOR("Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>"); 145 + MODULE_AUTHOR("Krzysztof Kozlowski <krzysztof.kozlowski@canonical.com>"); 146 + MODULE_AUTHOR("Pankaj Dubey <pankaj.dubey@samsung.com>"); 147 + MODULE_AUTHOR("Sylwester Nawrocki <s.nawrocki@samsung.com>"); 148 + MODULE_LICENSE("GPL");
+1
drivers/soc/samsung/exynos5422-asv.c
··· 503 503 504 504 return 0; 505 505 } 506 + EXPORT_SYMBOL_GPL(exynos5422_asv_init);
-1
drivers/soc/samsung/pm_domains.c
··· 28 28 */ 29 29 struct exynos_pm_domain { 30 30 void __iomem *base; 31 - bool is_off; 32 31 struct generic_pm_domain pd; 33 32 u32 local_pwr_cfg; 34 33 };
+1 -3
drivers/soc/sunxi/sunxi_sram.c
··· 331 331 332 332 static int sunxi_sram_probe(struct platform_device *pdev) 333 333 { 334 - struct resource *res; 335 334 struct dentry *d; 336 335 struct regmap *emac_clock; 337 336 const struct sunxi_sramc_variant *variant; ··· 341 342 if (!variant) 342 343 return -EINVAL; 343 344 344 - res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 345 - base = devm_ioremap_resource(&pdev->dev, res); 345 + base = devm_platform_ioremap_resource(pdev, 0); 346 346 if (IS_ERR(base)) 347 347 return PTR_ERR(base); 348 348
+1
drivers/soc/tegra/Makefile
··· 7 7 obj-$(CONFIG_SOC_TEGRA_POWERGATE_BPMP) += powergate-bpmp.o 8 8 obj-$(CONFIG_SOC_TEGRA20_VOLTAGE_COUPLER) += regulators-tegra20.o 9 9 obj-$(CONFIG_SOC_TEGRA30_VOLTAGE_COUPLER) += regulators-tegra30.o 10 + obj-$(CONFIG_ARCH_TEGRA_186_SOC) += ari-tegra186.o
+80
drivers/soc/tegra/ari-tegra186.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + /* 3 + * Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. 4 + */ 5 + 6 + #include <linux/arm-smccc.h> 7 + #include <linux/kernel.h> 8 + #include <linux/of.h> 9 + #include <linux/panic_notifier.h> 10 + 11 + #define SMC_SIP_INVOKE_MCE 0xc2ffff00 12 + #define MCE_SMC_READ_MCA 12 13 + 14 + #define MCA_ARI_CMD_RD_SERR 1 15 + 16 + #define MCA_ARI_RW_SUBIDX_STAT 1 17 + #define SERR_STATUS_VAL BIT_ULL(63) 18 + 19 + #define MCA_ARI_RW_SUBIDX_ADDR 2 20 + #define MCA_ARI_RW_SUBIDX_MSC1 3 21 + #define MCA_ARI_RW_SUBIDX_MSC2 4 22 + 23 + static const char * const bank_names[] = { 24 + "SYS:DPMU", "ROC:IOB", "ROC:MCB", "ROC:CCE", "ROC:CQX", "ROC:CTU", 25 + }; 26 + 27 + static void read_uncore_mca(u8 cmd, u8 idx, u8 subidx, u8 inst, u64 *data) 28 + { 29 + struct arm_smccc_res res; 30 + 31 + arm_smccc_smc(SMC_SIP_INVOKE_MCE | MCE_SMC_READ_MCA, 32 + ((u64)inst << 24) | ((u64)idx << 16) | 33 + ((u64)subidx << 8) | ((u64)cmd << 0), 34 + 0, 0, 0, 0, 0, 0, &res); 35 + 36 + *data = res.a2; 37 + } 38 + 39 + static int tegra186_ari_panic_handler(struct notifier_block *nb, 40 + unsigned long code, void *unused) 41 + { 42 + u64 status; 43 + int i; 44 + 45 + for (i = 0; i < ARRAY_SIZE(bank_names); i++) { 46 + read_uncore_mca(MCA_ARI_CMD_RD_SERR, i, MCA_ARI_RW_SUBIDX_STAT, 47 + 0, &status); 48 + 49 + if (status & SERR_STATUS_VAL) { 50 + u64 addr, misc1, misc2; 51 + 52 + read_uncore_mca(MCA_ARI_CMD_RD_SERR, i, 53 + MCA_ARI_RW_SUBIDX_ADDR, 0, &addr); 54 + read_uncore_mca(MCA_ARI_CMD_RD_SERR, i, 55 + MCA_ARI_RW_SUBIDX_MSC1, 0, &misc1); 56 + read_uncore_mca(MCA_ARI_CMD_RD_SERR, i, 57 + MCA_ARI_RW_SUBIDX_MSC2, 0, &misc2); 58 + 59 + pr_crit("Machine Check Error in %s\n" 60 + " status=0x%llx addr=0x%llx\n" 61 + " msc1=0x%llx msc2=0x%llx\n", 62 + bank_names[i], status, addr, misc1, misc2); 63 + } 64 + } 65 + 66 + return NOTIFY_DONE; 67 + } 68 + 69 + static struct notifier_block tegra186_ari_panic_nb = { 70 + .notifier_call = 
tegra186_ari_panic_handler, 71 + }; 72 + 73 + static int __init tegra186_ari_init(void) 74 + { 75 + if (of_machine_is_compatible("nvidia,tegra186")) 76 + atomic_notifier_chain_register(&panic_notifier_list, &tegra186_ari_panic_nb); 77 + 78 + return 0; 79 + } 80 + early_initcall(tegra186_ari_init);
+22 -6
drivers/soc/tegra/pmc.c
··· 360 360 unsigned int num_pmc_clks; 361 361 bool has_blink_output; 362 362 bool has_usb_sleepwalk; 363 + bool supports_core_domain; 363 364 }; 364 365 365 366 /** ··· 783 782 784 783 err = reset_control_deassert(pg->reset); 785 784 if (err) 786 - goto powergate_off; 785 + goto disable_clks; 787 786 788 787 usleep_range(10, 20); 789 788 ··· 2816 2815 return err; 2817 2816 2818 2817 /* take over the memory region from the early initialization */ 2819 - res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 2820 - base = devm_ioremap_resource(&pdev->dev, res); 2818 + base = devm_platform_ioremap_resource(pdev, 0); 2821 2819 if (IS_ERR(base)) 2822 2820 return PTR_ERR(base); 2823 2821 ··· 3041 3041 } 3042 3042 3043 3043 static const struct tegra_pmc_soc tegra20_pmc_soc = { 3044 + .supports_core_domain = false, 3044 3045 .num_powergates = ARRAY_SIZE(tegra20_powergates), 3045 3046 .powergates = tegra20_powergates, 3046 3047 .num_cpu_powergates = 0, ··· 3066 3065 .pmc_clks_data = NULL, 3067 3066 .num_pmc_clks = 0, 3068 3067 .has_blink_output = true, 3069 - .has_usb_sleepwalk = false, 3068 + .has_usb_sleepwalk = true, 3070 3069 }; 3071 3070 3072 3071 static const char * const tegra30_powergates[] = { ··· 3102 3101 }; 3103 3102 3104 3103 static const struct tegra_pmc_soc tegra30_pmc_soc = { 3104 + .supports_core_domain = false, 3105 3105 .num_powergates = ARRAY_SIZE(tegra30_powergates), 3106 3106 .powergates = tegra30_powergates, 3107 3107 .num_cpu_powergates = ARRAY_SIZE(tegra30_cpu_powergates), ··· 3127 3125 .pmc_clks_data = tegra_pmc_clks_data, 3128 3126 .num_pmc_clks = ARRAY_SIZE(tegra_pmc_clks_data), 3129 3127 .has_blink_output = true, 3130 - .has_usb_sleepwalk = false, 3128 + .has_usb_sleepwalk = true, 3131 3129 }; 3132 3130 3133 3131 static const char * const tegra114_powergates[] = { ··· 3159 3157 }; 3160 3158 3161 3159 static const struct tegra_pmc_soc tegra114_pmc_soc = { 3160 + .supports_core_domain = false, 3162 3161 .num_powergates = 
ARRAY_SIZE(tegra114_powergates), 3163 3162 .powergates = tegra114_powergates, 3164 3163 .num_cpu_powergates = ARRAY_SIZE(tegra114_cpu_powergates), ··· 3184 3181 .pmc_clks_data = tegra_pmc_clks_data, 3185 3182 .num_pmc_clks = ARRAY_SIZE(tegra_pmc_clks_data), 3186 3183 .has_blink_output = true, 3187 - .has_usb_sleepwalk = false, 3184 + .has_usb_sleepwalk = true, 3188 3185 }; 3189 3186 3190 3187 static const char * const tegra124_powergates[] = { ··· 3276 3273 }; 3277 3274 3278 3275 static const struct tegra_pmc_soc tegra124_pmc_soc = { 3276 + .supports_core_domain = false, 3279 3277 .num_powergates = ARRAY_SIZE(tegra124_powergates), 3280 3278 .powergates = tegra124_powergates, 3281 3279 .num_cpu_powergates = ARRAY_SIZE(tegra124_cpu_powergates), ··· 3402 3398 }; 3403 3399 3404 3400 static const struct tegra_pmc_soc tegra210_pmc_soc = { 3401 + .supports_core_domain = false, 3405 3402 .num_powergates = ARRAY_SIZE(tegra210_powergates), 3406 3403 .powergates = tegra210_powergates, 3407 3404 .num_cpu_powergates = ARRAY_SIZE(tegra210_cpu_powergates), ··· 3560 3555 }; 3561 3556 3562 3557 static const struct tegra_pmc_soc tegra186_pmc_soc = { 3558 + .supports_core_domain = false, 3563 3559 .num_powergates = 0, 3564 3560 .powergates = NULL, 3565 3561 .num_cpu_powergates = 0, ··· 3695 3689 }; 3696 3690 3697 3691 static const struct tegra_pmc_soc tegra194_pmc_soc = { 3692 + .supports_core_domain = false, 3698 3693 .num_powergates = 0, 3699 3694 .powergates = NULL, 3700 3695 .num_cpu_powergates = 0, ··· 3764 3757 }; 3765 3758 3766 3759 static const struct tegra_pmc_soc tegra234_pmc_soc = { 3760 + .supports_core_domain = false, 3767 3761 .num_powergates = 0, 3768 3762 .powergates = NULL, 3769 3763 .num_cpu_powergates = 0, ··· 3810 3802 static void tegra_pmc_sync_state(struct device *dev) 3811 3803 { 3812 3804 int err; 3805 + 3806 + /* 3807 + * Newer device-trees have power domains, but we need to prepare all 3808 + * device drivers with runtime PM and OPP support first, otherwise 
3809 + * state syncing is unsafe. 3810 + */ 3811 + if (!pmc->soc->supports_core_domain) 3812 + return; 3813 3813 3814 3814 /* 3815 3815 * Older device-trees don't have core PD, and thus, there are
+3 -2
drivers/tee/optee/Makefile
··· 4 4 optee-objs += call.o 5 5 optee-objs += rpc.o 6 6 optee-objs += supp.o 7 - optee-objs += shm_pool.o 8 7 optee-objs += device.o 8 + optee-objs += smc_abi.o 9 + optee-objs += ffa_abi.o 9 10 10 11 # for tracing framework to find optee_trace.h 11 - CFLAGS_call.o := -I$(src) 12 + CFLAGS_smc_abi.o := -I$(src)
+59 -388
drivers/tee/optee/call.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0-only 2 2 /* 3 - * Copyright (c) 2015, Linaro Limited 3 + * Copyright (c) 2015-2021, Linaro Limited 4 4 */ 5 - #include <linux/arm-smccc.h> 6 5 #include <linux/device.h> 7 6 #include <linux/err.h> 8 7 #include <linux/errno.h> 9 8 #include <linux/mm.h> 10 - #include <linux/sched.h> 11 9 #include <linux/slab.h> 12 10 #include <linux/tee_drv.h> 13 11 #include <linux/types.h> 14 - #include <linux/uaccess.h> 15 12 #include "optee_private.h" 16 - #include "optee_smc.h" 17 - #define CREATE_TRACE_POINTS 18 - #include "optee_trace.h" 19 13 20 - struct optee_call_waiter { 21 - struct list_head list_node; 22 - struct completion c; 23 - }; 24 - 25 - static void optee_cq_wait_init(struct optee_call_queue *cq, 26 - struct optee_call_waiter *w) 14 + void optee_cq_wait_init(struct optee_call_queue *cq, 15 + struct optee_call_waiter *w) 27 16 { 28 17 /* 29 18 * We're preparing to make a call to secure world. In case we can't ··· 36 47 mutex_unlock(&cq->mutex); 37 48 } 38 49 39 - static void optee_cq_wait_for_completion(struct optee_call_queue *cq, 40 - struct optee_call_waiter *w) 50 + void optee_cq_wait_for_completion(struct optee_call_queue *cq, 51 + struct optee_call_waiter *w) 41 52 { 42 53 wait_for_completion(&w->c); 43 54 ··· 63 74 } 64 75 } 65 76 66 - static void optee_cq_wait_final(struct optee_call_queue *cq, 67 - struct optee_call_waiter *w) 77 + void optee_cq_wait_final(struct optee_call_queue *cq, 78 + struct optee_call_waiter *w) 68 79 { 69 80 /* 70 81 * We're done with the call to secure world. The thread in secure ··· 104 115 return NULL; 105 116 } 106 117 107 - /** 108 - * optee_do_call_with_arg() - Do an SMC to OP-TEE in secure world 109 - * @ctx: calling context 110 - * @parg: physical address of message to pass to secure world 111 - * 112 - * Does and SMC to OP-TEE in secure world and handles eventual resulting 113 - * Remote Procedure Calls (RPC) from OP-TEE. 
114 - * 115 - * Returns return code from secure world, 0 is OK 116 - */ 117 - u32 optee_do_call_with_arg(struct tee_context *ctx, phys_addr_t parg) 118 + struct tee_shm *optee_get_msg_arg(struct tee_context *ctx, size_t num_params, 119 + struct optee_msg_arg **msg_arg) 118 120 { 119 121 struct optee *optee = tee_get_drvdata(ctx->teedev); 120 - struct optee_call_waiter w; 121 - struct optee_rpc_param param = { }; 122 - struct optee_call_ctx call_ctx = { }; 123 - u32 ret; 124 - 125 - param.a0 = OPTEE_SMC_CALL_WITH_ARG; 126 - reg_pair_from_64(&param.a1, &param.a2, parg); 127 - /* Initialize waiter */ 128 - optee_cq_wait_init(&optee->call_queue, &w); 129 - while (true) { 130 - struct arm_smccc_res res; 131 - 132 - trace_optee_invoke_fn_begin(&param); 133 - optee->invoke_fn(param.a0, param.a1, param.a2, param.a3, 134 - param.a4, param.a5, param.a6, param.a7, 135 - &res); 136 - trace_optee_invoke_fn_end(&param, &res); 137 - 138 - if (res.a0 == OPTEE_SMC_RETURN_ETHREAD_LIMIT) { 139 - /* 140 - * Out of threads in secure world, wait for a thread 141 - * become available. 142 - */ 143 - optee_cq_wait_for_completion(&optee->call_queue, &w); 144 - } else if (OPTEE_SMC_RETURN_IS_RPC(res.a0)) { 145 - cond_resched(); 146 - param.a0 = res.a0; 147 - param.a1 = res.a1; 148 - param.a2 = res.a2; 149 - param.a3 = res.a3; 150 - optee_handle_rpc(ctx, &param, &call_ctx); 151 - } else { 152 - ret = res.a0; 153 - break; 154 - } 155 - } 156 - 157 - optee_rpc_finalize_call(&call_ctx); 158 - /* 159 - * We're done with our thread in secure world, if there's any 160 - * thread waiters wake up one. 
161 - */ 162 - optee_cq_wait_final(&optee->call_queue, &w); 163 - 164 - return ret; 165 - } 166 - 167 - static struct tee_shm *get_msg_arg(struct tee_context *ctx, size_t num_params, 168 - struct optee_msg_arg **msg_arg, 169 - phys_addr_t *msg_parg) 170 - { 171 - int rc; 122 + size_t sz = OPTEE_MSG_GET_ARG_SIZE(num_params); 172 123 struct tee_shm *shm; 173 124 struct optee_msg_arg *ma; 174 125 175 - shm = tee_shm_alloc(ctx, OPTEE_MSG_GET_ARG_SIZE(num_params), 176 - TEE_SHM_MAPPED | TEE_SHM_PRIV); 126 + /* 127 + * rpc_arg_count is set to the number of allocated parameters in 128 + * the RPC argument struct if a second MSG arg struct is expected. 129 + * The second arg struct will then be used for RPC. 130 + */ 131 + if (optee->rpc_arg_count) 132 + sz += OPTEE_MSG_GET_ARG_SIZE(optee->rpc_arg_count); 133 + 134 + shm = tee_shm_alloc(ctx, sz, TEE_SHM_MAPPED | TEE_SHM_PRIV); 177 135 if (IS_ERR(shm)) 178 136 return shm; 179 137 180 138 ma = tee_shm_get_va(shm, 0); 181 139 if (IS_ERR(ma)) { 182 - rc = PTR_ERR(ma); 183 - goto out; 140 + tee_shm_free(shm); 141 + return (void *)ma; 184 142 } 185 - 186 - rc = tee_shm_get_pa(shm, 0, msg_parg); 187 - if (rc) 188 - goto out; 189 143 190 144 memset(ma, 0, OPTEE_MSG_GET_ARG_SIZE(num_params)); 191 145 ma->num_params = num_params; 192 146 *msg_arg = ma; 193 - out: 194 - if (rc) { 195 - tee_shm_free(shm); 196 - return ERR_PTR(rc); 197 - } 198 147 199 148 return shm; 200 149 } ··· 141 214 struct tee_ioctl_open_session_arg *arg, 142 215 struct tee_param *param) 143 216 { 217 + struct optee *optee = tee_get_drvdata(ctx->teedev); 144 218 struct optee_context_data *ctxdata = ctx->data; 145 219 int rc; 146 220 struct tee_shm *shm; 147 221 struct optee_msg_arg *msg_arg; 148 - phys_addr_t msg_parg; 149 222 struct optee_session *sess = NULL; 150 223 uuid_t client_uuid; 151 224 152 225 /* +2 for the meta parameters added below */ 153 - shm = get_msg_arg(ctx, arg->num_params + 2, &msg_arg, &msg_parg); 226 + shm = optee_get_msg_arg(ctx, 
arg->num_params + 2, &msg_arg); 154 227 if (IS_ERR(shm)) 155 228 return PTR_ERR(shm); 156 229 ··· 174 247 goto out; 175 248 export_uuid(msg_arg->params[1].u.octets, &client_uuid); 176 249 177 - rc = optee_to_msg_param(msg_arg->params + 2, arg->num_params, param); 250 + rc = optee->ops->to_msg_param(optee, msg_arg->params + 2, 251 + arg->num_params, param); 178 252 if (rc) 179 253 goto out; 180 254 ··· 185 257 goto out; 186 258 } 187 259 188 - if (optee_do_call_with_arg(ctx, msg_parg)) { 260 + if (optee->ops->do_call_with_arg(ctx, shm)) { 189 261 msg_arg->ret = TEEC_ERROR_COMMUNICATION; 190 262 msg_arg->ret_origin = TEEC_ORIGIN_COMMS; 191 263 } ··· 200 272 kfree(sess); 201 273 } 202 274 203 - if (optee_from_msg_param(param, arg->num_params, msg_arg->params + 2)) { 275 + if (optee->ops->from_msg_param(optee, param, arg->num_params, 276 + msg_arg->params + 2)) { 204 277 arg->ret = TEEC_ERROR_COMMUNICATION; 205 278 arg->ret_origin = TEEC_ORIGIN_COMMS; 206 279 /* Close session again to avoid leakage */ ··· 217 288 return rc; 218 289 } 219 290 291 + int optee_close_session_helper(struct tee_context *ctx, u32 session) 292 + { 293 + struct tee_shm *shm; 294 + struct optee *optee = tee_get_drvdata(ctx->teedev); 295 + struct optee_msg_arg *msg_arg; 296 + 297 + shm = optee_get_msg_arg(ctx, 0, &msg_arg); 298 + if (IS_ERR(shm)) 299 + return PTR_ERR(shm); 300 + 301 + msg_arg->cmd = OPTEE_MSG_CMD_CLOSE_SESSION; 302 + msg_arg->session = session; 303 + optee->ops->do_call_with_arg(ctx, shm); 304 + 305 + tee_shm_free(shm); 306 + 307 + return 0; 308 + } 309 + 220 310 int optee_close_session(struct tee_context *ctx, u32 session) 221 311 { 222 312 struct optee_context_data *ctxdata = ctx->data; 223 - struct tee_shm *shm; 224 - struct optee_msg_arg *msg_arg; 225 - phys_addr_t msg_parg; 226 313 struct optee_session *sess; 227 314 228 315 /* Check that the session is valid and remove it from the list */ ··· 251 306 return -EINVAL; 252 307 kfree(sess); 253 308 254 - shm = get_msg_arg(ctx, 
0, &msg_arg, &msg_parg); 255 - if (IS_ERR(shm)) 256 - return PTR_ERR(shm); 257 - 258 - msg_arg->cmd = OPTEE_MSG_CMD_CLOSE_SESSION; 259 - msg_arg->session = session; 260 - optee_do_call_with_arg(ctx, msg_parg); 261 - 262 - tee_shm_free(shm); 263 - return 0; 309 + return optee_close_session_helper(ctx, session); 264 310 } 265 311 266 312 int optee_invoke_func(struct tee_context *ctx, struct tee_ioctl_invoke_arg *arg, 267 313 struct tee_param *param) 268 314 { 315 + struct optee *optee = tee_get_drvdata(ctx->teedev); 269 316 struct optee_context_data *ctxdata = ctx->data; 270 317 struct tee_shm *shm; 271 318 struct optee_msg_arg *msg_arg; 272 - phys_addr_t msg_parg; 273 319 struct optee_session *sess; 274 320 int rc; 275 321 ··· 271 335 if (!sess) 272 336 return -EINVAL; 273 337 274 - shm = get_msg_arg(ctx, arg->num_params, &msg_arg, &msg_parg); 338 + shm = optee_get_msg_arg(ctx, arg->num_params, &msg_arg); 275 339 if (IS_ERR(shm)) 276 340 return PTR_ERR(shm); 277 341 msg_arg->cmd = OPTEE_MSG_CMD_INVOKE_COMMAND; ··· 279 343 msg_arg->session = arg->session; 280 344 msg_arg->cancel_id = arg->cancel_id; 281 345 282 - rc = optee_to_msg_param(msg_arg->params, arg->num_params, param); 346 + rc = optee->ops->to_msg_param(optee, msg_arg->params, arg->num_params, 347 + param); 283 348 if (rc) 284 349 goto out; 285 350 286 - if (optee_do_call_with_arg(ctx, msg_parg)) { 351 + if (optee->ops->do_call_with_arg(ctx, shm)) { 287 352 msg_arg->ret = TEEC_ERROR_COMMUNICATION; 288 353 msg_arg->ret_origin = TEEC_ORIGIN_COMMS; 289 354 } 290 355 291 - if (optee_from_msg_param(param, arg->num_params, msg_arg->params)) { 356 + if (optee->ops->from_msg_param(optee, param, arg->num_params, 357 + msg_arg->params)) { 292 358 msg_arg->ret = TEEC_ERROR_COMMUNICATION; 293 359 msg_arg->ret_origin = TEEC_ORIGIN_COMMS; 294 360 } ··· 304 366 305 367 int optee_cancel_req(struct tee_context *ctx, u32 cancel_id, u32 session) 306 368 { 369 + struct optee *optee = tee_get_drvdata(ctx->teedev); 307 370 
struct optee_context_data *ctxdata = ctx->data; 308 371 struct tee_shm *shm; 309 372 struct optee_msg_arg *msg_arg; 310 - phys_addr_t msg_parg; 311 373 struct optee_session *sess; 312 374 313 375 /* Check that the session is valid */ ··· 317 379 if (!sess) 318 380 return -EINVAL; 319 381 320 - shm = get_msg_arg(ctx, 0, &msg_arg, &msg_parg); 382 + shm = optee_get_msg_arg(ctx, 0, &msg_arg); 321 383 if (IS_ERR(shm)) 322 384 return PTR_ERR(shm); 323 385 324 386 msg_arg->cmd = OPTEE_MSG_CMD_CANCEL; 325 387 msg_arg->session = session; 326 388 msg_arg->cancel_id = cancel_id; 327 - optee_do_call_with_arg(ctx, msg_parg); 389 + optee->ops->do_call_with_arg(ctx, shm); 328 390 329 391 tee_shm_free(shm); 330 392 return 0; 331 - } 332 - 333 - /** 334 - * optee_enable_shm_cache() - Enables caching of some shared memory allocation 335 - * in OP-TEE 336 - * @optee: main service struct 337 - */ 338 - void optee_enable_shm_cache(struct optee *optee) 339 - { 340 - struct optee_call_waiter w; 341 - 342 - /* We need to retry until secure world isn't busy. */ 343 - optee_cq_wait_init(&optee->call_queue, &w); 344 - while (true) { 345 - struct arm_smccc_res res; 346 - 347 - optee->invoke_fn(OPTEE_SMC_ENABLE_SHM_CACHE, 0, 0, 0, 0, 0, 0, 348 - 0, &res); 349 - if (res.a0 == OPTEE_SMC_RETURN_OK) 350 - break; 351 - optee_cq_wait_for_completion(&optee->call_queue, &w); 352 - } 353 - optee_cq_wait_final(&optee->call_queue, &w); 354 - } 355 - 356 - /** 357 - * __optee_disable_shm_cache() - Disables caching of some shared memory 358 - * allocation in OP-TEE 359 - * @optee: main service struct 360 - * @is_mapped: true if the cached shared memory addresses were mapped by this 361 - * kernel, are safe to dereference, and should be freed 362 - */ 363 - static void __optee_disable_shm_cache(struct optee *optee, bool is_mapped) 364 - { 365 - struct optee_call_waiter w; 366 - 367 - /* We need to retry until secure world isn't busy. 
*/ 368 - optee_cq_wait_init(&optee->call_queue, &w); 369 - while (true) { 370 - union { 371 - struct arm_smccc_res smccc; 372 - struct optee_smc_disable_shm_cache_result result; 373 - } res; 374 - 375 - optee->invoke_fn(OPTEE_SMC_DISABLE_SHM_CACHE, 0, 0, 0, 0, 0, 0, 376 - 0, &res.smccc); 377 - if (res.result.status == OPTEE_SMC_RETURN_ENOTAVAIL) 378 - break; /* All shm's freed */ 379 - if (res.result.status == OPTEE_SMC_RETURN_OK) { 380 - struct tee_shm *shm; 381 - 382 - /* 383 - * Shared memory references that were not mapped by 384 - * this kernel must be ignored to prevent a crash. 385 - */ 386 - if (!is_mapped) 387 - continue; 388 - 389 - shm = reg_pair_to_ptr(res.result.shm_upper32, 390 - res.result.shm_lower32); 391 - tee_shm_free(shm); 392 - } else { 393 - optee_cq_wait_for_completion(&optee->call_queue, &w); 394 - } 395 - } 396 - optee_cq_wait_final(&optee->call_queue, &w); 397 - } 398 - 399 - /** 400 - * optee_disable_shm_cache() - Disables caching of mapped shared memory 401 - * allocations in OP-TEE 402 - * @optee: main service struct 403 - */ 404 - void optee_disable_shm_cache(struct optee *optee) 405 - { 406 - return __optee_disable_shm_cache(optee, true); 407 - } 408 - 409 - /** 410 - * optee_disable_unmapped_shm_cache() - Disables caching of shared memory 411 - * allocations in OP-TEE which are not 412 - * currently mapped 413 - * @optee: main service struct 414 - */ 415 - void optee_disable_unmapped_shm_cache(struct optee *optee) 416 - { 417 - return __optee_disable_shm_cache(optee, false); 418 - } 419 - 420 - #define PAGELIST_ENTRIES_PER_PAGE \ 421 - ((OPTEE_MSG_NONCONTIG_PAGE_SIZE / sizeof(u64)) - 1) 422 - 423 - /** 424 - * optee_fill_pages_list() - write list of user pages to given shared 425 - * buffer. 
426 - * 427 - * @dst: page-aligned buffer where list of pages will be stored 428 - * @pages: array of pages that represents shared buffer 429 - * @num_pages: number of entries in @pages 430 - * @page_offset: offset of user buffer from page start 431 - * 432 - * @dst should be big enough to hold list of user page addresses and 433 - * links to the next pages of buffer 434 - */ 435 - void optee_fill_pages_list(u64 *dst, struct page **pages, int num_pages, 436 - size_t page_offset) 437 - { 438 - int n = 0; 439 - phys_addr_t optee_page; 440 - /* 441 - * Refer to OPTEE_MSG_ATTR_NONCONTIG description in optee_msg.h 442 - * for details. 443 - */ 444 - struct { 445 - u64 pages_list[PAGELIST_ENTRIES_PER_PAGE]; 446 - u64 next_page_data; 447 - } *pages_data; 448 - 449 - /* 450 - * Currently OP-TEE uses 4k page size and it does not looks 451 - * like this will change in the future. On other hand, there are 452 - * no know ARM architectures with page size < 4k. 453 - * Thus the next built assert looks redundant. But the following 454 - * code heavily relies on this assumption, so it is better be 455 - * safe than sorry. 456 - */ 457 - BUILD_BUG_ON(PAGE_SIZE < OPTEE_MSG_NONCONTIG_PAGE_SIZE); 458 - 459 - pages_data = (void *)dst; 460 - /* 461 - * If linux page is bigger than 4k, and user buffer offset is 462 - * larger than 4k/8k/12k/etc this will skip first 4k pages, 463 - * because they bear no value data for OP-TEE. 
464 - */ 465 - optee_page = page_to_phys(*pages) + 466 - round_down(page_offset, OPTEE_MSG_NONCONTIG_PAGE_SIZE); 467 - 468 - while (true) { 469 - pages_data->pages_list[n++] = optee_page; 470 - 471 - if (n == PAGELIST_ENTRIES_PER_PAGE) { 472 - pages_data->next_page_data = 473 - virt_to_phys(pages_data + 1); 474 - pages_data++; 475 - n = 0; 476 - } 477 - 478 - optee_page += OPTEE_MSG_NONCONTIG_PAGE_SIZE; 479 - if (!(optee_page & ~PAGE_MASK)) { 480 - if (!--num_pages) 481 - break; 482 - pages++; 483 - optee_page = page_to_phys(*pages); 484 - } 485 - } 486 - } 487 - 488 - /* 489 - * The final entry in each pagelist page is a pointer to the next 490 - * pagelist page. 491 - */ 492 - static size_t get_pages_list_size(size_t num_entries) 493 - { 494 - int pages = DIV_ROUND_UP(num_entries, PAGELIST_ENTRIES_PER_PAGE); 495 - 496 - return pages * OPTEE_MSG_NONCONTIG_PAGE_SIZE; 497 - } 498 - 499 - u64 *optee_allocate_pages_list(size_t num_entries) 500 - { 501 - return alloc_pages_exact(get_pages_list_size(num_entries), GFP_KERNEL); 502 - } 503 - 504 - void optee_free_pages_list(void *list, size_t num_entries) 505 - { 506 - free_pages_exact(list, get_pages_list_size(num_entries)); 507 393 } 508 394 509 395 static bool is_normal_memory(pgprot_t p) ··· 353 591 return -EINVAL; 354 592 } 355 593 356 - static int check_mem_type(unsigned long start, size_t num_pages) 594 + int optee_check_mem_type(unsigned long start, size_t num_pages) 357 595 { 358 596 struct mm_struct *mm = current->mm; 359 597 int rc; ··· 371 609 mmap_read_unlock(mm); 372 610 373 611 return rc; 374 - } 375 - 376 - int optee_shm_register(struct tee_context *ctx, struct tee_shm *shm, 377 - struct page **pages, size_t num_pages, 378 - unsigned long start) 379 - { 380 - struct tee_shm *shm_arg = NULL; 381 - struct optee_msg_arg *msg_arg; 382 - u64 *pages_list; 383 - phys_addr_t msg_parg; 384 - int rc; 385 - 386 - if (!num_pages) 387 - return -EINVAL; 388 - 389 - rc = check_mem_type(start, num_pages); 390 - if (rc) 
391 - return rc; 392 - 393 - pages_list = optee_allocate_pages_list(num_pages); 394 - if (!pages_list) 395 - return -ENOMEM; 396 - 397 - shm_arg = get_msg_arg(ctx, 1, &msg_arg, &msg_parg); 398 - if (IS_ERR(shm_arg)) { 399 - rc = PTR_ERR(shm_arg); 400 - goto out; 401 - } 402 - 403 - optee_fill_pages_list(pages_list, pages, num_pages, 404 - tee_shm_get_page_offset(shm)); 405 - 406 - msg_arg->cmd = OPTEE_MSG_CMD_REGISTER_SHM; 407 - msg_arg->params->attr = OPTEE_MSG_ATTR_TYPE_TMEM_OUTPUT | 408 - OPTEE_MSG_ATTR_NONCONTIG; 409 - msg_arg->params->u.tmem.shm_ref = (unsigned long)shm; 410 - msg_arg->params->u.tmem.size = tee_shm_get_size(shm); 411 - /* 412 - * In the least bits of msg_arg->params->u.tmem.buf_ptr we 413 - * store buffer offset from 4k page, as described in OP-TEE ABI. 414 - */ 415 - msg_arg->params->u.tmem.buf_ptr = virt_to_phys(pages_list) | 416 - (tee_shm_get_page_offset(shm) & (OPTEE_MSG_NONCONTIG_PAGE_SIZE - 1)); 417 - 418 - if (optee_do_call_with_arg(ctx, msg_parg) || 419 - msg_arg->ret != TEEC_SUCCESS) 420 - rc = -EINVAL; 421 - 422 - tee_shm_free(shm_arg); 423 - out: 424 - optee_free_pages_list(pages_list, num_pages); 425 - return rc; 426 - } 427 - 428 - int optee_shm_unregister(struct tee_context *ctx, struct tee_shm *shm) 429 - { 430 - struct tee_shm *shm_arg; 431 - struct optee_msg_arg *msg_arg; 432 - phys_addr_t msg_parg; 433 - int rc = 0; 434 - 435 - shm_arg = get_msg_arg(ctx, 1, &msg_arg, &msg_parg); 436 - if (IS_ERR(shm_arg)) 437 - return PTR_ERR(shm_arg); 438 - 439 - msg_arg->cmd = OPTEE_MSG_CMD_UNREGISTER_SHM; 440 - 441 - msg_arg->params[0].attr = OPTEE_MSG_ATTR_TYPE_RMEM_INPUT; 442 - msg_arg->params[0].u.rmem.shm_ref = (unsigned long)shm; 443 - 444 - if (optee_do_call_with_arg(ctx, msg_parg) || 445 - msg_arg->ret != TEEC_SUCCESS) 446 - rc = -EINVAL; 447 - tee_shm_free(shm_arg); 448 - return rc; 449 - } 450 - 451 - int optee_shm_register_supp(struct tee_context *ctx, struct tee_shm *shm, 452 - struct page **pages, size_t num_pages, 453 - 
unsigned long start) 454 - { 455 - /* 456 - * We don't want to register supplicant memory in OP-TEE. 457 - * Instead information about it will be passed in RPC code. 458 - */ 459 - return check_mem_type(start, num_pages); 460 - } 461 - 462 - int optee_shm_unregister_supp(struct tee_context *ctx, struct tee_shm *shm) 463 - { 464 - return 0; 465 612 }
+79 -652
drivers/tee/optee/core.c
···
 // SPDX-License-Identifier: GPL-2.0-only
 /*
- * Copyright (c) 2015, Linaro Limited
+ * Copyright (c) 2015-2021, Linaro Limited
+ * Copyright (c) 2016, EPAM Systems
  */

 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt

-#include <linux/arm-smccc.h>
 #include <linux/crash_dump.h>
 #include <linux/errno.h>
 #include <linux/io.h>
+#include <linux/mm.h>
 #include <linux/module.h>
-#include <linux/of.h>
-#include <linux/of_platform.h>
-#include <linux/platform_device.h>
 #include <linux/slab.h>
 #include <linux/string.h>
 #include <linux/tee_drv.h>
 #include <linux/types.h>
-#include <linux/uaccess.h>
 #include <linux/workqueue.h>
 #include "optee_private.h"
-#include "optee_smc.h"
-#include "shm_pool.h"

-#define DRIVER_NAME "optee"
-
-#define OPTEE_SHM_NUM_PRIV_PAGES	CONFIG_OPTEE_SHM_NUM_PRIV_PAGES
-
-/**
- * optee_from_msg_param() - convert from OPTEE_MSG parameters to
- *			    struct tee_param
- * @params:	subsystem internal parameter representation
- * @num_params:	number of elements in the parameter arrays
- * @msg_params:	OPTEE_MSG parameters
- * Returns 0 on success or <0 on failure
- */
-int optee_from_msg_param(struct tee_param *params, size_t num_params,
-			 const struct optee_msg_param *msg_params)
+int optee_pool_op_alloc_helper(struct tee_shm_pool_mgr *poolm,
+			       struct tee_shm *shm, size_t size,
+			       int (*shm_register)(struct tee_context *ctx,
+						   struct tee_shm *shm,
+						   struct page **pages,
+						   size_t num_pages,
+						   unsigned long start))
 {
-	int rc;
-	size_t n;
-	struct tee_shm *shm;
-	phys_addr_t pa;
+	unsigned int order = get_order(size);
+	struct page *page;
+	int rc = 0;

-	for (n = 0; n < num_params; n++) {
-		struct tee_param *p = params + n;
-		const struct optee_msg_param *mp = msg_params + n;
-		u32 attr = mp->attr & OPTEE_MSG_ATTR_TYPE_MASK;
+	page = alloc_pages(GFP_KERNEL | __GFP_ZERO, order);
+	if (!page)
+		return -ENOMEM;

-		switch (attr) {
-		case OPTEE_MSG_ATTR_TYPE_NONE:
-			p->attr = TEE_IOCTL_PARAM_ATTR_TYPE_NONE;
-			memset(&p->u, 0, sizeof(p->u));
-			break;
-		case OPTEE_MSG_ATTR_TYPE_VALUE_INPUT:
-		case OPTEE_MSG_ATTR_TYPE_VALUE_OUTPUT:
-		case OPTEE_MSG_ATTR_TYPE_VALUE_INOUT:
-			p->attr = TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT +
-				  attr - OPTEE_MSG_ATTR_TYPE_VALUE_INPUT;
-			p->u.value.a = mp->u.value.a;
-			p->u.value.b = mp->u.value.b;
-			p->u.value.c = mp->u.value.c;
-			break;
-		case OPTEE_MSG_ATTR_TYPE_TMEM_INPUT:
-		case OPTEE_MSG_ATTR_TYPE_TMEM_OUTPUT:
-		case OPTEE_MSG_ATTR_TYPE_TMEM_INOUT:
-			p->attr = TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT +
-				  attr - OPTEE_MSG_ATTR_TYPE_TMEM_INPUT;
-			p->u.memref.size = mp->u.tmem.size;
-			shm = (struct tee_shm *)(unsigned long)
-				mp->u.tmem.shm_ref;
-			if (!shm) {
-				p->u.memref.shm_offs = 0;
-				p->u.memref.shm = NULL;
-				break;
-			}
-			rc = tee_shm_get_pa(shm, 0, &pa);
-			if (rc)
-				return rc;
-			p->u.memref.shm_offs = mp->u.tmem.buf_ptr - pa;
-			p->u.memref.shm = shm;
-			break;
-		case OPTEE_MSG_ATTR_TYPE_RMEM_INPUT:
-		case OPTEE_MSG_ATTR_TYPE_RMEM_OUTPUT:
-		case OPTEE_MSG_ATTR_TYPE_RMEM_INOUT:
-			p->attr = TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT +
-				  attr - OPTEE_MSG_ATTR_TYPE_RMEM_INPUT;
-			p->u.memref.size = mp->u.rmem.size;
-			shm = (struct tee_shm *)(unsigned long)
-				mp->u.rmem.shm_ref;
+	shm->kaddr = page_address(page);
+	shm->paddr = page_to_phys(page);
+	shm->size = PAGE_SIZE << order;

-			if (!shm) {
-				p->u.memref.shm_offs = 0;
-				p->u.memref.shm = NULL;
-				break;
-			}
-			p->u.memref.shm_offs = mp->u.rmem.offs;
-			p->u.memref.shm = shm;
+	if (shm_register) {
+		unsigned int nr_pages = 1 << order, i;
+		struct page **pages;

-			break;
-
-		default:
-			return -EINVAL;
+		pages = kcalloc(nr_pages, sizeof(*pages), GFP_KERNEL);
+		if (!pages) {
+			rc = -ENOMEM;
+			goto err;
 		}
-	}
-	return 0;
-}

-static int to_msg_param_tmp_mem(struct optee_msg_param *mp,
-				const struct tee_param *p)
-{
-	int rc;
-	phys_addr_t pa;
-
-	mp->attr = OPTEE_MSG_ATTR_TYPE_TMEM_INPUT + p->attr -
-		   TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT;
-
-	mp->u.tmem.shm_ref = (unsigned long)p->u.memref.shm;
-	mp->u.tmem.size = p->u.memref.size;
-
-	if (!p->u.memref.shm) {
-		mp->u.tmem.buf_ptr = 0;
-		return 0;
-	}
-
-	rc = tee_shm_get_pa(p->u.memref.shm, p->u.memref.shm_offs, &pa);
-	if (rc)
-		return rc;
-
-	mp->u.tmem.buf_ptr = pa;
-	mp->attr |= OPTEE_MSG_ATTR_CACHE_PREDEFINED <<
-		    OPTEE_MSG_ATTR_CACHE_SHIFT;
-
-	return 0;
-}
-
-static int to_msg_param_reg_mem(struct optee_msg_param *mp,
-				const struct tee_param *p)
-{
-	mp->attr = OPTEE_MSG_ATTR_TYPE_RMEM_INPUT + p->attr -
-		   TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT;
-
-	mp->u.rmem.shm_ref = (unsigned long)p->u.memref.shm;
-	mp->u.rmem.size = p->u.memref.size;
-	mp->u.rmem.offs = p->u.memref.shm_offs;
-	return 0;
-}
-
-/**
- * optee_to_msg_param() - convert from struct tee_params to OPTEE_MSG parameters
- * @msg_params:	OPTEE_MSG parameters
- * @num_params:	number of elements in the parameter arrays
- * @params:	subsystem internal parameter representation
- * Returns 0 on success or <0 on failure
- */
-int optee_to_msg_param(struct optee_msg_param *msg_params, size_t num_params,
-		       const struct tee_param *params)
-{
-	int rc;
-	size_t n;
-
-	for (n = 0; n < num_params; n++) {
-		const struct tee_param *p = params + n;
-		struct optee_msg_param *mp = msg_params + n;
-
-		switch (p->attr) {
-		case TEE_IOCTL_PARAM_ATTR_TYPE_NONE:
-			mp->attr = TEE_IOCTL_PARAM_ATTR_TYPE_NONE;
-			memset(&mp->u, 0, sizeof(mp->u));
-			break;
-		case TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT:
-		case TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_OUTPUT:
-		case TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INOUT:
-			mp->attr = OPTEE_MSG_ATTR_TYPE_VALUE_INPUT + p->attr -
-				   TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT;
-			mp->u.value.a = p->u.value.a;
-			mp->u.value.b = p->u.value.b;
-			mp->u.value.c = p->u.value.c;
-			break;
-		case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT:
-		case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_OUTPUT:
-		case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INOUT:
-			if (tee_shm_is_registered(p->u.memref.shm))
-				rc = to_msg_param_reg_mem(mp, p);
-			else
-				rc = to_msg_param_tmp_mem(mp, p);
-			if (rc)
-				return rc;
-			break;
-		default:
-			return -EINVAL;
+		for (i = 0; i < nr_pages; i++) {
+			pages[i] = page;
+			page++;
 		}
+
+		shm->flags |= TEE_SHM_REGISTER;
+		rc = shm_register(shm->ctx, shm, pages, nr_pages,
+				  (unsigned long)shm->kaddr);
+		kfree(pages);
+		if (rc)
+			goto err;
 	}
+
 	return 0;
-}

-static void optee_get_version(struct tee_device *teedev,
-			      struct tee_ioctl_version_data *vers)
-{
-	struct tee_ioctl_version_data v = {
-		.impl_id = TEE_IMPL_ID_OPTEE,
-		.impl_caps = TEE_OPTEE_CAP_TZ,
-		.gen_caps = TEE_GEN_CAP_GP,
-	};
-	struct optee *optee = tee_get_drvdata(teedev);
-
-	if (optee->sec_caps & OPTEE_SMC_SEC_CAP_DYNAMIC_SHM)
-		v.gen_caps |= TEE_GEN_CAP_REG_MEM;
-	if (optee->sec_caps & OPTEE_SMC_SEC_CAP_MEMREF_NULL)
-		v.gen_caps |= TEE_GEN_CAP_MEMREF_NULL;
-	*vers = v;
+err:
+	__free_pages(page, order);
+	return rc;
 }

 static void optee_bus_scan(struct work_struct *work)
···
 	WARN_ON(optee_enumerate_devices(PTA_CMD_GET_DEVICES_SUPP));
 }

-static int optee_open(struct tee_context *ctx)
+int optee_open(struct tee_context *ctx, bool cap_memref_null)
 {
 	struct optee_context_data *ctxdata;
 	struct tee_device *teedev = ctx->teedev;
···
 	mutex_init(&ctxdata->mutex);
 	INIT_LIST_HEAD(&ctxdata->sess_list);

-	if (optee->sec_caps & OPTEE_SMC_SEC_CAP_MEMREF_NULL)
-		ctx->cap_memref_null = true;
-	else
-		ctx->cap_memref_null = false;
-
+	ctx->cap_memref_null = cap_memref_null;
 	ctx->data = ctxdata;
 	return 0;
 }

-static void optee_release(struct tee_context *ctx)
+static void optee_release_helper(struct tee_context *ctx,
+				 int (*close_session)(struct tee_context *ctx,
+						      u32 session))
 {
 	struct optee_context_data *ctxdata = ctx->data;
-	struct tee_device *teedev = ctx->teedev;
-	struct optee *optee = tee_get_drvdata(teedev);
-	struct tee_shm *shm;
-	struct optee_msg_arg *arg = NULL;
-	phys_addr_t parg;
 	struct optee_session *sess;
 	struct optee_session *sess_tmp;

 	if (!ctxdata)
 		return;

-	shm = tee_shm_alloc(ctx, sizeof(struct optee_msg_arg),
-			    TEE_SHM_MAPPED | TEE_SHM_PRIV);
-	if (!IS_ERR(shm)) {
-		arg = tee_shm_get_va(shm, 0);
-		/*
-		 * If va2pa fails for some reason, we can't call into
-		 * secure world, only free the memory. Secure OS will leak
-		 * sessions and finally refuse more sessions, but we will
-		 * at least let normal world reclaim its memory.
-		 */
-		if (!IS_ERR(arg))
-			if (tee_shm_va2pa(shm, arg, &parg))
-				arg = NULL; /* prevent usage of parg below */
-	}
-
 	list_for_each_entry_safe(sess, sess_tmp, &ctxdata->sess_list,
 				 list_node) {
 		list_del(&sess->list_node);
-		if (!IS_ERR_OR_NULL(arg)) {
-			memset(arg, 0, sizeof(*arg));
-			arg->cmd = OPTEE_MSG_CMD_CLOSE_SESSION;
-			arg->session = sess->session_id;
-			optee_do_call_with_arg(ctx, parg);
-		}
+		close_session(ctx, sess->session_id);
 		kfree(sess);
 	}
 	kfree(ctxdata);
-
-	if (!IS_ERR(shm))
-		tee_shm_free(shm);
-
 	ctx->data = NULL;
+}

-	if (teedev == optee->supp_teedev) {
-		if (optee->scan_bus_wq) {
-			destroy_workqueue(optee->scan_bus_wq);
-			optee->scan_bus_wq = NULL;
-		}
-		optee_supp_release(&optee->supp);
+void optee_release(struct tee_context *ctx)
+{
+	optee_release_helper(ctx, optee_close_session_helper);
+}
+
+void optee_release_supp(struct tee_context *ctx)
+{
+	struct optee *optee = tee_get_drvdata(ctx->teedev);
+
+	optee_release_helper(ctx, optee_close_session_helper);
+	if (optee->scan_bus_wq) {
+		destroy_workqueue(optee->scan_bus_wq);
+		optee->scan_bus_wq = NULL;
 	}
+	optee_supp_release(&optee->supp);
 }

-static const struct tee_driver_ops optee_ops = {
-	.get_version = optee_get_version,
-	.open = optee_open,
-	.release = optee_release,
-	.open_session = optee_open_session,
-	.close_session = optee_close_session,
-	.invoke_func = optee_invoke_func,
-	.cancel_req = optee_cancel_req,
-	.shm_register = optee_shm_register,
-	.shm_unregister = optee_shm_unregister,
-};
-
-static const struct tee_desc optee_desc = {
-	.name = DRIVER_NAME "-clnt",
-	.ops = &optee_ops,
-	.owner = THIS_MODULE,
-};
-
-static const struct tee_driver_ops optee_supp_ops = {
-	.get_version = optee_get_version,
-	.open = optee_open,
-	.release = optee_release,
-	.supp_recv = optee_supp_recv,
-	.supp_send = optee_supp_send,
-	.shm_register = optee_shm_register_supp,
-	.shm_unregister = optee_shm_unregister_supp,
-};
-
-static const struct tee_desc optee_supp_desc = {
-	.name = DRIVER_NAME "-supp",
-	.ops = &optee_supp_ops,
-	.owner = THIS_MODULE,
-	.flags = TEE_DESC_PRIVILEGED,
-};
-
-static bool optee_msg_api_uid_is_optee_api(optee_invoke_fn *invoke_fn)
+void optee_remove_common(struct optee *optee)
 {
-	struct arm_smccc_res res;
-
-	invoke_fn(OPTEE_SMC_CALLS_UID, 0, 0, 0, 0, 0, 0, 0, &res);
-
-	if (res.a0 == OPTEE_MSG_UID_0 && res.a1 == OPTEE_MSG_UID_1 &&
-	    res.a2 == OPTEE_MSG_UID_2 && res.a3 == OPTEE_MSG_UID_3)
-		return true;
-	return false;
-}
-
-static void optee_msg_get_os_revision(optee_invoke_fn *invoke_fn)
-{
-	union {
-		struct arm_smccc_res smccc;
-		struct optee_smc_call_get_os_revision_result result;
-	} res = {
-		.result = {
-			.build_id = 0
-		}
-	};
-
-	invoke_fn(OPTEE_SMC_CALL_GET_OS_REVISION, 0, 0, 0, 0, 0, 0, 0,
-		  &res.smccc);
-
-	if (res.result.build_id)
-		pr_info("revision %lu.%lu (%08lx)", res.result.major,
-			res.result.minor, res.result.build_id);
-	else
-		pr_info("revision %lu.%lu", res.result.major, res.result.minor);
-}
-
-static bool optee_msg_api_revision_is_compatible(optee_invoke_fn *invoke_fn)
-{
-	union {
-		struct arm_smccc_res smccc;
-		struct optee_smc_calls_revision_result result;
-	} res;
-
-	invoke_fn(OPTEE_SMC_CALLS_REVISION, 0, 0, 0, 0, 0, 0, 0, &res.smccc);
-
-	if (res.result.major == OPTEE_MSG_REVISION_MAJOR &&
-	    (int)res.result.minor >= OPTEE_MSG_REVISION_MINOR)
-		return true;
-	return false;
-}
-
-static bool optee_msg_exchange_capabilities(optee_invoke_fn *invoke_fn,
-					    u32 *sec_caps)
-{
-	union {
-		struct arm_smccc_res smccc;
-		struct optee_smc_exchange_capabilities_result result;
-	} res;
-	u32 a1 = 0;
-
-	/*
-	 * TODO This isn't enough to tell if it's UP system (from kernel
-	 * point of view) or not, is_smp() returns the information
-	 * needed, but can't be called directly from here.
-	 */
-	if (!IS_ENABLED(CONFIG_SMP) || nr_cpu_ids == 1)
-		a1 |= OPTEE_SMC_NSEC_CAP_UNIPROCESSOR;
-
-	invoke_fn(OPTEE_SMC_EXCHANGE_CAPABILITIES, a1, 0, 0, 0, 0, 0, 0,
-		  &res.smccc);
-
-	if (res.result.status != OPTEE_SMC_RETURN_OK)
-		return false;
-
-	*sec_caps = res.result.capabilities;
-	return true;
-}
-
-static struct tee_shm_pool *optee_config_dyn_shm(void)
-{
-	struct tee_shm_pool_mgr *priv_mgr;
-	struct tee_shm_pool_mgr *dmabuf_mgr;
-	void *rc;
-
-	rc = optee_shm_pool_alloc_pages();
-	if (IS_ERR(rc))
-		return rc;
-	priv_mgr = rc;
-
-	rc = optee_shm_pool_alloc_pages();
-	if (IS_ERR(rc)) {
-		tee_shm_pool_mgr_destroy(priv_mgr);
-		return rc;
-	}
-	dmabuf_mgr = rc;
-
-	rc = tee_shm_pool_alloc(priv_mgr, dmabuf_mgr);
-	if (IS_ERR(rc)) {
-		tee_shm_pool_mgr_destroy(priv_mgr);
-		tee_shm_pool_mgr_destroy(dmabuf_mgr);
-	}
-
-	return rc;
-}
-
-static struct tee_shm_pool *
-optee_config_shm_memremap(optee_invoke_fn *invoke_fn, void **memremaped_shm)
-{
-	union {
-		struct arm_smccc_res smccc;
-		struct optee_smc_get_shm_config_result result;
-	} res;
-	unsigned long vaddr;
-	phys_addr_t paddr;
-	size_t size;
-	phys_addr_t begin;
-	phys_addr_t end;
-	void *va;
-	struct tee_shm_pool_mgr *priv_mgr;
-	struct tee_shm_pool_mgr *dmabuf_mgr;
-	void *rc;
-	const int sz = OPTEE_SHM_NUM_PRIV_PAGES * PAGE_SIZE;
-
-	invoke_fn(OPTEE_SMC_GET_SHM_CONFIG, 0, 0, 0, 0, 0, 0, 0, &res.smccc);
-	if (res.result.status != OPTEE_SMC_RETURN_OK) {
-		pr_err("static shm service not available\n");
-		return ERR_PTR(-ENOENT);
-	}
-
-	if (res.result.settings != OPTEE_SMC_SHM_CACHED) {
-		pr_err("only normal cached shared memory supported\n");
-		return ERR_PTR(-EINVAL);
-	}
-
-	begin = roundup(res.result.start, PAGE_SIZE);
-	end = rounddown(res.result.start + res.result.size, PAGE_SIZE);
-	paddr = begin;
-	size = end - begin;
-
-	if (size < 2 * OPTEE_SHM_NUM_PRIV_PAGES * PAGE_SIZE) {
-		pr_err("too small shared memory area\n");
-		return ERR_PTR(-EINVAL);
-	}
-
-	va = memremap(paddr, size, MEMREMAP_WB);
-	if (!va) {
-		pr_err("shared memory ioremap failed\n");
-		return ERR_PTR(-EINVAL);
-	}
-	vaddr = (unsigned long)va;
-
-	rc = tee_shm_pool_mgr_alloc_res_mem(vaddr, paddr, sz,
-					    3 /* 8 bytes aligned */);
-	if (IS_ERR(rc))
-		goto err_memunmap;
-	priv_mgr = rc;
-
-	vaddr += sz;
-	paddr += sz;
-	size -= sz;
-
-	rc = tee_shm_pool_mgr_alloc_res_mem(vaddr, paddr, size, PAGE_SHIFT);
-	if (IS_ERR(rc))
-		goto err_free_priv_mgr;
-	dmabuf_mgr = rc;
-
-	rc = tee_shm_pool_alloc(priv_mgr, dmabuf_mgr);
-	if (IS_ERR(rc))
-		goto err_free_dmabuf_mgr;
-
-	*memremaped_shm = va;
-
-	return rc;
-
-err_free_dmabuf_mgr:
-	tee_shm_pool_mgr_destroy(dmabuf_mgr);
-err_free_priv_mgr:
-	tee_shm_pool_mgr_destroy(priv_mgr);
-err_memunmap:
-	memunmap(va);
-	return rc;
-}
-
-/* Simple wrapper functions to be able to use a function pointer */
-static void optee_smccc_smc(unsigned long a0, unsigned long a1,
-			    unsigned long a2, unsigned long a3,
-			    unsigned long a4, unsigned long a5,
-			    unsigned long a6, unsigned long a7,
-			    struct arm_smccc_res *res)
-{
-	arm_smccc_smc(a0, a1, a2, a3, a4, a5, a6, a7, res);
-}
-
-static void optee_smccc_hvc(unsigned long a0, unsigned long a1,
-			    unsigned long a2, unsigned long a3,
-			    unsigned long a4, unsigned long a5,
-			    unsigned long a6, unsigned long a7,
-			    struct arm_smccc_res *res)
-{
-	arm_smccc_hvc(a0, a1, a2, a3, a4, a5, a6, a7, res);
-}
-
-static optee_invoke_fn *get_invoke_func(struct device *dev)
-{
-	const char *method;
-
-	pr_info("probing for conduit method.\n");
-
-	if (device_property_read_string(dev, "method", &method)) {
-		pr_warn("missing \"method\" property\n");
-		return ERR_PTR(-ENXIO);
-	}
-
-	if (!strcmp("hvc", method))
-		return optee_smccc_hvc;
-	else if (!strcmp("smc", method))
-		return optee_smccc_smc;
-
-	pr_warn("invalid \"method\" property: %s\n", method);
-	return ERR_PTR(-EINVAL);
-}
-
-/* optee_remove - Device Removal Routine
- * @pdev: platform device information struct
- *
- * optee_remove is called by platform subsystem to alert the driver
- * that it should release the device
- */
-
-static int optee_remove(struct platform_device *pdev)
-{
-	struct optee *optee = platform_get_drvdata(pdev);
-
 	/* Unregister OP-TEE specific client devices on TEE bus */
 	optee_unregister_devices();
-
-	/*
-	 * Ask OP-TEE to free all cached shared memory objects to decrease
-	 * reference counters and also avoid wild pointers in secure world
-	 * into the old shared memory range.
-	 */
-	optee_disable_shm_cache(optee);

 	/*
 	 * The two devices have to be unregistered before we can free the
···
 	tee_device_unregister(optee->teedev);

 	tee_shm_pool_free(optee->pool);
-	if (optee->memremaped_shm)
-		memunmap(optee->memremaped_shm);
 	optee_wait_queue_exit(&optee->wait_queue);
 	optee_supp_uninit(&optee->supp);
 	mutex_destroy(&optee->call_queue.mutex);
-
-	kfree(optee);
-
-	return 0;
 }

-/* optee_shutdown - Device Removal Routine
- * @pdev: platform device information struct
- *
- * platform_shutdown is called by the platform subsystem to alert
- * the driver that a shutdown, reboot, or kexec is happening and
- * device must be disabled.
- */
-static void optee_shutdown(struct platform_device *pdev)
-{
-	optee_disable_shm_cache(platform_get_drvdata(pdev));
-}
+static int smc_abi_rc;
+static int ffa_abi_rc;

-static int optee_probe(struct platform_device *pdev)
+static int optee_core_init(void)
 {
-	optee_invoke_fn *invoke_fn;
-	struct tee_shm_pool *pool = ERR_PTR(-EINVAL);
-	struct optee *optee = NULL;
-	void *memremaped_shm = NULL;
-	struct tee_device *teedev;
-	u32 sec_caps;
-	int rc;
-
 	/*
 	 * The kernel may have crashed at the same time that all available
 	 * secure world threads were suspended and we cannot reschedule the
···
 	if (is_kdump_kernel())
 		return -ENODEV;

-	invoke_fn = get_invoke_func(&pdev->dev);
-	if (IS_ERR(invoke_fn))
-		return PTR_ERR(invoke_fn);
+	smc_abi_rc = optee_smc_abi_register();
+	ffa_abi_rc = optee_ffa_abi_register();

-	if (!optee_msg_api_uid_is_optee_api(invoke_fn)) {
-		pr_warn("api uid mismatch\n");
-		return -EINVAL;
-	}
-
-	optee_msg_get_os_revision(invoke_fn);
-
-	if (!optee_msg_api_revision_is_compatible(invoke_fn)) {
-		pr_warn("api revision mismatch\n");
-		return -EINVAL;
-	}
-
-	if (!optee_msg_exchange_capabilities(invoke_fn, &sec_caps)) {
-		pr_warn("capabilities mismatch\n");
-		return -EINVAL;
-	}
-
-	/*
-	 * Try to use dynamic shared memory if possible
-	 */
-	if (sec_caps & OPTEE_SMC_SEC_CAP_DYNAMIC_SHM)
-		pool = optee_config_dyn_shm();
-
-	/*
-	 * If dynamic shared memory is not available or failed - try static one
-	 */
-	if (IS_ERR(pool) && (sec_caps & OPTEE_SMC_SEC_CAP_HAVE_RESERVED_SHM))
-		pool = optee_config_shm_memremap(invoke_fn, &memremaped_shm);
-
-	if (IS_ERR(pool))
-		return PTR_ERR(pool);
-
-	optee = kzalloc(sizeof(*optee), GFP_KERNEL);
-	if (!optee) {
-		rc = -ENOMEM;
-		goto err;
-	}
-
-	optee->invoke_fn = invoke_fn;
-	optee->sec_caps = sec_caps;
-
-	teedev = tee_device_alloc(&optee_desc, NULL, pool, optee);
-	if (IS_ERR(teedev)) {
-		rc = PTR_ERR(teedev);
-		goto err;
-	}
-	optee->teedev = teedev;
-
-	teedev = tee_device_alloc(&optee_supp_desc, NULL, pool, optee);
-	if (IS_ERR(teedev)) {
-		rc = PTR_ERR(teedev);
-		goto err;
-	}
-	optee->supp_teedev = teedev;
-
-	rc = tee_device_register(optee->teedev);
-	if (rc)
-		goto err;
-
-	rc = tee_device_register(optee->supp_teedev);
-	if (rc)
-		goto err;
-
-	mutex_init(&optee->call_queue.mutex);
-	INIT_LIST_HEAD(&optee->call_queue.waiters);
-	optee_wait_queue_init(&optee->wait_queue);
-	optee_supp_init(&optee->supp);
-	optee->memremaped_shm = memremaped_shm;
-	optee->pool = pool;
-
-	/*
-	 * Ensure that there are no pre-existing shm objects before enabling
-	 * the shm cache so that there's no chance of receiving an invalid
-	 * address during shutdown. This could occur, for example, if we're
-	 * kexec booting from an older kernel that did not properly cleanup the
-	 * shm cache.
-	 */
-	optee_disable_unmapped_shm_cache(optee);
-
-	optee_enable_shm_cache(optee);
-
-	if (optee->sec_caps & OPTEE_SMC_SEC_CAP_DYNAMIC_SHM)
-		pr_info("dynamic shared memory is enabled\n");
-
-	platform_set_drvdata(pdev, optee);
-
-	rc = optee_enumerate_devices(PTA_CMD_GET_DEVICES);
-	if (rc) {
-		optee_remove(pdev);
-		return rc;
-	}
-
-	pr_info("initialized driver\n");
+	/* If both failed there's no point with this module */
+	if (smc_abi_rc && ffa_abi_rc)
+		return smc_abi_rc;
 	return 0;
-err:
-	if (optee) {
-		/*
-		 * tee_device_unregister() is safe to call even if the
-		 * devices hasn't been registered with
-		 * tee_device_register() yet.
-		 */
-		tee_device_unregister(optee->supp_teedev);
-		tee_device_unregister(optee->teedev);
-		kfree(optee);
-	}
-	if (pool)
-		tee_shm_pool_free(pool);
-	if (memremaped_shm)
-		memunmap(memremaped_shm);
-	return rc;
 }
+module_init(optee_core_init);

-static const struct of_device_id optee_dt_match[] = {
-	{ .compatible = "linaro,optee-tz" },
-	{},
-};
-MODULE_DEVICE_TABLE(of, optee_dt_match);
-
-static struct platform_driver optee_driver = {
-	.probe = optee_probe,
-	.remove = optee_remove,
-	.shutdown = optee_shutdown,
-	.driver = {
-		.name = "optee",
-		.of_match_table = optee_dt_match,
-	},
-};
-module_platform_driver(optee_driver);
+static void optee_core_exit(void)
+{
+	if (!smc_abi_rc)
+		optee_smc_abi_unregister();
+	if (!ffa_abi_rc)
+		optee_ffa_abi_unregister();
+}
+module_exit(optee_core_exit);

 MODULE_AUTHOR("Linaro");
 MODULE_DESCRIPTION("OP-TEE driver");
drivers/tee/optee/ffa_abi.c (new file, +911 lines)
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + /* 3 + * Copyright (c) 2021, Linaro Limited 4 + */ 5 + 6 + #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 7 + 8 + #include <linux/arm_ffa.h> 9 + #include <linux/errno.h> 10 + #include <linux/scatterlist.h> 11 + #include <linux/sched.h> 12 + #include <linux/slab.h> 13 + #include <linux/string.h> 14 + #include <linux/tee_drv.h> 15 + #include <linux/types.h> 16 + #include "optee_private.h" 17 + #include "optee_ffa.h" 18 + #include "optee_rpc_cmd.h" 19 + 20 + /* 21 + * This file implement the FF-A ABI used when communicating with secure world 22 + * OP-TEE OS via FF-A. 23 + * This file is divided into the following sections: 24 + * 1. Maintain a hash table for lookup of a global FF-A memory handle 25 + * 2. Convert between struct tee_param and struct optee_msg_param 26 + * 3. Low level support functions to register shared memory in secure world 27 + * 4. Dynamic shared memory pool based on alloc_pages() 28 + * 5. Do a normal scheduled call into secure world 29 + * 6. Driver initialization. 30 + */ 31 + 32 + /* 33 + * 1. Maintain a hash table for lookup of a global FF-A memory handle 34 + * 35 + * FF-A assigns a global memory handle for each piece shared memory. 36 + * This handle is then used when communicating with secure world. 
37 + * 38 + * Main functions are optee_shm_add_ffa_handle() and optee_shm_rem_ffa_handle() 39 + */ 40 + struct shm_rhash { 41 + struct tee_shm *shm; 42 + u64 global_id; 43 + struct rhash_head linkage; 44 + }; 45 + 46 + static void rh_free_fn(void *ptr, void *arg) 47 + { 48 + kfree(ptr); 49 + } 50 + 51 + static const struct rhashtable_params shm_rhash_params = { 52 + .head_offset = offsetof(struct shm_rhash, linkage), 53 + .key_len = sizeof(u64), 54 + .key_offset = offsetof(struct shm_rhash, global_id), 55 + .automatic_shrinking = true, 56 + }; 57 + 58 + static struct tee_shm *optee_shm_from_ffa_handle(struct optee *optee, 59 + u64 global_id) 60 + { 61 + struct tee_shm *shm = NULL; 62 + struct shm_rhash *r; 63 + 64 + mutex_lock(&optee->ffa.mutex); 65 + r = rhashtable_lookup_fast(&optee->ffa.global_ids, &global_id, 66 + shm_rhash_params); 67 + if (r) 68 + shm = r->shm; 69 + mutex_unlock(&optee->ffa.mutex); 70 + 71 + return shm; 72 + } 73 + 74 + static int optee_shm_add_ffa_handle(struct optee *optee, struct tee_shm *shm, 75 + u64 global_id) 76 + { 77 + struct shm_rhash *r; 78 + int rc; 79 + 80 + r = kmalloc(sizeof(*r), GFP_KERNEL); 81 + if (!r) 82 + return -ENOMEM; 83 + r->shm = shm; 84 + r->global_id = global_id; 85 + 86 + mutex_lock(&optee->ffa.mutex); 87 + rc = rhashtable_lookup_insert_fast(&optee->ffa.global_ids, &r->linkage, 88 + shm_rhash_params); 89 + mutex_unlock(&optee->ffa.mutex); 90 + 91 + if (rc) 92 + kfree(r); 93 + 94 + return rc; 95 + } 96 + 97 + static int optee_shm_rem_ffa_handle(struct optee *optee, u64 global_id) 98 + { 99 + struct shm_rhash *r; 100 + int rc = -ENOENT; 101 + 102 + mutex_lock(&optee->ffa.mutex); 103 + r = rhashtable_lookup_fast(&optee->ffa.global_ids, &global_id, 104 + shm_rhash_params); 105 + if (r) 106 + rc = rhashtable_remove_fast(&optee->ffa.global_ids, 107 + &r->linkage, shm_rhash_params); 108 + mutex_unlock(&optee->ffa.mutex); 109 + 110 + if (!rc) 111 + kfree(r); 112 + 113 + return rc; 114 + } 115 + 116 + /* 117 + * 2. 
Convert between struct tee_param and struct optee_msg_param 118 + * 119 + * optee_ffa_from_msg_param() and optee_ffa_to_msg_param() are the main 120 + * functions. 121 + */ 122 + 123 + static void from_msg_param_ffa_mem(struct optee *optee, struct tee_param *p, 124 + u32 attr, const struct optee_msg_param *mp) 125 + { 126 + struct tee_shm *shm = NULL; 127 + u64 offs_high = 0; 128 + u64 offs_low = 0; 129 + 130 + p->attr = TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT + 131 + attr - OPTEE_MSG_ATTR_TYPE_FMEM_INPUT; 132 + p->u.memref.size = mp->u.fmem.size; 133 + 134 + if (mp->u.fmem.global_id != OPTEE_MSG_FMEM_INVALID_GLOBAL_ID) 135 + shm = optee_shm_from_ffa_handle(optee, mp->u.fmem.global_id); 136 + p->u.memref.shm = shm; 137 + 138 + if (shm) { 139 + offs_low = mp->u.fmem.offs_low; 140 + offs_high = mp->u.fmem.offs_high; 141 + } 142 + p->u.memref.shm_offs = offs_low | offs_high << 32; 143 + } 144 + 145 + /** 146 + * optee_ffa_from_msg_param() - convert from OPTEE_MSG parameters to 147 + * struct tee_param 148 + * @optee: main service struct 149 + * @params: subsystem internal parameter representation 150 + * @num_params: number of elements in the parameter arrays 151 + * @msg_params: OPTEE_MSG parameters 152 + * 153 + * Returns 0 on success or <0 on failure 154 + */ 155 + static int optee_ffa_from_msg_param(struct optee *optee, 156 + struct tee_param *params, size_t num_params, 157 + const struct optee_msg_param *msg_params) 158 + { 159 + size_t n; 160 + 161 + for (n = 0; n < num_params; n++) { 162 + struct tee_param *p = params + n; 163 + const struct optee_msg_param *mp = msg_params + n; 164 + u32 attr = mp->attr & OPTEE_MSG_ATTR_TYPE_MASK; 165 + 166 + switch (attr) { 167 + case OPTEE_MSG_ATTR_TYPE_NONE: 168 + p->attr = TEE_IOCTL_PARAM_ATTR_TYPE_NONE; 169 + memset(&p->u, 0, sizeof(p->u)); 170 + break; 171 + case OPTEE_MSG_ATTR_TYPE_VALUE_INPUT: 172 + case OPTEE_MSG_ATTR_TYPE_VALUE_OUTPUT: 173 + case OPTEE_MSG_ATTR_TYPE_VALUE_INOUT: 174 + optee_from_msg_param_value(p, 
attr, mp); 175 + break; 176 + case OPTEE_MSG_ATTR_TYPE_FMEM_INPUT: 177 + case OPTEE_MSG_ATTR_TYPE_FMEM_OUTPUT: 178 + case OPTEE_MSG_ATTR_TYPE_FMEM_INOUT: 179 + from_msg_param_ffa_mem(optee, p, attr, mp); 180 + break; 181 + default: 182 + return -EINVAL; 183 + } 184 + } 185 + 186 + return 0; 187 + } 188 + 189 + static int to_msg_param_ffa_mem(struct optee_msg_param *mp, 190 + const struct tee_param *p) 191 + { 192 + struct tee_shm *shm = p->u.memref.shm; 193 + 194 + mp->attr = OPTEE_MSG_ATTR_TYPE_FMEM_INPUT + p->attr - 195 + TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT; 196 + 197 + if (shm) { 198 + u64 shm_offs = p->u.memref.shm_offs; 199 + 200 + mp->u.fmem.internal_offs = shm->offset; 201 + 202 + mp->u.fmem.offs_low = shm_offs; 203 + mp->u.fmem.offs_high = shm_offs >> 32; 204 + /* Check that the entire offset could be stored. */ 205 + if (mp->u.fmem.offs_high != shm_offs >> 32) 206 + return -EINVAL; 207 + 208 + mp->u.fmem.global_id = shm->sec_world_id; 209 + } else { 210 + memset(&mp->u, 0, sizeof(mp->u)); 211 + mp->u.fmem.global_id = OPTEE_MSG_FMEM_INVALID_GLOBAL_ID; 212 + } 213 + mp->u.fmem.size = p->u.memref.size; 214 + 215 + return 0; 216 + } 217 + 218 + /** 219 + * optee_ffa_to_msg_param() - convert from struct tee_params to OPTEE_MSG 220 + * parameters 221 + * @optee: main service struct 222 + * @msg_params: OPTEE_MSG parameters 223 + * @num_params: number of elements in the parameter arrays 224 + * @params: subsystem itnernal parameter representation 225 + * Returns 0 on success or <0 on failure 226 + */ 227 + static int optee_ffa_to_msg_param(struct optee *optee, 228 + struct optee_msg_param *msg_params, 229 + size_t num_params, 230 + const struct tee_param *params) 231 + { 232 + size_t n; 233 + 234 + for (n = 0; n < num_params; n++) { 235 + const struct tee_param *p = params + n; 236 + struct optee_msg_param *mp = msg_params + n; 237 + 238 + switch (p->attr) { 239 + case TEE_IOCTL_PARAM_ATTR_TYPE_NONE: 240 + mp->attr = TEE_IOCTL_PARAM_ATTR_TYPE_NONE; 241 + 
memset(&mp->u, 0, sizeof(mp->u)); 242 + break; 243 + case TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT: 244 + case TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_OUTPUT: 245 + case TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INOUT: 246 + optee_to_msg_param_value(mp, p); 247 + break; 248 + case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT: 249 + case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_OUTPUT: 250 + case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INOUT: 251 + if (to_msg_param_ffa_mem(mp, p)) 252 + return -EINVAL; 253 + break; 254 + default: 255 + return -EINVAL; 256 + } 257 + } 258 + 259 + return 0; 260 + } 261 + 262 + /* 263 + * 3. Low level support functions to register shared memory in secure world 264 + * 265 + * Functions to register and unregister shared memory both for normal 266 + * clients and for tee-supplicant. 267 + */ 268 + 269 + static int optee_ffa_shm_register(struct tee_context *ctx, struct tee_shm *shm, 270 + struct page **pages, size_t num_pages, 271 + unsigned long start) 272 + { 273 + struct optee *optee = tee_get_drvdata(ctx->teedev); 274 + const struct ffa_dev_ops *ffa_ops = optee->ffa.ffa_ops; 275 + struct ffa_device *ffa_dev = optee->ffa.ffa_dev; 276 + struct ffa_mem_region_attributes mem_attr = { 277 + .receiver = ffa_dev->vm_id, 278 + .attrs = FFA_MEM_RW, 279 + }; 280 + struct ffa_mem_ops_args args = { 281 + .use_txbuf = true, 282 + .attrs = &mem_attr, 283 + .nattrs = 1, 284 + }; 285 + struct sg_table sgt; 286 + int rc; 287 + 288 + rc = optee_check_mem_type(start, num_pages); 289 + if (rc) 290 + return rc; 291 + 292 + rc = sg_alloc_table_from_pages(&sgt, pages, num_pages, 0, 293 + num_pages * PAGE_SIZE, GFP_KERNEL); 294 + if (rc) 295 + return rc; 296 + args.sg = sgt.sgl; 297 + rc = ffa_ops->memory_share(ffa_dev, &args); 298 + sg_free_table(&sgt); 299 + if (rc) 300 + return rc; 301 + 302 + rc = optee_shm_add_ffa_handle(optee, shm, args.g_handle); 303 + if (rc) { 304 + ffa_ops->memory_reclaim(args.g_handle, 0); 305 + return rc; 306 + } 307 + 308 + shm->sec_world_id = args.g_handle; 309 + 310 + 
return 0; 311 + } 312 + 313 + static int optee_ffa_shm_unregister(struct tee_context *ctx, 314 + struct tee_shm *shm) 315 + { 316 + struct optee *optee = tee_get_drvdata(ctx->teedev); 317 + const struct ffa_dev_ops *ffa_ops = optee->ffa.ffa_ops; 318 + struct ffa_device *ffa_dev = optee->ffa.ffa_dev; 319 + u64 global_handle = shm->sec_world_id; 320 + struct ffa_send_direct_data data = { 321 + .data0 = OPTEE_FFA_UNREGISTER_SHM, 322 + .data1 = (u32)global_handle, 323 + .data2 = (u32)(global_handle >> 32) 324 + }; 325 + int rc; 326 + 327 + optee_shm_rem_ffa_handle(optee, global_handle); 328 + shm->sec_world_id = 0; 329 + 330 + rc = ffa_ops->sync_send_receive(ffa_dev, &data); 331 + if (rc) 332 + pr_err("Unregister SHM id 0x%llx rc %d\n", global_handle, rc); 333 + 334 + rc = ffa_ops->memory_reclaim(global_handle, 0); 335 + if (rc) 336 + pr_err("mem_reclaim: 0x%llx %d", global_handle, rc); 337 + 338 + return rc; 339 + } 340 + 341 + static int optee_ffa_shm_unregister_supp(struct tee_context *ctx, 342 + struct tee_shm *shm) 343 + { 344 + struct optee *optee = tee_get_drvdata(ctx->teedev); 345 + const struct ffa_dev_ops *ffa_ops = optee->ffa.ffa_ops; 346 + u64 global_handle = shm->sec_world_id; 347 + int rc; 348 + 349 + /* 350 + * We're skipping the OPTEE_FFA_YIELDING_CALL_UNREGISTER_SHM call 351 + * since this is OP-TEE freeing via RPC so it has already retired 352 + * this ID. 353 + */ 354 + 355 + optee_shm_rem_ffa_handle(optee, global_handle); 356 + rc = ffa_ops->memory_reclaim(global_handle, 0); 357 + if (rc) 358 + pr_err("mem_reclaim: 0x%llx %d", global_handle, rc); 359 + 360 + shm->sec_world_id = 0; 361 + 362 + return rc; 363 + } 364 + 365 + /* 366 + * 4. Dynamic shared memory pool based on alloc_pages() 367 + * 368 + * Implements an OP-TEE specific shared memory pool. 369 + * The main function is optee_ffa_shm_pool_alloc_pages(). 
370 + */ 371 + 372 + static int pool_ffa_op_alloc(struct tee_shm_pool_mgr *poolm, 373 + struct tee_shm *shm, size_t size) 374 + { 375 + return optee_pool_op_alloc_helper(poolm, shm, size, 376 + optee_ffa_shm_register); 377 + } 378 + 379 + static void pool_ffa_op_free(struct tee_shm_pool_mgr *poolm, 380 + struct tee_shm *shm) 381 + { 382 + optee_ffa_shm_unregister(shm->ctx, shm); 383 + free_pages((unsigned long)shm->kaddr, get_order(shm->size)); 384 + shm->kaddr = NULL; 385 + } 386 + 387 + static void pool_ffa_op_destroy_poolmgr(struct tee_shm_pool_mgr *poolm) 388 + { 389 + kfree(poolm); 390 + } 391 + 392 + static const struct tee_shm_pool_mgr_ops pool_ffa_ops = { 393 + .alloc = pool_ffa_op_alloc, 394 + .free = pool_ffa_op_free, 395 + .destroy_poolmgr = pool_ffa_op_destroy_poolmgr, 396 + }; 397 + 398 + /** 399 + * optee_ffa_shm_pool_alloc_pages() - create page-based allocator pool 400 + * 401 + * This pool is used with OP-TEE over FF-A. In this case command buffers 402 + * and such are allocated from the kernel's own memory. 403 + */ 404 + static struct tee_shm_pool_mgr *optee_ffa_shm_pool_alloc_pages(void) 405 + { 406 + struct tee_shm_pool_mgr *mgr = kzalloc(sizeof(*mgr), GFP_KERNEL); 407 + 408 + if (!mgr) 409 + return ERR_PTR(-ENOMEM); 410 + 411 + mgr->ops = &pool_ffa_ops; 412 + 413 + return mgr; 414 + } 415 + 416 + /* 417 + * 5. Do a normal scheduled call into secure world 418 + * 419 + * The function optee_ffa_do_call_with_arg() performs a normal scheduled 420 + * call into secure world. During this call secure world may request help 421 + * from normal world using RPCs, Remote Procedure Calls. This includes 422 + * delivery of non-secure interrupts to, for instance, allow rescheduling of 423 + * the current task. 
424 + */ 425 + 426 + static void handle_ffa_rpc_func_cmd_shm_alloc(struct tee_context *ctx, 427 + struct optee_msg_arg *arg) 428 + { 429 + struct tee_shm *shm; 430 + 431 + if (arg->num_params != 1 || 432 + arg->params[0].attr != OPTEE_MSG_ATTR_TYPE_VALUE_INPUT) { 433 + arg->ret = TEEC_ERROR_BAD_PARAMETERS; 434 + return; 435 + } 436 + 437 + switch (arg->params[0].u.value.a) { 438 + case OPTEE_RPC_SHM_TYPE_APPL: 439 + shm = optee_rpc_cmd_alloc_suppl(ctx, arg->params[0].u.value.b); 440 + break; 441 + case OPTEE_RPC_SHM_TYPE_KERNEL: 442 + shm = tee_shm_alloc(ctx, arg->params[0].u.value.b, 443 + TEE_SHM_MAPPED | TEE_SHM_PRIV); 444 + break; 445 + default: 446 + arg->ret = TEEC_ERROR_BAD_PARAMETERS; 447 + return; 448 + } 449 + 450 + if (IS_ERR(shm)) { 451 + arg->ret = TEEC_ERROR_OUT_OF_MEMORY; 452 + return; 453 + } 454 + 455 + arg->params[0] = (struct optee_msg_param){ 456 + .attr = OPTEE_MSG_ATTR_TYPE_FMEM_OUTPUT, 457 + .u.fmem.size = tee_shm_get_size(shm), 458 + .u.fmem.global_id = shm->sec_world_id, 459 + .u.fmem.internal_offs = shm->offset, 460 + }; 461 + 462 + arg->ret = TEEC_SUCCESS; 463 + } 464 + 465 + static void handle_ffa_rpc_func_cmd_shm_free(struct tee_context *ctx, 466 + struct optee *optee, 467 + struct optee_msg_arg *arg) 468 + { 469 + struct tee_shm *shm; 470 + 471 + if (arg->num_params != 1 || 472 + arg->params[0].attr != OPTEE_MSG_ATTR_TYPE_VALUE_INPUT) 473 + goto err_bad_param; 474 + 475 + shm = optee_shm_from_ffa_handle(optee, arg->params[0].u.value.b); 476 + if (!shm) 477 + goto err_bad_param; 478 + switch (arg->params[0].u.value.a) { 479 + case OPTEE_RPC_SHM_TYPE_APPL: 480 + optee_rpc_cmd_free_suppl(ctx, shm); 481 + break; 482 + case OPTEE_RPC_SHM_TYPE_KERNEL: 483 + tee_shm_free(shm); 484 + break; 485 + default: 486 + goto err_bad_param; 487 + } 488 + arg->ret = TEEC_SUCCESS; 489 + return; 490 + 491 + err_bad_param: 492 + arg->ret = TEEC_ERROR_BAD_PARAMETERS; 493 + } 494 + 495 + static void handle_ffa_rpc_func_cmd(struct tee_context *ctx, 496 + 
struct optee_msg_arg *arg) 497 + { 498 + struct optee *optee = tee_get_drvdata(ctx->teedev); 499 + 500 + arg->ret_origin = TEEC_ORIGIN_COMMS; 501 + switch (arg->cmd) { 502 + case OPTEE_RPC_CMD_SHM_ALLOC: 503 + handle_ffa_rpc_func_cmd_shm_alloc(ctx, arg); 504 + break; 505 + case OPTEE_RPC_CMD_SHM_FREE: 506 + handle_ffa_rpc_func_cmd_shm_free(ctx, optee, arg); 507 + break; 508 + default: 509 + optee_rpc_cmd(ctx, optee, arg); 510 + } 511 + } 512 + 513 + static void optee_handle_ffa_rpc(struct tee_context *ctx, u32 cmd, 514 + struct optee_msg_arg *arg) 515 + { 516 + switch (cmd) { 517 + case OPTEE_FFA_YIELDING_CALL_RETURN_RPC_CMD: 518 + handle_ffa_rpc_func_cmd(ctx, arg); 519 + break; 520 + case OPTEE_FFA_YIELDING_CALL_RETURN_INTERRUPT: 521 + /* Interrupt delivered by now */ 522 + break; 523 + default: 524 + pr_warn("Unknown RPC func 0x%x\n", cmd); 525 + break; 526 + } 527 + } 528 + 529 + static int optee_ffa_yielding_call(struct tee_context *ctx, 530 + struct ffa_send_direct_data *data, 531 + struct optee_msg_arg *rpc_arg) 532 + { 533 + struct optee *optee = tee_get_drvdata(ctx->teedev); 534 + const struct ffa_dev_ops *ffa_ops = optee->ffa.ffa_ops; 535 + struct ffa_device *ffa_dev = optee->ffa.ffa_dev; 536 + struct optee_call_waiter w; 537 + u32 cmd = data->data0; 538 + u32 w4 = data->data1; 539 + u32 w5 = data->data2; 540 + u32 w6 = data->data3; 541 + int rc; 542 + 543 + /* Initialize waiter */ 544 + optee_cq_wait_init(&optee->call_queue, &w); 545 + while (true) { 546 + rc = ffa_ops->sync_send_receive(ffa_dev, data); 547 + if (rc) 548 + goto done; 549 + 550 + switch ((int)data->data0) { 551 + case TEEC_SUCCESS: 552 + break; 553 + case TEEC_ERROR_BUSY: 554 + if (cmd == OPTEE_FFA_YIELDING_CALL_RESUME) { 555 + rc = -EIO; 556 + goto done; 557 + } 558 + 559 + /* 560 + * Out of threads in secure world, wait for a thread 561 + * become available. 
562 + */ 563 + optee_cq_wait_for_completion(&optee->call_queue, &w); 564 + data->data0 = cmd; 565 + data->data1 = w4; 566 + data->data2 = w5; 567 + data->data3 = w6; 568 + continue; 569 + default: 570 + rc = -EIO; 571 + goto done; 572 + } 573 + 574 + if (data->data1 == OPTEE_FFA_YIELDING_CALL_RETURN_DONE) 575 + goto done; 576 + 577 + /* 578 + * OP-TEE has returned with an RPC request. 579 + * 580 + * Note that data->data4 (passed in register w7) is already 581 + * filled in by ffa_ops->sync_send_receive() returning 582 + * above. 583 + */ 584 + cond_resched(); 585 + optee_handle_ffa_rpc(ctx, data->data1, rpc_arg); 586 + cmd = OPTEE_FFA_YIELDING_CALL_RESUME; 587 + data->data0 = cmd; 588 + data->data1 = 0; 589 + data->data2 = 0; 590 + data->data3 = 0; 591 + } 592 + done: 593 + /* 594 + * We're done with our thread in secure world, if there are any 595 + * thread waiters, wake up one. 596 + */ 597 + optee_cq_wait_final(&optee->call_queue, &w); 598 + 599 + return rc; 600 + } 601 + 602 + /** 603 + * optee_ffa_do_call_with_arg() - Do an FF-A call to enter OP-TEE in secure world 604 + * @ctx: calling context 605 + * @shm: shared memory holding the message to pass to secure world 606 + * 607 + * Does an FF-A call to OP-TEE in secure world and handles any resulting 608 + * Remote Procedure Calls (RPC) from OP-TEE. 
609 + * 610 + * Returns the return code from FF-A, 0 on success 611 + */ 612 + 613 + static int optee_ffa_do_call_with_arg(struct tee_context *ctx, 614 + struct tee_shm *shm) 615 + { 616 + struct ffa_send_direct_data data = { 617 + .data0 = OPTEE_FFA_YIELDING_CALL_WITH_ARG, 618 + .data1 = (u32)shm->sec_world_id, 619 + .data2 = (u32)(shm->sec_world_id >> 32), 620 + .data3 = shm->offset, 621 + }; 622 + struct optee_msg_arg *arg = tee_shm_get_va(shm, 0); 623 + unsigned int rpc_arg_offs = OPTEE_MSG_GET_ARG_SIZE(arg->num_params); 624 + struct optee_msg_arg *rpc_arg = tee_shm_get_va(shm, rpc_arg_offs); 625 + 626 + return optee_ffa_yielding_call(ctx, &data, rpc_arg); 627 + } 628 + 629 + /* 630 + * 6. Driver initialization 631 + * 632 + * During driver initialization the OP-TEE Secure Partition is probed 633 + * to find out which features it supports so the driver can be initialized 634 + * with a matching configuration. 635 + */ 636 + 637 + static bool optee_ffa_api_is_compatbile(struct ffa_device *ffa_dev, 638 + const struct ffa_dev_ops *ops) 639 + { 640 + struct ffa_send_direct_data data = { OPTEE_FFA_GET_API_VERSION }; 641 + int rc; 642 + 643 + ops->mode_32bit_set(ffa_dev); 644 + 645 + rc = ops->sync_send_receive(ffa_dev, &data); 646 + if (rc) { 647 + pr_err("Unexpected error %d\n", rc); 648 + return false; 649 + } 650 + if (data.data0 != OPTEE_FFA_VERSION_MAJOR || 651 + data.data1 < OPTEE_FFA_VERSION_MINOR) { 652 + pr_err("Incompatible OP-TEE API version %lu.%lu", 653 + data.data0, data.data1); 654 + return false; 655 + } 656 + 657 + data = (struct ffa_send_direct_data){ OPTEE_FFA_GET_OS_VERSION }; 658 + rc = ops->sync_send_receive(ffa_dev, &data); 659 + if (rc) { 660 + pr_err("Unexpected error %d\n", rc); 661 + return false; 662 + } 663 + if (data.data2) 664 + pr_info("revision %lu.%lu (%08lx)", 665 + data.data0, data.data1, data.data2); 666 + else 667 + pr_info("revision %lu.%lu", data.data0, data.data1); 668 + 669 + return true; 670 + } 671 + 672 + static bool 
optee_ffa_exchange_caps(struct ffa_device *ffa_dev, 673 + const struct ffa_dev_ops *ops, 674 + unsigned int *rpc_arg_count) 675 + { 676 + struct ffa_send_direct_data data = { OPTEE_FFA_EXCHANGE_CAPABILITIES }; 677 + int rc; 678 + 679 + rc = ops->sync_send_receive(ffa_dev, &data); 680 + if (rc) { 681 + pr_err("Unexpected error %d", rc); 682 + return false; 683 + } 684 + if (data.data0) { 685 + pr_err("Unexpected exchange error %lu", data.data0); 686 + return false; 687 + } 688 + 689 + *rpc_arg_count = (u8)data.data1; 690 + 691 + return true; 692 + } 693 + 694 + static struct tee_shm_pool *optee_ffa_config_dyn_shm(void) 695 + { 696 + struct tee_shm_pool_mgr *priv_mgr; 697 + struct tee_shm_pool_mgr *dmabuf_mgr; 698 + void *rc; 699 + 700 + rc = optee_ffa_shm_pool_alloc_pages(); 701 + if (IS_ERR(rc)) 702 + return rc; 703 + priv_mgr = rc; 704 + 705 + rc = optee_ffa_shm_pool_alloc_pages(); 706 + if (IS_ERR(rc)) { 707 + tee_shm_pool_mgr_destroy(priv_mgr); 708 + return rc; 709 + } 710 + dmabuf_mgr = rc; 711 + 712 + rc = tee_shm_pool_alloc(priv_mgr, dmabuf_mgr); 713 + if (IS_ERR(rc)) { 714 + tee_shm_pool_mgr_destroy(priv_mgr); 715 + tee_shm_pool_mgr_destroy(dmabuf_mgr); 716 + } 717 + 718 + return rc; 719 + } 720 + 721 + static void optee_ffa_get_version(struct tee_device *teedev, 722 + struct tee_ioctl_version_data *vers) 723 + { 724 + struct tee_ioctl_version_data v = { 725 + .impl_id = TEE_IMPL_ID_OPTEE, 726 + .impl_caps = TEE_OPTEE_CAP_TZ, 727 + .gen_caps = TEE_GEN_CAP_GP | TEE_GEN_CAP_REG_MEM | 728 + TEE_GEN_CAP_MEMREF_NULL, 729 + }; 730 + 731 + *vers = v; 732 + } 733 + 734 + static int optee_ffa_open(struct tee_context *ctx) 735 + { 736 + return optee_open(ctx, true); 737 + } 738 + 739 + static const struct tee_driver_ops optee_ffa_clnt_ops = { 740 + .get_version = optee_ffa_get_version, 741 + .open = optee_ffa_open, 742 + .release = optee_release, 743 + .open_session = optee_open_session, 744 + .close_session = optee_close_session, 745 + .invoke_func = 
optee_invoke_func, 746 + .cancel_req = optee_cancel_req, 747 + .shm_register = optee_ffa_shm_register, 748 + .shm_unregister = optee_ffa_shm_unregister, 749 + }; 750 + 751 + static const struct tee_desc optee_ffa_clnt_desc = { 752 + .name = DRIVER_NAME "-ffa-clnt", 753 + .ops = &optee_ffa_clnt_ops, 754 + .owner = THIS_MODULE, 755 + }; 756 + 757 + static const struct tee_driver_ops optee_ffa_supp_ops = { 758 + .get_version = optee_ffa_get_version, 759 + .open = optee_ffa_open, 760 + .release = optee_release_supp, 761 + .supp_recv = optee_supp_recv, 762 + .supp_send = optee_supp_send, 763 + .shm_register = optee_ffa_shm_register, /* same as for clnt ops */ 764 + .shm_unregister = optee_ffa_shm_unregister_supp, 765 + }; 766 + 767 + static const struct tee_desc optee_ffa_supp_desc = { 768 + .name = DRIVER_NAME "-ffa-supp", 769 + .ops = &optee_ffa_supp_ops, 770 + .owner = THIS_MODULE, 771 + .flags = TEE_DESC_PRIVILEGED, 772 + }; 773 + 774 + static const struct optee_ops optee_ffa_ops = { 775 + .do_call_with_arg = optee_ffa_do_call_with_arg, 776 + .to_msg_param = optee_ffa_to_msg_param, 777 + .from_msg_param = optee_ffa_from_msg_param, 778 + }; 779 + 780 + static void optee_ffa_remove(struct ffa_device *ffa_dev) 781 + { 782 + struct optee *optee = ffa_dev->dev.driver_data; 783 + 784 + optee_remove_common(optee); 785 + 786 + mutex_destroy(&optee->ffa.mutex); 787 + rhashtable_free_and_destroy(&optee->ffa.global_ids, rh_free_fn, NULL); 788 + 789 + kfree(optee); 790 + } 791 + 792 + static int optee_ffa_probe(struct ffa_device *ffa_dev) 793 + { 794 + const struct ffa_dev_ops *ffa_ops; 795 + unsigned int rpc_arg_count; 796 + struct tee_device *teedev; 797 + struct optee *optee; 798 + int rc; 799 + 800 + ffa_ops = ffa_dev_ops_get(ffa_dev); 801 + if (!ffa_ops) { 802 + pr_warn("failed \"method\" init: ffa\n"); 803 + return -ENOENT; 804 + } 805 + 806 + if (!optee_ffa_api_is_compatbile(ffa_dev, ffa_ops)) 807 + return -EINVAL; 808 + 809 + if (!optee_ffa_exchange_caps(ffa_dev, 
ffa_ops, &rpc_arg_count)) 810 + return -EINVAL; 811 + 812 + optee = kzalloc(sizeof(*optee), GFP_KERNEL); 813 + if (!optee) { 814 + rc = -ENOMEM; 815 + goto err; 816 + } 817 + optee->pool = optee_ffa_config_dyn_shm(); 818 + if (IS_ERR(optee->pool)) { 819 + rc = PTR_ERR(optee->pool); 820 + optee->pool = NULL; 821 + goto err; 822 + } 823 + 824 + optee->ops = &optee_ffa_ops; 825 + optee->ffa.ffa_dev = ffa_dev; 826 + optee->ffa.ffa_ops = ffa_ops; 827 + optee->rpc_arg_count = rpc_arg_count; 828 + 829 + teedev = tee_device_alloc(&optee_ffa_clnt_desc, NULL, optee->pool, 830 + optee); 831 + if (IS_ERR(teedev)) { 832 + rc = PTR_ERR(teedev); 833 + goto err; 834 + } 835 + optee->teedev = teedev; 836 + 837 + teedev = tee_device_alloc(&optee_ffa_supp_desc, NULL, optee->pool, 838 + optee); 839 + if (IS_ERR(teedev)) { 840 + rc = PTR_ERR(teedev); 841 + goto err; 842 + } 843 + optee->supp_teedev = teedev; 844 + 845 + rc = tee_device_register(optee->teedev); 846 + if (rc) 847 + goto err; 848 + 849 + rc = tee_device_register(optee->supp_teedev); 850 + if (rc) 851 + goto err; 852 + 853 + rc = rhashtable_init(&optee->ffa.global_ids, &shm_rhash_params); 854 + if (rc) 855 + goto err; 856 + mutex_init(&optee->ffa.mutex); 857 + mutex_init(&optee->call_queue.mutex); 858 + INIT_LIST_HEAD(&optee->call_queue.waiters); 859 + optee_wait_queue_init(&optee->wait_queue); 860 + optee_supp_init(&optee->supp); 861 + ffa_dev_set_drvdata(ffa_dev, optee); 862 + 863 + rc = optee_enumerate_devices(PTA_CMD_GET_DEVICES); 864 + if (rc) { 865 + optee_ffa_remove(ffa_dev); 866 + return rc; 867 + } 868 + 869 + pr_info("initialized driver\n"); 870 + return 0; 871 + err: 872 + /* 873 + * tee_device_unregister() is safe to call even if the 874 + * devices hasn't been registered with 875 + * tee_device_register() yet. 
876 + */ 877 + tee_device_unregister(optee->supp_teedev); 878 + tee_device_unregister(optee->teedev); 879 + if (optee->pool) 880 + tee_shm_pool_free(optee->pool); 881 + kfree(optee); 882 + return rc; 883 + } 884 + 885 + static const struct ffa_device_id optee_ffa_device_id[] = { 886 + /* 486178e0-e7f8-11e3-bc5e0002a5d5c51b */ 887 + { UUID_INIT(0x486178e0, 0xe7f8, 0x11e3, 888 + 0xbc, 0x5e, 0x00, 0x02, 0xa5, 0xd5, 0xc5, 0x1b) }, 889 + {} 890 + }; 891 + 892 + static struct ffa_driver optee_ffa_driver = { 893 + .name = "optee", 894 + .probe = optee_ffa_probe, 895 + .remove = optee_ffa_remove, 896 + .id_table = optee_ffa_device_id, 897 + }; 898 + 899 + int optee_ffa_abi_register(void) 900 + { 901 + if (IS_REACHABLE(CONFIG_ARM_FFA_TRANSPORT)) 902 + return ffa_register(&optee_ffa_driver); 903 + else 904 + return -EOPNOTSUPP; 905 + } 906 + 907 + void optee_ffa_abi_unregister(void) 908 + { 909 + if (IS_REACHABLE(CONFIG_ARM_FFA_TRANSPORT)) 910 + ffa_unregister(&optee_ffa_driver); 911 + }
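Several of the calls above pass the 64-bit FF-A global shared-memory handle split across two 32-bit data registers (data1/data2, i.e. w4/w5 in the ABI comments), for example in optee_ffa_shm_unregister() and optee_ffa_do_call_with_arg(). A minimal standalone sketch of that packing, not part of the patch itself:

```c
#include <stdint.h>

/*
 * Sketch (illustration only, not kernel code): pack a 64-bit FF-A
 * global handle into the two 32-bit register values used by
 * OPTEE_FFA_UNREGISTER_SHM and OPTEE_FFA_YIELDING_CALL_WITH_ARG.
 */
static inline void ffa_handle_to_regs(uint64_t handle,
				      uint32_t *lo, uint32_t *hi)
{
	*lo = (uint32_t)handle;		/* w4: lower bits  */
	*hi = (uint32_t)(handle >> 32);	/* w5: higher bits */
}

/* The receiving side reassembles the handle from the two registers. */
static inline uint64_t ffa_regs_to_handle(uint32_t lo, uint32_t hi)
{
	return (uint64_t)hi << 32 | lo;
}
```

The same round trip happens in both directions: the driver splits shm->sec_world_id when entering secure world, and OP-TEE reassembles it on the other side.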
+153
drivers/tee/optee/optee_ffa.h
··· 1 + /* SPDX-License-Identifier: BSD-2-Clause */ 2 + /* 3 + * Copyright (c) 2019-2021, Linaro Limited 4 + */ 5 + 6 + /* 7 + * This file is exported by OP-TEE and is kept in sync between secure world 8 + * and normal world drivers. We're using ARM FF-A 1.0 specification. 9 + */ 10 + 11 + #ifndef __OPTEE_FFA_H 12 + #define __OPTEE_FFA_H 13 + 14 + #include <linux/arm_ffa.h> 15 + 16 + /* 17 + * Normal world sends requests with FFA_MSG_SEND_DIRECT_REQ and 18 + * responses are returned with FFA_MSG_SEND_DIRECT_RESP for normal 19 + * messages. 20 + * 21 + * All requests with FFA_MSG_SEND_DIRECT_REQ and FFA_MSG_SEND_DIRECT_RESP 22 + * are using the AArch32 SMC calling convention with register usage as 23 + * defined in FF-A specification: 24 + * w0: Function ID (0x8400006F or 0x84000070) 25 + * w1: Source/Destination IDs 26 + * w2: Reserved (MBZ) 27 + * w3-w7: Implementation defined, free to be used below 28 + */ 29 + 30 + #define OPTEE_FFA_VERSION_MAJOR 1 31 + #define OPTEE_FFA_VERSION_MINOR 0 32 + 33 + #define OPTEE_FFA_BLOCKING_CALL(id) (id) 34 + #define OPTEE_FFA_YIELDING_CALL_BIT 31 35 + #define OPTEE_FFA_YIELDING_CALL(id) ((id) | BIT(OPTEE_FFA_YIELDING_CALL_BIT)) 36 + 37 + /* 38 + * Returns the API version implemented, currently follows the FF-A version. 39 + * Call register usage: 40 + * w3: Service ID, OPTEE_FFA_GET_API_VERSION 41 + * w4-w7: Not used (MBZ) 42 + * 43 + * Return register usage: 44 + * w3: OPTEE_FFA_VERSION_MAJOR 45 + * w4: OPTEE_FFA_VERSION_MINOR 46 + * w5-w7: Not used (MBZ) 47 + */ 48 + #define OPTEE_FFA_GET_API_VERSION OPTEE_FFA_BLOCKING_CALL(0) 49 + 50 + /* 51 + * Returns the revision of OP-TEE. 52 + * 53 + * Used by non-secure world to figure out which version of the Trusted OS 54 + * is installed. Note that the returned revision is the revision of the 55 + * Trusted OS, not of the API. 
56 + * 57 + * Call register usage: 58 + * w3: Service ID, OPTEE_FFA_GET_OS_VERSION 59 + * w4-w7: Unused (MBZ) 60 + * 61 + * Return register usage: 62 + * w3: CFG_OPTEE_REVISION_MAJOR 63 + * w4: CFG_OPTEE_REVISION_MINOR 64 + * w5: TEE_IMPL_GIT_SHA1 (or zero if not supported) 65 + */ 66 + #define OPTEE_FFA_GET_OS_VERSION OPTEE_FFA_BLOCKING_CALL(1) 67 + 68 + /* 69 + * Exchange capabilities between normal world and secure world. 70 + * 71 + * Currently there are no defined capabilities. When features are added, new 72 + * capabilities may be added. 73 + * 74 + * Call register usage: 75 + * w3: Service ID, OPTEE_FFA_EXCHANGE_CAPABILITIES 76 + * w4-w7: Not used (MBZ) 77 + * 78 + * Return register usage: 79 + * w3: Error code, 0 on success 80 + * w4: Bit[7:0]: Number of parameters needed for RPC to be supplied 81 + * as the second MSG arg struct for 82 + * OPTEE_FFA_YIELDING_CALL_WITH_ARG. 83 + * Bit[31:8]: Reserved (MBZ) 84 + * w5-w7: Not used (MBZ) 85 + */ 86 + #define OPTEE_FFA_EXCHANGE_CAPABILITIES OPTEE_FFA_BLOCKING_CALL(2) 87 + 88 + /* 89 + * Unregister shared memory 90 + * 91 + * Call register usage: 92 + * w3: Service ID, OPTEE_FFA_YIELDING_CALL_UNREGISTER_SHM 93 + * w4: Shared memory handle, lower bits 94 + * w5: Shared memory handle, higher bits 95 + * w6-w7: Not used (MBZ) 96 + * 97 + * Return register usage: 98 + * w3: Error code, 0 on success 99 + * w4-w7: Not used (MBZ) 100 + */ 101 + #define OPTEE_FFA_UNREGISTER_SHM OPTEE_FFA_BLOCKING_CALL(3) 102 + 103 + /* 104 + * Call with struct optee_msg_arg as argument in the supplied shared memory 105 + * with a zero internal offset and normal cached memory attributes. 
106 + * Register usage: 107 + * w3: Service ID, OPTEE_FFA_YIELDING_CALL_WITH_ARG 108 + * w4: Lower 32 bits of a 64-bit Shared memory handle 109 + * w5: Upper 32 bits of a 64-bit Shared memory handle 110 + * w6: Offset into shared memory pointing to a struct optee_msg_arg 111 + * right after the parameters of this struct (at offset 112 + * OPTEE_MSG_GET_ARG_SIZE(num_params) follows a struct optee_msg_arg 113 + * for RPC, this struct has reserved space for the number of RPC 114 + * parameters as returned by OPTEE_FFA_EXCHANGE_CAPABILITIES. 115 + * w7: Not used (MBZ) 116 + * Resume from RPC. Register usage: 117 + * w3: Service ID, OPTEE_FFA_YIELDING_CALL_RESUME 118 + * w4-w6: Not used (MBZ) 119 + * w7: Resume info 120 + * 121 + * Normal return (yielding call is completed). Register usage: 122 + * w3: Error code, 0 on success 123 + * w4: OPTEE_FFA_YIELDING_CALL_RETURN_DONE 124 + * w5-w7: Not used (MBZ) 125 + * 126 + * RPC interrupt return (RPC from secure world). Register usage: 127 + * w3: Error code == 0 128 + * w4: Any defined RPC code but OPTEE_FFA_YIELDING_CALL_RETURN_DONE 129 + * w5-w6: Not used (MBZ) 130 + * w7: Resume info 131 + * 132 + * Possible error codes in register w3: 133 + * 0: Success 134 + * FFA_DENIED: w4 isn't one of OPTEE_FFA_YIELDING_CALL_START 135 + * OPTEE_FFA_YIELDING_CALL_RESUME 136 + * 137 + * Possible error codes for OPTEE_FFA_YIELDING_CALL_START, 138 + * FFA_BUSY: Number of OP-TEE OS threads exceeded, 139 + * try again later 140 + * FFA_DENIED: RPC shared memory object not found 141 + * FFA_INVALID_PARAMETER: Bad shared memory handle or offset into the memory 142 + * 143 + * Possible error codes for OPTEE_FFA_YIELDING_CALL_RESUME 144 + * FFA_INVALID_PARAMETER: Bad resume info 145 + */ 146 + #define OPTEE_FFA_YIELDING_CALL_WITH_ARG OPTEE_FFA_YIELDING_CALL(0) 147 + #define OPTEE_FFA_YIELDING_CALL_RESUME OPTEE_FFA_YIELDING_CALL(1) 148 + 149 + #define OPTEE_FFA_YIELDING_CALL_RETURN_DONE 0 150 + #define OPTEE_FFA_YIELDING_CALL_RETURN_RPC_CMD 1 
151 + #define OPTEE_FFA_YIELDING_CALL_RETURN_INTERRUPT 2 152 + 153 + #endif /*__OPTEE_FFA_H*/
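The service-ID encoding defined near the top of optee_ffa.h can be checked in isolation: blocking calls use the raw ID, yielding calls additionally set bit 31. A hedged out-of-kernel restatement (plain C, using `UINT32_C` instead of the kernel's `BIT()` macro):

```c
#include <stdint.h>

/*
 * Restatement of the ID encoding from optee_ffa.h (illustration only):
 * blocking calls pass the ID through, yielding calls set bit 31.
 */
#define OPTEE_FFA_BLOCKING_CALL(id)	(id)
#define OPTEE_FFA_YIELDING_CALL_BIT	31
#define OPTEE_FFA_YIELDING_CALL(id) \
	((id) | (UINT32_C(1) << OPTEE_FFA_YIELDING_CALL_BIT))
```

With this encoding OPTEE_FFA_YIELDING_CALL_WITH_ARG becomes 0x80000000 and OPTEE_FFA_YIELDING_CALL_RESUME becomes 0x80000001, matching the w3 values described in the register-usage comments above.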
+26 -1
drivers/tee/optee/optee_msg.h
··· 28 28 #define OPTEE_MSG_ATTR_TYPE_RMEM_INPUT 0x5 29 29 #define OPTEE_MSG_ATTR_TYPE_RMEM_OUTPUT 0x6 30 30 #define OPTEE_MSG_ATTR_TYPE_RMEM_INOUT 0x7 31 + #define OPTEE_MSG_ATTR_TYPE_FMEM_INPUT OPTEE_MSG_ATTR_TYPE_RMEM_INPUT 32 + #define OPTEE_MSG_ATTR_TYPE_FMEM_OUTPUT OPTEE_MSG_ATTR_TYPE_RMEM_OUTPUT 33 + #define OPTEE_MSG_ATTR_TYPE_FMEM_INOUT OPTEE_MSG_ATTR_TYPE_RMEM_INOUT 31 34 #define OPTEE_MSG_ATTR_TYPE_TMEM_INPUT 0x9 32 35 #define OPTEE_MSG_ATTR_TYPE_TMEM_OUTPUT 0xa 33 36 #define OPTEE_MSG_ATTR_TYPE_TMEM_INOUT 0xb ··· 99 96 */ 100 97 #define OPTEE_MSG_NONCONTIG_PAGE_SIZE 4096 101 98 99 + #define OPTEE_MSG_FMEM_INVALID_GLOBAL_ID 0xffffffffffffffff 100 + 102 101 /** 103 102 * struct optee_msg_param_tmem - temporary memory reference parameter 104 103 * @buf_ptr: Address of the buffer ··· 133 128 }; 134 129 135 130 /** 131 + * struct optee_msg_param_fmem - ffa memory reference parameter 132 + * @offs_lower: Lower bits of offset into shared memory reference 133 + * @offs_upper: Upper bits of offset into shared memory reference 134 + * @internal_offs: Internal offset into the first page of shared memory 135 + * reference 136 + * @size: Size of the buffer 137 + * @global_id: Global identifier of Shared memory 138 + */ 139 + struct optee_msg_param_fmem { 140 + u32 offs_low; 141 + u16 offs_high; 142 + u16 internal_offs; 143 + u64 size; 144 + u64 global_id; 145 + }; 146 + 147 + /** 136 148 * struct optee_msg_param_value - opaque value parameter 137 149 * 138 150 * Value parameters are passed unchecked between normal and secure world. ··· 165 143 * @attr: attributes 166 144 * @tmem: parameter by temporary memory reference 167 145 * @rmem: parameter by registered memory reference 146 + * @fmem: parameter by ffa registered memory reference 168 147 * @value: parameter by opaque value 169 148 * @octets: parameter by octet string 170 149 * 171 150 * @attr & OPTEE_MSG_ATTR_TYPE_MASK indicates if tmem, rmem or value is used in 172 151 * the union. 
OPTEE_MSG_ATTR_TYPE_VALUE_* indicates value or octets, 173 152 * OPTEE_MSG_ATTR_TYPE_TMEM_* indicates @tmem and 174 - * OPTEE_MSG_ATTR_TYPE_RMEM_* indicates @rmem, 153 + * OPTEE_MSG_ATTR_TYPE_RMEM_* or the alias OPTEE_MSG_ATTR_TYPE_FMEM_* indicates 154 + * @rmem or @fmem depending on the conduit. 175 155 * OPTEE_MSG_ATTR_TYPE_NONE indicates that none of the members are used. 176 156 */ 177 157 struct optee_msg_param { ··· 181 157 union { 182 158 struct optee_msg_param_tmem tmem; 183 159 struct optee_msg_param_rmem rmem; 160 + struct optee_msg_param_fmem fmem; 184 161 struct optee_msg_param_value value; 185 162 u8 octets[24]; 186 163 } u;
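The new fmem member has to overlay the same 24 bytes as the other union members (the `u8 octets[24]` view). A hypothetical userspace mirror of the struct, assuming common 64-bit ABI padding rules, illustrates the layout and the low/high offset split:

```c
#include <stdint.h>

/*
 * Userspace mirror of struct optee_msg_param_fmem (illustration only).
 * On common 64-bit ABIs the members pack without padding into the
 * 24 bytes shared with the union's u8 octets[24] view.
 */
struct optee_msg_param_fmem {
	uint32_t offs_low;	/* lower 32 bits of the offset	*/
	uint16_t offs_high;	/* upper 16 bits of the offset	*/
	uint16_t internal_offs;	/* offset into the first page	*/
	uint64_t size;		/* size of the buffer		*/
	uint64_t global_id;	/* global shared-memory identity */
};

/* Split an offset the way the message format stores it. */
static inline void fmem_set_offs(struct optee_msg_param_fmem *p,
				 uint64_t offs)
{
	p->offs_low = (uint32_t)offs;
	p->offs_high = (uint16_t)(offs >> 32);
}
```

This also shows why the offset is limited to 48 bits here: 32 bits in offs_low plus 16 bits in offs_high.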
+122 -37
drivers/tee/optee/optee_private.h
··· 1 1 /* SPDX-License-Identifier: GPL-2.0-only */ 2 2 /* 3 - * Copyright (c) 2015, Linaro Limited 3 + * Copyright (c) 2015-2021, Linaro Limited 4 4 */ 5 5 6 6 #ifndef OPTEE_PRIVATE_H 7 7 #define OPTEE_PRIVATE_H 8 8 9 9 #include <linux/arm-smccc.h> 10 + #include <linux/rhashtable.h> 10 11 #include <linux/semaphore.h> 11 12 #include <linux/tee_drv.h> 12 13 #include <linux/types.h> 13 14 #include "optee_msg.h" 15 + 16 + #define DRIVER_NAME "optee" 14 17 15 18 #define OPTEE_MAX_ARG_SIZE 1024 16 19 ··· 23 20 #define TEEC_ERROR_NOT_SUPPORTED 0xFFFF000A 24 21 #define TEEC_ERROR_COMMUNICATION 0xFFFF000E 25 22 #define TEEC_ERROR_OUT_OF_MEMORY 0xFFFF000C 23 + #define TEEC_ERROR_BUSY 0xFFFF000D 26 24 #define TEEC_ERROR_SHORT_BUFFER 0xFFFF0010 27 25 28 26 #define TEEC_ORIGIN_COMMS 0x00000002 ··· 32 28 unsigned long, unsigned long, unsigned long, 33 29 unsigned long, unsigned long, 34 30 struct arm_smccc_res *); 31 + 32 + struct optee_call_waiter { 33 + struct list_head list_node; 34 + struct completion c; 35 + }; 35 36 36 37 struct optee_call_queue { 37 38 /* Serializes access to this struct */ ··· 75 66 struct completion reqs_c; 76 67 }; 77 68 69 + struct optee_smc { 70 + optee_invoke_fn *invoke_fn; 71 + void *memremaped_shm; 72 + u32 sec_caps; 73 + }; 74 + 75 + /** 76 + * struct optee_ffa_data - FFA communication struct 77 + * @ffa_dev FFA device, contains the destination id, the id of 78 + * OP-TEE in secure world 79 + * @ffa_ops FFA operations 80 + * @mutex Serializes access to @global_ids 81 + * @global_ids FF-A shared memory global handle translation 82 + */ 83 + struct optee_ffa { 84 + struct ffa_device *ffa_dev; 85 + const struct ffa_dev_ops *ffa_ops; 86 + /* Serializes access to @global_ids */ 87 + struct mutex mutex; 88 + struct rhashtable global_ids; 89 + }; 90 + 91 + struct optee; 92 + 93 + /** 94 + * struct optee_ops - OP-TEE driver internal operations 95 + * @do_call_with_arg: enters OP-TEE in secure world 96 + * @to_msg_param: converts from struct tee_param to 
OPTEE_MSG parameters 97 + * @from_msg_param: converts from OPTEE_MSG parameters to struct tee_param 98 + * 99 + * These OPs are only supposed to be used internally in the OP-TEE driver 100 + * as a way of abstracting the different methods of entering OP-TEE in 101 + * secure world. 102 + */ 103 + struct optee_ops { 104 + int (*do_call_with_arg)(struct tee_context *ctx, 105 + struct tee_shm *shm_arg); 106 + int (*to_msg_param)(struct optee *optee, 107 + struct optee_msg_param *msg_params, 108 + size_t num_params, const struct tee_param *params); 109 + int (*from_msg_param)(struct optee *optee, struct tee_param *params, 110 + size_t num_params, 111 + const struct optee_msg_param *msg_params); 112 + }; 113 + 78 114 /** 79 115 * struct optee - main service struct 80 116 * @supp_teedev: supplicant device 117 + * @ops: internal callbacks for different ways to reach secure 118 + * world 81 119 * @teedev: client device 82 - * @invoke_fn: function to issue smc or hvc 120 + * @smc: specific to SMC ABI 121 + * @ffa: specific to FF-A ABI 83 122 * @call_queue: queue of threads waiting to call @invoke_fn 84 123 * @wait_queue: queue of threads from secure world waiting for a 85 124 * secure world sync object 86 125 * @supp: supplicant synchronization struct for RPC to supplicant 87 126 * @pool: shared memory pool 88 - * @memremaped_shm virtual address of memory in shared memory pool 89 - * @sec_caps: secure world capabilities defined by 90 - * OPTEE_SMC_SEC_CAP_* in optee_smc.h 127 + * @rpc_arg_count: If > 0, the number of RPC parameters to make room for 91 128 * @scan_bus_done flag if device registration was already done. 
92 129 * @scan_bus_wq workqueue to scan optee bus and register optee drivers 93 130 * @scan_bus_work workq to scan optee bus and register optee drivers ··· 141 86 struct optee { 142 87 struct tee_device *supp_teedev; 143 88 struct tee_device *teedev; 144 - optee_invoke_fn *invoke_fn; 89 + const struct optee_ops *ops; 90 + union { 91 + struct optee_smc smc; 92 + struct optee_ffa ffa; 93 + }; 145 94 struct optee_call_queue call_queue; 146 95 struct optee_wait_queue wait_queue; 147 96 struct optee_supp supp; 148 97 struct tee_shm_pool *pool; 149 - void *memremaped_shm; 150 - u32 sec_caps; 98 + unsigned int rpc_arg_count; 151 99 bool scan_bus_done; 152 100 struct workqueue_struct *scan_bus_wq; 153 101 struct work_struct scan_bus_work; ··· 185 127 size_t num_entries; 186 128 }; 187 129 188 - void optee_handle_rpc(struct tee_context *ctx, struct optee_rpc_param *param, 189 - struct optee_call_ctx *call_ctx); 190 - void optee_rpc_finalize_call(struct optee_call_ctx *call_ctx); 191 - 192 130 void optee_wait_queue_init(struct optee_wait_queue *wq); 193 131 void optee_wait_queue_exit(struct optee_wait_queue *wq); 194 132 ··· 202 148 int optee_supp_send(struct tee_context *ctx, u32 ret, u32 num_params, 203 149 struct tee_param *param); 204 150 205 - u32 optee_do_call_with_arg(struct tee_context *ctx, phys_addr_t parg); 206 151 int optee_open_session(struct tee_context *ctx, 207 152 struct tee_ioctl_open_session_arg *arg, 208 153 struct tee_param *param); 154 + int optee_close_session_helper(struct tee_context *ctx, u32 session); 209 155 int optee_close_session(struct tee_context *ctx, u32 session); 210 156 int optee_invoke_func(struct tee_context *ctx, struct tee_ioctl_invoke_arg *arg, 211 157 struct tee_param *param); 212 158 int optee_cancel_req(struct tee_context *ctx, u32 cancel_id, u32 session); 213 159 214 - void optee_enable_shm_cache(struct optee *optee); 215 - void optee_disable_shm_cache(struct optee *optee); 216 - void optee_disable_unmapped_shm_cache(struct optee 
*optee); 217 - 218 - int optee_shm_register(struct tee_context *ctx, struct tee_shm *shm, 219 - struct page **pages, size_t num_pages, 220 - unsigned long start); 221 - int optee_shm_unregister(struct tee_context *ctx, struct tee_shm *shm); 222 - 223 - int optee_shm_register_supp(struct tee_context *ctx, struct tee_shm *shm, 224 - struct page **pages, size_t num_pages, 225 - unsigned long start); 226 - int optee_shm_unregister_supp(struct tee_context *ctx, struct tee_shm *shm); 227 - 228 - int optee_from_msg_param(struct tee_param *params, size_t num_params, 229 - const struct optee_msg_param *msg_params); 230 - int optee_to_msg_param(struct optee_msg_param *msg_params, size_t num_params, 231 - const struct tee_param *params); 232 - 233 - u64 *optee_allocate_pages_list(size_t num_entries); 234 - void optee_free_pages_list(void *array, size_t num_entries); 235 - void optee_fill_pages_list(u64 *dst, struct page **pages, int num_pages, 236 - size_t page_offset); 237 - 238 160 #define PTA_CMD_GET_DEVICES 0x0 239 161 #define PTA_CMD_GET_DEVICES_SUPP 0x1 240 162 int optee_enumerate_devices(u32 func); 241 163 void optee_unregister_devices(void); 164 + 165 + int optee_pool_op_alloc_helper(struct tee_shm_pool_mgr *poolm, 166 + struct tee_shm *shm, size_t size, 167 + int (*shm_register)(struct tee_context *ctx, 168 + struct tee_shm *shm, 169 + struct page **pages, 170 + size_t num_pages, 171 + unsigned long start)); 172 + 173 + 174 + void optee_remove_common(struct optee *optee); 175 + int optee_open(struct tee_context *ctx, bool cap_memref_null); 176 + void optee_release(struct tee_context *ctx); 177 + void optee_release_supp(struct tee_context *ctx); 178 + 179 + static inline void optee_from_msg_param_value(struct tee_param *p, u32 attr, 180 + const struct optee_msg_param *mp) 181 + { 182 + p->attr = TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT + 183 + attr - OPTEE_MSG_ATTR_TYPE_VALUE_INPUT; 184 + p->u.value.a = mp->u.value.a; 185 + p->u.value.b = mp->u.value.b; 186 + 
p->u.value.c = mp->u.value.c; 187 + } 188 + 189 + static inline void optee_to_msg_param_value(struct optee_msg_param *mp, 190 + const struct tee_param *p) 191 + { 192 + mp->attr = OPTEE_MSG_ATTR_TYPE_VALUE_INPUT + p->attr - 193 + TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT; 194 + mp->u.value.a = p->u.value.a; 195 + mp->u.value.b = p->u.value.b; 196 + mp->u.value.c = p->u.value.c; 197 + } 198 + 199 + void optee_cq_wait_init(struct optee_call_queue *cq, 200 + struct optee_call_waiter *w); 201 + void optee_cq_wait_for_completion(struct optee_call_queue *cq, 202 + struct optee_call_waiter *w); 203 + void optee_cq_wait_final(struct optee_call_queue *cq, 204 + struct optee_call_waiter *w); 205 + int optee_check_mem_type(unsigned long start, size_t num_pages); 206 + struct tee_shm *optee_get_msg_arg(struct tee_context *ctx, size_t num_params, 207 + struct optee_msg_arg **msg_arg); 208 + 209 + struct tee_shm *optee_rpc_cmd_alloc_suppl(struct tee_context *ctx, size_t sz); 210 + void optee_rpc_cmd_free_suppl(struct tee_context *ctx, struct tee_shm *shm); 211 + void optee_rpc_cmd(struct tee_context *ctx, struct optee *optee, 212 + struct optee_msg_arg *arg); 242 213 243 214 /* 244 215 * Small helpers ··· 279 200 *reg0 = val >> 32; 280 201 *reg1 = val; 281 202 } 203 + 204 + /* Registration of the ABIs */ 205 + int optee_smc_abi_register(void); 206 + void optee_smc_abi_unregister(void); 207 + int optee_ffa_abi_register(void); 208 + void optee_ffa_abi_unregister(void); 282 209 283 210 #endif /*OPTEE_PRIVATE_H*/
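The static inline helpers optee_from_msg_param_value()/optee_to_msg_param_value() added above rely on VALUE_INPUT/OUTPUT/INOUT being numbered consecutively in both the OPTEE_MSG and TEE ioctl attribute spaces. A small out-of-kernel sketch of the same offset arithmetic; the constant values are quoted from optee_msg.h and the TEE ioctl UAPI header:

```c
#include <stdint.h>

/*
 * Sketch of the attribute translation (illustration only). Both
 * numbering schemes place VALUE_INPUT/OUTPUT/INOUT consecutively,
 * which is what the add-and-subtract trick depends on.
 */
#define OPTEE_MSG_ATTR_TYPE_VALUE_INPUT		0x1
#define OPTEE_MSG_ATTR_TYPE_VALUE_OUTPUT	0x2
#define OPTEE_MSG_ATTR_TYPE_VALUE_INOUT		0x3
#define TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT	1
#define TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_OUTPUT	2
#define TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INOUT	3

/* Same arithmetic as optee_to_msg_param_value(). */
static inline uint64_t value_attr_to_msg(uint32_t ioctl_attr)
{
	return OPTEE_MSG_ATTR_TYPE_VALUE_INPUT + ioctl_attr -
	       TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT;
}
```

Since the two constant blocks happen to use the same values, the translation is an identity today; writing it as an offset keeps the helpers correct even if one numbering scheme shifted.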
+16 -221
drivers/tee/optee/rpc.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0-only 2 2 /* 3 - * Copyright (c) 2015-2016, Linaro Limited 3 + * Copyright (c) 2015-2021, Linaro Limited 4 4 */ 5 5 6 6 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 7 7 8 8 #include <linux/delay.h> 9 - #include <linux/device.h> 10 9 #include <linux/i2c.h> 11 10 #include <linux/slab.h> 12 11 #include <linux/tee_drv.h> 13 12 #include "optee_private.h" 14 - #include "optee_smc.h" 15 13 #include "optee_rpc_cmd.h" 16 14 17 15 struct wq_entry { ··· 53 55 static void handle_rpc_func_cmd_i2c_transfer(struct tee_context *ctx, 54 56 struct optee_msg_arg *arg) 55 57 { 58 + struct optee *optee = tee_get_drvdata(ctx->teedev); 56 59 struct tee_param *params; 57 60 struct i2c_adapter *adapter; 58 61 struct i2c_msg msg = { }; ··· 78 79 return; 79 80 } 80 81 81 - if (optee_from_msg_param(params, arg->num_params, arg->params)) 82 + if (optee->ops->from_msg_param(optee, params, arg->num_params, 83 + arg->params)) 82 84 goto bad; 83 85 84 86 for (i = 0; i < arg->num_params; i++) { ··· 122 122 arg->ret = TEEC_ERROR_COMMUNICATION; 123 123 } else { 124 124 params[3].u.value.a = msg.len; 125 - if (optee_to_msg_param(arg->params, arg->num_params, params)) 125 + if (optee->ops->to_msg_param(optee, arg->params, 126 + arg->num_params, params)) 126 127 arg->ret = TEEC_ERROR_BAD_PARAMETERS; 127 128 else 128 129 arg->ret = TEEC_SUCCESS; ··· 235 234 arg->ret = TEEC_ERROR_BAD_PARAMETERS; 236 235 } 237 236 238 - static void handle_rpc_supp_cmd(struct tee_context *ctx, 237 + static void handle_rpc_supp_cmd(struct tee_context *ctx, struct optee *optee, 239 238 struct optee_msg_arg *arg) 240 239 { 241 240 struct tee_param *params; ··· 249 248 return; 250 249 } 251 250 252 - if (optee_from_msg_param(params, arg->num_params, arg->params)) { 251 + if (optee->ops->from_msg_param(optee, params, arg->num_params, 252 + arg->params)) { 253 253 arg->ret = TEEC_ERROR_BAD_PARAMETERS; 254 254 goto out; 255 255 } 256 256 257 257 arg->ret = optee_supp_thrd_req(ctx, 
arg->cmd, arg->num_params, params); 258 258 259 - if (optee_to_msg_param(arg->params, arg->num_params, params)) 259 + if (optee->ops->to_msg_param(optee, arg->params, arg->num_params, 260 + params)) 260 261 arg->ret = TEEC_ERROR_BAD_PARAMETERS; 261 262 out: 262 263 kfree(params); 263 264 } 264 265 265 - static struct tee_shm *cmd_alloc_suppl(struct tee_context *ctx, size_t sz) 266 + struct tee_shm *optee_rpc_cmd_alloc_suppl(struct tee_context *ctx, size_t sz) 266 267 { 267 268 u32 ret; 268 269 struct tee_param param; ··· 287 284 return shm; 288 285 } 289 286 290 - static void handle_rpc_func_cmd_shm_alloc(struct tee_context *ctx, 291 - struct optee_msg_arg *arg, 292 - struct optee_call_ctx *call_ctx) 293 - { 294 - phys_addr_t pa; 295 - struct tee_shm *shm; 296 - size_t sz; 297 - size_t n; 298 - 299 - arg->ret_origin = TEEC_ORIGIN_COMMS; 300 - 301 - if (!arg->num_params || 302 - arg->params[0].attr != OPTEE_MSG_ATTR_TYPE_VALUE_INPUT) { 303 - arg->ret = TEEC_ERROR_BAD_PARAMETERS; 304 - return; 305 - } 306 - 307 - for (n = 1; n < arg->num_params; n++) { 308 - if (arg->params[n].attr != OPTEE_MSG_ATTR_TYPE_NONE) { 309 - arg->ret = TEEC_ERROR_BAD_PARAMETERS; 310 - return; 311 - } 312 - } 313 - 314 - sz = arg->params[0].u.value.b; 315 - switch (arg->params[0].u.value.a) { 316 - case OPTEE_RPC_SHM_TYPE_APPL: 317 - shm = cmd_alloc_suppl(ctx, sz); 318 - break; 319 - case OPTEE_RPC_SHM_TYPE_KERNEL: 320 - shm = tee_shm_alloc(ctx, sz, TEE_SHM_MAPPED | TEE_SHM_PRIV); 321 - break; 322 - default: 323 - arg->ret = TEEC_ERROR_BAD_PARAMETERS; 324 - return; 325 - } 326 - 327 - if (IS_ERR(shm)) { 328 - arg->ret = TEEC_ERROR_OUT_OF_MEMORY; 329 - return; 330 - } 331 - 332 - if (tee_shm_get_pa(shm, 0, &pa)) { 333 - arg->ret = TEEC_ERROR_BAD_PARAMETERS; 334 - goto bad; 335 - } 336 - 337 - sz = tee_shm_get_size(shm); 338 - 339 - if (tee_shm_is_registered(shm)) { 340 - struct page **pages; 341 - u64 *pages_list; 342 - size_t page_num; 343 - 344 - pages = tee_shm_get_pages(shm, &page_num); 
345 - if (!pages || !page_num) { 346 - arg->ret = TEEC_ERROR_OUT_OF_MEMORY; 347 - goto bad; 348 - } 349 - 350 - pages_list = optee_allocate_pages_list(page_num); 351 - if (!pages_list) { 352 - arg->ret = TEEC_ERROR_OUT_OF_MEMORY; 353 - goto bad; 354 - } 355 - 356 - call_ctx->pages_list = pages_list; 357 - call_ctx->num_entries = page_num; 358 - 359 - arg->params[0].attr = OPTEE_MSG_ATTR_TYPE_TMEM_OUTPUT | 360 - OPTEE_MSG_ATTR_NONCONTIG; 361 - /* 362 - * In the least bits of u.tmem.buf_ptr we store buffer offset 363 - * from 4k page, as described in OP-TEE ABI. 364 - */ 365 - arg->params[0].u.tmem.buf_ptr = virt_to_phys(pages_list) | 366 - (tee_shm_get_page_offset(shm) & 367 - (OPTEE_MSG_NONCONTIG_PAGE_SIZE - 1)); 368 - arg->params[0].u.tmem.size = tee_shm_get_size(shm); 369 - arg->params[0].u.tmem.shm_ref = (unsigned long)shm; 370 - 371 - optee_fill_pages_list(pages_list, pages, page_num, 372 - tee_shm_get_page_offset(shm)); 373 - } else { 374 - arg->params[0].attr = OPTEE_MSG_ATTR_TYPE_TMEM_OUTPUT; 375 - arg->params[0].u.tmem.buf_ptr = pa; 376 - arg->params[0].u.tmem.size = sz; 377 - arg->params[0].u.tmem.shm_ref = (unsigned long)shm; 378 - } 379 - 380 - arg->ret = TEEC_SUCCESS; 381 - return; 382 - bad: 383 - tee_shm_free(shm); 384 - } 385 - 386 - static void cmd_free_suppl(struct tee_context *ctx, struct tee_shm *shm) 287 + void optee_rpc_cmd_free_suppl(struct tee_context *ctx, struct tee_shm *shm) 387 288 { 388 289 struct tee_param param; 389 290 ··· 312 405 optee_supp_thrd_req(ctx, OPTEE_RPC_CMD_SHM_FREE, 1, &param); 313 406 } 314 407 315 - static void handle_rpc_func_cmd_shm_free(struct tee_context *ctx, 316 - struct optee_msg_arg *arg) 408 + void optee_rpc_cmd(struct tee_context *ctx, struct optee *optee, 409 + struct optee_msg_arg *arg) 317 410 { 318 - struct tee_shm *shm; 319 - 320 - arg->ret_origin = TEEC_ORIGIN_COMMS; 321 - 322 - if (arg->num_params != 1 || 323 - arg->params[0].attr != OPTEE_MSG_ATTR_TYPE_VALUE_INPUT) { 324 - arg->ret = 
TEEC_ERROR_BAD_PARAMETERS; 325 - return; 326 - } 327 - 328 - shm = (struct tee_shm *)(unsigned long)arg->params[0].u.value.b; 329 - switch (arg->params[0].u.value.a) { 330 - case OPTEE_RPC_SHM_TYPE_APPL: 331 - cmd_free_suppl(ctx, shm); 332 - break; 333 - case OPTEE_RPC_SHM_TYPE_KERNEL: 334 - tee_shm_free(shm); 335 - break; 336 - default: 337 - arg->ret = TEEC_ERROR_BAD_PARAMETERS; 338 - } 339 - arg->ret = TEEC_SUCCESS; 340 - } 341 - 342 - static void free_pages_list(struct optee_call_ctx *call_ctx) 343 - { 344 - if (call_ctx->pages_list) { 345 - optee_free_pages_list(call_ctx->pages_list, 346 - call_ctx->num_entries); 347 - call_ctx->pages_list = NULL; 348 - call_ctx->num_entries = 0; 349 - } 350 - } 351 - 352 - void optee_rpc_finalize_call(struct optee_call_ctx *call_ctx) 353 - { 354 - free_pages_list(call_ctx); 355 - } 356 - 357 - static void handle_rpc_func_cmd(struct tee_context *ctx, struct optee *optee, 358 - struct tee_shm *shm, 359 - struct optee_call_ctx *call_ctx) 360 - { 361 - struct optee_msg_arg *arg; 362 - 363 - arg = tee_shm_get_va(shm, 0); 364 - if (IS_ERR(arg)) { 365 - pr_err("%s: tee_shm_get_va %p failed\n", __func__, shm); 366 - return; 367 - } 368 - 369 411 switch (arg->cmd) { 370 412 case OPTEE_RPC_CMD_GET_TIME: 371 413 handle_rpc_func_cmd_get_time(arg); ··· 325 469 case OPTEE_RPC_CMD_SUSPEND: 326 470 handle_rpc_func_cmd_wait(arg); 327 471 break; 328 - case OPTEE_RPC_CMD_SHM_ALLOC: 329 - free_pages_list(call_ctx); 330 - handle_rpc_func_cmd_shm_alloc(ctx, arg, call_ctx); 331 - break; 332 - case OPTEE_RPC_CMD_SHM_FREE: 333 - handle_rpc_func_cmd_shm_free(ctx, arg); 334 - break; 335 472 case OPTEE_RPC_CMD_I2C_TRANSFER: 336 473 handle_rpc_func_cmd_i2c_transfer(ctx, arg); 337 474 break; 338 475 default: 339 - handle_rpc_supp_cmd(ctx, arg); 476 + handle_rpc_supp_cmd(ctx, optee, arg); 340 477 } 341 478 } 342 479 343 - /** 344 - * optee_handle_rpc() - handle RPC from secure world 345 - * @ctx: context doing the RPC 346 - * @param: value of registers for 
the RPC 347 - * @call_ctx: call context. Preserved during one OP-TEE invocation 348 - * 349 - * Result of RPC is written back into @param. 350 - */ 351 - void optee_handle_rpc(struct tee_context *ctx, struct optee_rpc_param *param, 352 - struct optee_call_ctx *call_ctx) 353 - { 354 - struct tee_device *teedev = ctx->teedev; 355 - struct optee *optee = tee_get_drvdata(teedev); 356 - struct tee_shm *shm; 357 - phys_addr_t pa; 358 480 359 - switch (OPTEE_SMC_RETURN_GET_RPC_FUNC(param->a0)) { 360 - case OPTEE_SMC_RPC_FUNC_ALLOC: 361 - shm = tee_shm_alloc(ctx, param->a1, 362 - TEE_SHM_MAPPED | TEE_SHM_PRIV); 363 - if (!IS_ERR(shm) && !tee_shm_get_pa(shm, 0, &pa)) { 364 - reg_pair_from_64(&param->a1, &param->a2, pa); 365 - reg_pair_from_64(&param->a4, &param->a5, 366 - (unsigned long)shm); 367 - } else { 368 - param->a1 = 0; 369 - param->a2 = 0; 370 - param->a4 = 0; 371 - param->a5 = 0; 372 - } 373 - break; 374 - case OPTEE_SMC_RPC_FUNC_FREE: 375 - shm = reg_pair_to_ptr(param->a1, param->a2); 376 - tee_shm_free(shm); 377 - break; 378 - case OPTEE_SMC_RPC_FUNC_FOREIGN_INTR: 379 - /* 380 - * A foreign interrupt was raised while secure world was 381 - * executing, since they are handled in Linux a dummy RPC is 382 - * performed to let Linux take the interrupt through the normal 383 - * vector. 384 - */ 385 - break; 386 - case OPTEE_SMC_RPC_FUNC_CMD: 387 - shm = reg_pair_to_ptr(param->a1, param->a2); 388 - handle_rpc_func_cmd(ctx, optee, shm, call_ctx); 389 - break; 390 - default: 391 - pr_warn("Unknown RPC func 0x%x\n", 392 - (u32)OPTEE_SMC_RETURN_GET_RPC_FUNC(param->a0)); 393 - break; 394 - } 395 - 396 - param->a0 = OPTEE_SMC_CALL_RETURN_FROM_RPC; 397 - }
-101
drivers/tee/optee/shm_pool.c
···
- // SPDX-License-Identifier: GPL-2.0-only
- /*
-  * Copyright (c) 2015, Linaro Limited
-  * Copyright (c) 2017, EPAM Systems
-  */
- #include <linux/device.h>
- #include <linux/dma-buf.h>
- #include <linux/genalloc.h>
- #include <linux/slab.h>
- #include <linux/tee_drv.h>
- #include "optee_private.h"
- #include "optee_smc.h"
- #include "shm_pool.h"
-
- static int pool_op_alloc(struct tee_shm_pool_mgr *poolm,
-			  struct tee_shm *shm, size_t size)
- {
-	unsigned int order = get_order(size);
-	struct page *page;
-	int rc = 0;
-
-	page = alloc_pages(GFP_KERNEL | __GFP_ZERO, order);
-	if (!page)
-		return -ENOMEM;
-
-	shm->kaddr = page_address(page);
-	shm->paddr = page_to_phys(page);
-	shm->size = PAGE_SIZE << order;
-
-	/*
-	 * Shared memory private to the OP-TEE driver doesn't need
-	 * to be registered with OP-TEE.
-	 */
-	if (!(shm->flags & TEE_SHM_PRIV)) {
-		unsigned int nr_pages = 1 << order, i;
-		struct page **pages;
-
-		pages = kcalloc(nr_pages, sizeof(*pages), GFP_KERNEL);
-		if (!pages) {
-			rc = -ENOMEM;
-			goto err;
-		}
-
-		for (i = 0; i < nr_pages; i++) {
-			pages[i] = page;
-			page++;
-		}
-
-		shm->flags |= TEE_SHM_REGISTER;
-		rc = optee_shm_register(shm->ctx, shm, pages, nr_pages,
-					(unsigned long)shm->kaddr);
-		kfree(pages);
-		if (rc)
-			goto err;
-	}
-
-	return 0;
-
- err:
-	__free_pages(page, order);
-	return rc;
- }
-
- static void pool_op_free(struct tee_shm_pool_mgr *poolm,
-			  struct tee_shm *shm)
- {
-	if (!(shm->flags & TEE_SHM_PRIV))
-		optee_shm_unregister(shm->ctx, shm);
-
-	free_pages((unsigned long)shm->kaddr, get_order(shm->size));
-	shm->kaddr = NULL;
- }
-
- static void pool_op_destroy_poolmgr(struct tee_shm_pool_mgr *poolm)
- {
-	kfree(poolm);
- }
-
- static const struct tee_shm_pool_mgr_ops pool_ops = {
-	.alloc = pool_op_alloc,
-	.free = pool_op_free,
-	.destroy_poolmgr = pool_op_destroy_poolmgr,
- };
-
- /**
-  * optee_shm_pool_alloc_pages() - create page-based allocator pool
-  *
-  * This pool is used when OP-TEE supports dymanic SHM. In this case
-  * command buffers and such are allocated from kernel's own memory.
-  */
- struct tee_shm_pool_mgr *optee_shm_pool_alloc_pages(void)
- {
-	struct tee_shm_pool_mgr *mgr = kzalloc(sizeof(*mgr), GFP_KERNEL);
-
-	if (!mgr)
-		return ERR_PTR(-ENOMEM);
-
-	mgr->ops = &pool_ops;
-
-	return mgr;
- }
-14
drivers/tee/optee/shm_pool.h
···
- /* SPDX-License-Identifier: GPL-2.0-only */
- /*
-  * Copyright (c) 2015, Linaro Limited
-  * Copyright (c) 2016, EPAM Systems
-  */
-
- #ifndef SHM_POOL_H
- #define SHM_POOL_H
-
- #include <linux/tee_drv.h>
-
- struct tee_shm_pool_mgr *optee_shm_pool_alloc_pages(void);
-
- #endif
+1362
drivers/tee/optee/smc_abi.c
···
+ // SPDX-License-Identifier: GPL-2.0-only
+ /*
+  * Copyright (c) 2015-2021, Linaro Limited
+  * Copyright (c) 2016, EPAM Systems
+  */
+
+ #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+ #include <linux/arm-smccc.h>
+ #include <linux/errno.h>
+ #include <linux/io.h>
+ #include <linux/sched.h>
+ #include <linux/mm.h>
+ #include <linux/module.h>
+ #include <linux/of.h>
+ #include <linux/of_platform.h>
+ #include <linux/platform_device.h>
+ #include <linux/slab.h>
+ #include <linux/string.h>
+ #include <linux/tee_drv.h>
+ #include <linux/types.h>
+ #include <linux/workqueue.h>
+ #include "optee_private.h"
+ #include "optee_smc.h"
+ #include "optee_rpc_cmd.h"
+ #define CREATE_TRACE_POINTS
+ #include "optee_trace.h"
+
+ /*
+  * This file implements the SMC ABI used when communicating with secure world
+  * OP-TEE OS via raw SMCs.
+  * This file is divided into the following sections:
+  * 1. Convert between struct tee_param and struct optee_msg_param
+  * 2. Low level support functions to register shared memory in secure world
+  * 3. Dynamic shared memory pool based on alloc_pages()
+  * 4. Do a normal scheduled call into secure world
+  * 5. Driver initialization.
+  */
+
+ #define OPTEE_SHM_NUM_PRIV_PAGES	CONFIG_OPTEE_SHM_NUM_PRIV_PAGES
+
+ /*
+  * 1. Convert between struct tee_param and struct optee_msg_param
+  *
+  * optee_from_msg_param() and optee_to_msg_param() are the main
+  * functions.
47 + */ 48 + 49 + static int from_msg_param_tmp_mem(struct tee_param *p, u32 attr, 50 + const struct optee_msg_param *mp) 51 + { 52 + struct tee_shm *shm; 53 + phys_addr_t pa; 54 + int rc; 55 + 56 + p->attr = TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT + 57 + attr - OPTEE_MSG_ATTR_TYPE_TMEM_INPUT; 58 + p->u.memref.size = mp->u.tmem.size; 59 + shm = (struct tee_shm *)(unsigned long)mp->u.tmem.shm_ref; 60 + if (!shm) { 61 + p->u.memref.shm_offs = 0; 62 + p->u.memref.shm = NULL; 63 + return 0; 64 + } 65 + 66 + rc = tee_shm_get_pa(shm, 0, &pa); 67 + if (rc) 68 + return rc; 69 + 70 + p->u.memref.shm_offs = mp->u.tmem.buf_ptr - pa; 71 + p->u.memref.shm = shm; 72 + 73 + /* Check that the memref is covered by the shm object */ 74 + if (p->u.memref.size) { 75 + size_t o = p->u.memref.shm_offs + 76 + p->u.memref.size - 1; 77 + 78 + rc = tee_shm_get_pa(shm, o, NULL); 79 + if (rc) 80 + return rc; 81 + } 82 + 83 + return 0; 84 + } 85 + 86 + static void from_msg_param_reg_mem(struct tee_param *p, u32 attr, 87 + const struct optee_msg_param *mp) 88 + { 89 + struct tee_shm *shm; 90 + 91 + p->attr = TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT + 92 + attr - OPTEE_MSG_ATTR_TYPE_RMEM_INPUT; 93 + p->u.memref.size = mp->u.rmem.size; 94 + shm = (struct tee_shm *)(unsigned long)mp->u.rmem.shm_ref; 95 + 96 + if (shm) { 97 + p->u.memref.shm_offs = mp->u.rmem.offs; 98 + p->u.memref.shm = shm; 99 + } else { 100 + p->u.memref.shm_offs = 0; 101 + p->u.memref.shm = NULL; 102 + } 103 + } 104 + 105 + /** 106 + * optee_from_msg_param() - convert from OPTEE_MSG parameters to 107 + * struct tee_param 108 + * @optee: main service struct 109 + * @params: subsystem internal parameter representation 110 + * @num_params: number of elements in the parameter arrays 111 + * @msg_params: OPTEE_MSG parameters 112 + * Returns 0 on success or <0 on failure 113 + */ 114 + static int optee_from_msg_param(struct optee *optee, struct tee_param *params, 115 + size_t num_params, 116 + const struct optee_msg_param *msg_params) 
117 + { 118 + int rc; 119 + size_t n; 120 + 121 + for (n = 0; n < num_params; n++) { 122 + struct tee_param *p = params + n; 123 + const struct optee_msg_param *mp = msg_params + n; 124 + u32 attr = mp->attr & OPTEE_MSG_ATTR_TYPE_MASK; 125 + 126 + switch (attr) { 127 + case OPTEE_MSG_ATTR_TYPE_NONE: 128 + p->attr = TEE_IOCTL_PARAM_ATTR_TYPE_NONE; 129 + memset(&p->u, 0, sizeof(p->u)); 130 + break; 131 + case OPTEE_MSG_ATTR_TYPE_VALUE_INPUT: 132 + case OPTEE_MSG_ATTR_TYPE_VALUE_OUTPUT: 133 + case OPTEE_MSG_ATTR_TYPE_VALUE_INOUT: 134 + optee_from_msg_param_value(p, attr, mp); 135 + break; 136 + case OPTEE_MSG_ATTR_TYPE_TMEM_INPUT: 137 + case OPTEE_MSG_ATTR_TYPE_TMEM_OUTPUT: 138 + case OPTEE_MSG_ATTR_TYPE_TMEM_INOUT: 139 + rc = from_msg_param_tmp_mem(p, attr, mp); 140 + if (rc) 141 + return rc; 142 + break; 143 + case OPTEE_MSG_ATTR_TYPE_RMEM_INPUT: 144 + case OPTEE_MSG_ATTR_TYPE_RMEM_OUTPUT: 145 + case OPTEE_MSG_ATTR_TYPE_RMEM_INOUT: 146 + from_msg_param_reg_mem(p, attr, mp); 147 + break; 148 + 149 + default: 150 + return -EINVAL; 151 + } 152 + } 153 + return 0; 154 + } 155 + 156 + static int to_msg_param_tmp_mem(struct optee_msg_param *mp, 157 + const struct tee_param *p) 158 + { 159 + int rc; 160 + phys_addr_t pa; 161 + 162 + mp->attr = OPTEE_MSG_ATTR_TYPE_TMEM_INPUT + p->attr - 163 + TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT; 164 + 165 + mp->u.tmem.shm_ref = (unsigned long)p->u.memref.shm; 166 + mp->u.tmem.size = p->u.memref.size; 167 + 168 + if (!p->u.memref.shm) { 169 + mp->u.tmem.buf_ptr = 0; 170 + return 0; 171 + } 172 + 173 + rc = tee_shm_get_pa(p->u.memref.shm, p->u.memref.shm_offs, &pa); 174 + if (rc) 175 + return rc; 176 + 177 + mp->u.tmem.buf_ptr = pa; 178 + mp->attr |= OPTEE_MSG_ATTR_CACHE_PREDEFINED << 179 + OPTEE_MSG_ATTR_CACHE_SHIFT; 180 + 181 + return 0; 182 + } 183 + 184 + static int to_msg_param_reg_mem(struct optee_msg_param *mp, 185 + const struct tee_param *p) 186 + { 187 + mp->attr = OPTEE_MSG_ATTR_TYPE_RMEM_INPUT + p->attr - 188 + 
TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT;
+
+	mp->u.rmem.shm_ref = (unsigned long)p->u.memref.shm;
+	mp->u.rmem.size = p->u.memref.size;
+	mp->u.rmem.offs = p->u.memref.shm_offs;
+	return 0;
+ }
+
+ /**
+  * optee_to_msg_param() - convert from struct tee_param to OPTEE_MSG parameters
+  * @optee:	main service struct
+  * @msg_params:	OPTEE_MSG parameters
+  * @num_params:	number of elements in the parameter arrays
+  * @params:	subsystem internal parameter representation
+  * Returns 0 on success or <0 on failure
+  */
+ static int optee_to_msg_param(struct optee *optee,
+			       struct optee_msg_param *msg_params,
+			       size_t num_params, const struct tee_param *params)
+ {
+	int rc;
+	size_t n;
+
+	for (n = 0; n < num_params; n++) {
+		const struct tee_param *p = params + n;
+		struct optee_msg_param *mp = msg_params + n;
+
+		switch (p->attr) {
+		case TEE_IOCTL_PARAM_ATTR_TYPE_NONE:
+			mp->attr = TEE_IOCTL_PARAM_ATTR_TYPE_NONE;
+			memset(&mp->u, 0, sizeof(mp->u));
+			break;
+		case TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT:
+		case TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_OUTPUT:
+		case TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INOUT:
+			optee_to_msg_param_value(mp, p);
+			break;
+		case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT:
+		case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_OUTPUT:
+		case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INOUT:
+			if (tee_shm_is_registered(p->u.memref.shm))
+				rc = to_msg_param_reg_mem(mp, p);
+			else
+				rc = to_msg_param_tmp_mem(mp, p);
+			if (rc)
+				return rc;
+			break;
+		default:
+			return -EINVAL;
+		}
+	}
+	return 0;
+ }
+
+ /*
+  * 2. Low level support functions to register shared memory in secure world
+  *
+  * Functions to enable/disable shared memory caching in secure world, that
+  * is, lazy freeing of previously allocated shared memory. Freeing is
+  * performed when a request has been completed.
+  *
+  * Functions to register and unregister shared memory both for normal
+  * clients and for tee-supplicant.
+  */
+
+ /**
+  * optee_enable_shm_cache() - Enables caching of some shared memory allocation
+  *			      in OP-TEE
+  * @optee:	main service struct
+  */
+ static void optee_enable_shm_cache(struct optee *optee)
+ {
+	struct optee_call_waiter w;
+
+	/* We need to retry until secure world isn't busy. */
+	optee_cq_wait_init(&optee->call_queue, &w);
+	while (true) {
+		struct arm_smccc_res res;
+
+		optee->smc.invoke_fn(OPTEE_SMC_ENABLE_SHM_CACHE,
+				     0, 0, 0, 0, 0, 0, 0, &res);
+		if (res.a0 == OPTEE_SMC_RETURN_OK)
+			break;
+		optee_cq_wait_for_completion(&optee->call_queue, &w);
+	}
+	optee_cq_wait_final(&optee->call_queue, &w);
+ }
+
+ /**
+  * __optee_disable_shm_cache() - Disables caching of some shared memory
+  *				 allocation in OP-TEE
+  * @optee:	main service struct
+  * @is_mapped:	true if the cached shared memory addresses were mapped by this
+  *		kernel, are safe to dereference, and should be freed
+  */
+ static void __optee_disable_shm_cache(struct optee *optee, bool is_mapped)
+ {
+	struct optee_call_waiter w;
+
+	/* We need to retry until secure world isn't busy. */
+	optee_cq_wait_init(&optee->call_queue, &w);
+	while (true) {
+		union {
+			struct arm_smccc_res smccc;
+			struct optee_smc_disable_shm_cache_result result;
+		} res;
+
+		optee->smc.invoke_fn(OPTEE_SMC_DISABLE_SHM_CACHE,
+				     0, 0, 0, 0, 0, 0, 0, &res.smccc);
+		if (res.result.status == OPTEE_SMC_RETURN_ENOTAVAIL)
+			break; /* All shm's freed */
+		if (res.result.status == OPTEE_SMC_RETURN_OK) {
+			struct tee_shm *shm;
+
+			/*
+			 * Shared memory references that were not mapped by
+			 * this kernel must be ignored to prevent a crash.
305 + */ 306 + if (!is_mapped) 307 + continue; 308 + 309 + shm = reg_pair_to_ptr(res.result.shm_upper32, 310 + res.result.shm_lower32); 311 + tee_shm_free(shm); 312 + } else { 313 + optee_cq_wait_for_completion(&optee->call_queue, &w); 314 + } 315 + } 316 + optee_cq_wait_final(&optee->call_queue, &w); 317 + } 318 + 319 + /** 320 + * optee_disable_shm_cache() - Disables caching of mapped shared memory 321 + * allocations in OP-TEE 322 + * @optee: main service struct 323 + */ 324 + static void optee_disable_shm_cache(struct optee *optee) 325 + { 326 + return __optee_disable_shm_cache(optee, true); 327 + } 328 + 329 + /** 330 + * optee_disable_unmapped_shm_cache() - Disables caching of shared memory 331 + * allocations in OP-TEE which are not 332 + * currently mapped 333 + * @optee: main service struct 334 + */ 335 + static void optee_disable_unmapped_shm_cache(struct optee *optee) 336 + { 337 + return __optee_disable_shm_cache(optee, false); 338 + } 339 + 340 + #define PAGELIST_ENTRIES_PER_PAGE \ 341 + ((OPTEE_MSG_NONCONTIG_PAGE_SIZE / sizeof(u64)) - 1) 342 + 343 + /* 344 + * The final entry in each pagelist page is a pointer to the next 345 + * pagelist page. 346 + */ 347 + static size_t get_pages_list_size(size_t num_entries) 348 + { 349 + int pages = DIV_ROUND_UP(num_entries, PAGELIST_ENTRIES_PER_PAGE); 350 + 351 + return pages * OPTEE_MSG_NONCONTIG_PAGE_SIZE; 352 + } 353 + 354 + static u64 *optee_allocate_pages_list(size_t num_entries) 355 + { 356 + return alloc_pages_exact(get_pages_list_size(num_entries), GFP_KERNEL); 357 + } 358 + 359 + static void optee_free_pages_list(void *list, size_t num_entries) 360 + { 361 + free_pages_exact(list, get_pages_list_size(num_entries)); 362 + } 363 + 364 + /** 365 + * optee_fill_pages_list() - write list of user pages to given shared 366 + * buffer. 
+  *
+  * @dst:	page-aligned buffer where list of pages will be stored
+  * @pages:	array of pages that represents shared buffer
+  * @num_pages:	number of entries in @pages
+  * @page_offset: offset of user buffer from page start
+  *
+  * @dst should be big enough to hold list of user page addresses and
+  * links to the next pages of buffer
+  */
+ static void optee_fill_pages_list(u64 *dst, struct page **pages, int num_pages,
+				   size_t page_offset)
+ {
+	int n = 0;
+	phys_addr_t optee_page;
+	/*
+	 * Refer to OPTEE_MSG_ATTR_NONCONTIG description in optee_msg.h
+	 * for details.
+	 */
+	struct {
+		u64 pages_list[PAGELIST_ENTRIES_PER_PAGE];
+		u64 next_page_data;
+	} *pages_data;
+
+	/*
+	 * Currently OP-TEE uses 4k page size and it does not look
+	 * like this will change in the future. On the other hand, there are
+	 * no known ARM architectures with page size < 4k.
+	 * Thus the next build assert looks redundant. But the following
+	 * code heavily relies on this assumption, so it is better to be
+	 * safe than sorry.
+	 */
+	BUILD_BUG_ON(PAGE_SIZE < OPTEE_MSG_NONCONTIG_PAGE_SIZE);
+
+	pages_data = (void *)dst;
+	/*
+	 * If the Linux page is bigger than 4k, and the user buffer offset is
+	 * larger than 4k/8k/12k/etc this will skip the first 4k pages,
+	 * because they bear no data of value for OP-TEE.
405 + */ 406 + optee_page = page_to_phys(*pages) + 407 + round_down(page_offset, OPTEE_MSG_NONCONTIG_PAGE_SIZE); 408 + 409 + while (true) { 410 + pages_data->pages_list[n++] = optee_page; 411 + 412 + if (n == PAGELIST_ENTRIES_PER_PAGE) { 413 + pages_data->next_page_data = 414 + virt_to_phys(pages_data + 1); 415 + pages_data++; 416 + n = 0; 417 + } 418 + 419 + optee_page += OPTEE_MSG_NONCONTIG_PAGE_SIZE; 420 + if (!(optee_page & ~PAGE_MASK)) { 421 + if (!--num_pages) 422 + break; 423 + pages++; 424 + optee_page = page_to_phys(*pages); 425 + } 426 + } 427 + } 428 + 429 + static int optee_shm_register(struct tee_context *ctx, struct tee_shm *shm, 430 + struct page **pages, size_t num_pages, 431 + unsigned long start) 432 + { 433 + struct optee *optee = tee_get_drvdata(ctx->teedev); 434 + struct optee_msg_arg *msg_arg; 435 + struct tee_shm *shm_arg; 436 + u64 *pages_list; 437 + int rc; 438 + 439 + if (!num_pages) 440 + return -EINVAL; 441 + 442 + rc = optee_check_mem_type(start, num_pages); 443 + if (rc) 444 + return rc; 445 + 446 + pages_list = optee_allocate_pages_list(num_pages); 447 + if (!pages_list) 448 + return -ENOMEM; 449 + 450 + shm_arg = optee_get_msg_arg(ctx, 1, &msg_arg); 451 + if (IS_ERR(shm_arg)) { 452 + rc = PTR_ERR(shm_arg); 453 + goto out; 454 + } 455 + 456 + optee_fill_pages_list(pages_list, pages, num_pages, 457 + tee_shm_get_page_offset(shm)); 458 + 459 + msg_arg->cmd = OPTEE_MSG_CMD_REGISTER_SHM; 460 + msg_arg->params->attr = OPTEE_MSG_ATTR_TYPE_TMEM_OUTPUT | 461 + OPTEE_MSG_ATTR_NONCONTIG; 462 + msg_arg->params->u.tmem.shm_ref = (unsigned long)shm; 463 + msg_arg->params->u.tmem.size = tee_shm_get_size(shm); 464 + /* 465 + * In the least bits of msg_arg->params->u.tmem.buf_ptr we 466 + * store buffer offset from 4k page, as described in OP-TEE ABI. 
467 + */ 468 + msg_arg->params->u.tmem.buf_ptr = virt_to_phys(pages_list) | 469 + (tee_shm_get_page_offset(shm) & (OPTEE_MSG_NONCONTIG_PAGE_SIZE - 1)); 470 + 471 + if (optee->ops->do_call_with_arg(ctx, shm_arg) || 472 + msg_arg->ret != TEEC_SUCCESS) 473 + rc = -EINVAL; 474 + 475 + tee_shm_free(shm_arg); 476 + out: 477 + optee_free_pages_list(pages_list, num_pages); 478 + return rc; 479 + } 480 + 481 + static int optee_shm_unregister(struct tee_context *ctx, struct tee_shm *shm) 482 + { 483 + struct optee *optee = tee_get_drvdata(ctx->teedev); 484 + struct optee_msg_arg *msg_arg; 485 + struct tee_shm *shm_arg; 486 + int rc = 0; 487 + 488 + shm_arg = optee_get_msg_arg(ctx, 1, &msg_arg); 489 + if (IS_ERR(shm_arg)) 490 + return PTR_ERR(shm_arg); 491 + 492 + msg_arg->cmd = OPTEE_MSG_CMD_UNREGISTER_SHM; 493 + 494 + msg_arg->params[0].attr = OPTEE_MSG_ATTR_TYPE_RMEM_INPUT; 495 + msg_arg->params[0].u.rmem.shm_ref = (unsigned long)shm; 496 + 497 + if (optee->ops->do_call_with_arg(ctx, shm_arg) || 498 + msg_arg->ret != TEEC_SUCCESS) 499 + rc = -EINVAL; 500 + tee_shm_free(shm_arg); 501 + return rc; 502 + } 503 + 504 + static int optee_shm_register_supp(struct tee_context *ctx, struct tee_shm *shm, 505 + struct page **pages, size_t num_pages, 506 + unsigned long start) 507 + { 508 + /* 509 + * We don't want to register supplicant memory in OP-TEE. 510 + * Instead information about it will be passed in RPC code. 511 + */ 512 + return optee_check_mem_type(start, num_pages); 513 + } 514 + 515 + static int optee_shm_unregister_supp(struct tee_context *ctx, 516 + struct tee_shm *shm) 517 + { 518 + return 0; 519 + } 520 + 521 + /* 522 + * 3. Dynamic shared memory pool based on alloc_pages() 523 + * 524 + * Implements an OP-TEE specific shared memory pool which is used 525 + * when dynamic shared memory is supported by secure world. 526 + * 527 + * The main function is optee_shm_pool_alloc_pages(). 
+  */
+
+ static int pool_op_alloc(struct tee_shm_pool_mgr *poolm,
+			  struct tee_shm *shm, size_t size)
+ {
+	/*
+	 * Shared memory private to the OP-TEE driver doesn't need
+	 * to be registered with OP-TEE.
+	 */
+	if (shm->flags & TEE_SHM_PRIV)
+		return optee_pool_op_alloc_helper(poolm, shm, size, NULL);
+
+	return optee_pool_op_alloc_helper(poolm, shm, size, optee_shm_register);
+ }
+
+ static void pool_op_free(struct tee_shm_pool_mgr *poolm,
+			  struct tee_shm *shm)
+ {
+	if (!(shm->flags & TEE_SHM_PRIV))
+		optee_shm_unregister(shm->ctx, shm);
+
+	free_pages((unsigned long)shm->kaddr, get_order(shm->size));
+	shm->kaddr = NULL;
+ }
+
+ static void pool_op_destroy_poolmgr(struct tee_shm_pool_mgr *poolm)
+ {
+	kfree(poolm);
+ }
+
+ static const struct tee_shm_pool_mgr_ops pool_ops = {
+	.alloc = pool_op_alloc,
+	.free = pool_op_free,
+	.destroy_poolmgr = pool_op_destroy_poolmgr,
+ };
+
+ /**
+  * optee_shm_pool_alloc_pages() - create page-based allocator pool
+  *
+  * This pool is used when OP-TEE supports dynamic SHM. In this case
+  * command buffers and such are allocated from kernel's own memory.
+  */
+ static struct tee_shm_pool_mgr *optee_shm_pool_alloc_pages(void)
+ {
+	struct tee_shm_pool_mgr *mgr = kzalloc(sizeof(*mgr), GFP_KERNEL);
+
+	if (!mgr)
+		return ERR_PTR(-ENOMEM);
+
+	mgr->ops = &pool_ops;
+
+	return mgr;
+ }
+
+ /*
+  * 4. Do a normal scheduled call into secure world
+  *
+  * The function optee_smc_do_call_with_arg() performs a normal scheduled
+  * call into secure world. During this call secure world may request help
+  * from normal world using RPCs, Remote Procedure Calls. This includes
+  * delivery of non-secure interrupts to, for instance, allow rescheduling of
+  * the current task.
590 + */ 591 + 592 + static void handle_rpc_func_cmd_shm_free(struct tee_context *ctx, 593 + struct optee_msg_arg *arg) 594 + { 595 + struct tee_shm *shm; 596 + 597 + arg->ret_origin = TEEC_ORIGIN_COMMS; 598 + 599 + if (arg->num_params != 1 || 600 + arg->params[0].attr != OPTEE_MSG_ATTR_TYPE_VALUE_INPUT) { 601 + arg->ret = TEEC_ERROR_BAD_PARAMETERS; 602 + return; 603 + } 604 + 605 + shm = (struct tee_shm *)(unsigned long)arg->params[0].u.value.b; 606 + switch (arg->params[0].u.value.a) { 607 + case OPTEE_RPC_SHM_TYPE_APPL: 608 + optee_rpc_cmd_free_suppl(ctx, shm); 609 + break; 610 + case OPTEE_RPC_SHM_TYPE_KERNEL: 611 + tee_shm_free(shm); 612 + break; 613 + default: 614 + arg->ret = TEEC_ERROR_BAD_PARAMETERS; 615 + } 616 + arg->ret = TEEC_SUCCESS; 617 + } 618 + 619 + static void handle_rpc_func_cmd_shm_alloc(struct tee_context *ctx, 620 + struct optee_msg_arg *arg, 621 + struct optee_call_ctx *call_ctx) 622 + { 623 + phys_addr_t pa; 624 + struct tee_shm *shm; 625 + size_t sz; 626 + size_t n; 627 + 628 + arg->ret_origin = TEEC_ORIGIN_COMMS; 629 + 630 + if (!arg->num_params || 631 + arg->params[0].attr != OPTEE_MSG_ATTR_TYPE_VALUE_INPUT) { 632 + arg->ret = TEEC_ERROR_BAD_PARAMETERS; 633 + return; 634 + } 635 + 636 + for (n = 1; n < arg->num_params; n++) { 637 + if (arg->params[n].attr != OPTEE_MSG_ATTR_TYPE_NONE) { 638 + arg->ret = TEEC_ERROR_BAD_PARAMETERS; 639 + return; 640 + } 641 + } 642 + 643 + sz = arg->params[0].u.value.b; 644 + switch (arg->params[0].u.value.a) { 645 + case OPTEE_RPC_SHM_TYPE_APPL: 646 + shm = optee_rpc_cmd_alloc_suppl(ctx, sz); 647 + break; 648 + case OPTEE_RPC_SHM_TYPE_KERNEL: 649 + shm = tee_shm_alloc(ctx, sz, TEE_SHM_MAPPED | TEE_SHM_PRIV); 650 + break; 651 + default: 652 + arg->ret = TEEC_ERROR_BAD_PARAMETERS; 653 + return; 654 + } 655 + 656 + if (IS_ERR(shm)) { 657 + arg->ret = TEEC_ERROR_OUT_OF_MEMORY; 658 + return; 659 + } 660 + 661 + if (tee_shm_get_pa(shm, 0, &pa)) { 662 + arg->ret = TEEC_ERROR_BAD_PARAMETERS; 663 + goto bad; 664 + 
} 665 + 666 + sz = tee_shm_get_size(shm); 667 + 668 + if (tee_shm_is_registered(shm)) { 669 + struct page **pages; 670 + u64 *pages_list; 671 + size_t page_num; 672 + 673 + pages = tee_shm_get_pages(shm, &page_num); 674 + if (!pages || !page_num) { 675 + arg->ret = TEEC_ERROR_OUT_OF_MEMORY; 676 + goto bad; 677 + } 678 + 679 + pages_list = optee_allocate_pages_list(page_num); 680 + if (!pages_list) { 681 + arg->ret = TEEC_ERROR_OUT_OF_MEMORY; 682 + goto bad; 683 + } 684 + 685 + call_ctx->pages_list = pages_list; 686 + call_ctx->num_entries = page_num; 687 + 688 + arg->params[0].attr = OPTEE_MSG_ATTR_TYPE_TMEM_OUTPUT | 689 + OPTEE_MSG_ATTR_NONCONTIG; 690 + /* 691 + * In the least bits of u.tmem.buf_ptr we store buffer offset 692 + * from 4k page, as described in OP-TEE ABI. 693 + */ 694 + arg->params[0].u.tmem.buf_ptr = virt_to_phys(pages_list) | 695 + (tee_shm_get_page_offset(shm) & 696 + (OPTEE_MSG_NONCONTIG_PAGE_SIZE - 1)); 697 + arg->params[0].u.tmem.size = tee_shm_get_size(shm); 698 + arg->params[0].u.tmem.shm_ref = (unsigned long)shm; 699 + 700 + optee_fill_pages_list(pages_list, pages, page_num, 701 + tee_shm_get_page_offset(shm)); 702 + } else { 703 + arg->params[0].attr = OPTEE_MSG_ATTR_TYPE_TMEM_OUTPUT; 704 + arg->params[0].u.tmem.buf_ptr = pa; 705 + arg->params[0].u.tmem.size = sz; 706 + arg->params[0].u.tmem.shm_ref = (unsigned long)shm; 707 + } 708 + 709 + arg->ret = TEEC_SUCCESS; 710 + return; 711 + bad: 712 + tee_shm_free(shm); 713 + } 714 + 715 + static void free_pages_list(struct optee_call_ctx *call_ctx) 716 + { 717 + if (call_ctx->pages_list) { 718 + optee_free_pages_list(call_ctx->pages_list, 719 + call_ctx->num_entries); 720 + call_ctx->pages_list = NULL; 721 + call_ctx->num_entries = 0; 722 + } 723 + } 724 + 725 + static void optee_rpc_finalize_call(struct optee_call_ctx *call_ctx) 726 + { 727 + free_pages_list(call_ctx); 728 + } 729 + 730 + static void handle_rpc_func_cmd(struct tee_context *ctx, struct optee *optee, 731 + struct tee_shm *shm, 
732 + struct optee_call_ctx *call_ctx) 733 + { 734 + struct optee_msg_arg *arg; 735 + 736 + arg = tee_shm_get_va(shm, 0); 737 + if (IS_ERR(arg)) { 738 + pr_err("%s: tee_shm_get_va %p failed\n", __func__, shm); 739 + return; 740 + } 741 + 742 + switch (arg->cmd) { 743 + case OPTEE_RPC_CMD_SHM_ALLOC: 744 + free_pages_list(call_ctx); 745 + handle_rpc_func_cmd_shm_alloc(ctx, arg, call_ctx); 746 + break; 747 + case OPTEE_RPC_CMD_SHM_FREE: 748 + handle_rpc_func_cmd_shm_free(ctx, arg); 749 + break; 750 + default: 751 + optee_rpc_cmd(ctx, optee, arg); 752 + } 753 + } 754 + 755 + /** 756 + * optee_handle_rpc() - handle RPC from secure world 757 + * @ctx: context doing the RPC 758 + * @param: value of registers for the RPC 759 + * @call_ctx: call context. Preserved during one OP-TEE invocation 760 + * 761 + * Result of RPC is written back into @param. 762 + */ 763 + static void optee_handle_rpc(struct tee_context *ctx, 764 + struct optee_rpc_param *param, 765 + struct optee_call_ctx *call_ctx) 766 + { 767 + struct tee_device *teedev = ctx->teedev; 768 + struct optee *optee = tee_get_drvdata(teedev); 769 + struct tee_shm *shm; 770 + phys_addr_t pa; 771 + 772 + switch (OPTEE_SMC_RETURN_GET_RPC_FUNC(param->a0)) { 773 + case OPTEE_SMC_RPC_FUNC_ALLOC: 774 + shm = tee_shm_alloc(ctx, param->a1, 775 + TEE_SHM_MAPPED | TEE_SHM_PRIV); 776 + if (!IS_ERR(shm) && !tee_shm_get_pa(shm, 0, &pa)) { 777 + reg_pair_from_64(&param->a1, &param->a2, pa); 778 + reg_pair_from_64(&param->a4, &param->a5, 779 + (unsigned long)shm); 780 + } else { 781 + param->a1 = 0; 782 + param->a2 = 0; 783 + param->a4 = 0; 784 + param->a5 = 0; 785 + } 786 + break; 787 + case OPTEE_SMC_RPC_FUNC_FREE: 788 + shm = reg_pair_to_ptr(param->a1, param->a2); 789 + tee_shm_free(shm); 790 + break; 791 + case OPTEE_SMC_RPC_FUNC_FOREIGN_INTR: 792 + /* 793 + * A foreign interrupt was raised while secure world was 794 + * executing, since they are handled in Linux a dummy RPC is 795 + * performed to let Linux take the interrupt 
through the normal 796 + * vector. 797 + */ 798 + break; 799 + case OPTEE_SMC_RPC_FUNC_CMD: 800 + shm = reg_pair_to_ptr(param->a1, param->a2); 801 + handle_rpc_func_cmd(ctx, optee, shm, call_ctx); 802 + break; 803 + default: 804 + pr_warn("Unknown RPC func 0x%x\n", 805 + (u32)OPTEE_SMC_RETURN_GET_RPC_FUNC(param->a0)); 806 + break; 807 + } 808 + 809 + param->a0 = OPTEE_SMC_CALL_RETURN_FROM_RPC; 810 + } 811 + 812 + /** 813 + * optee_smc_do_call_with_arg() - Do an SMC to OP-TEE in secure world 814 + * @ctx: calling context 815 + * @arg: shared memory holding the message to pass to secure world 816 + * 817 + * Does an SMC to OP-TEE in secure world and handles eventual resulting 818 + * Remote Procedure Calls (RPC) from OP-TEE. 819 + * 820 + * Returns return code from secure world, 0 is OK 821 + */ 822 + static int optee_smc_do_call_with_arg(struct tee_context *ctx, 823 + struct tee_shm *arg) 824 + { 825 + struct optee *optee = tee_get_drvdata(ctx->teedev); 826 + struct optee_call_waiter w; 827 + struct optee_rpc_param param = { }; 828 + struct optee_call_ctx call_ctx = { }; 829 + phys_addr_t parg; 830 + int rc; 831 + 832 + rc = tee_shm_get_pa(arg, 0, &parg); 833 + if (rc) 834 + return rc; 835 + 836 + param.a0 = OPTEE_SMC_CALL_WITH_ARG; 837 + reg_pair_from_64(&param.a1, &param.a2, parg); 838 + /* Initialize waiter */ 839 + optee_cq_wait_init(&optee->call_queue, &w); 840 + while (true) { 841 + struct arm_smccc_res res; 842 + 843 + trace_optee_invoke_fn_begin(&param); 844 + optee->smc.invoke_fn(param.a0, param.a1, param.a2, param.a3, 845 + param.a4, param.a5, param.a6, param.a7, 846 + &res); 847 + trace_optee_invoke_fn_end(&param, &res); 848 + 849 + if (res.a0 == OPTEE_SMC_RETURN_ETHREAD_LIMIT) { 850 + /* 851 + * Out of threads in secure world, wait for a thread 852 + * to become available. 
853 + */ 854 + optee_cq_wait_for_completion(&optee->call_queue, &w); 855 + } else if (OPTEE_SMC_RETURN_IS_RPC(res.a0)) { 856 + cond_resched(); 857 + param.a0 = res.a0; 858 + param.a1 = res.a1; 859 + param.a2 = res.a2; 860 + param.a3 = res.a3; 861 + optee_handle_rpc(ctx, &param, &call_ctx); 862 + } else { 863 + rc = res.a0; 864 + break; 865 + } 866 + } 867 + 868 + optee_rpc_finalize_call(&call_ctx); 869 + /* 870 + * We're done with our thread in secure world, if there are any 871 + * thread waiters, wake up one. 872 + */ 873 + optee_cq_wait_final(&optee->call_queue, &w); 874 + 875 + return rc; 876 + } 877 + 878 + /* 879 + * 5. Driver initialization 880 + * 881 + * During driver initialization the secure world is probed to find out which 882 + * features it supports so the driver can be initialized with a matching 883 + * configuration. This involves for instance support for dynamic shared 884 + * memory instead of a static memory carveout. 885 + */ 886 + 887 + static void optee_get_version(struct tee_device *teedev, 888 + struct tee_ioctl_version_data *vers) 889 + { 890 + struct tee_ioctl_version_data v = { 891 + .impl_id = TEE_IMPL_ID_OPTEE, 892 + .impl_caps = TEE_OPTEE_CAP_TZ, 893 + .gen_caps = TEE_GEN_CAP_GP, 894 + }; 895 + struct optee *optee = tee_get_drvdata(teedev); 896 + 897 + if (optee->smc.sec_caps & OPTEE_SMC_SEC_CAP_DYNAMIC_SHM) 898 + v.gen_caps |= TEE_GEN_CAP_REG_MEM; 899 + if (optee->smc.sec_caps & OPTEE_SMC_SEC_CAP_MEMREF_NULL) 900 + v.gen_caps |= TEE_GEN_CAP_MEMREF_NULL; 901 + *vers = v; 902 + } 903 + 904 + static int optee_smc_open(struct tee_context *ctx) 905 + { 906 + struct optee *optee = tee_get_drvdata(ctx->teedev); 907 + u32 sec_caps = optee->smc.sec_caps; 908 + 909 + return optee_open(ctx, sec_caps & OPTEE_SMC_SEC_CAP_MEMREF_NULL); 910 + } 911 + 912 + static const struct tee_driver_ops optee_clnt_ops = { 913 + .get_version = optee_get_version, 914 + .open = optee_smc_open, 915 + .release = optee_release, 916 + .open_session = optee_open_session, 
917 + .close_session = optee_close_session, 918 + .invoke_func = optee_invoke_func, 919 + .cancel_req = optee_cancel_req, 920 + .shm_register = optee_shm_register, 921 + .shm_unregister = optee_shm_unregister, 922 + }; 923 + 924 + static const struct tee_desc optee_clnt_desc = { 925 + .name = DRIVER_NAME "-clnt", 926 + .ops = &optee_clnt_ops, 927 + .owner = THIS_MODULE, 928 + }; 929 + 930 + static const struct tee_driver_ops optee_supp_ops = { 931 + .get_version = optee_get_version, 932 + .open = optee_smc_open, 933 + .release = optee_release_supp, 934 + .supp_recv = optee_supp_recv, 935 + .supp_send = optee_supp_send, 936 + .shm_register = optee_shm_register_supp, 937 + .shm_unregister = optee_shm_unregister_supp, 938 + }; 939 + 940 + static const struct tee_desc optee_supp_desc = { 941 + .name = DRIVER_NAME "-supp", 942 + .ops = &optee_supp_ops, 943 + .owner = THIS_MODULE, 944 + .flags = TEE_DESC_PRIVILEGED, 945 + }; 946 + 947 + static const struct optee_ops optee_ops = { 948 + .do_call_with_arg = optee_smc_do_call_with_arg, 949 + .to_msg_param = optee_to_msg_param, 950 + .from_msg_param = optee_from_msg_param, 951 + }; 952 + 953 + static bool optee_msg_api_uid_is_optee_api(optee_invoke_fn *invoke_fn) 954 + { 955 + struct arm_smccc_res res; 956 + 957 + invoke_fn(OPTEE_SMC_CALLS_UID, 0, 0, 0, 0, 0, 0, 0, &res); 958 + 959 + if (res.a0 == OPTEE_MSG_UID_0 && res.a1 == OPTEE_MSG_UID_1 && 960 + res.a2 == OPTEE_MSG_UID_2 && res.a3 == OPTEE_MSG_UID_3) 961 + return true; 962 + return false; 963 + } 964 + 965 + static void optee_msg_get_os_revision(optee_invoke_fn *invoke_fn) 966 + { 967 + union { 968 + struct arm_smccc_res smccc; 969 + struct optee_smc_call_get_os_revision_result result; 970 + } res = { 971 + .result = { 972 + .build_id = 0 973 + } 974 + }; 975 + 976 + invoke_fn(OPTEE_SMC_CALL_GET_OS_REVISION, 0, 0, 0, 0, 0, 0, 0, 977 + &res.smccc); 978 + 979 + if (res.result.build_id) 980 + pr_info("revision %lu.%lu (%08lx)", res.result.major, 981 + res.result.minor, 
res.result.build_id); 982 + else 983 + pr_info("revision %lu.%lu", res.result.major, res.result.minor); 984 + } 985 + 986 + static bool optee_msg_api_revision_is_compatible(optee_invoke_fn *invoke_fn) 987 + { 988 + union { 989 + struct arm_smccc_res smccc; 990 + struct optee_smc_calls_revision_result result; 991 + } res; 992 + 993 + invoke_fn(OPTEE_SMC_CALLS_REVISION, 0, 0, 0, 0, 0, 0, 0, &res.smccc); 994 + 995 + if (res.result.major == OPTEE_MSG_REVISION_MAJOR && 996 + (int)res.result.minor >= OPTEE_MSG_REVISION_MINOR) 997 + return true; 998 + return false; 999 + } 1000 + 1001 + static bool optee_msg_exchange_capabilities(optee_invoke_fn *invoke_fn, 1002 + u32 *sec_caps) 1003 + { 1004 + union { 1005 + struct arm_smccc_res smccc; 1006 + struct optee_smc_exchange_capabilities_result result; 1007 + } res; 1008 + u32 a1 = 0; 1009 + 1010 + /* 1011 + * TODO This isn't enough to tell if it's a UP system (from kernel 1012 + * point of view) or not, is_smp() returns the information 1013 + * needed, but can't be called directly from here. 
1014 + */ 1015 + if (!IS_ENABLED(CONFIG_SMP) || nr_cpu_ids == 1) 1016 + a1 |= OPTEE_SMC_NSEC_CAP_UNIPROCESSOR; 1017 + 1018 + invoke_fn(OPTEE_SMC_EXCHANGE_CAPABILITIES, a1, 0, 0, 0, 0, 0, 0, 1019 + &res.smccc); 1020 + 1021 + if (res.result.status != OPTEE_SMC_RETURN_OK) 1022 + return false; 1023 + 1024 + *sec_caps = res.result.capabilities; 1025 + return true; 1026 + } 1027 + 1028 + static struct tee_shm_pool *optee_config_dyn_shm(void) 1029 + { 1030 + struct tee_shm_pool_mgr *priv_mgr; 1031 + struct tee_shm_pool_mgr *dmabuf_mgr; 1032 + void *rc; 1033 + 1034 + rc = optee_shm_pool_alloc_pages(); 1035 + if (IS_ERR(rc)) 1036 + return rc; 1037 + priv_mgr = rc; 1038 + 1039 + rc = optee_shm_pool_alloc_pages(); 1040 + if (IS_ERR(rc)) { 1041 + tee_shm_pool_mgr_destroy(priv_mgr); 1042 + return rc; 1043 + } 1044 + dmabuf_mgr = rc; 1045 + 1046 + rc = tee_shm_pool_alloc(priv_mgr, dmabuf_mgr); 1047 + if (IS_ERR(rc)) { 1048 + tee_shm_pool_mgr_destroy(priv_mgr); 1049 + tee_shm_pool_mgr_destroy(dmabuf_mgr); 1050 + } 1051 + 1052 + return rc; 1053 + } 1054 + 1055 + static struct tee_shm_pool * 1056 + optee_config_shm_memremap(optee_invoke_fn *invoke_fn, void **memremaped_shm) 1057 + { 1058 + union { 1059 + struct arm_smccc_res smccc; 1060 + struct optee_smc_get_shm_config_result result; 1061 + } res; 1062 + unsigned long vaddr; 1063 + phys_addr_t paddr; 1064 + size_t size; 1065 + phys_addr_t begin; 1066 + phys_addr_t end; 1067 + void *va; 1068 + struct tee_shm_pool_mgr *priv_mgr; 1069 + struct tee_shm_pool_mgr *dmabuf_mgr; 1070 + void *rc; 1071 + const int sz = OPTEE_SHM_NUM_PRIV_PAGES * PAGE_SIZE; 1072 + 1073 + invoke_fn(OPTEE_SMC_GET_SHM_CONFIG, 0, 0, 0, 0, 0, 0, 0, &res.smccc); 1074 + if (res.result.status != OPTEE_SMC_RETURN_OK) { 1075 + pr_err("static shm service not available\n"); 1076 + return ERR_PTR(-ENOENT); 1077 + } 1078 + 1079 + if (res.result.settings != OPTEE_SMC_SHM_CACHED) { 1080 + pr_err("only normal cached shared memory supported\n"); 1081 + return ERR_PTR(-EINVAL); 
1082 + } 1083 + 1084 + begin = roundup(res.result.start, PAGE_SIZE); 1085 + end = rounddown(res.result.start + res.result.size, PAGE_SIZE); 1086 + paddr = begin; 1087 + size = end - begin; 1088 + 1089 + if (size < 2 * OPTEE_SHM_NUM_PRIV_PAGES * PAGE_SIZE) { 1090 + pr_err("too small shared memory area\n"); 1091 + return ERR_PTR(-EINVAL); 1092 + } 1093 + 1094 + va = memremap(paddr, size, MEMREMAP_WB); 1095 + if (!va) { 1096 + pr_err("shared memory ioremap failed\n"); 1097 + return ERR_PTR(-EINVAL); 1098 + } 1099 + vaddr = (unsigned long)va; 1100 + 1101 + rc = tee_shm_pool_mgr_alloc_res_mem(vaddr, paddr, sz, 1102 + 3 /* 8 bytes aligned */); 1103 + if (IS_ERR(rc)) 1104 + goto err_memunmap; 1105 + priv_mgr = rc; 1106 + 1107 + vaddr += sz; 1108 + paddr += sz; 1109 + size -= sz; 1110 + 1111 + rc = tee_shm_pool_mgr_alloc_res_mem(vaddr, paddr, size, PAGE_SHIFT); 1112 + if (IS_ERR(rc)) 1113 + goto err_free_priv_mgr; 1114 + dmabuf_mgr = rc; 1115 + 1116 + rc = tee_shm_pool_alloc(priv_mgr, dmabuf_mgr); 1117 + if (IS_ERR(rc)) 1118 + goto err_free_dmabuf_mgr; 1119 + 1120 + *memremaped_shm = va; 1121 + 1122 + return rc; 1123 + 1124 + err_free_dmabuf_mgr: 1125 + tee_shm_pool_mgr_destroy(dmabuf_mgr); 1126 + err_free_priv_mgr: 1127 + tee_shm_pool_mgr_destroy(priv_mgr); 1128 + err_memunmap: 1129 + memunmap(va); 1130 + return rc; 1131 + } 1132 + 1133 + /* Simple wrapper functions to be able to use a function pointer */ 1134 + static void optee_smccc_smc(unsigned long a0, unsigned long a1, 1135 + unsigned long a2, unsigned long a3, 1136 + unsigned long a4, unsigned long a5, 1137 + unsigned long a6, unsigned long a7, 1138 + struct arm_smccc_res *res) 1139 + { 1140 + arm_smccc_smc(a0, a1, a2, a3, a4, a5, a6, a7, res); 1141 + } 1142 + 1143 + static void optee_smccc_hvc(unsigned long a0, unsigned long a1, 1144 + unsigned long a2, unsigned long a3, 1145 + unsigned long a4, unsigned long a5, 1146 + unsigned long a6, unsigned long a7, 1147 + struct arm_smccc_res *res) 1148 + { 1149 + 
arm_smccc_hvc(a0, a1, a2, a3, a4, a5, a6, a7, res); 1150 + } 1151 + 1152 + static optee_invoke_fn *get_invoke_func(struct device *dev) 1153 + { 1154 + const char *method; 1155 + 1156 + pr_info("probing for conduit method.\n"); 1157 + 1158 + if (device_property_read_string(dev, "method", &method)) { 1159 + pr_warn("missing \"method\" property\n"); 1160 + return ERR_PTR(-ENXIO); 1161 + } 1162 + 1163 + if (!strcmp("hvc", method)) 1164 + return optee_smccc_hvc; 1165 + else if (!strcmp("smc", method)) 1166 + return optee_smccc_smc; 1167 + 1168 + pr_warn("invalid \"method\" property: %s\n", method); 1169 + return ERR_PTR(-EINVAL); 1170 + } 1171 + 1172 + /* optee_smc_remove - Device Removal Routine 1173 + * @pdev: platform device information struct 1174 + * 1175 + * optee_smc_remove is called by the platform subsystem to alert the driver 1176 + * that it should release the device 1177 + */ 1178 + static int optee_smc_remove(struct platform_device *pdev) 1179 + { 1180 + struct optee *optee = platform_get_drvdata(pdev); 1181 + 1182 + /* 1183 + * Ask OP-TEE to free all cached shared memory objects to decrease 1184 + * reference counters and also avoid wild pointers in secure world 1185 + * into the old shared memory range. 1186 + */ 1187 + optee_disable_shm_cache(optee); 1188 + 1189 + optee_remove_common(optee); 1190 + 1191 + if (optee->smc.memremaped_shm) 1192 + memunmap(optee->smc.memremaped_shm); 1193 + 1194 + kfree(optee); 1195 + 1196 + return 0; 1197 + } 1198 + 1199 + /* optee_shutdown - Device Shutdown Routine 1200 + * @pdev: platform device information struct 1201 + * 1202 + * optee_shutdown is called by the platform subsystem to alert 1203 + * the driver that a shutdown, reboot, or kexec is happening and the 1204 + * device must be disabled. 
1205 + */ 1206 + static void optee_shutdown(struct platform_device *pdev) 1207 + { 1208 + optee_disable_shm_cache(platform_get_drvdata(pdev)); 1209 + } 1210 + 1211 + static int optee_probe(struct platform_device *pdev) 1212 + { 1213 + optee_invoke_fn *invoke_fn; 1214 + struct tee_shm_pool *pool = ERR_PTR(-EINVAL); 1215 + struct optee *optee = NULL; 1216 + void *memremaped_shm = NULL; 1217 + struct tee_device *teedev; 1218 + u32 sec_caps; 1219 + int rc; 1220 + 1221 + invoke_fn = get_invoke_func(&pdev->dev); 1222 + if (IS_ERR(invoke_fn)) 1223 + return PTR_ERR(invoke_fn); 1224 + 1225 + if (!optee_msg_api_uid_is_optee_api(invoke_fn)) { 1226 + pr_warn("api uid mismatch\n"); 1227 + return -EINVAL; 1228 + } 1229 + 1230 + optee_msg_get_os_revision(invoke_fn); 1231 + 1232 + if (!optee_msg_api_revision_is_compatible(invoke_fn)) { 1233 + pr_warn("api revision mismatch\n"); 1234 + return -EINVAL; 1235 + } 1236 + 1237 + if (!optee_msg_exchange_capabilities(invoke_fn, &sec_caps)) { 1238 + pr_warn("capabilities mismatch\n"); 1239 + return -EINVAL; 1240 + } 1241 + 1242 + /* 1243 + * Try to use dynamic shared memory if possible 1244 + */ 1245 + if (sec_caps & OPTEE_SMC_SEC_CAP_DYNAMIC_SHM) 1246 + pool = optee_config_dyn_shm(); 1247 + 1248 + /* 1249 + * If dynamic shared memory is not available or failed - try static one 1250 + */ 1251 + if (IS_ERR(pool) && (sec_caps & OPTEE_SMC_SEC_CAP_HAVE_RESERVED_SHM)) 1252 + pool = optee_config_shm_memremap(invoke_fn, &memremaped_shm); 1253 + 1254 + if (IS_ERR(pool)) 1255 + return PTR_ERR(pool); 1256 + 1257 + optee = kzalloc(sizeof(*optee), GFP_KERNEL); 1258 + if (!optee) { 1259 + rc = -ENOMEM; 1260 + goto err; 1261 + } 1262 + 1263 + optee->ops = &optee_ops; 1264 + optee->smc.invoke_fn = invoke_fn; 1265 + optee->smc.sec_caps = sec_caps; 1266 + 1267 + teedev = tee_device_alloc(&optee_clnt_desc, NULL, pool, optee); 1268 + if (IS_ERR(teedev)) { 1269 + rc = PTR_ERR(teedev); 1270 + goto err; 1271 + } 1272 + optee->teedev = teedev; 1273 + 1274 + 
teedev = tee_device_alloc(&optee_supp_desc, NULL, pool, optee); 1275 + if (IS_ERR(teedev)) { 1276 + rc = PTR_ERR(teedev); 1277 + goto err; 1278 + } 1279 + optee->supp_teedev = teedev; 1280 + 1281 + rc = tee_device_register(optee->teedev); 1282 + if (rc) 1283 + goto err; 1284 + 1285 + rc = tee_device_register(optee->supp_teedev); 1286 + if (rc) 1287 + goto err; 1288 + 1289 + mutex_init(&optee->call_queue.mutex); 1290 + INIT_LIST_HEAD(&optee->call_queue.waiters); 1291 + optee_wait_queue_init(&optee->wait_queue); 1292 + optee_supp_init(&optee->supp); 1293 + optee->smc.memremaped_shm = memremaped_shm; 1294 + optee->pool = pool; 1295 + 1296 + /* 1297 + * Ensure that there are no pre-existing shm objects before enabling 1298 + * the shm cache so that there's no chance of receiving an invalid 1299 + * address during shutdown. This could occur, for example, if we're 1300 + * kexec booting from an older kernel that did not properly clean up the 1301 + * shm cache. 1302 + */ 1303 + optee_disable_unmapped_shm_cache(optee); 1304 + 1305 + optee_enable_shm_cache(optee); 1306 + 1307 + if (optee->smc.sec_caps & OPTEE_SMC_SEC_CAP_DYNAMIC_SHM) 1308 + pr_info("dynamic shared memory is enabled\n"); 1309 + 1310 + platform_set_drvdata(pdev, optee); 1311 + 1312 + rc = optee_enumerate_devices(PTA_CMD_GET_DEVICES); 1313 + if (rc) { 1314 + optee_smc_remove(pdev); 1315 + return rc; 1316 + } 1317 + 1318 + pr_info("initialized driver\n"); 1319 + return 0; 1320 + err: 1321 + if (optee) { 1322 + /* 1323 + * tee_device_unregister() is safe to call even if the 1324 + * devices haven't been registered with 1325 + * tee_device_register() yet. 
1326 + */ 1327 + tee_device_unregister(optee->supp_teedev); 1328 + tee_device_unregister(optee->teedev); 1329 + kfree(optee); 1330 + } 1331 + if (pool) 1332 + tee_shm_pool_free(pool); 1333 + if (memremaped_shm) 1334 + memunmap(memremaped_shm); 1335 + return rc; 1336 + } 1337 + 1338 + static const struct of_device_id optee_dt_match[] = { 1339 + { .compatible = "linaro,optee-tz" }, 1340 + {}, 1341 + }; 1342 + MODULE_DEVICE_TABLE(of, optee_dt_match); 1343 + 1344 + static struct platform_driver optee_driver = { 1345 + .probe = optee_probe, 1346 + .remove = optee_smc_remove, 1347 + .shutdown = optee_shutdown, 1348 + .driver = { 1349 + .name = "optee", 1350 + .of_match_table = optee_dt_match, 1351 + }, 1352 + }; 1353 + 1354 + int optee_smc_abi_register(void) 1355 + { 1356 + return platform_driver_register(&optee_driver); 1357 + } 1358 + 1359 + void optee_smc_abi_unregister(void) 1360 + { 1361 + platform_driver_unregister(&optee_driver); 1362 + }
-14
include/dt-bindings/power/qcom-aoss-qmp.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 */ 2 - /* Copyright (c) 2018, Linaro Ltd. */ 3 - 4 - #ifndef __DT_BINDINGS_POWER_QCOM_AOSS_QMP_H 5 - #define __DT_BINDINGS_POWER_QCOM_AOSS_QMP_H 6 - 7 - #define AOSS_QMP_LS_CDSP 0 8 - #define AOSS_QMP_LS_LPASS 1 9 - #define AOSS_QMP_LS_MODEM 2 10 - #define AOSS_QMP_LS_SLPI 3 11 - #define AOSS_QMP_LS_SPSS 4 12 - #define AOSS_QMP_LS_VENUS 5 13 - 14 - #endif
+17
include/dt-bindings/power/qcom-rpmpd.h
··· 20 20 #define SDX55_MX 1 21 21 #define SDX55_CX 2 22 22 23 + /* SM6350 Power Domain Indexes */ 24 + #define SM6350_CX 0 25 + #define SM6350_GFX 1 26 + #define SM6350_LCX 2 27 + #define SM6350_LMX 3 28 + #define SM6350_MSS 4 29 + #define SM6350_MX 5 30 + 23 31 /* SM8150 Power Domain Indexes */ 24 32 #define SM8150_MSS 0 25 33 #define SM8150_EBI 1 ··· 140 132 #define MSM8916_VDDCX_VFC 2 141 133 #define MSM8916_VDDMX 3 142 134 #define MSM8916_VDDMX_AO 4 135 + 136 + /* MSM8953 Power Domain Indexes */ 137 + #define MSM8953_VDDMD 0 138 + #define MSM8953_VDDMD_AO 1 139 + #define MSM8953_VDDCX 2 140 + #define MSM8953_VDDCX_AO 3 141 + #define MSM8953_VDDCX_VFL 4 142 + #define MSM8953_VDDMX 5 143 + #define MSM8953_VDDMX_AO 6 143 144 144 145 /* MSM8976 Power Domain Indexes */ 145 146 #define MSM8976_VDDCX 0
+2
include/linux/arm_ffa.h
··· 262 262 int (*memory_reclaim)(u64 g_handle, u32 flags); 263 263 int (*memory_share)(struct ffa_device *dev, 264 264 struct ffa_mem_ops_args *args); 265 + int (*memory_lend)(struct ffa_device *dev, 266 + struct ffa_mem_ops_args *args); 265 267 }; 266 268 267 269 #endif /* _LINUX_ARM_FFA_H */
+23 -1
include/linux/clk/tegra.h
··· 42 42 #endif 43 43 }; 44 44 45 + #ifdef CONFIG_ARCH_TEGRA 45 46 extern struct tegra_cpu_car_ops *tegra_cpu_car_ops; 46 47 47 48 static inline void tegra_wait_cpu_in_reset(u32 cpu) ··· 84 83 85 84 tegra_cpu_car_ops->disable_clock(cpu); 86 85 } 86 + #else 87 + static inline void tegra_wait_cpu_in_reset(u32 cpu) 88 + { 89 + } 87 90 88 - #ifdef CONFIG_PM_SLEEP 91 + static inline void tegra_put_cpu_in_reset(u32 cpu) 92 + { 93 + } 94 + 95 + static inline void tegra_cpu_out_of_reset(u32 cpu) 96 + { 97 + } 98 + 99 + static inline void tegra_enable_cpu_clock(u32 cpu) 100 + { 101 + } 102 + 103 + static inline void tegra_disable_cpu_clock(u32 cpu) 104 + { 105 + } 106 + #endif 107 + 108 + #if defined(CONFIG_ARCH_TEGRA) && defined(CONFIG_PM_SLEEP) 89 109 static inline bool tegra_cpu_rail_off_ready(void) 90 110 { 91 111 if (WARN_ON(!tegra_cpu_car_ops->rail_off_ready))
+3
include/linux/platform_data/ti-sysc.h
··· 50 50 s8 emufree_shift; 51 51 }; 52 52 53 + #define SYSC_MODULE_QUIRK_OTG BIT(30) 54 + #define SYSC_QUIRK_RESET_ON_CTX_LOST BIT(29) 55 + #define SYSC_QUIRK_REINIT_ON_CTX_LOST BIT(28) 53 56 #define SYSC_QUIRK_REINIT_ON_RESUME BIT(27) 54 57 #define SYSC_QUIRK_GPMC_DEBUG BIT(26) 55 58 #define SYSC_MODULE_QUIRK_ENA_RESETDONE BIT(25)
+3
include/linux/soc/mediatek/mtk-mmsys.h
··· 29 29 DDP_COMPONENT_OVL0, 30 30 DDP_COMPONENT_OVL_2L0, 31 31 DDP_COMPONENT_OVL_2L1, 32 + DDP_COMPONENT_OVL_2L2, 32 33 DDP_COMPONENT_OVL1, 34 + DDP_COMPONENT_POSTMASK0, 33 35 DDP_COMPONENT_PWM0, 34 36 DDP_COMPONENT_PWM1, 35 37 DDP_COMPONENT_PWM2, 36 38 DDP_COMPONENT_RDMA0, 37 39 DDP_COMPONENT_RDMA1, 38 40 DDP_COMPONENT_RDMA2, 41 + DDP_COMPONENT_RDMA4, 39 42 DDP_COMPONENT_UFOE, 40 43 DDP_COMPONENT_WDMA0, 41 44 DDP_COMPONENT_WDMA1,
+38
include/linux/soc/qcom/qcom_aoss.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0-only */ 2 + /* 3 + * Copyright (c) 2021, The Linux Foundation. All rights reserved. 4 + */ 5 + 6 + #ifndef __QCOM_AOSS_H__ 7 + #define __QCOM_AOSS_H__ 8 + 9 + #include <linux/err.h> 10 + #include <linux/device.h> 11 + 12 + struct qmp; 13 + 14 + #if IS_ENABLED(CONFIG_QCOM_AOSS_QMP) 15 + 16 + int qmp_send(struct qmp *qmp, const void *data, size_t len); 17 + struct qmp *qmp_get(struct device *dev); 18 + void qmp_put(struct qmp *qmp); 19 + 20 + #else 21 + 22 + static inline int qmp_send(struct qmp *qmp, const void *data, size_t len) 23 + { 24 + return -ENODEV; 25 + } 26 + 27 + static inline struct qmp *qmp_get(struct device *dev) 28 + { 29 + return ERR_PTR(-ENODEV); 30 + } 31 + 32 + static inline void qmp_put(struct qmp *qmp) 33 + { 34 + } 35 + 36 + #endif 37 + 38 + #endif
+2 -4
include/linux/soc/samsung/exynos-chipid.h
··· 9 9 #define __LINUX_SOC_EXYNOS_CHIPID_H 10 10 11 11 #define EXYNOS_CHIPID_REG_PRO_ID 0x00 12 - #define EXYNOS_SUBREV_MASK (0xf << 4) 13 - #define EXYNOS_MAINREV_MASK (0xf << 0) 14 - #define EXYNOS_REV_MASK (EXYNOS_SUBREV_MASK | \ 15 - EXYNOS_MAINREV_MASK) 12 + #define EXYNOS_REV_PART_MASK 0xf 13 + #define EXYNOS_REV_PART_SHIFT 4 16 14 #define EXYNOS_MASK 0xfffff000 17 15 18 16 #define EXYNOS_CHIPID_REG_PKG_ID 0x04
+6 -1
include/linux/tee_drv.h
··· 197 197 * @num_pages: number of locked pages 198 198 * @dmabuf: dmabuf used to for exporting to user space 199 199 * @flags: defined by TEE_SHM_* in tee_drv.h 200 - * @id: unique id of a shared memory object on this device 200 + * @id: unique id of a shared memory object on this device, shared 201 + * with user space 202 + * @sec_world_id: 203 + * secure world assigned id of this shared memory object, not 204 + * used by all drivers 201 205 * 202 206 * This pool is only supposed to be accessed directly from the TEE 203 207 * subsystem and from drivers that implements their own shm pool manager. ··· 217 213 struct dma_buf *dmabuf; 218 214 u32 flags; 219 215 int id; 216 + u64 sec_world_id; 220 217 }; 221 218 222 219 /**
+1
include/memory/renesas-rpc-if.h
··· 59 59 60 60 struct rpcif { 61 61 struct device *dev; 62 + void __iomem *base; 62 63 void __iomem *dirmap; 63 64 struct regmap *regmap; 64 65 struct reset_control *rstc;
+43
include/soc/qcom/spm.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0-only */ 2 + /* 3 + * Copyright (c) 2011-2014, The Linux Foundation. All rights reserved. 4 + * Copyright (c) 2014,2015, Linaro Ltd. 5 + */ 6 + 7 + #ifndef __SPM_H__ 8 + #define __SPM_H__ 9 + 10 + #include <linux/cpuidle.h> 11 + 12 + #define MAX_PMIC_DATA 2 13 + #define MAX_SEQ_DATA 64 14 + 15 + enum pm_sleep_mode { 16 + PM_SLEEP_MODE_STBY, 17 + PM_SLEEP_MODE_RET, 18 + PM_SLEEP_MODE_SPC, 19 + PM_SLEEP_MODE_PC, 20 + PM_SLEEP_MODE_NR, 21 + }; 22 + 23 + struct spm_reg_data { 24 + const u16 *reg_offset; 25 + u32 spm_cfg; 26 + u32 spm_dly; 27 + u32 pmic_dly; 28 + u32 pmic_data[MAX_PMIC_DATA]; 29 + u32 avs_ctl; 30 + u32 avs_limit; 31 + u8 seq[MAX_SEQ_DATA]; 32 + u8 start_index[PM_SLEEP_MODE_NR]; 33 + }; 34 + 35 + struct spm_driver_data { 36 + void __iomem *reg_base; 37 + const struct spm_reg_data *reg_data; 38 + }; 39 + 40 + void spm_set_low_power_mode(struct spm_driver_data *drv, 41 + enum pm_sleep_mode mode); 42 + 43 + #endif /* __SPM_H__ */
+26 -5
include/soc/tegra/fuse.h
··· 6 6 #ifndef __SOC_TEGRA_FUSE_H__ 7 7 #define __SOC_TEGRA_FUSE_H__ 8 8 9 + #include <linux/types.h> 10 + 9 11 #define TEGRA20 0x20 10 12 #define TEGRA30 0x30 11 13 #define TEGRA114 0x35 ··· 23 21 #define TEGRA_FUSE_USB_CALIB_EXT_0 0x250 24 22 25 23 #ifndef __ASSEMBLY__ 26 - 27 - u32 tegra_read_chipid(void); 28 - u8 tegra_get_chip_id(void); 29 - u8 tegra_get_platform(void); 30 - bool tegra_is_silicon(void); 31 24 32 25 enum tegra_revision { 33 26 TEGRA_REVISION_UNKNOWN = 0, ··· 54 57 u32 tegra_read_straps(void); 55 58 u32 tegra_read_ram_code(void); 56 59 int tegra_fuse_readl(unsigned long offset, u32 *value); 60 + u32 tegra_read_chipid(void); 61 + u8 tegra_get_chip_id(void); 62 + u8 tegra_get_platform(void); 63 + bool tegra_is_silicon(void); 57 64 #else 58 65 static struct tegra_sku_info tegra_sku_info __maybe_unused; 59 66 ··· 74 73 static inline int tegra_fuse_readl(unsigned long offset, u32 *value) 75 74 { 76 75 return -ENODEV; 76 + } 77 + 78 + static inline u32 tegra_read_chipid(void) 79 + { 80 + return 0; 81 + } 82 + 83 + static inline u8 tegra_get_chip_id(void) 84 + { 85 + return 0; 86 + } 87 + 88 + static inline u8 tegra_get_platform(void) 89 + { 90 + return 0; 91 + } 92 + 93 + static inline bool tegra_is_silicon(void) 94 + { 95 + return false; 77 96 } 78 97 #endif 79 98
+8 -1
include/soc/tegra/irq.h
··· 6 6 #ifndef __SOC_TEGRA_IRQ_H 7 7 #define __SOC_TEGRA_IRQ_H 8 8 9 - #if defined(CONFIG_ARM) 9 + #include <linux/types.h> 10 + 11 + #if defined(CONFIG_ARM) && defined(CONFIG_ARCH_TEGRA) 10 12 bool tegra_pending_sgi(void); 13 + #else 14 + static inline bool tegra_pending_sgi(void) 15 + { 16 + return false; 17 + } 11 18 #endif 12 19 13 20 #endif /* __SOC_TEGRA_IRQ_H */
+1 -1
include/soc/tegra/pm.h
··· 17 17 TEGRA_SUSPEND_NOT_READY, 18 18 }; 19 19 20 - #if defined(CONFIG_PM_SLEEP) && defined(CONFIG_ARM) 20 + #if defined(CONFIG_PM_SLEEP) && defined(CONFIG_ARM) && defined(CONFIG_ARCH_TEGRA) 21 21 enum tegra_suspend_mode 22 22 tegra_pm_validate_suspend_mode(enum tegra_suspend_mode mode); 23 23
+6 -6
sound/soc/cirrus/ep93xx-i2s.c
··· 111 111 if ((ep93xx_i2s_read_reg(info, EP93XX_I2S_TX0EN) & 0x1) == 0 && 112 112 (ep93xx_i2s_read_reg(info, EP93XX_I2S_RX0EN) & 0x1) == 0) { 113 113 /* Enable clocks */ 114 - clk_enable(info->mclk); 115 - clk_enable(info->sclk); 116 - clk_enable(info->lrclk); 114 + clk_prepare_enable(info->mclk); 115 + clk_prepare_enable(info->sclk); 116 + clk_prepare_enable(info->lrclk); 117 117 118 118 /* Enable i2s */ 119 119 ep93xx_i2s_write_reg(info, EP93XX_I2S_GLCTRL, 1); ··· 156 156 ep93xx_i2s_write_reg(info, EP93XX_I2S_GLCTRL, 0); 157 157 158 158 /* Disable clocks */ 159 - clk_disable(info->lrclk); 160 - clk_disable(info->sclk); 161 - clk_disable(info->mclk); 159 + clk_disable_unprepare(info->lrclk); 160 + clk_disable_unprepare(info->sclk); 161 + clk_disable_unprepare(info->mclk); 162 162 } 163 163 } 164 164