Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'arm-drivers-5.18' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc

Pull ARM driver updates from Arnd Bergmann:
"There are a few separately maintained driver subsystems that we merge
through the SoC tree; notable changes are:

- Memory controller updates, mainly for Tegra and MediaTek SoCs, and
clarifications for the memory controller DT bindings.

- SCMI firmware interface updates, in particular a new transport
based on OP-TEE and support for atomic operations.

- Cleanups to the TEE subsystem, refactoring its memory management.

For SoC-specific drivers without a separate subsystem, changes include:

- Smaller updates and fixes for TI, AT91/SAMA5, Qualcomm and NXP
Layerscape SoCs.

- Driver support for Microchip SAMA5D29, Tesla FSD, Renesas RZ/G2L,
and Qualcomm SM8450.

- Better power management on MediaTek MT81xx, NXP i.MX8MQ, and older
NVIDIA Tegra chips"

* tag 'arm-drivers-5.18' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc: (154 commits)
ARM: spear: fix typos in comments
soc/microchip: fix invalid free in mpfs_sys_controller_delete
soc: s4: Add support for power domains controller
dt-bindings: power: add Amlogic s4 power domains bindings
ARM: at91: add support in soc driver for new SAMA5D29
soc: mediatek: mmsys: add sw0_rst_offset in mmsys driver data
dt-bindings: memory: renesas,rpc-if: Document RZ/V2L SoC
memory: emif: check the pointer temp in get_device_details()
memory: emif: Add check for setup_interrupts
dt-bindings: arm: mediatek: mmsys: add support for MT8186
dt-bindings: mediatek: add compatible for MT8186 pwrap
soc: mediatek: pwrap: add pwrap driver for MT8186 SoC
soc: mediatek: mmsys: add mmsys reset control for MT8186
soc: mediatek: mtk-infracfg: Disable ACP on MT8192
soc: ti: k3-socinfo: Add AM62x JTAG ID
soc: mediatek: add MTK mutex support for MT8186
soc: mediatek: mmsys: add mt8186 mmsys routing table
soc: mediatek: pm-domains: Add support for mt8186
dt-bindings: power: Add MT8186 power domains
soc: mediatek: pm-domains: Add support for mt8195
...

+7644 -1435
+1
Documentation/devicetree/bindings/arm/mediatek/mediatek,mmsys.yaml
···
          - mediatek,mt8167-mmsys
          - mediatek,mt8173-mmsys
          - mediatek,mt8183-mmsys
+         - mediatek,mt8186-mmsys
          - mediatek,mt8192-mmsys
          - mediatek,mt8365-mmsys
      - const: syscon
+2
Documentation/devicetree/bindings/arm/msm/qcom,llcc.yaml
···
          - qcom,sm6350-llcc
          - qcom,sm8150-llcc
          - qcom,sm8250-llcc
+         - qcom,sm8350-llcc
+         - qcom,sm8450-llcc

  reg:
    items:
+198
Documentation/devicetree/bindings/clock/tesla,fsd-clock.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/clock/tesla,fsd-clock.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: Tesla FSD (Full Self-Driving) SoC clock controller 8 + 9 + maintainers: 10 + - Alim Akhtar <alim.akhtar@samsung.com> 11 + - linux-fsd@tesla.com 12 + 13 + description: | 14 + FSD clock controller consist of several clock management unit 15 + (CMU), which generates clocks for various inteernal SoC blocks. 16 + The root clock comes from external OSC clock (24 MHz). 17 + 18 + All available clocks are defined as preprocessor macros in 19 + 'dt-bindings/clock/fsd-clk.h' header. 20 + 21 + properties: 22 + compatible: 23 + enum: 24 + - tesla,fsd-clock-cmu 25 + - tesla,fsd-clock-imem 26 + - tesla,fsd-clock-peric 27 + - tesla,fsd-clock-fsys0 28 + - tesla,fsd-clock-fsys1 29 + - tesla,fsd-clock-mfc 30 + - tesla,fsd-clock-cam_csi 31 + 32 + clocks: 33 + minItems: 1 34 + maxItems: 6 35 + 36 + clock-names: 37 + minItems: 1 38 + maxItems: 6 39 + 40 + "#clock-cells": 41 + const: 1 42 + 43 + reg: 44 + maxItems: 1 45 + 46 + allOf: 47 + - if: 48 + properties: 49 + compatible: 50 + contains: 51 + const: tesla,fsd-clock-cmu 52 + then: 53 + properties: 54 + clocks: 55 + items: 56 + - description: External reference clock (24 MHz) 57 + clock-names: 58 + items: 59 + - const: fin_pll 60 + 61 + - if: 62 + properties: 63 + compatible: 64 + contains: 65 + const: tesla,fsd-clock-imem 66 + then: 67 + properties: 68 + clocks: 69 + items: 70 + - description: External reference clock (24 MHz) 71 + - description: IMEM TCU clock (from CMU_CMU) 72 + - description: IMEM bus clock (from CMU_CMU) 73 + - description: IMEM DMA clock (from CMU_CMU) 74 + clock-names: 75 + items: 76 + - const: fin_pll 77 + - const: dout_cmu_imem_tcuclk 78 + - const: dout_cmu_imem_aclk 79 + - const: dout_cmu_imem_dmaclk 80 + 81 + - if: 82 + properties: 83 + compatible: 84 + contains: 85 + const: 
tesla,fsd-clock-peric 86 + then: 87 + properties: 88 + clocks: 89 + items: 90 + - description: External reference clock (24 MHz) 91 + - description: Shared0 PLL div4 clock (from CMU_CMU) 92 + - description: PERIC shared1 div36 clock (from CMU_CMU) 93 + - description: PERIC shared0 div3 TBU clock (from CMU_CMU) 94 + - description: PERIC shared0 div20 clock (from CMU_CMU) 95 + - description: PERIC shared1 div4 DMAclock (from CMU_CMU) 96 + clock-names: 97 + items: 98 + - const: fin_pll 99 + - const: dout_cmu_pll_shared0_div4 100 + - const: dout_cmu_peric_shared1div36 101 + - const: dout_cmu_peric_shared0div3_tbuclk 102 + - const: dout_cmu_peric_shared0div20 103 + - const: dout_cmu_peric_shared1div4_dmaclk 104 + 105 + - if: 106 + properties: 107 + compatible: 108 + contains: 109 + const: tesla,fsd-clock-fsys0 110 + then: 111 + properties: 112 + clocks: 113 + items: 114 + - description: External reference clock (24 MHz) 115 + - description: Shared0 PLL div6 clock (from CMU_CMU) 116 + - description: FSYS0 shared1 div4 clock (from CMU_CMU) 117 + - description: FSYS0 shared0 div4 clock (from CMU_CMU) 118 + clock-names: 119 + items: 120 + - const: fin_pll 121 + - const: dout_cmu_pll_shared0_div6 122 + - const: dout_cmu_fsys0_shared1div4 123 + - const: dout_cmu_fsys0_shared0div4 124 + 125 + - if: 126 + properties: 127 + compatible: 128 + contains: 129 + const: tesla,fsd-clock-fsys1 130 + then: 131 + properties: 132 + clocks: 133 + items: 134 + - description: External reference clock (24 MHz) 135 + - description: FSYS1 shared0 div8 clock (from CMU_CMU) 136 + - description: FSYS1 shared0 div4 clock (from CMU_CMU) 137 + clock-names: 138 + items: 139 + - const: fin_pll 140 + - const: dout_cmu_fsys1_shared0div8 141 + - const: dout_cmu_fsys1_shared0div4 142 + 143 + - if: 144 + properties: 145 + compatible: 146 + contains: 147 + const: tesla,fsd-clock-mfc 148 + then: 149 + properties: 150 + clocks: 151 + items: 152 + - description: External reference clock (24 MHz) 153 + 
clock-names: 154 + items: 155 + - const: fin_pll 156 + 157 + - if: 158 + properties: 159 + compatible: 160 + contains: 161 + const: tesla,fsd-clock-cam_csi 162 + then: 163 + properties: 164 + clocks: 165 + items: 166 + - description: External reference clock (24 MHz) 167 + clock-names: 168 + items: 169 + - const: fin_pll 170 + 171 + required: 172 + - compatible 173 + - "#clock-cells" 174 + - clocks 175 + - clock-names 176 + - reg 177 + 178 + additionalProperties: false 179 + 180 + examples: 181 + # Clock controller node for CMU_FSYS1 182 + - | 183 + #include <dt-bindings/clock/fsd-clk.h> 184 + 185 + clock_fsys1: clock-controller@16810000 { 186 + compatible = "tesla,fsd-clock-fsys1"; 187 + reg = <0x16810000 0x3000>; 188 + #clock-cells = <1>; 189 + 190 + clocks = <&fin_pll>, 191 + <&clock_cmu DOUT_CMU_FSYS1_SHARED0DIV8>, 192 + <&clock_cmu DOUT_CMU_FSYS1_SHARED0DIV4>; 193 + clock-names = "fin_pll", 194 + "dout_cmu_fsys1_shared0div8", 195 + "dout_cmu_fsys1_shared0div4"; 196 + }; 197 + 198 + ...
+75
Documentation/devicetree/bindings/firmware/arm,scmi.yaml
··· 38 38 The virtio transport only supports a single device. 39 39 items: 40 40 - const: arm,scmi-virtio 41 + - description: SCMI compliant firmware with OP-TEE transport 42 + items: 43 + - const: linaro,scmi-optee 41 44 42 45 interrupts: 43 46 description: ··· 81 78 '#size-cells': 82 79 const: 0 83 80 81 + atomic-threshold-us: 82 + description: 83 + An optional time value, expressed in microseconds, representing, on this 84 + platform, the threshold above which any SCMI command, advertised to have 85 + an higher-than-threshold execution latency, should not be considered for 86 + atomic mode of operation, even if requested. 87 + default: 0 88 + 84 89 arm,smc-id: 85 90 $ref: /schemas/types.yaml#/definitions/uint32 86 91 description: 87 92 SMC id required when using smc or hvc transports 93 + 94 + linaro,optee-channel-id: 95 + $ref: /schemas/types.yaml#/definitions/uint32 96 + description: 97 + Channel specifier required when using OP-TEE transport. 88 98 89 99 protocol@11: 90 100 type: object ··· 211 195 minItems: 1 212 196 maxItems: 2 213 197 198 + linaro,optee-channel-id: 199 + $ref: /schemas/types.yaml#/definitions/uint32 200 + description: 201 + Channel specifier required when using OP-TEE transport and 202 + protocol has a dedicated communication channel. 
203 + 214 204 required: 215 205 - reg 216 206 ··· 248 226 - arm,smc-id 249 227 - shmem 250 228 229 + else: 230 + if: 231 + properties: 232 + compatible: 233 + contains: 234 + const: linaro,scmi-optee 235 + then: 236 + required: 237 + - linaro,optee-channel-id 238 + 251 239 examples: 252 240 - | 253 241 firmware { ··· 271 239 272 240 #address-cells = <1>; 273 241 #size-cells = <0>; 242 + 243 + atomic-threshold-us = <10000>; 274 244 275 245 scmi_devpd: protocol@11 { 276 246 reg = <0x11>; ··· 374 340 reg = <0x11>; 375 341 #power-domain-cells = <1>; 376 342 }; 343 + }; 344 + }; 377 345 346 + - | 347 + firmware { 348 + scmi { 349 + compatible = "linaro,scmi-optee"; 350 + linaro,optee-channel-id = <0>; 351 + 352 + #address-cells = <1>; 353 + #size-cells = <0>; 354 + 355 + scmi_dvfs1: protocol@13 { 356 + reg = <0x13>; 357 + linaro,optee-channel-id = <1>; 358 + shmem = <&cpu_optee_lpri0>; 359 + #clock-cells = <1>; 360 + }; 361 + 362 + scmi_clk0: protocol@14 { 363 + reg = <0x14>; 364 + #clock-cells = <1>; 365 + }; 366 + }; 367 + }; 368 + 369 + soc { 370 + #address-cells = <2>; 371 + #size-cells = <2>; 372 + 373 + sram@51000000 { 374 + compatible = "mmio-sram"; 375 + reg = <0x0 0x51000000 0x0 0x10000>; 376 + 377 + #address-cells = <1>; 378 + #size-cells = <1>; 379 + ranges = <0 0x0 0x51000000 0x10000>; 380 + 381 + cpu_optee_lpri0: optee-sram-section@0 { 382 + compatible = "arm,scmi-shmem"; 383 + reg = <0x0 0x80>; 384 + }; 378 385 }; 379 386 }; 380 387
+135
Documentation/devicetree/bindings/memory-controllers/ddr/jedec,lpddr2-timings.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/memory-controllers/ddr/jedec,lpddr2-timings.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: LPDDR2 SDRAM AC timing parameters for a given speed-bin 8 + 9 + maintainers: 10 + - Krzysztof Kozlowski <krzysztof.kozlowski@canonical.com> 11 + 12 + properties: 13 + compatible: 14 + const: jedec,lpddr2-timings 15 + 16 + max-freq: 17 + $ref: /schemas/types.yaml#/definitions/uint32 18 + description: | 19 + Maximum DDR clock frequency for the speed-bin, in Hz. 20 + 21 + min-freq: 22 + $ref: /schemas/types.yaml#/definitions/uint32 23 + description: | 24 + Minimum DDR clock frequency for the speed-bin, in Hz. 25 + 26 + tCKESR: 27 + $ref: /schemas/types.yaml#/definitions/uint32 28 + description: | 29 + CKE minimum pulse width during SELF REFRESH (low pulse width during 30 + SELF REFRESH) in pico seconds. 31 + 32 + tDQSCK-max: 33 + $ref: /schemas/types.yaml#/definitions/uint32 34 + description: | 35 + DQS output data access time from CK_t/CK_c in pico seconds. 36 + 37 + tDQSCK-max-derated: 38 + $ref: /schemas/types.yaml#/definitions/uint32 39 + description: | 40 + DQS output data access time from CK_t/CK_c, temperature de-rated, in pico 41 + seconds. 42 + 43 + tFAW: 44 + $ref: /schemas/types.yaml#/definitions/uint32 45 + description: | 46 + Four-bank activate window in pico seconds. 47 + 48 + tRAS-max-ns: 49 + description: | 50 + Row active time in nano seconds. 51 + 52 + tRAS-min: 53 + $ref: /schemas/types.yaml#/definitions/uint32 54 + description: | 55 + Row active time in pico seconds. 56 + 57 + tRCD: 58 + $ref: /schemas/types.yaml#/definitions/uint32 59 + description: | 60 + RAS-to-CAS delay in pico seconds. 61 + 62 + tRPab: 63 + $ref: /schemas/types.yaml#/definitions/uint32 64 + description: | 65 + Row precharge time (all banks) in pico seconds. 
66 + 67 + tRRD: 68 + $ref: /schemas/types.yaml#/definitions/uint32 69 + description: | 70 + Active bank A to active bank B in pico seconds. 71 + 72 + tRTP: 73 + $ref: /schemas/types.yaml#/definitions/uint32 74 + description: | 75 + Internal READ to PRECHARGE command delay in pico seconds. 76 + 77 + tWR: 78 + $ref: /schemas/types.yaml#/definitions/uint32 79 + description: | 80 + WRITE recovery time in pico seconds. 81 + 82 + tWTR: 83 + $ref: /schemas/types.yaml#/definitions/uint32 84 + description: | 85 + Internal WRITE-to-READ command delay in pico seconds. 86 + 87 + tXP: 88 + $ref: /schemas/types.yaml#/definitions/uint32 89 + description: | 90 + Exit power-down to next valid command delay in pico seconds. 91 + 92 + tZQCL: 93 + $ref: /schemas/types.yaml#/definitions/uint32 94 + description: | 95 + Long calibration time in pico seconds. 96 + 97 + tZQCS: 98 + $ref: /schemas/types.yaml#/definitions/uint32 99 + description: | 100 + Short calibration time in pico seconds. 101 + 102 + tZQinit: 103 + $ref: /schemas/types.yaml#/definitions/uint32 104 + description: | 105 + Initialization calibration time in pico seconds. 106 + 107 + required: 108 + - compatible 109 + - min-freq 110 + - max-freq 111 + 112 + additionalProperties: false 113 + 114 + examples: 115 + - | 116 + timings { 117 + compatible = "jedec,lpddr2-timings"; 118 + min-freq = <10000000>; 119 + max-freq = <400000000>; 120 + tCKESR = <15000>; 121 + tDQSCK-max = <5500>; 122 + tFAW = <50000>; 123 + tRAS-max-ns = <70000>; 124 + tRAS-min = <42000>; 125 + tRPab = <21000>; 126 + tRCD = <18000>; 127 + tRRD = <10000>; 128 + tRTP = <7500>; 129 + tWR = <15000>; 130 + tWTR = <7500>; 131 + tXP = <7500>; 132 + tZQCL = <360000>; 133 + tZQCS = <90000>; 134 + tZQinit = <1000000>; 135 + };
+17 -6
Documentation/devicetree/bindings/memory-controllers/ddr/jedec,lpddr2.yaml
···
        maximum: 255
        description: |
          Revision 1 value of SDRAM chip. Obtained from device datasheet.
+         Property is deprecated, use revision-id instead.
+       deprecated: true

      revision-id2:
        $ref: /schemas/types.yaml#/definitions/uint32
        maximum: 255
        description: |
          Revision 2 value of SDRAM chip. Obtained from device datasheet.
+         Property is deprecated, use revision-id instead.
+       deprecated: true
+
+     revision-id:
+       $ref: /schemas/types.yaml#/definitions/uint32-array
+       description: |
+         Revision IDs read from Mode Register 6 and 7. One byte per uint32 cell (i.e. <MR6 MR7>).
+       minItems: 2
+       maxItems: 2
+       items:
+         minimum: 0
+         maximum: 255

      density:
        $ref: /schemas/types.yaml#/definitions/uint32
···
      patternProperties:
        "^lpddr2-timings":
-         type: object
+         $ref: jedec,lpddr2-timings.yaml
          description: |
            The lpddr2 node may have one or more child nodes of type "lpddr2-timings".
            "lpddr2-timings" provides AC timing parameters of the device for
            a given speed-bin. The user may provide the timings for as many
-           speed-bins as is required. Please see Documentation/devicetree/
-           bindings/memory-controllers/ddr/lpddr2-timings.txt for more information
-           on "lpddr2-timings".
+           speed-bins as is required.

      required:
        - compatible
···
        compatible = "elpida,ECB240ABACN", "jedec,lpddr2-s4";
        density = <2048>;
        io-width = <32>;
-       revision-id1 = <1>;
-       revision-id2 = <0>;
+       revision-id = <1 0>;

        tRPab-min-tck = <3>;
        tRCD-min-tck = <3>;
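The change above deprecates the two single-byte revision-id1/revision-id2 properties in favor of one two-cell revision-id array (one uint32 cell per Mode Register byte). A minimal migration sketch, using the values from the binding's own example; the node label is illustrative:

```dts
lpddr2: lpddr2 {
	compatible = "elpida,ECB240ABACN", "jedec,lpddr2-s4";
	density = <2048>;
	io-width = <32>;

	/* Deprecated: separate properties for Mode Registers 6 and 7 */
	/* revision-id1 = <1>; */
	/* revision-id2 = <0>; */

	/* Preferred: one cell per byte, i.e. <MR6 MR7> */
	revision-id = <1 0>;
};
```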
+157
Documentation/devicetree/bindings/memory-controllers/ddr/jedec,lpddr3-timings.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/memory-controllers/ddr/jedec,lpddr3-timings.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: LPDDR3 SDRAM AC timing parameters for a given speed-bin 8 + 9 + maintainers: 10 + - Krzysztof Kozlowski <krzysztof.kozlowski@canonical.com> 11 + 12 + properties: 13 + compatible: 14 + const: jedec,lpddr3-timings 15 + 16 + reg: 17 + maxItems: 1 18 + description: | 19 + Maximum DDR clock frequency for the speed-bin, in Hz. 20 + Property is deprecated, use max-freq. 21 + deprecated: true 22 + 23 + max-freq: 24 + $ref: /schemas/types.yaml#/definitions/uint32 25 + description: | 26 + Maximum DDR clock frequency for the speed-bin, in Hz. 27 + 28 + min-freq: 29 + $ref: /schemas/types.yaml#/definitions/uint32 30 + description: | 31 + Minimum DDR clock frequency for the speed-bin, in Hz. 32 + 33 + tCKE: 34 + $ref: /schemas/types.yaml#/definitions/uint32 35 + description: | 36 + CKE minimum pulse width (HIGH and LOW pulse width) in pico seconds. 37 + 38 + tCKESR: 39 + $ref: /schemas/types.yaml#/definitions/uint32 40 + description: | 41 + CKE minimum pulse width during SELF REFRESH (low pulse width during 42 + SELF REFRESH) in pico seconds. 43 + 44 + tFAW: 45 + $ref: /schemas/types.yaml#/definitions/uint32 46 + description: | 47 + Four-bank activate window in pico seconds. 48 + 49 + tMRD: 50 + $ref: /schemas/types.yaml#/definitions/uint32 51 + description: | 52 + Mode register set command delay in pico seconds. 53 + 54 + tR2R-C2C: 55 + $ref: /schemas/types.yaml#/definitions/uint32 56 + description: | 57 + Additional READ-to-READ delay in chip-to-chip cases in pico seconds. 58 + 59 + tRAS: 60 + $ref: /schemas/types.yaml#/definitions/uint32 61 + description: | 62 + Row active time in pico seconds. 
63 + 64 + tRC: 65 + $ref: /schemas/types.yaml#/definitions/uint32 66 + description: | 67 + ACTIVATE-to-ACTIVATE command period in pico seconds. 68 + 69 + tRCD: 70 + $ref: /schemas/types.yaml#/definitions/uint32 71 + description: | 72 + RAS-to-CAS delay in pico seconds. 73 + 74 + tRFC: 75 + $ref: /schemas/types.yaml#/definitions/uint32 76 + description: | 77 + Refresh Cycle time in pico seconds. 78 + 79 + tRPab: 80 + $ref: /schemas/types.yaml#/definitions/uint32 81 + description: | 82 + Row precharge time (all banks) in pico seconds. 83 + 84 + tRPpb: 85 + $ref: /schemas/types.yaml#/definitions/uint32 86 + description: | 87 + Row precharge time (single banks) in pico seconds. 88 + 89 + tRRD: 90 + $ref: /schemas/types.yaml#/definitions/uint32 91 + description: | 92 + Active bank A to active bank B in pico seconds. 93 + 94 + tRTP: 95 + $ref: /schemas/types.yaml#/definitions/uint32 96 + description: | 97 + Internal READ to PRECHARGE command delay in pico seconds. 98 + 99 + tW2W-C2C: 100 + $ref: /schemas/types.yaml#/definitions/uint32 101 + description: | 102 + Additional WRITE-to-WRITE delay in chip-to-chip cases in pico seconds. 103 + 104 + tWR: 105 + $ref: /schemas/types.yaml#/definitions/uint32 106 + description: | 107 + WRITE recovery time in pico seconds. 108 + 109 + tWTR: 110 + $ref: /schemas/types.yaml#/definitions/uint32 111 + description: | 112 + Internal WRITE-to-READ command delay in pico seconds. 113 + 114 + tXP: 115 + $ref: /schemas/types.yaml#/definitions/uint32 116 + description: | 117 + Exit power-down to next valid command delay in pico seconds. 118 + 119 + tXSR: 120 + $ref: /schemas/types.yaml#/definitions/uint32 121 + description: | 122 + SELF REFRESH exit to next valid command delay in pico seconds. 
123 + 124 + required: 125 + - compatible 126 + - min-freq 127 + - max-freq 128 + 129 + additionalProperties: false 130 + 131 + examples: 132 + - | 133 + lpddr3 { 134 + timings { 135 + compatible = "jedec,lpddr3-timings"; 136 + max-freq = <800000000>; 137 + min-freq = <100000000>; 138 + tCKE = <3750>; 139 + tCKESR = <3750>; 140 + tFAW = <25000>; 141 + tMRD = <7000>; 142 + tR2R-C2C = <0>; 143 + tRAS = <23000>; 144 + tRC = <33750>; 145 + tRCD = <10000>; 146 + tRFC = <65000>; 147 + tRPab = <12000>; 148 + tRPpb = <12000>; 149 + tRRD = <6000>; 150 + tRTP = <3750>; 151 + tW2W-C2C = <0>; 152 + tWR = <7500>; 153 + tWTR = <3750>; 154 + tXP = <3750>; 155 + tXSR = <70000>; 156 + }; 157 + };
+263
Documentation/devicetree/bindings/memory-controllers/ddr/jedec,lpddr3.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/memory-controllers/ddr/jedec,lpddr3.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: LPDDR3 SDRAM compliant to JEDEC JESD209-3 8 + 9 + maintainers: 10 + - Krzysztof Kozlowski <krzysztof.kozlowski@canonical.com> 11 + 12 + properties: 13 + compatible: 14 + items: 15 + - enum: 16 + - samsung,K3QF2F20DB 17 + - const: jedec,lpddr3 18 + 19 + '#address-cells': 20 + const: 1 21 + deprecated: true 22 + 23 + density: 24 + $ref: /schemas/types.yaml#/definitions/uint32 25 + description: | 26 + Density in megabits of SDRAM chip. 27 + enum: 28 + - 4096 29 + - 8192 30 + - 16384 31 + - 32768 32 + 33 + io-width: 34 + $ref: /schemas/types.yaml#/definitions/uint32 35 + description: | 36 + IO bus width in bits of SDRAM chip. 37 + enum: 38 + - 32 39 + - 16 40 + 41 + manufacturer-id: 42 + $ref: /schemas/types.yaml#/definitions/uint32 43 + description: | 44 + Manufacturer ID value read from Mode Register 5. The property is 45 + deprecated, manufacturer should be derived from the compatible. 46 + deprecated: true 47 + 48 + revision-id: 49 + $ref: /schemas/types.yaml#/definitions/uint32-array 50 + minItems: 2 51 + maxItems: 2 52 + items: 53 + maximum: 255 54 + description: | 55 + Revision value of SDRAM chip read from Mode Registers 6 and 7. 56 + 57 + '#size-cells': 58 + const: 0 59 + deprecated: true 60 + 61 + tCKE-min-tck: 62 + $ref: /schemas/types.yaml#/definitions/uint32 63 + maximum: 15 64 + description: | 65 + CKE minimum pulse width (HIGH and LOW pulse width) in terms of number 66 + of clock cycles. 67 + 68 + tCKESR-min-tck: 69 + $ref: /schemas/types.yaml#/definitions/uint32 70 + maximum: 15 71 + description: | 72 + CKE minimum pulse width during SELF REFRESH (low pulse width during 73 + SELF REFRESH) in terms of number of clock cycles. 
74 + 75 + tDQSCK-min-tck: 76 + $ref: /schemas/types.yaml#/definitions/uint32 77 + maximum: 15 78 + description: | 79 + DQS output data access time from CK_t/CK_c in terms of number of clock 80 + cycles. 81 + 82 + tFAW-min-tck: 83 + $ref: /schemas/types.yaml#/definitions/uint32 84 + maximum: 63 85 + description: | 86 + Four-bank activate window in terms of number of clock cycles. 87 + 88 + tMRD-min-tck: 89 + $ref: /schemas/types.yaml#/definitions/uint32 90 + maximum: 15 91 + description: | 92 + Mode register set command delay in terms of number of clock cycles. 93 + 94 + tR2R-C2C-min-tck: 95 + $ref: /schemas/types.yaml#/definitions/uint32 96 + enum: [0, 1] 97 + description: | 98 + Additional READ-to-READ delay in chip-to-chip cases in terms of number 99 + of clock cycles. 100 + 101 + tRAS-min-tck: 102 + $ref: /schemas/types.yaml#/definitions/uint32 103 + maximum: 63 104 + description: | 105 + Row active time in terms of number of clock cycles. 106 + 107 + tRC-min-tck: 108 + $ref: /schemas/types.yaml#/definitions/uint32 109 + maximum: 63 110 + description: | 111 + ACTIVATE-to-ACTIVATE command period in terms of number of clock cycles. 112 + 113 + tRCD-min-tck: 114 + $ref: /schemas/types.yaml#/definitions/uint32 115 + maximum: 15 116 + description: | 117 + RAS-to-CAS delay in terms of number of clock cycles. 118 + 119 + tRFC-min-tck: 120 + $ref: /schemas/types.yaml#/definitions/uint32 121 + maximum: 255 122 + description: | 123 + Refresh Cycle time in terms of number of clock cycles. 124 + 125 + tRL-min-tck: 126 + $ref: /schemas/types.yaml#/definitions/uint32 127 + maximum: 15 128 + description: | 129 + READ data latency in terms of number of clock cycles. 130 + 131 + tRPab-min-tck: 132 + $ref: /schemas/types.yaml#/definitions/uint32 133 + maximum: 15 134 + description: | 135 + Row precharge time (all banks) in terms of number of clock cycles. 
136 + 137 + tRPpb-min-tck: 138 + $ref: /schemas/types.yaml#/definitions/uint32 139 + maximum: 15 140 + description: | 141 + Row precharge time (single banks) in terms of number of clock cycles. 142 + 143 + tRRD-min-tck: 144 + $ref: /schemas/types.yaml#/definitions/uint32 145 + maximum: 15 146 + description: | 147 + Active bank A to active bank B in terms of number of clock cycles. 148 + 149 + tRTP-min-tck: 150 + $ref: /schemas/types.yaml#/definitions/uint32 151 + maximum: 15 152 + description: | 153 + Internal READ to PRECHARGE command delay in terms of number of clock 154 + cycles. 155 + 156 + tW2W-C2C-min-tck: 157 + $ref: /schemas/types.yaml#/definitions/uint32 158 + enum: [0, 1] 159 + description: | 160 + Additional WRITE-to-WRITE delay in chip-to-chip cases in terms of number 161 + of clock cycles. 162 + 163 + tWL-min-tck: 164 + $ref: /schemas/types.yaml#/definitions/uint32 165 + maximum: 15 166 + description: | 167 + WRITE data latency in terms of number of clock cycles. 168 + 169 + tWR-min-tck: 170 + $ref: /schemas/types.yaml#/definitions/uint32 171 + maximum: 15 172 + description: | 173 + WRITE recovery time in terms of number of clock cycles. 174 + 175 + tWTR-min-tck: 176 + $ref: /schemas/types.yaml#/definitions/uint32 177 + maximum: 15 178 + description: | 179 + Internal WRITE-to-READ command delay in terms of number of clock cycles. 180 + 181 + tXP-min-tck: 182 + $ref: /schemas/types.yaml#/definitions/uint32 183 + maximum: 255 184 + description: | 185 + Exit power-down to next valid command delay in terms of number of clock 186 + cycles. 187 + 188 + tXSR-min-tck: 189 + $ref: /schemas/types.yaml#/definitions/uint32 190 + maximum: 1023 191 + description: | 192 + SELF REFRESH exit to next valid command delay in terms of number of clock 193 + cycles. 194 + 195 + patternProperties: 196 + "^timings((-[0-9])+|(@[0-9a-f]+))?$": 197 + $ref: jedec,lpddr3-timings.yaml 198 + description: | 199 + The lpddr3 node may have one or more child nodes with timings. 
200 + Each timing node provides AC timing parameters of the device for a given 201 + speed-bin. The user may provide the timings for as many speed-bins as is 202 + required. 203 + 204 + required: 205 + - compatible 206 + - density 207 + - io-width 208 + 209 + additionalProperties: false 210 + 211 + examples: 212 + - | 213 + lpddr3 { 214 + compatible = "samsung,K3QF2F20DB", "jedec,lpddr3"; 215 + density = <16384>; 216 + io-width = <32>; 217 + 218 + tCKE-min-tck = <2>; 219 + tCKESR-min-tck = <2>; 220 + tDQSCK-min-tck = <5>; 221 + tFAW-min-tck = <5>; 222 + tMRD-min-tck = <5>; 223 + tR2R-C2C-min-tck = <0>; 224 + tRAS-min-tck = <5>; 225 + tRC-min-tck = <6>; 226 + tRCD-min-tck = <3>; 227 + tRFC-min-tck = <17>; 228 + tRL-min-tck = <14>; 229 + tRPab-min-tck = <2>; 230 + tRPpb-min-tck = <2>; 231 + tRRD-min-tck = <2>; 232 + tRTP-min-tck = <2>; 233 + tW2W-C2C-min-tck = <0>; 234 + tWL-min-tck = <8>; 235 + tWR-min-tck = <7>; 236 + tWTR-min-tck = <2>; 237 + tXP-min-tck = <2>; 238 + tXSR-min-tck = <12>; 239 + 240 + timings { 241 + compatible = "jedec,lpddr3-timings"; 242 + max-freq = <800000000>; 243 + min-freq = <100000000>; 244 + tCKE = <3750>; 245 + tCKESR = <3750>; 246 + tFAW = <25000>; 247 + tMRD = <7000>; 248 + tR2R-C2C = <0>; 249 + tRAS = <23000>; 250 + tRC = <33750>; 251 + tRCD = <10000>; 252 + tRFC = <65000>; 253 + tRPab = <12000>; 254 + tRPpb = <12000>; 255 + tRRD = <6000>; 256 + tRTP = <3750>; 257 + tW2W-C2C = <0>; 258 + tWR = <7500>; 259 + tWTR = <3750>; 260 + tXP = <3750>; 261 + tXSR = <70000>; 262 + }; 263 + };
-52
Documentation/devicetree/bindings/memory-controllers/ddr/lpddr2-timings.txt
··· 1 - * AC timing parameters of LPDDR2(JESD209-2) memories for a given speed-bin 2 - 3 - Required properties: 4 - - compatible : Should be "jedec,lpddr2-timings" 5 - - min-freq : minimum DDR clock frequency for the speed-bin. Type is <u32> 6 - - max-freq : maximum DDR clock frequency for the speed-bin. Type is <u32> 7 - 8 - Optional properties: 9 - 10 - The following properties represent AC timing parameters from the memory 11 - data-sheet of the device for a given speed-bin. All these properties are 12 - of type <u32> and the default unit is ps (pico seconds). Parameters with 13 - a different unit have a suffix indicating the unit such as 'tRAS-max-ns' 14 - - tRCD 15 - - tWR 16 - - tRAS-min 17 - - tRRD 18 - - tWTR 19 - - tXP 20 - - tRTP 21 - - tDQSCK-max 22 - - tFAW 23 - - tZQCS 24 - - tZQinit 25 - - tRPab 26 - - tZQCL 27 - - tCKESR 28 - - tRAS-max-ns 29 - - tDQSCK-max-derated 30 - 31 - Example: 32 - 33 - timings_elpida_ECB240ABACN_400mhz: lpddr2-timings@0 { 34 - compatible = "jedec,lpddr2-timings"; 35 - min-freq = <10000000>; 36 - max-freq = <400000000>; 37 - tRPab = <21000>; 38 - tRCD = <18000>; 39 - tWR = <15000>; 40 - tRAS-min = <42000>; 41 - tRRD = <10000>; 42 - tWTR = <7500>; 43 - tXP = <7500>; 44 - tRTP = <7500>; 45 - tCKESR = <15000>; 46 - tDQSCK-max = <5500>; 47 - tFAW = <50000>; 48 - tZQCS = <90000>; 49 - tZQCL = <360000>; 50 - tZQinit = <1000000>; 51 - tRAS-max-ns = <70000>; 52 - };
-58
Documentation/devicetree/bindings/memory-controllers/ddr/lpddr3-timings.txt
··· 1 - * AC timing parameters of LPDDR3 memories for a given speed-bin. 2 - 3 - The structures are based on LPDDR2 and extended where needed. 4 - 5 - Required properties: 6 - - compatible : Should be "jedec,lpddr3-timings" 7 - - min-freq : minimum DDR clock frequency for the speed-bin. Type is <u32> 8 - - reg : maximum DDR clock frequency for the speed-bin. Type is <u32> 9 - 10 - Optional properties: 11 - 12 - The following properties represent AC timing parameters from the memory 13 - data-sheet of the device for a given speed-bin. All these properties are 14 - of type <u32> and the default unit is ps (pico seconds). 15 - - tRFC 16 - - tRRD 17 - - tRPab 18 - - tRPpb 19 - - tRCD 20 - - tRC 21 - - tRAS 22 - - tWTR 23 - - tWR 24 - - tRTP 25 - - tW2W-C2C 26 - - tR2R-C2C 27 - - tFAW 28 - - tXSR 29 - - tXP 30 - - tCKE 31 - - tCKESR 32 - - tMRD 33 - 34 - Example: 35 - 36 - timings_samsung_K3QF2F20DB_800mhz: lpddr3-timings@800000000 { 37 - compatible = "jedec,lpddr3-timings"; 38 - reg = <800000000>; /* workaround: it shows max-freq */ 39 - min-freq = <100000000>; 40 - tRFC = <65000>; 41 - tRRD = <6000>; 42 - tRPab = <12000>; 43 - tRPpb = <12000>; 44 - tRCD = <10000>; 45 - tRC = <33750>; 46 - tRAS = <23000>; 47 - tWTR = <3750>; 48 - tWR = <7500>; 49 - tRTP = <3750>; 50 - tW2W-C2C = <0>; 51 - tR2R-C2C = <0>; 52 - tFAW = <25000>; 53 - tXSR = <70000>; 54 - tXP = <3750>; 55 - tCKE = <3750>; 56 - tCKESR = <3750>; 57 - tMRD = <7000>; 58 - };
-107
Documentation/devicetree/bindings/memory-controllers/ddr/lpddr3.txt
···
- * LPDDR3 SDRAM memories compliant to JEDEC JESD209-3C
-
- Required properties:
- - compatible : Should be "<vendor>,<type>", and generic value "jedec,lpddr3".
-   Example "<vendor>,<type>" values:
-     "samsung,K3QF2F20DB"
-
- - density : <u32> representing density in Mb (Mega bits)
- - io-width : <u32> representing bus width. Possible values are 8, 16, 32, 64
- - #address-cells: Must be set to 1
- - #size-cells: Must be set to 0
-
- Optional properties:
-
- - manufacturer-id : <u32> Manufacturer ID value read from Mode Register 5
- - revision-id : <u32 u32> Revision IDs read from Mode Registers 6 and 7
-
- The following optional properties represent the minimum value of some AC
- timing parameters of the DDR device in terms of number of clock cycles.
- These values shall be obtained from the device data-sheet.
- - tRFC-min-tck
- - tRRD-min-tck
- - tRPab-min-tck
- - tRPpb-min-tck
- - tRCD-min-tck
- - tRC-min-tck
- - tRAS-min-tck
- - tWTR-min-tck
- - tWR-min-tck
- - tRTP-min-tck
- - tW2W-C2C-min-tck
- - tR2R-C2C-min-tck
- - tWL-min-tck
- - tDQSCK-min-tck
- - tRL-min-tck
- - tFAW-min-tck
- - tXSR-min-tck
- - tXP-min-tck
- - tCKE-min-tck
- - tCKESR-min-tck
- - tMRD-min-tck
-
- Child nodes:
- - The lpddr3 node may have one or more child nodes of type "lpddr3-timings".
-   "lpddr3-timings" provides AC timing parameters of the device for
-   a given speed-bin. Please see
-   Documentation/devicetree/bindings/memory-controllers/ddr/lpddr3-timings.txt
-   for more information on "lpddr3-timings"
-
- Example:
-
- samsung_K3QF2F20DB: lpddr3 {
- 	compatible = "samsung,K3QF2F20DB", "jedec,lpddr3";
- 	density = <16384>;
- 	io-width = <32>;
- 	manufacturer-id = <1>;
- 	revision-id = <123 234>;
- 	#address-cells = <1>;
- 	#size-cells = <0>;
-
- 	tRFC-min-tck = <17>;
- 	tRRD-min-tck = <2>;
- 	tRPab-min-tck = <2>;
- 	tRPpb-min-tck = <2>;
- 	tRCD-min-tck = <3>;
- 	tRC-min-tck = <6>;
- 	tRAS-min-tck = <5>;
- 	tWTR-min-tck = <2>;
- 	tWR-min-tck = <7>;
- 	tRTP-min-tck = <2>;
- 	tW2W-C2C-min-tck = <0>;
- 	tR2R-C2C-min-tck = <0>;
- 	tWL-min-tck = <8>;
- 	tDQSCK-min-tck = <5>;
- 	tRL-min-tck = <14>;
- 	tFAW-min-tck = <5>;
- 	tXSR-min-tck = <12>;
- 	tXP-min-tck = <2>;
- 	tCKE-min-tck = <2>;
- 	tCKESR-min-tck = <2>;
- 	tMRD-min-tck = <5>;
-
- 	timings_samsung_K3QF2F20DB_800mhz: lpddr3-timings@800000000 {
- 		compatible = "jedec,lpddr3-timings";
- 		/* workaround: 'reg' shows max-freq */
- 		reg = <800000000>;
- 		min-freq = <100000000>;
- 		tRFC = <65000>;
- 		tRRD = <6000>;
- 		tRPab = <12000>;
- 		tRPpb = <12000>;
- 		tRCD = <10000>;
- 		tRC = <33750>;
- 		tRAS = <23000>;
- 		tWTR = <3750>;
- 		tWR = <7500>;
- 		tRTP = <3750>;
- 		tW2W-C2C = <0>;
- 		tR2R-C2C = <0>;
- 		tFAW = <25000>;
- 		tXSR = <70000>;
- 		tXP = <3750>;
- 		tCKE = <3750>;
- 		tCKESR = <3750>;
- 		tMRD = <7000>;
- 	};
- }
+113
Documentation/devicetree/bindings/memory-controllers/fsl/fsl,ifc.yaml
···
+ # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+ %YAML 1.2
+ ---
+ $id: http://devicetree.org/schemas/memory-controllers/fsl/fsl,ifc.yaml#
+ $schema: http://devicetree.org/meta-schemas/core.yaml#
+
+ title: FSL/NXP Integrated Flash Controller
+
+ maintainers:
+   - Li Yang <leoyang.li@nxp.com>
+
+ description: |
+   NXP's integrated flash controller (IFC) is an advanced version of the
+   enhanced local bus controller which includes similar programming and signal
+   interfaces with an extended feature set. The IFC provides access to multiple
+   external memory types, such as NAND flash (SLC and MLC), NOR flash, EPROM,
+   SRAM and other memories where address and data are shared on a bus.
+
+ properties:
+   $nodename:
+     pattern: "^memory-controller@[0-9a-f]+$"
+
+   compatible:
+     const: fsl,ifc
+
+   "#address-cells":
+     enum: [2, 3]
+     description: |
+       Should be either two or three. The first cell is the chipselect
+       number, and the remaining cells are the offset into the chipselect.
+
+   "#size-cells":
+     enum: [1, 2]
+     description: |
+       Either one or two, depending on how large each chipselect can be.
+
+   reg:
+     maxItems: 1
+
+   interrupts:
+     minItems: 1
+     maxItems: 2
+     description: |
+       IFC may have one or two interrupts. If two interrupt specifiers are
+       present, the first is the "common" interrupt (CM_EVTER_STAT), and the
+       second is the NAND interrupt (NAND_EVTER_STAT). If there is only one,
+       that interrupt reports both types of event.
+
+   little-endian:
+     type: boolean
+     description: |
+       If this property is absent, the big-endian mode will be in use as default
+       for registers.
+
+   ranges:
+     description: |
+       Each range corresponds to a single chipselect, and covers the entire
+       access window as configured.
+
+ patternProperties:
+   "^.*@[a-f0-9]+(,[a-f0-9]+)+$":
+     type: object
+     description: |
+       Child device nodes describe the devices connected to IFC such as NOR (e.g.
+       cfi-flash) and NAND (fsl,ifc-nand). There might be board specific devices
+       like FPGAs, CPLDs, etc.
+
+     required:
+       - compatible
+       - reg
+
+ required:
+   - compatible
+   - reg
+   - interrupts
+
+ additionalProperties: false
+
+ examples:
+   - |
+     soc {
+         #address-cells = <2>;
+         #size-cells = <2>;
+
+         memory-controller@ffe1e000 {
+             compatible = "fsl,ifc";
+             #address-cells = <2>;
+             #size-cells = <1>;
+             reg = <0x0 0xffe1e000 0 0x2000>;
+             interrupts = <16 2 19 2>;
+             little-endian;
+
+             /* NOR, NAND Flashes and CPLD on board */
+             ranges = <0x0 0x0 0x0 0xee000000 0x02000000>,
+                      <0x1 0x0 0x0 0xffa00000 0x00010000>,
+                      <0x3 0x0 0x0 0xffb00000 0x00020000>;
+
+             flash@0,0 {
+                 #address-cells = <1>;
+                 #size-cells = <1>;
+                 compatible = "cfi-flash";
+                 reg = <0x0 0x0 0x2000000>;
+                 bank-width = <2>;
+                 device-width = <1>;
+
+                 partition@0 {
+                     /* 32MB for user data */
+                     reg = <0x0 0x02000000>;
+                     label = "NOR Data";
+                 };
+             };
+         };
+     };
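In the example above, each child address is a (chipselect, offset) pair, and each `ranges` tuple maps one chipselect window onto a parent bus address. A rough sketch of that translation, with the table hand-copied from the example node (the struct and function names are illustrative only, not the kernel's implementation):

```c
#include <stddef.h>
#include <stdint.h>

/* One entry per ranges tuple: <cs child-offset parent-address size> */
struct ifc_range {
	uint32_t cs;
	uint64_t child_off;
	uint64_t parent;
	uint64_t size;
};

/* Mirrors the example: NOR at CS0, NAND at CS1, CPLD at CS3 */
static const struct ifc_range ranges[] = {
	{ 0, 0x0, 0xee000000, 0x02000000 },
	{ 1, 0x0, 0xffa00000, 0x00010000 },
	{ 3, 0x0, 0xffb00000, 0x00020000 },
};

/* Translate a child (cs, offset) to a parent address; 0 if unmapped. */
static uint64_t ifc_translate(uint32_t cs, uint64_t offset)
{
	for (size_t i = 0; i < sizeof(ranges) / sizeof(ranges[0]); i++) {
		if (ranges[i].cs == cs &&
		    offset >= ranges[i].child_off &&
		    offset - ranges[i].child_off < ranges[i].size)
			return ranges[i].parent + (offset - ranges[i].child_off);
	}
	return 0;
}
```

So the NOR node's `reg = <0x0 0x0 0x2000000>` lands at 0xee000000 on the parent bus, which is why `flash@0,0` needs no parent-bus address of its own.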
-82
Documentation/devicetree/bindings/memory-controllers/fsl/ifc.txt
···
- Integrated Flash Controller
-
- Properties:
- - name : Should be ifc
- - compatible : should contain "fsl,ifc". The version of the integrated
-                flash controller can be found in the IFC_REV register at
-                offset zero.
-
- - #address-cells : Should be either two or three. The first cell is the
-                    chipselect number, and the remaining cells are the
-                    offset into the chipselect.
- - #size-cells : Either one or two, depending on how large each chipselect
-                 can be.
- - reg : Offset and length of the register set for the device
- - interrupts: IFC may have one or two interrupts. If two interrupt
-               specifiers are present, the first is the "common"
-               interrupt (CM_EVTER_STAT), and the second is the NAND
-               interrupt (NAND_EVTER_STAT). If there is only one,
-               that interrupt reports both types of event.
-
- - little-endian : If this property is absent, the big-endian mode will
-                   be in use as default for registers.
-
- - ranges : Each range corresponds to a single chipselect, and covers
-            the entire access window as configured.
-
- Child device nodes describe the devices connected to IFC such as NOR (e.g.
- cfi-flash) and NAND (fsl,ifc-nand). There might be board specific devices
- like FPGAs, CPLDs, etc.
-
- Example:
-
- 	ifc@ffe1e000 {
- 		compatible = "fsl,ifc", "simple-bus";
- 		#address-cells = <2>;
- 		#size-cells = <1>;
- 		reg = <0x0 0xffe1e000 0 0x2000>;
- 		interrupts = <16 2 19 2>;
- 		little-endian;
-
- 		/* NOR, NAND Flashes and CPLD on board */
- 		ranges = <0x0 0x0 0x0 0xee000000 0x02000000
- 			  0x1 0x0 0x0 0xffa00000 0x00010000
- 			  0x3 0x0 0x0 0xffb00000 0x00020000>;
-
- 		flash@0,0 {
- 			#address-cells = <1>;
- 			#size-cells = <1>;
- 			compatible = "cfi-flash";
- 			reg = <0x0 0x0 0x2000000>;
- 			bank-width = <2>;
- 			device-width = <1>;
-
- 			partition@0 {
- 				/* 32MB for user data */
- 				reg = <0x0 0x02000000>;
- 				label = "NOR Data";
- 			};
- 		};
-
- 		flash@1,0 {
- 			#address-cells = <1>;
- 			#size-cells = <1>;
- 			compatible = "fsl,ifc-nand";
- 			reg = <0x1 0x0 0x10000>;
-
- 			partition@0 {
- 				/* This location must not be altered */
- 				/* 1MB for u-boot Bootloader Image */
- 				reg = <0x0 0x00100000>;
- 				label = "NAND U-Boot Image";
- 				read-only;
- 			};
- 		};
-
- 		cpld@3,0 {
- 			#address-cells = <1>;
- 			#size-cells = <1>;
- 			compatible = "fsl,p1010rdb-cpld";
- 			reg = <0x3 0x0 0x000001f>;
- 		};
- 	};
+15 -17
Documentation/devicetree/bindings/memory-controllers/mediatek,smi-common.yaml
···
  MediaTek SMI have two generations of HW architecture, here is the list
  which generation the SoCs use:
  generation 1: mt2701 and mt7623.
- generation 2: mt2712, mt6779, mt8167, mt8173, mt8183, mt8192 and mt8195.
+ generation 2: mt2712, mt6779, mt8167, mt8173, mt8183, mt8186, mt8192 and mt8195.
  
  There's slight differences between the two SMI, for generation 2, the
  register which control the iommu port is at each larb's register base. But
···
          - mediatek,mt8167-smi-common
          - mediatek,mt8173-smi-common
          - mediatek,mt8183-smi-common
+         - mediatek,mt8186-smi-common
          - mediatek,mt8192-smi-common
          - mediatek,mt8195-smi-common-vdo
          - mediatek,mt8195-smi-common-vpp
···
            - mediatek,mt2701-smi-common
    then:
      properties:
-       clock:
-         items:
-           minItems: 3
-           maxItems: 3
+       clocks:
+         minItems: 3
+         maxItems: 3
        clock-names:
          items:
            - const: apb
···
      required:
        - mediatek,smi
      properties:
-       clock:
-         items:
-           minItems: 3
-           maxItems: 3
+       clocks:
+         minItems: 3
+         maxItems: 3
        clock-names:
          items:
            - const: apb
···
          enum:
            - mediatek,mt6779-smi-common
            - mediatek,mt8183-smi-common
+           - mediatek,mt8186-smi-common
            - mediatek,mt8192-smi-common
            - mediatek,mt8195-smi-common-vdo
            - mediatek,mt8195-smi-common-vpp
  
    then:
      properties:
-       clock:
-         items:
-           minItems: 4
-           maxItems: 4
+       clocks:
+         minItems: 4
+         maxItems: 4
        clock-names:
          items:
            - const: apb
···
  
  else: # for gen2 HW that don't have gals
    properties:
-     clock:
-       items:
-         minItems: 2
-         maxItems: 2
+     clocks:
+       minItems: 2
+       maxItems: 2
      clock-names:
        items:
          - const: apb
+10 -9
Documentation/devicetree/bindings/memory-controllers/mediatek,smi-larb.yaml
···
        - mediatek,mt8167-smi-larb
        - mediatek,mt8173-smi-larb
        - mediatek,mt8183-smi-larb
+       - mediatek,mt8186-smi-larb
        - mediatek,mt8192-smi-larb
        - mediatek,mt8195-smi-larb
  
···
      compatible:
        enum:
          - mediatek,mt8183-smi-larb
+         - mediatek,mt8186-smi-larb
          - mediatek,mt8195-smi-larb
  
    then:
      properties:
-       clock:
-         items:
-           minItems: 3
-           maxItems: 3
+       clocks:
+         minItems: 2
+         maxItems: 3
        clock-names:
+         minItems: 2
          items:
            - const: apb
            - const: smi
···
    else:
      properties:
-       clock:
-         items:
-           minItems: 2
-           maxItems: 2
+       clocks:
+         minItems: 2
+         maxItems: 2
        clock-names:
          items:
            - const: apb
···
          - mediatek,mt2701-smi-larb
          - mediatek,mt2712-smi-larb
          - mediatek,mt6779-smi-larb
-         - mediatek,mt8167-smi-larb
+         - mediatek,mt8186-smi-larb
          - mediatek,mt8192-smi-larb
          - mediatek,mt8195-smi-larb
  
+2 -1
Documentation/devicetree/bindings/memory-controllers/renesas,rpc-if.yaml
···
      - items:
          - enum:
              - renesas,r9a07g044-rpc-if # RZ/G2{L,LC}
-         - const: renesas,rzg2l-rpc-if # RZ/G2L family
+             - renesas,r9a07g054-rpc-if # RZ/V2L
+         - const: renesas,rzg2l-rpc-if
  
    reg:
      items:
+1 -2
Documentation/devicetree/bindings/memory-controllers/samsung,exynos5422-dmc.yaml
···
      $ref: '/schemas/types.yaml#/definitions/phandle'
      description: |
        phandle of the connected DRAM memory device. For more information please
-       refer to documentation file:
-       Documentation/devicetree/bindings/memory-controllers/ddr/lpddr3.txt
+       refer to jedec,lpddr3.yaml.
  
    operating-points-v2: true
  
+2 -1
Documentation/devicetree/bindings/power/amlogic,meson-sec-pwrc.yaml
···
    - Jianxin Pan <jianxin.pan@amlogic.com>
  
  description: |+
-   Secure Power Domains used in Meson A1/C1 SoCs, and should be the child node
+   Secure Power Domains used in Meson A1/C1/S4 SoCs, and should be the child node
    of secure-monitor.
  
  properties:
    compatible:
      enum:
        - amlogic,meson-a1-pwrc
+       - amlogic,meson-s4-pwrc
  
    "#power-domain-cells":
      const: 1
+3
Documentation/devicetree/bindings/power/mediatek,power-controller.yaml
···
        - mediatek,mt8167-power-controller
        - mediatek,mt8173-power-controller
        - mediatek,mt8183-power-controller
+       - mediatek,mt8186-power-controller
        - mediatek,mt8192-power-controller
+       - mediatek,mt8195-power-controller
  
    '#power-domain-cells':
      const: 1
···
        "include/dt-bindings/power/mt8173-power.h" - for MT8173 type power domain.
        "include/dt-bindings/power/mt8183-power.h" - for MT8183 type power domain.
        "include/dt-bindings/power/mt8192-power.h" - for MT8192 type power domain.
+       "include/dt-bindings/power/mt8195-power.h" - for MT8195 type power domain.
      maxItems: 1
  
    clocks:
+1
Documentation/devicetree/bindings/power/qcom,rpmpd.yaml
···
    compatible:
      enum:
        - qcom,mdm9607-rpmpd
+       - qcom,msm8226-rpmpd
        - qcom,msm8916-rpmpd
        - qcom,msm8939-rpmpd
        - qcom,msm8953-rpmpd
+16
Documentation/devicetree/bindings/remoteproc/qcom,adsp.yaml
···
        - qcom,sm8350-cdsp-pas
        - qcom,sm8350-slpi-pas
        - qcom,sm8350-mpss-pas
+       - qcom,sm8450-adsp-pas
+       - qcom,sm8450-cdsp-pas
+       - qcom,sm8450-mpss-pas
+       - qcom,sm8450-slpi-pas
  
    reg:
      maxItems: 1
···
              - qcom,sm8350-cdsp-pas
              - qcom,sm8350-slpi-pas
              - qcom,sm8350-mpss-pas
+             - qcom,sm8450-adsp-pas
+             - qcom,sm8450-cdsp-pas
+             - qcom,sm8450-slpi-pas
+             - qcom,sm8450-mpss-pas
    then:
      properties:
        clocks:
···
              - qcom,sm8350-adsp-pas
              - qcom,sm8350-cdsp-pas
              - qcom,sm8350-slpi-pas
+             - qcom,sm8450-adsp-pas
+             - qcom,sm8450-cdsp-pas
+             - qcom,sm8450-slpi-pas
    then:
      properties:
        interrupts:
···
              - qcom,sm6350-mpss-pas
              - qcom,sm8150-mpss-pas
              - qcom,sm8350-mpss-pas
+             - qcom,sm8450-mpss-pas
    then:
      properties:
        interrupts:
···
              - qcom,sm6350-mpss-pas
              - qcom,sm8150-mpss-pas
              - qcom,sm8350-mpss-pas
+             - qcom,sm8450-mpss-pas
    then:
      properties:
        power-domains:
···
              - qcom,sm8250-slpi-pas
              - qcom,sm8350-adsp-pas
              - qcom,sm8350-slpi-pas
+             - qcom,sm8450-adsp-pas
+             - qcom,sm8450-slpi-pas
    then:
      properties:
        power-domains:
···
          contains:
            enum:
              - qcom,sm8350-cdsp-pas
+             - qcom,sm8450-cdsp-pas
    then:
      properties:
        power-domains:
+1
Documentation/devicetree/bindings/soc/mediatek/pwrap.txt
···
  		"mediatek,mt8135-pwrap" for MT8135 SoCs
  		"mediatek,mt8173-pwrap" for MT8173 SoCs
  		"mediatek,mt8183-pwrap" for MT8183 SoCs
+ 		"mediatek,mt8186-pwrap" for MT8186 SoCs
  		"mediatek,mt8195-pwrap" for MT8195 SoCs
  		"mediatek,mt8516-pwrap" for MT8516 SoCs
  - interrupts: IRQ for pwrap in SOC
+1 -2
arch/arm/mach-qcom/platsmp.c
···
  {
  	int cpu;
  
- 	if (qcom_scm_set_cold_boot_addr(secondary_startup_arm,
- 					cpu_present_mask)) {
+ 	if (qcom_scm_set_cold_boot_addr(secondary_startup_arm)) {
  		for_each_present_cpu(cpu) {
  			if (cpu == smp_processor_id())
  				continue;
+1 -1
arch/arm/mach-spear/spear13xx.c
···
  /*
   * 512KB (64KB/way), 8-way associativity, parity supported
   *
-  * FIXME: 9th bit, of Auxillary Controller register must be set
+  * FIXME: 9th bit, of Auxiliary Controller register must be set
   * for some spear13xx devices for stable L2 operation.
   *
   * Enable Early BRESP, L2 prefetch for Instruction and Data,
+126 -9
drivers/bus/imx-weim.c
···
  	struct cs_timing cs[MAX_CS_COUNT];
  };
  
+ struct weim_priv {
+ 	void __iomem *base;
+ 	struct cs_timing_state timing_state;
+ };
+
  static const struct of_device_id weim_id_table[] = {
  	/* i.MX1/21 */
  	{ .compatible = "fsl,imx1-weim", .data = &imx1_weim_devtype, },
···
  }
  
  /* Parse and set the timing for this device. */
- static int weim_timing_setup(struct device *dev,
- 			     struct device_node *np, void __iomem *base,
- 			     const struct imx_weim_devtype *devtype,
- 			     struct cs_timing_state *ts)
+ static int weim_timing_setup(struct device *dev, struct device_node *np,
+ 			     const struct imx_weim_devtype *devtype)
  {
  	u32 cs_idx, value[MAX_CS_REGS_COUNT];
  	int i, ret;
  	int reg_idx, num_regs;
  	struct cs_timing *cst;
+ 	struct weim_priv *priv;
+ 	struct cs_timing_state *ts;
+ 	void __iomem *base;
  
  	if (WARN_ON(devtype->cs_regs_count > MAX_CS_REGS_COUNT))
  		return -EINVAL;
  	if (WARN_ON(devtype->cs_count > MAX_CS_COUNT))
  		return -EINVAL;
+
+ 	priv = dev_get_drvdata(dev);
+ 	base = priv->base;
+ 	ts = &priv->timing_state;
  
  	ret = of_property_read_u32_array(np, "fsl,weim-cs-timing",
  					 value, devtype->cs_regs_count);
···
  	return 0;
  }
  
- static int weim_parse_dt(struct platform_device *pdev, void __iomem *base)
+ static int weim_parse_dt(struct platform_device *pdev)
  {
  	const struct of_device_id *of_id = of_match_device(weim_id_table,
  							   &pdev->dev);
  	const struct imx_weim_devtype *devtype = of_id->data;
  	struct device_node *child;
  	int ret, have_child = 0;
- 	struct cs_timing_state ts = {};
+ 	struct weim_priv *priv;
+ 	void __iomem *base;
  	u32 reg;
  
  	if (devtype == &imx50_weim_devtype) {
···
  		if (ret)
  			return ret;
  	}
+
+ 	priv = dev_get_drvdata(&pdev->dev);
+ 	base = priv->base;
  
  	if (of_property_read_bool(pdev->dev.of_node, "fsl,burst-clk-enable")) {
  		if (devtype->wcr_bcm) {
···
  	}
  
  	for_each_available_child_of_node(pdev->dev.of_node, child) {
- 		ret = weim_timing_setup(&pdev->dev, child, base, devtype, &ts);
+ 		ret = weim_timing_setup(&pdev->dev, child, devtype);
  		if (ret)
  			dev_warn(&pdev->dev, "%pOF set timing failed.\n",
  				 child);
···
  
  static int weim_probe(struct platform_device *pdev)
  {
+ 	struct weim_priv *priv;
  	struct resource *res;
  	struct clk *clk;
  	void __iomem *base;
  	int ret;
+
+ 	priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL);
+ 	if (!priv)
+ 		return -ENOMEM;
  
  	/* get the resource */
  	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
  	base = devm_ioremap_resource(&pdev->dev, res);
  	if (IS_ERR(base))
  		return PTR_ERR(base);
+
+ 	priv->base = base;
+ 	dev_set_drvdata(&pdev->dev, priv);
  
  	/* get the clock */
  	clk = devm_clk_get(&pdev->dev, NULL);
···
  		return ret;
  
  	/* parse the device node */
- 	ret = weim_parse_dt(pdev, base);
+ 	ret = weim_parse_dt(pdev);
  	if (ret)
  		clk_disable_unprepare(clk);
  	else
···
  	return ret;
  }
  
+ #if IS_ENABLED(CONFIG_OF_DYNAMIC)
+ static int of_weim_notify(struct notifier_block *nb, unsigned long action,
+ 			  void *arg)
+ {
+ 	const struct imx_weim_devtype *devtype;
+ 	struct of_reconfig_data *rd = arg;
+ 	const struct of_device_id *of_id;
+ 	struct platform_device *pdev;
+ 	int ret = NOTIFY_OK;
+
+ 	switch (of_reconfig_get_state_change(action, rd)) {
+ 	case OF_RECONFIG_CHANGE_ADD:
+ 		of_id = of_match_node(weim_id_table, rd->dn->parent);
+ 		if (!of_id)
+ 			return NOTIFY_OK; /* not for us */
+
+ 		devtype = of_id->data;
+
+ 		pdev = of_find_device_by_node(rd->dn->parent);
+ 		if (!pdev) {
+ 			pr_err("%s: could not find platform device for '%pOF'\n",
+ 				__func__, rd->dn->parent);
+
+ 			return notifier_from_errno(-EINVAL);
+ 		}
+
+ 		if (weim_timing_setup(&pdev->dev, rd->dn, devtype))
+ 			dev_warn(&pdev->dev,
+ 				 "Failed to setup timing for '%pOF'\n", rd->dn);
+
+ 		if (!of_node_check_flag(rd->dn, OF_POPULATED)) {
+ 			if (!of_platform_device_create(rd->dn, NULL, &pdev->dev)) {
+ 				dev_err(&pdev->dev,
+ 					"Failed to create child device '%pOF'\n",
+ 					rd->dn);
+ 				ret = notifier_from_errno(-EINVAL);
+ 			}
+ 		}
+
+ 		platform_device_put(pdev);
+
+ 		break;
+ 	case OF_RECONFIG_CHANGE_REMOVE:
+ 		if (!of_node_check_flag(rd->dn, OF_POPULATED))
+ 			return NOTIFY_OK; /* device already destroyed */
+
+ 		of_id = of_match_node(weim_id_table, rd->dn->parent);
+ 		if (!of_id)
+ 			return NOTIFY_OK; /* not for us */
+
+ 		pdev = of_find_device_by_node(rd->dn);
+ 		if (!pdev) {
+ 			dev_err(&pdev->dev,
+ 				"Could not find platform device for '%pOF'\n",
+ 				rd->dn);
+
+ 			ret = notifier_from_errno(-EINVAL);
+ 		} else {
+ 			of_platform_device_destroy(&pdev->dev, NULL);
+ 			platform_device_put(pdev);
+ 		}
+
+ 		break;
+ 	default:
+ 		break;
+ 	}
+
+ 	return ret;
+ }
+
+ struct notifier_block weim_of_notifier = {
+ 	.notifier_call = of_weim_notify,
+ };
+ #endif /* IS_ENABLED(CONFIG_OF_DYNAMIC) */
+
  static struct platform_driver weim_driver = {
  	.driver = {
  		.name = "imx-weim",
···
  	},
  	.probe = weim_probe,
  };
- module_platform_driver(weim_driver);
+
+ static int __init weim_init(void)
+ {
+ #if IS_ENABLED(CONFIG_OF_DYNAMIC)
+ 	WARN_ON(of_reconfig_notifier_register(&weim_of_notifier));
+ #endif /* IS_ENABLED(CONFIG_OF_DYNAMIC) */
+
+ 	return platform_driver_register(&weim_driver);
+ }
+ module_init(weim_init);
+
+ static void __exit weim_exit(void)
+ {
+ #if IS_ENABLED(CONFIG_OF_DYNAMIC)
+ 	of_reconfig_notifier_unregister(&weim_of_notifier);
+ #endif /* IS_ENABLED(CONFIG_OF_DYNAMIC) */
+
+ 	return platform_driver_unregister(&weim_driver);
+ }
+ module_exit(weim_exit);
  
  MODULE_AUTHOR("Freescale Semiconductor Inc.");
  MODULE_DESCRIPTION("i.MX EIM Controller Driver");
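The refactoring above moves `base` and the per-chipselect timing state into a `weim_priv` hung off the device via drvdata, so the new OF_RECONFIG notifier can recover them from nothing but a `struct device *`. The drvdata pattern in miniature, with plain C stand-ins for the kernel structures (everything here is a simplified sketch, not the driver's real types):

```c
#include <stdint.h>

/* Minimal stand-in for struct device's driver-data slot */
struct device {
	void *drvdata;
};

/* Stand-in for the driver's private state (weim_priv in the patch) */
struct weim_priv {
	uintptr_t base;        /* stand-in for void __iomem *base */
	int cs_configured[4];  /* stand-in for cs_timing_state */
};

static void dev_set_drvdata(struct device *dev, void *data)
{
	dev->drvdata = data;
}

static void *dev_get_drvdata(const struct device *dev)
{
	return dev->drvdata;
}

/*
 * A callback that only receives the device (like weim_timing_setup after
 * the patch) can still reach the controller state:
 */
static uintptr_t weim_base_of(const struct device *dev)
{
	const struct weim_priv *priv = dev_get_drvdata(dev);

	return priv->base;
}
```

Before the patch, `base` and the timing state were threaded through every call as extra parameters; attaching them to the device is what lets the notifier call `weim_timing_setup()` for hot-added child nodes.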
+3 -3
drivers/char/hw_random/optee-rng.c
···
  	struct optee_rng_private *pvt_data = to_optee_rng_private(rng);
  	struct tee_shm *entropy_shm_pool = NULL;
  
- 	entropy_shm_pool = tee_shm_alloc(pvt_data->ctx, MAX_ENTROPY_REQ_SZ,
- 					 TEE_SHM_MAPPED | TEE_SHM_DMA_BUF);
+ 	entropy_shm_pool = tee_shm_alloc_kernel_buf(pvt_data->ctx,
+ 						    MAX_ENTROPY_REQ_SZ);
  	if (IS_ERR(entropy_shm_pool)) {
- 		dev_err(pvt_data->dev, "tee_shm_alloc failed\n");
+ 		dev_err(pvt_data->dev, "tee_shm_alloc_kernel_buf failed\n");
  		return PTR_ERR(entropy_shm_pool);
  	}
  
+60 -11
drivers/clk/clk-scmi.c
···
  /*
   * System Control and Power Interface (SCMI) Protocol based clock driver
   *
-  * Copyright (C) 2018-2021 ARM Ltd.
+  * Copyright (C) 2018-2022 ARM Ltd.
   */
  
  #include <linux/clk-provider.h>
···
  	scmi_proto_clk_ops->disable(clk->ph, clk->id);
  }
  
+ static int scmi_clk_atomic_enable(struct clk_hw *hw)
+ {
+ 	struct scmi_clk *clk = to_scmi_clk(hw);
+
+ 	return scmi_proto_clk_ops->enable_atomic(clk->ph, clk->id);
+ }
+
+ static void scmi_clk_atomic_disable(struct clk_hw *hw)
+ {
+ 	struct scmi_clk *clk = to_scmi_clk(hw);
+
+ 	scmi_proto_clk_ops->disable_atomic(clk->ph, clk->id);
+ }
+
+ /*
+  * We can provide enable/disable atomic callbacks only if the underlying SCMI
+  * transport for an SCMI instance is configured to handle SCMI commands in an
+  * atomic manner.
+  *
+  * When no SCMI atomic transport support is available we instead provide only
+  * the prepare/unprepare API, as allowed by the clock framework when atomic
+  * calls are not available.
+  *
+  * Two distinct sets of clk_ops are provided since we could have multiple SCMI
+  * instances with different underlying transport quality, so they cannot be
+  * shared.
+  */
  static const struct clk_ops scmi_clk_ops = {
  	.recalc_rate = scmi_clk_recalc_rate,
  	.round_rate = scmi_clk_round_rate,
  	.set_rate = scmi_clk_set_rate,
- 	/*
- 	 * We can't provide enable/disable callback as we can't perform the same
- 	 * in atomic context. Since the clock framework provides standard API
- 	 * clk_prepare_enable that helps cases using clk_enable in non-atomic
- 	 * context, it should be fine providing prepare/unprepare.
- 	 */
  	.prepare = scmi_clk_enable,
  	.unprepare = scmi_clk_disable,
  };
  
- static int scmi_clk_ops_init(struct device *dev, struct scmi_clk *sclk)
+ static const struct clk_ops scmi_atomic_clk_ops = {
+ 	.recalc_rate = scmi_clk_recalc_rate,
+ 	.round_rate = scmi_clk_round_rate,
+ 	.set_rate = scmi_clk_set_rate,
+ 	.enable = scmi_clk_atomic_enable,
+ 	.disable = scmi_clk_atomic_disable,
+ };
+
+ static int scmi_clk_ops_init(struct device *dev, struct scmi_clk *sclk,
+ 			     const struct clk_ops *scmi_ops)
  {
  	int ret;
  	unsigned long min_rate, max_rate;
···
  	struct clk_init_data init = {
  		.flags = CLK_GET_RATE_NOCACHE,
  		.num_parents = 0,
- 		.ops = &scmi_clk_ops,
+ 		.ops = scmi_ops,
  		.name = sclk->info->name,
  	};
  
···
  static int scmi_clocks_probe(struct scmi_device *sdev)
  {
  	int idx, count, err;
+ 	unsigned int atomic_threshold;
+ 	bool is_atomic;
  	struct clk_hw **hws;
  	struct clk_hw_onecell_data *clk_data;
  	struct device *dev = &sdev->dev;
···
  	clk_data->num = count;
  	hws = clk_data->hws;
  
+ 	is_atomic = handle->is_transport_atomic(handle, &atomic_threshold);
+
  	for (idx = 0; idx < count; idx++) {
  		struct scmi_clk *sclk;
+ 		const struct clk_ops *scmi_ops;
  
  		sclk = devm_kzalloc(dev, sizeof(*sclk), GFP_KERNEL);
  		if (!sclk)
···
  		sclk->id = idx;
  		sclk->ph = ph;
  
- 		err = scmi_clk_ops_init(dev, sclk);
+ 		/*
+ 		 * Note that when transport is atomic but SCMI protocol did not
+ 		 * specify (or support) an enable_latency associated with a
+ 		 * clock, we default to use atomic operations mode.
+ 		 */
+ 		if (is_atomic &&
+ 		    sclk->info->enable_latency <= atomic_threshold)
+ 			scmi_ops = &scmi_atomic_clk_ops;
+ 		else
+ 			scmi_ops = &scmi_clk_ops;
+
+ 		err = scmi_clk_ops_init(dev, sclk, scmi_ops);
  		if (err) {
  			dev_err(dev, "failed to register clock %d\n", idx);
  			devm_kfree(dev, sclk);
  			hws[idx] = NULL;
  		} else {
- 			dev_dbg(dev, "Registered clock:%s\n", sclk->info->name);
+ 			dev_dbg(dev, "Registered clock:%s%s\n",
+ 				sclk->info->name,
+ 				scmi_ops == &scmi_atomic_clk_ops ?
+ 				" (atomic ops)" : "");
  			hws[idx] = &sclk->hw;
  		}
  	}
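The core decision in this patch is which of the two ops tables a clock gets: atomic ops only when the transport is atomic *and* the clock's enable latency fits under the instance's atomic threshold (with an unspecified latency of 0 defaulting to atomic, as the new comment notes). That selection logic, isolated as a free-standing sketch (not the driver itself):

```c
#include <stdbool.h>

/* The two ops tables, as in the driver: sleeping (prepare/unprepare)
 * vs. atomic (enable/disable). */
enum scmi_ops_kind { SCMI_OPS_SLEEPING, SCMI_OPS_ATOMIC };

/*
 * Mirrors the per-clock choice in scmi_clocks_probe(): atomic ops are
 * used only when the underlying transport handles SCMI commands
 * atomically and the clock's enable latency is within the threshold
 * reported by is_transport_atomic().
 */
static enum scmi_ops_kind pick_clk_ops(bool transport_is_atomic,
				       unsigned int enable_latency_us,
				       unsigned int atomic_threshold_us)
{
	if (transport_is_atomic && enable_latency_us <= atomic_threshold_us)
		return SCMI_OPS_ATOMIC;
	return SCMI_OPS_SLEEPING;
}
```

A clock that never reported an enable latency (so it reads as 0) on an atomic transport therefore gets atomic ops, while a slow clock on the same transport falls back to prepare/unprepare.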
+9
drivers/clk/samsung/Kconfig
···
  	select EXYNOS_5410_COMMON_CLK if ARM && SOC_EXYNOS5410
  	select EXYNOS_5420_COMMON_CLK if ARM && SOC_EXYNOS5420
  	select EXYNOS_ARM64_COMMON_CLK if ARM64 && ARCH_EXYNOS
+ 	select TESLA_FSD_COMMON_CLK if ARM64 && ARCH_TESLA_FSD
  
  config S3C64XX_COMMON_CLK
  	bool "Samsung S3C64xx clock controller support" if COMPILE_TEST
···
  	help
  	  Support for the clock controller present on the Samsung
  	  S3C2416/S3C2443 SoCs. Choose Y here only if you build for this SoC.
+
+ config TESLA_FSD_COMMON_CLK
+ 	bool "Tesla FSD clock controller support" if COMPILE_TEST
+ 	depends on COMMON_CLK_SAMSUNG
+ 	depends on EXYNOS_ARM64_COMMON_CLK
+ 	help
+ 	  Support for the clock controller present on the Tesla FSD SoC.
+ 	  Choose Y here only if you build for this SoC.
+1
drivers/clk/samsung/Makefile
···
  obj-$(CONFIG_S3C2443_COMMON_CLK)+= clk-s3c2443.o
  obj-$(CONFIG_S3C64XX_COMMON_CLK) += clk-s3c64xx.o
  obj-$(CONFIG_S5PV210_COMMON_CLK) += clk-s5pv210.o clk-s5pv210-audss.o
+ obj-$(CONFIG_TESLA_FSD_COMMON_CLK) += clk-fsd.o
+1803
drivers/clk/samsung/clk-fsd.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + /* 3 + * Copyright (c) 2017-2022 Samsung Electronics Co., Ltd. 4 + * https://www.samsung.com 5 + * Copyright (c) 2017-2022 Tesla, Inc. 6 + * https://www.tesla.com 7 + * 8 + * Common Clock Framework support for FSD SoC. 9 + */ 10 + 11 + #include <linux/clk.h> 12 + #include <linux/clk-provider.h> 13 + #include <linux/init.h> 14 + #include <linux/kernel.h> 15 + #include <linux/of.h> 16 + #include <linux/of_address.h> 17 + #include <linux/of_device.h> 18 + #include <linux/platform_device.h> 19 + 20 + #include <dt-bindings/clock/fsd-clk.h> 21 + 22 + #include "clk.h" 23 + #include "clk-exynos-arm64.h" 24 + 25 + /* Register Offset definitions for CMU_CMU (0x11c10000) */ 26 + #define PLL_LOCKTIME_PLL_SHARED0 0x0 27 + #define PLL_LOCKTIME_PLL_SHARED1 0x4 28 + #define PLL_LOCKTIME_PLL_SHARED2 0x8 29 + #define PLL_LOCKTIME_PLL_SHARED3 0xc 30 + #define PLL_CON0_PLL_SHARED0 0x100 31 + #define PLL_CON0_PLL_SHARED1 0x120 32 + #define PLL_CON0_PLL_SHARED2 0x140 33 + #define PLL_CON0_PLL_SHARED3 0x160 34 + #define MUX_CMU_CIS0_CLKMUX 0x1000 35 + #define MUX_CMU_CIS1_CLKMUX 0x1004 36 + #define MUX_CMU_CIS2_CLKMUX 0x1008 37 + #define MUX_CMU_CPUCL_SWITCHMUX 0x100c 38 + #define MUX_CMU_FSYS1_ACLK_MUX 0x1014 39 + #define MUX_PLL_SHARED0_MUX 0x1020 40 + #define MUX_PLL_SHARED1_MUX 0x1024 41 + #define DIV_CMU_CIS0_CLK 0x1800 42 + #define DIV_CMU_CIS1_CLK 0x1804 43 + #define DIV_CMU_CIS2_CLK 0x1808 44 + #define DIV_CMU_CMU_ACLK 0x180c 45 + #define DIV_CMU_CPUCL_SWITCH 0x1810 46 + #define DIV_CMU_FSYS0_SHARED0DIV4 0x181c 47 + #define DIV_CMU_FSYS0_SHARED1DIV3 0x1820 48 + #define DIV_CMU_FSYS0_SHARED1DIV4 0x1824 49 + #define DIV_CMU_FSYS1_SHARED0DIV4 0x1828 50 + #define DIV_CMU_FSYS1_SHARED0DIV8 0x182c 51 + #define DIV_CMU_IMEM_ACLK 0x1834 52 + #define DIV_CMU_IMEM_DMACLK 0x1838 53 + #define DIV_CMU_IMEM_TCUCLK 0x183c 54 + #define DIV_CMU_PERIC_SHARED0DIV20 0x1844 55 + #define DIV_CMU_PERIC_SHARED0DIV3_TBUCLK 0x1848 56 + #define 
DIV_CMU_PERIC_SHARED1DIV36 0x184c 57 + #define DIV_CMU_PERIC_SHARED1DIV4_DMACLK 0x1850 58 + #define DIV_PLL_SHARED0_DIV2 0x1858 59 + #define DIV_PLL_SHARED0_DIV3 0x185c 60 + #define DIV_PLL_SHARED0_DIV4 0x1860 61 + #define DIV_PLL_SHARED0_DIV6 0x1864 62 + #define DIV_PLL_SHARED1_DIV3 0x1868 63 + #define DIV_PLL_SHARED1_DIV36 0x186c 64 + #define DIV_PLL_SHARED1_DIV4 0x1870 65 + #define DIV_PLL_SHARED1_DIV9 0x1874 66 + #define GAT_CMU_CIS0_CLKGATE 0x2000 67 + #define GAT_CMU_CIS1_CLKGATE 0x2004 68 + #define GAT_CMU_CIS2_CLKGATE 0x2008 69 + #define GAT_CMU_CPUCL_SWITCH_GATE 0x200c 70 + #define GAT_CMU_FSYS0_SHARED0DIV4_GATE 0x2018 71 + #define GAT_CMU_FSYS0_SHARED1DIV4_CLK 0x201c 72 + #define GAT_CMU_FSYS0_SHARED1DIV4_GATE 0x2020 73 + #define GAT_CMU_FSYS1_SHARED0DIV4_GATE 0x2024 74 + #define GAT_CMU_FSYS1_SHARED1DIV4_GATE 0x2028 75 + #define GAT_CMU_IMEM_ACLK_GATE 0x2030 76 + #define GAT_CMU_IMEM_DMACLK_GATE 0x2034 77 + #define GAT_CMU_IMEM_TCUCLK_GATE 0x2038 78 + #define GAT_CMU_PERIC_SHARED0DIVE3_TBUCLK_GATE 0x2040 79 + #define GAT_CMU_PERIC_SHARED0DIVE4_GATE 0x2044 80 + #define GAT_CMU_PERIC_SHARED1DIV4_DMACLK_GATE 0x2048 81 + #define GAT_CMU_PERIC_SHARED1DIVE4_GATE 0x204c 82 + #define GAT_CMU_CMU_CMU_IPCLKPORT_PCLK 0x2054 83 + #define GAT_CMU_AXI2APB_CMU_IPCLKPORT_ACLK 0x2058 84 + #define GAT_CMU_NS_BRDG_CMU_IPCLKPORT_CLK__PSOC_CMU__CLK_CMU 0x205c 85 + #define GAT_CMU_SYSREG_CMU_IPCLKPORT_PCLK 0x2060 86 + 87 + static const unsigned long cmu_clk_regs[] __initconst = { 88 + PLL_LOCKTIME_PLL_SHARED0, 89 + PLL_LOCKTIME_PLL_SHARED1, 90 + PLL_LOCKTIME_PLL_SHARED2, 91 + PLL_LOCKTIME_PLL_SHARED3, 92 + PLL_CON0_PLL_SHARED0, 93 + PLL_CON0_PLL_SHARED1, 94 + PLL_CON0_PLL_SHARED2, 95 + PLL_CON0_PLL_SHARED3, 96 + MUX_CMU_CIS0_CLKMUX, 97 + MUX_CMU_CIS1_CLKMUX, 98 + MUX_CMU_CIS2_CLKMUX, 99 + MUX_CMU_CPUCL_SWITCHMUX, 100 + MUX_CMU_FSYS1_ACLK_MUX, 101 + MUX_PLL_SHARED0_MUX, 102 + MUX_PLL_SHARED1_MUX, 103 + DIV_CMU_CIS0_CLK, 104 + DIV_CMU_CIS1_CLK, 105 + DIV_CMU_CIS2_CLK, 106 + 
DIV_CMU_CMU_ACLK, 107 + DIV_CMU_CPUCL_SWITCH, 108 + DIV_CMU_FSYS0_SHARED0DIV4, 109 + DIV_CMU_FSYS0_SHARED1DIV3, 110 + DIV_CMU_FSYS0_SHARED1DIV4, 111 + DIV_CMU_FSYS1_SHARED0DIV4, 112 + DIV_CMU_FSYS1_SHARED0DIV8, 113 + DIV_CMU_IMEM_ACLK, 114 + DIV_CMU_IMEM_DMACLK, 115 + DIV_CMU_IMEM_TCUCLK, 116 + DIV_CMU_PERIC_SHARED0DIV20, 117 + DIV_CMU_PERIC_SHARED0DIV3_TBUCLK, 118 + DIV_CMU_PERIC_SHARED1DIV36, 119 + DIV_CMU_PERIC_SHARED1DIV4_DMACLK, 120 + DIV_PLL_SHARED0_DIV2, 121 + DIV_PLL_SHARED0_DIV3, 122 + DIV_PLL_SHARED0_DIV4, 123 + DIV_PLL_SHARED0_DIV6, 124 + DIV_PLL_SHARED1_DIV3, 125 + DIV_PLL_SHARED1_DIV36, 126 + DIV_PLL_SHARED1_DIV4, 127 + DIV_PLL_SHARED1_DIV9, 128 + GAT_CMU_CIS0_CLKGATE, 129 + GAT_CMU_CIS1_CLKGATE, 130 + GAT_CMU_CIS2_CLKGATE, 131 + GAT_CMU_CPUCL_SWITCH_GATE, 132 + GAT_CMU_FSYS0_SHARED0DIV4_GATE, 133 + GAT_CMU_FSYS0_SHARED1DIV4_CLK, 134 + GAT_CMU_FSYS0_SHARED1DIV4_GATE, 135 + GAT_CMU_FSYS1_SHARED0DIV4_GATE, 136 + GAT_CMU_FSYS1_SHARED1DIV4_GATE, 137 + GAT_CMU_IMEM_ACLK_GATE, 138 + GAT_CMU_IMEM_DMACLK_GATE, 139 + GAT_CMU_IMEM_TCUCLK_GATE, 140 + GAT_CMU_PERIC_SHARED0DIVE3_TBUCLK_GATE, 141 + GAT_CMU_PERIC_SHARED0DIVE4_GATE, 142 + GAT_CMU_PERIC_SHARED1DIV4_DMACLK_GATE, 143 + GAT_CMU_PERIC_SHARED1DIVE4_GATE, 144 + GAT_CMU_CMU_CMU_IPCLKPORT_PCLK, 145 + GAT_CMU_AXI2APB_CMU_IPCLKPORT_ACLK, 146 + GAT_CMU_NS_BRDG_CMU_IPCLKPORT_CLK__PSOC_CMU__CLK_CMU, 147 + GAT_CMU_SYSREG_CMU_IPCLKPORT_PCLK, 148 + }; 149 + 150 + static const struct samsung_pll_rate_table pll_shared0_rate_table[] __initconst = { 151 + PLL_35XX_RATE(24 * MHZ, 2000000000U, 250, 3, 0), 152 + }; 153 + 154 + static const struct samsung_pll_rate_table pll_shared1_rate_table[] __initconst = { 155 + PLL_35XX_RATE(24 * MHZ, 2400000000U, 200, 2, 0), 156 + }; 157 + 158 + static const struct samsung_pll_rate_table pll_shared2_rate_table[] __initconst = { 159 + PLL_35XX_RATE(24 * MHZ, 2400000000U, 200, 2, 0), 160 + }; 161 + 162 + static const struct samsung_pll_rate_table pll_shared3_rate_table[] __initconst = { 
	PLL_35XX_RATE(24 * MHZ, 1800000000U, 150, 2, 0),
};

static const struct samsung_pll_clock cmu_pll_clks[] __initconst = {
	PLL(pll_142xx, 0, "fout_pll_shared0", "fin_pll", PLL_LOCKTIME_PLL_SHARED0,
	    PLL_CON0_PLL_SHARED0, pll_shared0_rate_table),
	PLL(pll_142xx, 0, "fout_pll_shared1", "fin_pll", PLL_LOCKTIME_PLL_SHARED1,
	    PLL_CON0_PLL_SHARED1, pll_shared1_rate_table),
	PLL(pll_142xx, 0, "fout_pll_shared2", "fin_pll", PLL_LOCKTIME_PLL_SHARED2,
	    PLL_CON0_PLL_SHARED2, pll_shared2_rate_table),
	PLL(pll_142xx, 0, "fout_pll_shared3", "fin_pll", PLL_LOCKTIME_PLL_SHARED3,
	    PLL_CON0_PLL_SHARED3, pll_shared3_rate_table),
};

/* List of parent clocks for Muxes in CMU_CMU */
PNAME(mout_cmu_shared0_pll_p) = { "fin_pll", "fout_pll_shared0" };
PNAME(mout_cmu_shared1_pll_p) = { "fin_pll", "fout_pll_shared1" };
PNAME(mout_cmu_shared2_pll_p) = { "fin_pll", "fout_pll_shared2" };
PNAME(mout_cmu_shared3_pll_p) = { "fin_pll", "fout_pll_shared3" };
PNAME(mout_cmu_cis0_clkmux_p) = { "fin_pll", "dout_cmu_pll_shared0_div4" };
PNAME(mout_cmu_cis1_clkmux_p) = { "fin_pll", "dout_cmu_pll_shared0_div4" };
PNAME(mout_cmu_cis2_clkmux_p) = { "fin_pll", "dout_cmu_pll_shared0_div4" };
PNAME(mout_cmu_cpucl_switchmux_p) = { "mout_cmu_pll_shared2", "mout_cmu_pll_shared0_mux" };
PNAME(mout_cmu_fsys1_aclk_mux_p) = { "dout_cmu_pll_shared0_div4", "fin_pll" };
PNAME(mout_cmu_pll_shared0_mux_p) = { "fin_pll", "mout_cmu_pll_shared0" };
PNAME(mout_cmu_pll_shared1_mux_p) = { "fin_pll", "mout_cmu_pll_shared1" };

static const struct samsung_mux_clock cmu_mux_clks[] __initconst = {
	MUX(0, "mout_cmu_pll_shared0", mout_cmu_shared0_pll_p, PLL_CON0_PLL_SHARED0, 4, 1),
	MUX(0, "mout_cmu_pll_shared1", mout_cmu_shared1_pll_p, PLL_CON0_PLL_SHARED1, 4, 1),
	MUX(0, "mout_cmu_pll_shared2", mout_cmu_shared2_pll_p, PLL_CON0_PLL_SHARED2, 4, 1),
	MUX(0, "mout_cmu_pll_shared3", mout_cmu_shared3_pll_p, PLL_CON0_PLL_SHARED3, 4, 1),
	MUX(0, "mout_cmu_cis0_clkmux", mout_cmu_cis0_clkmux_p, MUX_CMU_CIS0_CLKMUX, 0, 1),
	MUX(0, "mout_cmu_cis1_clkmux", mout_cmu_cis1_clkmux_p, MUX_CMU_CIS1_CLKMUX, 0, 1),
	MUX(0, "mout_cmu_cis2_clkmux", mout_cmu_cis2_clkmux_p, MUX_CMU_CIS2_CLKMUX, 0, 1),
	MUX(0, "mout_cmu_cpucl_switchmux", mout_cmu_cpucl_switchmux_p,
	    MUX_CMU_CPUCL_SWITCHMUX, 0, 1),
	MUX(0, "mout_cmu_fsys1_aclk_mux", mout_cmu_fsys1_aclk_mux_p, MUX_CMU_FSYS1_ACLK_MUX, 0, 1),
	MUX(0, "mout_cmu_pll_shared0_mux", mout_cmu_pll_shared0_mux_p, MUX_PLL_SHARED0_MUX, 0, 1),
	MUX(0, "mout_cmu_pll_shared1_mux", mout_cmu_pll_shared1_mux_p, MUX_PLL_SHARED1_MUX, 0, 1),
};

static const struct samsung_div_clock cmu_div_clks[] __initconst = {
	DIV(0, "dout_cmu_cis0_clk", "cmu_cis0_clkgate", DIV_CMU_CIS0_CLK, 0, 4),
	DIV(0, "dout_cmu_cis1_clk", "cmu_cis1_clkgate", DIV_CMU_CIS1_CLK, 0, 4),
	DIV(0, "dout_cmu_cis2_clk", "cmu_cis2_clkgate", DIV_CMU_CIS2_CLK, 0, 4),
	DIV(0, "dout_cmu_cmu_aclk", "dout_cmu_pll_shared1_div9", DIV_CMU_CMU_ACLK, 0, 4),
	DIV(0, "dout_cmu_cpucl_switch", "cmu_cpucl_switch_gate", DIV_CMU_CPUCL_SWITCH, 0, 4),
	DIV(DOUT_CMU_FSYS0_SHARED0DIV4, "dout_cmu_fsys0_shared0div4", "cmu_fsys0_shared0div4_gate",
	    DIV_CMU_FSYS0_SHARED0DIV4, 0, 4),
	DIV(0, "dout_cmu_fsys0_shared1div3", "cmu_fsys0_shared1div4_clk",
	    DIV_CMU_FSYS0_SHARED1DIV3, 0, 4),
	DIV(DOUT_CMU_FSYS0_SHARED1DIV4, "dout_cmu_fsys0_shared1div4", "cmu_fsys0_shared1div4_gate",
	    DIV_CMU_FSYS0_SHARED1DIV4, 0, 4),
	DIV(DOUT_CMU_FSYS1_SHARED0DIV4, "dout_cmu_fsys1_shared0div4", "cmu_fsys1_shared0div4_gate",
	    DIV_CMU_FSYS1_SHARED0DIV4, 0, 4),
	DIV(DOUT_CMU_FSYS1_SHARED0DIV8, "dout_cmu_fsys1_shared0div8", "cmu_fsys1_shared1div4_gate",
	    DIV_CMU_FSYS1_SHARED0DIV8, 0, 4),
	DIV(DOUT_CMU_IMEM_ACLK, "dout_cmu_imem_aclk", "cmu_imem_aclk_gate",
	    DIV_CMU_IMEM_ACLK, 0, 4),
	DIV(DOUT_CMU_IMEM_DMACLK, "dout_cmu_imem_dmaclk", "cmu_imem_dmaclk_gate",
	    DIV_CMU_IMEM_DMACLK, 0, 4),
	DIV(DOUT_CMU_IMEM_TCUCLK, "dout_cmu_imem_tcuclk", "cmu_imem_tcuclk_gate",
	    DIV_CMU_IMEM_TCUCLK, 0, 4),
	DIV(DOUT_CMU_PERIC_SHARED0DIV20, "dout_cmu_peric_shared0div20",
	    "cmu_peric_shared0dive4_gate", DIV_CMU_PERIC_SHARED0DIV20, 0, 4),
	DIV(DOUT_CMU_PERIC_SHARED0DIV3_TBUCLK, "dout_cmu_peric_shared0div3_tbuclk",
	    "cmu_peric_shared0dive3_tbuclk_gate", DIV_CMU_PERIC_SHARED0DIV3_TBUCLK, 0, 4),
	DIV(DOUT_CMU_PERIC_SHARED1DIV36, "dout_cmu_peric_shared1div36",
	    "cmu_peric_shared1dive4_gate", DIV_CMU_PERIC_SHARED1DIV36, 0, 4),
	DIV(DOUT_CMU_PERIC_SHARED1DIV4_DMACLK, "dout_cmu_peric_shared1div4_dmaclk",
	    "cmu_peric_shared1div4_dmaclk_gate", DIV_CMU_PERIC_SHARED1DIV4_DMACLK, 0, 4),
	DIV(0, "dout_cmu_pll_shared0_div2", "mout_cmu_pll_shared0_mux",
	    DIV_PLL_SHARED0_DIV2, 0, 4),
	DIV(0, "dout_cmu_pll_shared0_div3", "mout_cmu_pll_shared0_mux",
	    DIV_PLL_SHARED0_DIV3, 0, 4),
	DIV(DOUT_CMU_PLL_SHARED0_DIV4, "dout_cmu_pll_shared0_div4", "dout_cmu_pll_shared0_div2",
	    DIV_PLL_SHARED0_DIV4, 0, 4),
	DIV(DOUT_CMU_PLL_SHARED0_DIV6, "dout_cmu_pll_shared0_div6", "dout_cmu_pll_shared0_div3",
	    DIV_PLL_SHARED0_DIV6, 0, 4),
	DIV(0, "dout_cmu_pll_shared1_div3", "mout_cmu_pll_shared1_mux",
	    DIV_PLL_SHARED1_DIV3, 0, 4),
	DIV(0, "dout_cmu_pll_shared1_div36", "dout_cmu_pll_shared1_div9",
	    DIV_PLL_SHARED1_DIV36, 0, 4),
	DIV(0, "dout_cmu_pll_shared1_div4", "mout_cmu_pll_shared1_mux",
	    DIV_PLL_SHARED1_DIV4, 0, 4),
	DIV(0, "dout_cmu_pll_shared1_div9", "dout_cmu_pll_shared1_div3",
	    DIV_PLL_SHARED1_DIV9, 0, 4),
};

static const struct samsung_gate_clock cmu_gate_clks[] __initconst = {
	GATE(0, "cmu_cis0_clkgate", "mout_cmu_cis0_clkmux", GAT_CMU_CIS0_CLKGATE, 21,
	     CLK_IGNORE_UNUSED, 0),
	GATE(0, "cmu_cis1_clkgate", "mout_cmu_cis1_clkmux", GAT_CMU_CIS1_CLKGATE, 21,
	     CLK_IGNORE_UNUSED, 0),
	GATE(0, "cmu_cis2_clkgate", "mout_cmu_cis2_clkmux", GAT_CMU_CIS2_CLKGATE, 21,
	     CLK_IGNORE_UNUSED, 0),
	GATE(CMU_CPUCL_SWITCH_GATE, "cmu_cpucl_switch_gate", "mout_cmu_cpucl_switchmux",
	     GAT_CMU_CPUCL_SWITCH_GATE, 21, CLK_IGNORE_UNUSED, 0),
	GATE(GAT_CMU_FSYS0_SHARED0DIV4, "cmu_fsys0_shared0div4_gate", "dout_cmu_pll_shared0_div4",
	     GAT_CMU_FSYS0_SHARED0DIV4_GATE, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "cmu_fsys0_shared1div4_clk", "dout_cmu_pll_shared1_div3",
	     GAT_CMU_FSYS0_SHARED1DIV4_CLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "cmu_fsys0_shared1div4_gate", "dout_cmu_pll_shared1_div4",
	     GAT_CMU_FSYS0_SHARED1DIV4_GATE, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "cmu_fsys1_shared0div4_gate", "mout_cmu_fsys1_aclk_mux",
	     GAT_CMU_FSYS1_SHARED0DIV4_GATE, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "cmu_fsys1_shared1div4_gate", "dout_cmu_fsys1_shared0div4",
	     GAT_CMU_FSYS1_SHARED1DIV4_GATE, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "cmu_imem_aclk_gate", "dout_cmu_pll_shared1_div9", GAT_CMU_IMEM_ACLK_GATE, 21,
	     CLK_IGNORE_UNUSED, 0),
	GATE(0, "cmu_imem_dmaclk_gate", "mout_cmu_pll_shared1_mux", GAT_CMU_IMEM_DMACLK_GATE, 21,
	     CLK_IGNORE_UNUSED, 0),
	GATE(0, "cmu_imem_tcuclk_gate", "dout_cmu_pll_shared0_div3", GAT_CMU_IMEM_TCUCLK_GATE, 21,
	     CLK_IGNORE_UNUSED, 0),
	GATE(0, "cmu_peric_shared0dive3_tbuclk_gate", "dout_cmu_pll_shared0_div3",
	     GAT_CMU_PERIC_SHARED0DIVE3_TBUCLK_GATE, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "cmu_peric_shared0dive4_gate", "dout_cmu_pll_shared0_div4",
	     GAT_CMU_PERIC_SHARED0DIVE4_GATE, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "cmu_peric_shared1div4_dmaclk_gate", "dout_cmu_pll_shared1_div4",
	     GAT_CMU_PERIC_SHARED1DIV4_DMACLK_GATE, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "cmu_peric_shared1dive4_gate", "dout_cmu_pll_shared1_div36",
	     GAT_CMU_PERIC_SHARED1DIVE4_GATE, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "cmu_uid_cmu_cmu_cmu_ipclkport_pclk", "dout_cmu_cmu_aclk",
	     GAT_CMU_CMU_CMU_IPCLKPORT_PCLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "cmu_uid_axi2apb_cmu_ipclkport_aclk", "dout_cmu_cmu_aclk",
	     GAT_CMU_AXI2APB_CMU_IPCLKPORT_ACLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "cmu_uid_ns_brdg_cmu_ipclkport_clk__psoc_cmu__clk_cmu", "dout_cmu_cmu_aclk",
	     GAT_CMU_NS_BRDG_CMU_IPCLKPORT_CLK__PSOC_CMU__CLK_CMU, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "cmu_uid_sysreg_cmu_ipclkport_pclk", "dout_cmu_cmu_aclk",
	     GAT_CMU_SYSREG_CMU_IPCLKPORT_PCLK, 21, CLK_IGNORE_UNUSED, 0),
};

static const struct samsung_cmu_info cmu_cmu_info __initconst = {
	.pll_clks = cmu_pll_clks,
	.nr_pll_clks = ARRAY_SIZE(cmu_pll_clks),
	.mux_clks = cmu_mux_clks,
	.nr_mux_clks = ARRAY_SIZE(cmu_mux_clks),
	.div_clks = cmu_div_clks,
	.nr_div_clks = ARRAY_SIZE(cmu_div_clks),
	.gate_clks = cmu_gate_clks,
	.nr_gate_clks = ARRAY_SIZE(cmu_gate_clks),
	.nr_clk_ids = CMU_NR_CLK,
	.clk_regs = cmu_clk_regs,
	.nr_clk_regs = ARRAY_SIZE(cmu_clk_regs),
};

static void __init fsd_clk_cmu_init(struct device_node *np)
{
	samsung_cmu_register_one(np, &cmu_cmu_info);
}

CLK_OF_DECLARE(fsd_clk_cmu, "tesla,fsd-clock-cmu", fsd_clk_cmu_init);

/* Register Offset definitions for CMU_PERIC (0x14010000) */
#define PLL_CON0_PERIC_DMACLK_MUX	0x100
#define PLL_CON0_PERIC_EQOS_BUSCLK_MUX	0x120
#define PLL_CON0_PERIC_PCLK_MUX	0x140
#define PLL_CON0_PERIC_TBUCLK_MUX	0x160
#define PLL_CON0_SPI_CLK	0x180
#define PLL_CON0_SPI_PCLK	0x1a0
#define PLL_CON0_UART_CLK	0x1c0
#define PLL_CON0_UART_PCLK	0x1e0
#define MUX_PERIC_EQOS_PHYRXCLK	0x1000
#define DIV_EQOS_BUSCLK	0x1800
#define DIV_PERIC_MCAN_CLK	0x1804
#define DIV_RGMII_CLK	0x1808
#define DIV_RII_CLK	0x180c
#define DIV_RMII_CLK	0x1810
#define DIV_SPI_CLK	0x1814
#define DIV_UART_CLK	0x1818
#define GAT_EQOS_TOP_IPCLKPORT_CLK_PTP_REF_I	0x2000
#define GAT_GPIO_PERIC_IPCLKPORT_OSCCLK	0x2004
#define GAT_PERIC_ADC0_IPCLKPORT_I_OSCCLK	0x2008
#define GAT_PERIC_CMU_PERIC_IPCLKPORT_PCLK	0x200c
#define GAT_PERIC_PWM0_IPCLKPORT_I_OSCCLK	0x2010
#define GAT_PERIC_PWM1_IPCLKPORT_I_OSCCLK	0x2014
#define GAT_ASYNC_APB_DMA0_IPCLKPORT_PCLKM	0x2018
#define GAT_ASYNC_APB_DMA0_IPCLKPORT_PCLKS	0x201c
#define GAT_ASYNC_APB_DMA1_IPCLKPORT_PCLKM	0x2020
#define GAT_ASYNC_APB_DMA1_IPCLKPORT_PCLKS	0x2024
#define GAT_AXI2APB_PERIC0_IPCLKPORT_ACLK	0x2028
#define GAT_AXI2APB_PERIC1_IPCLKPORT_ACLK	0x202c
#define GAT_AXI2APB_PERIC2_IPCLKPORT_ACLK	0x2030
#define GAT_BUS_D_PERIC_IPCLKPORT_DMACLK	0x2034
#define GAT_BUS_D_PERIC_IPCLKPORT_EQOSCLK	0x2038
#define GAT_BUS_D_PERIC_IPCLKPORT_MAINCLK	0x203c
#define GAT_BUS_P_PERIC_IPCLKPORT_EQOSCLK	0x2040
#define GAT_BUS_P_PERIC_IPCLKPORT_MAINCLK	0x2044
#define GAT_BUS_P_PERIC_IPCLKPORT_SMMUCLK	0x2048
#define GAT_EQOS_TOP_IPCLKPORT_ACLK_I	0x204c
#define GAT_EQOS_TOP_IPCLKPORT_CLK_RX_I	0x2050
#define GAT_EQOS_TOP_IPCLKPORT_HCLK_I	0x2054
#define GAT_EQOS_TOP_IPCLKPORT_RGMII_CLK_I	0x2058
#define GAT_EQOS_TOP_IPCLKPORT_RII_CLK_I	0x205c
#define GAT_EQOS_TOP_IPCLKPORT_RMII_CLK_I	0x2060
#define GAT_GPIO_PERIC_IPCLKPORT_PCLK	0x2064
#define GAT_NS_BRDG_PERIC_IPCLKPORT_CLK__PSOC_PERIC__CLK_PERIC_D	0x2068
#define GAT_NS_BRDG_PERIC_IPCLKPORT_CLK__PSOC_PERIC__CLK_PERIC_P	0x206c
#define GAT_PERIC_ADC0_IPCLKPORT_PCLK_S0	0x2070
#define GAT_PERIC_DMA0_IPCLKPORT_ACLK	0x2074
#define GAT_PERIC_DMA1_IPCLKPORT_ACLK	0x2078
#define GAT_PERIC_I2C0_IPCLKPORT_I_PCLK	0x207c
#define GAT_PERIC_I2C1_IPCLKPORT_I_PCLK	0x2080
#define GAT_PERIC_I2C2_IPCLKPORT_I_PCLK	0x2084
#define GAT_PERIC_I2C3_IPCLKPORT_I_PCLK	0x2088
#define GAT_PERIC_I2C4_IPCLKPORT_I_PCLK	0x208c
#define GAT_PERIC_I2C5_IPCLKPORT_I_PCLK	0x2090
#define GAT_PERIC_I2C6_IPCLKPORT_I_PCLK	0x2094
#define GAT_PERIC_I2C7_IPCLKPORT_I_PCLK	0x2098
#define GAT_PERIC_MCAN0_IPCLKPORT_CCLK	0x209c
#define GAT_PERIC_MCAN0_IPCLKPORT_PCLK	0x20a0
#define GAT_PERIC_MCAN1_IPCLKPORT_CCLK	0x20a4
#define GAT_PERIC_MCAN1_IPCLKPORT_PCLK	0x20a8
#define GAT_PERIC_MCAN2_IPCLKPORT_CCLK	0x20ac
#define GAT_PERIC_MCAN2_IPCLKPORT_PCLK	0x20b0
#define GAT_PERIC_MCAN3_IPCLKPORT_CCLK	0x20b4
#define GAT_PERIC_MCAN3_IPCLKPORT_PCLK	0x20b8
#define GAT_PERIC_PWM0_IPCLKPORT_I_PCLK_S0	0x20bc
#define GAT_PERIC_PWM1_IPCLKPORT_I_PCLK_S0	0x20c0
#define GAT_PERIC_SMMU_IPCLKPORT_CCLK	0x20c4
#define GAT_PERIC_SMMU_IPCLKPORT_PERIC_BCLK	0x20c8
#define GAT_PERIC_SPI0_IPCLKPORT_I_PCLK	0x20cc
#define GAT_PERIC_SPI0_IPCLKPORT_I_SCLK_SPI	0x20d0
#define GAT_PERIC_SPI1_IPCLKPORT_I_PCLK	0x20d4
#define GAT_PERIC_SPI1_IPCLKPORT_I_SCLK_SPI	0x20d8
#define GAT_PERIC_SPI2_IPCLKPORT_I_PCLK	0x20dc
#define GAT_PERIC_SPI2_IPCLKPORT_I_SCLK_SPI	0x20e0
#define GAT_PERIC_TDM0_IPCLKPORT_HCLK_M	0x20e4
#define GAT_PERIC_TDM0_IPCLKPORT_PCLK	0x20e8
#define GAT_PERIC_TDM1_IPCLKPORT_HCLK_M	0x20ec
#define GAT_PERIC_TDM1_IPCLKPORT_PCLK	0x20f0
#define GAT_PERIC_UART0_IPCLKPORT_I_SCLK_UART	0x20f4
#define GAT_PERIC_UART0_IPCLKPORT_PCLK	0x20f8
#define GAT_PERIC_UART1_IPCLKPORT_I_SCLK_UART	0x20fc
#define GAT_PERIC_UART1_IPCLKPORT_PCLK	0x2100
#define GAT_SYSREG_PERI_IPCLKPORT_PCLK	0x2104

static const unsigned long peric_clk_regs[] __initconst = {
	PLL_CON0_PERIC_DMACLK_MUX,
	PLL_CON0_PERIC_EQOS_BUSCLK_MUX,
	PLL_CON0_PERIC_PCLK_MUX,
	PLL_CON0_PERIC_TBUCLK_MUX,
	PLL_CON0_SPI_CLK,
	PLL_CON0_SPI_PCLK,
	PLL_CON0_UART_CLK,
	PLL_CON0_UART_PCLK,
	MUX_PERIC_EQOS_PHYRXCLK,
	DIV_EQOS_BUSCLK,
	DIV_PERIC_MCAN_CLK,
	DIV_RGMII_CLK,
	DIV_RII_CLK,
	DIV_RMII_CLK,
	DIV_SPI_CLK,
	DIV_UART_CLK,
	GAT_EQOS_TOP_IPCLKPORT_CLK_PTP_REF_I,
	GAT_GPIO_PERIC_IPCLKPORT_OSCCLK,
	GAT_PERIC_ADC0_IPCLKPORT_I_OSCCLK,
	GAT_PERIC_CMU_PERIC_IPCLKPORT_PCLK,
	GAT_PERIC_PWM0_IPCLKPORT_I_OSCCLK,
	GAT_PERIC_PWM1_IPCLKPORT_I_OSCCLK,
	GAT_ASYNC_APB_DMA0_IPCLKPORT_PCLKM,
	GAT_ASYNC_APB_DMA0_IPCLKPORT_PCLKS,
	GAT_ASYNC_APB_DMA1_IPCLKPORT_PCLKM,
	GAT_ASYNC_APB_DMA1_IPCLKPORT_PCLKS,
	GAT_AXI2APB_PERIC0_IPCLKPORT_ACLK,
	GAT_AXI2APB_PERIC1_IPCLKPORT_ACLK,
	GAT_AXI2APB_PERIC2_IPCLKPORT_ACLK,
	GAT_BUS_D_PERIC_IPCLKPORT_DMACLK,
	GAT_BUS_D_PERIC_IPCLKPORT_EQOSCLK,
	GAT_BUS_D_PERIC_IPCLKPORT_MAINCLK,
	GAT_BUS_P_PERIC_IPCLKPORT_EQOSCLK,
	GAT_BUS_P_PERIC_IPCLKPORT_MAINCLK,
	GAT_BUS_P_PERIC_IPCLKPORT_SMMUCLK,
	GAT_EQOS_TOP_IPCLKPORT_ACLK_I,
	GAT_EQOS_TOP_IPCLKPORT_CLK_RX_I,
	GAT_EQOS_TOP_IPCLKPORT_HCLK_I,
	GAT_EQOS_TOP_IPCLKPORT_RGMII_CLK_I,
	GAT_EQOS_TOP_IPCLKPORT_RII_CLK_I,
	GAT_EQOS_TOP_IPCLKPORT_RMII_CLK_I,
	GAT_GPIO_PERIC_IPCLKPORT_PCLK,
	GAT_NS_BRDG_PERIC_IPCLKPORT_CLK__PSOC_PERIC__CLK_PERIC_D,
	GAT_NS_BRDG_PERIC_IPCLKPORT_CLK__PSOC_PERIC__CLK_PERIC_P,
	GAT_PERIC_ADC0_IPCLKPORT_PCLK_S0,
	GAT_PERIC_DMA0_IPCLKPORT_ACLK,
	GAT_PERIC_DMA1_IPCLKPORT_ACLK,
	GAT_PERIC_I2C0_IPCLKPORT_I_PCLK,
	GAT_PERIC_I2C1_IPCLKPORT_I_PCLK,
	GAT_PERIC_I2C2_IPCLKPORT_I_PCLK,
	GAT_PERIC_I2C3_IPCLKPORT_I_PCLK,
	GAT_PERIC_I2C4_IPCLKPORT_I_PCLK,
	GAT_PERIC_I2C5_IPCLKPORT_I_PCLK,
	GAT_PERIC_I2C6_IPCLKPORT_I_PCLK,
	GAT_PERIC_I2C7_IPCLKPORT_I_PCLK,
	GAT_PERIC_MCAN0_IPCLKPORT_CCLK,
	GAT_PERIC_MCAN0_IPCLKPORT_PCLK,
	GAT_PERIC_MCAN1_IPCLKPORT_CCLK,
	GAT_PERIC_MCAN1_IPCLKPORT_PCLK,
	GAT_PERIC_MCAN2_IPCLKPORT_CCLK,
	GAT_PERIC_MCAN2_IPCLKPORT_PCLK,
	GAT_PERIC_MCAN3_IPCLKPORT_CCLK,
	GAT_PERIC_MCAN3_IPCLKPORT_PCLK,
	GAT_PERIC_PWM0_IPCLKPORT_I_PCLK_S0,
	GAT_PERIC_PWM1_IPCLKPORT_I_PCLK_S0,
	GAT_PERIC_SMMU_IPCLKPORT_CCLK,
	GAT_PERIC_SMMU_IPCLKPORT_PERIC_BCLK,
	GAT_PERIC_SPI0_IPCLKPORT_I_PCLK,
	GAT_PERIC_SPI0_IPCLKPORT_I_SCLK_SPI,
	GAT_PERIC_SPI1_IPCLKPORT_I_PCLK,
	GAT_PERIC_SPI1_IPCLKPORT_I_SCLK_SPI,
	GAT_PERIC_SPI2_IPCLKPORT_I_PCLK,
	GAT_PERIC_SPI2_IPCLKPORT_I_SCLK_SPI,
	GAT_PERIC_TDM0_IPCLKPORT_HCLK_M,
	GAT_PERIC_TDM0_IPCLKPORT_PCLK,
	GAT_PERIC_TDM1_IPCLKPORT_HCLK_M,
	GAT_PERIC_TDM1_IPCLKPORT_PCLK,
	GAT_PERIC_UART0_IPCLKPORT_I_SCLK_UART,
	GAT_PERIC_UART0_IPCLKPORT_PCLK,
	GAT_PERIC_UART1_IPCLKPORT_I_SCLK_UART,
	GAT_PERIC_UART1_IPCLKPORT_PCLK,
	GAT_SYSREG_PERI_IPCLKPORT_PCLK,
};

static const struct samsung_fixed_rate_clock peric_fixed_clks[] __initconst = {
	FRATE(PERIC_EQOS_PHYRXCLK, "eqos_phyrxclk", NULL, 0, 125000000),
};

/* List of parent clocks for Muxes in CMU_PERIC */
PNAME(mout_peric_dmaclk_p) = { "fin_pll", "cmu_peric_shared1div4_dmaclk_gate" };
PNAME(mout_peric_eqos_busclk_p) = { "fin_pll", "dout_cmu_pll_shared0_div4" };
PNAME(mout_peric_pclk_p) = { "fin_pll", "dout_cmu_peric_shared1div36" };
PNAME(mout_peric_tbuclk_p) = { "fin_pll", "dout_cmu_peric_shared0div3_tbuclk" };
PNAME(mout_peric_spi_clk_p) = { "fin_pll", "dout_cmu_peric_shared0div20" };
PNAME(mout_peric_spi_pclk_p) = { "fin_pll", "dout_cmu_peric_shared1div36" };
PNAME(mout_peric_uart_clk_p) = { "fin_pll", "dout_cmu_peric_shared1div4_dmaclk" };
PNAME(mout_peric_uart_pclk_p) = { "fin_pll", "dout_cmu_peric_shared1div36" };
PNAME(mout_peric_eqos_phyrxclk_p) = { "dout_peric_rgmii_clk", "eqos_phyrxclk" };

static const struct samsung_mux_clock peric_mux_clks[] __initconst = {
	MUX(0, "mout_peric_dmaclk", mout_peric_dmaclk_p, PLL_CON0_PERIC_DMACLK_MUX, 4, 1),
	MUX(0, "mout_peric_eqos_busclk", mout_peric_eqos_busclk_p,
	    PLL_CON0_PERIC_EQOS_BUSCLK_MUX, 4, 1),
	MUX(0, "mout_peric_pclk", mout_peric_pclk_p, PLL_CON0_PERIC_PCLK_MUX, 4, 1),
	MUX(0, "mout_peric_tbuclk", mout_peric_tbuclk_p, PLL_CON0_PERIC_TBUCLK_MUX, 4, 1),
	MUX(0, "mout_peric_spi_clk", mout_peric_spi_clk_p, PLL_CON0_SPI_CLK, 4, 1),
	MUX(0, "mout_peric_spi_pclk", mout_peric_spi_pclk_p, PLL_CON0_SPI_PCLK, 4, 1),
	MUX(0, "mout_peric_uart_clk", mout_peric_uart_clk_p, PLL_CON0_UART_CLK, 4, 1),
	MUX(0, "mout_peric_uart_pclk", mout_peric_uart_pclk_p, PLL_CON0_UART_PCLK, 4, 1),
	MUX(PERIC_EQOS_PHYRXCLK_MUX, "mout_peric_eqos_phyrxclk", mout_peric_eqos_phyrxclk_p,
	    MUX_PERIC_EQOS_PHYRXCLK, 0, 1),
};

static const struct samsung_div_clock peric_div_clks[] __initconst = {
	DIV(0, "dout_peric_eqos_busclk", "mout_peric_eqos_busclk", DIV_EQOS_BUSCLK, 0, 4),
	DIV(0, "dout_peric_mcan_clk", "mout_peric_dmaclk", DIV_PERIC_MCAN_CLK, 0, 4),
	DIV(PERIC_DOUT_RGMII_CLK, "dout_peric_rgmii_clk", "mout_peric_eqos_busclk",
	    DIV_RGMII_CLK, 0, 4),
	DIV(0, "dout_peric_rii_clk", "dout_peric_rmii_clk", DIV_RII_CLK, 0, 4),
	DIV(0, "dout_peric_rmii_clk", "dout_peric_rgmii_clk", DIV_RMII_CLK, 0, 4),
	DIV(0, "dout_peric_spi_clk", "mout_peric_spi_clk", DIV_SPI_CLK, 0, 6),
	DIV(0, "dout_peric_uart_clk", "mout_peric_uart_clk", DIV_UART_CLK, 0, 6),
};

static const struct samsung_gate_clock peric_gate_clks[] __initconst = {
	GATE(PERIC_EQOS_TOP_IPCLKPORT_CLK_PTP_REF_I, "peric_eqos_top_ipclkport_clk_ptp_ref_i",
	     "fin_pll", GAT_EQOS_TOP_IPCLKPORT_CLK_PTP_REF_I, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "peric_gpio_peric_ipclkport_oscclk", "fin_pll", GAT_GPIO_PERIC_IPCLKPORT_OSCCLK,
	     21, CLK_IGNORE_UNUSED, 0),
	GATE(PERIC_PCLK_ADCIF, "peric_adc0_ipclkport_i_oscclk", "fin_pll",
	     GAT_PERIC_ADC0_IPCLKPORT_I_OSCCLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "peric_cmu_peric_ipclkport_pclk", "mout_peric_pclk",
	     GAT_PERIC_CMU_PERIC_IPCLKPORT_PCLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "peric_pwm0_ipclkport_i_oscclk", "fin_pll", GAT_PERIC_PWM0_IPCLKPORT_I_OSCCLK, 21,
	     CLK_IGNORE_UNUSED, 0),
	GATE(0, "peric_pwm1_ipclkport_i_oscclk", "fin_pll", GAT_PERIC_PWM1_IPCLKPORT_I_OSCCLK, 21,
	     CLK_IGNORE_UNUSED, 0),
	GATE(0, "peric_async_apb_dma0_ipclkport_pclkm", "mout_peric_dmaclk",
	     GAT_ASYNC_APB_DMA0_IPCLKPORT_PCLKM, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "peric_async_apb_dma0_ipclkport_pclks", "mout_peric_pclk",
	     GAT_ASYNC_APB_DMA0_IPCLKPORT_PCLKS, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "peric_async_apb_dma1_ipclkport_pclkm", "mout_peric_dmaclk",
	     GAT_ASYNC_APB_DMA1_IPCLKPORT_PCLKM, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "peric_async_apb_dma1_ipclkport_pclks", "mout_peric_pclk",
	     GAT_ASYNC_APB_DMA1_IPCLKPORT_PCLKS, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "peric_axi2apb_peric0_ipclkport_aclk", "mout_peric_pclk",
	     GAT_AXI2APB_PERIC0_IPCLKPORT_ACLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "peric_axi2apb_peric1_ipclkport_aclk", "mout_peric_pclk",
	     GAT_AXI2APB_PERIC1_IPCLKPORT_ACLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "peric_axi2apb_peric2_ipclkport_aclk", "mout_peric_pclk",
	     GAT_AXI2APB_PERIC2_IPCLKPORT_ACLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "peric_bus_d_peric_ipclkport_dmaclk", "mout_peric_dmaclk",
	     GAT_BUS_D_PERIC_IPCLKPORT_DMACLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(PERIC_BUS_D_PERIC_IPCLKPORT_EQOSCLK, "peric_bus_d_peric_ipclkport_eqosclk",
	     "dout_peric_eqos_busclk", GAT_BUS_D_PERIC_IPCLKPORT_EQOSCLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "peric_bus_d_peric_ipclkport_mainclk", "mout_peric_tbuclk",
	     GAT_BUS_D_PERIC_IPCLKPORT_MAINCLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(PERIC_BUS_P_PERIC_IPCLKPORT_EQOSCLK, "peric_bus_p_peric_ipclkport_eqosclk",
	     "dout_peric_eqos_busclk", GAT_BUS_P_PERIC_IPCLKPORT_EQOSCLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "peric_bus_p_peric_ipclkport_mainclk", "mout_peric_pclk",
	     GAT_BUS_P_PERIC_IPCLKPORT_MAINCLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "peric_bus_p_peric_ipclkport_smmuclk", "mout_peric_tbuclk",
	     GAT_BUS_P_PERIC_IPCLKPORT_SMMUCLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(PERIC_EQOS_TOP_IPCLKPORT_ACLK_I, "peric_eqos_top_ipclkport_aclk_i",
	     "dout_peric_eqos_busclk", GAT_EQOS_TOP_IPCLKPORT_ACLK_I, 21, CLK_IGNORE_UNUSED, 0),
	GATE(PERIC_EQOS_TOP_IPCLKPORT_CLK_RX_I, "peric_eqos_top_ipclkport_clk_rx_i",
	     "mout_peric_eqos_phyrxclk", GAT_EQOS_TOP_IPCLKPORT_CLK_RX_I, 21, CLK_IGNORE_UNUSED, 0),
	GATE(PERIC_EQOS_TOP_IPCLKPORT_HCLK_I, "peric_eqos_top_ipclkport_hclk_i",
	     "dout_peric_eqos_busclk", GAT_EQOS_TOP_IPCLKPORT_HCLK_I, 21, CLK_IGNORE_UNUSED, 0),
	GATE(PERIC_EQOS_TOP_IPCLKPORT_RGMII_CLK_I, "peric_eqos_top_ipclkport_rgmii_clk_i",
	     "dout_peric_rgmii_clk", GAT_EQOS_TOP_IPCLKPORT_RGMII_CLK_I, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "peric_eqos_top_ipclkport_rii_clk_i", "dout_peric_rii_clk",
	     GAT_EQOS_TOP_IPCLKPORT_RII_CLK_I, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "peric_eqos_top_ipclkport_rmii_clk_i", "dout_peric_rmii_clk",
	     GAT_EQOS_TOP_IPCLKPORT_RMII_CLK_I, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "peric_gpio_peric_ipclkport_pclk", "mout_peric_pclk",
	     GAT_GPIO_PERIC_IPCLKPORT_PCLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "peric_ns_brdg_peric_ipclkport_clk__psoc_peric__clk_peric_d", "mout_peric_tbuclk",
	     GAT_NS_BRDG_PERIC_IPCLKPORT_CLK__PSOC_PERIC__CLK_PERIC_D, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "peric_ns_brdg_peric_ipclkport_clk__psoc_peric__clk_peric_p", "mout_peric_pclk",
	     GAT_NS_BRDG_PERIC_IPCLKPORT_CLK__PSOC_PERIC__CLK_PERIC_P, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "peric_adc0_ipclkport_pclk_s0", "mout_peric_pclk",
	     GAT_PERIC_ADC0_IPCLKPORT_PCLK_S0, 21, CLK_IGNORE_UNUSED, 0),
	GATE(PERIC_DMA0_IPCLKPORT_ACLK, "peric_dma0_ipclkport_aclk", "mout_peric_dmaclk",
	     GAT_PERIC_DMA0_IPCLKPORT_ACLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(PERIC_DMA1_IPCLKPORT_ACLK, "peric_dma1_ipclkport_aclk", "mout_peric_dmaclk",
	     GAT_PERIC_DMA1_IPCLKPORT_ACLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(PERIC_PCLK_HSI2C0, "peric_i2c0_ipclkport_i_pclk", "mout_peric_pclk",
	     GAT_PERIC_I2C0_IPCLKPORT_I_PCLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(PERIC_PCLK_HSI2C1, "peric_i2c1_ipclkport_i_pclk", "mout_peric_pclk",
	     GAT_PERIC_I2C1_IPCLKPORT_I_PCLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(PERIC_PCLK_HSI2C2, "peric_i2c2_ipclkport_i_pclk", "mout_peric_pclk",
	     GAT_PERIC_I2C2_IPCLKPORT_I_PCLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(PERIC_PCLK_HSI2C3, "peric_i2c3_ipclkport_i_pclk", "mout_peric_pclk",
	     GAT_PERIC_I2C3_IPCLKPORT_I_PCLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(PERIC_PCLK_HSI2C4, "peric_i2c4_ipclkport_i_pclk", "mout_peric_pclk",
	     GAT_PERIC_I2C4_IPCLKPORT_I_PCLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(PERIC_PCLK_HSI2C5, "peric_i2c5_ipclkport_i_pclk", "mout_peric_pclk",
	     GAT_PERIC_I2C5_IPCLKPORT_I_PCLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(PERIC_PCLK_HSI2C6, "peric_i2c6_ipclkport_i_pclk", "mout_peric_pclk",
	     GAT_PERIC_I2C6_IPCLKPORT_I_PCLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(PERIC_PCLK_HSI2C7, "peric_i2c7_ipclkport_i_pclk", "mout_peric_pclk",
	     GAT_PERIC_I2C7_IPCLKPORT_I_PCLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(PERIC_MCAN0_IPCLKPORT_CCLK, "peric_mcan0_ipclkport_cclk", "dout_peric_mcan_clk",
	     GAT_PERIC_MCAN0_IPCLKPORT_CCLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(PERIC_MCAN0_IPCLKPORT_PCLK, "peric_mcan0_ipclkport_pclk", "mout_peric_pclk",
	     GAT_PERIC_MCAN0_IPCLKPORT_PCLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(PERIC_MCAN1_IPCLKPORT_CCLK, "peric_mcan1_ipclkport_cclk", "dout_peric_mcan_clk",
	     GAT_PERIC_MCAN1_IPCLKPORT_CCLK,
	     21, CLK_IGNORE_UNUSED, 0),
	GATE(PERIC_MCAN1_IPCLKPORT_PCLK, "peric_mcan1_ipclkport_pclk", "mout_peric_pclk",
	     GAT_PERIC_MCAN1_IPCLKPORT_PCLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(PERIC_MCAN2_IPCLKPORT_CCLK, "peric_mcan2_ipclkport_cclk", "dout_peric_mcan_clk",
	     GAT_PERIC_MCAN2_IPCLKPORT_CCLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(PERIC_MCAN2_IPCLKPORT_PCLK, "peric_mcan2_ipclkport_pclk", "mout_peric_pclk",
	     GAT_PERIC_MCAN2_IPCLKPORT_PCLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(PERIC_MCAN3_IPCLKPORT_CCLK, "peric_mcan3_ipclkport_cclk", "dout_peric_mcan_clk",
	     GAT_PERIC_MCAN3_IPCLKPORT_CCLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(PERIC_MCAN3_IPCLKPORT_PCLK, "peric_mcan3_ipclkport_pclk", "mout_peric_pclk",
	     GAT_PERIC_MCAN3_IPCLKPORT_PCLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(PERIC_PWM0_IPCLKPORT_I_PCLK_S0, "peric_pwm0_ipclkport_i_pclk_s0", "mout_peric_pclk",
	     GAT_PERIC_PWM0_IPCLKPORT_I_PCLK_S0, 21, CLK_IGNORE_UNUSED, 0),
	GATE(PERIC_PWM1_IPCLKPORT_I_PCLK_S0, "peric_pwm1_ipclkport_i_pclk_s0", "mout_peric_pclk",
	     GAT_PERIC_PWM1_IPCLKPORT_I_PCLK_S0, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "peric_smmu_ipclkport_cclk", "mout_peric_tbuclk",
	     GAT_PERIC_SMMU_IPCLKPORT_CCLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "peric_smmu_ipclkport_peric_bclk", "mout_peric_tbuclk",
	     GAT_PERIC_SMMU_IPCLKPORT_PERIC_BCLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(PERIC_PCLK_SPI0, "peric_spi0_ipclkport_i_pclk", "mout_peric_spi_pclk",
	     GAT_PERIC_SPI0_IPCLKPORT_I_PCLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(PERIC_SCLK_SPI0, "peric_spi0_ipclkport_i_sclk_spi", "dout_peric_spi_clk",
	     GAT_PERIC_SPI0_IPCLKPORT_I_SCLK_SPI, 21, CLK_IGNORE_UNUSED, 0),
	GATE(PERIC_PCLK_SPI1, "peric_spi1_ipclkport_i_pclk", "mout_peric_spi_pclk",
	     GAT_PERIC_SPI1_IPCLKPORT_I_PCLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(PERIC_SCLK_SPI1, "peric_spi1_ipclkport_i_sclk_spi", "dout_peric_spi_clk",
	     GAT_PERIC_SPI1_IPCLKPORT_I_SCLK_SPI, 21, CLK_IGNORE_UNUSED, 0),
	GATE(PERIC_PCLK_SPI2, "peric_spi2_ipclkport_i_pclk", "mout_peric_spi_pclk",
	     GAT_PERIC_SPI2_IPCLKPORT_I_PCLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(PERIC_SCLK_SPI2, "peric_spi2_ipclkport_i_sclk_spi", "dout_peric_spi_clk",
	     GAT_PERIC_SPI2_IPCLKPORT_I_SCLK_SPI, 21, CLK_IGNORE_UNUSED, 0),
	GATE(PERIC_HCLK_TDM0, "peric_tdm0_ipclkport_hclk_m", "mout_peric_pclk",
	     GAT_PERIC_TDM0_IPCLKPORT_HCLK_M, 21, CLK_IGNORE_UNUSED, 0),
	GATE(PERIC_PCLK_TDM0, "peric_tdm0_ipclkport_pclk", "mout_peric_pclk",
	     GAT_PERIC_TDM0_IPCLKPORT_PCLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(PERIC_HCLK_TDM1, "peric_tdm1_ipclkport_hclk_m", "mout_peric_pclk",
	     GAT_PERIC_TDM1_IPCLKPORT_HCLK_M, 21, CLK_IGNORE_UNUSED, 0),
	GATE(PERIC_PCLK_TDM1, "peric_tdm1_ipclkport_pclk", "mout_peric_pclk",
	     GAT_PERIC_TDM1_IPCLKPORT_PCLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(PERIC_SCLK_UART0, "peric_uart0_ipclkport_i_sclk_uart", "dout_peric_uart_clk",
	     GAT_PERIC_UART0_IPCLKPORT_I_SCLK_UART, 21, CLK_IGNORE_UNUSED, 0),
	GATE(PERIC_PCLK_UART0, "peric_uart0_ipclkport_pclk", "mout_peric_uart_pclk",
	     GAT_PERIC_UART0_IPCLKPORT_PCLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(PERIC_SCLK_UART1, "peric_uart1_ipclkport_i_sclk_uart", "dout_peric_uart_clk",
	     GAT_PERIC_UART1_IPCLKPORT_I_SCLK_UART, 21, CLK_IGNORE_UNUSED, 0),
	GATE(PERIC_PCLK_UART1, "peric_uart1_ipclkport_pclk", "mout_peric_uart_pclk",
	     GAT_PERIC_UART1_IPCLKPORT_PCLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "peric_sysreg_peri_ipclkport_pclk", "mout_peric_pclk",
	     GAT_SYSREG_PERI_IPCLKPORT_PCLK, 21, CLK_IGNORE_UNUSED, 0),
};

static const struct samsung_cmu_info peric_cmu_info __initconst = {
	.mux_clks = peric_mux_clks,
	.nr_mux_clks = ARRAY_SIZE(peric_mux_clks),
	.div_clks = peric_div_clks,
	.nr_div_clks = ARRAY_SIZE(peric_div_clks),
	.gate_clks = peric_gate_clks,
	.nr_gate_clks = ARRAY_SIZE(peric_gate_clks),
	.fixed_clks = peric_fixed_clks,
	.nr_fixed_clks = ARRAY_SIZE(peric_fixed_clks),
	.nr_clk_ids = PERIC_NR_CLK,
	.clk_regs = peric_clk_regs,
	.nr_clk_regs = ARRAY_SIZE(peric_clk_regs),
	.clk_name = "dout_cmu_pll_shared0_div4",
};

/* Register Offset definitions for CMU_FSYS0 (0x15010000) */
#define PLL_CON0_CLKCMU_FSYS0_UNIPRO	0x100
#define PLL_CON0_CLK_FSYS0_SLAVEBUSCLK	0x140
#define PLL_CON0_EQOS_RGMII_125_MUX1	0x160
#define DIV_CLK_UNIPRO	0x1800
#define DIV_EQS_RGMII_CLK_125	0x1804
#define DIV_PERIBUS_GRP	0x1808
#define DIV_EQOS_RII_CLK2O5	0x180c
#define DIV_EQOS_RMIICLK_25	0x1810
#define DIV_PCIE_PHY_OSCCLK	0x1814
#define GAT_FSYS0_EQOS_TOP0_IPCLKPORT_CLK_PTP_REF_I	0x2004
#define GAT_FSYS0_EQOS_TOP0_IPCLKPORT_CLK_RX_I	0x2008
#define GAT_FSYS0_FSYS0_CMU_FSYS0_IPCLKPORT_PCLK	0x200c
#define GAT_FSYS0_GPIO_FSYS0_IPCLKPORT_OSCCLK	0x2010
#define GAT_FSYS0_PCIE_TOP_IPCLKPORT_PCIEG3_PHY_X4_INST_0_PLL_REFCLK_FROM_XO	0x2014
#define GAT_FSYS0_PCIE_TOP_IPCLKPORT_PIPE_PAL_INST_0_I_IMMORTAL_CLK	0x2018
#define GAT_FSYS0_PCIE_TOP_IPCLKPORT_FSD_PCIE_SUB_CTRL_INST_0_AUX_CLK_SOC	0x201c
#define GAT_FSYS0_UFS_TOP0_IPCLKPORT_I_MPHY_REFCLK_IXTAL24	0x2020
#define GAT_FSYS0_UFS_TOP0_IPCLKPORT_I_MPHY_REFCLK_IXTAL26	0x2024
#define GAT_FSYS0_UFS_TOP1_IPCLKPORT_I_MPHY_REFCLK_IXTAL24	0x2028
#define GAT_FSYS0_UFS_TOP1_IPCLKPORT_I_MPHY_REFCLK_IXTAL26	0x202c
#define GAT_FSYS0_AHBBR_FSYS0_IPCLKPORT_HCLK	0x2038
#define GAT_FSYS0_AXI2APB_FSYS0_IPCLKPORT_ACLK	0x203c
#define GAT_FSYS0_BUS_D_FSYS0_IPCLKPORT_MAINCLK	0x2040
#define GAT_FSYS0_BUS_D_FSYS0_IPCLKPORT_PERICLK	0x2044
#define GAT_FSYS0_BUS_P_FSYS0_IPCLKPORT_MAINCLK	0x2048
#define GAT_FSYS0_BUS_P_FSYS0_IPCLKPORT_TCUCLK	0x204c
#define GAT_FSYS0_CPE425_IPCLKPORT_ACLK	0x2050
#define GAT_FSYS0_EQOS_TOP0_IPCLKPORT_ACLK_I	0x2054
#define GAT_FSYS0_EQOS_TOP0_IPCLKPORT_HCLK_I	0x2058
#define GAT_FSYS0_EQOS_TOP0_IPCLKPORT_RGMII_CLK_I	0x205c
#define GAT_FSYS0_EQOS_TOP0_IPCLKPORT_RII_CLK_I	0x2060
#define GAT_FSYS0_EQOS_TOP0_IPCLKPORT_RMII_CLK_I	0x2064
#define GAT_FSYS0_GPIO_FSYS0_IPCLKPORT_PCLK	0x2068
#define GAT_FSYS0_NS_BRDG_FSYS0_IPCLKPORT_CLK__PSOC_FSYS0__CLK_FSYS0_D	0x206c
#define GAT_FSYS0_NS_BRDG_FSYS0_IPCLKPORT_CLK__PSOC_FSYS0__CLK_FSYS0_D1	0x2070
#define GAT_FSYS0_NS_BRDG_FSYS0_IPCLKPORT_CLK__PSOC_FSYS0__CLK_FSYS0_P	0x2074
#define GAT_FSYS0_NS_BRDG_FSYS0_IPCLKPORT_CLK__PSOC_FSYS0__CLK_FSYS0_S	0x2078
#define GAT_FSYS0_PCIE_TOP_IPCLKPORT_PCIEG3_PHY_X4_INST_0_I_APB_PCLK	0x207c
#define GAT_FSYS0_PCIE_TOP_IPCLKPORT_PCIEG3_PHY_X4_INST_0_PLL_REFCLK_FROM_SYSPLL	0x2080
#define GAT_FSYS0_PCIE_TOP_IPCLKPORT_PIPE_PAL_INST_0_I_APB_PCLK_0	0x2084
#define GAT_FSYS0_PCIE_TOP_IPCLKPORT_FSD_PCIE_SUB_CTRL_INST_0_DBI_ACLK_SOC	0x2088
#define GAT_FSYS0_PCIE_TOP_IPCLKPORT_FSD_PCIE_SUB_CTRL_INST_0_I_DRIVER_APB_CLK	0x208c
#define GAT_FSYS0_PCIE_TOP_IPCLKPORT_FSD_PCIE_SUB_CTRL_INST_0_MSTR_ACLK_SOC	0x2090
#define GAT_FSYS0_PCIE_TOP_IPCLKPORT_FSD_PCIE_SUB_CTRL_INST_0_SLV_ACLK_SOC	0x2094
#define GAT_FSYS0_SMMU_FSYS0_IPCLKPORT_CCLK	0x2098
#define GAT_FSYS0_SMMU_FSYS0_IPCLKPORT_FSYS0_BCLK	0x209c
#define GAT_FSYS0_SYSREG_FSYS0_IPCLKPORT_PCLK	0x20a0
#define GAT_FSYS0_UFS_TOP0_IPCLKPORT_HCLK_BUS	0x20a4
#define GAT_FSYS0_UFS_TOP0_IPCLKPORT_I_ACLK	0x20a8
#define GAT_FSYS0_UFS_TOP0_IPCLKPORT_I_CLK_UNIPRO	0x20ac
#define GAT_FSYS0_UFS_TOP0_IPCLKPORT_I_FMP_CLK	0x20b0
#define GAT_FSYS0_UFS_TOP1_IPCLKPORT_HCLK_BUS	0x20b4
#define GAT_FSYS0_UFS_TOP1_IPCLKPORT_I_ACLK	0x20b8
#define GAT_FSYS0_UFS_TOP1_IPCLKPORT_I_CLK_UNIPRO	0x20bc
#define GAT_FSYS0_UFS_TOP1_IPCLKPORT_I_FMP_CLK	0x20c0
#define GAT_FSYS0_RII_CLK_DIVGATE	0x20d4

static const unsigned long fsys0_clk_regs[]
__initconst = { 735 + PLL_CON0_CLKCMU_FSYS0_UNIPRO, 736 + PLL_CON0_CLK_FSYS0_SLAVEBUSCLK, 737 + PLL_CON0_EQOS_RGMII_125_MUX1, 738 + DIV_CLK_UNIPRO, 739 + DIV_EQS_RGMII_CLK_125, 740 + DIV_PERIBUS_GRP, 741 + DIV_EQOS_RII_CLK2O5, 742 + DIV_EQOS_RMIICLK_25, 743 + DIV_PCIE_PHY_OSCCLK, 744 + GAT_FSYS0_EQOS_TOP0_IPCLKPORT_CLK_PTP_REF_I, 745 + GAT_FSYS0_EQOS_TOP0_IPCLKPORT_CLK_RX_I, 746 + GAT_FSYS0_FSYS0_CMU_FSYS0_IPCLKPORT_PCLK, 747 + GAT_FSYS0_GPIO_FSYS0_IPCLKPORT_OSCCLK, 748 + GAT_FSYS0_PCIE_TOP_IPCLKPORT_PCIEG3_PHY_X4_INST_0_PLL_REFCLK_FROM_XO, 749 + GAT_FSYS0_PCIE_TOP_IPCLKPORT_PIPE_PAL_INST_0_I_IMMORTAL_CLK, 750 + GAT_FSYS0_PCIE_TOP_IPCLKPORT_FSD_PCIE_SUB_CTRL_INST_0_AUX_CLK_SOC, 751 + GAT_FSYS0_UFS_TOP0_IPCLKPORT_I_MPHY_REFCLK_IXTAL24, 752 + GAT_FSYS0_UFS_TOP0_IPCLKPORT_I_MPHY_REFCLK_IXTAL26, 753 + GAT_FSYS0_UFS_TOP1_IPCLKPORT_I_MPHY_REFCLK_IXTAL24, 754 + GAT_FSYS0_UFS_TOP1_IPCLKPORT_I_MPHY_REFCLK_IXTAL26, 755 + GAT_FSYS0_AHBBR_FSYS0_IPCLKPORT_HCLK, 756 + GAT_FSYS0_AXI2APB_FSYS0_IPCLKPORT_ACLK, 757 + GAT_FSYS0_BUS_D_FSYS0_IPCLKPORT_MAINCLK, 758 + GAT_FSYS0_BUS_D_FSYS0_IPCLKPORT_PERICLK, 759 + GAT_FSYS0_BUS_P_FSYS0_IPCLKPORT_MAINCLK, 760 + GAT_FSYS0_BUS_P_FSYS0_IPCLKPORT_TCUCLK, 761 + GAT_FSYS0_CPE425_IPCLKPORT_ACLK, 762 + GAT_FSYS0_EQOS_TOP0_IPCLKPORT_ACLK_I, 763 + GAT_FSYS0_EQOS_TOP0_IPCLKPORT_HCLK_I, 764 + GAT_FSYS0_EQOS_TOP0_IPCLKPORT_RGMII_CLK_I, 765 + GAT_FSYS0_EQOS_TOP0_IPCLKPORT_RII_CLK_I, 766 + GAT_FSYS0_EQOS_TOP0_IPCLKPORT_RMII_CLK_I, 767 + GAT_FSYS0_GPIO_FSYS0_IPCLKPORT_PCLK, 768 + GAT_FSYS0_NS_BRDG_FSYS0_IPCLKPORT_CLK__PSOC_FSYS0__CLK_FSYS0_D, 769 + GAT_FSYS0_NS_BRDG_FSYS0_IPCLKPORT_CLK__PSOC_FSYS0__CLK_FSYS0_D1, 770 + GAT_FSYS0_NS_BRDG_FSYS0_IPCLKPORT_CLK__PSOC_FSYS0__CLK_FSYS0_P, 771 + GAT_FSYS0_NS_BRDG_FSYS0_IPCLKPORT_CLK__PSOC_FSYS0__CLK_FSYS0_S, 772 + GAT_FSYS0_PCIE_TOP_IPCLKPORT_PCIEG3_PHY_X4_INST_0_I_APB_PCLK, 773 + GAT_FSYS0_PCIE_TOP_IPCLKPORT_PCIEG3_PHY_X4_INST_0_PLL_REFCLK_FROM_SYSPLL, 774 + 
GAT_FSYS0_PCIE_TOP_IPCLKPORT_PIPE_PAL_INST_0_I_APB_PCLK_0, 775 + GAT_FSYS0_PCIE_TOP_IPCLKPORT_FSD_PCIE_SUB_CTRL_INST_0_DBI_ACLK_SOC, 776 + GAT_FSYS0_PCIE_TOP_IPCLKPORT_FSD_PCIE_SUB_CTRL_INST_0_I_DRIVER_APB_CLK, 777 + GAT_FSYS0_PCIE_TOP_IPCLKPORT_FSD_PCIE_SUB_CTRL_INST_0_MSTR_ACLK_SOC, 778 + GAT_FSYS0_PCIE_TOP_IPCLKPORT_FSD_PCIE_SUB_CTRL_INST_0_SLV_ACLK_SOC, 779 + GAT_FSYS0_SMMU_FSYS0_IPCLKPORT_CCLK, 780 + GAT_FSYS0_SMMU_FSYS0_IPCLKPORT_FSYS0_BCLK, 781 + GAT_FSYS0_SYSREG_FSYS0_IPCLKPORT_PCLK, 782 + GAT_FSYS0_UFS_TOP0_IPCLKPORT_HCLK_BUS, 783 + GAT_FSYS0_UFS_TOP0_IPCLKPORT_I_ACLK, 784 + GAT_FSYS0_UFS_TOP0_IPCLKPORT_I_CLK_UNIPRO, 785 + GAT_FSYS0_UFS_TOP0_IPCLKPORT_I_FMP_CLK, 786 + GAT_FSYS0_UFS_TOP1_IPCLKPORT_HCLK_BUS, 787 + GAT_FSYS0_UFS_TOP1_IPCLKPORT_I_ACLK, 788 + GAT_FSYS0_UFS_TOP1_IPCLKPORT_I_CLK_UNIPRO, 789 + GAT_FSYS0_UFS_TOP1_IPCLKPORT_I_FMP_CLK, 790 + GAT_FSYS0_RII_CLK_DIVGATE, 791 + }; 792 + 793 + static const struct samsung_fixed_rate_clock fsys0_fixed_clks[] __initconst = { 794 + FRATE(0, "pad_eqos0_phyrxclk", NULL, 0, 125000000), 795 + FRATE(0, "i_mphy_refclk_ixtal26", NULL, 0, 26000000), 796 + FRATE(0, "xtal_clk_pcie_phy", NULL, 0, 100000000), 797 + }; 798 + 799 + /* List of parent clocks for Muxes in CMU_FSYS0 */ 800 + PNAME(mout_fsys0_clkcmu_fsys0_unipro_p) = { "fin_pll", "dout_cmu_pll_shared0_div6" }; 801 + PNAME(mout_fsys0_clk_fsys0_slavebusclk_p) = { "fin_pll", "dout_cmu_fsys0_shared1div4" }; 802 + PNAME(mout_fsys0_eqos_rgmii_125_mux1_p) = { "fin_pll", "dout_cmu_fsys0_shared0div4" }; 803 + 804 + static const struct samsung_mux_clock fsys0_mux_clks[] __initconst = { 805 + MUX(0, "mout_fsys0_clkcmu_fsys0_unipro", mout_fsys0_clkcmu_fsys0_unipro_p, 806 + PLL_CON0_CLKCMU_FSYS0_UNIPRO, 4, 1), 807 + MUX(0, "mout_fsys0_clk_fsys0_slavebusclk", mout_fsys0_clk_fsys0_slavebusclk_p, 808 + PLL_CON0_CLK_FSYS0_SLAVEBUSCLK, 4, 1), 809 + MUX(0, "mout_fsys0_eqos_rgmii_125_mux1", mout_fsys0_eqos_rgmii_125_mux1_p, 810 + PLL_CON0_EQOS_RGMII_125_MUX1, 4, 1), 811 + }; 812 + 
static const struct samsung_div_clock fsys0_div_clks[] __initconst = {
	DIV(0, "dout_fsys0_clk_unipro", "mout_fsys0_clkcmu_fsys0_unipro", DIV_CLK_UNIPRO, 0, 4),
	DIV(0, "dout_fsys0_eqs_rgmii_clk_125", "mout_fsys0_eqos_rgmii_125_mux1",
	    DIV_EQS_RGMII_CLK_125, 0, 4),
	DIV(FSYS0_DOUT_FSYS0_PERIBUS_GRP, "dout_fsys0_peribus_grp",
	    "mout_fsys0_clk_fsys0_slavebusclk", DIV_PERIBUS_GRP, 0, 4),
	DIV(0, "dout_fsys0_eqos_rii_clk2o5", "fsys0_rii_clk_divgate", DIV_EQOS_RII_CLK2O5, 0, 4),
	DIV(0, "dout_fsys0_eqos_rmiiclk_25", "mout_fsys0_eqos_rgmii_125_mux1",
	    DIV_EQOS_RMIICLK_25, 0, 5),
	DIV(0, "dout_fsys0_pcie_phy_oscclk", "mout_fsys0_eqos_rgmii_125_mux1",
	    DIV_PCIE_PHY_OSCCLK, 0, 4),
};

static const struct samsung_gate_clock fsys0_gate_clks[] __initconst = {
	GATE(FSYS0_EQOS_TOP0_IPCLKPORT_CLK_RX_I, "fsys0_eqos_top0_ipclkport_clk_rx_i",
	     "pad_eqos0_phyrxclk", GAT_FSYS0_EQOS_TOP0_IPCLKPORT_CLK_RX_I, 21,
	     CLK_IGNORE_UNUSED, 0),
	GATE(PCIE_SUBCTRL_INST0_AUX_CLK_SOC,
	     "fsys0_pcie_top_ipclkport_fsd_pcie_sub_ctrl_inst_0_aux_clk_soc", "fin_pll",
	     GAT_FSYS0_PCIE_TOP_IPCLKPORT_FSD_PCIE_SUB_CTRL_INST_0_AUX_CLK_SOC, 21,
	     CLK_IGNORE_UNUSED, 0),
	GATE(0, "fsys0_fsys0_cmu_fsys0_ipclkport_pclk", "dout_fsys0_peribus_grp",
	     GAT_FSYS0_FSYS0_CMU_FSYS0_IPCLKPORT_PCLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0,
	     "fsys0_pcie_top_ipclkport_pcieg3_phy_x4_inst_0_pll_refclk_from_xo",
	     "xtal_clk_pcie_phy",
	     GAT_FSYS0_PCIE_TOP_IPCLKPORT_PCIEG3_PHY_X4_INST_0_PLL_REFCLK_FROM_XO, 21,
	     CLK_IGNORE_UNUSED, 0),
	GATE(UFS0_MPHY_REFCLK_IXTAL24, "fsys0_ufs_top0_ipclkport_i_mphy_refclk_ixtal24",
	     "i_mphy_refclk_ixtal26", GAT_FSYS0_UFS_TOP0_IPCLKPORT_I_MPHY_REFCLK_IXTAL24, 21,
	     CLK_IGNORE_UNUSED, 0),
	GATE(UFS0_MPHY_REFCLK_IXTAL26, "fsys0_ufs_top0_ipclkport_i_mphy_refclk_ixtal26",
	     "i_mphy_refclk_ixtal26", GAT_FSYS0_UFS_TOP0_IPCLKPORT_I_MPHY_REFCLK_IXTAL26, 21,
	     CLK_IGNORE_UNUSED, 0),
	GATE(UFS1_MPHY_REFCLK_IXTAL24, "fsys0_ufs_top1_ipclkport_i_mphy_refclk_ixtal24",
	     "i_mphy_refclk_ixtal26", GAT_FSYS0_UFS_TOP1_IPCLKPORT_I_MPHY_REFCLK_IXTAL24, 21,
	     CLK_IGNORE_UNUSED, 0),
	GATE(UFS1_MPHY_REFCLK_IXTAL26, "fsys0_ufs_top1_ipclkport_i_mphy_refclk_ixtal26",
	     "i_mphy_refclk_ixtal26", GAT_FSYS0_UFS_TOP1_IPCLKPORT_I_MPHY_REFCLK_IXTAL26, 21,
	     CLK_IGNORE_UNUSED, 0),
	GATE(0, "fsys0_ahbbr_fsys0_ipclkport_hclk", "dout_fsys0_peribus_grp",
	     GAT_FSYS0_AHBBR_FSYS0_IPCLKPORT_HCLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "fsys0_axi2apb_fsys0_ipclkport_aclk", "dout_fsys0_peribus_grp",
	     GAT_FSYS0_AXI2APB_FSYS0_IPCLKPORT_ACLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "fsys0_bus_d_fsys0_ipclkport_mainclk", "mout_fsys0_clk_fsys0_slavebusclk",
	     GAT_FSYS0_BUS_D_FSYS0_IPCLKPORT_MAINCLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "fsys0_bus_d_fsys0_ipclkport_periclk", "dout_fsys0_peribus_grp",
	     GAT_FSYS0_BUS_D_FSYS0_IPCLKPORT_PERICLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "fsys0_bus_p_fsys0_ipclkport_mainclk", "dout_fsys0_peribus_grp",
	     GAT_FSYS0_BUS_P_FSYS0_IPCLKPORT_MAINCLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "fsys0_bus_p_fsys0_ipclkport_tcuclk", "mout_fsys0_eqos_rgmii_125_mux1",
	     GAT_FSYS0_BUS_P_FSYS0_IPCLKPORT_TCUCLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "fsys0_cpe425_ipclkport_aclk", "mout_fsys0_clk_fsys0_slavebusclk",
	     GAT_FSYS0_CPE425_IPCLKPORT_ACLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(FSYS0_EQOS_TOP0_IPCLKPORT_ACLK_I, "fsys0_eqos_top0_ipclkport_aclk_i",
	     "dout_fsys0_peribus_grp", GAT_FSYS0_EQOS_TOP0_IPCLKPORT_ACLK_I, 21,
	     CLK_IGNORE_UNUSED, 0),
	GATE(FSYS0_EQOS_TOP0_IPCLKPORT_HCLK_I, "fsys0_eqos_top0_ipclkport_hclk_i",
	     "dout_fsys0_peribus_grp", GAT_FSYS0_EQOS_TOP0_IPCLKPORT_HCLK_I, 21,
	     CLK_IGNORE_UNUSED, 0),
	GATE(FSYS0_EQOS_TOP0_IPCLKPORT_RGMII_CLK_I, "fsys0_eqos_top0_ipclkport_rgmii_clk_i",
	     "dout_fsys0_eqs_rgmii_clk_125", GAT_FSYS0_EQOS_TOP0_IPCLKPORT_RGMII_CLK_I, 21,
	     CLK_IGNORE_UNUSED, 0),
	GATE(0, "fsys0_eqos_top0_ipclkport_rii_clk_i", "dout_fsys0_eqos_rii_clk2o5",
	     GAT_FSYS0_EQOS_TOP0_IPCLKPORT_RII_CLK_I, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "fsys0_eqos_top0_ipclkport_rmii_clk_i", "dout_fsys0_eqos_rmiiclk_25",
	     GAT_FSYS0_EQOS_TOP0_IPCLKPORT_RMII_CLK_I, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "fsys0_gpio_fsys0_ipclkport_pclk", "dout_fsys0_peribus_grp",
	     GAT_FSYS0_GPIO_FSYS0_IPCLKPORT_PCLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "fsys0_gpio_fsys0_ipclkport_oscclk", "fin_pll",
	     GAT_FSYS0_GPIO_FSYS0_IPCLKPORT_OSCCLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "fsys0_ns_brdg_fsys0_ipclkport_clk__psoc_fsys0__clk_fsys0_d",
	     "mout_fsys0_clk_fsys0_slavebusclk",
	     GAT_FSYS0_NS_BRDG_FSYS0_IPCLKPORT_CLK__PSOC_FSYS0__CLK_FSYS0_D, 21,
	     CLK_IGNORE_UNUSED, 0),
	GATE(0, "fsys0_ns_brdg_fsys0_ipclkport_clk__psoc_fsys0__clk_fsys0_d1",
	     "mout_fsys0_eqos_rgmii_125_mux1",
	     GAT_FSYS0_NS_BRDG_FSYS0_IPCLKPORT_CLK__PSOC_FSYS0__CLK_FSYS0_D1, 21,
	     CLK_IGNORE_UNUSED, 0),
	GATE(0, "fsys0_ns_brdg_fsys0_ipclkport_clk__psoc_fsys0__clk_fsys0_p",
	     "dout_fsys0_peribus_grp",
	     GAT_FSYS0_NS_BRDG_FSYS0_IPCLKPORT_CLK__PSOC_FSYS0__CLK_FSYS0_P, 21,
	     CLK_IGNORE_UNUSED, 0),
	GATE(0, "fsys0_ns_brdg_fsys0_ipclkport_clk__psoc_fsys0__clk_fsys0_s",
	     "mout_fsys0_clk_fsys0_slavebusclk",
	     GAT_FSYS0_NS_BRDG_FSYS0_IPCLKPORT_CLK__PSOC_FSYS0__CLK_FSYS0_S, 21,
	     CLK_IGNORE_UNUSED, 0),
	GATE(0, "fsys0_pcie_top_ipclkport_pcieg3_phy_x4_inst_0_i_apb_pclk",
	     "dout_fsys0_peribus_grp",
	     GAT_FSYS0_PCIE_TOP_IPCLKPORT_PCIEG3_PHY_X4_INST_0_I_APB_PCLK, 21,
	     CLK_IGNORE_UNUSED, 0),
	GATE(0,
	     "fsys0_pcie_top_ipclkport_pcieg3_phy_x4_inst_0_pll_refclk_from_syspll",
	     "dout_fsys0_pcie_phy_oscclk",
	     GAT_FSYS0_PCIE_TOP_IPCLKPORT_PCIEG3_PHY_X4_INST_0_PLL_REFCLK_FROM_SYSPLL,
	     21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "fsys0_pcie_top_ipclkport_pipe_pal_inst_0_i_apb_pclk_0", "dout_fsys0_peribus_grp",
	     GAT_FSYS0_PCIE_TOP_IPCLKPORT_PIPE_PAL_INST_0_I_APB_PCLK_0, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "fsys0_pcie_top_ipclkport_pipe_pal_inst_0_i_immortal_clk", "fin_pll",
	     GAT_FSYS0_PCIE_TOP_IPCLKPORT_PIPE_PAL_INST_0_I_IMMORTAL_CLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(PCIE_SUBCTRL_INST0_DBI_ACLK_SOC,
	     "fsys0_pcie_top_ipclkport_fsd_pcie_sub_ctrl_inst_0_dbi_aclk_soc",
	     "dout_fsys0_peribus_grp",
	     GAT_FSYS0_PCIE_TOP_IPCLKPORT_FSD_PCIE_SUB_CTRL_INST_0_DBI_ACLK_SOC, 21,
	     CLK_IGNORE_UNUSED, 0),
	GATE(0, "fsys0_pcie_top_ipclkport_fsd_pcie_sub_ctrl_inst_0_i_driver_apb_clk",
	     "dout_fsys0_peribus_grp",
	     GAT_FSYS0_PCIE_TOP_IPCLKPORT_FSD_PCIE_SUB_CTRL_INST_0_I_DRIVER_APB_CLK, 21,
	     CLK_IGNORE_UNUSED, 0),
	GATE(PCIE_SUBCTRL_INST0_MSTR_ACLK_SOC,
	     "fsys0_pcie_top_ipclkport_fsd_pcie_sub_ctrl_inst_0_mstr_aclk_soc",
	     "mout_fsys0_clk_fsys0_slavebusclk",
	     GAT_FSYS0_PCIE_TOP_IPCLKPORT_FSD_PCIE_SUB_CTRL_INST_0_MSTR_ACLK_SOC, 21,
	     CLK_IGNORE_UNUSED, 0),
	GATE(PCIE_SUBCTRL_INST0_SLV_ACLK_SOC,
	     "fsys0_pcie_top_ipclkport_fsd_pcie_sub_ctrl_inst_0_slv_aclk_soc",
	     "mout_fsys0_clk_fsys0_slavebusclk",
	     GAT_FSYS0_PCIE_TOP_IPCLKPORT_FSD_PCIE_SUB_CTRL_INST_0_SLV_ACLK_SOC, 21,
	     CLK_IGNORE_UNUSED, 0),
	GATE(0, "fsys0_smmu_fsys0_ipclkport_cclk", "mout_fsys0_eqos_rgmii_125_mux1",
	     GAT_FSYS0_SMMU_FSYS0_IPCLKPORT_CCLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "fsys0_smmu_fsys0_ipclkport_fsys0_bclk", "mout_fsys0_clk_fsys0_slavebusclk",
	     GAT_FSYS0_SMMU_FSYS0_IPCLKPORT_FSYS0_BCLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "fsys0_sysreg_fsys0_ipclkport_pclk", "dout_fsys0_peribus_grp",
	     GAT_FSYS0_SYSREG_FSYS0_IPCLKPORT_PCLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(UFS0_TOP0_HCLK_BUS, "fsys0_ufs_top0_ipclkport_hclk_bus", "dout_fsys0_peribus_grp",
	     GAT_FSYS0_UFS_TOP0_IPCLKPORT_HCLK_BUS, 21, CLK_IGNORE_UNUSED, 0),
	GATE(UFS0_TOP0_ACLK, "fsys0_ufs_top0_ipclkport_i_aclk", "dout_fsys0_peribus_grp",
	     GAT_FSYS0_UFS_TOP0_IPCLKPORT_I_ACLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(UFS0_TOP0_CLK_UNIPRO, "fsys0_ufs_top0_ipclkport_i_clk_unipro", "dout_fsys0_clk_unipro",
	     GAT_FSYS0_UFS_TOP0_IPCLKPORT_I_CLK_UNIPRO, 21, CLK_IGNORE_UNUSED, 0),
	GATE(UFS0_TOP0_FMP_CLK, "fsys0_ufs_top0_ipclkport_i_fmp_clk", "dout_fsys0_peribus_grp",
	     GAT_FSYS0_UFS_TOP0_IPCLKPORT_I_FMP_CLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(UFS1_TOP1_HCLK_BUS, "fsys0_ufs_top1_ipclkport_hclk_bus", "dout_fsys0_peribus_grp",
	     GAT_FSYS0_UFS_TOP1_IPCLKPORT_HCLK_BUS, 21, CLK_IGNORE_UNUSED, 0),
	GATE(UFS1_TOP1_ACLK, "fsys0_ufs_top1_ipclkport_i_aclk", "dout_fsys0_peribus_grp",
	     GAT_FSYS0_UFS_TOP1_IPCLKPORT_I_ACLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(UFS1_TOP1_CLK_UNIPRO, "fsys0_ufs_top1_ipclkport_i_clk_unipro", "dout_fsys0_clk_unipro",
	     GAT_FSYS0_UFS_TOP1_IPCLKPORT_I_CLK_UNIPRO, 21, CLK_IGNORE_UNUSED, 0),
	GATE(UFS1_TOP1_FMP_CLK, "fsys0_ufs_top1_ipclkport_i_fmp_clk", "dout_fsys0_peribus_grp",
	     GAT_FSYS0_UFS_TOP1_IPCLKPORT_I_FMP_CLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "fsys0_rii_clk_divgate", "dout_fsys0_eqos_rmiiclk_25", GAT_FSYS0_RII_CLK_DIVGATE,
	     21, CLK_IGNORE_UNUSED, 0),
	GATE(FSYS0_EQOS_TOP0_IPCLKPORT_CLK_PTP_REF_I, "fsys0_eqos_top0_ipclkport_clk_ptp_ref_i",
	     "fin_pll", GAT_FSYS0_EQOS_TOP0_IPCLKPORT_CLK_PTP_REF_I, 21, CLK_IGNORE_UNUSED, 0),
};

static const struct samsung_cmu_info fsys0_cmu_info __initconst = {
	.mux_clks = fsys0_mux_clks,
	.nr_mux_clks = ARRAY_SIZE(fsys0_mux_clks),
	.div_clks = fsys0_div_clks,
	.nr_div_clks = ARRAY_SIZE(fsys0_div_clks),
	.gate_clks = fsys0_gate_clks,
	.nr_gate_clks = ARRAY_SIZE(fsys0_gate_clks),
	.fixed_clks = fsys0_fixed_clks,
	.nr_fixed_clks = ARRAY_SIZE(fsys0_fixed_clks),
	.nr_clk_ids = FSYS0_NR_CLK,
	.clk_regs = fsys0_clk_regs,
	.nr_clk_regs = ARRAY_SIZE(fsys0_clk_regs),
	.clk_name = "dout_cmu_fsys0_shared1div4",
};

/* Register Offset definitions for CMU_FSYS1 (0x16810000) */
#define PLL_CON0_ACLK_FSYS1_BUSP_MUX	0x100
#define PLL_CON0_PCLKL_FSYS1_BUSP_MUX	0x180
#define DIV_CLK_FSYS1_PHY0_OSCCLK	0x1800
#define DIV_CLK_FSYS1_PHY1_OSCCLK	0x1804
#define GAT_FSYS1_CMU_FSYS1_IPCLKPORT_PCLK		0x2000
#define GAT_FSYS1_PCIE_LINK0_IPCLKPORT_AUXCLK		0x2004
#define GAT_FSYS1_PCIE_LINK0_IPCLKPORT_I_SOC_REF_CLK	0x2008
#define GAT_FSYS1_PCIE_LINK1_IPCLKPORT_AUXCLK		0x200c
#define GAT_FSYS1_PCIE_PHY0_IPCLKPORT_I_REF_XTAL	0x202c
#define GAT_FSYS1_PHY0_OSCCLLK				0x2034
#define GAT_FSYS1_PHY1_OSCCLK				0x2038
#define GAT_FSYS1_AXI2APB_FSYS1_IPCLKPORT_ACLK		0x203c
#define GAT_FSYS1_BUS_D0_FSYS1_IPCLKPORT_MAINCLK	0x2040
#define GAT_FSYS1_BUS_S0_FSYS1_IPCLKPORT_M250CLK	0x2048
#define GAT_FSYS1_BUS_S0_FSYS1_IPCLKPORT_MAINCLK	0x204c
#define GAT_FSYS1_CPE425_0_FSYS1_IPCLKPORT_ACLK		0x2054
#define GAT_FSYS1_NS_BRDG_FSYS1_IPCLKPORT_CLK__PSOC_FSYS1__CLK_FSYS1_D0	0x205c
#define GAT_FSYS1_NS_BRDG_FSYS1_IPCLKPORT_CLK__PSOC_FSYS1__CLK_FSYS1_S0	0x2064
#define GAT_FSYS1_PCIE_LINK0_IPCLKPORT_DBI_ACLK		0x206c
#define GAT_FSYS1_PCIE_LINK0_IPCLKPORT_I_APB_CLK	0x2070
#define GAT_FSYS1_PCIE_LINK0_IPCLKPORT_I_DRIVER_APB_CLK	0x2074
#define GAT_FSYS1_PCIE_LINK0_IPCLKPORT_MSTR_ACLK	0x2078
#define GAT_FSYS1_PCIE_LINK0_IPCLKPORT_SLV_ACLK		0x207c
#define GAT_FSYS1_PCIE_LINK1_IPCLKPORT_DBI_ACLK		0x2080
#define GAT_FSYS1_PCIE_LINK1_IPCLKPORT_I_DRIVER_APB_CLK	0x2084
#define GAT_FSYS1_PCIE_LINK1_IPCLKPORT_MSTR_ACLK	0x2088
#define GAT_FSYS1_PCIE_LINK1_IPCLKPORT_SLV_ACLK		0x208c
#define GAT_FSYS1_PCIE_PHY0_IPCLKPORT_I_APB_CLK		0x20a4
#define GAT_FSYS1_PCIE_PHY0_IPCLKPORT_I_REF_SOC_PLL	0x20a8
#define GAT_FSYS1_SYSREG_FSYS1_IPCLKPORT_PCLK		0x20b4
#define GAT_FSYS1_TBU0_FSYS1_IPCLKPORT_ACLK		0x20b8

static const unsigned long fsys1_clk_regs[] __initconst = {
	PLL_CON0_ACLK_FSYS1_BUSP_MUX,
	PLL_CON0_PCLKL_FSYS1_BUSP_MUX,
	DIV_CLK_FSYS1_PHY0_OSCCLK,
	DIV_CLK_FSYS1_PHY1_OSCCLK,
	GAT_FSYS1_CMU_FSYS1_IPCLKPORT_PCLK,
	GAT_FSYS1_PCIE_LINK0_IPCLKPORT_AUXCLK,
	GAT_FSYS1_PCIE_LINK0_IPCLKPORT_I_SOC_REF_CLK,
	GAT_FSYS1_PCIE_LINK1_IPCLKPORT_AUXCLK,
	GAT_FSYS1_PCIE_PHY0_IPCLKPORT_I_REF_XTAL,
	GAT_FSYS1_PHY0_OSCCLLK,
	GAT_FSYS1_PHY1_OSCCLK,
	GAT_FSYS1_AXI2APB_FSYS1_IPCLKPORT_ACLK,
	GAT_FSYS1_BUS_D0_FSYS1_IPCLKPORT_MAINCLK,
	GAT_FSYS1_BUS_S0_FSYS1_IPCLKPORT_M250CLK,
	GAT_FSYS1_BUS_S0_FSYS1_IPCLKPORT_MAINCLK,
	GAT_FSYS1_CPE425_0_FSYS1_IPCLKPORT_ACLK,
	GAT_FSYS1_NS_BRDG_FSYS1_IPCLKPORT_CLK__PSOC_FSYS1__CLK_FSYS1_D0,
	GAT_FSYS1_NS_BRDG_FSYS1_IPCLKPORT_CLK__PSOC_FSYS1__CLK_FSYS1_S0,
	GAT_FSYS1_PCIE_LINK0_IPCLKPORT_DBI_ACLK,
	GAT_FSYS1_PCIE_LINK0_IPCLKPORT_I_APB_CLK,
	GAT_FSYS1_PCIE_LINK0_IPCLKPORT_I_DRIVER_APB_CLK,
	GAT_FSYS1_PCIE_LINK0_IPCLKPORT_MSTR_ACLK,
	GAT_FSYS1_PCIE_LINK0_IPCLKPORT_SLV_ACLK,
	GAT_FSYS1_PCIE_LINK1_IPCLKPORT_DBI_ACLK,
	GAT_FSYS1_PCIE_LINK1_IPCLKPORT_I_DRIVER_APB_CLK,
	GAT_FSYS1_PCIE_LINK1_IPCLKPORT_MSTR_ACLK,
	GAT_FSYS1_PCIE_LINK1_IPCLKPORT_SLV_ACLK,
	GAT_FSYS1_PCIE_PHY0_IPCLKPORT_I_APB_CLK,
	GAT_FSYS1_PCIE_PHY0_IPCLKPORT_I_REF_SOC_PLL,
	GAT_FSYS1_SYSREG_FSYS1_IPCLKPORT_PCLK,
	GAT_FSYS1_TBU0_FSYS1_IPCLKPORT_ACLK,
};

static const struct samsung_fixed_rate_clock fsys1_fixed_clks[] __initconst = {
	FRATE(0, "clk_fsys1_phy0_ref", NULL, 0, 100000000),
	FRATE(0, "clk_fsys1_phy1_ref", NULL, 0, 100000000),
};

/* List of parent clocks for Muxes in CMU_FSYS1 */
PNAME(mout_fsys1_pclkl_fsys1_busp_mux_p) = { "fin_pll", "dout_cmu_fsys1_shared0div8" };
PNAME(mout_fsys1_aclk_fsys1_busp_mux_p) = { "fin_pll", "dout_cmu_fsys1_shared0div4" };

static const struct samsung_mux_clock fsys1_mux_clks[] __initconst = {
	MUX(0, "mout_fsys1_pclkl_fsys1_busp_mux", mout_fsys1_pclkl_fsys1_busp_mux_p,
	    PLL_CON0_PCLKL_FSYS1_BUSP_MUX, 4, 1),
	MUX(0, "mout_fsys1_aclk_fsys1_busp_mux", mout_fsys1_aclk_fsys1_busp_mux_p,
	    PLL_CON0_ACLK_FSYS1_BUSP_MUX, 4, 1),
};

static const struct samsung_div_clock fsys1_div_clks[] __initconst = {
	DIV(0, "dout_fsys1_clk_fsys1_phy0_oscclk", "fsys1_phy0_osccllk",
	    DIV_CLK_FSYS1_PHY0_OSCCLK, 0, 4),
	DIV(0, "dout_fsys1_clk_fsys1_phy1_oscclk", "fsys1_phy1_oscclk",
	    DIV_CLK_FSYS1_PHY1_OSCCLK, 0, 4),
};

static const struct samsung_gate_clock fsys1_gate_clks[] __initconst = {
	GATE(0, "fsys1_cmu_fsys1_ipclkport_pclk", "mout_fsys1_pclkl_fsys1_busp_mux",
	     GAT_FSYS1_CMU_FSYS1_IPCLKPORT_PCLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "fsys1_pcie_phy0_ipclkport_i_ref_xtal", "clk_fsys1_phy0_ref",
	     GAT_FSYS1_PCIE_PHY0_IPCLKPORT_I_REF_XTAL, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "fsys1_phy0_osccllk", "mout_fsys1_aclk_fsys1_busp_mux",
	     GAT_FSYS1_PHY0_OSCCLLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "fsys1_phy1_oscclk", "mout_fsys1_aclk_fsys1_busp_mux",
	     GAT_FSYS1_PHY1_OSCCLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "fsys1_axi2apb_fsys1_ipclkport_aclk", "mout_fsys1_pclkl_fsys1_busp_mux",
	     GAT_FSYS1_AXI2APB_FSYS1_IPCLKPORT_ACLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "fsys1_bus_d0_fsys1_ipclkport_mainclk", "mout_fsys1_aclk_fsys1_busp_mux",
	     GAT_FSYS1_BUS_D0_FSYS1_IPCLKPORT_MAINCLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "fsys1_bus_s0_fsys1_ipclkport_m250clk", "mout_fsys1_pclkl_fsys1_busp_mux",
	     GAT_FSYS1_BUS_S0_FSYS1_IPCLKPORT_M250CLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "fsys1_bus_s0_fsys1_ipclkport_mainclk", "mout_fsys1_aclk_fsys1_busp_mux",
	     GAT_FSYS1_BUS_S0_FSYS1_IPCLKPORT_MAINCLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "fsys1_cpe425_0_fsys1_ipclkport_aclk", "mout_fsys1_aclk_fsys1_busp_mux",
	     GAT_FSYS1_CPE425_0_FSYS1_IPCLKPORT_ACLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "fsys1_ns_brdg_fsys1_ipclkport_clk__psoc_fsys1__clk_fsys1_d0",
	     "mout_fsys1_aclk_fsys1_busp_mux",
	     GAT_FSYS1_NS_BRDG_FSYS1_IPCLKPORT_CLK__PSOC_FSYS1__CLK_FSYS1_D0, 21,
	     CLK_IGNORE_UNUSED, 0),
	GATE(0, "fsys1_ns_brdg_fsys1_ipclkport_clk__psoc_fsys1__clk_fsys1_s0",
	     "mout_fsys1_aclk_fsys1_busp_mux",
	     GAT_FSYS1_NS_BRDG_FSYS1_IPCLKPORT_CLK__PSOC_FSYS1__CLK_FSYS1_S0, 21,
	     CLK_IGNORE_UNUSED, 0),
	GATE(PCIE_LINK0_IPCLKPORT_DBI_ACLK, "fsys1_pcie_link0_ipclkport_dbi_aclk",
	     "mout_fsys1_aclk_fsys1_busp_mux", GAT_FSYS1_PCIE_LINK0_IPCLKPORT_DBI_ACLK, 21,
	     CLK_IGNORE_UNUSED, 0),
	GATE(0, "fsys1_pcie_link0_ipclkport_i_apb_clk", "mout_fsys1_pclkl_fsys1_busp_mux",
	     GAT_FSYS1_PCIE_LINK0_IPCLKPORT_I_APB_CLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "fsys1_pcie_link0_ipclkport_i_soc_ref_clk", "fin_pll",
	     GAT_FSYS1_PCIE_LINK0_IPCLKPORT_I_SOC_REF_CLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "fsys1_pcie_link0_ipclkport_i_driver_apb_clk", "mout_fsys1_pclkl_fsys1_busp_mux",
	     GAT_FSYS1_PCIE_LINK0_IPCLKPORT_I_DRIVER_APB_CLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(PCIE_LINK0_IPCLKPORT_MSTR_ACLK, "fsys1_pcie_link0_ipclkport_mstr_aclk",
	     "mout_fsys1_aclk_fsys1_busp_mux", GAT_FSYS1_PCIE_LINK0_IPCLKPORT_MSTR_ACLK, 21,
	     CLK_IGNORE_UNUSED, 0),
	GATE(PCIE_LINK0_IPCLKPORT_SLV_ACLK, "fsys1_pcie_link0_ipclkport_slv_aclk",
	     "mout_fsys1_aclk_fsys1_busp_mux", GAT_FSYS1_PCIE_LINK0_IPCLKPORT_SLV_ACLK, 21,
	     CLK_IGNORE_UNUSED, 0),
	GATE(PCIE_LINK1_IPCLKPORT_DBI_ACLK, "fsys1_pcie_link1_ipclkport_dbi_aclk",
	     "mout_fsys1_aclk_fsys1_busp_mux", GAT_FSYS1_PCIE_LINK1_IPCLKPORT_DBI_ACLK, 21,
	     CLK_IGNORE_UNUSED, 0),
	GATE(0, "fsys1_pcie_link1_ipclkport_i_driver_apb_clk", "mout_fsys1_pclkl_fsys1_busp_mux",
	     GAT_FSYS1_PCIE_LINK1_IPCLKPORT_I_DRIVER_APB_CLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(PCIE_LINK1_IPCLKPORT_MSTR_ACLK, "fsys1_pcie_link1_ipclkport_mstr_aclk",
	     "mout_fsys1_aclk_fsys1_busp_mux", GAT_FSYS1_PCIE_LINK1_IPCLKPORT_MSTR_ACLK, 21,
	     CLK_IGNORE_UNUSED, 0),
	GATE(PCIE_LINK1_IPCLKPORT_SLV_ACLK, "fsys1_pcie_link1_ipclkport_slv_aclk",
	     "mout_fsys1_aclk_fsys1_busp_mux", GAT_FSYS1_PCIE_LINK1_IPCLKPORT_SLV_ACLK, 21,
	     CLK_IGNORE_UNUSED, 0),
	GATE(0, "fsys1_pcie_phy0_ipclkport_i_apb_clk", "mout_fsys1_pclkl_fsys1_busp_mux",
	     GAT_FSYS1_PCIE_PHY0_IPCLKPORT_I_APB_CLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(PCIE_LINK0_IPCLKPORT_AUX_ACLK, "fsys1_pcie_link0_ipclkport_auxclk", "fin_pll",
	     GAT_FSYS1_PCIE_LINK0_IPCLKPORT_AUXCLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(PCIE_LINK1_IPCLKPORT_AUX_ACLK, "fsys1_pcie_link1_ipclkport_auxclk", "fin_pll",
	     GAT_FSYS1_PCIE_LINK1_IPCLKPORT_AUXCLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "fsys1_pcie_phy0_ipclkport_i_ref_soc_pll", "dout_fsys1_clk_fsys1_phy0_oscclk",
	     GAT_FSYS1_PCIE_PHY0_IPCLKPORT_I_REF_SOC_PLL, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "fsys1_sysreg_fsys1_ipclkport_pclk", "mout_fsys1_pclkl_fsys1_busp_mux",
	     GAT_FSYS1_SYSREG_FSYS1_IPCLKPORT_PCLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "fsys1_tbu0_fsys1_ipclkport_aclk", "mout_fsys1_aclk_fsys1_busp_mux",
	     GAT_FSYS1_TBU0_FSYS1_IPCLKPORT_ACLK, 21, CLK_IGNORE_UNUSED, 0),
};

static const struct samsung_cmu_info fsys1_cmu_info __initconst = {
	.mux_clks = fsys1_mux_clks,
	.nr_mux_clks = ARRAY_SIZE(fsys1_mux_clks),
	.div_clks = fsys1_div_clks,
	.nr_div_clks = ARRAY_SIZE(fsys1_div_clks),
	.gate_clks = fsys1_gate_clks,
	.nr_gate_clks = ARRAY_SIZE(fsys1_gate_clks),
	.fixed_clks = fsys1_fixed_clks,
	.nr_fixed_clks = ARRAY_SIZE(fsys1_fixed_clks),
	.nr_clk_ids = FSYS1_NR_CLK,
	.clk_regs = fsys1_clk_regs,
	.nr_clk_regs = ARRAY_SIZE(fsys1_clk_regs),
	.clk_name = "dout_cmu_fsys1_shared0div4",
};

/* Register Offset definitions for CMU_IMEM (0x10010000) */
#define PLL_CON0_CLK_IMEM_ACLK		0x100
#define PLL_CON0_CLK_IMEM_INTMEMCLK	0x120
#define PLL_CON0_CLK_IMEM_TCUCLK	0x140
#define DIV_OSCCLK_IMEM_TMUTSCLK	0x1800
#define GAT_IMEM_IMEM_CMU_IMEM_IPCLKPORT_PCLK	0x2000
#define GAT_IMEM_MCT_IPCLKPORT_OSCCLK__ALO	0x2004
#define GAT_IMEM_OTP_CON_TOP_IPCLKPORT_I_OSCCLK	0x2008
#define GAT_IMEM_RSTNSYNC_OSCCLK_IPCLKPORT_CLK	0x200c
#define GAT_IMEM_TMU_CPU0_IPCLKPORT_I_CLK	0x2010
#define GAT_IMEM_TMU_CPU0_IPCLKPORT_I_CLK_TS	0x2014
#define GAT_IMEM_TMU_CPU2_IPCLKPORT_I_CLK	0x2018
#define GAT_IMEM_TMU_CPU2_IPCLKPORT_I_CLK_TS	0x201c
#define GAT_IMEM_TMU_GPU_IPCLKPORT_I_CLK	0x2020
#define GAT_IMEM_TMU_GPU_IPCLKPORT_I_CLK_TS	0x2024
#define GAT_IMEM_TMU_GT_IPCLKPORT_I_CLK		0x2028
#define GAT_IMEM_TMU_GT_IPCLKPORT_I_CLK_TS	0x202c
#define GAT_IMEM_TMU_TOP_IPCLKPORT_I_CLK	0x2030
#define GAT_IMEM_TMU_TOP_IPCLKPORT_I_CLK_TS	0x2034
#define GAT_IMEM_WDT0_IPCLKPORT_CLK		0x2038
#define GAT_IMEM_WDT1_IPCLKPORT_CLK		0x203c
#define GAT_IMEM_WDT2_IPCLKPORT_CLK		0x2040
#define GAT_IMEM_ADM_AXI4ST_I0_IMEM_IPCLKPORT_ACLKM	0x2044
#define GAT_IMEM_ADM_AXI4ST_I1_IMEM_IPCLKPORT_ACLKM	0x2048
#define GAT_IMEM_ADM_AXI4ST_I2_IMEM_IPCLKPORT_ACLKM	0x204c
#define GAT_IMEM_ADS_AXI4ST_I0_IMEM_IPCLKPORT_ACLKS	0x2050
#define GAT_IMEM_ADS_AXI4ST_I1_IMEM_IPCLKPORT_ACLKS	0x2054
#define GAT_IMEM_ADS_AXI4ST_I2_IMEM_IPCLKPORT_ACLKS	0x2058
#define GAT_IMEM_ASYNC_DMA0_IPCLKPORT_PCLKM	0x205c
#define GAT_IMEM_ASYNC_DMA0_IPCLKPORT_PCLKS	0x2060
#define GAT_IMEM_ASYNC_DMA1_IPCLKPORT_PCLKM	0x2064
#define GAT_IMEM_ASYNC_DMA1_IPCLKPORT_PCLKS	0x2068
#define GAT_IMEM_AXI2APB_IMEMP0_IPCLKPORT_ACLK	0x206c
#define GAT_IMEM_AXI2APB_IMEMP1_IPCLKPORT_ACLK	0x2070
#define GAT_IMEM_BUS_D_IMEM_IPCLKPORT_MAINCLK	0x2074
#define GAT_IMEM_BUS_P_IMEM_IPCLKPORT_MAINCLK	0x2078
#define GAT_IMEM_BUS_P_IMEM_IPCLKPORT_PERICLK	0x207c
#define GAT_IMEM_BUS_P_IMEM_IPCLKPORT_TCUCLK	0x2080
#define GAT_IMEM_DMA0_IPCLKPORT_ACLK		0x2084
#define GAT_IMEM_DMA1_IPCLKPORT_ACLK		0x2088
#define GAT_IMEM_GIC500_INPUT_SYNC_IPCLKPORT_CLK	0x208c
#define GAT_IMEM_GIC_IPCLKPORT_CLK		0x2090
#define GAT_IMEM_INTMEM_IPCLKPORT_ACLK		0x2094
#define GAT_IMEM_MAILBOX_SCS_CA72_IPCLKPORT_PCLK	0x2098
#define GAT_IMEM_MAILBOX_SMS_CA72_IPCLKPORT_PCLK	0x209c
#define GAT_IMEM_MCT_IPCLKPORT_PCLK		0x20a0
#define GAT_IMEM_NS_BRDG_IMEM_IPCLKPORT_CLK__PSCO_IMEM__CLK_IMEM_D	0x20a4
#define GAT_IMEM_NS_BRDG_IMEM_IPCLKPORT_CLK__PSCO_IMEM__CLK_IMEM_TCU	0x20a8
#define GAT_IMEM_NS_BRDG_IMEM_IPCLKPORT_CLK__PSOC_IMEM__CLK_IMEM_P	0x20ac
#define GAT_IMEM_OTP_CON_TOP_IPCLKPORT_PCLK	0x20b0
#define GAT_IMEM_RSTNSYNC_ACLK_IPCLKPORT_CLK	0x20b4
#define GAT_IMEM_RSTNSYNC_INTMEMCLK_IPCLKPORT_CLK	0x20b8
#define GAT_IMEM_RSTNSYNC_TCUCLK_IPCLKPORT_CLK	0x20bc
#define GAT_IMEM_SFRIF_TMU0_IMEM_IPCLKPORT_PCLK	0x20c0
#define GAT_IMEM_SFRIF_TMU1_IMEM_IPCLKPORT_PCLK	0x20c4
#define GAT_IMEM_SYSREG_IMEM_IPCLKPORT_PCLK	0x20c8
#define GAT_IMEM_TBU_IMEM_IPCLKPORT_ACLK	0x20cc
#define GAT_IMEM_TCU_IPCLKPORT_ACLK		0x20d0
#define GAT_IMEM_WDT0_IPCLKPORT_PCLK		0x20d4
#define GAT_IMEM_WDT1_IPCLKPORT_PCLK		0x20d8
#define GAT_IMEM_WDT2_IPCLKPORT_PCLK		0x20dc

static const unsigned long imem_clk_regs[] __initconst = {
	PLL_CON0_CLK_IMEM_ACLK,
	PLL_CON0_CLK_IMEM_INTMEMCLK,
	PLL_CON0_CLK_IMEM_TCUCLK,
	DIV_OSCCLK_IMEM_TMUTSCLK,
	GAT_IMEM_IMEM_CMU_IMEM_IPCLKPORT_PCLK,
	GAT_IMEM_MCT_IPCLKPORT_OSCCLK__ALO,
	GAT_IMEM_OTP_CON_TOP_IPCLKPORT_I_OSCCLK,
	GAT_IMEM_RSTNSYNC_OSCCLK_IPCLKPORT_CLK,
	GAT_IMEM_TMU_CPU0_IPCLKPORT_I_CLK,
	GAT_IMEM_TMU_CPU0_IPCLKPORT_I_CLK_TS,
	GAT_IMEM_TMU_CPU2_IPCLKPORT_I_CLK,
	GAT_IMEM_TMU_CPU2_IPCLKPORT_I_CLK_TS,
	GAT_IMEM_TMU_GPU_IPCLKPORT_I_CLK,
	GAT_IMEM_TMU_GPU_IPCLKPORT_I_CLK_TS,
	GAT_IMEM_TMU_GT_IPCLKPORT_I_CLK,
	GAT_IMEM_TMU_GT_IPCLKPORT_I_CLK_TS,
	GAT_IMEM_TMU_TOP_IPCLKPORT_I_CLK,
	GAT_IMEM_TMU_TOP_IPCLKPORT_I_CLK_TS,
	GAT_IMEM_WDT0_IPCLKPORT_CLK,
	GAT_IMEM_WDT1_IPCLKPORT_CLK,
	GAT_IMEM_WDT2_IPCLKPORT_CLK,
	GAT_IMEM_ADM_AXI4ST_I0_IMEM_IPCLKPORT_ACLKM,
	GAT_IMEM_ADM_AXI4ST_I1_IMEM_IPCLKPORT_ACLKM,
	GAT_IMEM_ADM_AXI4ST_I2_IMEM_IPCLKPORT_ACLKM,
	GAT_IMEM_ADS_AXI4ST_I0_IMEM_IPCLKPORT_ACLKS,
	GAT_IMEM_ADS_AXI4ST_I1_IMEM_IPCLKPORT_ACLKS,
	GAT_IMEM_ADS_AXI4ST_I2_IMEM_IPCLKPORT_ACLKS,
	GAT_IMEM_ASYNC_DMA0_IPCLKPORT_PCLKM,
	GAT_IMEM_ASYNC_DMA0_IPCLKPORT_PCLKS,
	GAT_IMEM_ASYNC_DMA1_IPCLKPORT_PCLKM,
	GAT_IMEM_ASYNC_DMA1_IPCLKPORT_PCLKS,
	GAT_IMEM_AXI2APB_IMEMP0_IPCLKPORT_ACLK,
	GAT_IMEM_AXI2APB_IMEMP1_IPCLKPORT_ACLK,
	GAT_IMEM_BUS_D_IMEM_IPCLKPORT_MAINCLK,
	GAT_IMEM_BUS_P_IMEM_IPCLKPORT_MAINCLK,
	GAT_IMEM_BUS_P_IMEM_IPCLKPORT_PERICLK,
	GAT_IMEM_BUS_P_IMEM_IPCLKPORT_TCUCLK,
	GAT_IMEM_DMA0_IPCLKPORT_ACLK,
	GAT_IMEM_DMA1_IPCLKPORT_ACLK,
	GAT_IMEM_GIC500_INPUT_SYNC_IPCLKPORT_CLK,
	GAT_IMEM_GIC_IPCLKPORT_CLK,
	GAT_IMEM_INTMEM_IPCLKPORT_ACLK,
	GAT_IMEM_MAILBOX_SCS_CA72_IPCLKPORT_PCLK,
	GAT_IMEM_MAILBOX_SMS_CA72_IPCLKPORT_PCLK,
	GAT_IMEM_MCT_IPCLKPORT_PCLK,
	GAT_IMEM_NS_BRDG_IMEM_IPCLKPORT_CLK__PSCO_IMEM__CLK_IMEM_D,
	GAT_IMEM_NS_BRDG_IMEM_IPCLKPORT_CLK__PSCO_IMEM__CLK_IMEM_TCU,
	GAT_IMEM_NS_BRDG_IMEM_IPCLKPORT_CLK__PSOC_IMEM__CLK_IMEM_P,
	GAT_IMEM_OTP_CON_TOP_IPCLKPORT_PCLK,
	GAT_IMEM_RSTNSYNC_ACLK_IPCLKPORT_CLK,
	GAT_IMEM_RSTNSYNC_INTMEMCLK_IPCLKPORT_CLK,
	GAT_IMEM_RSTNSYNC_TCUCLK_IPCLKPORT_CLK,
	GAT_IMEM_SFRIF_TMU0_IMEM_IPCLKPORT_PCLK,
	GAT_IMEM_SFRIF_TMU1_IMEM_IPCLKPORT_PCLK,
	GAT_IMEM_SYSREG_IMEM_IPCLKPORT_PCLK,
	GAT_IMEM_TBU_IMEM_IPCLKPORT_ACLK,
	GAT_IMEM_TCU_IPCLKPORT_ACLK,
	GAT_IMEM_WDT0_IPCLKPORT_PCLK,
	GAT_IMEM_WDT1_IPCLKPORT_PCLK,
	GAT_IMEM_WDT2_IPCLKPORT_PCLK,
};

PNAME(mout_imem_clk_imem_tcuclk_p) = { "fin_pll", "dout_cmu_imem_tcuclk" };
PNAME(mout_imem_clk_imem_aclk_p) = { "fin_pll", "dout_cmu_imem_aclk" };
PNAME(mout_imem_clk_imem_intmemclk_p) = { "fin_pll", "dout_cmu_imem_dmaclk" };

static const struct samsung_mux_clock imem_mux_clks[] __initconst = {
	MUX(0, "mout_imem_clk_imem_tcuclk", mout_imem_clk_imem_tcuclk_p,
	    PLL_CON0_CLK_IMEM_TCUCLK, 4, 1),
	MUX(0, "mout_imem_clk_imem_aclk", mout_imem_clk_imem_aclk_p, PLL_CON0_CLK_IMEM_ACLK, 4, 1),
	MUX(0, "mout_imem_clk_imem_intmemclk", mout_imem_clk_imem_intmemclk_p,
	    PLL_CON0_CLK_IMEM_INTMEMCLK, 4, 1),
};

static const struct samsung_div_clock imem_div_clks[] __initconst = {
	DIV(0, "dout_imem_oscclk_imem_tmutsclk", "fin_pll", DIV_OSCCLK_IMEM_TMUTSCLK, 0, 4),
};

static const struct samsung_gate_clock imem_gate_clks[] __initconst = {
	GATE(0, "imem_imem_cmu_imem_ipclkport_pclk", "mout_imem_clk_imem_aclk",
	     GAT_IMEM_IMEM_CMU_IMEM_IPCLKPORT_PCLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "imem_otp_con_top_ipclkport_i_oscclk", "fin_pll",
	     GAT_IMEM_OTP_CON_TOP_IPCLKPORT_I_OSCCLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "imem_tmu_top_ipclkport_i_clk", "fin_pll",
	     GAT_IMEM_TMU_TOP_IPCLKPORT_I_CLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "imem_tmu_gt_ipclkport_i_clk", "fin_pll",
	     GAT_IMEM_TMU_GT_IPCLKPORT_I_CLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "imem_tmu_cpu0_ipclkport_i_clk", "fin_pll",
	     GAT_IMEM_TMU_CPU0_IPCLKPORT_I_CLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "imem_tmu_gpu_ipclkport_i_clk", "fin_pll",
	     GAT_IMEM_TMU_GPU_IPCLKPORT_I_CLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "imem_mct_ipclkport_oscclk__alo", "fin_pll",
	     GAT_IMEM_MCT_IPCLKPORT_OSCCLK__ALO, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "imem_wdt0_ipclkport_clk", "fin_pll",
	     GAT_IMEM_WDT0_IPCLKPORT_CLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "imem_wdt1_ipclkport_clk", "fin_pll",
	     GAT_IMEM_WDT1_IPCLKPORT_CLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "imem_wdt2_ipclkport_clk", "fin_pll",
	     GAT_IMEM_WDT2_IPCLKPORT_CLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(IMEM_TMU_CPU0_IPCLKPORT_I_CLK_TS, "imem_tmu_cpu0_ipclkport_i_clk_ts",
	     "dout_imem_oscclk_imem_tmutsclk",
	     GAT_IMEM_TMU_CPU0_IPCLKPORT_I_CLK_TS, 21, CLK_IGNORE_UNUSED, 0),
	GATE(IMEM_TMU_CPU2_IPCLKPORT_I_CLK_TS, "imem_tmu_cpu2_ipclkport_i_clk_ts",
	     "dout_imem_oscclk_imem_tmutsclk",
	     GAT_IMEM_TMU_CPU2_IPCLKPORT_I_CLK_TS, 21, CLK_IGNORE_UNUSED, 0),
	GATE(IMEM_TMU_GPU_IPCLKPORT_I_CLK_TS, "imem_tmu_gpu_ipclkport_i_clk_ts",
	     "dout_imem_oscclk_imem_tmutsclk",
	     GAT_IMEM_TMU_GPU_IPCLKPORT_I_CLK_TS, 21, CLK_IGNORE_UNUSED, 0),
	GATE(IMEM_TMU_GT_IPCLKPORT_I_CLK_TS, "imem_tmu_gt_ipclkport_i_clk_ts",
	     "dout_imem_oscclk_imem_tmutsclk",
	     GAT_IMEM_TMU_GT_IPCLKPORT_I_CLK_TS, 21, CLK_IGNORE_UNUSED, 0),
	GATE(IMEM_TMU_TOP_IPCLKPORT_I_CLK_TS, "imem_tmu_top_ipclkport_i_clk_ts",
	     "dout_imem_oscclk_imem_tmutsclk",
	     GAT_IMEM_TMU_TOP_IPCLKPORT_I_CLK_TS, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "imem_adm_axi4st_i0_imem_ipclkport_aclkm", "mout_imem_clk_imem_aclk",
	     GAT_IMEM_ADM_AXI4ST_I0_IMEM_IPCLKPORT_ACLKM, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "imem_adm_axi4st_i1_imem_ipclkport_aclkm", "mout_imem_clk_imem_aclk",
	     GAT_IMEM_ADM_AXI4ST_I1_IMEM_IPCLKPORT_ACLKM, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "imem_adm_axi4st_i2_imem_ipclkport_aclkm", "mout_imem_clk_imem_aclk",
	     GAT_IMEM_ADM_AXI4ST_I2_IMEM_IPCLKPORT_ACLKM, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "imem_ads_axi4st_i0_imem_ipclkport_aclks", "mout_imem_clk_imem_aclk",
	     GAT_IMEM_ADS_AXI4ST_I0_IMEM_IPCLKPORT_ACLKS, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "imem_ads_axi4st_i1_imem_ipclkport_aclks", "mout_imem_clk_imem_aclk",
	     GAT_IMEM_ADS_AXI4ST_I1_IMEM_IPCLKPORT_ACLKS, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "imem_ads_axi4st_i2_imem_ipclkport_aclks", "mout_imem_clk_imem_aclk",
	     GAT_IMEM_ADS_AXI4ST_I2_IMEM_IPCLKPORT_ACLKS, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "imem_async_dma0_ipclkport_pclkm", "mout_imem_clk_imem_tcuclk",
	     GAT_IMEM_ASYNC_DMA0_IPCLKPORT_PCLKM, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "imem_async_dma0_ipclkport_pclks", "mout_imem_clk_imem_aclk",
	     GAT_IMEM_ASYNC_DMA0_IPCLKPORT_PCLKS, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "imem_async_dma1_ipclkport_pclkm", "mout_imem_clk_imem_tcuclk",
	     GAT_IMEM_ASYNC_DMA1_IPCLKPORT_PCLKM, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "imem_async_dma1_ipclkport_pclks", "mout_imem_clk_imem_aclk",
	     GAT_IMEM_ASYNC_DMA1_IPCLKPORT_PCLKS, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "imem_axi2apb_imemp0_ipclkport_aclk", "mout_imem_clk_imem_aclk",
	     GAT_IMEM_AXI2APB_IMEMP0_IPCLKPORT_ACLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "imem_axi2apb_imemp1_ipclkport_aclk", "mout_imem_clk_imem_aclk",
	     GAT_IMEM_AXI2APB_IMEMP1_IPCLKPORT_ACLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "imem_bus_d_imem_ipclkport_mainclk", "mout_imem_clk_imem_tcuclk",
	     GAT_IMEM_BUS_D_IMEM_IPCLKPORT_MAINCLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "imem_bus_p_imem_ipclkport_mainclk", "mout_imem_clk_imem_aclk",
	     GAT_IMEM_BUS_P_IMEM_IPCLKPORT_MAINCLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "imem_bus_p_imem_ipclkport_pericclk", "mout_imem_clk_imem_aclk",
	     GAT_IMEM_BUS_P_IMEM_IPCLKPORT_PERICLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "imem_bus_p_imem_ipclkport_tcuclk", "mout_imem_clk_imem_tcuclk",
	     GAT_IMEM_BUS_P_IMEM_IPCLKPORT_TCUCLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(IMEM_DMA0_IPCLKPORT_ACLK, "imem_dma0_ipclkport_aclk", "mout_imem_clk_imem_tcuclk",
	     GAT_IMEM_DMA0_IPCLKPORT_ACLK, 21, CLK_IGNORE_UNUSED | CLK_IS_CRITICAL, 0),
	GATE(IMEM_DMA1_IPCLKPORT_ACLK, "imem_dma1_ipclkport_aclk", "mout_imem_clk_imem_tcuclk",
	     GAT_IMEM_DMA1_IPCLKPORT_ACLK, 21, CLK_IGNORE_UNUSED | CLK_IS_CRITICAL, 0),
	GATE(0, "imem_gic500_input_sync_ipclkport_clk", "mout_imem_clk_imem_aclk",
	     GAT_IMEM_GIC500_INPUT_SYNC_IPCLKPORT_CLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "imem_gic_ipclkport_clk", "mout_imem_clk_imem_aclk",
	     GAT_IMEM_GIC_IPCLKPORT_CLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "imem_intmem_ipclkport_aclk", "mout_imem_clk_imem_intmemclk",
	     GAT_IMEM_INTMEM_IPCLKPORT_ACLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "imem_mailbox_scs_ca72_ipclkport_pclk", "mout_imem_clk_imem_aclk",
	     GAT_IMEM_MAILBOX_SCS_CA72_IPCLKPORT_PCLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "imem_mailbox_sms_ca72_ipclkport_pclk", "mout_imem_clk_imem_aclk",
	     GAT_IMEM_MAILBOX_SMS_CA72_IPCLKPORT_PCLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(IMEM_MCT_PCLK, "imem_mct_ipclkport_pclk", "mout_imem_clk_imem_aclk",
	     GAT_IMEM_MCT_IPCLKPORT_PCLK, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "imem_ns_brdg_imem_ipclkport_clk__psco_imem__clk_imem_d",
	     "mout_imem_clk_imem_tcuclk",
	     GAT_IMEM_NS_BRDG_IMEM_IPCLKPORT_CLK__PSCO_IMEM__CLK_IMEM_D, 21, CLK_IGNORE_UNUSED, 0),
	GATE(0, "imem_ns_brdg_imem_ipclkport_clk__psco_imem__clk_imem_tcu",
	     "mout_imem_clk_imem_tcuclk",
	     GAT_IMEM_NS_BRDG_IMEM_IPCLKPORT_CLK__PSCO_IMEM__CLK_IMEM_TCU,
21, 1378 + CLK_IGNORE_UNUSED, 0), 1379 + GATE(0, "imem_ns_brdg_imem_ipclkport_clk__psoc_imem__clk_imem_p", "mout_imem_clk_imem_aclk", 1380 + GAT_IMEM_NS_BRDG_IMEM_IPCLKPORT_CLK__PSOC_IMEM__CLK_IMEM_P, 21, CLK_IGNORE_UNUSED, 0), 1381 + GATE(0, "imem_otp_con_top_ipclkport_pclk", "mout_imem_clk_imem_aclk", 1382 + GAT_IMEM_OTP_CON_TOP_IPCLKPORT_PCLK, 21, CLK_IGNORE_UNUSED, 0), 1383 + GATE(0, "imem_rstnsync_aclk_ipclkport_clk", "mout_imem_clk_imem_aclk", 1384 + GAT_IMEM_RSTNSYNC_ACLK_IPCLKPORT_CLK, 21, CLK_IGNORE_UNUSED, 0), 1385 + GATE(0, "imem_rstnsync_oscclk_ipclkport_clk", "fin_pll", 1386 + GAT_IMEM_RSTNSYNC_OSCCLK_IPCLKPORT_CLK, 21, CLK_IGNORE_UNUSED, 0), 1387 + GATE(0, "imem_rstnsync_intmemclk_ipclkport_clk", "mout_imem_clk_imem_intmemclk", 1388 + GAT_IMEM_RSTNSYNC_INTMEMCLK_IPCLKPORT_CLK, 21, CLK_IGNORE_UNUSED, 0), 1389 + GATE(0, "imem_rstnsync_tcuclk_ipclkport_clk", "mout_imem_clk_imem_tcuclk", 1390 + GAT_IMEM_RSTNSYNC_TCUCLK_IPCLKPORT_CLK, 21, CLK_IGNORE_UNUSED, 0), 1391 + GATE(0, "imem_sfrif_tmu0_imem_ipclkport_pclk", "mout_imem_clk_imem_aclk", 1392 + GAT_IMEM_SFRIF_TMU0_IMEM_IPCLKPORT_PCLK, 21, CLK_IGNORE_UNUSED, 0), 1393 + GATE(0, "imem_sfrif_tmu1_imem_ipclkport_pclk", "mout_imem_clk_imem_aclk", 1394 + GAT_IMEM_SFRIF_TMU1_IMEM_IPCLKPORT_PCLK, 21, CLK_IGNORE_UNUSED, 0), 1395 + GATE(0, "imem_tmu_cpu2_ipclkport_i_clk", "fin_pll", 1396 + GAT_IMEM_TMU_CPU2_IPCLKPORT_I_CLK, 21, CLK_IGNORE_UNUSED, 0), 1397 + GATE(0, "imem_sysreg_imem_ipclkport_pclk", "mout_imem_clk_imem_aclk", 1398 + GAT_IMEM_SYSREG_IMEM_IPCLKPORT_PCLK, 21, CLK_IGNORE_UNUSED, 0), 1399 + GATE(0, "imem_tbu_imem_ipclkport_aclk", "mout_imem_clk_imem_tcuclk", 1400 + GAT_IMEM_TBU_IMEM_IPCLKPORT_ACLK, 21, CLK_IGNORE_UNUSED, 0), 1401 + GATE(0, "imem_tcu_ipclkport_aclk", "mout_imem_clk_imem_tcuclk", 1402 + GAT_IMEM_TCU_IPCLKPORT_ACLK, 21, CLK_IGNORE_UNUSED, 0), 1403 + GATE(IMEM_WDT0_IPCLKPORT_PCLK, "imem_wdt0_ipclkport_pclk", "mout_imem_clk_imem_aclk", 1404 + GAT_IMEM_WDT0_IPCLKPORT_PCLK, 21, 
CLK_IGNORE_UNUSED, 0), 1405 + GATE(IMEM_WDT1_IPCLKPORT_PCLK, "imem_wdt1_ipclkport_pclk", "mout_imem_clk_imem_aclk", 1406 + GAT_IMEM_WDT1_IPCLKPORT_PCLK, 21, CLK_IGNORE_UNUSED, 0), 1407 + GATE(IMEM_WDT2_IPCLKPORT_PCLK, "imem_wdt2_ipclkport_pclk", "mout_imem_clk_imem_aclk", 1408 + GAT_IMEM_WDT2_IPCLKPORT_PCLK, 21, CLK_IGNORE_UNUSED, 0), 1409 + }; 1410 + 1411 + static const struct samsung_cmu_info imem_cmu_info __initconst = { 1412 + .mux_clks = imem_mux_clks, 1413 + .nr_mux_clks = ARRAY_SIZE(imem_mux_clks), 1414 + .div_clks = imem_div_clks, 1415 + .nr_div_clks = ARRAY_SIZE(imem_div_clks), 1416 + .gate_clks = imem_gate_clks, 1417 + .nr_gate_clks = ARRAY_SIZE(imem_gate_clks), 1418 + .nr_clk_ids = IMEM_NR_CLK, 1419 + .clk_regs = imem_clk_regs, 1420 + .nr_clk_regs = ARRAY_SIZE(imem_clk_regs), 1421 + }; 1422 + 1423 + static void __init fsd_clk_imem_init(struct device_node *np) 1424 + { 1425 + samsung_cmu_register_one(np, &imem_cmu_info); 1426 + } 1427 + 1428 + CLK_OF_DECLARE(fsd_clk_imem, "tesla,fsd-clock-imem", fsd_clk_imem_init); 1429 + 1430 + /* Register Offset definitions for CMU_MFC (0x12810000) */ 1431 + #define PLL_LOCKTIME_PLL_MFC 0x0 1432 + #define PLL_CON0_PLL_MFC 0x100 1433 + #define MUX_MFC_BUSD 0x1000 1434 + #define MUX_MFC_BUSP 0x1008 1435 + #define DIV_MFC_BUSD_DIV4 0x1800 1436 + #define GAT_MFC_CMU_MFC_IPCLKPORT_PCLK 0x2000 1437 + #define GAT_MFC_AS_P_MFC_IPCLKPORT_PCLKM 0x2004 1438 + #define GAT_MFC_AS_P_MFC_IPCLKPORT_PCLKS 0x2008 1439 + #define GAT_MFC_AXI2APB_MFC_IPCLKPORT_ACLK 0x200c 1440 + #define GAT_MFC_MFC_IPCLKPORT_ACLK 0x2010 1441 + #define GAT_MFC_NS_BRDG_MFC_IPCLKPORT_CLK__PMFC__CLK_MFC_D 0x2018 1442 + #define GAT_MFC_NS_BRDG_MFC_IPCLKPORT_CLK__PMFC__CLK_MFC_P 0x201c 1443 + #define GAT_MFC_PPMU_MFCD0_IPCLKPORT_ACLK 0x2028 1444 + #define GAT_MFC_PPMU_MFCD0_IPCLKPORT_PCLK 0x202c 1445 + #define GAT_MFC_PPMU_MFCD1_IPCLKPORT_ACLK 0x2030 1446 + #define GAT_MFC_PPMU_MFCD1_IPCLKPORT_PCLK 0x2034 1447 + #define GAT_MFC_SYSREG_MFC_IPCLKPORT_PCLK 0x2038 
1448 + #define GAT_MFC_TBU_MFCD0_IPCLKPORT_CLK 0x203c 1449 + #define GAT_MFC_TBU_MFCD1_IPCLKPORT_CLK 0x2040 1450 + #define GAT_MFC_BUSD_DIV4_GATE 0x2044 1451 + #define GAT_MFC_BUSD_GATE 0x2048 1452 + 1453 + static const unsigned long mfc_clk_regs[] __initconst = { 1454 + PLL_LOCKTIME_PLL_MFC, 1455 + PLL_CON0_PLL_MFC, 1456 + MUX_MFC_BUSD, 1457 + MUX_MFC_BUSP, 1458 + DIV_MFC_BUSD_DIV4, 1459 + GAT_MFC_CMU_MFC_IPCLKPORT_PCLK, 1460 + GAT_MFC_AS_P_MFC_IPCLKPORT_PCLKM, 1461 + GAT_MFC_AS_P_MFC_IPCLKPORT_PCLKS, 1462 + GAT_MFC_AXI2APB_MFC_IPCLKPORT_ACLK, 1463 + GAT_MFC_MFC_IPCLKPORT_ACLK, 1464 + GAT_MFC_NS_BRDG_MFC_IPCLKPORT_CLK__PMFC__CLK_MFC_D, 1465 + GAT_MFC_NS_BRDG_MFC_IPCLKPORT_CLK__PMFC__CLK_MFC_P, 1466 + GAT_MFC_PPMU_MFCD0_IPCLKPORT_ACLK, 1467 + GAT_MFC_PPMU_MFCD0_IPCLKPORT_PCLK, 1468 + GAT_MFC_PPMU_MFCD1_IPCLKPORT_ACLK, 1469 + GAT_MFC_PPMU_MFCD1_IPCLKPORT_PCLK, 1470 + GAT_MFC_SYSREG_MFC_IPCLKPORT_PCLK, 1471 + GAT_MFC_TBU_MFCD0_IPCLKPORT_CLK, 1472 + GAT_MFC_TBU_MFCD1_IPCLKPORT_CLK, 1473 + GAT_MFC_BUSD_DIV4_GATE, 1474 + GAT_MFC_BUSD_GATE, 1475 + }; 1476 + 1477 + static const struct samsung_pll_rate_table pll_mfc_rate_table[] __initconst = { 1478 + PLL_35XX_RATE(24 * MHZ, 666000000U, 111, 4, 0), 1479 + }; 1480 + 1481 + static const struct samsung_pll_clock mfc_pll_clks[] __initconst = { 1482 + PLL(pll_142xx, 0, "fout_pll_mfc", "fin_pll", 1483 + PLL_LOCKTIME_PLL_MFC, PLL_CON0_PLL_MFC, pll_mfc_rate_table), 1484 + }; 1485 + 1486 + PNAME(mout_mfc_pll_p) = { "fin_pll", "fout_pll_mfc" }; 1487 + PNAME(mout_mfc_busp_p) = { "fin_pll", "dout_mfc_busd_div4" }; 1488 + PNAME(mout_mfc_busd_p) = { "fin_pll", "mfc_busd_gate" }; 1489 + 1490 + static const struct samsung_mux_clock mfc_mux_clks[] __initconst = { 1491 + MUX(0, "mout_mfc_pll", mout_mfc_pll_p, PLL_CON0_PLL_MFC, 4, 1), 1492 + MUX(0, "mout_mfc_busp", mout_mfc_busp_p, MUX_MFC_BUSP, 0, 1), 1493 + MUX(0, "mout_mfc_busd", mout_mfc_busd_p, MUX_MFC_BUSD, 0, 1), 1494 + }; 1495 + 1496 + static const struct samsung_div_clock 
mfc_div_clks[] __initconst = { 1497 + DIV(0, "dout_mfc_busd_div4", "mfc_busd_div4_gate", DIV_MFC_BUSD_DIV4, 0, 4), 1498 + }; 1499 + 1500 + static const struct samsung_gate_clock mfc_gate_clks[] __initconst = { 1501 + GATE(0, "mfc_cmu_mfc_ipclkport_pclk", "mout_mfc_busp", 1502 + GAT_MFC_CMU_MFC_IPCLKPORT_PCLK, 21, CLK_IGNORE_UNUSED, 0), 1503 + GATE(0, "mfc_as_p_mfc_ipclkport_pclkm", "mout_mfc_busd", 1504 + GAT_MFC_AS_P_MFC_IPCLKPORT_PCLKM, 21, CLK_IGNORE_UNUSED, 0), 1505 + GATE(0, "mfc_as_p_mfc_ipclkport_pclks", "mout_mfc_busp", 1506 + GAT_MFC_AS_P_MFC_IPCLKPORT_PCLKS, 21, CLK_IGNORE_UNUSED, 0), 1507 + GATE(0, "mfc_axi2apb_mfc_ipclkport_aclk", "mout_mfc_busp", 1508 + GAT_MFC_AXI2APB_MFC_IPCLKPORT_ACLK, 21, CLK_IGNORE_UNUSED, 0), 1509 + GATE(MFC_MFC_IPCLKPORT_ACLK, "mfc_mfc_ipclkport_aclk", "mout_mfc_busd", 1510 + GAT_MFC_MFC_IPCLKPORT_ACLK, 21, CLK_IGNORE_UNUSED, 0), 1511 + GATE(0, "mfc_ns_brdg_mfc_ipclkport_clk__pmfc__clk_mfc_d", "mout_mfc_busd", 1512 + GAT_MFC_NS_BRDG_MFC_IPCLKPORT_CLK__PMFC__CLK_MFC_D, 21, CLK_IGNORE_UNUSED, 0), 1513 + GATE(0, "mfc_ns_brdg_mfc_ipclkport_clk__pmfc__clk_mfc_p", "mout_mfc_busp", 1514 + GAT_MFC_NS_BRDG_MFC_IPCLKPORT_CLK__PMFC__CLK_MFC_P, 21, CLK_IGNORE_UNUSED, 0), 1515 + GATE(0, "mfc_ppmu_mfcd0_ipclkport_aclk", "mout_mfc_busd", 1516 + GAT_MFC_PPMU_MFCD0_IPCLKPORT_ACLK, 21, CLK_IGNORE_UNUSED, 0), 1517 + GATE(0, "mfc_ppmu_mfcd0_ipclkport_pclk", "mout_mfc_busp", 1518 + GAT_MFC_PPMU_MFCD0_IPCLKPORT_PCLK, 21, CLK_IGNORE_UNUSED, 0), 1519 + GATE(0, "mfc_ppmu_mfcd1_ipclkport_aclk", "mout_mfc_busd", 1520 + GAT_MFC_PPMU_MFCD1_IPCLKPORT_ACLK, 21, CLK_IGNORE_UNUSED, 0), 1521 + GATE(0, "mfc_ppmu_mfcd1_ipclkport_pclk", "mout_mfc_busp", 1522 + GAT_MFC_PPMU_MFCD1_IPCLKPORT_PCLK, 21, CLK_IGNORE_UNUSED, 0), 1523 + GATE(0, "mfc_sysreg_mfc_ipclkport_pclk", "mout_mfc_busp", 1524 + GAT_MFC_SYSREG_MFC_IPCLKPORT_PCLK, 21, CLK_IGNORE_UNUSED, 0), 1525 + GATE(0, "mfc_tbu_mfcd0_ipclkport_clk", "mout_mfc_busd", 1526 + GAT_MFC_TBU_MFCD0_IPCLKPORT_CLK, 21, 
CLK_IGNORE_UNUSED, 0), 1527 + GATE(0, "mfc_tbu_mfcd1_ipclkport_clk", "mout_mfc_busd", 1528 + GAT_MFC_TBU_MFCD1_IPCLKPORT_CLK, 21, CLK_IGNORE_UNUSED, 0), 1529 + GATE(0, "mfc_busd_div4_gate", "mout_mfc_pll", 1530 + GAT_MFC_BUSD_DIV4_GATE, 21, CLK_IGNORE_UNUSED, 0), 1531 + GATE(0, "mfc_busd_gate", "mout_mfc_pll", GAT_MFC_BUSD_GATE, 21, CLK_IS_CRITICAL, 0), 1532 + }; 1533 + 1534 + static const struct samsung_cmu_info mfc_cmu_info __initconst = { 1535 + .pll_clks = mfc_pll_clks, 1536 + .nr_pll_clks = ARRAY_SIZE(mfc_pll_clks), 1537 + .mux_clks = mfc_mux_clks, 1538 + .nr_mux_clks = ARRAY_SIZE(mfc_mux_clks), 1539 + .div_clks = mfc_div_clks, 1540 + .nr_div_clks = ARRAY_SIZE(mfc_div_clks), 1541 + .gate_clks = mfc_gate_clks, 1542 + .nr_gate_clks = ARRAY_SIZE(mfc_gate_clks), 1543 + .nr_clk_ids = MFC_NR_CLK, 1544 + .clk_regs = mfc_clk_regs, 1545 + .nr_clk_regs = ARRAY_SIZE(mfc_clk_regs), 1546 + }; 1547 + 1548 + /* Register Offset definitions for CMU_CAM_CSI (0x12610000) */ 1549 + #define PLL_LOCKTIME_PLL_CAM_CSI 0x0 1550 + #define PLL_CON0_PLL_CAM_CSI 0x100 1551 + #define DIV_CAM_CSI0_ACLK 0x1800 1552 + #define DIV_CAM_CSI1_ACLK 0x1804 1553 + #define DIV_CAM_CSI2_ACLK 0x1808 1554 + #define DIV_CAM_CSI_BUSD 0x180c 1555 + #define DIV_CAM_CSI_BUSP 0x1810 1556 + #define GAT_CAM_CSI_CMU_CAM_CSI_IPCLKPORT_PCLK 0x2000 1557 + #define GAT_CAM_AXI2APB_CAM_CSI_IPCLKPORT_ACLK 0x2004 1558 + #define GAT_CAM_CSI_BUS_D_CAM_CSI_IPCLKPORT_CLK__SYSTEM__CLK_CSI0 0x2008 1559 + #define GAT_CAM_CSI_BUS_D_CAM_CSI_IPCLKPORT_CLK__SYSTEM__CLK_CSI1 0x200c 1560 + #define GAT_CAM_CSI_BUS_D_CAM_CSI_IPCLKPORT_CLK__SYSTEM__CLK_CSI2 0x2010 1561 + #define GAT_CAM_CSI_BUS_D_CAM_CSI_IPCLKPORT_CLK__SYSTEM__CLK_SOC_NOC 0x2014 1562 + #define GAT_CAM_CSI_BUS_D_CAM_CSI_IPCLKPORT_CLK__SYSTEM__NOC 0x2018 1563 + #define GAT_CAM_CSI0_0_IPCLKPORT_I_ACLK 0x201c 1564 + #define GAT_CAM_CSI0_0_IPCLKPORT_I_PCLK 0x2020 1565 + #define GAT_CAM_CSI0_1_IPCLKPORT_I_ACLK 0x2024 1566 + #define GAT_CAM_CSI0_1_IPCLKPORT_I_PCLK 0x2028 1567 
+ #define GAT_CAM_CSI0_2_IPCLKPORT_I_ACLK 0x202c 1568 + #define GAT_CAM_CSI0_2_IPCLKPORT_I_PCLK 0x2030 1569 + #define GAT_CAM_CSI0_3_IPCLKPORT_I_ACLK 0x2034 1570 + #define GAT_CAM_CSI0_3_IPCLKPORT_I_PCLK 0x2038 1571 + #define GAT_CAM_CSI1_0_IPCLKPORT_I_ACLK 0x203c 1572 + #define GAT_CAM_CSI1_0_IPCLKPORT_I_PCLK 0x2040 1573 + #define GAT_CAM_CSI1_1_IPCLKPORT_I_ACLK 0x2044 1574 + #define GAT_CAM_CSI1_1_IPCLKPORT_I_PCLK 0x2048 1575 + #define GAT_CAM_CSI1_2_IPCLKPORT_I_ACLK 0x204c 1576 + #define GAT_CAM_CSI1_2_IPCLKPORT_I_PCLK 0x2050 1577 + #define GAT_CAM_CSI1_3_IPCLKPORT_I_ACLK 0x2054 1578 + #define GAT_CAM_CSI1_3_IPCLKPORT_I_PCLK 0x2058 1579 + #define GAT_CAM_CSI2_0_IPCLKPORT_I_ACLK 0x205c 1580 + #define GAT_CAM_CSI2_0_IPCLKPORT_I_PCLK 0x2060 1581 + #define GAT_CAM_CSI2_1_IPCLKPORT_I_ACLK 0x2064 1582 + #define GAT_CAM_CSI2_1_IPCLKPORT_I_PCLK 0x2068 1583 + #define GAT_CAM_CSI2_2_IPCLKPORT_I_ACLK 0x206c 1584 + #define GAT_CAM_CSI2_2_IPCLKPORT_I_PCLK 0x2070 1585 + #define GAT_CAM_CSI2_3_IPCLKPORT_I_ACLK 0x2074 1586 + #define GAT_CAM_CSI2_3_IPCLKPORT_I_PCLK 0x2078 1587 + #define GAT_CAM_NS_BRDG_CAM_CSI_IPCLKPORT_CLK__PSOC_CAM_CSI__CLK_CAM_CSI_D 0x207c 1588 + #define GAT_CAM_NS_BRDG_CAM_CSI_IPCLKPORT_CLK__PSOC_CAM_CSI__CLK_CAM_CSI_P 0x2080 1589 + #define GAT_CAM_SYSREG_CAM_CSI_IPCLKPORT_PCLK 0x2084 1590 + #define GAT_CAM_TBU_CAM_CSI_IPCLKPORT_ACLK 0x2088 1591 + 1592 + static const unsigned long cam_csi_clk_regs[] __initconst = { 1593 + PLL_LOCKTIME_PLL_CAM_CSI, 1594 + PLL_CON0_PLL_CAM_CSI, 1595 + DIV_CAM_CSI0_ACLK, 1596 + DIV_CAM_CSI1_ACLK, 1597 + DIV_CAM_CSI2_ACLK, 1598 + DIV_CAM_CSI_BUSD, 1599 + DIV_CAM_CSI_BUSP, 1600 + GAT_CAM_CSI_CMU_CAM_CSI_IPCLKPORT_PCLK, 1601 + GAT_CAM_AXI2APB_CAM_CSI_IPCLKPORT_ACLK, 1602 + GAT_CAM_CSI_BUS_D_CAM_CSI_IPCLKPORT_CLK__SYSTEM__CLK_CSI0, 1603 + GAT_CAM_CSI_BUS_D_CAM_CSI_IPCLKPORT_CLK__SYSTEM__CLK_CSI1, 1604 + GAT_CAM_CSI_BUS_D_CAM_CSI_IPCLKPORT_CLK__SYSTEM__CLK_CSI2, 1605 + GAT_CAM_CSI_BUS_D_CAM_CSI_IPCLKPORT_CLK__SYSTEM__CLK_SOC_NOC, 
1606 + GAT_CAM_CSI_BUS_D_CAM_CSI_IPCLKPORT_CLK__SYSTEM__NOC, 1607 + GAT_CAM_CSI0_0_IPCLKPORT_I_ACLK, 1608 + GAT_CAM_CSI0_0_IPCLKPORT_I_PCLK, 1609 + GAT_CAM_CSI0_1_IPCLKPORT_I_ACLK, 1610 + GAT_CAM_CSI0_1_IPCLKPORT_I_PCLK, 1611 + GAT_CAM_CSI0_2_IPCLKPORT_I_ACLK, 1612 + GAT_CAM_CSI0_2_IPCLKPORT_I_PCLK, 1613 + GAT_CAM_CSI0_3_IPCLKPORT_I_ACLK, 1614 + GAT_CAM_CSI0_3_IPCLKPORT_I_PCLK, 1615 + GAT_CAM_CSI1_0_IPCLKPORT_I_ACLK, 1616 + GAT_CAM_CSI1_0_IPCLKPORT_I_PCLK, 1617 + GAT_CAM_CSI1_1_IPCLKPORT_I_ACLK, 1618 + GAT_CAM_CSI1_1_IPCLKPORT_I_PCLK, 1619 + GAT_CAM_CSI1_2_IPCLKPORT_I_ACLK, 1620 + GAT_CAM_CSI1_2_IPCLKPORT_I_PCLK, 1621 + GAT_CAM_CSI1_3_IPCLKPORT_I_ACLK, 1622 + GAT_CAM_CSI1_3_IPCLKPORT_I_PCLK, 1623 + GAT_CAM_CSI2_0_IPCLKPORT_I_ACLK, 1624 + GAT_CAM_CSI2_0_IPCLKPORT_I_PCLK, 1625 + GAT_CAM_CSI2_1_IPCLKPORT_I_ACLK, 1626 + GAT_CAM_CSI2_1_IPCLKPORT_I_PCLK, 1627 + GAT_CAM_CSI2_2_IPCLKPORT_I_ACLK, 1628 + GAT_CAM_CSI2_2_IPCLKPORT_I_PCLK, 1629 + GAT_CAM_CSI2_3_IPCLKPORT_I_ACLK, 1630 + GAT_CAM_CSI2_3_IPCLKPORT_I_PCLK, 1631 + GAT_CAM_NS_BRDG_CAM_CSI_IPCLKPORT_CLK__PSOC_CAM_CSI__CLK_CAM_CSI_D, 1632 + GAT_CAM_NS_BRDG_CAM_CSI_IPCLKPORT_CLK__PSOC_CAM_CSI__CLK_CAM_CSI_P, 1633 + GAT_CAM_SYSREG_CAM_CSI_IPCLKPORT_PCLK, 1634 + GAT_CAM_TBU_CAM_CSI_IPCLKPORT_ACLK, 1635 + }; 1636 + 1637 + static const struct samsung_pll_rate_table pll_cam_csi_rate_table[] __initconst = { 1638 + PLL_35XX_RATE(24 * MHZ, 1066000000U, 533, 12, 0), 1639 + }; 1640 + 1641 + static const struct samsung_pll_clock cam_csi_pll_clks[] __initconst = { 1642 + PLL(pll_142xx, 0, "fout_pll_cam_csi", "fin_pll", 1643 + PLL_LOCKTIME_PLL_CAM_CSI, PLL_CON0_PLL_CAM_CSI, pll_cam_csi_rate_table), 1644 + }; 1645 + 1646 + PNAME(mout_cam_csi_pll_p) = { "fin_pll", "fout_pll_cam_csi" }; 1647 + 1648 + static const struct samsung_mux_clock cam_csi_mux_clks[] __initconst = { 1649 + MUX(0, "mout_cam_csi_pll", mout_cam_csi_pll_p, PLL_CON0_PLL_CAM_CSI, 4, 1), 1650 + }; 1651 + 1652 + static const struct samsung_div_clock cam_csi_div_clks[] 
__initconst = { 1653 + DIV(0, "dout_cam_csi0_aclk", "mout_cam_csi_pll", DIV_CAM_CSI0_ACLK, 0, 4), 1654 + DIV(0, "dout_cam_csi1_aclk", "mout_cam_csi_pll", DIV_CAM_CSI1_ACLK, 0, 4), 1655 + DIV(0, "dout_cam_csi2_aclk", "mout_cam_csi_pll", DIV_CAM_CSI2_ACLK, 0, 4), 1656 + DIV(0, "dout_cam_csi_busd", "mout_cam_csi_pll", DIV_CAM_CSI_BUSD, 0, 4), 1657 + DIV(0, "dout_cam_csi_busp", "mout_cam_csi_pll", DIV_CAM_CSI_BUSP, 0, 4), 1658 + }; 1659 + 1660 + static const struct samsung_gate_clock cam_csi_gate_clks[] __initconst = { 1661 + GATE(0, "cam_csi_cmu_cam_csi_ipclkport_pclk", "dout_cam_csi_busp", 1662 + GAT_CAM_CSI_CMU_CAM_CSI_IPCLKPORT_PCLK, 21, CLK_IGNORE_UNUSED, 0), 1663 + GATE(0, "cam_axi2apb_cam_csi_ipclkport_aclk", "dout_cam_csi_busp", 1664 + GAT_CAM_AXI2APB_CAM_CSI_IPCLKPORT_ACLK, 21, CLK_IGNORE_UNUSED, 0), 1665 + GATE(0, "cam_csi_bus_d_cam_csi_ipclkport_clk__system__clk_csi0", "dout_cam_csi0_aclk", 1666 + GAT_CAM_CSI_BUS_D_CAM_CSI_IPCLKPORT_CLK__SYSTEM__CLK_CSI0, 21, CLK_IGNORE_UNUSED, 0), 1667 + GATE(0, "cam_csi_bus_d_cam_csi_ipclkport_clk__system__clk_csi1", "dout_cam_csi1_aclk", 1668 + GAT_CAM_CSI_BUS_D_CAM_CSI_IPCLKPORT_CLK__SYSTEM__CLK_CSI1, 21, CLK_IGNORE_UNUSED, 0), 1669 + GATE(0, "cam_csi_bus_d_cam_csi_ipclkport_clk__system__clk_csi2", "dout_cam_csi2_aclk", 1670 + GAT_CAM_CSI_BUS_D_CAM_CSI_IPCLKPORT_CLK__SYSTEM__CLK_CSI2, 21, CLK_IGNORE_UNUSED, 0), 1671 + GATE(0, "cam_csi_bus_d_cam_csi_ipclkport_clk__system__clk_soc_noc", "dout_cam_csi_busd", 1672 + GAT_CAM_CSI_BUS_D_CAM_CSI_IPCLKPORT_CLK__SYSTEM__CLK_SOC_NOC, 21, 1673 + CLK_IGNORE_UNUSED, 0), 1674 + GATE(0, "cam_csi_bus_d_cam_csi_ipclkport_clk__system__noc", "dout_cam_csi_busd", 1675 + GAT_CAM_CSI_BUS_D_CAM_CSI_IPCLKPORT_CLK__SYSTEM__NOC, 21, CLK_IGNORE_UNUSED, 0), 1676 + GATE(CAM_CSI0_0_IPCLKPORT_I_ACLK, "cam_csi0_0_ipclkport_i_aclk", "dout_cam_csi0_aclk", 1677 + GAT_CAM_CSI0_0_IPCLKPORT_I_ACLK, 21, CLK_IGNORE_UNUSED, 0), 1678 + GATE(0, "cam_csi0_0_ipclkport_i_pclk", "dout_cam_csi_busp", 1679 + 
GAT_CAM_CSI0_0_IPCLKPORT_I_PCLK, 21, CLK_IGNORE_UNUSED, 0), 1680 + GATE(CAM_CSI0_1_IPCLKPORT_I_ACLK, "cam_csi0_1_ipclkport_i_aclk", "dout_cam_csi0_aclk", 1681 + GAT_CAM_CSI0_1_IPCLKPORT_I_ACLK, 21, CLK_IGNORE_UNUSED, 0), 1682 + GATE(0, "cam_csi0_1_ipclkport_i_pclk", "dout_cam_csi_busp", 1683 + GAT_CAM_CSI0_1_IPCLKPORT_I_PCLK, 21, CLK_IGNORE_UNUSED, 0), 1684 + GATE(CAM_CSI0_2_IPCLKPORT_I_ACLK, "cam_csi0_2_ipclkport_i_aclk", "dout_cam_csi0_aclk", 1685 + GAT_CAM_CSI0_2_IPCLKPORT_I_ACLK, 21, CLK_IGNORE_UNUSED, 0), 1686 + GATE(0, "cam_csi0_2_ipclkport_i_pclk", "dout_cam_csi_busp", 1687 + GAT_CAM_CSI0_2_IPCLKPORT_I_PCLK, 21, CLK_IGNORE_UNUSED, 0), 1688 + GATE(CAM_CSI0_3_IPCLKPORT_I_ACLK, "cam_csi0_3_ipclkport_i_aclk", "dout_cam_csi0_aclk", 1689 + GAT_CAM_CSI0_3_IPCLKPORT_I_ACLK, 21, CLK_IGNORE_UNUSED, 0), 1690 + GATE(0, "cam_csi0_3_ipclkport_i_pclk", "dout_cam_csi_busp", 1691 + GAT_CAM_CSI0_3_IPCLKPORT_I_PCLK, 21, CLK_IGNORE_UNUSED, 0), 1692 + GATE(CAM_CSI1_0_IPCLKPORT_I_ACLK, "cam_csi1_0_ipclkport_i_aclk", "dout_cam_csi1_aclk", 1693 + GAT_CAM_CSI1_0_IPCLKPORT_I_ACLK, 21, CLK_IGNORE_UNUSED, 0), 1694 + GATE(0, "cam_csi1_0_ipclkport_i_pclk", "dout_cam_csi_busp", 1695 + GAT_CAM_CSI1_0_IPCLKPORT_I_PCLK, 21, CLK_IGNORE_UNUSED, 0), 1696 + GATE(CAM_CSI1_1_IPCLKPORT_I_ACLK, "cam_csi1_1_ipclkport_i_aclk", "dout_cam_csi1_aclk", 1697 + GAT_CAM_CSI1_1_IPCLKPORT_I_ACLK, 21, CLK_IGNORE_UNUSED, 0), 1698 + GATE(0, "cam_csi1_1_ipclkport_i_pclk", "dout_cam_csi_busp", 1699 + GAT_CAM_CSI1_1_IPCLKPORT_I_PCLK, 21, CLK_IGNORE_UNUSED, 0), 1700 + GATE(CAM_CSI1_2_IPCLKPORT_I_ACLK, "cam_csi1_2_ipclkport_i_aclk", "dout_cam_csi1_aclk", 1701 + GAT_CAM_CSI1_2_IPCLKPORT_I_ACLK, 21, CLK_IGNORE_UNUSED, 0), 1702 + GATE(0, "cam_csi1_2_ipclkport_i_pclk", "dout_cam_csi_busp", 1703 + GAT_CAM_CSI1_2_IPCLKPORT_I_PCLK, 21, CLK_IGNORE_UNUSED, 0), 1704 + GATE(CAM_CSI1_3_IPCLKPORT_I_ACLK, "cam_csi1_3_ipclkport_i_aclk", "dout_cam_csi1_aclk", 1705 + GAT_CAM_CSI1_3_IPCLKPORT_I_ACLK, 21, CLK_IGNORE_UNUSED, 0), 1706 + 
GATE(0, "cam_csi1_3_ipclkport_i_pclk", "dout_cam_csi_busp", 1707 + GAT_CAM_CSI1_3_IPCLKPORT_I_PCLK, 21, CLK_IGNORE_UNUSED, 0), 1708 + GATE(CAM_CSI2_0_IPCLKPORT_I_ACLK, "cam_csi2_0_ipclkport_i_aclk", "dout_cam_csi2_aclk", 1709 + GAT_CAM_CSI2_0_IPCLKPORT_I_ACLK, 21, CLK_IGNORE_UNUSED, 0), 1710 + GATE(0, "cam_csi2_0_ipclkport_i_pclk", "dout_cam_csi_busp", 1711 + GAT_CAM_CSI2_0_IPCLKPORT_I_PCLK, 21, CLK_IGNORE_UNUSED, 0), 1712 + GATE(CAM_CSI2_1_IPCLKPORT_I_ACLK, "cam_csi2_1_ipclkport_i_aclk", "dout_cam_csi2_aclk", 1713 + GAT_CAM_CSI2_1_IPCLKPORT_I_ACLK, 21, CLK_IGNORE_UNUSED, 0), 1714 + GATE(0, "cam_csi2_1_ipclkport_i_pclk", "dout_cam_csi_busp", 1715 + GAT_CAM_CSI2_1_IPCLKPORT_I_PCLK, 21, CLK_IGNORE_UNUSED, 0), 1716 + GATE(CAM_CSI2_2_IPCLKPORT_I_ACLK, "cam_csi2_2_ipclkport_i_aclk", "dout_cam_csi2_aclk", 1717 + GAT_CAM_CSI2_2_IPCLKPORT_I_ACLK, 21, CLK_IGNORE_UNUSED, 0), 1718 + GATE(0, "cam_csi2_2_ipclkport_i_pclk", "dout_cam_csi_busp", 1719 + GAT_CAM_CSI2_2_IPCLKPORT_I_PCLK, 21, CLK_IGNORE_UNUSED, 0), 1720 + GATE(CAM_CSI2_3_IPCLKPORT_I_ACLK, "cam_csi2_3_ipclkport_i_aclk", "dout_cam_csi2_aclk", 1721 + GAT_CAM_CSI2_3_IPCLKPORT_I_ACLK, 21, CLK_IGNORE_UNUSED, 0), 1722 + GATE(0, "cam_csi2_3_ipclkport_i_pclk", "dout_cam_csi_busp", 1723 + GAT_CAM_CSI2_3_IPCLKPORT_I_PCLK, 21, CLK_IGNORE_UNUSED, 0), 1724 + GATE(0, "cam_ns_brdg_cam_csi_ipclkport_clk__psoc_cam_csi__clk_cam_csi_d", 1725 + "dout_cam_csi_busd", 1726 + GAT_CAM_NS_BRDG_CAM_CSI_IPCLKPORT_CLK__PSOC_CAM_CSI__CLK_CAM_CSI_D, 21, 1727 + CLK_IGNORE_UNUSED, 0), 1728 + GATE(0, "cam_ns_brdg_cam_csi_ipclkport_clk__psoc_cam_csi__clk_cam_csi_p", 1729 + "dout_cam_csi_busp", 1730 + GAT_CAM_NS_BRDG_CAM_CSI_IPCLKPORT_CLK__PSOC_CAM_CSI__CLK_CAM_CSI_P, 21, 1731 + CLK_IGNORE_UNUSED, 0), 1732 + GATE(0, "cam_sysreg_cam_csi_ipclkport_pclk", "dout_cam_csi_busp", 1733 + GAT_CAM_SYSREG_CAM_CSI_IPCLKPORT_PCLK, 21, CLK_IGNORE_UNUSED, 0), 1734 + GATE(0, "cam_tbu_cam_csi_ipclkport_aclk", "dout_cam_csi_busd", 1735 + 
GAT_CAM_TBU_CAM_CSI_IPCLKPORT_ACLK, 21, CLK_IGNORE_UNUSED, 0), 1736 + }; 1737 + 1738 + static const struct samsung_cmu_info cam_csi_cmu_info __initconst = { 1739 + .pll_clks = cam_csi_pll_clks, 1740 + .nr_pll_clks = ARRAY_SIZE(cam_csi_pll_clks), 1741 + .mux_clks = cam_csi_mux_clks, 1742 + .nr_mux_clks = ARRAY_SIZE(cam_csi_mux_clks), 1743 + .div_clks = cam_csi_div_clks, 1744 + .nr_div_clks = ARRAY_SIZE(cam_csi_div_clks), 1745 + .gate_clks = cam_csi_gate_clks, 1746 + .nr_gate_clks = ARRAY_SIZE(cam_csi_gate_clks), 1747 + .nr_clk_ids = CAM_CSI_NR_CLK, 1748 + .clk_regs = cam_csi_clk_regs, 1749 + .nr_clk_regs = ARRAY_SIZE(cam_csi_clk_regs), 1750 + }; 1751 + 1752 + /** 1753 + * fsd_cmu_probe - Probe function for FSD platform clocks 1754 + * @pdev: Pointer to platform device 1755 + * 1756 + * Configure clock hierarchy for clock domains of FSD platform 1757 + */ 1758 + static int __init fsd_cmu_probe(struct platform_device *pdev) 1759 + { 1760 + const struct samsung_cmu_info *info; 1761 + struct device *dev = &pdev->dev; 1762 + 1763 + info = of_device_get_match_data(dev); 1764 + exynos_arm64_register_cmu(dev, dev->of_node, info); 1765 + 1766 + return 0; 1767 + } 1768 + 1769 + /* CMUs which belong to Power Domains and need runtime PM to be implemented */ 1770 + static const struct of_device_id fsd_cmu_of_match[] = { 1771 + { 1772 + .compatible = "tesla,fsd-clock-peric", 1773 + .data = &peric_cmu_info, 1774 + }, { 1775 + .compatible = "tesla,fsd-clock-fsys0", 1776 + .data = &fsys0_cmu_info, 1777 + }, { 1778 + .compatible = "tesla,fsd-clock-fsys1", 1779 + .data = &fsys1_cmu_info, 1780 + }, { 1781 + .compatible = "tesla,fsd-clock-mfc", 1782 + .data = &mfc_cmu_info, 1783 + }, { 1784 + .compatible = "tesla,fsd-clock-cam_csi", 1785 + .data = &cam_csi_cmu_info, 1786 + }, { 1787 + }, 1788 + }; 1789 + 1790 + static struct platform_driver fsd_cmu_driver __refdata = { 1791 + .driver = { 1792 + .name = "fsd-cmu", 1793 + .of_match_table = fsd_cmu_of_match, 1794 + .suppress_bind_attrs = 
true, 1795 + }, 1796 + .probe = fsd_cmu_probe, 1797 + }; 1798 + 1799 + static int __init fsd_cmu_init(void) 1800 + { 1801 + return platform_driver_register(&fsd_cmu_driver); 1802 + } 1803 + core_initcall(fsd_cmu_init);
drivers/clk/samsung/clk-pll.c (+1)

···
	case pll_1450x:
	case pll_1451x:
	case pll_1452x:
+	case pll_142xx:
		pll->enable_offs = PLL35XX_ENABLE_SHIFT;
		pll->lock_offs = PLL35XX_LOCK_STAT_SHIFT;
		if (!pll->rate_table)
drivers/clk/samsung/clk-pll.h (+1)

···
	pll_1460x,
	pll_0822x,
	pll_0831x,
+	pll_142xx,
};

#define PLL_RATE(_fin, _m, _p, _s, _k, _ks) \
drivers/cpuidle/cpuidle-qcom-spm.c (+24 -4)

···
	if (ret <= 0)
		return ret ? : -ENODEV;

-	ret = qcom_scm_set_warm_boot_addr(cpu_resume_arm, cpumask_of(cpu));
-	if (ret)
-		return ret;
-
	return cpuidle_register(&data->cpuidle_driver, NULL);
}
···
	if (!qcom_scm_is_available())
		return -EPROBE_DEFER;
+
+	ret = qcom_scm_set_warm_boot_addr(cpu_resume_arm);
+	if (ret)
+		return dev_err_probe(&pdev->dev, ret, "set warm boot addr failed");

	for_each_possible_cpu(cpu) {
		ret = spm_cpuidle_register(&pdev->dev, cpu);
···
	},
};

+static bool __init qcom_spm_find_any_cpu(void)
+{
+	struct device_node *cpu_node, *saw_node;
+
+	for_each_of_cpu_node(cpu_node) {
+		saw_node = of_parse_phandle(cpu_node, "qcom,saw", 0);
+		if (of_device_is_available(saw_node)) {
+			of_node_put(saw_node);
+			of_node_put(cpu_node);
+			return true;
+		}
+		of_node_put(saw_node);
+	}
+	return false;
+}
+
static int __init qcom_spm_cpuidle_init(void)
{
	struct platform_device *pdev;
···
	ret = platform_driver_register(&spm_cpuidle_driver);
	if (ret)
		return ret;
+
+	/* Make sure there is actually any CPU managed by the SPM */
+	if (!qcom_spm_find_any_cpu())
+		return 0;

	pdev = platform_device_register_simple("qcom-spm-cpuidle",
					       -1, NULL, 0);
drivers/firmware/arm_scmi/Kconfig (+56)

···
	  If you want the ARM SCMI PROTOCOL stack to include support for a
	  transport based on mailboxes, answer Y.

+config ARM_SCMI_TRANSPORT_OPTEE
+	bool "SCMI transport based on OP-TEE service"
+	depends on OPTEE=y || OPTEE=ARM_SCMI_PROTOCOL
+	select ARM_SCMI_HAVE_TRANSPORT
+	select ARM_SCMI_HAVE_SHMEM
+	default y
+	help
+	  This enables the OP-TEE service based transport for SCMI.
+
+	  If you want the ARM SCMI PROTOCOL stack to include support for a
+	  transport based on OP-TEE SCMI service, answer Y.
+
 config ARM_SCMI_TRANSPORT_SMC
 	bool "SCMI transport based on SMC"
 	depends on HAVE_ARM_SMCCC_DISCOVERY
···
	  If you want the ARM SCMI PROTOCOL stack to include support for a
	  transport based on SMC, answer Y.

+config ARM_SCMI_TRANSPORT_SMC_ATOMIC_ENABLE
+	bool "Enable atomic mode support for SCMI SMC transport"
+	depends on ARM_SCMI_TRANSPORT_SMC
+	help
+	  Enable support of atomic operation for SCMI SMC based transport.
+
+	  If you want the SCMI SMC based transport to operate in atomic
+	  mode, avoiding any kind of sleeping behaviour for selected
+	  transactions on the TX path, answer Y.
+	  Enabling atomic mode operations allows any SCMI driver using this
+	  transport to optionally ask for atomic SCMI transactions and operate
+	  in atomic context too, at the price of using a number of busy-waiting
+	  primitives all over instead. If unsure say N.
+
 config ARM_SCMI_TRANSPORT_VIRTIO
 	bool "SCMI transport based on VirtIO"
 	depends on VIRTIO=y || VIRTIO=ARM_SCMI_PROTOCOL
···
	  If you want the ARM SCMI PROTOCOL stack to include support for a
	  transport based on VirtIO, answer Y.
+
+config ARM_SCMI_TRANSPORT_VIRTIO_VERSION1_COMPLIANCE
+	bool "SCMI VirtIO transport Version 1 compliance"
+	depends on ARM_SCMI_TRANSPORT_VIRTIO
+	default y
+	help
+	  This enforces strict compliance with VirtIO Version 1 specification.
+
+	  If you want the ARM SCMI VirtIO transport layer to refuse to work
+	  with Legacy VirtIO backends and instead support only VirtIO Version 1
+	  devices (or above), answer Y.
+
+	  If you want instead to support also old Legacy VirtIO backends (like
+	  the ones implemented by kvmtool) and let the core Kernel VirtIO layer
+	  take care of the needed conversions, say N.
+
+config ARM_SCMI_TRANSPORT_VIRTIO_ATOMIC_ENABLE
+	bool "Enable atomic mode for SCMI VirtIO transport"
+	depends on ARM_SCMI_TRANSPORT_VIRTIO
+	help
+	  Enable support of atomic operation for SCMI VirtIO based transport.
+
+	  If you want the SCMI VirtIO based transport to operate in atomic
+	  mode, avoiding any kind of sleeping behaviour for selected
+	  transactions on the TX path, answer Y.
+
+	  Enabling atomic mode operations allows any SCMI driver using this
+	  transport to optionally ask for atomic SCMI transactions and operate
+	  in atomic context too, at the price of using a number of busy-waiting
+	  primitives all over instead. If unsure say N.

endif #ARM_SCMI_PROTOCOL
drivers/firmware/arm_scmi/Makefile (+8)

···
 scmi-transport-$(CONFIG_ARM_SCMI_TRANSPORT_SMC) += smc.o
 scmi-transport-$(CONFIG_ARM_SCMI_HAVE_MSG) += msg.o
 scmi-transport-$(CONFIG_ARM_SCMI_TRANSPORT_VIRTIO) += virtio.o
+scmi-transport-$(CONFIG_ARM_SCMI_TRANSPORT_OPTEE) += optee.o
 scmi-protocols-y = base.o clock.o perf.o power.o reset.o sensors.o system.o voltage.o
 scmi-module-objs := $(scmi-bus-y) $(scmi-driver-y) $(scmi-protocols-y) \
 		    $(scmi-transport-y)
 obj-$(CONFIG_ARM_SCMI_PROTOCOL) += scmi-module.o
 obj-$(CONFIG_ARM_SCMI_POWER_DOMAIN) += scmi_pm_domain.o
+
+ifeq ($(CONFIG_THUMB2_KERNEL)$(CONFIG_CC_IS_CLANG),yy)
+# The use of R7 in the SMCCC conflicts with the compiler's use of R7 as a frame
+# pointer in Thumb2 mode, which is forcibly enabled by Clang when profiling
+# hooks are inserted via the -pg switch.
+CFLAGS_REMOVE_smc.o += $(CC_FLAGS_FTRACE)
+endif
+28 -6
drivers/firmware/arm_scmi/clock.c
··· 27 27 struct scmi_msg_resp_clock_attributes { 28 28 __le32 attributes; 29 29 #define CLOCK_ENABLE BIT(0) 30 - u8 name[SCMI_MAX_STR_SIZE]; 30 + u8 name[SCMI_MAX_STR_SIZE]; 31 + __le32 clock_enable_latency; 31 32 }; 32 33 33 34 struct scmi_clock_set_config { ··· 117 116 attr = t->rx.buf; 118 117 119 118 ret = ph->xops->do_xfer(ph, t); 120 - if (!ret) 119 + if (!ret) { 121 120 strlcpy(clk->name, attr->name, SCMI_MAX_STR_SIZE); 122 - else 121 + /* Is optional field clock_enable_latency provided ? */ 122 + if (t->rx.len == sizeof(*attr)) 123 + clk->enable_latency = 124 + le32_to_cpu(attr->clock_enable_latency); 125 + } else { 123 126 clk->name[0] = '\0'; 127 + } 124 128 125 129 ph->xops->xfer_put(ph, t); 126 130 return ret; ··· 279 273 280 274 static int 281 275 scmi_clock_config_set(const struct scmi_protocol_handle *ph, u32 clk_id, 282 - u32 config) 276 + u32 config, bool atomic) 283 277 { 284 278 int ret; 285 279 struct scmi_xfer *t; ··· 289 283 sizeof(*cfg), 0, &t); 290 284 if (ret) 291 285 return ret; 286 + 287 + t->hdr.poll_completion = atomic; 292 288 293 289 cfg = t->tx.buf; 294 290 cfg->id = cpu_to_le32(clk_id); ··· 304 296 305 297 static int scmi_clock_enable(const struct scmi_protocol_handle *ph, u32 clk_id) 306 298 { 307 - return scmi_clock_config_set(ph, clk_id, CLOCK_ENABLE); 299 + return scmi_clock_config_set(ph, clk_id, CLOCK_ENABLE, false); 308 300 } 309 301 310 302 static int scmi_clock_disable(const struct scmi_protocol_handle *ph, u32 clk_id) 311 303 { 312 - return scmi_clock_config_set(ph, clk_id, 0); 304 + return scmi_clock_config_set(ph, clk_id, 0, false); 305 + } 306 + 307 + static int scmi_clock_enable_atomic(const struct scmi_protocol_handle *ph, 308 + u32 clk_id) 309 + { 310 + return scmi_clock_config_set(ph, clk_id, CLOCK_ENABLE, true); 311 + } 312 + 313 + static int scmi_clock_disable_atomic(const struct scmi_protocol_handle *ph, 314 + u32 clk_id) 315 + { 316 + return scmi_clock_config_set(ph, clk_id, 0, true); 313 317 } 314 318 315 319 
static int scmi_clock_count_get(const struct scmi_protocol_handle *ph) ··· 350 330 .rate_set = scmi_clock_rate_set, 351 331 .enable = scmi_clock_enable, 352 332 .disable = scmi_clock_disable, 333 + .enable_atomic = scmi_clock_enable_atomic, 334 + .disable_atomic = scmi_clock_disable_atomic, 353 335 }; 354 336 355 337 static int scmi_clock_protocol_init(const struct scmi_protocol_handle *ph)
+25 -1
drivers/firmware/arm_scmi/common.h
··· 339 339 * @dev: Reference to device in the SCMI hierarchy corresponding to this 340 340 * channel 341 341 * @handle: Pointer to SCMI entity handle 342 + * @no_completion_irq: Flag to indicate that this channel has no completion 343 + * interrupt mechanism for synchronous commands. 344 + * This can be dynamically set by transports at run-time 345 + * inside their provided .chan_setup(). 342 346 * @transport_info: Transport layer related information 343 347 */ 344 348 struct scmi_chan_info { 345 349 struct device *dev; 346 350 struct scmi_handle *handle; 351 + bool no_completion_irq; 347 352 void *transport_info; 348 353 }; 349 354 ··· 378 373 unsigned int (*get_max_msg)(struct scmi_chan_info *base_cinfo); 379 374 int (*send_message)(struct scmi_chan_info *cinfo, 380 375 struct scmi_xfer *xfer); 381 - void (*mark_txdone)(struct scmi_chan_info *cinfo, int ret); 376 + void (*mark_txdone)(struct scmi_chan_info *cinfo, int ret, 377 + struct scmi_xfer *xfer); 382 378 void (*fetch_response)(struct scmi_chan_info *cinfo, 383 379 struct scmi_xfer *xfer); 384 380 void (*fetch_notification)(struct scmi_chan_info *cinfo, ··· 408 402 * be pending simultaneously in the system. May be overridden by the 409 403 * get_max_msg op. 410 404 * @max_msg_size: Maximum size of data per message that can be handled. 405 + * @force_polling: Flag to force this whole transport to use SCMI core polling 406 + * mechanism instead of completion interrupts even if available. 407 + * @sync_cmds_completed_on_ret: Flag to indicate that the transport assures 408 + * synchronous-command messages are atomically 409 + * completed on .send_message: no need to poll 410 + * actively waiting for a response. 411 + * Used by core internally only when polling is 412 + * selected as a waiting for reply method: i.e. 413 + * if a completion irq was found use that anyway. 
414 + * @atomic_enabled: Flag to indicate that this transport, which is assured not 415 + * to sleep anywhere on the TX path, can be used in atomic mode 416 + * when requested. 411 417 */ 412 418 struct scmi_desc { 413 419 int (*transport_init)(void); ··· 428 410 int max_rx_timeout_ms; 429 411 int max_msg; 430 412 int max_msg_size; 413 + const bool force_polling; 414 + const bool sync_cmds_completed_on_ret; 415 + const bool atomic_enabled; 431 416 }; 432 417 433 418 #ifdef CONFIG_ARM_SCMI_TRANSPORT_MAILBOX ··· 441 420 #endif 442 421 #ifdef CONFIG_ARM_SCMI_TRANSPORT_VIRTIO 443 422 extern const struct scmi_desc scmi_virtio_desc; 423 + #endif 424 + #ifdef CONFIG_ARM_SCMI_TRANSPORT_OPTEE 425 + extern const struct scmi_desc scmi_optee_desc; 444 426 #endif 445 427 446 428 void scmi_rx_callback(struct scmi_chan_info *cinfo, u32 msg_hdr, void *priv);
+191 -43
drivers/firmware/arm_scmi/driver.c
··· 131 131 * MAX_PROTOCOLS_IMP elements allocated by the base protocol 132 132 * @active_protocols: IDR storing device_nodes for protocols actually defined 133 133 * in the DT and confirmed as implemented by fw. 134 + * @atomic_threshold: Optional system wide DT-configured threshold, expressed 135 + * in microseconds, for atomic operations. 136 + * Only SCMI synchronous commands reported by the platform 137 + * to have an execution latency lesser-equal to the threshold 138 + * should be considered for atomic mode operation: such 139 + * decision is finally left up to the SCMI drivers. 134 140 * @notify_priv: Pointer to private data structure specific to notifications. 135 141 * @node: List head 136 142 * @users: Number of users of this instance ··· 155 149 struct mutex protocols_mtx; 156 150 u8 *protocols_imp; 157 151 struct idr active_protocols; 152 + unsigned int atomic_threshold; 158 153 void *notify_priv; 159 154 struct list_head node; 160 155 int users; ··· 616 609 info->desc->ops->clear_channel(cinfo); 617 610 } 618 611 612 + static inline bool is_polling_required(struct scmi_chan_info *cinfo, 613 + struct scmi_info *info) 614 + { 615 + return cinfo->no_completion_irq || info->desc->force_polling; 616 + } 617 + 618 + static inline bool is_transport_polling_capable(struct scmi_info *info) 619 + { 620 + return info->desc->ops->poll_done || 621 + info->desc->sync_cmds_completed_on_ret; 622 + } 623 + 624 + static inline bool is_polling_enabled(struct scmi_chan_info *cinfo, 625 + struct scmi_info *info) 626 + { 627 + return is_polling_required(cinfo, info) && 628 + is_transport_polling_capable(info); 629 + } 630 + 619 631 static void scmi_handle_notification(struct scmi_chan_info *cinfo, 620 632 u32 msg_hdr, void *priv) 621 633 { ··· 655 629 656 630 unpack_scmi_header(msg_hdr, &xfer->hdr); 657 631 if (priv) 658 - xfer->priv = priv; 632 + /* Ensure order between xfer->priv store and following ops */ 633 + smp_store_mb(xfer->priv, priv); 659 634 
info->desc->ops->fetch_notification(cinfo, info->desc->max_msg_size, 660 635 xfer); 661 636 scmi_notify(cinfo->handle, xfer->hdr.protocol_id, ··· 688 661 xfer->rx.len = info->desc->max_msg_size; 689 662 690 663 if (priv) 691 - xfer->priv = priv; 664 + /* Ensure order between xfer->priv store and following ops */ 665 + smp_store_mb(xfer->priv, priv); 692 666 info->desc->ops->fetch_response(cinfo, xfer); 693 667 694 668 trace_scmi_rx_done(xfer->transfer_id, xfer->hdr.id, ··· 752 724 __scmi_xfer_put(&info->tx_minfo, xfer); 753 725 } 754 726 755 - #define SCMI_MAX_POLL_TO_NS (100 * NSEC_PER_USEC) 756 - 757 727 static bool scmi_xfer_done_no_timeout(struct scmi_chan_info *cinfo, 758 728 struct scmi_xfer *xfer, ktime_t stop) 759 729 { ··· 764 738 return info->desc->ops->poll_done(cinfo, xfer) || 765 739 try_wait_for_completion(&xfer->done) || 766 740 ktime_after(ktime_get(), stop); 741 + } 742 + 743 + /** 744 + * scmi_wait_for_message_response - A helper to group all the possible ways of 745 + * waiting for a synchronous message response. 746 + * 747 + * @cinfo: SCMI channel info 748 + * @xfer: Reference to the transfer being waited for. 749 + * 750 + * Chooses waiting strategy (sleep-waiting vs busy-waiting) depending on 751 + * configuration flags like xfer->hdr.poll_completion. 752 + * 753 + * Return: 0 on success, error otherwise. 754 + */ 755 + static int scmi_wait_for_message_response(struct scmi_chan_info *cinfo, 756 + struct scmi_xfer *xfer) 757 + { 758 + struct scmi_info *info = handle_to_scmi_info(cinfo->handle); 759 + struct device *dev = info->dev; 760 + int ret = 0, timeout_ms = info->desc->max_rx_timeout_ms; 761 + 762 + trace_scmi_xfer_response_wait(xfer->transfer_id, xfer->hdr.id, 763 + xfer->hdr.protocol_id, xfer->hdr.seq, 764 + timeout_ms, 765 + xfer->hdr.poll_completion); 766 + 767 + if (xfer->hdr.poll_completion) { 768 + /* 769 + * Real polling is needed only if the transport has NOT declared 770 + * itself to support synchronous command replies. 
771 + */ 772 + if (!info->desc->sync_cmds_completed_on_ret) { 773 + /* 774 + * Poll on xfer using transport provided .poll_done(); 775 + * assumes no completion interrupt was available. 776 + */ 777 + ktime_t stop = ktime_add_ms(ktime_get(), timeout_ms); 778 + 779 + spin_until_cond(scmi_xfer_done_no_timeout(cinfo, 780 + xfer, stop)); 781 + if (ktime_after(ktime_get(), stop)) { 782 + dev_err(dev, 783 + "timed out in resp(caller: %pS) - polling\n", 784 + (void *)_RET_IP_); 785 + ret = -ETIMEDOUT; 786 + } 787 + } 788 + 789 + if (!ret) { 790 + unsigned long flags; 791 + 792 + /* 793 + * Do not fetch_response if an out-of-order delayed 794 + * response is being processed. 795 + */ 796 + spin_lock_irqsave(&xfer->lock, flags); 797 + if (xfer->state == SCMI_XFER_SENT_OK) { 798 + info->desc->ops->fetch_response(cinfo, xfer); 799 + xfer->state = SCMI_XFER_RESP_OK; 800 + } 801 + spin_unlock_irqrestore(&xfer->lock, flags); 802 + } 803 + } else { 804 + /* And we wait for the response. */ 805 + if (!wait_for_completion_timeout(&xfer->done, 806 + msecs_to_jiffies(timeout_ms))) { 807 + dev_err(dev, "timed out in resp(caller: %pS)\n", 808 + (void *)_RET_IP_); 809 + ret = -ETIMEDOUT; 810 + } 811 + } 812 + 813 + return ret; 767 814 } 768 815 769 816 /** ··· 853 754 struct scmi_xfer *xfer) 854 755 { 855 756 int ret; 856 - int timeout; 857 757 const struct scmi_protocol_instance *pi = ph_to_pi(ph); 858 758 struct scmi_info *info = handle_to_scmi_info(pi->handle); 859 759 struct device *dev = info->dev; 860 760 struct scmi_chan_info *cinfo; 861 761 862 - if (xfer->hdr.poll_completion && !info->desc->ops->poll_done) { 762 + /* Check for polling request on custom command xfers at first */ 763 + if (xfer->hdr.poll_completion && !is_transport_polling_capable(info)) { 863 764 dev_warn_once(dev, 864 765 "Polling mode is not supported by transport.\n"); 865 766 return -EINVAL; 866 767 } 768 + 769 + cinfo = idr_find(&info->tx_idr, pi->proto->id); 770 + if (unlikely(!cinfo)) 771 + return 
-EINVAL; 772 + 773 + /* True ONLY if also supported by transport. */ 774 + if (is_polling_enabled(cinfo, info)) 775 + xfer->hdr.poll_completion = true; 867 776 868 777 /* 869 778 * Initialise protocol id now from protocol handle to avoid it being ··· 880 773 */ 881 774 xfer->hdr.protocol_id = pi->proto->id; 882 775 reinit_completion(&xfer->done); 883 - 884 - cinfo = idr_find(&info->tx_idr, xfer->hdr.protocol_id); 885 - if (unlikely(!cinfo)) 886 - return -EINVAL; 887 776 888 777 trace_scmi_xfer_begin(xfer->transfer_id, xfer->hdr.id, 889 778 xfer->hdr.protocol_id, xfer->hdr.seq, ··· 901 798 return ret; 902 799 } 903 800 904 - if (xfer->hdr.poll_completion) { 905 - ktime_t stop = ktime_add_ns(ktime_get(), SCMI_MAX_POLL_TO_NS); 906 - 907 - spin_until_cond(scmi_xfer_done_no_timeout(cinfo, xfer, stop)); 908 - if (ktime_before(ktime_get(), stop)) { 909 - unsigned long flags; 910 - 911 - /* 912 - * Do not fetch_response if an out-of-order delayed 913 - * response is being processed. 914 - */ 915 - spin_lock_irqsave(&xfer->lock, flags); 916 - if (xfer->state == SCMI_XFER_SENT_OK) { 917 - info->desc->ops->fetch_response(cinfo, xfer); 918 - xfer->state = SCMI_XFER_RESP_OK; 919 - } 920 - spin_unlock_irqrestore(&xfer->lock, flags); 921 - } else { 922 - ret = -ETIMEDOUT; 923 - } 924 - } else { 925 - /* And we wait for the response. 
*/ 926 - timeout = msecs_to_jiffies(info->desc->max_rx_timeout_ms); 927 - if (!wait_for_completion_timeout(&xfer->done, timeout)) { 928 - dev_err(dev, "timed out in resp(caller: %pS)\n", 929 - (void *)_RET_IP_); 930 - ret = -ETIMEDOUT; 931 - } 932 - } 933 - 801 + ret = scmi_wait_for_message_response(cinfo, xfer); 934 802 if (!ret && xfer->hdr.status) 935 803 ret = scmi_to_linux_errno(xfer->hdr.status); 936 804 937 805 if (info->desc->ops->mark_txdone) 938 - info->desc->ops->mark_txdone(cinfo, ret); 806 + info->desc->ops->mark_txdone(cinfo, ret, xfer); 939 807 940 808 trace_scmi_xfer_end(xfer->transfer_id, xfer->hdr.id, 941 809 xfer->hdr.protocol_id, xfer->hdr.seq, ret); ··· 932 858 * @ph: Pointer to SCMI protocol handle 933 859 * @xfer: Transfer to initiate and wait for response 934 860 * 861 + * Using asynchronous commands in atomic/polling mode should be avoided since 862 + * it could cause long busy-waiting here, so ignore polling for the delayed 863 + * response and WARN if it was requested for this command transaction since 864 + * upper layers should refrain from issuing such kind of requests. 865 + * 866 + * The only other option would have been to refrain from using any asynchronous 867 + * command even if made available, when an atomic transport is detected, and 868 + * instead forcibly use the synchronous version (thing that can be easily 869 + * attained at the protocol layer), but this would also have led to longer 870 + * stalls of the channel for synchronous commands and possibly timeouts. 871 + * (in other words there is usually a good reason if a platform provides an 872 + * asynchronous version of a command and we should prefer to use it...just not 873 + * when using atomic/polling mode) 874 + * 935 875 * Return: -ETIMEDOUT in case of no delayed response, if transmit error, 936 876 * return corresponding error, else if all goes well, return 0. 
937 877 */ ··· 957 869 958 870 xfer->async_done = &async_response; 959 871 872 + /* 873 + * Delayed responses should not be polled, so an async command should 874 + * not have been used when requiring an atomic/poll context; WARN and 875 + * perform instead a sleeping wait. 876 + * (Note Async + IgnoreDelayedResponses are sent via do_xfer) 877 + */ 878 + WARN_ON_ONCE(xfer->hdr.poll_completion); 879 + 960 880 ret = do_xfer(ph, xfer); 961 881 if (!ret) { 962 - if (!wait_for_completion_timeout(xfer->async_done, timeout)) 882 + if (!wait_for_completion_timeout(xfer->async_done, timeout)) { 883 + dev_err(ph->dev, 884 + "timed out in delayed resp(caller: %pS)\n", 885 + (void *)_RET_IP_); 963 886 ret = -ETIMEDOUT; 964 - else if (xfer->hdr.status) 887 + } else if (xfer->hdr.status) { 965 888 ret = scmi_to_linux_errno(xfer->hdr.status); 889 + } 966 890 } 967 891 968 892 xfer->async_done = NULL; ··· 1408 1308 WARN_ON(ret); 1409 1309 } 1410 1310 1311 + /** 1312 + * scmi_is_transport_atomic - Method to check if underlying transport for an 1313 + * SCMI instance is configured as atomic. 1314 + * 1315 + * @handle: A reference to the SCMI platform instance. 1316 + * @atomic_threshold: An optional return value for the system wide currently 1317 + * configured threshold for atomic operations. 
1318 + * 1319 + * Return: True if transport is configured as atomic 1320 + */ 1321 + static bool scmi_is_transport_atomic(const struct scmi_handle *handle, 1322 + unsigned int *atomic_threshold) 1323 + { 1324 + bool ret; 1325 + struct scmi_info *info = handle_to_scmi_info(handle); 1326 + 1327 + ret = info->desc->atomic_enabled && is_transport_polling_capable(info); 1328 + if (ret && atomic_threshold) 1329 + *atomic_threshold = info->atomic_threshold; 1330 + 1331 + return ret; 1332 + } 1333 + 1411 1334 static inline 1412 1335 struct scmi_handle *scmi_handle_get_from_info_unlocked(struct scmi_info *info) 1413 1336 { ··· 1621 1498 ret = info->desc->ops->chan_setup(cinfo, info->dev, tx); 1622 1499 if (ret) 1623 1500 return ret; 1501 + 1502 + if (tx && is_polling_required(cinfo, info)) { 1503 + if (is_transport_polling_capable(info)) 1504 + dev_info(dev, 1505 + "Enabled polling mode TX channel - prot_id:%d\n", 1506 + prot_id); 1507 + else 1508 + dev_warn(dev, 1509 + "Polling mode NOT supported by transport.\n"); 1510 + } 1624 1511 1625 1512 idr_alloc: 1626 1513 ret = idr_alloc(idr, cinfo, prot_id, prot_id + 1, GFP_KERNEL); ··· 1969 1836 handle->devm_protocol_get = scmi_devm_protocol_get; 1970 1837 handle->devm_protocol_put = scmi_devm_protocol_put; 1971 1838 1839 + /* System wide atomic threshold for atomic ops .. if any */ 1840 + if (!of_property_read_u32(np, "atomic-threshold-us", 1841 + &info->atomic_threshold)) 1842 + dev_info(dev, 1843 + "SCMI System wide atomic threshold set to %d us\n", 1844 + info->atomic_threshold); 1845 + handle->is_transport_atomic = scmi_is_transport_atomic; 1846 + 1972 1847 if (desc->ops->link_supplier) { 1973 1848 ret = desc->ops->link_supplier(dev); 1974 1849 if (ret) ··· 1993 1852 1994 1853 if (scmi_notification_init(handle)) 1995 1854 dev_err(dev, "SCMI Notifications NOT available.\n"); 1855 + 1856 + if (info->desc->atomic_enabled && !is_transport_polling_capable(info)) 1857 + dev_err(dev, 1858 + "Transport is not polling capable. 
Atomic mode not supported.\n"); 1996 1859 1997 1860 /* 1998 1861 * Trigger SCMI Base protocol initialization. ··· 2138 1993 static const struct of_device_id scmi_of_match[] = { 2139 1994 #ifdef CONFIG_ARM_SCMI_TRANSPORT_MAILBOX 2140 1995 { .compatible = "arm,scmi", .data = &scmi_mailbox_desc }, 1996 + #endif 1997 + #ifdef CONFIG_ARM_SCMI_TRANSPORT_OPTEE 1998 + { .compatible = "linaro,scmi-optee", .data = &scmi_optee_desc }, 2141 1999 #endif 2142 2000 #ifdef CONFIG_ARM_SCMI_TRANSPORT_SMC 2143 2001 { .compatible = "arm,scmi-smc", .data = &scmi_smc_desc},
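The new `atomic-threshold-us` property is read from the SCMI node itself via `of_property_read_u32()`. A hypothetical devicetree fragment (node layout and the `scmi_shmem` label are invented for illustration; the compatible string and property names come from this series) could look like:

```dts
firmware {
	scmi {
		compatible = "linaro,scmi-optee";
		linaro,optee-channel-id = <0>;
		shmem = <&scmi_shmem>;
		/* Commands reported faster than 30us may run atomically */
		atomic-threshold-us = <30>;
	};
};
```

SCMI drivers can then query this system-wide threshold through `handle->is_transport_atomic()` and compare it against per-resource latencies, such as the clock protocol's `enable_latency`.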
+2 -1
drivers/firmware/arm_scmi/mailbox.c
··· 140 140 return ret; 141 141 } 142 142 143 - static void mailbox_mark_txdone(struct scmi_chan_info *cinfo, int ret) 143 + static void mailbox_mark_txdone(struct scmi_chan_info *cinfo, int ret, 144 + struct scmi_xfer *__unused) 144 145 { 145 146 struct scmi_mailbox *smbox = cinfo->transport_info; 146 147
+567
drivers/firmware/arm_scmi/optee.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Copyright (C) 2019-2021 Linaro Ltd. 4 + */ 5 + 6 + #include <linux/io.h> 7 + #include <linux/of.h> 8 + #include <linux/of_address.h> 9 + #include <linux/kernel.h> 10 + #include <linux/module.h> 11 + #include <linux/mutex.h> 12 + #include <linux/slab.h> 13 + #include <linux/tee_drv.h> 14 + #include <linux/uuid.h> 15 + #include <uapi/linux/tee.h> 16 + 17 + #include "common.h" 18 + 19 + #define SCMI_OPTEE_MAX_MSG_SIZE 128 20 + 21 + enum scmi_optee_pta_cmd { 22 + /* 23 + * PTA_SCMI_CMD_CAPABILITIES - Get channel capabilities 24 + * 25 + * [out] value[0].a: Capability bit mask (enum pta_scmi_caps) 26 + * [out] value[0].b: Extended capabilities or 0 27 + */ 28 + PTA_SCMI_CMD_CAPABILITIES = 0, 29 + 30 + /* 31 + * PTA_SCMI_CMD_PROCESS_SMT_CHANNEL - Process SCMI message in SMT buffer 32 + * 33 + * [in] value[0].a: Channel handle 34 + * 35 + * Shared memory used for SCMI message/response exchange is expected 36 + * to be already identified and bound to the channel handle in both SCMI agent 37 + * and SCMI server (OP-TEE) parts. 38 + * The memory uses the SMT header to carry SCMI meta-data (protocol ID and 39 + * protocol message ID). 40 + */ 41 + PTA_SCMI_CMD_PROCESS_SMT_CHANNEL = 1, 42 + 43 + /* 44 + * PTA_SCMI_CMD_PROCESS_SMT_CHANNEL_MESSAGE - Process SMT/SCMI message 45 + * 46 + * [in] value[0].a: Channel handle 47 + * [in/out] memref[1]: Message/response buffer (SMT and SCMI payload) 48 + * 49 + * Shared memory used for SCMI message/response is a SMT buffer 50 + * referenced by param[1]. It shall be 128 bytes large to fit the response 51 + * payload whatever the message payload size. 52 + * The memory uses the SMT header to carry SCMI meta-data (protocol ID and 53 + * protocol message ID). 
54 + */ 55 + PTA_SCMI_CMD_PROCESS_SMT_CHANNEL_MESSAGE = 2, 56 + 57 + /* 58 + * PTA_SCMI_CMD_GET_CHANNEL - Get channel handle 59 + * 60 + * SCMI shm information is 0 if the agent expects to use OP-TEE regular SHM 61 + * 62 + * [in] value[0].a: Channel identifier 63 + * [out] value[0].a: Returned channel handle 64 + * [in] value[0].b: Requested capabilities mask (enum pta_scmi_caps) 65 + */ 66 + PTA_SCMI_CMD_GET_CHANNEL = 3, 67 + }; 68 + 69 + /* 70 + * OP-TEE SCMI service capabilities bit flags (32bit) 71 + * 72 + * PTA_SCMI_CAPS_SMT_HEADER 73 + * When set, OP-TEE supports commands using the SMT header protocol (SCMI shmem) 74 + * in shared memory buffers to carry SCMI protocol synchronisation information. 75 + */ 76 + #define PTA_SCMI_CAPS_NONE 0 77 + #define PTA_SCMI_CAPS_SMT_HEADER BIT(0) 78 + 79 + /** 80 + * struct scmi_optee_channel - Description of an OP-TEE SCMI channel 81 + * 82 + * @channel_id: OP-TEE channel ID used for this transport 83 + * @tee_session: TEE session identifier 84 + * @caps: OP-TEE SCMI channel capabilities 85 + * @mu: Mutex protection on channel access 86 + * @cinfo: SCMI channel information 87 + * @shmem: Virtual base address of the shared memory 88 + * @tee_shm: Reference to TEE shared memory or NULL if using static shmem 89 + * @link: Reference in agent's channel list 90 + */ 91 + struct scmi_optee_channel { 92 + u32 channel_id; 93 + u32 tee_session; 94 + u32 caps; 95 + struct mutex mu; 96 + struct scmi_chan_info *cinfo; 97 + struct scmi_shared_mem __iomem *shmem; 98 + struct tee_shm *tee_shm; 99 + struct list_head link; 100 + }; 101 + 102 + /** 103 + * struct scmi_optee_agent - OP-TEE transport private data 104 + * 105 + * @dev: Device used for communication with TEE 106 + * @tee_ctx: TEE context used for communication 107 + * @caps: Supported channel capabilities 108 + * @mu: Mutex for protection of @channel_list 109 + * @channel_list: List of all created channels for the agent 110 + */ 111 + struct scmi_optee_agent { 112 + struct device 
*dev; 113 + struct tee_context *tee_ctx; 114 + u32 caps; 115 + struct mutex mu; 116 + struct list_head channel_list; 117 + }; 118 + 119 + /* There can be only 1 SCMI service in OP-TEE we connect to */ 120 + static struct scmi_optee_agent *scmi_optee_private; 121 + 122 + /* Forward reference to scmi_optee transport initialization */ 123 + static int scmi_optee_init(void); 124 + 125 + /* Open a session toward SCMI OP-TEE service with REE_KERNEL identity */ 126 + static int open_session(struct scmi_optee_agent *agent, u32 *tee_session) 127 + { 128 + struct device *dev = agent->dev; 129 + struct tee_client_device *scmi_pta = to_tee_client_device(dev); 130 + struct tee_ioctl_open_session_arg arg = { }; 131 + int ret; 132 + 133 + memcpy(arg.uuid, scmi_pta->id.uuid.b, TEE_IOCTL_UUID_LEN); 134 + arg.clnt_login = TEE_IOCTL_LOGIN_REE_KERNEL; 135 + 136 + ret = tee_client_open_session(agent->tee_ctx, &arg, NULL); 137 + if (ret < 0 || arg.ret) { 138 + dev_err(dev, "Can't open tee session: %d / %#x\n", ret, arg.ret); 139 + return -EOPNOTSUPP; 140 + } 141 + 142 + *tee_session = arg.session; 143 + 144 + return 0; 145 + } 146 + 147 + static void close_session(struct scmi_optee_agent *agent, u32 tee_session) 148 + { 149 + tee_client_close_session(agent->tee_ctx, tee_session); 150 + } 151 + 152 + static int get_capabilities(struct scmi_optee_agent *agent) 153 + { 154 + struct tee_ioctl_invoke_arg arg = { }; 155 + struct tee_param param[1] = { }; 156 + u32 caps; 157 + u32 tee_session; 158 + int ret; 159 + 160 + ret = open_session(agent, &tee_session); 161 + if (ret) 162 + return ret; 163 + 164 + arg.func = PTA_SCMI_CMD_CAPABILITIES; 165 + arg.session = tee_session; 166 + arg.num_params = 1; 167 + 168 + param[0].attr = TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_OUTPUT; 169 + 170 + ret = tee_client_invoke_func(agent->tee_ctx, &arg, param); 171 + 172 + close_session(agent, tee_session); 173 + 174 + if (ret < 0 || arg.ret) { 175 + dev_err(agent->dev, "Can't get capabilities: %d / %#x\n", ret, 
arg.ret); 176 + return -EOPNOTSUPP; 177 + } 178 + 179 + caps = param[0].u.value.a; 180 + 181 + if (!(caps & PTA_SCMI_CAPS_SMT_HEADER)) { 182 + dev_err(agent->dev, "OP-TEE SCMI PTA doesn't support SMT\n"); 183 + return -EOPNOTSUPP; 184 + } 185 + 186 + agent->caps = caps; 187 + 188 + return 0; 189 + } 190 + 191 + static int get_channel(struct scmi_optee_channel *channel) 192 + { 193 + struct device *dev = scmi_optee_private->dev; 194 + struct tee_ioctl_invoke_arg arg = { }; 195 + struct tee_param param[1] = { }; 196 + unsigned int caps = PTA_SCMI_CAPS_SMT_HEADER; 197 + int ret; 198 + 199 + arg.func = PTA_SCMI_CMD_GET_CHANNEL; 200 + arg.session = channel->tee_session; 201 + arg.num_params = 1; 202 + 203 + param[0].attr = TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INOUT; 204 + param[0].u.value.a = channel->channel_id; 205 + param[0].u.value.b = caps; 206 + 207 + ret = tee_client_invoke_func(scmi_optee_private->tee_ctx, &arg, param); 208 + 209 + if (ret || arg.ret) { 210 + dev_err(dev, "Can't get channel with caps %#x: %d / %#x\n", caps, ret, arg.ret); 211 + return -EOPNOTSUPP; 212 + } 213 + 214 + /* From now on use channel identifier provided by OP-TEE SCMI service */ 215 + channel->channel_id = param[0].u.value.a; 216 + channel->caps = caps; 217 + 218 + return 0; 219 + } 220 + 221 + static int invoke_process_smt_channel(struct scmi_optee_channel *channel) 222 + { 223 + struct tee_ioctl_invoke_arg arg = { }; 224 + struct tee_param param[2] = { }; 225 + int ret; 226 + 227 + arg.session = channel->tee_session; 228 + param[0].attr = TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT; 229 + param[0].u.value.a = channel->channel_id; 230 + 231 + if (channel->tee_shm) { 232 + param[1].attr = TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INOUT; 233 + param[1].u.memref.shm = channel->tee_shm; 234 + param[1].u.memref.size = SCMI_OPTEE_MAX_MSG_SIZE; 235 + arg.num_params = 2; 236 + arg.func = 
PTA_SCMI_CMD_PROCESS_SMT_CHANNEL; 240 + } 241 + 242 + ret = tee_client_invoke_func(scmi_optee_private->tee_ctx, &arg, param); 243 + if (ret < 0 || arg.ret) { 244 + dev_err(scmi_optee_private->dev, "Can't invoke channel %u: %d / %#x\n", 245 + channel->channel_id, ret, arg.ret); 246 + return -EIO; 247 + } 248 + 249 + return 0; 250 + } 251 + 252 + static int scmi_optee_link_supplier(struct device *dev) 253 + { 254 + if (!scmi_optee_private) { 255 + if (scmi_optee_init()) 256 + dev_dbg(dev, "Optee bus not yet ready\n"); 257 + 258 + /* Wait for optee bus */ 259 + return -EPROBE_DEFER; 260 + } 261 + 262 + if (!device_link_add(dev, scmi_optee_private->dev, DL_FLAG_AUTOREMOVE_CONSUMER)) { 263 + dev_err(dev, "Adding link to supplier optee device failed\n"); 264 + return -ECANCELED; 265 + } 266 + 267 + return 0; 268 + } 269 + 270 + static bool scmi_optee_chan_available(struct device *dev, int idx) 271 + { 272 + u32 channel_id; 273 + 274 + return !of_property_read_u32_index(dev->of_node, "linaro,optee-channel-id", 275 + idx, &channel_id); 276 + } 277 + 278 + static void scmi_optee_clear_channel(struct scmi_chan_info *cinfo) 279 + { 280 + struct scmi_optee_channel *channel = cinfo->transport_info; 281 + 282 + shmem_clear_channel(channel->shmem); 283 + } 284 + 285 + static int setup_static_shmem(struct device *dev, struct scmi_chan_info *cinfo, 286 + struct scmi_optee_channel *channel) 287 + { 288 + struct device_node *np; 289 + resource_size_t size; 290 + struct resource res; 291 + int ret; 292 + 293 + np = of_parse_phandle(cinfo->dev->of_node, "shmem", 0); 294 + if (!of_device_is_compatible(np, "arm,scmi-shmem")) { 295 + ret = -ENXIO; 296 + goto out; 297 + } 298 + 299 + ret = of_address_to_resource(np, 0, &res); 300 + if (ret) { 301 + dev_err(dev, "Failed to get SCMI Tx shared memory\n"); 302 + goto out; 303 + } 304 + 305 + size = resource_size(&res); 306 + 307 + channel->shmem = devm_ioremap(dev, res.start, size); 308 + if (!channel->shmem) { 309 + dev_err(dev, "Failed to 
ioremap SCMI Tx shared memory\n"); 310 + ret = -EADDRNOTAVAIL; 311 + goto out; 312 + } 313 + 314 + ret = 0; 315 + 316 + out: 317 + of_node_put(np); 318 + 319 + return ret; 320 + } 321 + 322 + static int setup_shmem(struct device *dev, struct scmi_chan_info *cinfo, 323 + struct scmi_optee_channel *channel) 324 + { 325 + if (of_find_property(cinfo->dev->of_node, "shmem", NULL)) 326 + return setup_static_shmem(dev, cinfo, channel); 327 + else 328 + return -ENOMEM; 329 + } 330 + 331 + static int scmi_optee_chan_setup(struct scmi_chan_info *cinfo, struct device *dev, bool tx) 332 + { 333 + struct scmi_optee_channel *channel; 334 + uint32_t channel_id; 335 + int ret; 336 + 337 + if (!tx) 338 + return -ENODEV; 339 + 340 + channel = devm_kzalloc(dev, sizeof(*channel), GFP_KERNEL); 341 + if (!channel) 342 + return -ENOMEM; 343 + 344 + ret = of_property_read_u32_index(cinfo->dev->of_node, "linaro,optee-channel-id", 345 + 0, &channel_id); 346 + if (ret) 347 + return ret; 348 + 349 + cinfo->transport_info = channel; 350 + channel->cinfo = cinfo; 351 + channel->channel_id = channel_id; 352 + mutex_init(&channel->mu); 353 + 354 + ret = setup_shmem(dev, cinfo, channel); 355 + if (ret) 356 + return ret; 357 + 358 + ret = open_session(scmi_optee_private, &channel->tee_session); 359 + if (ret) 360 + goto err_free_shm; 361 + 362 + ret = get_channel(channel); 363 + if (ret) 364 + goto err_close_sess; 365 + 366 + /* Enable polling */ 367 + cinfo->no_completion_irq = true; 368 + 369 + mutex_lock(&scmi_optee_private->mu); 370 + list_add(&channel->link, &scmi_optee_private->channel_list); 371 + mutex_unlock(&scmi_optee_private->mu); 372 + 373 + return 0; 374 + 375 + err_close_sess: 376 + close_session(scmi_optee_private, channel->tee_session); 377 + err_free_shm: 378 + if (channel->tee_shm) 379 + tee_shm_free(channel->tee_shm); 380 + 381 + return ret; 382 + } 383 + 384 + static int scmi_optee_chan_free(int id, void *p, void *data) 385 + { 386 + struct scmi_chan_info *cinfo = p; 387 + 
struct scmi_optee_channel *channel = cinfo->transport_info; 388 + 389 + mutex_lock(&scmi_optee_private->mu); 390 + list_del(&channel->link); 391 + mutex_unlock(&scmi_optee_private->mu); 392 + 393 + close_session(scmi_optee_private, channel->tee_session); 394 + 395 + if (channel->tee_shm) { 396 + tee_shm_free(channel->tee_shm); 397 + channel->tee_shm = NULL; 398 + } 399 + 400 + cinfo->transport_info = NULL; 401 + channel->cinfo = NULL; 402 + 403 + scmi_free_channel(cinfo, data, id); 404 + 405 + return 0; 406 + } 407 + 408 + static struct scmi_shared_mem *get_channel_shm(struct scmi_optee_channel *chan, 409 + struct scmi_xfer *xfer) 410 + { 411 + if (!chan) 412 + return NULL; 413 + 414 + return chan->shmem; 415 + } 416 + 417 + 418 + static int scmi_optee_send_message(struct scmi_chan_info *cinfo, 419 + struct scmi_xfer *xfer) 420 + { 421 + struct scmi_optee_channel *channel = cinfo->transport_info; 422 + struct scmi_shared_mem *shmem = get_channel_shm(channel, xfer); 423 + int ret; 424 + 425 + mutex_lock(&channel->mu); 426 + shmem_tx_prepare(shmem, xfer); 427 + 428 + ret = invoke_process_smt_channel(channel); 429 + if (ret) 430 + mutex_unlock(&channel->mu); 431 + 432 + return ret; 433 + } 434 + 435 + static void scmi_optee_fetch_response(struct scmi_chan_info *cinfo, 436 + struct scmi_xfer *xfer) 437 + { 438 + struct scmi_optee_channel *channel = cinfo->transport_info; 439 + struct scmi_shared_mem *shmem = get_channel_shm(channel, xfer); 440 + 441 + shmem_fetch_response(shmem, xfer); 442 + } 443 + 444 + static void scmi_optee_mark_txdone(struct scmi_chan_info *cinfo, int ret, 445 + struct scmi_xfer *__unused) 446 + { 447 + struct scmi_optee_channel *channel = cinfo->transport_info; 448 + 449 + mutex_unlock(&channel->mu); 450 + } 451 + 452 + static struct scmi_transport_ops scmi_optee_ops = { 453 + .link_supplier = scmi_optee_link_supplier, 454 + .chan_available = scmi_optee_chan_available, 455 + .chan_setup = scmi_optee_chan_setup, 456 + .chan_free = 
scmi_optee_chan_free, 457 + .send_message = scmi_optee_send_message, 458 + .mark_txdone = scmi_optee_mark_txdone, 459 + .fetch_response = scmi_optee_fetch_response, 460 + .clear_channel = scmi_optee_clear_channel, 461 + }; 462 + 463 + static int scmi_optee_ctx_match(struct tee_ioctl_version_data *ver, const void *data) 464 + { 465 + return ver->impl_id == TEE_IMPL_ID_OPTEE; 466 + } 467 + 468 + static int scmi_optee_service_probe(struct device *dev) 469 + { 470 + struct scmi_optee_agent *agent; 471 + struct tee_context *tee_ctx; 472 + int ret; 473 + 474 + /* Only one SCMI OP-TEE device allowed */ 475 + if (scmi_optee_private) { 476 + dev_err(dev, "An SCMI OP-TEE device was already initialized: only one allowed\n"); 477 + return -EBUSY; 478 + } 479 + 480 + tee_ctx = tee_client_open_context(NULL, scmi_optee_ctx_match, NULL, NULL); 481 + if (IS_ERR(tee_ctx)) 482 + return -ENODEV; 483 + 484 + agent = devm_kzalloc(dev, sizeof(*agent), GFP_KERNEL); 485 + if (!agent) { 486 + ret = -ENOMEM; 487 + goto err; 488 + } 489 + 490 + agent->dev = dev; 491 + agent->tee_ctx = tee_ctx; 492 + INIT_LIST_HEAD(&agent->channel_list); 493 + mutex_init(&agent->mu); 494 + 495 + ret = get_capabilities(agent); 496 + if (ret) 497 + goto err; 498 + 499 + /* Ensure agent resources are all visible before scmi_optee_private is */ 500 + smp_mb(); 501 + scmi_optee_private = agent; 502 + 503 + return 0; 504 + 505 + err: 506 + tee_client_close_context(tee_ctx); 507 + 508 + return ret; 509 + } 510 + 511 + static int scmi_optee_service_remove(struct device *dev) 512 + { 513 + struct scmi_optee_agent *agent = scmi_optee_private; 514 + 515 + if (!scmi_optee_private) 516 + return -EINVAL; 517 + 518 + if (!list_empty(&scmi_optee_private->channel_list)) 519 + return -EBUSY; 520 + 521 + /* Ensure cleared reference is visible before resources are released */ 522 + smp_store_mb(scmi_optee_private, NULL); 523 + 524 + tee_client_close_context(agent->tee_ctx); 525 + 526 + return 0; 527 + } 528 + 529 + static const 
struct tee_client_device_id scmi_optee_service_id[] = { 530 + { 531 + UUID_INIT(0xa8cfe406, 0xd4f5, 0x4a2e, 532 + 0x9f, 0x8d, 0xa2, 0x5d, 0xc7, 0x54, 0xc0, 0x99) 533 + }, 534 + { } 535 + }; 536 + 537 + MODULE_DEVICE_TABLE(tee, scmi_optee_service_id); 538 + 539 + static struct tee_client_driver scmi_optee_driver = { 540 + .id_table = scmi_optee_service_id, 541 + .driver = { 542 + .name = "scmi-optee", 543 + .bus = &tee_bus_type, 544 + .probe = scmi_optee_service_probe, 545 + .remove = scmi_optee_service_remove, 546 + }, 547 + }; 548 + 549 + static int scmi_optee_init(void) 550 + { 551 + return driver_register(&scmi_optee_driver.driver); 552 + } 553 + 554 + static void scmi_optee_exit(void) 555 + { 556 + if (scmi_optee_private) 557 + driver_unregister(&scmi_optee_driver.driver); 558 + } 559 + 560 + const struct scmi_desc scmi_optee_desc = { 561 + .transport_exit = scmi_optee_exit, 562 + .ops = &scmi_optee_ops, 563 + .max_rx_timeout_ms = 30, 564 + .max_msg = 20, 565 + .max_msg_size = SCMI_OPTEE_MAX_MSG_SIZE, 566 + .sync_cmds_completed_on_ret = true, 567 + };
+72 -26
drivers/firmware/arm_scmi/smc.c
··· 7 7 */ 8 8 9 9 #include <linux/arm-smccc.h> 10 + #include <linux/atomic.h> 10 11 #include <linux/device.h> 11 12 #include <linux/err.h> 12 13 #include <linux/interrupt.h> ··· 15 14 #include <linux/of.h> 16 15 #include <linux/of_address.h> 17 16 #include <linux/of_irq.h> 17 + #include <linux/processor.h> 18 18 #include <linux/slab.h> 19 19 20 20 #include "common.h" ··· 25 23 * 26 24 * @cinfo: SCMI channel info 27 25 * @shmem: Transmit/Receive shared memory area 28 - * @shmem_lock: Lock to protect access to Tx/Rx shared memory area 26 + * @shmem_lock: Lock to protect access to Tx/Rx shared memory area. 27 + * Used when NOT operating in atomic mode. 28 + * @inflight: Atomic flag to protect access to Tx/Rx shared memory area. 29 + * Used when operating in atomic mode. 29 30 * @func_id: smc/hvc call function id 30 - * @irq: Optional; employed when platforms indicates msg completion by intr. 31 - * @tx_complete: Optional, employed only when irq is valid. 32 31 */ 33 32 34 33 struct scmi_smc { 35 34 struct scmi_chan_info *cinfo; 36 35 struct scmi_shared_mem __iomem *shmem; 36 + /* Protect access to shmem area */ 37 37 struct mutex shmem_lock; 38 + #define INFLIGHT_NONE MSG_TOKEN_MAX 39 + atomic_t inflight; 38 40 u32 func_id; 39 - int irq; 40 - struct completion tx_complete; 41 41 }; 42 42 43 43 static irqreturn_t smc_msg_done_isr(int irq, void *data) 44 44 { 45 45 struct scmi_smc *scmi_info = data; 46 46 47 - complete(&scmi_info->tx_complete); 47 + scmi_rx_callback(scmi_info->cinfo, 48 + shmem_read_header(scmi_info->shmem), NULL); 48 49 49 50 return IRQ_HANDLED; 50 51 } ··· 60 55 61 56 of_node_put(np); 62 57 return true; 58 + } 59 + 60 + static inline void smc_channel_lock_init(struct scmi_smc *scmi_info) 61 + { 62 + if (IS_ENABLED(CONFIG_ARM_SCMI_TRANSPORT_SMC_ATOMIC_ENABLE)) 63 + atomic_set(&scmi_info->inflight, INFLIGHT_NONE); 64 + else 65 + mutex_init(&scmi_info->shmem_lock); 66 + } 67 + 68 + static bool smc_xfer_inflight(struct scmi_xfer *xfer, atomic_t 
*inflight) 69 + { 70 + int ret; 71 + 72 + ret = atomic_cmpxchg(inflight, INFLIGHT_NONE, xfer->hdr.seq); 73 + 74 + return ret == INFLIGHT_NONE; 75 + } 76 + 77 + static inline void 78 + smc_channel_lock_acquire(struct scmi_smc *scmi_info, 79 + struct scmi_xfer *xfer __maybe_unused) 80 + { 81 + if (IS_ENABLED(CONFIG_ARM_SCMI_TRANSPORT_SMC_ATOMIC_ENABLE)) 82 + spin_until_cond(smc_xfer_inflight(xfer, &scmi_info->inflight)); 83 + else 84 + mutex_lock(&scmi_info->shmem_lock); 85 + } 86 + 87 + static inline void smc_channel_lock_release(struct scmi_smc *scmi_info) 88 + { 89 + if (IS_ENABLED(CONFIG_ARM_SCMI_TRANSPORT_SMC_ATOMIC_ENABLE)) 90 + atomic_set(&scmi_info->inflight, INFLIGHT_NONE); 91 + else 92 + mutex_unlock(&scmi_info->shmem_lock); 63 93 } 64 94 65 95 static int smc_chan_setup(struct scmi_chan_info *cinfo, struct device *dev, ··· 151 111 dev_err(dev, "failed to setup SCMI smc irq\n"); 152 112 return ret; 153 113 } 154 - init_completion(&scmi_info->tx_complete); 155 - scmi_info->irq = irq; 114 + } else { 115 + cinfo->no_completion_irq = true; 156 116 } 157 117 158 118 scmi_info->func_id = func_id; 159 119 scmi_info->cinfo = cinfo; 160 - mutex_init(&scmi_info->shmem_lock); 120 + smc_channel_lock_init(scmi_info); 161 121 cinfo->transport_info = scmi_info; 162 122 163 123 return 0; ··· 182 142 struct scmi_smc *scmi_info = cinfo->transport_info; 183 143 struct arm_smccc_res res; 184 144 185 - mutex_lock(&scmi_info->shmem_lock); 145 + /* 146 + * Channel will be released only once response has been 147 + * surely fully retrieved, so after .mark_txdone() 148 + */ 149 + smc_channel_lock_acquire(scmi_info, xfer); 186 150 187 151 shmem_tx_prepare(scmi_info->shmem, xfer); 188 152 189 - if (scmi_info->irq) 190 - reinit_completion(&scmi_info->tx_complete); 191 - 192 153 arm_smccc_1_1_invoke(scmi_info->func_id, 0, 0, 0, 0, 0, 0, 0, &res); 193 154 194 - if (scmi_info->irq) 195 - wait_for_completion(&scmi_info->tx_complete); 196 - 197 - scmi_rx_callback(scmi_info->cinfo, 198 - 
shmem_read_header(scmi_info->shmem), NULL); 199 - 200 - mutex_unlock(&scmi_info->shmem_lock); 201 - 202 155 /* Only SMCCC_RET_NOT_SUPPORTED is valid error code */ 203 - if (res.a0) 156 + if (res.a0) { 157 + smc_channel_lock_release(scmi_info); 204 158 return -EOPNOTSUPP; 159 + } 160 + 205 161 return 0; 206 162 } 207 163 ··· 209 173 shmem_fetch_response(scmi_info->shmem, xfer); 210 174 } 211 175 212 - static bool 213 - smc_poll_done(struct scmi_chan_info *cinfo, struct scmi_xfer *xfer) 176 + static void smc_mark_txdone(struct scmi_chan_info *cinfo, int ret, 177 + struct scmi_xfer *__unused) 214 178 { 215 179 struct scmi_smc *scmi_info = cinfo->transport_info; 216 180 217 - return shmem_poll_done(scmi_info->shmem, xfer); 181 + smc_channel_lock_release(scmi_info); 218 182 } 219 183 220 184 static const struct scmi_transport_ops scmi_smc_ops = { ··· 222 186 .chan_setup = smc_chan_setup, 223 187 .chan_free = smc_chan_free, 224 188 .send_message = smc_send_message, 189 + .mark_txdone = smc_mark_txdone, 225 190 .fetch_response = smc_fetch_response, 226 - .poll_done = smc_poll_done, 227 191 }; 228 192 229 193 const struct scmi_desc scmi_smc_desc = { ··· 231 195 .max_rx_timeout_ms = 30, 232 196 .max_msg = 20, 233 197 .max_msg_size = 128, 198 + /* 199 + * Setting .sync_cmds_completed_on_ret to true for SMC assumes that, 200 + * once the SMC instruction has completed successfully, the issued 201 + * SCMI command would have been already fully processed by the SCMI 202 + * platform firmware and so any possible response value expected 203 + * for the issued command will be immediately ready to be fetched 204 + * from the shared memory area. 205 + */ 206 + .sync_cmds_completed_on_ret = true, 207 + .atomic_enabled = IS_ENABLED(CONFIG_ARM_SCMI_TRANSPORT_SMC_ATOMIC_ENABLE), 234 208 };
+518 -97
drivers/firmware/arm_scmi/virtio.c
··· 3 3 * Virtio Transport driver for Arm System Control and Management Interface 4 4 * (SCMI). 5 5 * 6 - * Copyright (C) 2020-2021 OpenSynergy. 7 - * Copyright (C) 2021 ARM Ltd. 6 + * Copyright (C) 2020-2022 OpenSynergy. 7 + * Copyright (C) 2021-2022 ARM Ltd. 8 8 */ 9 9 10 10 /** ··· 17 17 * virtqueue. Access to each virtqueue is protected by spinlocks. 18 18 */ 19 19 20 + #include <linux/completion.h> 20 21 #include <linux/errno.h> 22 + #include <linux/refcount.h> 21 23 #include <linux/slab.h> 22 24 #include <linux/virtio.h> 23 25 #include <linux/virtio_config.h> ··· 29 27 30 28 #include "common.h" 31 29 30 + #define VIRTIO_MAX_RX_TIMEOUT_MS 60000 32 31 #define VIRTIO_SCMI_MAX_MSG_SIZE 128 /* Value may be increased. */ 33 32 #define VIRTIO_SCMI_MAX_PDU_SIZE \ 34 33 (VIRTIO_SCMI_MAX_MSG_SIZE + SCMI_MSG_MAX_PROT_OVERHEAD) ··· 40 37 * 41 38 * @vqueue: Associated virtqueue 42 39 * @cinfo: SCMI Tx or Rx channel 40 + * @free_lock: Protects access to the @free_list. 43 41 * @free_list: List of unused scmi_vio_msg, maintained for Tx channels only 42 + * @deferred_tx_work: Worker for TX deferred replies processing 43 + * @deferred_tx_wq: Workqueue for TX deferred replies 44 + * @pending_lock: Protects access to the @pending_cmds_list. 45 + * @pending_cmds_list: List of pre-fetched commands queued for later processing 44 46 * @is_rx: Whether channel is an Rx channel 45 - * @ready: Whether transport user is ready to hear about channel 46 47 * @max_msg: Maximum number of pending messages for this channel. 47 - * @lock: Protects access to all members except ready. 48 - * @ready_lock: Protects access to ready. If required, it must be taken before 49 - * lock. 48 + * @lock: Protects access to all members except users, free_list and 49 + * pending_cmds_list. 50 + * @shutdown_done: A reference to a completion used when freeing this channel. 51 + * @users: A reference count to currently active users of this channel. 
50 52 */ 51 53 struct scmi_vio_channel { 52 54 struct virtqueue *vqueue; 53 55 struct scmi_chan_info *cinfo; 56 + /* lock to protect access to the free list. */ 57 + spinlock_t free_lock; 54 58 struct list_head free_list; 59 + /* lock to protect access to the pending list. */ 60 + spinlock_t pending_lock; 61 + struct list_head pending_cmds_list; 62 + struct work_struct deferred_tx_work; 63 + struct workqueue_struct *deferred_tx_wq; 55 64 bool is_rx; 56 - bool ready; 57 65 unsigned int max_msg; 58 - /* lock to protect access to all members except ready. */ 66 + /* 67 + * Lock to protect access to all members except users, free_list and 68 + * pending_cmds_list 69 + */ 59 70 spinlock_t lock; 60 - /* lock to rotects access to ready flag. */ 61 - spinlock_t ready_lock; 71 + struct completion *shutdown_done; 72 + refcount_t users; 73 + }; 74 + 75 + enum poll_states { 76 + VIO_MSG_NOT_POLLED, 77 + VIO_MSG_POLL_TIMEOUT, 78 + VIO_MSG_POLLING, 79 + VIO_MSG_POLL_DONE, 62 80 }; 63 81 64 82 /** ··· 89 65 * @input: SDU used for (delayed) responses and notifications 90 66 * @list: List which scmi_vio_msg may be part of 91 67 * @rx_len: Input SDU size in bytes, once input has been received 68 + * @poll_idx: Last used index registered for polling purposes if this message 69 + * transaction reply was configured for polling. 70 + * @poll_status: Polling state for this message. 71 + * @poll_lock: A lock to protect @poll_status 72 + * @users: A reference count to track this message users and avoid premature 73 + * freeing (and reuse) when polling and IRQ execution paths interleave. 
92 74 */ 93 75 struct scmi_vio_msg { 94 76 struct scmi_msg_payld *request; 95 77 struct scmi_msg_payld *input; 96 78 struct list_head list; 97 79 unsigned int rx_len; 80 + unsigned int poll_idx; 81 + enum poll_states poll_status; 82 + /* Lock to protect access to poll_status */ 83 + spinlock_t poll_lock; 84 + refcount_t users; 98 85 }; 99 86 100 87 /* Only one SCMI VirtIO device can possibly exist */ 101 88 static struct virtio_device *scmi_vdev; 89 + 90 + static void scmi_vio_channel_ready(struct scmi_vio_channel *vioch, 91 + struct scmi_chan_info *cinfo) 92 + { 93 + unsigned long flags; 94 + 95 + spin_lock_irqsave(&vioch->lock, flags); 96 + cinfo->transport_info = vioch; 97 + /* Indirectly setting channel not available any more */ 98 + vioch->cinfo = cinfo; 99 + spin_unlock_irqrestore(&vioch->lock, flags); 100 + 101 + refcount_set(&vioch->users, 1); 102 + } 103 + 104 + static inline bool scmi_vio_channel_acquire(struct scmi_vio_channel *vioch) 105 + { 106 + return refcount_inc_not_zero(&vioch->users); 107 + } 108 + 109 + static inline void scmi_vio_channel_release(struct scmi_vio_channel *vioch) 110 + { 111 + if (refcount_dec_and_test(&vioch->users)) { 112 + unsigned long flags; 113 + 114 + spin_lock_irqsave(&vioch->lock, flags); 115 + if (vioch->shutdown_done) { 116 + vioch->cinfo = NULL; 117 + complete(vioch->shutdown_done); 118 + } 119 + spin_unlock_irqrestore(&vioch->lock, flags); 120 + } 121 + } 122 + 123 + static void scmi_vio_channel_cleanup_sync(struct scmi_vio_channel *vioch) 124 + { 125 + unsigned long flags; 126 + DECLARE_COMPLETION_ONSTACK(vioch_shutdown_done); 127 + void *deferred_wq = NULL; 128 + 129 + /* 130 + * Prepare to wait for the last release if not already released 131 + * or in progress. 
132 + */ 133 + spin_lock_irqsave(&vioch->lock, flags); 134 + if (!vioch->cinfo || vioch->shutdown_done) { 135 + spin_unlock_irqrestore(&vioch->lock, flags); 136 + return; 137 + } 138 + 139 + vioch->shutdown_done = &vioch_shutdown_done; 140 + virtio_break_device(vioch->vqueue->vdev); 141 + if (!vioch->is_rx && vioch->deferred_tx_wq) { 142 + deferred_wq = vioch->deferred_tx_wq; 143 + /* Cannot be kicked anymore after this...*/ 144 + vioch->deferred_tx_wq = NULL; 145 + } 146 + spin_unlock_irqrestore(&vioch->lock, flags); 147 + 148 + if (deferred_wq) 149 + destroy_workqueue(deferred_wq); 150 + 151 + scmi_vio_channel_release(vioch); 152 + 153 + /* Let any possibly concurrent RX path release the channel */ 154 + wait_for_completion(vioch->shutdown_done); 155 + } 156 + 157 + /* Assumes to be called with vio channel acquired already */ 158 + static struct scmi_vio_msg * 159 + scmi_virtio_get_free_msg(struct scmi_vio_channel *vioch) 160 + { 161 + unsigned long flags; 162 + struct scmi_vio_msg *msg; 163 + 164 + spin_lock_irqsave(&vioch->free_lock, flags); 165 + if (list_empty(&vioch->free_list)) { 166 + spin_unlock_irqrestore(&vioch->free_lock, flags); 167 + return NULL; 168 + } 169 + 170 + msg = list_first_entry(&vioch->free_list, typeof(*msg), list); 171 + list_del_init(&msg->list); 172 + spin_unlock_irqrestore(&vioch->free_lock, flags); 173 + 174 + /* Still no users, no need to acquire poll_lock */ 175 + msg->poll_status = VIO_MSG_NOT_POLLED; 176 + refcount_set(&msg->users, 1); 177 + 178 + return msg; 179 + } 180 + 181 + static inline bool scmi_vio_msg_acquire(struct scmi_vio_msg *msg) 182 + { 183 + return refcount_inc_not_zero(&msg->users); 184 + } 185 + 186 + /* Assumes to be called with vio channel acquired already */ 187 + static inline bool scmi_vio_msg_release(struct scmi_vio_channel *vioch, 188 + struct scmi_vio_msg *msg) 189 + { 190 + bool ret; 191 + 192 + ret = refcount_dec_and_test(&msg->users); 193 + if (ret) { 194 + unsigned long flags; 195 + 196 + 
spin_lock_irqsave(&vioch->free_lock, flags); 197 + list_add_tail(&msg->list, &vioch->free_list); 198 + spin_unlock_irqrestore(&vioch->free_lock, flags); 199 + } 200 + 201 + return ret; 202 + } 102 203 103 204 static bool scmi_vio_have_vq_rx(struct virtio_device *vdev) 104 205 { ··· 231 82 } 232 83 233 84 static int scmi_vio_feed_vq_rx(struct scmi_vio_channel *vioch, 234 - struct scmi_vio_msg *msg, 235 - struct device *dev) 85 + struct scmi_vio_msg *msg) 236 86 { 237 87 struct scatterlist sg_in; 238 88 int rc; 239 89 unsigned long flags; 90 + struct device *dev = &vioch->vqueue->vdev->dev; 240 91 241 92 sg_init_one(&sg_in, msg->input, VIRTIO_SCMI_MAX_PDU_SIZE); 242 93 ··· 244 95 245 96 rc = virtqueue_add_inbuf(vioch->vqueue, &sg_in, 1, msg, GFP_ATOMIC); 246 97 if (rc) 247 - dev_err_once(dev, "failed to add to virtqueue (%d)\n", rc); 98 + dev_err(dev, "failed to add to RX virtqueue (%d)\n", rc); 248 99 else 249 100 virtqueue_kick(vioch->vqueue); 250 101 ··· 253 104 return rc; 254 105 } 255 106 107 + /* 108 + * Assume to be called with channel already acquired or not ready at all; 109 + * vioch->lock MUST NOT have been already acquired. 
110 + */ 256 111 static void scmi_finalize_message(struct scmi_vio_channel *vioch, 257 112 struct scmi_vio_msg *msg) 258 113 { 259 - if (vioch->is_rx) { 260 - scmi_vio_feed_vq_rx(vioch, msg, vioch->cinfo->dev); 261 - } else { 262 - /* Here IRQs are assumed to be already disabled by the caller */ 263 - spin_lock(&vioch->lock); 264 - list_add(&msg->list, &vioch->free_list); 265 - spin_unlock(&vioch->lock); 266 - } 114 + if (vioch->is_rx) 115 + scmi_vio_feed_vq_rx(vioch, msg); 116 + else 117 + scmi_vio_msg_release(vioch, msg); 267 118 } 268 119 269 120 static void scmi_vio_complete_cb(struct virtqueue *vqueue) 270 121 { 271 - unsigned long ready_flags; 122 + unsigned long flags; 272 123 unsigned int length; 273 124 struct scmi_vio_channel *vioch; 274 125 struct scmi_vio_msg *msg; ··· 279 130 vioch = &((struct scmi_vio_channel *)vqueue->vdev->priv)[vqueue->index]; 280 131 281 132 for (;;) { 282 - spin_lock_irqsave(&vioch->ready_lock, ready_flags); 133 + if (!scmi_vio_channel_acquire(vioch)) 134 + return; 283 135 284 - if (!vioch->ready) { 285 - if (!cb_enabled) 286 - (void)virtqueue_enable_cb(vqueue); 287 - goto unlock_ready_out; 288 - } 289 - 290 - /* IRQs already disabled here no need to irqsave */ 291 - spin_lock(&vioch->lock); 136 + spin_lock_irqsave(&vioch->lock, flags); 292 137 if (cb_enabled) { 293 138 virtqueue_disable_cb(vqueue); 294 139 cb_enabled = false; 295 140 } 141 + 296 142 msg = virtqueue_get_buf(vqueue, &length); 297 143 if (!msg) { 298 - if (virtqueue_enable_cb(vqueue)) 299 - goto unlock_out; 144 + if (virtqueue_enable_cb(vqueue)) { 145 + spin_unlock_irqrestore(&vioch->lock, flags); 146 + scmi_vio_channel_release(vioch); 147 + return; 148 + } 300 149 cb_enabled = true; 301 150 } 302 - spin_unlock(&vioch->lock); 151 + spin_unlock_irqrestore(&vioch->lock, flags); 303 152 304 153 if (msg) { 305 154 msg->rx_len = length; ··· 308 161 } 309 162 310 163 /* 311 - * Release ready_lock and re-enable IRQs between loop iterations 312 - * to allow 
virtio_chan_free() to possibly kick in and set the 313 - * flag vioch->ready to false even in between processing of 314 - * messages, so as to force outstanding messages to be ignored 315 - * when system is shutting down. 164 + * Release vio channel between loop iterations to allow 165 + * virtio_chan_free() to eventually fully release it when 166 + * shutting down; in such a case, any outstanding message will 167 + * be ignored since this loop will bail out at the next 168 + * iteration. 316 169 */ 317 - spin_unlock_irqrestore(&vioch->ready_lock, ready_flags); 170 + scmi_vio_channel_release(vioch); 171 + } 172 + } 173 + 174 + static void scmi_vio_deferred_tx_worker(struct work_struct *work) 175 + { 176 + unsigned long flags; 177 + struct scmi_vio_channel *vioch; 178 + struct scmi_vio_msg *msg, *tmp; 179 + 180 + vioch = container_of(work, struct scmi_vio_channel, deferred_tx_work); 181 + 182 + if (!scmi_vio_channel_acquire(vioch)) 183 + return; 184 + 185 + /* 186 + * Process pre-fetched messages: these could be non-polled messages or 187 + * late timed-out replies to polled messages dequeued by chance while 188 + * polling for some other messages: this worker is in charge to process 189 + * the valid non-expired messages and anyway finally free all of them. 190 + */ 191 + spin_lock_irqsave(&vioch->pending_lock, flags); 192 + 193 + /* Scan the list of possibly pre-fetched messages during polling. */ 194 + list_for_each_entry_safe(msg, tmp, &vioch->pending_cmds_list, list) { 195 + list_del(&msg->list); 196 + 197 + /* 198 + * Channel is acquired here (cannot vanish) and this message 199 + * is no more processed elsewhere so no poll_lock needed. 
200 + */ 201 + if (msg->poll_status == VIO_MSG_NOT_POLLED) 202 + scmi_rx_callback(vioch->cinfo, 203 + msg_read_header(msg->input), msg); 204 + 205 + /* Free the processed message once done */ 206 + scmi_vio_msg_release(vioch, msg); 318 207 } 319 208 320 - unlock_out: 321 - spin_unlock(&vioch->lock); 322 - unlock_ready_out: 323 - spin_unlock_irqrestore(&vioch->ready_lock, ready_flags); 209 + spin_unlock_irqrestore(&vioch->pending_lock, flags); 210 + 211 + /* Process possibly still pending messages */ 212 + scmi_vio_complete_cb(vioch->vqueue); 213 + 214 + scmi_vio_channel_release(vioch); 324 215 } 325 216 326 217 static const char *const scmi_vio_vqueue_names[] = { "tx", "rx" }; ··· 378 193 static int virtio_link_supplier(struct device *dev) 379 194 { 380 195 if (!scmi_vdev) { 381 - dev_notice_once(dev, 382 - "Deferring probe after not finding a bound scmi-virtio device\n"); 196 + dev_notice(dev, 197 + "Deferring probe after not finding a bound scmi-virtio device\n"); 383 198 return -EPROBE_DEFER; 384 199 } 385 200 ··· 419 234 static int virtio_chan_setup(struct scmi_chan_info *cinfo, struct device *dev, 420 235 bool tx) 421 236 { 422 - unsigned long flags; 423 237 struct scmi_vio_channel *vioch; 424 238 int index = tx ? VIRTIO_SCMI_VQ_TX : VIRTIO_SCMI_VQ_RX; 425 239 int i; ··· 427 243 return -EPROBE_DEFER; 428 244 429 245 vioch = &((struct scmi_vio_channel *)scmi_vdev->priv)[index]; 246 + 247 + /* Setup a deferred worker for polling. 
*/ 248 + if (tx && !vioch->deferred_tx_wq) { 249 + vioch->deferred_tx_wq = 250 + alloc_workqueue(dev_name(&scmi_vdev->dev), 251 + WQ_UNBOUND | WQ_FREEZABLE | WQ_SYSFS, 252 + 0); 253 + if (!vioch->deferred_tx_wq) 254 + return -ENOMEM; 255 + 256 + INIT_WORK(&vioch->deferred_tx_work, 257 + scmi_vio_deferred_tx_worker); 258 + } 430 259 431 260 for (i = 0; i < vioch->max_msg; i++) { 432 261 struct scmi_vio_msg *msg; ··· 454 257 GFP_KERNEL); 455 258 if (!msg->request) 456 259 return -ENOMEM; 260 + spin_lock_init(&msg->poll_lock); 261 + refcount_set(&msg->users, 1); 457 262 } 458 263 459 264 msg->input = devm_kzalloc(cinfo->dev, VIRTIO_SCMI_MAX_PDU_SIZE, ··· 463 264 if (!msg->input) 464 265 return -ENOMEM; 465 266 466 - if (tx) { 467 - spin_lock_irqsave(&vioch->lock, flags); 468 - list_add_tail(&msg->list, &vioch->free_list); 469 - spin_unlock_irqrestore(&vioch->lock, flags); 470 - } else { 471 - scmi_vio_feed_vq_rx(vioch, msg, cinfo->dev); 472 - } 267 + scmi_finalize_message(vioch, msg); 473 268 } 474 269 475 - spin_lock_irqsave(&vioch->lock, flags); 476 - cinfo->transport_info = vioch; 477 - /* Indirectly setting channel not available any more */ 478 - vioch->cinfo = cinfo; 479 - spin_unlock_irqrestore(&vioch->lock, flags); 480 - 481 - spin_lock_irqsave(&vioch->ready_lock, flags); 482 - vioch->ready = true; 483 - spin_unlock_irqrestore(&vioch->ready_lock, flags); 270 + scmi_vio_channel_ready(vioch, cinfo); 484 271 485 272 return 0; 486 273 } 487 274 488 275 static int virtio_chan_free(int id, void *p, void *data) 489 276 { 490 - unsigned long flags; 491 277 struct scmi_chan_info *cinfo = p; 492 278 struct scmi_vio_channel *vioch = cinfo->transport_info; 493 279 494 - spin_lock_irqsave(&vioch->ready_lock, flags); 495 - vioch->ready = false; 496 - spin_unlock_irqrestore(&vioch->ready_lock, flags); 280 + scmi_vio_channel_cleanup_sync(vioch); 497 281 498 282 scmi_free_channel(cinfo, data, id); 499 - 500 - spin_lock_irqsave(&vioch->lock, flags); 501 - vioch->cinfo = NULL; 
502 - spin_unlock_irqrestore(&vioch->lock, flags); 503 283 504 284 return 0; 505 285 } ··· 494 316 int rc; 495 317 struct scmi_vio_msg *msg; 496 318 497 - spin_lock_irqsave(&vioch->lock, flags); 319 + if (!scmi_vio_channel_acquire(vioch)) 320 + return -EINVAL; 498 321 499 - if (list_empty(&vioch->free_list)) { 500 - spin_unlock_irqrestore(&vioch->lock, flags); 322 + msg = scmi_virtio_get_free_msg(vioch); 323 + if (!msg) { 324 + scmi_vio_channel_release(vioch); 501 325 return -EBUSY; 502 326 } 503 - 504 - msg = list_first_entry(&vioch->free_list, typeof(*msg), list); 505 - list_del(&msg->list); 506 327 507 328 msg_tx_prepare(msg->request, xfer); 508 329 509 330 sg_init_one(&sg_out, msg->request, msg_command_size(xfer)); 510 331 sg_init_one(&sg_in, msg->input, msg_response_size(xfer)); 511 332 512 - rc = virtqueue_add_sgs(vioch->vqueue, sgs, 1, 1, msg, GFP_ATOMIC); 513 - if (rc) { 514 - list_add(&msg->list, &vioch->free_list); 515 - dev_err_once(vioch->cinfo->dev, 516 - "%s() failed to add to virtqueue (%d)\n", __func__, 517 - rc); 518 - } else { 519 - virtqueue_kick(vioch->vqueue); 333 + spin_lock_irqsave(&vioch->lock, flags); 334 + 335 + /* 336 + * If polling was requested for this transaction: 337 + * - retrieve last used index (will be used as polling reference) 338 + * - bind the polled message to the xfer via .priv 339 + * - grab an additional msg refcount for the poll-path 340 + */ 341 + if (xfer->hdr.poll_completion) { 342 + msg->poll_idx = virtqueue_enable_cb_prepare(vioch->vqueue); 343 + /* Still no users, no need to acquire poll_lock */ 344 + msg->poll_status = VIO_MSG_POLLING; 345 + scmi_vio_msg_acquire(msg); 346 + /* Ensure initialized msg is visibly bound to xfer */ 347 + smp_store_mb(xfer->priv, msg); 520 348 } 521 349 350 + rc = virtqueue_add_sgs(vioch->vqueue, sgs, 1, 1, msg, GFP_ATOMIC); 351 + if (rc) 352 + dev_err(vioch->cinfo->dev, 353 + "failed to add to TX virtqueue (%d)\n", rc); 354 + else 355 + virtqueue_kick(vioch->vqueue); 356 + 522 357 
spin_unlock_irqrestore(&vioch->lock, flags); 358 + 359 + if (rc) { 360 + /* Ensure order between xfer->priv clear and vq feeding */ 361 + smp_store_mb(xfer->priv, NULL); 362 + if (xfer->hdr.poll_completion) 363 + scmi_vio_msg_release(vioch, msg); 364 + scmi_vio_msg_release(vioch, msg); 365 + } 366 + 367 + scmi_vio_channel_release(vioch); 523 368 524 369 return rc; 525 370 } ··· 552 351 { 553 352 struct scmi_vio_msg *msg = xfer->priv; 554 353 555 - if (msg) { 354 + if (msg) 556 355 msg_fetch_response(msg->input, msg->rx_len, xfer); 557 - xfer->priv = NULL; 558 - } 559 356 } 560 357 561 358 static void virtio_fetch_notification(struct scmi_chan_info *cinfo, ··· 561 362 { 562 363 struct scmi_vio_msg *msg = xfer->priv; 563 364 564 - if (msg) { 365 + if (msg) 565 366 msg_fetch_notification(msg->input, msg->rx_len, max_len, xfer); 566 - xfer->priv = NULL; 367 + } 368 + 369 + /** 370 + * virtio_mark_txdone - Mark transmission done 371 + * 372 + * Free only completed polling transfer messages. 373 + * 374 + * Note that in the SCMI VirtIO transport we never explicitly release still 375 + * outstanding but timed-out messages by forcibly re-adding them to the 376 + * free-list inside the TX code path; we instead let IRQ/RX callbacks, or the 377 + * TX deferred worker, eventually clean up such messages once, finally, a late 378 + * reply is received and discarded (if ever). 379 + * 380 + * This approach was deemed preferable since those pending timed-out buffers are 381 + * still effectively owned by the SCMI platform VirtIO device even after timeout 382 + * expiration: forcibly freeing and reusing them before they had been returned 383 + * explicitly by the SCMI platform could lead to subtle bugs due to message 384 + * corruption. 385 + * An SCMI platform VirtIO device which never returns message buffers is 386 + * anyway broken and it will quickly lead to exhaustion of available messages. 
387 + * 388 + * For this same reason, here, we take care to free only the polled messages 389 + * that had been somehow replied (only if not by chance already processed on the 390 + * IRQ path - the initial scmi_vio_msg_release() takes care of this) and also 391 + * any timed-out polled message if that indeed appears to have been at least 392 + * dequeued from the virtqueues (VIO_MSG_POLL_DONE): this is needed since such 393 + * messages won't be freed elsewhere. Any other polled message is marked as 394 + * VIO_MSG_POLL_TIMEOUT. 395 + * 396 + * Possible late replies to timed-out polled messages will be eventually freed 397 + * by RX callbacks if delivered on the IRQ path or by the deferred TX worker if 398 + * dequeued on some other polling path. 399 + * 400 + * @cinfo: SCMI channel info 401 + * @ret: Transmission return code 402 + * @xfer: Transfer descriptor 403 + */ 404 + static void virtio_mark_txdone(struct scmi_chan_info *cinfo, int ret, 405 + struct scmi_xfer *xfer) 406 + { 407 + unsigned long flags; 408 + struct scmi_vio_channel *vioch = cinfo->transport_info; 409 + struct scmi_vio_msg *msg = xfer->priv; 410 + 411 + if (!msg || !scmi_vio_channel_acquire(vioch)) 412 + return; 413 + 414 + /* Ensure msg is unbound from xfer anyway at this point */ 415 + smp_store_mb(xfer->priv, NULL); 416 + 417 + /* Must be a polled xfer and not already freed on the IRQ path */ 418 + if (!xfer->hdr.poll_completion || scmi_vio_msg_release(vioch, msg)) { 419 + scmi_vio_channel_release(vioch); 420 + return; 567 421 } 422 + 423 + spin_lock_irqsave(&msg->poll_lock, flags); 424 + /* Do not free timedout polled messages only if still inflight */ 425 + if (ret != -ETIMEDOUT || msg->poll_status == VIO_MSG_POLL_DONE) 426 + scmi_vio_msg_release(vioch, msg); 427 + else if (msg->poll_status == VIO_MSG_POLLING) 428 + msg->poll_status = VIO_MSG_POLL_TIMEOUT; 429 + spin_unlock_irqrestore(&msg->poll_lock, flags); 430 + 431 + scmi_vio_channel_release(vioch); 432 + } 433 + 434 + /** 435 + * 
virtio_poll_done - Provide polling support for VirtIO transport 436 + * 437 + * @cinfo: SCMI channel info 438 + * @xfer: Reference to the transfer being polled for. 439 + * 440 + * VirtIO core provides a polling mechanism based only on last used indexes: 441 + * this means that it is possible to poll the virtqueues waiting for something 442 + * new to arrive from the host side, but the only way to check if the freshly 443 + * arrived buffer was indeed what we were waiting for is to compare the newly 444 + * arrived message descriptor with the one we are polling on. 445 + * 446 + * As a consequence it can happen to dequeue something different from the buffer 447 + * we were poll-waiting for: if that is the case such early fetched buffers are 448 + * then added to the @pending_cmds_list list for later processing by a 449 + * dedicated deferred worker. 450 + * 451 + * So, basically, once something new is spotted we proceed to de-queue all the 452 + * freshly received used buffers until we find the one we were polling on, or, 453 + * we have 'seemingly' emptied the virtqueue; if some buffers are still pending 454 + * in the vqueue at the end of the polling loop (possible due to inherent races 455 + * in virtqueues handling mechanisms), we similarly kick the deferred worker 456 + * and let it process those, to avoid indefinitely looping in the .poll_done 457 + * busy-waiting helper. 458 + * 459 + * Finally, we delegate to the deferred worker also the final free of any timed 460 + * out reply to a polled message that we should dequeue. 
461 + * 462 + * Note that, since we do NOT have a per-message suppress notification mechanism, 463 + * the message we are polling for could be alternatively delivered via usual 464 + * IRQs callbacks on another core which happened to have IRQs enabled while we 465 + * are actively polling for it here: in such a case it will be handled as such 466 + * by scmi_rx_callback() and the polling loop in the SCMI Core TX path will be 467 + * transparently terminated anyway. 468 + * 469 + * Return: True once polling has successfully completed. 470 + */ 471 + static bool virtio_poll_done(struct scmi_chan_info *cinfo, 472 + struct scmi_xfer *xfer) 473 + { 474 + bool pending, found = false; 475 + unsigned int length, any_prefetched = 0; 476 + unsigned long flags; 477 + struct scmi_vio_msg *next_msg, *msg = xfer->priv; 478 + struct scmi_vio_channel *vioch = cinfo->transport_info; 479 + 480 + if (!msg) 481 + return true; 482 + 483 + /* 484 + * Processed already by another polling loop on another CPU ? 485 + * 486 + * Note that this message is acquired on the poll path so cannot vanish 487 + * while inside this loop iteration even if concurrently processed on 488 + * the IRQ path. 489 + * 490 + * Avoid acquiring poll_lock since poll_status can be changed 491 + * in a relevant manner only later in this same thread of execution: 492 + * any other possible changes made concurrently by other polling loops 493 + * or by a reply delivered on the IRQ path have no meaningful impact on 494 + * this loop iteration: in other words it is harmless to allow this 495 + * possible race but let us avoid spinlocking with irqs off in this 496 + * initial part of the polling loop. 497 + */ 498 + if (msg->poll_status == VIO_MSG_POLL_DONE) 499 + return true; 500 + 501 + if (!scmi_vio_channel_acquire(vioch)) 502 + return true; 503 + 504 + /* Has cmdq index moved at all ? 
*/ 505 + pending = virtqueue_poll(vioch->vqueue, msg->poll_idx); 506 + if (!pending) { 507 + scmi_vio_channel_release(vioch); 508 + return false; 509 + } 510 + 511 + spin_lock_irqsave(&vioch->lock, flags); 512 + virtqueue_disable_cb(vioch->vqueue); 513 + 514 + /* 515 + * Process all new messages till the polled-for message is found OR 516 + * the vqueue is empty. 517 + */ 518 + while ((next_msg = virtqueue_get_buf(vioch->vqueue, &length))) { 519 + bool next_msg_done = false; 520 + 521 + /* 522 + * Mark any dequeued buffer message as VIO_MSG_POLL_DONE so 523 + * that it can be properly freed even on timeout in mark_txdone. 524 + */ 525 + spin_lock(&next_msg->poll_lock); 526 + if (next_msg->poll_status == VIO_MSG_POLLING) { 527 + next_msg->poll_status = VIO_MSG_POLL_DONE; 528 + next_msg_done = true; 529 + } 530 + spin_unlock(&next_msg->poll_lock); 531 + 532 + next_msg->rx_len = length; 533 + /* Is this the message we were polling for ? */ 534 + if (next_msg == msg) { 535 + found = true; 536 + break; 537 + } else if (next_msg_done) { 538 + /* Skip the rest if this was another polled msg */ 539 + continue; 540 + } 541 + 542 + /* 543 + * Enqueue for later processing any non-polled message and any 544 + * timed-out polled one that we happen to have dequeued. 545 + */ 546 + spin_lock(&next_msg->poll_lock); 547 + if (next_msg->poll_status == VIO_MSG_NOT_POLLED || 548 + next_msg->poll_status == VIO_MSG_POLL_TIMEOUT) { 549 + spin_unlock(&next_msg->poll_lock); 550 + 551 + any_prefetched++; 552 + spin_lock(&vioch->pending_lock); 553 + list_add_tail(&next_msg->list, 554 + &vioch->pending_cmds_list); 555 + spin_unlock(&vioch->pending_lock); 556 + } else { 557 + spin_unlock(&next_msg->poll_lock); 558 + } 559 + } 560 + 561 + /* 562 + * When the polling loop has successfully terminated, if something 563 + * else was queued in the meantime, it will be served by a deferred 564 + * worker OR by the normal IRQ/callback OR by other poll loops. 
565 + * 566 + * If we are still looking for the polled reply, the polling index has 567 + * to be updated to the current vqueue last used index. 568 + */ 569 + if (found) { 570 + pending = !virtqueue_enable_cb(vioch->vqueue); 571 + } else { 572 + msg->poll_idx = virtqueue_enable_cb_prepare(vioch->vqueue); 573 + pending = virtqueue_poll(vioch->vqueue, msg->poll_idx); 574 + } 575 + 576 + if (vioch->deferred_tx_wq && (any_prefetched || pending)) 577 + queue_work(vioch->deferred_tx_wq, &vioch->deferred_tx_work); 578 + 579 + spin_unlock_irqrestore(&vioch->lock, flags); 580 + 581 + scmi_vio_channel_release(vioch); 582 + 583 + return found; 568 584 } 569 585 570 586 static const struct scmi_transport_ops scmi_virtio_ops = { ··· 791 377 .send_message = virtio_send_message, 792 378 .fetch_response = virtio_fetch_response, 793 379 .fetch_notification = virtio_fetch_notification, 380 + .mark_txdone = virtio_mark_txdone, 381 + .poll_done = virtio_poll_done, 794 382 }; 795 383 796 384 static int scmi_vio_probe(struct virtio_device *vdev) ··· 833 417 unsigned int sz; 834 418 835 419 spin_lock_init(&channels[i].lock); 836 - spin_lock_init(&channels[i].ready_lock); 420 + spin_lock_init(&channels[i].free_lock); 837 421 INIT_LIST_HEAD(&channels[i].free_list); 422 + spin_lock_init(&channels[i].pending_lock); 423 + INIT_LIST_HEAD(&channels[i].pending_cmds_list); 838 424 channels[i].vqueue = vqs[i]; 839 425 840 426 sz = virtqueue_get_vring_size(channels[i].vqueue); ··· 845 427 sz /= DESCRIPTORS_PER_TX_MSG; 846 428 847 429 if (sz > MSG_TOKEN_MAX) { 848 - dev_info_once(dev, 849 - "%s virtqueue could hold %d messages. Only %ld allowed to be pending.\n", 850 - channels[i].is_rx ? "rx" : "tx", 851 - sz, MSG_TOKEN_MAX); 430 + dev_info(dev, 431 + "%s virtqueue could hold %d messages. Only %ld allowed to be pending.\n", 432 + channels[i].is_rx ? 
"rx" : "tx", 433 + sz, MSG_TOKEN_MAX); 852 434 sz = MSG_TOKEN_MAX; 853 435 } 854 436 channels[i].max_msg = sz; ··· 878 460 879 461 static int scmi_vio_validate(struct virtio_device *vdev) 880 462 { 463 + #ifdef CONFIG_ARM_SCMI_TRANSPORT_VIRTIO_VERSION1_COMPLIANCE 881 464 if (!virtio_has_feature(vdev, VIRTIO_F_VERSION_1)) { 882 465 dev_err(&vdev->dev, 883 466 "device does not comply with spec version 1.x\n"); 884 467 return -EINVAL; 885 468 } 886 - 469 + #endif 887 470 return 0; 888 471 } 889 472 ··· 922 503 .transport_init = virtio_scmi_init, 923 504 .transport_exit = virtio_scmi_exit, 924 505 .ops = &scmi_virtio_ops, 925 - .max_rx_timeout_ms = 60000, /* for non-realtime virtio devices */ 506 + /* for non-realtime virtio devices */ 507 + .max_rx_timeout_ms = VIRTIO_MAX_RX_TIMEOUT_MS, 926 508 .max_msg = 0, /* overridden by virtio_get_max_msg() */ 927 509 .max_msg_size = VIRTIO_SCMI_MAX_MSG_SIZE, 510 + .atomic_enabled = IS_ENABLED(CONFIG_ARM_SCMI_TRANSPORT_VIRTIO_ATOMIC_ENABLE), 928 511 };
+45
drivers/firmware/imx/rm.c
··· 43 43 return hdr->func; 44 44 } 45 45 EXPORT_SYMBOL(imx_sc_rm_is_resource_owned); 46 + 47 + struct imx_sc_msg_rm_get_resource_owner { 48 + struct imx_sc_rpc_msg hdr; 49 + union { 50 + struct { 51 + u16 resource; 52 + } req; 53 + struct { 54 + u8 val; 55 + } resp; 56 + } data; 57 + } __packed __aligned(4); 58 + 59 + /* 60 + * This function gets the @resource partition number 61 + * 62 + * @param[in] ipc IPC handle 63 + * @param[in] resource resource the control is associated with 64 + * @param[out] pt pointer to return the partition number 65 + * 66 + * @return Returns 0 for success and < 0 for errors. 67 + */ 68 + int imx_sc_rm_get_resource_owner(struct imx_sc_ipc *ipc, u16 resource, u8 *pt) 69 + { 70 + struct imx_sc_msg_rm_get_resource_owner msg; 71 + struct imx_sc_rpc_msg *hdr = &msg.hdr; 72 + int ret; 73 + 74 + hdr->ver = IMX_SC_RPC_VERSION; 75 + hdr->svc = IMX_SC_RPC_SVC_RM; 76 + hdr->func = IMX_SC_RM_FUNC_GET_RESOURCE_OWNER; 77 + hdr->size = 2; 78 + 79 + msg.data.req.resource = resource; 80 + 81 + ret = imx_scu_call_rpc(ipc, &msg, true); 82 + if (ret) 83 + return ret; 84 + 85 + if (pt) 86 + *pt = msg.data.resp.val; 87 + 88 + return 0; 89 + } 90 + EXPORT_SYMBOL(imx_sc_rm_get_resource_owner);
+4
drivers/firmware/imx/scu-pd.c
··· 155 155 { "vpu-pid", IMX_SC_R_VPU_PID0, 8, true, 0 }, 156 156 { "vpu-dec0", IMX_SC_R_VPU_DEC_0, 1, false, 0 }, 157 157 { "vpu-enc0", IMX_SC_R_VPU_ENC_0, 1, false, 0 }, 158 + { "vpu-enc1", IMX_SC_R_VPU_ENC_1, 1, false, 0 }, 159 + { "vpu-mu0", IMX_SC_R_VPU_MU_0, 1, false, 0 }, 160 + { "vpu-mu1", IMX_SC_R_VPU_MU_1, 1, false, 0 }, 161 + { "vpu-mu2", IMX_SC_R_VPU_MU_2, 1, false, 0 }, 158 162 159 163 /* GPU SS */ 160 164 { "gpu0-pid", IMX_SC_R_GPU_0_PID0, 4, true, 0 },
+128 -105
drivers/firmware/qcom_scm.c
··· 49 49 __le64 mem_size; 50 50 }; 51 51 52 - #define QCOM_SCM_FLAG_COLDBOOT_CPU0 0x00 53 - #define QCOM_SCM_FLAG_COLDBOOT_CPU1 0x01 54 - #define QCOM_SCM_FLAG_COLDBOOT_CPU2 0x08 55 - #define QCOM_SCM_FLAG_COLDBOOT_CPU3 0x20 56 - 57 - #define QCOM_SCM_FLAG_WARMBOOT_CPU0 0x04 58 - #define QCOM_SCM_FLAG_WARMBOOT_CPU1 0x02 59 - #define QCOM_SCM_FLAG_WARMBOOT_CPU2 0x10 60 - #define QCOM_SCM_FLAG_WARMBOOT_CPU3 0x40 61 - 62 - struct qcom_scm_wb_entry { 63 - int flag; 64 - void *entry; 52 + /* Each bit configures cold/warm boot address for one of the 4 CPUs */ 53 + static const u8 qcom_scm_cpu_cold_bits[QCOM_SCM_BOOT_MAX_CPUS] = { 54 + 0, BIT(0), BIT(3), BIT(5) 65 55 }; 66 - 67 - static struct qcom_scm_wb_entry qcom_scm_wb[] = { 68 - { .flag = QCOM_SCM_FLAG_WARMBOOT_CPU0 }, 69 - { .flag = QCOM_SCM_FLAG_WARMBOOT_CPU1 }, 70 - { .flag = QCOM_SCM_FLAG_WARMBOOT_CPU2 }, 71 - { .flag = QCOM_SCM_FLAG_WARMBOOT_CPU3 }, 56 + static const u8 qcom_scm_cpu_warm_bits[QCOM_SCM_BOOT_MAX_CPUS] = { 57 + BIT(2), BIT(1), BIT(4), BIT(6) 72 58 }; 73 59 74 60 static const char * const qcom_scm_convention_names[] = { ··· 165 179 /** 166 180 * qcom_scm_call() - Invoke a syscall in the secure world 167 181 * @dev: device 168 - * @svc_id: service identifier 169 - * @cmd_id: command identifier 170 182 * @desc: Descriptor structure containing arguments and return values 183 + * @res: Structure containing results from SMC/HVC call 171 184 * 172 185 * Sends a command to the SCM and waits for the command to finish processing. 173 186 * This should *only* be called in pre-emptible context. ··· 190 205 /** 191 206 * qcom_scm_call_atomic() - atomic variation of qcom_scm_call() 192 207 * @dev: device 193 - * @svc_id: service identifier 194 - * @cmd_id: command identifier 195 208 * @desc: Descriptor structure containing arguments and return values 196 209 * @res: Structure containing results from SMC/HVC call 197 210 * ··· 243 260 return ret ? 
false : !!res.result[0]; 244 261 } 245 262 246 - /** 247 - * qcom_scm_set_warm_boot_addr() - Set the warm boot address for cpus 248 - * @entry: Entry point function for the cpus 249 - * @cpus: The cpumask of cpus that will use the entry point 250 - * 251 - * Set the Linux entry point for the SCM to transfer control to when coming 252 - * out of a power down. CPU power down may be executed on cpuidle or hotplug. 253 - */ 254 - int qcom_scm_set_warm_boot_addr(void *entry, const cpumask_t *cpus) 263 + static int qcom_scm_set_boot_addr(void *entry, const u8 *cpu_bits) 255 264 { 256 - int ret; 257 - int flags = 0; 258 265 int cpu; 259 - struct qcom_scm_desc desc = { 260 - .svc = QCOM_SCM_SVC_BOOT, 261 - .cmd = QCOM_SCM_BOOT_SET_ADDR, 262 - .arginfo = QCOM_SCM_ARGS(2), 263 - }; 264 - 265 - /* 266 - * Reassign only if we are switching from hotplug entry point 267 - * to cpuidle entry point or vice versa. 268 - */ 269 - for_each_cpu(cpu, cpus) { 270 - if (entry == qcom_scm_wb[cpu].entry) 271 - continue; 272 - flags |= qcom_scm_wb[cpu].flag; 273 - } 274 - 275 - /* No change in entry function */ 276 - if (!flags) 277 - return 0; 278 - 279 - desc.args[0] = flags; 280 - desc.args[1] = virt_to_phys(entry); 281 - 282 - ret = qcom_scm_call(__scm->dev, &desc, NULL); 283 - if (!ret) { 284 - for_each_cpu(cpu, cpus) 285 - qcom_scm_wb[cpu].entry = entry; 286 - } 287 - 288 - return ret; 289 - } 290 - EXPORT_SYMBOL(qcom_scm_set_warm_boot_addr); 291 - 292 - /** 293 - * qcom_scm_set_cold_boot_addr() - Set the cold boot address for cpus 294 - * @entry: Entry point function for the cpus 295 - * @cpus: The cpumask of cpus that will use the entry point 296 - * 297 - * Set the cold boot address of the cpus. Any cpu outside the supported 298 - * range would be removed from the cpu present mask. 
299 - */ 300 - int qcom_scm_set_cold_boot_addr(void *entry, const cpumask_t *cpus) 301 - { 302 - int flags = 0; 303 - int cpu; 304 - int scm_cb_flags[] = { 305 - QCOM_SCM_FLAG_COLDBOOT_CPU0, 306 - QCOM_SCM_FLAG_COLDBOOT_CPU1, 307 - QCOM_SCM_FLAG_COLDBOOT_CPU2, 308 - QCOM_SCM_FLAG_COLDBOOT_CPU3, 309 - }; 266 + unsigned int flags = 0; 310 267 struct qcom_scm_desc desc = { 311 268 .svc = QCOM_SCM_SVC_BOOT, 312 269 .cmd = QCOM_SCM_BOOT_SET_ADDR, ··· 254 331 .owner = ARM_SMCCC_OWNER_SIP, 255 332 }; 256 333 257 - if (!cpus || cpumask_empty(cpus)) 258 - return -EINVAL; 259 - 260 - for_each_cpu(cpu, cpus) { 261 - if (cpu < ARRAY_SIZE(scm_cb_flags)) 262 - flags |= scm_cb_flags[cpu]; 263 - else 264 - set_cpu_present(cpu, false); 334 + for_each_present_cpu(cpu) { 335 + if (cpu >= QCOM_SCM_BOOT_MAX_CPUS) 336 + return -EINVAL; 337 + flags |= cpu_bits[cpu]; 265 338 } 266 339 267 340 desc.args[0] = flags; ··· 265 346 266 347 return qcom_scm_call_atomic(__scm ? __scm->dev : NULL, &desc, NULL); 267 348 } 349 + 350 + static int qcom_scm_set_boot_addr_mc(void *entry, unsigned int flags) 351 + { 352 + struct qcom_scm_desc desc = { 353 + .svc = QCOM_SCM_SVC_BOOT, 354 + .cmd = QCOM_SCM_BOOT_SET_ADDR_MC, 355 + .owner = ARM_SMCCC_OWNER_SIP, 356 + .arginfo = QCOM_SCM_ARGS(6), 357 + .args = { 358 + virt_to_phys(entry), 359 + /* Apply to all CPUs in all affinity levels */ 360 + ~0ULL, ~0ULL, ~0ULL, ~0ULL, 361 + flags, 362 + }, 363 + }; 364 + 365 + /* Need a device for DMA of the additional arguments */ 366 + if (!__scm || __get_convention() == SMC_CONVENTION_LEGACY) 367 + return -EOPNOTSUPP; 368 + 369 + return qcom_scm_call(__scm->dev, &desc, NULL); 370 + } 371 + 372 + /** 373 + * qcom_scm_set_warm_boot_addr() - Set the warm boot address for all cpus 374 + * @entry: Entry point function for the cpus 375 + * 376 + * Set the Linux entry point for the SCM to transfer control to when coming 377 + * out of a power down. CPU power down may be executed on cpuidle or hotplug. 
378 + */ 379 + int qcom_scm_set_warm_boot_addr(void *entry) 380 + { 381 + if (qcom_scm_set_boot_addr_mc(entry, QCOM_SCM_BOOT_MC_FLAG_WARMBOOT)) 382 + /* Fallback to old SCM call */ 383 + return qcom_scm_set_boot_addr(entry, qcom_scm_cpu_warm_bits); 384 + return 0; 385 + } 386 + EXPORT_SYMBOL(qcom_scm_set_warm_boot_addr); 387 + 388 + /** 389 + * qcom_scm_set_cold_boot_addr() - Set the cold boot address for all cpus 390 + * @entry: Entry point function for the cpus 391 + */ 392 + int qcom_scm_set_cold_boot_addr(void *entry) 393 + { 394 + if (qcom_scm_set_boot_addr_mc(entry, QCOM_SCM_BOOT_MC_FLAG_COLDBOOT)) 395 + /* Fallback to old SCM call */ 396 + return qcom_scm_set_boot_addr(entry, qcom_scm_cpu_cold_bits); 397 + return 0; 398 + } 268 399 EXPORT_SYMBOL(qcom_scm_set_cold_boot_addr); 269 400 270 401 /** 271 402 * qcom_scm_cpu_power_down() - Power down the cpu 272 - * @flags - Flags to flush cache 403 + * @flags: Flags to flush cache 273 404 * 274 405 * This is an end point to power down cpu. If there was a pending interrupt, 275 406 * the control would return from this function, otherwise, the cpu jumps to the ··· 404 435 * and optional blob of data used for authenticating the metadata 405 436 * and the rest of the firmware 406 437 * @size: size of the metadata 438 + * @ctx: optional metadata context 407 439 * 408 - * Returns 0 on success. 440 + * Return: 0 on success. 441 + * 442 + * Upon successful return, the PAS metadata context (@ctx) will be used to 443 + * track the metadata allocation, this needs to be released by invoking 444 + * qcom_scm_pas_metadata_release() by the caller. 
409 445 */ 410 - int qcom_scm_pas_init_image(u32 peripheral, const void *metadata, size_t size) 446 + int qcom_scm_pas_init_image(u32 peripheral, const void *metadata, size_t size, 447 + struct qcom_scm_pas_metadata *ctx) 411 448 { 412 449 dma_addr_t mdata_phys; 413 450 void *mdata_buf; ··· 442 467 443 468 ret = qcom_scm_clk_enable(); 444 469 if (ret) 445 - goto free_metadata; 470 + goto out; 446 471 447 472 desc.args[1] = mdata_phys; 448 473 ··· 450 475 451 476 qcom_scm_clk_disable(); 452 477 453 - free_metadata: 454 - dma_free_coherent(__scm->dev, size, mdata_buf, mdata_phys); 478 + out: 479 + if (ret < 0 || !ctx) { 480 + dma_free_coherent(__scm->dev, size, mdata_buf, mdata_phys); 481 + } else if (ctx) { 482 + ctx->ptr = mdata_buf; 483 + ctx->phys = mdata_phys; 484 + ctx->size = size; 485 + } 455 486 456 487 return ret ? : res.result[0]; 457 488 } 458 489 EXPORT_SYMBOL(qcom_scm_pas_init_image); 490 + 491 + /** 492 + * qcom_scm_pas_metadata_release() - release metadata context 493 + * @ctx: metadata context 494 + */ 495 + void qcom_scm_pas_metadata_release(struct qcom_scm_pas_metadata *ctx) 496 + { 497 + if (!ctx->ptr) 498 + return; 499 + 500 + dma_free_coherent(__scm->dev, ctx->size, ctx->ptr, ctx->phys); 501 + 502 + ctx->ptr = NULL; 503 + ctx->phys = 0; 504 + ctx->size = 0; 505 + } 506 + EXPORT_SYMBOL(qcom_scm_pas_metadata_release); 459 507 460 508 /** 461 509 * qcom_scm_pas_mem_setup() - Prepare the memory related to a given peripheral ··· 747 749 }; 748 750 int ret; 749 751 750 - desc.args[0] = addr; 751 - desc.args[1] = size; 752 - desc.args[2] = spare; 753 - desc.arginfo = QCOM_SCM_ARGS(3, QCOM_SCM_RW, QCOM_SCM_VAL, 754 - QCOM_SCM_VAL); 755 - 756 752 ret = qcom_scm_call(__scm->dev, &desc, NULL); 757 753 758 754 /* the pg table has been initialized already, ignore the error */ ··· 756 764 return ret; 757 765 } 758 766 EXPORT_SYMBOL(qcom_scm_iommu_secure_ptbl_init); 767 + 768 + int qcom_scm_iommu_set_cp_pool_size(u32 spare, u32 size) 769 + { 770 + struct 
qcom_scm_desc desc = { 771 + .svc = QCOM_SCM_SVC_MP, 772 + .cmd = QCOM_SCM_MP_IOMMU_SET_CP_POOL_SIZE, 773 + .arginfo = QCOM_SCM_ARGS(2), 774 + .args[0] = size, 775 + .args[1] = spare, 776 + .owner = ARM_SMCCC_OWNER_SIP, 777 + }; 778 + 779 + return qcom_scm_call(__scm->dev, &desc, NULL); 780 + } 781 + EXPORT_SYMBOL(qcom_scm_iommu_set_cp_pool_size); 759 782 760 783 int qcom_scm_mem_protect_video_var(u32 cp_start, u32 cp_size, 761 784 u32 cp_nonpixel_start, ··· 1137 1130 return ret; 1138 1131 } 1139 1132 EXPORT_SYMBOL(qcom_scm_hdcp_req); 1133 + 1134 + int qcom_scm_iommu_set_pt_format(u32 sec_id, u32 ctx_num, u32 pt_fmt) 1135 + { 1136 + struct qcom_scm_desc desc = { 1137 + .svc = QCOM_SCM_SVC_SMMU_PROGRAM, 1138 + .cmd = QCOM_SCM_SMMU_PT_FORMAT, 1139 + .arginfo = QCOM_SCM_ARGS(3), 1140 + .args[0] = sec_id, 1141 + .args[1] = ctx_num, 1142 + .args[2] = pt_fmt, /* 0: LPAE AArch32 - 1: AArch64 */ 1143 + .owner = ARM_SMCCC_OWNER_SIP, 1144 + }; 1145 + 1146 + return qcom_scm_call(__scm->dev, &desc, NULL); 1147 + } 1148 + EXPORT_SYMBOL(qcom_scm_iommu_set_pt_format); 1140 1149 1141 1150 int qcom_scm_qsmmu500_wait_safe_toggle(bool en) 1142 1151 {
+7
drivers/firmware/qcom_scm.h
··· 78 78 #define QCOM_SCM_BOOT_SET_ADDR 0x01 79 79 #define QCOM_SCM_BOOT_TERMINATE_PC 0x02 80 80 #define QCOM_SCM_BOOT_SET_DLOAD_MODE 0x10 81 + #define QCOM_SCM_BOOT_SET_ADDR_MC 0x11 81 82 #define QCOM_SCM_BOOT_SET_REMOTE_STATE 0x0a 82 83 #define QCOM_SCM_FLUSH_FLAG_MASK 0x3 84 + #define QCOM_SCM_BOOT_MAX_CPUS 4 85 + #define QCOM_SCM_BOOT_MC_FLAG_AARCH64 BIT(0) 86 + #define QCOM_SCM_BOOT_MC_FLAG_COLDBOOT BIT(1) 87 + #define QCOM_SCM_BOOT_MC_FLAG_WARMBOOT BIT(2) 83 88 84 89 #define QCOM_SCM_SVC_PIL 0x02 85 90 #define QCOM_SCM_PIL_PAS_INIT_IMAGE 0x01 ··· 105 100 #define QCOM_SCM_MP_RESTORE_SEC_CFG 0x02 106 101 #define QCOM_SCM_MP_IOMMU_SECURE_PTBL_SIZE 0x03 107 102 #define QCOM_SCM_MP_IOMMU_SECURE_PTBL_INIT 0x04 103 + #define QCOM_SCM_MP_IOMMU_SET_CP_POOL_SIZE 0x05 108 104 #define QCOM_SCM_MP_VIDEO_VAR 0x08 109 105 #define QCOM_SCM_MP_ASSIGN 0x16 110 106 ··· 125 119 #define QCOM_SCM_LMH_LIMIT_DCVSH 0x10 126 120 127 121 #define QCOM_SCM_SVC_SMMU_PROGRAM 0x15 122 + #define QCOM_SCM_SMMU_PT_FORMAT 0x01 128 123 #define QCOM_SCM_SMMU_CONFIG_ERRATA1 0x03 129 124 #define QCOM_SCM_SMMU_CONFIG_ERRATA1_CLIENT_ALL 0x02 130 125
+1 -1
drivers/firmware/ti_sci.c
··· 3412 3412 ret = register_restart_handler(&info->nb); 3413 3413 if (ret) { 3414 3414 dev_err(dev, "reboot registration fail(%d)\n", ret); 3415 - return ret; 3415 + goto out; 3416 3416 } 3417 3417 } 3418 3418
+1 -1
drivers/memory/brcmstb_dpfe.c
··· 424 424 425 425 /* 426 426 * It depends on the API version which MBOX register we have to write to 427 - * to signal we are done. 427 + * signal we are done. 428 428 */ 429 429 release_mbox = (priv->dpfe_api->version < 2) 430 430 ? REG_TO_HOST_MBOX : REG_TO_DCPU_MBOX;
+5 -3
drivers/memory/emif.c
··· 1025 1025 temp = devm_kzalloc(dev, sizeof(*pd), GFP_KERNEL); 1026 1026 dev_info = devm_kzalloc(dev, sizeof(*dev_info), GFP_KERNEL); 1027 1027 1028 - if (!emif || !pd || !dev_info) { 1028 + if (!emif || !temp || !dev_info) { 1029 1029 dev_err(dev, "%s:%d: allocation error\n", __func__, __LINE__); 1030 1030 goto error; 1031 1031 } ··· 1117 1117 { 1118 1118 struct emif_data *emif; 1119 1119 struct resource *res; 1120 - int irq; 1120 + int irq, ret; 1121 1121 1122 1122 if (pdev->dev.of_node) 1123 1123 emif = of_get_memory_device_details(pdev->dev.of_node, &pdev->dev); ··· 1147 1147 emif_onetime_settings(emif); 1148 1148 emif_debugfs_init(emif); 1149 1149 disable_and_clear_all_interrupts(emif); 1150 - setup_interrupts(emif, irq); 1150 + ret = setup_interrupts(emif, irq); 1151 + if (ret) 1152 + goto error; 1151 1153 1152 1154 /* One-time actions taken on probing the first device */ 1153 1155 if (!emif1) {
+9
drivers/memory/fsl_ifc.c
··· 88 88 { 89 89 struct fsl_ifc_ctrl *ctrl = dev_get_drvdata(&dev->dev); 90 90 91 + of_platform_depopulate(&dev->dev); 91 92 free_irq(ctrl->nand_irq, ctrl); 92 93 free_irq(ctrl->irq, ctrl); 93 94 ··· 286 285 } 287 286 } 288 287 288 + /* legacy dts may still use "simple-bus" compatible */ 289 + ret = of_platform_populate(dev->dev.of_node, NULL, NULL, 290 + &dev->dev); 291 + if (ret) 292 + goto err_free_nandirq; 293 + 289 294 return 0; 290 295 296 + err_free_nandirq: 297 + free_irq(fsl_ifc_ctrl_dev->nand_irq, fsl_ifc_ctrl_dev); 291 298 err_free_irq: 292 299 free_irq(fsl_ifc_ctrl_dev->irq, fsl_ifc_ctrl_dev); 293 300 err_unmap_nandirq:
+53 -4
drivers/memory/mtk-smi.c
··· 8 8 #include <linux/device.h> 9 9 #include <linux/err.h> 10 10 #include <linux/io.h> 11 + #include <linux/iopoll.h> 11 12 #include <linux/module.h> 12 13 #include <linux/of.h> 13 14 #include <linux/of_platform.h> ··· 33 32 #define SMI_DUMMY 0x444 34 33 35 34 /* SMI LARB */ 35 + #define SMI_LARB_SLP_CON 0xc 36 + #define SLP_PROT_EN BIT(0) 37 + #define SLP_PROT_RDY BIT(16) 38 + 36 39 #define SMI_LARB_CMD_THRT_CON 0x24 37 40 #define SMI_LARB_THRT_RD_NU_LMT_MSK GENMASK(7, 4) 38 41 #define SMI_LARB_THRT_RD_NU_LMT (5 << 4) ··· 86 81 87 82 #define MTK_SMI_FLAG_THRT_UPDATE BIT(0) 88 83 #define MTK_SMI_FLAG_SW_FLAG BIT(1) 84 + #define MTK_SMI_FLAG_SLEEP_CTL BIT(2) 89 85 #define MTK_SMI_CAPS(flags, _x) (!!((flags) & (_x))) 90 86 91 87 struct mtk_smi_reg_pair { ··· 100 94 MTK_SMI_GEN2_SUB_COMM, /* gen2 smi sub common */ 101 95 }; 102 96 103 - #define MTK_SMI_CLK_NR_MAX 4 104 - 105 97 /* larbs: Require apb/smi clocks while gals is optional. */ 106 98 static const char * const mtk_smi_larb_clks[] = {"apb", "smi", "gals"}; 107 99 #define MTK_SMI_LARB_REQ_CLK_NR 2 ··· 110 106 * sub common: Require apb/smi/gals0 clocks in has_gals case. Otherwise, only apb/smi are required. 
111 107 */ 112 108 static const char * const mtk_smi_common_clks[] = {"apb", "smi", "gals0", "gals1"}; 109 + #define MTK_SMI_CLK_NR_MAX ARRAY_SIZE(mtk_smi_common_clks) 113 110 #define MTK_SMI_COM_REQ_CLK_NR 2 114 111 #define MTK_SMI_COM_GALS_REQ_CLK_NR MTK_SMI_CLK_NR_MAX 115 112 #define MTK_SMI_SUB_COM_GALS_REQ_CLK_NR 3 ··· 340 335 /* IPU0 | IPU1 | CCU */ 341 336 }; 342 337 338 + static const struct mtk_smi_larb_gen mtk_smi_larb_mt8186 = { 339 + .config_port = mtk_smi_larb_config_port_gen2_general, 340 + .flags_general = MTK_SMI_FLAG_SLEEP_CTL, 341 + }; 342 + 343 343 static const struct mtk_smi_larb_gen mtk_smi_larb_mt8192 = { 344 344 .config_port = mtk_smi_larb_config_port_gen2_general, 345 345 }; 346 346 347 347 static const struct mtk_smi_larb_gen mtk_smi_larb_mt8195 = { 348 348 .config_port = mtk_smi_larb_config_port_gen2_general, 349 - .flags_general = MTK_SMI_FLAG_THRT_UPDATE | MTK_SMI_FLAG_SW_FLAG, 349 + .flags_general = MTK_SMI_FLAG_THRT_UPDATE | MTK_SMI_FLAG_SW_FLAG | 350 + MTK_SMI_FLAG_SLEEP_CTL, 350 351 .ostd = mtk_smi_larb_mt8195_ostd, 351 352 }; 352 353 ··· 363 352 {.compatible = "mediatek,mt8167-smi-larb", .data = &mtk_smi_larb_mt8167}, 364 353 {.compatible = "mediatek,mt8173-smi-larb", .data = &mtk_smi_larb_mt8173}, 365 354 {.compatible = "mediatek,mt8183-smi-larb", .data = &mtk_smi_larb_mt8183}, 355 + {.compatible = "mediatek,mt8186-smi-larb", .data = &mtk_smi_larb_mt8186}, 366 356 {.compatible = "mediatek,mt8192-smi-larb", .data = &mtk_smi_larb_mt8192}, 367 357 {.compatible = "mediatek,mt8195-smi-larb", .data = &mtk_smi_larb_mt8195}, 368 358 {} 369 359 }; 360 + 361 + static int mtk_smi_larb_sleep_ctrl_enable(struct mtk_smi_larb *larb) 362 + { 363 + int ret; 364 + u32 tmp; 365 + 366 + writel_relaxed(SLP_PROT_EN, larb->base + SMI_LARB_SLP_CON); 367 + ret = readl_poll_timeout_atomic(larb->base + SMI_LARB_SLP_CON, 368 + tmp, !!(tmp & SLP_PROT_RDY), 10, 1000); 369 + if (ret) { 370 + /* TODO: Reset this larb if it fails here. 
*/ 371 + dev_err(larb->smi.dev, "sleep ctrl is not ready(0x%x).\n", tmp); 372 + } 373 + return ret; 374 + } 375 + 376 + static void mtk_smi_larb_sleep_ctrl_disable(struct mtk_smi_larb *larb) 377 + { 378 + writel_relaxed(0, larb->base + SMI_LARB_SLP_CON); 379 + } 370 380 371 381 static int mtk_smi_device_link_common(struct device *dev, struct device **com_dev) 372 382 { ··· 498 466 int ret; 499 467 500 468 ret = clk_bulk_prepare_enable(larb->smi.clk_num, larb->smi.clks); 501 - if (ret < 0) 469 + if (ret) 502 470 return ret; 471 + 472 + if (MTK_SMI_CAPS(larb->larb_gen->flags_general, MTK_SMI_FLAG_SLEEP_CTL)) 473 + mtk_smi_larb_sleep_ctrl_disable(larb); 503 474 504 475 /* Configure the basic setting for this larb */ 505 476 larb_gen->config_port(dev); ··· 513 478 static int __maybe_unused mtk_smi_larb_suspend(struct device *dev) 514 479 { 515 480 struct mtk_smi_larb *larb = dev_get_drvdata(dev); 481 + int ret; 482 + 483 + if (MTK_SMI_CAPS(larb->larb_gen->flags_general, MTK_SMI_FLAG_SLEEP_CTL)) { 484 + ret = mtk_smi_larb_sleep_ctrl_enable(larb); 485 + if (ret) 486 + return ret; 487 + } 516 488 517 489 clk_bulk_disable_unprepare(larb->smi.clk_num, larb->smi.clks); 518 490 return 0; ··· 572 530 F_MMU1_LARB(7), 573 531 }; 574 532 533 + static const struct mtk_smi_common_plat mtk_smi_common_mt8186 = { 534 + .type = MTK_SMI_GEN2, 535 + .has_gals = true, 536 + .bus_sel = F_MMU1_LARB(1) | F_MMU1_LARB(4) | F_MMU1_LARB(7), 537 + }; 538 + 575 539 static const struct mtk_smi_common_plat mtk_smi_common_mt8192 = { 576 540 .type = MTK_SMI_GEN2, 577 541 .has_gals = true, ··· 612 564 {.compatible = "mediatek,mt8167-smi-common", .data = &mtk_smi_common_gen2}, 613 565 {.compatible = "mediatek,mt8173-smi-common", .data = &mtk_smi_common_gen2}, 614 566 {.compatible = "mediatek,mt8183-smi-common", .data = &mtk_smi_common_mt8183}, 567 + {.compatible = "mediatek,mt8186-smi-common", .data = &mtk_smi_common_mt8186}, 615 568 {.compatible = "mediatek,mt8192-smi-common", .data = 
&mtk_smi_common_mt8192}, 616 569 {.compatible = "mediatek,mt8195-smi-common-vdo", .data = &mtk_smi_common_mt8195_vdo}, 617 570 {.compatible = "mediatek,mt8195-smi-common-vpp", .data = &mtk_smi_common_mt8195_vpp},
+17 -8
drivers/memory/of_memory.c
··· 212 212 { 213 213 int ret; 214 214 215 - /* The 'reg' param required since DT has changed, used as 'max-freq' */ 216 - ret = of_property_read_u32(np, "reg", &tim->max_freq); 215 + ret = of_property_read_u32(np, "max-freq", &tim->max_freq); 216 + if (ret) 217 + /* Deprecated way of passing max-freq as 'reg' */ 218 + ret = of_property_read_u32(np, "reg", &tim->max_freq); 217 219 ret |= of_property_read_u32(np, "min-freq", &tim->min_freq); 218 220 ret |= of_property_read_u32(np, "tRFC", &tim->tRFC); 219 221 ret |= of_property_read_u32(np, "tRRD", &tim->tRRD); ··· 318 316 struct property *prop; 319 317 const char *cp; 320 318 int err; 319 + u32 revision_id[2]; 321 320 322 - err = of_property_read_u32(np, "revision-id1", &info.revision_id1); 323 - if (err) 324 - info.revision_id1 = -ENOENT; 321 + err = of_property_read_u32_array(np, "revision-id", revision_id, 2); 322 + if (!err) { 323 + info.revision_id1 = revision_id[0]; 324 + info.revision_id2 = revision_id[1]; 325 + } else { 326 + err = of_property_read_u32(np, "revision-id1", &info.revision_id1); 327 + if (err) 328 + info.revision_id1 = -ENOENT; 325 329 326 - err = of_property_read_u32(np, "revision-id2", &info.revision_id2); 327 - if (err) 328 - info.revision_id2 = -ENOENT; 330 + err = of_property_read_u32(np, "revision-id2", &info.revision_id2); 331 + if (err) 332 + info.revision_id2 = -ENOENT; 333 + } 329 334 330 335 err = of_property_read_u32(np, "io-width", &info.io_width); 331 336 if (err)
+1
drivers/memory/tegra/Kconfig
··· 28 28 default y 29 29 depends on ARCH_TEGRA_3x_SOC || COMPILE_TEST 30 30 select PM_OPP 31 + select DDR 31 32 help 32 33 This driver is for the External Memory Controller (EMC) found on 33 34 Tegra30 chips. The EMC controls the external DRAM on the board.
+1 -1
drivers/memory/tegra/tegra20-emc.c
··· 540 540 unsigned int register_addr, 541 541 unsigned int *register_data) 542 542 { 543 - u32 memory_dev = emem_dev + 1; 543 + u32 memory_dev = emem_dev ? 1 : 2; 544 544 u32 val, mr_mask = 0xff; 545 545 int err; 546 546
+1 -1
drivers/memory/tegra/tegra210-emc-core.c
··· 711 711 return 0; 712 712 } 713 713 714 - static struct thermal_cooling_device_ops tegra210_emc_cd_ops = { 714 + static const struct thermal_cooling_device_ops tegra210_emc_cd_ops = { 715 715 .get_max_state = tegra210_emc_cd_max_state, 716 716 .get_cur_state = tegra210_emc_cd_get_state, 717 717 .set_cur_state = tegra210_emc_cd_set_state,
+121 -10
drivers/memory/tegra/tegra30-emc.c
··· 9 9 * Copyright (C) 2019 GRATE-DRIVER project 10 10 */ 11 11 12 + #include <linux/bitfield.h> 12 13 #include <linux/clk.h> 13 14 #include <linux/clk/tegra.h> 14 15 #include <linux/debugfs.h> ··· 32 31 #include <soc/tegra/common.h> 33 32 #include <soc/tegra/fuse.h> 34 33 34 + #include "../jedec_ddr.h" 35 + #include "../of_memory.h" 36 + 35 37 #include "mc.h" 36 38 37 39 #define EMC_INTSTATUS 0x000 38 40 #define EMC_INTMASK 0x004 39 41 #define EMC_DBG 0x008 42 + #define EMC_ADR_CFG 0x010 40 43 #define EMC_CFG 0x00c 41 44 #define EMC_REFCTRL 0x020 42 45 #define EMC_TIMING_CONTROL 0x028 ··· 86 81 #define EMC_EMRS 0x0d0 87 82 #define EMC_SELF_REF 0x0e0 88 83 #define EMC_MRW 0x0e8 84 + #define EMC_MRR 0x0ec 89 85 #define EMC_XM2DQSPADCTRL3 0x0f8 90 86 #define EMC_FBIO_SPARE 0x100 91 87 #define EMC_FBIO_CFG5 0x104 ··· 214 208 215 209 #define EMC_REFRESH_OVERFLOW_INT BIT(3) 216 210 #define EMC_CLKCHANGE_COMPLETE_INT BIT(4) 211 + #define EMC_MRR_DIVLD_INT BIT(5) 212 + 213 + #define EMC_MRR_DEV_SELECTN GENMASK(31, 30) 214 + #define EMC_MRR_MRR_MA GENMASK(23, 16) 215 + #define EMC_MRR_MRR_DATA GENMASK(15, 0) 216 + 217 + #define EMC_ADR_CFG_EMEM_NUMDEV BIT(0) 217 218 218 219 enum emc_dram_type { 219 220 DRAM_TYPE_DDR3, ··· 391 378 392 379 /* protect shared rate-change code path */ 393 380 struct mutex rate_lock; 381 + 382 + bool mrr_error; 394 383 }; 395 384 396 385 static int emc_seq_update_timing(struct tegra_emc *emc) ··· 1023 1008 return 0; 1024 1009 } 1025 1010 1026 - static struct device_node *emc_find_node_by_ram_code(struct device *dev) 1011 + static struct device_node *emc_find_node_by_ram_code(struct tegra_emc *emc) 1027 1012 { 1013 + struct device *dev = emc->dev; 1028 1014 struct device_node *np; 1029 1015 u32 value, ram_code; 1030 1016 int err; 1017 + 1018 + if (emc->mrr_error) { 1019 + dev_warn(dev, "memory timings skipped due to MRR error\n"); 1020 + return NULL; 1021 + } 1031 1022 1032 1023 if (of_get_child_count(dev->of_node) == 0) { 1033 1024 
dev_info_once(dev, "device-tree doesn't have memory timings\n"); ··· 1056 1035 return NULL; 1057 1036 } 1058 1037 1038 + static int emc_read_lpddr_mode_register(struct tegra_emc *emc, 1039 + unsigned int emem_dev, 1040 + unsigned int register_addr, 1041 + unsigned int *register_data) 1042 + { 1043 + u32 memory_dev = emem_dev ? 1 : 2; 1044 + u32 val, mr_mask = 0xff; 1045 + int err; 1046 + 1047 + /* clear data-valid interrupt status */ 1048 + writel_relaxed(EMC_MRR_DIVLD_INT, emc->regs + EMC_INTSTATUS); 1049 + 1050 + /* issue mode register read request */ 1051 + val = FIELD_PREP(EMC_MRR_DEV_SELECTN, memory_dev); 1052 + val |= FIELD_PREP(EMC_MRR_MRR_MA, register_addr); 1053 + 1054 + writel_relaxed(val, emc->regs + EMC_MRR); 1055 + 1056 + /* wait for the LPDDR2 data-valid interrupt */ 1057 + err = readl_relaxed_poll_timeout_atomic(emc->regs + EMC_INTSTATUS, val, 1058 + val & EMC_MRR_DIVLD_INT, 1059 + 1, 100); 1060 + if (err) { 1061 + dev_err(emc->dev, "mode register %u read failed: %d\n", 1062 + register_addr, err); 1063 + emc->mrr_error = true; 1064 + return err; 1065 + } 1066 + 1067 + /* read out mode register data */ 1068 + val = readl_relaxed(emc->regs + EMC_MRR); 1069 + *register_data = FIELD_GET(EMC_MRR_MRR_DATA, val) & mr_mask; 1070 + 1071 + return 0; 1072 + } 1073 + 1074 + static void emc_read_lpddr_sdram_info(struct tegra_emc *emc, 1075 + unsigned int emem_dev) 1076 + { 1077 + union lpddr2_basic_config4 basic_conf4; 1078 + unsigned int manufacturer_id; 1079 + unsigned int revision_id1; 1080 + unsigned int revision_id2; 1081 + 1082 + /* these registers are standard for all LPDDR JEDEC memory chips */ 1083 + emc_read_lpddr_mode_register(emc, emem_dev, 5, &manufacturer_id); 1084 + emc_read_lpddr_mode_register(emc, emem_dev, 6, &revision_id1); 1085 + emc_read_lpddr_mode_register(emc, emem_dev, 7, &revision_id2); 1086 + emc_read_lpddr_mode_register(emc, emem_dev, 8, &basic_conf4.value); 1087 + 1088 + dev_info(emc->dev, "SDRAM[dev%u]: manufacturer: 0x%x (%s) rev1: 
0x%x rev2: 0x%x prefetch: S%u density: %uMbit iowidth: %ubit\n", 1089 + emem_dev, manufacturer_id, 1090 + lpddr2_jedec_manufacturer(manufacturer_id), 1091 + revision_id1, revision_id2, 1092 + 4 >> basic_conf4.arch_type, 1093 + 64 << basic_conf4.density, 1094 + 32 >> basic_conf4.io_width); 1095 + } 1096 + 1059 1097 static int emc_setup_hw(struct tegra_emc *emc) 1060 1098 { 1099 + u32 fbio_cfg5, emc_cfg, emc_dbg, emc_adr_cfg; 1061 1100 u32 intmask = EMC_REFRESH_OVERFLOW_INT; 1062 - u32 fbio_cfg5, emc_cfg, emc_dbg; 1101 + static bool print_sdram_info_once; 1063 1102 enum emc_dram_type dram_type; 1103 + const char *dram_type_str; 1104 + unsigned int emem_numdev; 1064 1105 1065 1106 fbio_cfg5 = readl_relaxed(emc->regs + EMC_FBIO_CFG5); 1066 1107 dram_type = fbio_cfg5 & EMC_FBIO_CFG5_DRAM_TYPE_MASK; ··· 1158 1075 emc_dbg &= ~EMC_DBG_WRITE_MUX_ACTIVE; 1159 1076 emc_dbg &= ~EMC_DBG_FORCE_UPDATE; 1160 1077 writel_relaxed(emc_dbg, emc->regs + EMC_DBG); 1078 + 1079 + switch (dram_type) { 1080 + case DRAM_TYPE_DDR1: 1081 + dram_type_str = "DDR1"; 1082 + break; 1083 + case DRAM_TYPE_LPDDR2: 1084 + dram_type_str = "LPDDR2"; 1085 + break; 1086 + case DRAM_TYPE_DDR2: 1087 + dram_type_str = "DDR2"; 1088 + break; 1089 + case DRAM_TYPE_DDR3: 1090 + dram_type_str = "DDR3"; 1091 + break; 1092 + } 1093 + 1094 + emc_adr_cfg = readl_relaxed(emc->regs + EMC_ADR_CFG); 1095 + emem_numdev = FIELD_GET(EMC_ADR_CFG_EMEM_NUMDEV, emc_adr_cfg) + 1; 1096 + 1097 + dev_info_once(emc->dev, "%u %s %s attached\n", emem_numdev, 1098 + dram_type_str, emem_numdev == 2 ? 
"devices" : "device"); 1099 + 1100 + if (dram_type == DRAM_TYPE_LPDDR2 && !print_sdram_info_once) { 1101 + while (emem_numdev--) 1102 + emc_read_lpddr_sdram_info(emc, emem_numdev); 1103 + 1104 + print_sdram_info_once = true; 1105 + } 1161 1106 1162 1107 return 0; 1163 1108 } ··· 1649 1538 emc->clk_nb.notifier_call = emc_clk_change_notify; 1650 1539 emc->dev = &pdev->dev; 1651 1540 1652 - np = emc_find_node_by_ram_code(&pdev->dev); 1653 - if (np) { 1654 - err = emc_load_timings_from_dt(emc, np); 1655 - of_node_put(np); 1656 - if (err) 1657 - return err; 1658 - } 1659 - 1660 1541 emc->regs = devm_platform_ioremap_resource(pdev, 0); 1661 1542 if (IS_ERR(emc->regs)) 1662 1543 return PTR_ERR(emc->regs); ··· 1656 1553 err = emc_setup_hw(emc); 1657 1554 if (err) 1658 1555 return err; 1556 + 1557 + np = emc_find_node_by_ram_code(emc); 1558 + if (np) { 1559 + err = emc_load_timings_from_dt(emc, np); 1560 + of_node_put(np); 1561 + if (err) 1562 + return err; 1563 + } 1659 1564 1660 1565 err = platform_get_irq(pdev, 0); 1661 1566 if (err < 0)
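The new EMC_MRR accessors above lean on `FIELD_PREP()`/`FIELD_GET()` from `<linux/bitfield.h>` to pack the device select and mode-register address into one write and pull the data field back out. A minimal userspace sketch of that idiom, using simplified stand-ins for the kernel macros (the real ones are compile-time checked) and the EMC_MRR field layout from the diff:

```c
#include <assert.h>
#include <stdint.h>

/* Simplified stand-ins for the kernel's GENMASK()/FIELD_PREP()/FIELD_GET();
 * the real macros in <linux/bitfield.h> add compile-time mask checking. */
#define GENMASK32(h, l)       (((0xffffffffu) >> (31 - (h))) & ((0xffffffffu) << (l)))
#define FIELD_PREP(mask, val) (((uint32_t)(val) << __builtin_ctz(mask)) & (mask))
#define FIELD_GET(mask, reg)  (((uint32_t)(reg) & (mask)) >> __builtin_ctz(mask))

/* Field layout of the Tegra20 EMC_MRR register, as defined in the diff */
#define EMC_MRR_DEV_SELECTN   GENMASK32(31, 30)
#define EMC_MRR_MRR_MA        GENMASK32(23, 16)
#define EMC_MRR_MRR_DATA      GENMASK32(15, 0)

/* Compose a mode-register read request for a (device, register) pair,
 * mirroring what emc_read_lpddr_mode_register() writes to EMC_MRR. */
static uint32_t emc_mrr_request(uint32_t memory_dev, uint32_t register_addr)
{
	return FIELD_PREP(EMC_MRR_DEV_SELECTN, memory_dev) |
	       FIELD_PREP(EMC_MRR_MRR_MA, register_addr);
}
```

The same `FIELD_GET()` shape is what the probe path uses on EMC_ADR_CFG, adding 1 to the single EMEM_NUMDEV bit to get the device count.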
+4 -3
drivers/remoteproc/qcom_q6v5_mss.c
··· 928 928 regmap_write(halt_map, offset + AXI_HALTREQ_REG, 0); 929 929 } 930 930 931 - static int q6v5_mpss_init_image(struct q6v5 *qproc, const struct firmware *fw) 931 + static int q6v5_mpss_init_image(struct q6v5 *qproc, const struct firmware *fw, 932 + const char *fw_name) 932 933 { 933 934 unsigned long dma_attrs = DMA_ATTR_FORCE_CONTIGUOUS; 934 935 dma_addr_t phys; ··· 940 939 void *ptr; 941 940 int ret; 942 941 943 - metadata = qcom_mdt_read_metadata(fw, &size); 942 + metadata = qcom_mdt_read_metadata(fw, &size, fw_name, qproc->dev); 944 943 if (IS_ERR(metadata)) 945 944 return PTR_ERR(metadata); 946 945 ··· 1290 1289 /* Initialize the RMB validator */ 1291 1290 writel(0, qproc->rmb_base + RMB_PMI_CODE_LENGTH_REG); 1292 1291 1293 - ret = q6v5_mpss_init_image(qproc, fw); 1292 + ret = q6v5_mpss_init_image(qproc, fw, qproc->hexagon_mdt_image); 1294 1293 if (ret) 1295 1294 goto release_firmware; 1296 1295
+33 -3
drivers/remoteproc/qcom_q6v5_pas.c
··· 79 79 struct qcom_rproc_subdev smd_subdev; 80 80 struct qcom_rproc_ssr ssr_subdev; 81 81 struct qcom_sysmon *sysmon; 82 + 83 + struct qcom_scm_pas_metadata pas_metadata; 82 84 }; 83 85 84 86 static void adsp_minidump(struct rproc *rproc) ··· 128 126 } 129 127 } 130 128 129 + static int adsp_unprepare(struct rproc *rproc) 130 + { 131 + struct qcom_adsp *adsp = (struct qcom_adsp *)rproc->priv; 132 + 133 + /* 134 + * adsp_load() did pass pas_metadata to the SCM driver for storing 135 + * metadata context. It might have been released already if 136 + * auth_and_reset() was successful, but in other cases clean it up 137 + * here. 138 + */ 139 + qcom_scm_pas_metadata_release(&adsp->pas_metadata); 140 + 141 + return 0; 142 + } 143 + 131 144 static int adsp_load(struct rproc *rproc, const struct firmware *fw) 132 145 { 133 146 struct qcom_adsp *adsp = (struct qcom_adsp *)rproc->priv; 134 147 int ret; 135 148 136 - ret = qcom_mdt_load(adsp->dev, fw, rproc->firmware, adsp->pas_id, 137 - adsp->mem_region, adsp->mem_phys, adsp->mem_size, 138 - &adsp->mem_reloc); 149 + ret = qcom_mdt_pas_init(adsp->dev, fw, rproc->firmware, adsp->pas_id, 150 + adsp->mem_phys, &adsp->pas_metadata); 151 + if (ret) 152 + return ret; 153 + 154 + ret = qcom_mdt_load_no_init(adsp->dev, fw, rproc->firmware, adsp->pas_id, 155 + adsp->mem_region, adsp->mem_phys, adsp->mem_size, 156 + &adsp->mem_reloc); 139 157 if (ret) 140 158 return ret; 141 159 ··· 206 184 qcom_scm_pas_shutdown(adsp->pas_id); 207 185 goto disable_px_supply; 208 186 } 187 + 188 + qcom_scm_pas_metadata_release(&adsp->pas_metadata); 209 189 210 190 return 0; 211 191 ··· 279 255 } 280 256 281 257 static const struct rproc_ops adsp_ops = { 258 + .unprepare = adsp_unprepare, 282 259 .start = adsp_start, 283 260 .stop = adsp_stop, 284 261 .da_to_va = adsp_da_to_va, ··· 289 264 }; 290 265 291 266 static const struct rproc_ops adsp_minidump_ops = { 267 + .unprepare = adsp_unprepare, 292 268 .start = adsp_start, 293 269 .stop = adsp_stop, 
294 270 .da_to_va = adsp_da_to_va, ··· 879 853 { .compatible = "qcom,sm8350-cdsp-pas", .data = &sm8350_cdsp_resource}, 880 854 { .compatible = "qcom,sm8350-slpi-pas", .data = &sm8350_slpi_resource}, 881 855 { .compatible = "qcom,sm8350-mpss-pas", .data = &mpss_resource_init}, 856 + { .compatible = "qcom,sm8450-adsp-pas", .data = &sm8350_adsp_resource}, 857 + { .compatible = "qcom,sm8450-cdsp-pas", .data = &sm8350_cdsp_resource}, 858 + { .compatible = "qcom,sm8450-slpi-pas", .data = &sm8350_slpi_resource}, 859 + { .compatible = "qcom,sm8450-mpss-pas", .data = &mpss_resource_init}, 882 860 { }, 883 861 }; 884 862 MODULE_DEVICE_TABLE(of, adsp_of_match);
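The comment in `adsp_unprepare()` above notes that the metadata context "might have been released already" on the success path, yet it releases unconditionally. That only works because the release helper is idempotent. A hedged sketch of that pattern, with a hypothetical `pas_metadata` stand-in (not the real `struct qcom_scm_pas_metadata`, which tracks an SCM-shared buffer):

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical stand-in for the SCM metadata context: an owned buffer
 * whose release helper may safely run more than once, which is what lets
 * adsp_unprepare() release unconditionally even when a successful
 * auth_and_reset() already released it in adsp_start(). */
struct pas_metadata {
	void *ptr;
};

static void pas_metadata_release(struct pas_metadata *ctx)
{
	free(ctx->ptr);
	ctx->ptr = NULL;	/* make any second release a no-op */
}
```

Keeping cleanup helpers idempotent like this avoids having every error path track whether the resource is still live.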
+22
drivers/soc/amlogic/meson-secure-pwrc.c
··· 11 11 #include <linux/platform_device.h> 12 12 #include <linux/pm_domain.h> 13 13 #include <dt-bindings/power/meson-a1-power.h> 14 + #include <dt-bindings/power/meson-s4-power.h> 14 15 #include <linux/arm-smccc.h> 15 16 #include <linux/firmware/meson/meson_sm.h> 16 17 #include <linux/module.h> ··· 120 119 SEC_PD(RSA, 0), 121 120 }; 122 121 122 + static struct meson_secure_pwrc_domain_desc s4_pwrc_domains[] = { 123 + SEC_PD(S4_DOS_HEVC, 0), 124 + SEC_PD(S4_DOS_VDEC, 0), 125 + SEC_PD(S4_VPU_HDMI, 0), 126 + SEC_PD(S4_USB_COMB, 0), 127 + SEC_PD(S4_GE2D, 0), 128 + /* ETH is for ethernet online wakeup, and should be always on */ 129 + SEC_PD(S4_ETH, GENPD_FLAG_ALWAYS_ON), 130 + SEC_PD(S4_DEMOD, 0), 131 + SEC_PD(S4_AUDIO, 0), 132 + }; 133 + 123 134 static int meson_secure_pwrc_probe(struct platform_device *pdev) 124 135 { 125 136 int i; ··· 200 187 .count = ARRAY_SIZE(a1_pwrc_domains), 201 188 }; 202 189 190 + static struct meson_secure_pwrc_domain_data meson_secure_s4_pwrc_data = { 191 + .domains = s4_pwrc_domains, 192 + .count = ARRAY_SIZE(s4_pwrc_domains), 193 + }; 194 + 203 195 static const struct of_device_id meson_secure_pwrc_match_table[] = { 204 196 { 205 197 .compatible = "amlogic,meson-a1-pwrc", 206 198 .data = &meson_secure_a1_pwrc_data, 199 + }, 200 + { 201 + .compatible = "amlogic,meson-s4-pwrc", 202 + .data = &meson_secure_s4_pwrc_data, 207 203 }, 208 204 { /* sentinel */ } 209 205 };
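The S4 support above follows the standard pattern: a sentinel-terminated `of_device_id` table maps each DT "compatible" string to per-SoC domain data, so adding a SoC is purely additive. A self-contained sketch of that lookup (simplified struct, not the kernel's full `of_device_id`):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Minimal sketch of the of_device_id match pattern: a sentinel-terminated
 * table maps a devicetree "compatible" string to per-SoC driver data. */
struct of_device_id {
	const char *compatible;
	const void *data;
};

static const char a1_data[] = "a1";	/* placeholders for pwrc_data */
static const char s4_data[] = "s4";

static const struct of_device_id match_table[] = {
	{ .compatible = "amlogic,meson-a1-pwrc", .data = a1_data },
	{ .compatible = "amlogic,meson-s4-pwrc", .data = s4_data },
	{ /* sentinel */ }
};

static const void *of_match_lookup(const struct of_device_id *tbl,
				   const char *compat)
{
	for (; tbl->compatible; tbl++)
		if (!strcmp(tbl->compatible, compat))
			return tbl->data;
	return NULL;		/* no match: probe is never called */
}
```

In the real driver the returned pointer carries the domain array and its `ARRAY_SIZE()` count, so probe stays SoC-agnostic.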
+3
drivers/soc/atmel/soc.c
··· 156 156 AT91_SOC(SAMA5D2_CIDR_MATCH, AT91_CIDR_MATCH_MASK, 157 157 AT91_CIDR_VERSION_MASK, SAMA5D28C_LD2G_EXID_MATCH, 158 158 "sama5d28c 256MiB LPDDR2 SiP", "sama5d2"), 159 + AT91_SOC(SAMA5D2_CIDR_MATCH, AT91_CIDR_MATCH_MASK, 160 + AT91_CIDR_VERSION_MASK, SAMA5D29CN_EXID_MATCH, 161 + "sama5d29", "sama5d2"), 159 162 AT91_SOC(SAMA5D3_CIDR_MATCH, AT91_CIDR_MATCH_MASK, 160 163 AT91_CIDR_VERSION_MASK, SAMA5D31_EXID_MATCH, 161 164 "sama5d31", "sama5d3"),
+1
drivers/soc/atmel/soc.h
··· 95 95 #define SAMA5D28C_LD2G_EXID_MATCH 0x00000072 96 96 #define SAMA5D28CU_EXID_MATCH 0x00000010 97 97 #define SAMA5D28CN_EXID_MATCH 0x00000020 98 + #define SAMA5D29CN_EXID_MATCH 0x00000023 98 99 99 100 #define SAMA5D3_CIDR_MATCH 0x0a5c07c0 100 101 #define SAMA5D31_EXID_MATCH 0x00444300
+66
drivers/soc/imx/imx8m-blk-ctrl.c
··· 15 15 16 16 #include <dt-bindings/power/imx8mm-power.h> 17 17 #include <dt-bindings/power/imx8mn-power.h> 18 + #include <dt-bindings/power/imx8mq-power.h> 18 19 19 20 #define BLK_SFT_RSTN 0x0 20 21 #define BLK_CLK_EN 0x4 ··· 590 589 .num_domains = ARRAY_SIZE(imx8mn_disp_blk_ctl_domain_data), 591 590 }; 592 591 592 + static int imx8mq_vpu_power_notifier(struct notifier_block *nb, 593 + unsigned long action, void *data) 594 + { 595 + struct imx8m_blk_ctrl *bc = container_of(nb, struct imx8m_blk_ctrl, 596 + power_nb); 597 + 598 + if (action != GENPD_NOTIFY_ON && action != GENPD_NOTIFY_PRE_OFF) 599 + return NOTIFY_OK; 600 + 601 + /* 602 + * The ADB in the VPUMIX domain has no separate reset and clock 603 + * enable bits, but is ungated and reset together with the VPUs. The 604 + * reset and clock enable inputs to the ADB is a logical OR of the 605 + * VPU bits. In order to set the G2 fuse bits, the G2 clock must 606 + * also be enabled. 607 + */ 608 + regmap_set_bits(bc->regmap, BLK_SFT_RSTN, BIT(0) | BIT(1)); 609 + regmap_set_bits(bc->regmap, BLK_CLK_EN, BIT(0) | BIT(1)); 610 + 611 + if (action == GENPD_NOTIFY_ON) { 612 + /* 613 + * On power up we have no software backchannel to the GPC to 614 + * wait for the ADB handshake to happen, so we just delay for a 615 + * bit. On power down the GPC driver waits for the handshake. 
616 + */ 617 + udelay(5); 618 + 619 + /* set "fuse" bits to enable the VPUs */ 620 + regmap_set_bits(bc->regmap, 0x8, 0xffffffff); 621 + regmap_set_bits(bc->regmap, 0xc, 0xffffffff); 622 + regmap_set_bits(bc->regmap, 0x10, 0xffffffff); 623 + } 624 + 625 + return NOTIFY_OK; 626 + } 627 + 628 + static const struct imx8m_blk_ctrl_domain_data imx8mq_vpu_blk_ctl_domain_data[] = { 629 + [IMX8MQ_VPUBLK_PD_G1] = { 630 + .name = "vpublk-g1", 631 + .clk_names = (const char *[]){ "g1", }, 632 + .num_clks = 1, 633 + .gpc_name = "g1", 634 + .rst_mask = BIT(1), 635 + .clk_mask = BIT(1), 636 + }, 637 + [IMX8MQ_VPUBLK_PD_G2] = { 638 + .name = "vpublk-g2", 639 + .clk_names = (const char *[]){ "g2", }, 640 + .num_clks = 1, 641 + .gpc_name = "g2", 642 + .rst_mask = BIT(0), 643 + .clk_mask = BIT(0), 644 + }, 645 + }; 646 + 647 + static const struct imx8m_blk_ctrl_data imx8mq_vpu_blk_ctl_dev_data = { 648 + .max_reg = 0x14, 649 + .power_notifier_fn = imx8mq_vpu_power_notifier, 650 + .domains = imx8mq_vpu_blk_ctl_domain_data, 651 + .num_domains = ARRAY_SIZE(imx8mq_vpu_blk_ctl_domain_data), 652 + }; 653 + 593 654 static const struct of_device_id imx8m_blk_ctrl_of_match[] = { 594 655 { 595 656 .compatible = "fsl,imx8mm-vpu-blk-ctrl", ··· 662 599 }, { 663 600 .compatible = "fsl,imx8mn-disp-blk-ctrl", 664 601 .data = &imx8mn_disp_blk_ctl_dev_data 602 + }, { 603 + .compatible = "fsl,imx8mq-vpu-blk-ctrl", 604 + .data = &imx8mq_vpu_blk_ctl_dev_data 665 605 }, { 666 606 /* Sentinel */ 667 607 }
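`imx8mq_vpu_power_notifier()` above recovers the driver state from the bare `notifier_block` the genpd core hands it, via `container_of()`. A small self-contained illustration of that idiom, with a trimmed-down `blk_ctrl` placeholder instead of the real `struct imx8m_blk_ctrl`:

```c
#include <assert.h>
#include <stddef.h>

/* The container_of() idiom: given a pointer to an embedded member,
 * recover a pointer to the enclosing structure. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct notifier_block {
	int (*notifier_call)(struct notifier_block *nb, unsigned long action);
};

struct blk_ctrl {
	int state;			/* placeholder for driver state */
	struct notifier_block power_nb;	/* embedded, deliberately not first */
};

static int power_notifier(struct notifier_block *nb, unsigned long action)
{
	/* the callback only receives nb; walk back to the parent object */
	struct blk_ctrl *bc = container_of(nb, struct blk_ctrl, power_nb);

	(void)action;
	return bc->state;	/* prove we recovered the right object */
}
```

Because the offset arithmetic is done from `offsetof()`, the member can sit anywhere in the structure; this is the same mechanism the diff relies on to reach `bc->regmap` from the notifier.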
-3
drivers/soc/imx/soc-imx.c
··· 40 40 if (!__mxc_cpu_type) 41 41 return 0; 42 42 43 - if (of_machine_is_compatible("fsl,ls1021a")) 44 - return 0; 45 - 46 43 soc_dev_attr = kzalloc(sizeof(*soc_dev_attr), GFP_KERNEL); 47 44 if (!soc_dev_attr) 48 45 return -ENOMEM;
+14 -2
drivers/soc/mediatek/mt8167-pm-domains.h
··· 18 18 .name = "mm", 19 19 .sta_mask = PWR_STATUS_DISP, 20 20 .ctl_offs = SPM_DIS_PWR_CON, 21 + .pwr_sta_offs = SPM_PWR_STATUS, 22 + .pwr_sta2nd_offs = SPM_PWR_STATUS_2ND, 21 23 .sram_pdn_bits = GENMASK(11, 8), 22 24 .sram_pdn_ack_bits = GENMASK(12, 12), 23 25 .bp_infracfg = { ··· 32 30 .name = "vdec", 33 31 .sta_mask = PWR_STATUS_VDEC, 34 32 .ctl_offs = SPM_VDE_PWR_CON, 33 + .pwr_sta_offs = SPM_PWR_STATUS, 34 + .pwr_sta2nd_offs = SPM_PWR_STATUS_2ND, 35 35 .sram_pdn_bits = GENMASK(8, 8), 36 36 .sram_pdn_ack_bits = GENMASK(12, 12), 37 37 .caps = MTK_SCPD_ACTIVE_WAKEUP, ··· 42 38 .name = "isp", 43 39 .sta_mask = PWR_STATUS_ISP, 44 40 .ctl_offs = SPM_ISP_PWR_CON, 41 + .pwr_sta_offs = SPM_PWR_STATUS, 42 + .pwr_sta2nd_offs = SPM_PWR_STATUS_2ND, 45 43 .sram_pdn_bits = GENMASK(11, 8), 46 44 .sram_pdn_ack_bits = GENMASK(13, 12), 47 45 .caps = MTK_SCPD_ACTIVE_WAKEUP, ··· 52 46 .name = "mfg_async", 53 47 .sta_mask = MT8167_PWR_STATUS_MFG_ASYNC, 54 48 .ctl_offs = SPM_MFG_ASYNC_PWR_CON, 49 + .pwr_sta_offs = SPM_PWR_STATUS, 50 + .pwr_sta2nd_offs = SPM_PWR_STATUS_2ND, 55 51 .sram_pdn_bits = 0, 56 52 .sram_pdn_ack_bits = 0, 57 53 .bp_infracfg = { ··· 65 57 .name = "mfg_2d", 66 58 .sta_mask = MT8167_PWR_STATUS_MFG_2D, 67 59 .ctl_offs = SPM_MFG_2D_PWR_CON, 60 + .pwr_sta_offs = SPM_PWR_STATUS, 61 + .pwr_sta2nd_offs = SPM_PWR_STATUS_2ND, 68 62 .sram_pdn_bits = GENMASK(11, 8), 69 63 .sram_pdn_ack_bits = GENMASK(15, 12), 70 64 }, ··· 74 64 .name = "mfg", 75 65 .sta_mask = PWR_STATUS_MFG, 76 66 .ctl_offs = SPM_MFG_PWR_CON, 67 + .pwr_sta_offs = SPM_PWR_STATUS, 68 + .pwr_sta2nd_offs = SPM_PWR_STATUS_2ND, 77 69 .sram_pdn_bits = GENMASK(11, 8), 78 70 .sram_pdn_ack_bits = GENMASK(15, 12), 79 71 }, ··· 83 71 .name = "conn", 84 72 .sta_mask = PWR_STATUS_CONN, 85 73 .ctl_offs = SPM_CONN_PWR_CON, 74 + .pwr_sta_offs = SPM_PWR_STATUS, 75 + .pwr_sta2nd_offs = SPM_PWR_STATUS_2ND, 86 76 .sram_pdn_bits = GENMASK(8, 8), 87 77 .sram_pdn_ack_bits = 0, 88 78 .caps = MTK_SCPD_ACTIVE_WAKEUP, ··· 99 85 
static const struct scpsys_soc_data mt8167_scpsys_data = { 100 86 .domains_data = scpsys_domain_data_mt8167, 101 87 .num_domains = ARRAY_SIZE(scpsys_domain_data_mt8167), 102 - .pwr_sta_offs = SPM_PWR_STATUS, 103 - .pwr_sta2nd_offs = SPM_PWR_STATUS_2ND, 104 88 }; 105 89 106 90 #endif /* __SOC_MEDIATEK_MT8167_PM_DOMAINS_H */
+20 -2
drivers/soc/mediatek/mt8173-pm-domains.h
··· 15 15 .name = "vdec", 16 16 .sta_mask = PWR_STATUS_VDEC, 17 17 .ctl_offs = SPM_VDE_PWR_CON, 18 + .pwr_sta_offs = SPM_PWR_STATUS, 19 + .pwr_sta2nd_offs = SPM_PWR_STATUS_2ND, 18 20 .sram_pdn_bits = GENMASK(11, 8), 19 21 .sram_pdn_ack_bits = GENMASK(12, 12), 20 22 }, ··· 24 22 .name = "venc", 25 23 .sta_mask = PWR_STATUS_VENC, 26 24 .ctl_offs = SPM_VEN_PWR_CON, 25 + .pwr_sta_offs = SPM_PWR_STATUS, 26 + .pwr_sta2nd_offs = SPM_PWR_STATUS_2ND, 27 27 .sram_pdn_bits = GENMASK(11, 8), 28 28 .sram_pdn_ack_bits = GENMASK(15, 12), 29 29 }, ··· 33 29 .name = "isp", 34 30 .sta_mask = PWR_STATUS_ISP, 35 31 .ctl_offs = SPM_ISP_PWR_CON, 32 + .pwr_sta_offs = SPM_PWR_STATUS, 33 + .pwr_sta2nd_offs = SPM_PWR_STATUS_2ND, 36 34 .sram_pdn_bits = GENMASK(11, 8), 37 35 .sram_pdn_ack_bits = GENMASK(13, 12), 38 36 }, ··· 42 36 .name = "mm", 43 37 .sta_mask = PWR_STATUS_DISP, 44 38 .ctl_offs = SPM_DIS_PWR_CON, 39 + .pwr_sta_offs = SPM_PWR_STATUS, 40 + .pwr_sta2nd_offs = SPM_PWR_STATUS_2ND, 45 41 .sram_pdn_bits = GENMASK(11, 8), 46 42 .sram_pdn_ack_bits = GENMASK(12, 12), 47 43 .bp_infracfg = { ··· 55 47 .name = "venc_lt", 56 48 .sta_mask = PWR_STATUS_VENC_LT, 57 49 .ctl_offs = SPM_VEN2_PWR_CON, 50 + .pwr_sta_offs = SPM_PWR_STATUS, 51 + .pwr_sta2nd_offs = SPM_PWR_STATUS_2ND, 58 52 .sram_pdn_bits = GENMASK(11, 8), 59 53 .sram_pdn_ack_bits = GENMASK(15, 12), 60 54 }, ··· 64 54 .name = "audio", 65 55 .sta_mask = PWR_STATUS_AUDIO, 66 56 .ctl_offs = SPM_AUDIO_PWR_CON, 57 + .pwr_sta_offs = SPM_PWR_STATUS, 58 + .pwr_sta2nd_offs = SPM_PWR_STATUS_2ND, 67 59 .sram_pdn_bits = GENMASK(11, 8), 68 60 .sram_pdn_ack_bits = GENMASK(15, 12), 69 61 }, ··· 73 61 .name = "usb", 74 62 .sta_mask = PWR_STATUS_USB, 75 63 .ctl_offs = SPM_USB_PWR_CON, 64 + .pwr_sta_offs = SPM_PWR_STATUS, 65 + .pwr_sta2nd_offs = SPM_PWR_STATUS_2ND, 76 66 .sram_pdn_bits = GENMASK(11, 8), 77 67 .sram_pdn_ack_bits = GENMASK(15, 12), 78 68 .caps = MTK_SCPD_ACTIVE_WAKEUP, ··· 83 69 .name = "mfg_async", 84 70 .sta_mask = 
PWR_STATUS_MFG_ASYNC, 85 71 .ctl_offs = SPM_MFG_ASYNC_PWR_CON, 72 + .pwr_sta_offs = SPM_PWR_STATUS, 73 + .pwr_sta2nd_offs = SPM_PWR_STATUS_2ND, 86 74 .sram_pdn_bits = GENMASK(11, 8), 87 75 .sram_pdn_ack_bits = 0, 88 76 .caps = MTK_SCPD_DOMAIN_SUPPLY, ··· 93 77 .name = "mfg_2d", 94 78 .sta_mask = PWR_STATUS_MFG_2D, 95 79 .ctl_offs = SPM_MFG_2D_PWR_CON, 80 + .pwr_sta_offs = SPM_PWR_STATUS, 81 + .pwr_sta2nd_offs = SPM_PWR_STATUS_2ND, 96 82 .sram_pdn_bits = GENMASK(11, 8), 97 83 .sram_pdn_ack_bits = GENMASK(13, 12), 98 84 }, ··· 102 84 .name = "mfg", 103 85 .sta_mask = PWR_STATUS_MFG, 104 86 .ctl_offs = SPM_MFG_PWR_CON, 87 + .pwr_sta_offs = SPM_PWR_STATUS, 88 + .pwr_sta2nd_offs = SPM_PWR_STATUS_2ND, 105 89 .sram_pdn_bits = GENMASK(13, 8), 106 90 .sram_pdn_ack_bits = GENMASK(21, 16), 107 91 .bp_infracfg = { ··· 118 98 static const struct scpsys_soc_data mt8173_scpsys_data = { 119 99 .domains_data = scpsys_domain_data_mt8173, 120 100 .num_domains = ARRAY_SIZE(scpsys_domain_data_mt8173), 121 - .pwr_sta_offs = SPM_PWR_STATUS, 122 - .pwr_sta2nd_offs = SPM_PWR_STATUS_2ND, 123 101 }; 124 102 125 103 #endif /* __SOC_MEDIATEK_MT8173_PM_DOMAINS_H */
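The MT8167/MT8173 changes above (and the MT8183 ones that follow) move `pwr_sta_offs`/`pwr_sta2nd_offs` from the per-SoC `scpsys_soc_data` into each `scpsys_domain_data`, so domains on one SoC may read status from different registers, which MT8186 requires. A hedged sketch of the resulting status check against a hypothetical register file (field names follow the diff; the register array is an assumption for illustration):

```c
#include <assert.h>
#include <stdint.h>

/* After the refactor, each domain carries its own status-register offset
 * instead of inheriting a single per-SoC one. */
struct scpsys_domain_data {
	uint32_t sta_mask;	/* status bit for this domain */
	uint32_t pwr_sta_offs;	/* byte offset of its status register */
};

/* hypothetical MMIO register file, indexed by byte offset */
static uint32_t regs[0x200 / 4];

static int scpsys_domain_is_on(const struct scpsys_domain_data *d)
{
	return (regs[d->pwr_sta_offs / 4] & d->sta_mask) != 0;
}
```

With the offsets per domain, two domains on the same chip can legitimately point at different status registers without any special-casing in the shared code.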
+2
drivers/soc/mediatek/mt8183-mmsys.h
··· 25 25 #define MT8183_RDMA0_SOUT_COLOR0 0x1 26 26 #define MT8183_RDMA1_SOUT_DSI0 0x1 27 27 28 + #define MT8183_MMSYS_SW0_RST_B 0x140 29 + 28 30 static const struct mtk_mmsys_routes mmsys_mt8183_routing_table[] = { 29 31 { 30 32 DDP_COMPONENT_OVL0, DDP_COMPONENT_OVL_2L0,
+30 -2
drivers/soc/mediatek/mt8183-pm-domains.h
··· 15 15 .name = "audio", 16 16 .sta_mask = PWR_STATUS_AUDIO, 17 17 .ctl_offs = 0x0314, 18 + .pwr_sta_offs = 0x0180, 19 + .pwr_sta2nd_offs = 0x0184, 18 20 .sram_pdn_bits = GENMASK(11, 8), 19 21 .sram_pdn_ack_bits = GENMASK(15, 12), 20 22 }, ··· 24 22 .name = "conn", 25 23 .sta_mask = PWR_STATUS_CONN, 26 24 .ctl_offs = 0x032c, 25 + .pwr_sta_offs = 0x0180, 26 + .pwr_sta2nd_offs = 0x0184, 27 27 .sram_pdn_bits = 0, 28 28 .sram_pdn_ack_bits = 0, 29 29 .bp_infracfg = { ··· 37 33 .name = "mfg_async", 38 34 .sta_mask = PWR_STATUS_MFG_ASYNC, 39 35 .ctl_offs = 0x0334, 36 + .pwr_sta_offs = 0x0180, 37 + .pwr_sta2nd_offs = 0x0184, 40 38 .sram_pdn_bits = 0, 41 39 .sram_pdn_ack_bits = 0, 42 40 }, ··· 46 40 .name = "mfg", 47 41 .sta_mask = PWR_STATUS_MFG, 48 42 .ctl_offs = 0x0338, 43 + .pwr_sta_offs = 0x0180, 44 + .pwr_sta2nd_offs = 0x0184, 49 45 .sram_pdn_bits = GENMASK(8, 8), 50 46 .sram_pdn_ack_bits = GENMASK(12, 12), 51 47 .caps = MTK_SCPD_DOMAIN_SUPPLY, ··· 56 48 .name = "mfg_core0", 57 49 .sta_mask = BIT(7), 58 50 .ctl_offs = 0x034c, 51 + .pwr_sta_offs = 0x0180, 52 + .pwr_sta2nd_offs = 0x0184, 59 53 .sram_pdn_bits = GENMASK(8, 8), 60 54 .sram_pdn_ack_bits = GENMASK(12, 12), 61 55 }, ··· 65 55 .name = "mfg_core1", 66 56 .sta_mask = BIT(20), 67 57 .ctl_offs = 0x0310, 58 + .pwr_sta_offs = 0x0180, 59 + .pwr_sta2nd_offs = 0x0184, 68 60 .sram_pdn_bits = GENMASK(8, 8), 69 61 .sram_pdn_ack_bits = GENMASK(12, 12), 70 62 }, ··· 74 62 .name = "mfg_2d", 75 63 .sta_mask = PWR_STATUS_MFG_2D, 76 64 .ctl_offs = 0x0348, 65 + .pwr_sta_offs = 0x0180, 66 + .pwr_sta2nd_offs = 0x0184, 77 67 .sram_pdn_bits = GENMASK(8, 8), 78 68 .sram_pdn_ack_bits = GENMASK(12, 12), 79 69 .bp_infracfg = { ··· 89 75 .name = "disp", 90 76 .sta_mask = PWR_STATUS_DISP, 91 77 .ctl_offs = 0x030c, 78 + .pwr_sta_offs = 0x0180, 79 + .pwr_sta2nd_offs = 0x0184, 92 80 .sram_pdn_bits = GENMASK(8, 8), 93 81 .sram_pdn_ack_bits = GENMASK(12, 12), 94 82 .bp_infracfg = { ··· 110 94 .name = "cam", 111 95 .sta_mask = BIT(25), 112 96 
.ctl_offs = 0x0344, 97 + .pwr_sta_offs = 0x0180, 98 + .pwr_sta2nd_offs = 0x0184, 113 99 .sram_pdn_bits = GENMASK(9, 8), 114 100 .sram_pdn_ack_bits = GENMASK(13, 12), 115 101 .bp_infracfg = { ··· 135 117 .name = "isp", 136 118 .sta_mask = PWR_STATUS_ISP, 137 119 .ctl_offs = 0x0308, 120 + .pwr_sta_offs = 0x0180, 121 + .pwr_sta2nd_offs = 0x0184, 138 122 .sram_pdn_bits = GENMASK(9, 8), 139 123 .sram_pdn_ack_bits = GENMASK(13, 12), 140 124 .bp_infracfg = { ··· 160 140 .name = "vdec", 161 141 .sta_mask = BIT(31), 162 142 .ctl_offs = 0x0300, 143 + .pwr_sta_offs = 0x0180, 144 + .pwr_sta2nd_offs = 0x0184, 163 145 .sram_pdn_bits = GENMASK(8, 8), 164 146 .sram_pdn_ack_bits = GENMASK(12, 12), 165 147 .bp_smi = { ··· 175 153 .name = "venc", 176 154 .sta_mask = PWR_STATUS_VENC, 177 155 .ctl_offs = 0x0304, 156 + .pwr_sta_offs = 0x0180, 157 + .pwr_sta2nd_offs = 0x0184, 178 158 .sram_pdn_bits = GENMASK(11, 8), 179 159 .sram_pdn_ack_bits = GENMASK(15, 12), 180 160 .bp_smi = { ··· 190 166 .name = "vpu_top", 191 167 .sta_mask = BIT(26), 192 168 .ctl_offs = 0x0324, 169 + .pwr_sta_offs = 0x0180, 170 + .pwr_sta2nd_offs = 0x0184, 193 171 .sram_pdn_bits = GENMASK(8, 8), 194 172 .sram_pdn_ack_bits = GENMASK(12, 12), 195 173 .bp_infracfg = { ··· 219 193 .name = "vpu_core0", 220 194 .sta_mask = BIT(27), 221 195 .ctl_offs = 0x33c, 196 + .pwr_sta_offs = 0x0180, 197 + .pwr_sta2nd_offs = 0x0184, 222 198 .sram_pdn_bits = GENMASK(11, 8), 223 199 .sram_pdn_ack_bits = GENMASK(13, 12), 224 200 .bp_infracfg = { ··· 239 211 .name = "vpu_core1", 240 212 .sta_mask = BIT(28), 241 213 .ctl_offs = 0x0340, 214 + .pwr_sta_offs = 0x0180, 215 + .pwr_sta2nd_offs = 0x0184, 242 216 .sram_pdn_bits = GENMASK(11, 8), 243 217 .sram_pdn_ack_bits = GENMASK(13, 12), 244 218 .bp_infracfg = { ··· 260 230 static const struct scpsys_soc_data mt8183_scpsys_data = { 261 231 .domains_data = scpsys_domain_data_mt8183, 262 232 .num_domains = ARRAY_SIZE(scpsys_domain_data_mt8183), 263 - .pwr_sta_offs = 0x0180, 264 - 
.pwr_sta2nd_offs = 0x0184 265 233 }; 266 234 267 235 #endif /* __SOC_MEDIATEK_MT8183_PM_DOMAINS_H */
+115
drivers/soc/mediatek/mt8186-mmsys.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0-only */ 2 + 3 + #ifndef __SOC_MEDIATEK_MT8186_MMSYS_H 4 + #define __SOC_MEDIATEK_MT8186_MMSYS_H 5 + 6 + #define MT8186_MMSYS_OVL_CON 0xF04 7 + #define MT8186_MMSYS_OVL0_CON_MASK 0x3 8 + #define MT8186_MMSYS_OVL0_2L_CON_MASK 0xC 9 + #define MT8186_OVL0_GO_BLEND BIT(0) 10 + #define MT8186_OVL0_GO_BG BIT(1) 11 + #define MT8186_OVL0_2L_GO_BLEND BIT(2) 12 + #define MT8186_OVL0_2L_GO_BG BIT(3) 13 + #define MT8186_DISP_RDMA0_SOUT_SEL 0xF0C 14 + #define MT8186_RDMA0_SOUT_SEL_MASK 0xF 15 + #define MT8186_RDMA0_SOUT_TO_DSI0 (0) 16 + #define MT8186_RDMA0_SOUT_TO_COLOR0 (1) 17 + #define MT8186_RDMA0_SOUT_TO_DPI0 (2) 18 + #define MT8186_DISP_OVL0_2L_MOUT_EN 0xF14 19 + #define MT8186_OVL0_2L_MOUT_EN_MASK 0xF 20 + #define MT8186_OVL0_2L_MOUT_TO_RDMA0 BIT(0) 21 + #define MT8186_OVL0_2L_MOUT_TO_RDMA1 BIT(3) 22 + #define MT8186_DISP_OVL0_MOUT_EN 0xF18 23 + #define MT8186_OVL0_MOUT_EN_MASK 0xF 24 + #define MT8186_OVL0_MOUT_TO_RDMA0 BIT(0) 25 + #define MT8186_OVL0_MOUT_TO_RDMA1 BIT(3) 26 + #define MT8186_DISP_DITHER0_MOUT_EN 0xF20 27 + #define MT8186_DITHER0_MOUT_EN_MASK 0xF 28 + #define MT8186_DITHER0_MOUT_TO_DSI0 BIT(0) 29 + #define MT8186_DITHER0_MOUT_TO_RDMA1 BIT(2) 30 + #define MT8186_DITHER0_MOUT_TO_DPI0 BIT(3) 31 + #define MT8186_DISP_RDMA0_SEL_IN 0xF28 32 + #define MT8186_RDMA0_SEL_IN_MASK 0xF 33 + #define MT8186_RDMA0_FROM_OVL0 0 34 + #define MT8186_RDMA0_FROM_OVL0_2L 2 35 + #define MT8186_DISP_DSI0_SEL_IN 0xF30 36 + #define MT8186_DSI0_SEL_IN_MASK 0xF 37 + #define MT8186_DSI0_FROM_RDMA0 0 38 + #define MT8186_DSI0_FROM_DITHER0 1 39 + #define MT8186_DSI0_FROM_RDMA1 2 40 + #define MT8186_DISP_RDMA1_MOUT_EN 0xF3C 41 + #define MT8186_RDMA1_MOUT_EN_MASK 0xF 42 + #define MT8186_RDMA1_MOUT_TO_DPI0_SEL BIT(0) 43 + #define MT8186_RDMA1_MOUT_TO_DSI0_SEL BIT(2) 44 + #define MT8186_DISP_RDMA1_SEL_IN 0xF40 45 + #define MT8186_RDMA1_SEL_IN_MASK 0xF 46 + #define MT8186_RDMA1_FROM_OVL0 0 47 + #define MT8186_RDMA1_FROM_OVL0_2L 2 48 + #define 
MT8186_RDMA1_FROM_DITHER0 3 49 + #define MT8186_DISP_DPI0_SEL_IN 0xF44 50 + #define MT8186_DPI0_SEL_IN_MASK 0xF 51 + #define MT8186_DPI0_FROM_RDMA1 0 52 + #define MT8186_DPI0_FROM_DITHER0 1 53 + #define MT8186_DPI0_FROM_RDMA0 2 54 + 55 + #define MT8186_MMSYS_SW0_RST_B 0x160 56 + 57 + static const struct mtk_mmsys_routes mmsys_mt8186_routing_table[] = { 58 + { 59 + DDP_COMPONENT_OVL0, DDP_COMPONENT_RDMA0, 60 + MT8186_DISP_OVL0_MOUT_EN, MT8186_OVL0_MOUT_EN_MASK, 61 + MT8186_OVL0_MOUT_TO_RDMA0 62 + }, 63 + { 64 + DDP_COMPONENT_OVL0, DDP_COMPONENT_RDMA0, 65 + MT8186_DISP_RDMA0_SEL_IN, MT8186_RDMA0_SEL_IN_MASK, 66 + MT8186_RDMA0_FROM_OVL0 67 + }, 68 + { 69 + DDP_COMPONENT_OVL0, DDP_COMPONENT_RDMA0, 70 + MT8186_MMSYS_OVL_CON, MT8186_MMSYS_OVL0_CON_MASK, 71 + MT8186_OVL0_GO_BLEND 72 + }, 73 + { 74 + DDP_COMPONENT_RDMA0, DDP_COMPONENT_COLOR0, 75 + MT8186_DISP_RDMA0_SOUT_SEL, MT8186_RDMA0_SOUT_SEL_MASK, 76 + MT8186_RDMA0_SOUT_TO_COLOR0 77 + }, 78 + { 79 + DDP_COMPONENT_DITHER, DDP_COMPONENT_DSI0, 80 + MT8186_DISP_DITHER0_MOUT_EN, MT8186_DITHER0_MOUT_EN_MASK, 81 + MT8186_DITHER0_MOUT_TO_DSI0, 82 + }, 83 + { 84 + DDP_COMPONENT_DITHER, DDP_COMPONENT_DSI0, 85 + MT8186_DISP_DSI0_SEL_IN, MT8186_DSI0_SEL_IN_MASK, 86 + MT8186_DSI0_FROM_DITHER0 87 + }, 88 + { 89 + DDP_COMPONENT_OVL_2L0, DDP_COMPONENT_RDMA1, 90 + MT8186_DISP_OVL0_2L_MOUT_EN, MT8186_OVL0_2L_MOUT_EN_MASK, 91 + MT8186_OVL0_2L_MOUT_TO_RDMA1 92 + }, 93 + { 94 + DDP_COMPONENT_OVL_2L0, DDP_COMPONENT_RDMA1, 95 + MT8186_DISP_RDMA1_SEL_IN, MT8186_RDMA1_SEL_IN_MASK, 96 + MT8186_RDMA1_FROM_OVL0_2L 97 + }, 98 + { 99 + DDP_COMPONENT_OVL_2L0, DDP_COMPONENT_RDMA1, 100 + MT8186_MMSYS_OVL_CON, MT8186_MMSYS_OVL0_2L_CON_MASK, 101 + MT8186_OVL0_2L_GO_BLEND 102 + }, 103 + { 104 + DDP_COMPONENT_RDMA1, DDP_COMPONENT_DPI0, 105 + MT8186_DISP_RDMA1_MOUT_EN, MT8186_RDMA1_MOUT_EN_MASK, 106 + MT8186_RDMA1_MOUT_TO_DPI0_SEL 107 + }, 108 + { 109 + DDP_COMPONENT_RDMA1, DDP_COMPONENT_DPI0, 110 + MT8186_DISP_DPI0_SEL_IN, MT8186_DPI0_SEL_IN_MASK, 111 + 
MT8186_DPI0_FROM_RDMA1 112 + }, 113 + }; 114 + 115 + #endif /* __SOC_MEDIATEK_MT8186_MMSYS_H */
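Each entry in the MT8186 routing table above names a register, a field mask, and the mux value that connects one display component to the next; the mmsys driver applies an entry with a read-modify-write. A minimal sketch of that update step (pure function over a register value, since the MMIO access itself is driver plumbing):

```c
#include <assert.h>
#include <stdint.h>

/* One route entry, as in the mtk_mmsys_routes table: which register to
 * touch, which field within it, and the mux value to program. */
struct mmsys_route {
	uint32_t addr;	/* register offset, e.g. MT8186_DISP_RDMA0_SEL_IN */
	uint32_t mask;	/* field mask, e.g. MT8186_RDMA0_SEL_IN_MASK */
	uint32_t val;	/* mux value, e.g. MT8186_RDMA0_FROM_OVL0_2L */
};

/* Read-modify-write: clear the route's field, then set the new mux value,
 * leaving all other fields in the register untouched. */
static uint32_t mmsys_apply_route(uint32_t reg, const struct mmsys_route *r)
{
	return (reg & ~r->mask) | (r->val & r->mask);
}
```

Masking the value against the field mask as well guards a malformed table entry from clobbering neighbouring fields in the same register.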
+344
drivers/soc/mediatek/mt8186-pm-domains.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0-only */ 2 + /* 3 + * Copyright (c) 2022 MediaTek Inc. 4 + * Author: Chun-Jie Chen <chun-jie.chen@mediatek.com> 5 + */ 6 + 7 + #ifndef __SOC_MEDIATEK_MT8186_PM_DOMAINS_H 8 + #define __SOC_MEDIATEK_MT8186_PM_DOMAINS_H 9 + 10 + #include "mtk-pm-domains.h" 11 + #include <dt-bindings/power/mt8186-power.h> 12 + 13 + /* 14 + * MT8186 power domain support 15 + */ 16 + 17 + static const struct scpsys_domain_data scpsys_domain_data_mt8186[] = { 18 + [MT8186_POWER_DOMAIN_MFG0] = { 19 + .name = "mfg0", 20 + .sta_mask = BIT(2), 21 + .ctl_offs = 0x308, 22 + .pwr_sta_offs = 0x16C, 23 + .pwr_sta2nd_offs = 0x170, 24 + .sram_pdn_bits = BIT(8), 25 + .sram_pdn_ack_bits = BIT(12), 26 + .caps = MTK_SCPD_KEEP_DEFAULT_OFF | MTK_SCPD_DOMAIN_SUPPLY, 27 + }, 28 + [MT8186_POWER_DOMAIN_MFG1] = { 29 + .name = "mfg1", 30 + .sta_mask = BIT(3), 31 + .ctl_offs = 0x30c, 32 + .pwr_sta_offs = 0x16C, 33 + .pwr_sta2nd_offs = 0x170, 34 + .sram_pdn_bits = BIT(8), 35 + .sram_pdn_ack_bits = BIT(12), 36 + .bp_infracfg = { 37 + BUS_PROT_WR_IGN(MT8186_TOP_AXI_PROT_EN_1_MFG1_STEP1, 38 + MT8186_TOP_AXI_PROT_EN_1_SET, 39 + MT8186_TOP_AXI_PROT_EN_1_CLR, 40 + MT8186_TOP_AXI_PROT_EN_1_STA), 41 + BUS_PROT_WR_IGN(MT8186_TOP_AXI_PROT_EN_MFG1_STEP2, 42 + MT8186_TOP_AXI_PROT_EN_SET, 43 + MT8186_TOP_AXI_PROT_EN_CLR, 44 + MT8186_TOP_AXI_PROT_EN_STA), 45 + BUS_PROT_WR_IGN(MT8186_TOP_AXI_PROT_EN_MFG1_STEP3, 46 + MT8186_TOP_AXI_PROT_EN_SET, 47 + MT8186_TOP_AXI_PROT_EN_CLR, 48 + MT8186_TOP_AXI_PROT_EN_STA), 49 + BUS_PROT_WR_IGN(MT8186_TOP_AXI_PROT_EN_1_MFG1_STEP4, 50 + MT8186_TOP_AXI_PROT_EN_1_SET, 51 + MT8186_TOP_AXI_PROT_EN_1_CLR, 52 + MT8186_TOP_AXI_PROT_EN_1_STA), 53 + }, 54 + .caps = MTK_SCPD_KEEP_DEFAULT_OFF, 55 + }, 56 + [MT8186_POWER_DOMAIN_MFG2] = { 57 + .name = "mfg2", 58 + .sta_mask = BIT(4), 59 + .ctl_offs = 0x310, 60 + .pwr_sta_offs = 0x16C, 61 + .pwr_sta2nd_offs = 0x170, 62 + .sram_pdn_bits = BIT(8), 63 + .sram_pdn_ack_bits = BIT(12), 64 + .caps = 
MTK_SCPD_KEEP_DEFAULT_OFF, 65 + }, 66 + [MT8186_POWER_DOMAIN_MFG3] = { 67 + .name = "mfg3", 68 + .sta_mask = BIT(5), 69 + .ctl_offs = 0x314, 70 + .pwr_sta_offs = 0x16C, 71 + .pwr_sta2nd_offs = 0x170, 72 + .sram_pdn_bits = BIT(8), 73 + .sram_pdn_ack_bits = BIT(12), 74 + .caps = MTK_SCPD_KEEP_DEFAULT_OFF, 75 + }, 76 + [MT8186_POWER_DOMAIN_SSUSB] = { 77 + .name = "ssusb", 78 + .sta_mask = BIT(20), 79 + .ctl_offs = 0x9F0, 80 + .pwr_sta_offs = 0x16C, 81 + .pwr_sta2nd_offs = 0x170, 82 + .sram_pdn_bits = BIT(8), 83 + .sram_pdn_ack_bits = BIT(12), 84 + .caps = MTK_SCPD_ACTIVE_WAKEUP, 85 + }, 86 + [MT8186_POWER_DOMAIN_SSUSB_P1] = { 87 + .name = "ssusb_p1", 88 + .sta_mask = BIT(19), 89 + .ctl_offs = 0x9F4, 90 + .pwr_sta_offs = 0x16C, 91 + .pwr_sta2nd_offs = 0x170, 92 + .sram_pdn_bits = BIT(8), 93 + .sram_pdn_ack_bits = BIT(12), 94 + .caps = MTK_SCPD_ACTIVE_WAKEUP, 95 + }, 96 + [MT8186_POWER_DOMAIN_DIS] = { 97 + .name = "dis", 98 + .sta_mask = BIT(21), 99 + .ctl_offs = 0x354, 100 + .pwr_sta_offs = 0x16C, 101 + .pwr_sta2nd_offs = 0x170, 102 + .sram_pdn_bits = BIT(8), 103 + .sram_pdn_ack_bits = BIT(12), 104 + .bp_infracfg = { 105 + BUS_PROT_WR_IGN(MT8186_TOP_AXI_PROT_EN_1_DIS_STEP1, 106 + MT8186_TOP_AXI_PROT_EN_1_SET, 107 + MT8186_TOP_AXI_PROT_EN_1_CLR, 108 + MT8186_TOP_AXI_PROT_EN_1_STA), 109 + BUS_PROT_WR_IGN(MT8186_TOP_AXI_PROT_EN_DIS_STEP2, 110 + MT8186_TOP_AXI_PROT_EN_SET, 111 + MT8186_TOP_AXI_PROT_EN_CLR, 112 + MT8186_TOP_AXI_PROT_EN_STA), 113 + }, 114 + }, 115 + [MT8186_POWER_DOMAIN_IMG] = { 116 + .name = "img", 117 + .sta_mask = BIT(13), 118 + .ctl_offs = 0x334, 119 + .pwr_sta_offs = 0x16C, 120 + .pwr_sta2nd_offs = 0x170, 121 + .sram_pdn_bits = BIT(8), 122 + .sram_pdn_ack_bits = BIT(12), 123 + .bp_infracfg = { 124 + BUS_PROT_WR_IGN(MT8186_TOP_AXI_PROT_EN_1_IMG_STEP1, 125 + MT8186_TOP_AXI_PROT_EN_1_SET, 126 + MT8186_TOP_AXI_PROT_EN_1_CLR, 127 + MT8186_TOP_AXI_PROT_EN_1_STA), 128 + BUS_PROT_WR_IGN(MT8186_TOP_AXI_PROT_EN_1_IMG_STEP2, 129 + MT8186_TOP_AXI_PROT_EN_1_SET, 130 
+ MT8186_TOP_AXI_PROT_EN_1_CLR, 131 + MT8186_TOP_AXI_PROT_EN_1_STA), 132 + }, 133 + .caps = MTK_SCPD_KEEP_DEFAULT_OFF, 134 + }, 135 + [MT8186_POWER_DOMAIN_IMG2] = { 136 + .name = "img2", 137 + .sta_mask = BIT(14), 138 + .ctl_offs = 0x338, 139 + .pwr_sta_offs = 0x16C, 140 + .pwr_sta2nd_offs = 0x170, 141 + .sram_pdn_bits = BIT(8), 142 + .sram_pdn_ack_bits = BIT(12), 143 + .caps = MTK_SCPD_KEEP_DEFAULT_OFF, 144 + }, 145 + [MT8186_POWER_DOMAIN_IPE] = { 146 + .name = "ipe", 147 + .sta_mask = BIT(15), 148 + .ctl_offs = 0x33C, 149 + .pwr_sta_offs = 0x16C, 150 + .pwr_sta2nd_offs = 0x170, 151 + .sram_pdn_bits = BIT(8), 152 + .sram_pdn_ack_bits = BIT(12), 153 + .bp_infracfg = { 154 + BUS_PROT_WR_IGN(MT8186_TOP_AXI_PROT_EN_1_IPE_STEP1, 155 + MT8186_TOP_AXI_PROT_EN_1_SET, 156 + MT8186_TOP_AXI_PROT_EN_1_CLR, 157 + MT8186_TOP_AXI_PROT_EN_1_STA), 158 + BUS_PROT_WR_IGN(MT8186_TOP_AXI_PROT_EN_1_IPE_STEP2, 159 + MT8186_TOP_AXI_PROT_EN_1_SET, 160 + MT8186_TOP_AXI_PROT_EN_1_CLR, 161 + MT8186_TOP_AXI_PROT_EN_1_STA), 162 + }, 163 + .caps = MTK_SCPD_KEEP_DEFAULT_OFF, 164 + }, 165 + [MT8186_POWER_DOMAIN_CAM] = { 166 + .name = "cam", 167 + .sta_mask = BIT(23), 168 + .ctl_offs = 0x35C, 169 + .pwr_sta_offs = 0x16C, 170 + .pwr_sta2nd_offs = 0x170, 171 + .sram_pdn_bits = BIT(8), 172 + .sram_pdn_ack_bits = BIT(12), 173 + .bp_infracfg = { 174 + BUS_PROT_WR_IGN(MT8186_TOP_AXI_PROT_EN_1_CAM_STEP1, 175 + MT8186_TOP_AXI_PROT_EN_1_SET, 176 + MT8186_TOP_AXI_PROT_EN_1_CLR, 177 + MT8186_TOP_AXI_PROT_EN_1_STA), 178 + BUS_PROT_WR_IGN(MT8186_TOP_AXI_PROT_EN_1_CAM_STEP2, 179 + MT8186_TOP_AXI_PROT_EN_1_SET, 180 + MT8186_TOP_AXI_PROT_EN_1_CLR, 181 + MT8186_TOP_AXI_PROT_EN_1_STA), 182 + }, 183 + .caps = MTK_SCPD_KEEP_DEFAULT_OFF, 184 + }, 185 + [MT8186_POWER_DOMAIN_CAM_RAWA] = { 186 + .name = "cam_rawa", 187 + .sta_mask = BIT(24), 188 + .ctl_offs = 0x360, 189 + .pwr_sta_offs = 0x16C, 190 + .pwr_sta2nd_offs = 0x170, 191 + .sram_pdn_bits = BIT(8), 192 + .sram_pdn_ack_bits = BIT(12), 193 + .caps = 
	MTK_SCPD_KEEP_DEFAULT_OFF,
+	},
+	[MT8186_POWER_DOMAIN_CAM_RAWB] = {
+		.name = "cam_rawb",
+		.sta_mask = BIT(25),
+		.ctl_offs = 0x364,
+		.pwr_sta_offs = 0x16C,
+		.pwr_sta2nd_offs = 0x170,
+		.sram_pdn_bits = BIT(8),
+		.sram_pdn_ack_bits = BIT(12),
+		.caps = MTK_SCPD_KEEP_DEFAULT_OFF,
+	},
+	[MT8186_POWER_DOMAIN_VENC] = {
+		.name = "venc",
+		.sta_mask = BIT(18),
+		.ctl_offs = 0x348,
+		.pwr_sta_offs = 0x16C,
+		.pwr_sta2nd_offs = 0x170,
+		.sram_pdn_bits = BIT(8),
+		.sram_pdn_ack_bits = BIT(12),
+		.bp_infracfg = {
+			BUS_PROT_WR_IGN(MT8186_TOP_AXI_PROT_EN_1_VENC_STEP1,
+					MT8186_TOP_AXI_PROT_EN_1_SET,
+					MT8186_TOP_AXI_PROT_EN_1_CLR,
+					MT8186_TOP_AXI_PROT_EN_1_STA),
+			BUS_PROT_WR_IGN(MT8186_TOP_AXI_PROT_EN_1_VENC_STEP2,
+					MT8186_TOP_AXI_PROT_EN_1_SET,
+					MT8186_TOP_AXI_PROT_EN_1_CLR,
+					MT8186_TOP_AXI_PROT_EN_1_STA),
+		},
+		.caps = MTK_SCPD_KEEP_DEFAULT_OFF,
+	},
+	[MT8186_POWER_DOMAIN_VDEC] = {
+		.name = "vdec",
+		.sta_mask = BIT(16),
+		.ctl_offs = 0x340,
+		.pwr_sta_offs = 0x16C,
+		.pwr_sta2nd_offs = 0x170,
+		.sram_pdn_bits = BIT(8),
+		.sram_pdn_ack_bits = BIT(12),
+		.bp_infracfg = {
+			BUS_PROT_WR_IGN(MT8186_TOP_AXI_PROT_EN_1_VDEC_STEP1,
+					MT8186_TOP_AXI_PROT_EN_1_SET,
+					MT8186_TOP_AXI_PROT_EN_1_CLR,
+					MT8186_TOP_AXI_PROT_EN_1_STA),
+			BUS_PROT_WR_IGN(MT8186_TOP_AXI_PROT_EN_1_VDEC_STEP2,
+					MT8186_TOP_AXI_PROT_EN_1_SET,
+					MT8186_TOP_AXI_PROT_EN_1_CLR,
+					MT8186_TOP_AXI_PROT_EN_1_STA),
+		},
+		.caps = MTK_SCPD_KEEP_DEFAULT_OFF,
+	},
+	[MT8186_POWER_DOMAIN_WPE] = {
+		.name = "wpe",
+		.sta_mask = BIT(0),
+		.ctl_offs = 0x3F8,
+		.pwr_sta_offs = 0x16C,
+		.pwr_sta2nd_offs = 0x170,
+		.sram_pdn_bits = BIT(8),
+		.sram_pdn_ack_bits = BIT(12),
+		.bp_infracfg = {
+			BUS_PROT_WR_IGN(MT8186_TOP_AXI_PROT_EN_2_WPE_STEP1,
+					MT8186_TOP_AXI_PROT_EN_2_SET,
+					MT8186_TOP_AXI_PROT_EN_2_CLR,
+					MT8186_TOP_AXI_PROT_EN_2_STA),
+			BUS_PROT_WR_IGN(MT8186_TOP_AXI_PROT_EN_2_WPE_STEP2,
+					MT8186_TOP_AXI_PROT_EN_2_SET,
+					MT8186_TOP_AXI_PROT_EN_2_CLR,
+					MT8186_TOP_AXI_PROT_EN_2_STA),
+		},
+		.caps = MTK_SCPD_KEEP_DEFAULT_OFF,
+	},
+	[MT8186_POWER_DOMAIN_CONN_ON] = {
+		.name = "conn_on",
+		.sta_mask = BIT(1),
+		.ctl_offs = 0x304,
+		.pwr_sta_offs = 0x16C,
+		.pwr_sta2nd_offs = 0x170,
+		.bp_infracfg = {
+			BUS_PROT_WR_IGN(MT8186_TOP_AXI_PROT_EN_1_CONN_ON_STEP1,
+					MT8186_TOP_AXI_PROT_EN_1_SET,
+					MT8186_TOP_AXI_PROT_EN_1_CLR,
+					MT8186_TOP_AXI_PROT_EN_1_STA),
+			BUS_PROT_WR_IGN(MT8186_TOP_AXI_PROT_EN_CONN_ON_STEP2,
+					MT8186_TOP_AXI_PROT_EN_SET,
+					MT8186_TOP_AXI_PROT_EN_CLR,
+					MT8186_TOP_AXI_PROT_EN_STA),
+			BUS_PROT_WR_IGN(MT8186_TOP_AXI_PROT_EN_CONN_ON_STEP3,
+					MT8186_TOP_AXI_PROT_EN_SET,
+					MT8186_TOP_AXI_PROT_EN_CLR,
+					MT8186_TOP_AXI_PROT_EN_STA),
+			BUS_PROT_WR_IGN(MT8186_TOP_AXI_PROT_EN_CONN_ON_STEP4,
+					MT8186_TOP_AXI_PROT_EN_SET,
+					MT8186_TOP_AXI_PROT_EN_CLR,
+					MT8186_TOP_AXI_PROT_EN_STA),
+		},
+		.caps = MTK_SCPD_KEEP_DEFAULT_OFF | MTK_SCPD_ACTIVE_WAKEUP,
+	},
+	[MT8186_POWER_DOMAIN_CSIRX_TOP] = {
+		.name = "csirx_top",
+		.sta_mask = BIT(6),
+		.ctl_offs = 0x318,
+		.pwr_sta_offs = 0x16C,
+		.pwr_sta2nd_offs = 0x170,
+		.sram_pdn_bits = BIT(8),
+		.sram_pdn_ack_bits = BIT(12),
+		.caps = MTK_SCPD_KEEP_DEFAULT_OFF,
+	},
+	[MT8186_POWER_DOMAIN_ADSP_AO] = {
+		.name = "adsp_ao",
+		.sta_mask = BIT(17),
+		.ctl_offs = 0x9FC,
+		.pwr_sta_offs = 0x16C,
+		.pwr_sta2nd_offs = 0x170,
+		.caps = MTK_SCPD_KEEP_DEFAULT_OFF,
+	},
+	[MT8186_POWER_DOMAIN_ADSP_INFRA] = {
+		.name = "adsp_infra",
+		.sta_mask = BIT(10),
+		.ctl_offs = 0x9F8,
+		.pwr_sta_offs = 0x16C,
+		.pwr_sta2nd_offs = 0x170,
+		.caps = MTK_SCPD_KEEP_DEFAULT_OFF,
+	},
+	[MT8186_POWER_DOMAIN_ADSP_TOP] = {
+		.name = "adsp_top",
+		.sta_mask = BIT(31),
+		.ctl_offs = 0x3E4,
+		.pwr_sta_offs = 0x16C,
+		.pwr_sta2nd_offs = 0x170,
+		.sram_pdn_bits = BIT(8),
+		.sram_pdn_ack_bits = BIT(12),
+		.bp_infracfg = {
+			BUS_PROT_WR_IGN(MT8186_TOP_AXI_PROT_EN_3_ADSP_TOP_STEP1,
+					MT8186_TOP_AXI_PROT_EN_3_SET,
+					MT8186_TOP_AXI_PROT_EN_3_CLR,
+					MT8186_TOP_AXI_PROT_EN_3_STA),
+			BUS_PROT_WR_IGN(MT8186_TOP_AXI_PROT_EN_3_ADSP_TOP_STEP2,
+					MT8186_TOP_AXI_PROT_EN_3_SET,
+					MT8186_TOP_AXI_PROT_EN_3_CLR,
+					MT8186_TOP_AXI_PROT_EN_3_STA),
+		},
+		.caps = MTK_SCPD_SRAM_ISO | MTK_SCPD_KEEP_DEFAULT_OFF | MTK_SCPD_ACTIVE_WAKEUP,
+	},
+};
+
+static const struct scpsys_soc_data mt8186_scpsys_data = {
+	.domains_data = scpsys_domain_data_mt8186,
+	.num_domains = ARRAY_SIZE(scpsys_domain_data_mt8186),
+};
+
+#endif /* __SOC_MEDIATEK_MT8186_PM_DOMAINS_H */
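Each `bp_infracfg` entry above describes one bus-protection step: the step's mask is written to a SET register to raise protection before the domain powers down, written to the matching CLR register to release it afterwards, and the STA register reports the acknowledgement (the `_IGN` variant skips polling for the clear ack). A minimal plain-C sketch of those semantics, with mocked registers and illustrative names rather than the driver's actual helpers:

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative model of one bus-protection step (cf. BUS_PROT_WR_IGN).
 * SET/CLR are write-1 registers whose effect latches into STA. */
struct bus_prot {
    uint32_t bus_prot_mask;
    uint32_t set_offs, clr_offs, sta_offs;
};

static uint32_t sta_regs[0x400];    /* mocked STA register file, word-indexed */

static void bus_prot_enable(const struct bus_prot *bp)
{
    /* Writing the mask to SET latches those bits into STA; the real
     * driver then polls STA until (STA & mask) == mask. */
    sta_regs[bp->sta_offs / 4] |= bp->bus_prot_mask;
}

static void bus_prot_disable(const struct bus_prot *bp)
{
    /* Writing the mask to CLR clears the bits; with the _IGN variant
     * the driver does not wait for the clear acknowledgement. */
    sta_regs[bp->sta_offs / 4] &= ~bp->bus_prot_mask;
}
```

The step lists (STEP1, STEP2, ...) matter because protection must be raised and released in a fixed order across several register banks.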
+42 -2
drivers/soc/mediatek/mt8192-pm-domains.h
···
 		.name = "audio",
 		.sta_mask = BIT(21),
 		.ctl_offs = 0x0354,
+		.pwr_sta_offs = 0x016c,
+		.pwr_sta2nd_offs = 0x0170,
 		.sram_pdn_bits = GENMASK(8, 8),
 		.sram_pdn_ack_bits = GENMASK(12, 12),
 		.bp_infracfg = {
···
 		.name = "conn",
 		.sta_mask = PWR_STATUS_CONN,
 		.ctl_offs = 0x0304,
+		.pwr_sta_offs = 0x016c,
+		.pwr_sta2nd_offs = 0x0170,
 		.sram_pdn_bits = 0,
 		.sram_pdn_ack_bits = 0,
 		.bp_infracfg = {
···
 		.name = "mfg0",
 		.sta_mask = BIT(2),
 		.ctl_offs = 0x0308,
+		.pwr_sta_offs = 0x016c,
+		.pwr_sta2nd_offs = 0x0170,
 		.sram_pdn_bits = GENMASK(8, 8),
 		.sram_pdn_ack_bits = GENMASK(12, 12),
 	},
···
 		.name = "mfg1",
 		.sta_mask = BIT(3),
 		.ctl_offs = 0x030c,
+		.pwr_sta_offs = 0x016c,
+		.pwr_sta2nd_offs = 0x0170,
 		.sram_pdn_bits = GENMASK(8, 8),
 		.sram_pdn_ack_bits = GENMASK(12, 12),
 		.bp_infracfg = {
···
 		.name = "mfg2",
 		.sta_mask = BIT(4),
 		.ctl_offs = 0x0310,
+		.pwr_sta_offs = 0x016c,
+		.pwr_sta2nd_offs = 0x0170,
 		.sram_pdn_bits = GENMASK(8, 8),
 		.sram_pdn_ack_bits = GENMASK(12, 12),
 	},
···
 		.name = "mfg3",
 		.sta_mask = BIT(5),
 		.ctl_offs = 0x0314,
+		.pwr_sta_offs = 0x016c,
+		.pwr_sta2nd_offs = 0x0170,
 		.sram_pdn_bits = GENMASK(8, 8),
 		.sram_pdn_ack_bits = GENMASK(12, 12),
 	},
···
 		.name = "mfg4",
 		.sta_mask = BIT(6),
 		.ctl_offs = 0x0318,
+		.pwr_sta_offs = 0x016c,
+		.pwr_sta2nd_offs = 0x0170,
 		.sram_pdn_bits = GENMASK(8, 8),
 		.sram_pdn_ack_bits = GENMASK(12, 12),
 	},
···
 		.name = "mfg5",
 		.sta_mask = BIT(7),
 		.ctl_offs = 0x031c,
+		.pwr_sta_offs = 0x016c,
+		.pwr_sta2nd_offs = 0x0170,
 		.sram_pdn_bits = GENMASK(8, 8),
 		.sram_pdn_ack_bits = GENMASK(12, 12),
 	},
···
 		.name = "mfg6",
 		.sta_mask = BIT(8),
 		.ctl_offs = 0x0320,
+		.pwr_sta_offs = 0x016c,
+		.pwr_sta2nd_offs = 0x0170,
 		.sram_pdn_bits = GENMASK(8, 8),
 		.sram_pdn_ack_bits = GENMASK(12, 12),
 	},
···
 		.name = "disp",
 		.sta_mask = BIT(20),
 		.ctl_offs = 0x0350,
+		.pwr_sta_offs = 0x016c,
+		.pwr_sta2nd_offs = 0x0170,
 		.sram_pdn_bits = GENMASK(8, 8),
 		.sram_pdn_ack_bits = GENMASK(12, 12),
 		.bp_infracfg = {
···
 		.name = "ipe",
 		.sta_mask = BIT(14),
 		.ctl_offs = 0x0338,
+		.pwr_sta_offs = 0x016c,
+		.pwr_sta2nd_offs = 0x0170,
 		.sram_pdn_bits = GENMASK(8, 8),
 		.sram_pdn_ack_bits = GENMASK(12, 12),
 		.bp_infracfg = {
···
 		.name = "isp",
 		.sta_mask = BIT(12),
 		.ctl_offs = 0x0330,
+		.pwr_sta_offs = 0x016c,
+		.pwr_sta2nd_offs = 0x0170,
 		.sram_pdn_bits = GENMASK(8, 8),
 		.sram_pdn_ack_bits = GENMASK(12, 12),
 		.bp_infracfg = {
···
 		.name = "isp2",
 		.sta_mask = BIT(13),
 		.ctl_offs = 0x0334,
+		.pwr_sta_offs = 0x016c,
+		.pwr_sta2nd_offs = 0x0170,
 		.sram_pdn_bits = GENMASK(8, 8),
 		.sram_pdn_ack_bits = GENMASK(12, 12),
 		.bp_infracfg = {
···
 		.name = "mdp",
 		.sta_mask = BIT(19),
 		.ctl_offs = 0x034c,
+		.pwr_sta_offs = 0x016c,
+		.pwr_sta2nd_offs = 0x0170,
 		.sram_pdn_bits = GENMASK(8, 8),
 		.sram_pdn_ack_bits = GENMASK(12, 12),
 		.bp_infracfg = {
···
 		.name = "venc",
 		.sta_mask = BIT(17),
 		.ctl_offs = 0x0344,
+		.pwr_sta_offs = 0x016c,
+		.pwr_sta2nd_offs = 0x0170,
 		.sram_pdn_bits = GENMASK(8, 8),
 		.sram_pdn_ack_bits = GENMASK(12, 12),
 		.bp_infracfg = {
···
 		.name = "vdec",
 		.sta_mask = BIT(15),
 		.ctl_offs = 0x033c,
+		.pwr_sta_offs = 0x016c,
+		.pwr_sta2nd_offs = 0x0170,
 		.sram_pdn_bits = GENMASK(8, 8),
 		.sram_pdn_ack_bits = GENMASK(12, 12),
 		.bp_infracfg = {
···
 		.name = "vdec2",
 		.sta_mask = BIT(16),
 		.ctl_offs = 0x0340,
+		.pwr_sta_offs = 0x016c,
+		.pwr_sta2nd_offs = 0x0170,
 		.sram_pdn_bits = GENMASK(8, 8),
 		.sram_pdn_ack_bits = GENMASK(12, 12),
 	},
···
 		.name = "cam",
 		.sta_mask = BIT(23),
 		.ctl_offs = 0x035c,
+		.pwr_sta_offs = 0x016c,
+		.pwr_sta2nd_offs = 0x0170,
 		.sram_pdn_bits = GENMASK(8, 8),
 		.sram_pdn_ack_bits = GENMASK(12, 12),
 		.bp_infracfg = {
···
 		.name = "cam_rawa",
 		.sta_mask = BIT(24),
 		.ctl_offs = 0x0360,
+		.pwr_sta_offs = 0x016c,
+		.pwr_sta2nd_offs = 0x0170,
 		.sram_pdn_bits = GENMASK(8, 8),
 		.sram_pdn_ack_bits = GENMASK(12, 12),
 	},
···
 		.name = "cam_rawb",
 		.sta_mask = BIT(25),
 		.ctl_offs = 0x0364,
+		.pwr_sta_offs = 0x016c,
+		.pwr_sta2nd_offs = 0x0170,
 		.sram_pdn_bits = GENMASK(8, 8),
 		.sram_pdn_ack_bits = GENMASK(12, 12),
 	},
···
 		.name = "cam_rawc",
 		.sta_mask = BIT(26),
 		.ctl_offs = 0x0368,
+		.pwr_sta_offs = 0x016c,
+		.pwr_sta2nd_offs = 0x0170,
 		.sram_pdn_bits = GENMASK(8, 8),
 		.sram_pdn_ack_bits = GENMASK(12, 12),
 	},
···
 static const struct scpsys_soc_data mt8192_scpsys_data = {
 	.domains_data = scpsys_domain_data_mt8192,
 	.num_domains = ARRAY_SIZE(scpsys_domain_data_mt8192),
-	.pwr_sta_offs = 0x016c,
-	.pwr_sta2nd_offs = 0x0170,
 };
 
 #endif /* __SOC_MEDIATEK_MT8192_PM_DOMAINS_H */
+613
drivers/soc/mediatek/mt8195-pm-domains.h
···
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (c) 2021 MediaTek Inc.
+ * Author: Chun-Jie Chen <chun-jie.chen@mediatek.com>
+ */
+
+#ifndef __SOC_MEDIATEK_MT8195_PM_DOMAINS_H
+#define __SOC_MEDIATEK_MT8195_PM_DOMAINS_H
+
+#include "mtk-pm-domains.h"
+#include <dt-bindings/power/mt8195-power.h>
+
+/*
+ * MT8195 power domain support
+ */
+
+static const struct scpsys_domain_data scpsys_domain_data_mt8195[] = {
+	[MT8195_POWER_DOMAIN_PCIE_MAC_P0] = {
+		.name = "pcie_mac_p0",
+		.sta_mask = BIT(11),
+		.ctl_offs = 0x328,
+		.pwr_sta_offs = 0x174,
+		.pwr_sta2nd_offs = 0x178,
+		.sram_pdn_bits = GENMASK(8, 8),
+		.sram_pdn_ack_bits = GENMASK(12, 12),
+		.bp_infracfg = {
+			BUS_PROT_WR(MT8195_TOP_AXI_PROT_EN_VDNR_PCIE_MAC_P0,
+				    MT8195_TOP_AXI_PROT_EN_VDNR_SET,
+				    MT8195_TOP_AXI_PROT_EN_VDNR_CLR,
+				    MT8195_TOP_AXI_PROT_EN_VDNR_STA1),
+			BUS_PROT_WR(MT8195_TOP_AXI_PROT_EN_VDNR_1_PCIE_MAC_P0,
+				    MT8195_TOP_AXI_PROT_EN_VDNR_1_SET,
+				    MT8195_TOP_AXI_PROT_EN_VDNR_1_CLR,
+				    MT8195_TOP_AXI_PROT_EN_VDNR_1_STA1),
+		},
+	},
+	[MT8195_POWER_DOMAIN_PCIE_MAC_P1] = {
+		.name = "pcie_mac_p1",
+		.sta_mask = BIT(12),
+		.ctl_offs = 0x32C,
+		.pwr_sta_offs = 0x174,
+		.pwr_sta2nd_offs = 0x178,
+		.sram_pdn_bits = GENMASK(8, 8),
+		.sram_pdn_ack_bits = GENMASK(12, 12),
+		.bp_infracfg = {
+			BUS_PROT_WR(MT8195_TOP_AXI_PROT_EN_VDNR_PCIE_MAC_P1,
+				    MT8195_TOP_AXI_PROT_EN_VDNR_SET,
+				    MT8195_TOP_AXI_PROT_EN_VDNR_CLR,
+				    MT8195_TOP_AXI_PROT_EN_VDNR_STA1),
+			BUS_PROT_WR(MT8195_TOP_AXI_PROT_EN_VDNR_1_PCIE_MAC_P1,
+				    MT8195_TOP_AXI_PROT_EN_VDNR_1_SET,
+				    MT8195_TOP_AXI_PROT_EN_VDNR_1_CLR,
+				    MT8195_TOP_AXI_PROT_EN_VDNR_1_STA1),
+		},
+	},
+	[MT8195_POWER_DOMAIN_PCIE_PHY] = {
+		.name = "pcie_phy",
+		.sta_mask = BIT(13),
+		.ctl_offs = 0x330,
+		.pwr_sta_offs = 0x174,
+		.pwr_sta2nd_offs = 0x178,
+		.caps = MTK_SCPD_ACTIVE_WAKEUP,
+	},
+	[MT8195_POWER_DOMAIN_SSUSB_PCIE_PHY] = {
+		.name = "ssusb_pcie_phy",
+		.sta_mask = BIT(14),
+		.ctl_offs = 0x334,
+		.pwr_sta_offs = 0x174,
+		.pwr_sta2nd_offs = 0x178,
+		.caps = MTK_SCPD_ACTIVE_WAKEUP,
+	},
+	[MT8195_POWER_DOMAIN_CSI_RX_TOP] = {
+		.name = "csi_rx_top",
+		.sta_mask = BIT(18),
+		.ctl_offs = 0x3C4,
+		.pwr_sta_offs = 0x174,
+		.pwr_sta2nd_offs = 0x178,
+		.caps = MTK_SCPD_KEEP_DEFAULT_OFF,
+	},
+	[MT8195_POWER_DOMAIN_ETHER] = {
+		.name = "ether",
+		.sta_mask = BIT(3),
+		.ctl_offs = 0x344,
+		.pwr_sta_offs = 0x16c,
+		.pwr_sta2nd_offs = 0x170,
+		.sram_pdn_bits = GENMASK(8, 8),
+		.sram_pdn_ack_bits = GENMASK(12, 12),
+		.caps = MTK_SCPD_ACTIVE_WAKEUP,
+	},
+	[MT8195_POWER_DOMAIN_ADSP] = {
+		.name = "adsp",
+		.sta_mask = BIT(10),
+		.ctl_offs = 0x360,
+		.pwr_sta_offs = 0x16c,
+		.pwr_sta2nd_offs = 0x170,
+		.sram_pdn_bits = GENMASK(8, 8),
+		.sram_pdn_ack_bits = GENMASK(12, 12),
+		.bp_infracfg = {
+			BUS_PROT_WR(MT8195_TOP_AXI_PROT_EN_2_ADSP,
+				    MT8195_TOP_AXI_PROT_EN_2_SET,
+				    MT8195_TOP_AXI_PROT_EN_2_CLR,
+				    MT8195_TOP_AXI_PROT_EN_2_STA1),
+		},
+		.caps = MTK_SCPD_SRAM_ISO | MTK_SCPD_ACTIVE_WAKEUP,
+	},
+	[MT8195_POWER_DOMAIN_AUDIO] = {
+		.name = "audio",
+		.sta_mask = BIT(8),
+		.ctl_offs = 0x358,
+		.pwr_sta_offs = 0x16c,
+		.pwr_sta2nd_offs = 0x170,
+		.sram_pdn_bits = GENMASK(8, 8),
+		.sram_pdn_ack_bits = GENMASK(12, 12),
+		.bp_infracfg = {
+			BUS_PROT_WR(MT8195_TOP_AXI_PROT_EN_2_AUDIO,
+				    MT8195_TOP_AXI_PROT_EN_2_SET,
+				    MT8195_TOP_AXI_PROT_EN_2_CLR,
+				    MT8195_TOP_AXI_PROT_EN_2_STA1),
+		},
+	},
+	[MT8195_POWER_DOMAIN_MFG0] = {
+		.name = "mfg0",
+		.sta_mask = BIT(1),
+		.ctl_offs = 0x300,
+		.pwr_sta_offs = 0x174,
+		.pwr_sta2nd_offs = 0x178,
+		.sram_pdn_bits = GENMASK(8, 8),
+		.sram_pdn_ack_bits = GENMASK(12, 12),
+		.caps = MTK_SCPD_KEEP_DEFAULT_OFF | MTK_SCPD_DOMAIN_SUPPLY,
+	},
+	[MT8195_POWER_DOMAIN_MFG1] = {
+		.name = "mfg1",
+		.sta_mask = BIT(2),
+		.ctl_offs = 0x304,
+		.pwr_sta_offs = 0x174,
+		.pwr_sta2nd_offs = 0x178,
+		.sram_pdn_bits = GENMASK(8, 8),
+		.sram_pdn_ack_bits = GENMASK(12, 12),
+		.bp_infracfg = {
+			BUS_PROT_WR(MT8195_TOP_AXI_PROT_EN_MFG1,
+				    MT8195_TOP_AXI_PROT_EN_SET,
+				    MT8195_TOP_AXI_PROT_EN_CLR,
+				    MT8195_TOP_AXI_PROT_EN_STA1),
+			BUS_PROT_WR(MT8195_TOP_AXI_PROT_EN_2_MFG1,
+				    MT8195_TOP_AXI_PROT_EN_2_SET,
+				    MT8195_TOP_AXI_PROT_EN_2_CLR,
+				    MT8195_TOP_AXI_PROT_EN_2_STA1),
+			BUS_PROT_WR(MT8195_TOP_AXI_PROT_EN_1_MFG1,
+				    MT8195_TOP_AXI_PROT_EN_1_SET,
+				    MT8195_TOP_AXI_PROT_EN_1_CLR,
+				    MT8195_TOP_AXI_PROT_EN_1_STA1),
+			BUS_PROT_WR(MT8195_TOP_AXI_PROT_EN_2_MFG1_2ND,
+				    MT8195_TOP_AXI_PROT_EN_2_SET,
+				    MT8195_TOP_AXI_PROT_EN_2_CLR,
+				    MT8195_TOP_AXI_PROT_EN_2_STA1),
+			BUS_PROT_WR(MT8195_TOP_AXI_PROT_EN_MFG1_2ND,
+				    MT8195_TOP_AXI_PROT_EN_SET,
+				    MT8195_TOP_AXI_PROT_EN_CLR,
+				    MT8195_TOP_AXI_PROT_EN_STA1),
+			BUS_PROT_WR(MT8195_TOP_AXI_PROT_EN_SUB_INFRA_VDNR_MFG1,
+				    MT8195_TOP_AXI_PROT_EN_SUB_INFRA_VDNR_SET,
+				    MT8195_TOP_AXI_PROT_EN_SUB_INFRA_VDNR_CLR,
+				    MT8195_TOP_AXI_PROT_EN_SUB_INFRA_VDNR_STA1),
+		},
+		.caps = MTK_SCPD_KEEP_DEFAULT_OFF,
+	},
+	[MT8195_POWER_DOMAIN_MFG2] = {
+		.name = "mfg2",
+		.sta_mask = BIT(3),
+		.ctl_offs = 0x308,
+		.pwr_sta_offs = 0x174,
+		.pwr_sta2nd_offs = 0x178,
+		.sram_pdn_bits = GENMASK(8, 8),
+		.sram_pdn_ack_bits = GENMASK(12, 12),
+		.caps = MTK_SCPD_KEEP_DEFAULT_OFF,
+	},
+	[MT8195_POWER_DOMAIN_MFG3] = {
+		.name = "mfg3",
+		.sta_mask = BIT(4),
+		.ctl_offs = 0x30C,
+		.pwr_sta_offs = 0x174,
+		.pwr_sta2nd_offs = 0x178,
+		.sram_pdn_bits = GENMASK(8, 8),
+		.sram_pdn_ack_bits = GENMASK(12, 12),
+		.caps = MTK_SCPD_KEEP_DEFAULT_OFF,
+	},
+	[MT8195_POWER_DOMAIN_MFG4] = {
+		.name = "mfg4",
+		.sta_mask = BIT(5),
+		.ctl_offs = 0x310,
+		.pwr_sta_offs = 0x174,
+		.pwr_sta2nd_offs = 0x178,
+		.sram_pdn_bits = GENMASK(8, 8),
+		.sram_pdn_ack_bits = GENMASK(12, 12),
+		.caps = MTK_SCPD_KEEP_DEFAULT_OFF,
+	},
+	[MT8195_POWER_DOMAIN_MFG5] = {
+		.name = "mfg5",
+		.sta_mask = BIT(6),
+		.ctl_offs = 0x314,
+		.pwr_sta_offs = 0x174,
+		.pwr_sta2nd_offs = 0x178,
+		.sram_pdn_bits = GENMASK(8, 8),
+		.sram_pdn_ack_bits = GENMASK(12, 12),
+		.caps = MTK_SCPD_KEEP_DEFAULT_OFF,
+	},
+	[MT8195_POWER_DOMAIN_MFG6] = {
+		.name = "mfg6",
+		.sta_mask = BIT(7),
+		.ctl_offs = 0x318,
+		.pwr_sta_offs = 0x174,
+		.pwr_sta2nd_offs = 0x178,
+		.sram_pdn_bits = GENMASK(8, 8),
+		.sram_pdn_ack_bits = GENMASK(12, 12),
+		.caps = MTK_SCPD_KEEP_DEFAULT_OFF,
+	},
+	[MT8195_POWER_DOMAIN_VPPSYS0] = {
+		.name = "vppsys0",
+		.sta_mask = BIT(11),
+		.ctl_offs = 0x364,
+		.pwr_sta_offs = 0x16c,
+		.pwr_sta2nd_offs = 0x170,
+		.sram_pdn_bits = GENMASK(8, 8),
+		.sram_pdn_ack_bits = GENMASK(12, 12),
+		.bp_infracfg = {
+			BUS_PROT_WR(MT8195_TOP_AXI_PROT_EN_VPPSYS0,
+				    MT8195_TOP_AXI_PROT_EN_SET,
+				    MT8195_TOP_AXI_PROT_EN_CLR,
+				    MT8195_TOP_AXI_PROT_EN_STA1),
+			BUS_PROT_WR(MT8195_TOP_AXI_PROT_EN_MM_2_VPPSYS0,
+				    MT8195_TOP_AXI_PROT_EN_MM_2_SET,
+				    MT8195_TOP_AXI_PROT_EN_MM_2_CLR,
+				    MT8195_TOP_AXI_PROT_EN_MM_2_STA1),
+			BUS_PROT_WR(MT8195_TOP_AXI_PROT_EN_VPPSYS0_2ND,
+				    MT8195_TOP_AXI_PROT_EN_SET,
+				    MT8195_TOP_AXI_PROT_EN_CLR,
+				    MT8195_TOP_AXI_PROT_EN_STA1),
+			BUS_PROT_WR(MT8195_TOP_AXI_PROT_EN_MM_2_VPPSYS0_2ND,
+				    MT8195_TOP_AXI_PROT_EN_MM_2_SET,
+				    MT8195_TOP_AXI_PROT_EN_MM_2_CLR,
+				    MT8195_TOP_AXI_PROT_EN_MM_2_STA1),
+			BUS_PROT_WR(MT8195_TOP_AXI_PROT_EN_SUB_INFRA_VDNR_VPPSYS0,
+				    MT8195_TOP_AXI_PROT_EN_SUB_INFRA_VDNR_SET,
+				    MT8195_TOP_AXI_PROT_EN_SUB_INFRA_VDNR_CLR,
+				    MT8195_TOP_AXI_PROT_EN_SUB_INFRA_VDNR_STA1),
+		},
+	},
+	[MT8195_POWER_DOMAIN_VDOSYS0] = {
+		.name = "vdosys0",
+		.sta_mask = BIT(13),
+		.ctl_offs = 0x36C,
+		.pwr_sta_offs = 0x16c,
+		.pwr_sta2nd_offs = 0x170,
+		.sram_pdn_bits = GENMASK(8, 8),
+		.sram_pdn_ack_bits = GENMASK(12, 12),
+		.bp_infracfg = {
+			BUS_PROT_WR(MT8195_TOP_AXI_PROT_EN_MM_VDOSYS0,
+				    MT8195_TOP_AXI_PROT_EN_MM_SET,
+				    MT8195_TOP_AXI_PROT_EN_MM_CLR,
+				    MT8195_TOP_AXI_PROT_EN_MM_STA1),
+			BUS_PROT_WR(MT8195_TOP_AXI_PROT_EN_VDOSYS0,
+				    MT8195_TOP_AXI_PROT_EN_SET,
+				    MT8195_TOP_AXI_PROT_EN_CLR,
+				    MT8195_TOP_AXI_PROT_EN_STA1),
+			BUS_PROT_WR(MT8195_TOP_AXI_PROT_EN_SUB_INFRA_VDNR_VDOSYS0,
+				    MT8195_TOP_AXI_PROT_EN_SUB_INFRA_VDNR_SET,
+				    MT8195_TOP_AXI_PROT_EN_SUB_INFRA_VDNR_CLR,
+				    MT8195_TOP_AXI_PROT_EN_SUB_INFRA_VDNR_STA1),
+		},
+	},
+	[MT8195_POWER_DOMAIN_VPPSYS1] = {
+		.name = "vppsys1",
+		.sta_mask = BIT(12),
+		.ctl_offs = 0x368,
+		.pwr_sta_offs = 0x16c,
+		.pwr_sta2nd_offs = 0x170,
+		.sram_pdn_bits = GENMASK(8, 8),
+		.sram_pdn_ack_bits = GENMASK(12, 12),
+		.bp_infracfg = {
+			BUS_PROT_WR(MT8195_TOP_AXI_PROT_EN_MM_VPPSYS1,
+				    MT8195_TOP_AXI_PROT_EN_MM_SET,
+				    MT8195_TOP_AXI_PROT_EN_MM_CLR,
+				    MT8195_TOP_AXI_PROT_EN_MM_STA1),
+			BUS_PROT_WR(MT8195_TOP_AXI_PROT_EN_MM_VPPSYS1_2ND,
+				    MT8195_TOP_AXI_PROT_EN_MM_SET,
+				    MT8195_TOP_AXI_PROT_EN_MM_CLR,
+				    MT8195_TOP_AXI_PROT_EN_MM_STA1),
+			BUS_PROT_WR(MT8195_TOP_AXI_PROT_EN_MM_2_VPPSYS1,
+				    MT8195_TOP_AXI_PROT_EN_MM_2_SET,
+				    MT8195_TOP_AXI_PROT_EN_MM_2_CLR,
+				    MT8195_TOP_AXI_PROT_EN_MM_2_STA1),
+		},
+	},
+	[MT8195_POWER_DOMAIN_VDOSYS1] = {
+		.name = "vdosys1",
+		.sta_mask = BIT(14),
+		.ctl_offs = 0x370,
+		.pwr_sta_offs = 0x16c,
+		.pwr_sta2nd_offs = 0x170,
+		.sram_pdn_bits = GENMASK(8, 8),
+		.sram_pdn_ack_bits = GENMASK(12, 12),
+		.bp_infracfg = {
+			BUS_PROT_WR(MT8195_TOP_AXI_PROT_EN_MM_VDOSYS1,
+				    MT8195_TOP_AXI_PROT_EN_MM_SET,
+				    MT8195_TOP_AXI_PROT_EN_MM_CLR,
+				    MT8195_TOP_AXI_PROT_EN_MM_STA1),
+			BUS_PROT_WR(MT8195_TOP_AXI_PROT_EN_MM_VDOSYS1_2ND,
+				    MT8195_TOP_AXI_PROT_EN_MM_SET,
+				    MT8195_TOP_AXI_PROT_EN_MM_CLR,
+				    MT8195_TOP_AXI_PROT_EN_MM_STA1),
+			BUS_PROT_WR(MT8195_TOP_AXI_PROT_EN_MM_2_VDOSYS1,
+				    MT8195_TOP_AXI_PROT_EN_MM_2_SET,
+				    MT8195_TOP_AXI_PROT_EN_MM_2_CLR,
+				    MT8195_TOP_AXI_PROT_EN_MM_2_STA1),
+		},
+	},
+	[MT8195_POWER_DOMAIN_DP_TX] = {
+		.name = "dp_tx",
+		.sta_mask = BIT(16),
+		.ctl_offs = 0x378,
+		.pwr_sta_offs = 0x16c,
+		.pwr_sta2nd_offs = 0x170,
+		.sram_pdn_bits = GENMASK(8, 8),
+		.sram_pdn_ack_bits = GENMASK(12, 12),
+		.bp_infracfg = {
+			BUS_PROT_WR(MT8195_TOP_AXI_PROT_EN_VDNR_1_DP_TX,
+				    MT8195_TOP_AXI_PROT_EN_VDNR_1_SET,
+				    MT8195_TOP_AXI_PROT_EN_VDNR_1_CLR,
+				    MT8195_TOP_AXI_PROT_EN_VDNR_1_STA1),
+		},
+		.caps = MTK_SCPD_KEEP_DEFAULT_OFF,
+	},
+	[MT8195_POWER_DOMAIN_EPD_TX] = {
+		.name = "epd_tx",
+		.sta_mask = BIT(17),
+		.ctl_offs = 0x37C,
+		.pwr_sta_offs = 0x16c,
+		.pwr_sta2nd_offs = 0x170,
+		.sram_pdn_bits = GENMASK(8, 8),
+		.sram_pdn_ack_bits = GENMASK(12, 12),
+		.bp_infracfg = {
+			BUS_PROT_WR(MT8195_TOP_AXI_PROT_EN_VDNR_1_EPD_TX,
+				    MT8195_TOP_AXI_PROT_EN_VDNR_1_SET,
+				    MT8195_TOP_AXI_PROT_EN_VDNR_1_CLR,
+				    MT8195_TOP_AXI_PROT_EN_VDNR_1_STA1),
+		},
+		.caps = MTK_SCPD_KEEP_DEFAULT_OFF,
+	},
+	[MT8195_POWER_DOMAIN_HDMI_TX] = {
+		.name = "hdmi_tx",
+		.sta_mask = BIT(18),
+		.ctl_offs = 0x380,
+		.pwr_sta_offs = 0x16c,
+		.pwr_sta2nd_offs = 0x170,
+		.sram_pdn_bits = GENMASK(8, 8),
+		.sram_pdn_ack_bits = GENMASK(12, 12),
+		.caps = MTK_SCPD_KEEP_DEFAULT_OFF | MTK_SCPD_ACTIVE_WAKEUP,
+	},
+	[MT8195_POWER_DOMAIN_WPESYS] = {
+		.name = "wpesys",
+		.sta_mask = BIT(15),
+		.ctl_offs = 0x374,
+		.pwr_sta_offs = 0x16c,
+		.pwr_sta2nd_offs = 0x170,
+		.sram_pdn_bits = GENMASK(8, 8),
+		.sram_pdn_ack_bits = GENMASK(12, 12),
+		.bp_infracfg = {
+			BUS_PROT_WR(MT8195_TOP_AXI_PROT_EN_MM_2_WPESYS,
+				    MT8195_TOP_AXI_PROT_EN_MM_2_SET,
+				    MT8195_TOP_AXI_PROT_EN_MM_2_CLR,
+				    MT8195_TOP_AXI_PROT_EN_MM_2_STA1),
+			BUS_PROT_WR(MT8195_TOP_AXI_PROT_EN_MM_WPESYS,
+				    MT8195_TOP_AXI_PROT_EN_MM_SET,
+				    MT8195_TOP_AXI_PROT_EN_MM_CLR,
+				    MT8195_TOP_AXI_PROT_EN_MM_STA1),
+			BUS_PROT_WR(MT8195_TOP_AXI_PROT_EN_MM_2_WPESYS_2ND,
+				    MT8195_TOP_AXI_PROT_EN_MM_2_SET,
+				    MT8195_TOP_AXI_PROT_EN_MM_2_CLR,
+				    MT8195_TOP_AXI_PROT_EN_MM_2_STA1),
+		},
+	},
+	[MT8195_POWER_DOMAIN_VDEC0] = {
+		.name = "vdec0",
+		.sta_mask = BIT(20),
+		.ctl_offs = 0x388,
+		.pwr_sta_offs = 0x16c,
+		.pwr_sta2nd_offs = 0x170,
+		.sram_pdn_bits = GENMASK(8, 8),
+		.sram_pdn_ack_bits = GENMASK(12, 12),
+		.bp_infracfg = {
+			BUS_PROT_WR(MT8195_TOP_AXI_PROT_EN_MM_VDEC0,
+				    MT8195_TOP_AXI_PROT_EN_MM_SET,
+				    MT8195_TOP_AXI_PROT_EN_MM_CLR,
+				    MT8195_TOP_AXI_PROT_EN_MM_STA1),
+			BUS_PROT_WR(MT8195_TOP_AXI_PROT_EN_MM_2_VDEC0,
+				    MT8195_TOP_AXI_PROT_EN_MM_2_SET,
+				    MT8195_TOP_AXI_PROT_EN_MM_2_CLR,
+				    MT8195_TOP_AXI_PROT_EN_MM_2_STA1),
+			BUS_PROT_WR(MT8195_TOP_AXI_PROT_EN_MM_VDEC0_2ND,
+				    MT8195_TOP_AXI_PROT_EN_MM_SET,
+				    MT8195_TOP_AXI_PROT_EN_MM_CLR,
+				    MT8195_TOP_AXI_PROT_EN_MM_STA1),
+			BUS_PROT_WR(MT8195_TOP_AXI_PROT_EN_MM_2_VDEC0_2ND,
+				    MT8195_TOP_AXI_PROT_EN_MM_2_SET,
+				    MT8195_TOP_AXI_PROT_EN_MM_2_CLR,
+				    MT8195_TOP_AXI_PROT_EN_MM_2_STA1),
+		},
+		.caps = MTK_SCPD_KEEP_DEFAULT_OFF,
+	},
+	[MT8195_POWER_DOMAIN_VDEC1] = {
+		.name = "vdec1",
+		.sta_mask = BIT(21),
+		.ctl_offs = 0x38C,
+		.pwr_sta_offs = 0x16c,
+		.pwr_sta2nd_offs = 0x170,
+		.sram_pdn_bits = GENMASK(8, 8),
+		.sram_pdn_ack_bits = GENMASK(12, 12),
+		.bp_infracfg = {
+			BUS_PROT_WR(MT8195_TOP_AXI_PROT_EN_MM_VDEC1,
+				    MT8195_TOP_AXI_PROT_EN_MM_SET,
+				    MT8195_TOP_AXI_PROT_EN_MM_CLR,
+				    MT8195_TOP_AXI_PROT_EN_MM_STA1),
+			BUS_PROT_WR(MT8195_TOP_AXI_PROT_EN_MM_VDEC1_2ND,
+				    MT8195_TOP_AXI_PROT_EN_MM_SET,
+				    MT8195_TOP_AXI_PROT_EN_MM_CLR,
+				    MT8195_TOP_AXI_PROT_EN_MM_STA1),
+		},
+		.caps = MTK_SCPD_KEEP_DEFAULT_OFF,
+	},
+	[MT8195_POWER_DOMAIN_VDEC2] = {
+		.name = "vdec2",
+		.sta_mask = BIT(22),
+		.ctl_offs = 0x390,
+		.pwr_sta_offs = 0x16c,
+		.pwr_sta2nd_offs = 0x170,
+		.sram_pdn_bits = GENMASK(8, 8),
+		.sram_pdn_ack_bits = GENMASK(12, 12),
+		.bp_infracfg = {
+			BUS_PROT_WR(MT8195_TOP_AXI_PROT_EN_MM_2_VDEC2,
+				    MT8195_TOP_AXI_PROT_EN_MM_2_SET,
+				    MT8195_TOP_AXI_PROT_EN_MM_2_CLR,
+				    MT8195_TOP_AXI_PROT_EN_MM_2_STA1),
+			BUS_PROT_WR(MT8195_TOP_AXI_PROT_EN_MM_2_VDEC2_2ND,
+				    MT8195_TOP_AXI_PROT_EN_MM_2_SET,
+				    MT8195_TOP_AXI_PROT_EN_MM_2_CLR,
+				    MT8195_TOP_AXI_PROT_EN_MM_2_STA1),
+		},
+		.caps = MTK_SCPD_KEEP_DEFAULT_OFF,
+	},
+	[MT8195_POWER_DOMAIN_VENC] = {
+		.name = "venc",
+		.sta_mask = BIT(23),
+		.ctl_offs = 0x394,
+		.pwr_sta_offs = 0x16c,
+		.pwr_sta2nd_offs = 0x170,
+		.sram_pdn_bits = GENMASK(8, 8),
+		.sram_pdn_ack_bits = GENMASK(12, 12),
+		.bp_infracfg = {
+			BUS_PROT_WR(MT8195_TOP_AXI_PROT_EN_MM_VENC,
+				    MT8195_TOP_AXI_PROT_EN_MM_SET,
+				    MT8195_TOP_AXI_PROT_EN_MM_CLR,
+				    MT8195_TOP_AXI_PROT_EN_MM_STA1),
+			BUS_PROT_WR(MT8195_TOP_AXI_PROT_EN_MM_VENC_2ND,
+				    MT8195_TOP_AXI_PROT_EN_MM_SET,
+				    MT8195_TOP_AXI_PROT_EN_MM_CLR,
+				    MT8195_TOP_AXI_PROT_EN_MM_STA1),
+			BUS_PROT_WR(MT8195_TOP_AXI_PROT_EN_MM_2_VENC,
+				    MT8195_TOP_AXI_PROT_EN_MM_2_SET,
+				    MT8195_TOP_AXI_PROT_EN_MM_2_CLR,
+				    MT8195_TOP_AXI_PROT_EN_MM_2_STA1),
+		},
+		.caps = MTK_SCPD_KEEP_DEFAULT_OFF,
+	},
+	[MT8195_POWER_DOMAIN_VENC_CORE1] = {
+		.name = "venc_core1",
+		.sta_mask = BIT(24),
+		.ctl_offs = 0x398,
+		.pwr_sta_offs = 0x16c,
+		.pwr_sta2nd_offs = 0x170,
+		.sram_pdn_bits = GENMASK(8, 8),
+		.sram_pdn_ack_bits = GENMASK(12, 12),
+		.bp_infracfg = {
+			BUS_PROT_WR(MT8195_TOP_AXI_PROT_EN_MM_VENC_CORE1,
+				    MT8195_TOP_AXI_PROT_EN_MM_SET,
+				    MT8195_TOP_AXI_PROT_EN_MM_CLR,
+				    MT8195_TOP_AXI_PROT_EN_MM_STA1),
+			BUS_PROT_WR(MT8195_TOP_AXI_PROT_EN_MM_2_VENC_CORE1,
+				    MT8195_TOP_AXI_PROT_EN_MM_2_SET,
+				    MT8195_TOP_AXI_PROT_EN_MM_2_CLR,
+				    MT8195_TOP_AXI_PROT_EN_MM_2_STA1),
+		},
+		.caps = MTK_SCPD_KEEP_DEFAULT_OFF,
+	},
+	[MT8195_POWER_DOMAIN_IMG] = {
+		.name = "img",
+		.sta_mask = BIT(29),
+		.ctl_offs = 0x3AC,
+		.pwr_sta_offs = 0x16c,
+		.pwr_sta2nd_offs = 0x170,
+		.sram_pdn_bits = GENMASK(8, 8),
+		.sram_pdn_ack_bits = GENMASK(12, 12),
+		.bp_infracfg = {
+			BUS_PROT_WR(MT8195_TOP_AXI_PROT_EN_MM_IMG,
+				    MT8195_TOP_AXI_PROT_EN_MM_SET,
+				    MT8195_TOP_AXI_PROT_EN_MM_CLR,
+				    MT8195_TOP_AXI_PROT_EN_MM_STA1),
+			BUS_PROT_WR(MT8195_TOP_AXI_PROT_EN_MM_IMG_2ND,
+				    MT8195_TOP_AXI_PROT_EN_MM_SET,
+				    MT8195_TOP_AXI_PROT_EN_MM_CLR,
+				    MT8195_TOP_AXI_PROT_EN_MM_STA1),
+		},
+		.caps = MTK_SCPD_KEEP_DEFAULT_OFF,
+	},
+	[MT8195_POWER_DOMAIN_DIP] = {
+		.name = "dip",
+		.sta_mask = BIT(30),
+		.ctl_offs = 0x3B0,
+		.pwr_sta_offs = 0x16c,
+		.pwr_sta2nd_offs = 0x170,
+		.sram_pdn_bits = GENMASK(8, 8),
+		.sram_pdn_ack_bits = GENMASK(12, 12),
+		.caps = MTK_SCPD_KEEP_DEFAULT_OFF,
+	},
+	[MT8195_POWER_DOMAIN_IPE] = {
+		.name = "ipe",
+		.sta_mask = BIT(31),
+		.ctl_offs = 0x3B4,
+		.pwr_sta_offs = 0x16c,
+		.pwr_sta2nd_offs = 0x170,
+		.sram_pdn_bits = GENMASK(8, 8),
+		.sram_pdn_ack_bits = GENMASK(12, 12),
+		.bp_infracfg = {
+			BUS_PROT_WR(MT8195_TOP_AXI_PROT_EN_MM_IPE,
+				    MT8195_TOP_AXI_PROT_EN_MM_SET,
+				    MT8195_TOP_AXI_PROT_EN_MM_CLR,
+				    MT8195_TOP_AXI_PROT_EN_MM_STA1),
+			BUS_PROT_WR(MT8195_TOP_AXI_PROT_EN_MM_2_IPE,
+				    MT8195_TOP_AXI_PROT_EN_MM_2_SET,
+				    MT8195_TOP_AXI_PROT_EN_MM_2_CLR,
+				    MT8195_TOP_AXI_PROT_EN_MM_2_STA1),
+		},
+		.caps = MTK_SCPD_KEEP_DEFAULT_OFF,
+	},
+	[MT8195_POWER_DOMAIN_CAM] = {
+		.name = "cam",
+		.sta_mask = BIT(25),
+		.ctl_offs = 0x39C,
+		.pwr_sta_offs = 0x16c,
+		.pwr_sta2nd_offs = 0x170,
+		.sram_pdn_bits = GENMASK(8, 8),
+		.sram_pdn_ack_bits = GENMASK(12, 12),
+		.bp_infracfg = {
+			BUS_PROT_WR(MT8195_TOP_AXI_PROT_EN_2_CAM,
+				    MT8195_TOP_AXI_PROT_EN_2_SET,
+				    MT8195_TOP_AXI_PROT_EN_2_CLR,
+				    MT8195_TOP_AXI_PROT_EN_2_STA1),
+			BUS_PROT_WR(MT8195_TOP_AXI_PROT_EN_MM_CAM,
+				    MT8195_TOP_AXI_PROT_EN_MM_SET,
+				    MT8195_TOP_AXI_PROT_EN_MM_CLR,
+				    MT8195_TOP_AXI_PROT_EN_MM_STA1),
+			BUS_PROT_WR(MT8195_TOP_AXI_PROT_EN_1_CAM,
+				    MT8195_TOP_AXI_PROT_EN_1_SET,
+				    MT8195_TOP_AXI_PROT_EN_1_CLR,
+				    MT8195_TOP_AXI_PROT_EN_1_STA1),
+			BUS_PROT_WR(MT8195_TOP_AXI_PROT_EN_MM_CAM_2ND,
+				    MT8195_TOP_AXI_PROT_EN_MM_SET,
+				    MT8195_TOP_AXI_PROT_EN_MM_CLR,
+				    MT8195_TOP_AXI_PROT_EN_MM_STA1),
+			BUS_PROT_WR(MT8195_TOP_AXI_PROT_EN_MM_2_CAM,
+				    MT8195_TOP_AXI_PROT_EN_MM_2_SET,
+				    MT8195_TOP_AXI_PROT_EN_MM_2_CLR,
+				    MT8195_TOP_AXI_PROT_EN_MM_2_STA1),
+		},
+		.caps = MTK_SCPD_KEEP_DEFAULT_OFF,
+	},
+	[MT8195_POWER_DOMAIN_CAM_RAWA] = {
+		.name = "cam_rawa",
+		.sta_mask = BIT(26),
+		.ctl_offs = 0x3A0,
+		.pwr_sta_offs = 0x16c,
+		.pwr_sta2nd_offs = 0x170,
+		.sram_pdn_bits = GENMASK(8, 8),
+		.sram_pdn_ack_bits = GENMASK(12, 12),
+		.caps = MTK_SCPD_KEEP_DEFAULT_OFF,
+	},
+	[MT8195_POWER_DOMAIN_CAM_RAWB] = {
+		.name = "cam_rawb",
+		.sta_mask = BIT(27),
+		.ctl_offs = 0x3A4,
+		.pwr_sta_offs = 0x16c,
+		.pwr_sta2nd_offs = 0x170,
+		.sram_pdn_bits = GENMASK(8, 8),
+		.sram_pdn_ack_bits = GENMASK(12, 12),
+		.caps = MTK_SCPD_KEEP_DEFAULT_OFF,
+	},
+	[MT8195_POWER_DOMAIN_CAM_MRAW] = {
+		.name = "cam_mraw",
+		.sta_mask = BIT(28),
+		.ctl_offs = 0x3A8,
+		.pwr_sta_offs = 0x16c,
+		.pwr_sta2nd_offs = 0x170,
+		.sram_pdn_bits = GENMASK(8, 8),
+		.sram_pdn_ack_bits = GENMASK(12, 12),
+		.caps = MTK_SCPD_KEEP_DEFAULT_OFF,
+	},
+};
+
+static const struct scpsys_soc_data mt8195_scpsys_data = {
+	.domains_data = scpsys_domain_data_mt8195,
+	.num_domains = ARRAY_SIZE(scpsys_domain_data_mt8195),
+};
+
+#endif /* __SOC_MEDIATEK_MT8195_PM_DOMAINS_H */
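The per-domain `pwr_sta_offs`/`pwr_sta2nd_offs` fields are what allow MT8195 domains to use different status banks (0x16c/0x170 for most domains, 0x174/0x178 for PCIe and MFG), instead of the single SoC-wide pair MT8192 carried in `scpsys_soc_data`. A domain counts as powered on only when its `sta_mask` bit is set in both registers. A plain-C sketch of that check with a mocked register file (the helper names are illustrative, not the driver's actual ones):

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative subset of struct scpsys_domain_data. */
struct domain_data {
    uint32_t sta_mask;        /* status bit for this domain */
    uint32_t pwr_sta_offs;    /* primary power status register */
    uint32_t pwr_sta2nd_offs; /* secondary (ack) status register */
};

/* Mocked MMIO: a fake register file indexed by byte offset. */
static uint32_t regs[0x1000 / 4];

static uint32_t read_reg(uint32_t offs)
{
    return regs[offs / 4];
}

/* A domain is on only when its bit is set in BOTH status registers,
 * mirroring the is-on check in mtk-pm-domains.c. */
static bool domain_is_on(const struct domain_data *d)
{
    return (read_reg(d->pwr_sta_offs) & d->sta_mask) &&
           (read_reg(d->pwr_sta2nd_offs) & d->sta_mask);
}
```

Moving these offsets into the per-domain table is what makes one `scpsys_soc_data` able to describe domains spread across several status banks.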
+19
drivers/soc/mediatek/mtk-infracfg.c
···
 #include <linux/export.h>
 #include <linux/jiffies.h>
 #include <linux/regmap.h>
+#include <linux/mfd/syscon.h>
 #include <linux/soc/mediatek/infracfg.h>
 #include <asm/processor.h>
···
 
 	return ret;
 }
+
+static int __init mtk_infracfg_init(void)
+{
+	struct regmap *infracfg;
+
+	/*
+	 * MT8192 has an experimental path to route GPU traffic to the DSU's
+	 * Accelerator Coherency Port, which is inadvertently enabled by
+	 * default. It turns out not to work, so disable it to prevent spurious
+	 * GPU faults.
+	 */
+	infracfg = syscon_regmap_lookup_by_compatible("mediatek,mt8192-infracfg");
+	if (!IS_ERR(infracfg))
+		regmap_set_bits(infracfg, MT8192_INFRA_CTRL,
+				MT8192_INFRA_CTRL_DISABLE_MFG2ACP);
+	return 0;
+}
+postcore_initcall(mtk_infracfg_init);
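`regmap_set_bits()` is a read-modify-write that ORs the given bits into the register, so the ACP-disable write above is idempotent and leaves the other INFRA_CTRL bits untouched. A plain-C model of that semantics (the bit position below is illustrative only; the real value is defined in include/linux/soc/mediatek/infracfg.h):

```c
#include <stdint.h>

/* Illustrative stand-in for MT8192_INFRA_CTRL_DISABLE_MFG2ACP;
 * the actual bit position lives in the kernel header. */
#define DISABLE_MFG2ACP (1u << 9)

static uint32_t infra_ctrl;    /* mocked INFRA_CTRL register */

/* regmap_set_bits(map, reg, bits) ORs `bits` into the register;
 * modelled here on a plain variable. */
static void set_bits(uint32_t *reg, uint32_t bits)
{
    *reg |= bits;
}
```

Because only an OR is performed, running the initcall on a register the firmware already configured cannot clear unrelated bits.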
+16 -2
drivers/soc/mediatek/mtk-mmsys.c
···
 #include "mtk-mmsys.h"
 #include "mt8167-mmsys.h"
 #include "mt8183-mmsys.h"
+#include "mt8186-mmsys.h"
 #include "mt8192-mmsys.h"
 #include "mt8365-mmsys.h"
···
 	.clk_driver = "clk-mt8173-mm",
 	.routes = mmsys_default_routing_table,
 	.num_routes = ARRAY_SIZE(mmsys_default_routing_table),
+	.sw0_rst_offset = MT8183_MMSYS_SW0_RST_B,
 };
 
 static const struct mtk_mmsys_driver_data mt8183_mmsys_driver_data = {
 	.clk_driver = "clk-mt8183-mm",
 	.routes = mmsys_mt8183_routing_table,
 	.num_routes = ARRAY_SIZE(mmsys_mt8183_routing_table),
+	.sw0_rst_offset = MT8183_MMSYS_SW0_RST_B,
+};
+
+static const struct mtk_mmsys_driver_data mt8186_mmsys_driver_data = {
+	.clk_driver = "clk-mt8186-mm",
+	.routes = mmsys_mt8186_routing_table,
+	.num_routes = ARRAY_SIZE(mmsys_mt8186_routing_table),
+	.sw0_rst_offset = MT8186_MMSYS_SW0_RST_B,
 };
 
 static const struct mtk_mmsys_driver_data mt8192_mmsys_driver_data = {
···
 
 	spin_lock_irqsave(&mmsys->lock, flags);
 
-	reg = readl_relaxed(mmsys->regs + MMSYS_SW0_RST_B);
+	reg = readl_relaxed(mmsys->regs + mmsys->data->sw0_rst_offset);
 
 	if (assert)
 		reg &= ~BIT(id);
 	else
 		reg |= BIT(id);
 
-	writel_relaxed(reg, mmsys->regs + MMSYS_SW0_RST_B);
+	writel_relaxed(reg, mmsys->regs + mmsys->data->sw0_rst_offset);
 
 	spin_unlock_irqrestore(&mmsys->lock, flags);
 
···
 {
 	.compatible = "mediatek,mt8183-mmsys",
 	.data = &mt8183_mmsys_driver_data,
+},
+{
+	.compatible = "mediatek,mt8186-mmsys",
+	.data = &mt8186_mmsys_driver_data,
 },
 {
 	.compatible = "mediatek,mt8192-mmsys",
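The reset callback above is a locked read-modify-write on the SW0 reset register, whose offset now comes from per-SoC driver data instead of the fixed `MMSYS_SW0_RST_B` define. The reset bits are active-low: asserting a line clears its bit, deasserting sets it. A minimal sketch with mocked MMIO (the struct layout and the 0x140 value are used here purely for illustration):

```c
#include <stdbool.h>
#include <stdint.h>

/* Per-SoC driver data now carries the SW0 reset register offset
 * (MT8183-class SoCs use 0x140; MT8186 uses a different offset). */
struct mmsys_data {
    uint16_t sw0_rst_offset;
};

static uint32_t mmio[0x200 / 4];    /* mocked mmsys register space */

/* Read-modify-write one reset line; active-low, so assert clears the
 * bit and deassert sets it, as in the driver's reset callback. */
static void reset_update(const struct mmsys_data *d, unsigned int id,
                         bool assert_rst)
{
    uint32_t reg = mmio[d->sw0_rst_offset / 4];

    if (assert_rst)
        reg &= ~(1u << id);
    else
        reg |= 1u << id;

    mmio[d->sw0_rst_offset / 4] = reg;
}
```

Parameterizing the offset is what lets the same callback serve MT8186, whose SW0 reset register does not sit at the MT8183 address.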
+1 -2
drivers/soc/mediatek/mtk-mmsys.h
···
 #define DSI_SEL_IN_RDMA			0x1
 #define DSI_SEL_IN_MASK			0x1
 
-#define MMSYS_SW0_RST_B			0x140
-
 struct mtk_mmsys_routes {
 	u32 from_comp;
 	u32 to_comp;
···
 	const char *clk_driver;
 	const struct mtk_mmsys_routes *routes;
 	const unsigned int num_routes;
+	const u16 sw0_rst_offset;
 };
 
 /*
+45
drivers/soc/mediatek/mtk-mutex.c
··· 26 26 27 27 #define INT_MUTEX BIT(1) 28 28 29 + #define MT8186_MUTEX_MOD_DISP_OVL0 0 30 + #define MT8186_MUTEX_MOD_DISP_OVL0_2L 1 31 + #define MT8186_MUTEX_MOD_DISP_RDMA0 2 32 + #define MT8186_MUTEX_MOD_DISP_COLOR0 4 33 + #define MT8186_MUTEX_MOD_DISP_CCORR0 5 34 + #define MT8186_MUTEX_MOD_DISP_AAL0 7 35 + #define MT8186_MUTEX_MOD_DISP_GAMMA0 8 36 + #define MT8186_MUTEX_MOD_DISP_POSTMASK0 9 37 + #define MT8186_MUTEX_MOD_DISP_DITHER0 10 38 + #define MT8186_MUTEX_MOD_DISP_RDMA1 17 39 + 40 + #define MT8186_MUTEX_SOF_SINGLE_MODE 0 41 + #define MT8186_MUTEX_SOF_DSI0 1 42 + #define MT8186_MUTEX_SOF_DPI0 2 43 + #define MT8186_MUTEX_EOF_DSI0 (MT8186_MUTEX_SOF_DSI0 << 6) 44 + #define MT8186_MUTEX_EOF_DPI0 (MT8186_MUTEX_SOF_DPI0 << 6) 45 + 29 46 #define MT8167_MUTEX_MOD_DISP_PWM 1 30 47 #define MT8167_MUTEX_MOD_DISP_OVL0 6 31 48 #define MT8167_MUTEX_MOD_DISP_OVL1 7 ··· 243 226 [DDP_COMPONENT_WDMA0] = MT8183_MUTEX_MOD_DISP_WDMA0, 244 227 }; 245 228 229 + static const unsigned int mt8186_mutex_mod[DDP_COMPONENT_ID_MAX] = { 230 + [DDP_COMPONENT_AAL0] = MT8186_MUTEX_MOD_DISP_AAL0, 231 + [DDP_COMPONENT_CCORR] = MT8186_MUTEX_MOD_DISP_CCORR0, 232 + [DDP_COMPONENT_COLOR0] = MT8186_MUTEX_MOD_DISP_COLOR0, 233 + [DDP_COMPONENT_DITHER] = MT8186_MUTEX_MOD_DISP_DITHER0, 234 + [DDP_COMPONENT_GAMMA] = MT8186_MUTEX_MOD_DISP_GAMMA0, 235 + [DDP_COMPONENT_OVL0] = MT8186_MUTEX_MOD_DISP_OVL0, 236 + [DDP_COMPONENT_OVL_2L0] = MT8186_MUTEX_MOD_DISP_OVL0_2L, 237 + [DDP_COMPONENT_POSTMASK0] = MT8186_MUTEX_MOD_DISP_POSTMASK0, 238 + [DDP_COMPONENT_RDMA0] = MT8186_MUTEX_MOD_DISP_RDMA0, 239 + [DDP_COMPONENT_RDMA1] = MT8186_MUTEX_MOD_DISP_RDMA1, 240 + }; 241 + 246 242 static const unsigned int mt8192_mutex_mod[DDP_COMPONENT_ID_MAX] = { 247 243 [DDP_COMPONENT_AAL0] = MT8192_MUTEX_MOD_DISP_AAL0, 248 244 [DDP_COMPONENT_CCORR] = MT8192_MUTEX_MOD_DISP_CCORR0, ··· 294 264 [MUTEX_SOF_DPI0] = MT8183_MUTEX_SOF_DPI0 | MT8183_MUTEX_EOF_DPI0, 295 265 }; 296 266 267 + static const unsigned int 
mt8186_mutex_sof[MUTEX_SOF_DSI3 + 1] = { 268 + [MUTEX_SOF_SINGLE_MODE] = MUTEX_SOF_SINGLE_MODE, 269 + [MUTEX_SOF_DSI0] = MT8186_MUTEX_SOF_DSI0 | MT8186_MUTEX_EOF_DSI0, 270 + [MUTEX_SOF_DPI0] = MT8186_MUTEX_SOF_DPI0 | MT8186_MUTEX_EOF_DPI0, 271 + }; 272 + 297 273 static const struct mtk_mutex_data mt2701_mutex_driver_data = { 298 274 .mutex_mod = mt2701_mutex_mod, 299 275 .mutex_sof = mt2712_mutex_sof, ··· 335 299 .mutex_mod_reg = MT8183_MUTEX0_MOD0, 336 300 .mutex_sof_reg = MT8183_MUTEX0_SOF0, 337 301 .no_clk = true, 302 + }; 303 + 304 + static const struct mtk_mutex_data mt8186_mutex_driver_data = { 305 + .mutex_mod = mt8186_mutex_mod, 306 + .mutex_sof = mt8186_mutex_sof, 307 + .mutex_mod_reg = MT8183_MUTEX0_MOD0, 308 + .mutex_sof_reg = MT8183_MUTEX0_SOF0, 338 309 }; 339 310 340 311 static const struct mtk_mutex_data mt8192_mutex_driver_data = { ··· 583 540 .data = &mt8173_mutex_driver_data}, 584 541 { .compatible = "mediatek,mt8183-disp-mutex", 585 542 .data = &mt8183_mutex_driver_data}, 543 + { .compatible = "mediatek,mt8186-disp-mutex", 544 + .data = &mt8186_mutex_driver_data}, 586 545 { .compatible = "mediatek,mt8192-disp-mutex", 587 546 .data = &mt8192_mutex_driver_data}, 588 547 {},
+15 -2
drivers/soc/mediatek/mtk-pm-domains.c
··· 19 19 #include "mt8167-pm-domains.h" 20 20 #include "mt8173-pm-domains.h" 21 21 #include "mt8183-pm-domains.h" 22 + #include "mt8186-pm-domains.h" 22 23 #include "mt8192-pm-domains.h" 24 + #include "mt8195-pm-domains.h" 23 25 24 26 #define MTK_POLL_DELAY_US 10 25 27 #define MTK_POLL_TIMEOUT USEC_PER_SEC ··· 62 60 struct scpsys *scpsys = pd->scpsys; 63 61 u32 status, status2; 64 62 65 - regmap_read(scpsys->base, scpsys->soc_data->pwr_sta_offs, &status); 63 + regmap_read(scpsys->base, pd->data->pwr_sta_offs, &status); 66 64 status &= pd->data->sta_mask; 67 65 68 - regmap_read(scpsys->base, scpsys->soc_data->pwr_sta2nd_offs, &status2); 66 + regmap_read(scpsys->base, pd->data->pwr_sta2nd_offs, &status2); 69 67 status2 &= pd->data->sta_mask; 70 68 71 69 /* A domain is on when both status bits are set. */ ··· 445 443 pd->genpd.power_off = scpsys_power_off; 446 444 pd->genpd.power_on = scpsys_power_on; 447 445 446 + if (MTK_SCPD_CAPS(pd, MTK_SCPD_ACTIVE_WAKEUP)) 447 + pd->genpd.flags |= GENPD_FLAG_ACTIVE_WAKEUP; 448 + 448 449 if (MTK_SCPD_CAPS(pd, MTK_SCPD_KEEP_DEFAULT_OFF)) 449 450 pm_genpd_init(&pd->genpd, NULL, true); 450 451 else ··· 568 563 .data = &mt8183_scpsys_data, 569 564 }, 570 565 { 566 + .compatible = "mediatek,mt8186-power-controller", 567 + .data = &mt8186_scpsys_data, 568 + }, 569 + { 571 570 .compatible = "mediatek,mt8192-power-controller", 572 571 .data = &mt8192_scpsys_data, 572 + }, 573 + { 574 + .compatible = "mediatek,mt8195-power-controller", 575 + .data = &mt8195_scpsys_data, 573 576 }, 574 577 { } 575 578 };
+3 -5
drivers/soc/mediatek/mtk-pm-domains.h
··· 37 37 #define PWR_STATUS_AUDIO BIT(24) 38 38 #define PWR_STATUS_USB BIT(25) 39 39 40 - #define SPM_MAX_BUS_PROT_DATA 5 40 + #define SPM_MAX_BUS_PROT_DATA 6 41 41 42 42 #define _BUS_PROT(_mask, _set, _clr, _sta, _update, _ignore) { \ 43 43 .bus_prot_mask = (_mask), \ ··· 72 72 bool ignore_clr_ack; 73 73 }; 74 74 75 - #define MAX_SUBSYS_CLKS 10 76 - 77 75 /** 78 76 * struct scpsys_domain_data - scp domain data for power on/off flow 79 77 * @name: The name of the power domain. ··· 92 94 u8 caps; 93 95 const struct scpsys_bus_prot_data bp_infracfg[SPM_MAX_BUS_PROT_DATA]; 94 96 const struct scpsys_bus_prot_data bp_smi[SPM_MAX_BUS_PROT_DATA]; 97 + int pwr_sta_offs; 98 + int pwr_sta2nd_offs; 95 99 }; 96 100 97 101 struct scpsys_soc_data { 98 102 const struct scpsys_domain_data *domains_data; 99 103 int num_domains; 100 - int pwr_sta_offs; 101 - int pwr_sta2nd_offs; 102 104 }; 103 105 104 106 #endif /* __SOC_MEDIATEK_MTK_PM_DOMAINS_H */
+71
drivers/soc/mediatek/mtk-pmic-wrap.c
··· 30 30 #define PWRAP_GET_WACS_REQ(x) (((x) >> 19) & 0x00000001) 31 31 #define PWRAP_STATE_SYNC_IDLE0 BIT(20) 32 32 #define PWRAP_STATE_INIT_DONE0 BIT(21) 33 + #define PWRAP_STATE_INIT_DONE0_MT8186 BIT(22) 33 34 #define PWRAP_STATE_INIT_DONE1 BIT(15) 34 35 35 36 /* macro for WACS FSM */ ··· 78 77 #define PWRAP_CAP_INT1_EN BIT(3) 79 78 #define PWRAP_CAP_WDT_SRC1 BIT(4) 80 79 #define PWRAP_CAP_ARB BIT(5) 80 + #define PWRAP_CAP_ARB_MT8186 BIT(8) 81 81 82 82 /* defines for slave device wrapper registers */ 83 83 enum dew_regs { ··· 1065 1063 [PWRAP_MSB_FIRST] = 0x170, 1066 1064 }; 1067 1065 1066 + static int mt8186_regs[] = { 1067 + [PWRAP_MUX_SEL] = 0x0, 1068 + [PWRAP_WRAP_EN] = 0x4, 1069 + [PWRAP_DIO_EN] = 0x8, 1070 + [PWRAP_RDDMY] = 0x20, 1071 + [PWRAP_CSHEXT_WRITE] = 0x24, 1072 + [PWRAP_CSHEXT_READ] = 0x28, 1073 + [PWRAP_CSLEXT_WRITE] = 0x2C, 1074 + [PWRAP_CSLEXT_READ] = 0x30, 1075 + [PWRAP_EXT_CK_WRITE] = 0x34, 1076 + [PWRAP_STAUPD_CTRL] = 0x3C, 1077 + [PWRAP_STAUPD_GRPEN] = 0x40, 1078 + [PWRAP_EINT_STA0_ADR] = 0x44, 1079 + [PWRAP_EINT_STA1_ADR] = 0x48, 1080 + [PWRAP_INT_CLR] = 0xC8, 1081 + [PWRAP_INT_FLG] = 0xC4, 1082 + [PWRAP_MAN_EN] = 0x7C, 1083 + [PWRAP_MAN_CMD] = 0x80, 1084 + [PWRAP_WACS0_EN] = 0x8C, 1085 + [PWRAP_WACS1_EN] = 0x94, 1086 + [PWRAP_WACS2_EN] = 0x9C, 1087 + [PWRAP_INIT_DONE0] = 0x90, 1088 + [PWRAP_INIT_DONE1] = 0x98, 1089 + [PWRAP_INIT_DONE2] = 0xA0, 1090 + [PWRAP_INT_EN] = 0xBC, 1091 + [PWRAP_INT1_EN] = 0xCC, 1092 + [PWRAP_INT1_FLG] = 0xD4, 1093 + [PWRAP_INT1_CLR] = 0xD8, 1094 + [PWRAP_TIMER_EN] = 0xF0, 1095 + [PWRAP_WDT_UNIT] = 0xF8, 1096 + [PWRAP_WDT_SRC_EN] = 0xFC, 1097 + [PWRAP_WDT_SRC_EN_1] = 0x100, 1098 + [PWRAP_WDT_FLG] = 0x104, 1099 + [PWRAP_SPMINF_STA] = 0x1B4, 1100 + [PWRAP_DCM_EN] = 0x1EC, 1101 + [PWRAP_DCM_DBC_PRD] = 0x1F0, 1102 + [PWRAP_GPSINF_0_STA] = 0x204, 1103 + [PWRAP_GPSINF_1_STA] = 0x208, 1104 + [PWRAP_WACS0_CMD] = 0xC00, 1105 + [PWRAP_WACS0_RDATA] = 0xC04, 1106 + [PWRAP_WACS0_VLDCLR] = 0xC08, 1107 + [PWRAP_WACS1_CMD] = 
0xC10, 1108 + [PWRAP_WACS1_RDATA] = 0xC14, 1109 + [PWRAP_WACS1_VLDCLR] = 0xC18, 1110 + [PWRAP_WACS2_CMD] = 0xC20, 1111 + [PWRAP_WACS2_RDATA] = 0xC24, 1112 + [PWRAP_WACS2_VLDCLR] = 0xC28, 1113 + }; 1114 + 1068 1115 enum pmic_type { 1069 1116 PMIC_MT6323, 1070 1117 PMIC_MT6351, ··· 1134 1083 PWRAP_MT8135, 1135 1084 PWRAP_MT8173, 1136 1085 PWRAP_MT8183, 1086 + PWRAP_MT8186, 1137 1087 PWRAP_MT8195, 1138 1088 PWRAP_MT8516, 1139 1089 }; ··· 1587 1535 case PWRAP_MT6779: 1588 1536 case PWRAP_MT6797: 1589 1537 case PWRAP_MT8173: 1538 + case PWRAP_MT8186: 1590 1539 case PWRAP_MT8516: 1591 1540 pwrap_writel(wrp, 1, PWRAP_CIPHER_EN); 1592 1541 break; ··· 2122 2069 .init_soc_specific = NULL, 2123 2070 }; 2124 2071 2072 + static struct pmic_wrapper_type pwrap_mt8186 = { 2073 + .regs = mt8186_regs, 2074 + .type = PWRAP_MT8186, 2075 + .arb_en_all = 0xfb27f, 2076 + .int_en_all = 0xfffffffe, /* disable WatchDog Timeout for bit 1 */ 2077 + .int1_en_all = 0x000017ff, /* disable Matching interrupt for bit 13 */ 2078 + .spi_w = PWRAP_MAN_CMD_SPI_WRITE, 2079 + .wdt_src = PWRAP_WDT_SRC_MASK_ALL, 2080 + .caps = PWRAP_CAP_INT1_EN | PWRAP_CAP_ARB_MT8186, 2081 + .init_reg_clock = pwrap_common_init_reg_clock, 2082 + .init_soc_specific = NULL, 2083 + }; 2084 + 2125 2085 static const struct of_device_id of_pwrap_match_tbl[] = { 2126 2086 { 2127 2087 .compatible = "mediatek,mt2701-pwrap", ··· 2163 2097 }, { 2164 2098 .compatible = "mediatek,mt8183-pwrap", 2165 2099 .data = &pwrap_mt8183, 2100 + }, { 2101 + .compatible = "mediatek,mt8186-pwrap", 2102 + .data = &pwrap_mt8186, 2166 2103 }, { 2167 2104 .compatible = "mediatek,mt8195-pwrap", 2168 2105 .data = &pwrap_mt8195, ··· 2278 2209 2279 2210 if (HAS_CAP(wrp->master->caps, PWRAP_CAP_ARB)) 2280 2211 mask_done = PWRAP_STATE_INIT_DONE1; 2212 + else if (HAS_CAP(wrp->master->caps, PWRAP_CAP_ARB_MT8186)) 2213 + mask_done = PWRAP_STATE_INIT_DONE0_MT8186; 2281 2214 else 2282 2215 mask_done = PWRAP_STATE_INIT_DONE0; 2283 2216
+8 -5
drivers/soc/microchip/mpfs-sys-controller.c
··· 95 95 { 96 96 struct device *dev = &pdev->dev; 97 97 struct mpfs_sys_controller *sys_controller; 98 - int i; 98 + int i, ret; 99 99 100 - sys_controller = devm_kzalloc(dev, sizeof(*sys_controller), GFP_KERNEL); 100 + sys_controller = kzalloc(sizeof(*sys_controller), GFP_KERNEL); 101 101 if (!sys_controller) 102 102 return -ENOMEM; 103 103 ··· 106 106 sys_controller->client.tx_block = 1U; 107 107 108 108 sys_controller->chan = mbox_request_channel(&sys_controller->client, 0); 109 - if (IS_ERR(sys_controller->chan)) 110 - return dev_err_probe(dev, PTR_ERR(sys_controller->chan), 111 - "Failed to get mbox channel\n"); 109 + if (IS_ERR(sys_controller->chan)) { 110 + ret = dev_err_probe(dev, PTR_ERR(sys_controller->chan), 111 + "Failed to get mbox channel\n"); 112 + kfree(sys_controller); 113 + return ret; 114 + } 112 115 113 116 init_completion(&sys_controller->c); 114 117 kref_init(&sys_controller->consumers);
-1
drivers/soc/qcom/apr.c
··· 653 653 654 654 pdr_handle_release(apr->pdr); 655 655 device_for_each_child(&rpdev->dev, NULL, apr_remove_device); 656 - flush_workqueue(apr->rxwq); 657 656 destroy_workqueue(apr->rxwq); 658 657 } 659 658
+91 -16
drivers/soc/qcom/llcc-qcom.c
··· 29 29 #define ATTR1_FIXED_SIZE_SHIFT 0x03 30 30 #define ATTR1_PRIORITY_SHIFT 0x04 31 31 #define ATTR1_MAX_CAP_SHIFT 0x10 32 - #define ATTR0_RES_WAYS_MASK GENMASK(11, 0) 33 - #define ATTR0_BONUS_WAYS_MASK GENMASK(27, 16) 32 + #define ATTR0_RES_WAYS_MASK GENMASK(15, 0) 33 + #define ATTR0_BONUS_WAYS_MASK GENMASK(31, 16) 34 34 #define ATTR0_BONUS_WAYS_SHIFT 0x10 35 35 #define LLCC_STATUS_READ_DELAY 100 36 36 37 37 #define CACHE_LINE_SIZE_SHIFT 6 38 38 39 - #define LLCC_COMMON_HW_INFO 0x00030000 40 - #define LLCC_MAJOR_VERSION_MASK GENMASK(31, 24) 41 - 42 - #define LLCC_COMMON_STATUS0 0x0003000c 43 39 #define LLCC_LB_CNT_MASK GENMASK(31, 28) 44 40 #define LLCC_LB_CNT_SHIFT 28 45 41 ··· 48 52 #define LLCC_TRP_SCID_DIS_CAP_ALLOC 0x21f00 49 53 #define LLCC_TRP_PCB_ACT 0x21f04 50 54 #define LLCC_TRP_WRSC_EN 0x21f20 55 + #define LLCC_TRP_WRSC_CACHEABLE_EN 0x21f2c 51 56 52 57 #define BANK_OFFSET_STRIDE 0x80000 58 + 59 + #define LLCC_VERSION_2_0_0_0 0x02000000 60 + #define LLCC_VERSION_2_1_0_0 0x02010000 53 61 54 62 /** 55 63 * struct llcc_slice_config - Data associated with the llcc slice ··· 79 79 * collapse. 80 80 * @activate_on_init: Activate the slice immediately after it is programmed 81 81 * @write_scid_en: Bit enables write cache support for a given scid. 82 + * @write_scid_cacheable_en: Enables write cache cacheable support for a 83 + * given scid (not supported on v2 or older hardware). 
82 84 */ 83 85 struct llcc_slice_config { 84 86 u32 usecase_id; ··· 96 94 bool retain_on_pc; 97 95 bool activate_on_init; 98 96 bool write_scid_en; 97 + bool write_scid_cacheable_en; 99 98 }; 100 99 101 100 struct qcom_llcc_config { 102 101 const struct llcc_slice_config *sct_data; 103 102 int size; 104 103 bool need_llcc_cfg; 104 + const u32 *reg_offset; 105 + }; 106 + 107 + enum llcc_reg_offset { 108 + LLCC_COMMON_HW_INFO, 109 + LLCC_COMMON_STATUS0, 105 110 }; 106 111 107 112 static const struct llcc_slice_config sc7180_data[] = { ··· 226 217 { LLCC_CPUHWT, 5, 512, 1, 1, 0xfff, 0x0, 0, 0, 0, 0, 0, 1 }, 227 218 }; 228 219 220 + static const struct llcc_slice_config sm8450_data[] = { 221 + {LLCC_CPUSS, 1, 3072, 1, 0, 0xFFFF, 0x0, 0, 0, 0, 1, 1, 0, 0 }, 222 + {LLCC_VIDSC0, 2, 512, 3, 1, 0xFFFF, 0x0, 0, 0, 0, 1, 0, 0, 0 }, 223 + {LLCC_AUDIO, 6, 1024, 1, 1, 0xFFFF, 0x0, 0, 0, 0, 0, 0, 0, 0 }, 224 + {LLCC_MDMHPGRW, 7, 1024, 3, 0, 0xFFFF, 0x0, 0, 0, 0, 1, 0, 0, 0 }, 225 + {LLCC_MODHW, 9, 1024, 1, 1, 0xFFFF, 0x0, 0, 0, 0, 1, 0, 0, 0 }, 226 + {LLCC_CMPT, 10, 4096, 1, 1, 0xFFFF, 0x0, 0, 0, 0, 1, 0, 0, 0 }, 227 + {LLCC_GPUHTW, 11, 512, 1, 1, 0xFFFF, 0x0, 0, 0, 0, 1, 0, 0, 0 }, 228 + {LLCC_GPU, 12, 2048, 1, 1, 0xFFFF, 0x0, 0, 0, 0, 1, 0, 1, 0 }, 229 + {LLCC_MMUHWT, 13, 768, 1, 1, 0xFFFF, 0x0, 0, 0, 0, 0, 1, 0, 0 }, 230 + {LLCC_DISP, 16, 4096, 2, 1, 0xFFFF, 0x0, 0, 0, 0, 1, 0, 0, 0 }, 231 + {LLCC_MDMPNG, 21, 1024, 1, 1, 0xF000, 0x0, 0, 0, 0, 1, 0, 0, 0 }, 232 + {LLCC_AUDHW, 22, 1024, 1, 1, 0xFFFF, 0x0, 0, 0, 0, 0, 0, 0, 0 }, 233 + {LLCC_CVP, 28, 256, 3, 1, 0xFFFF, 0x0, 0, 0, 0, 1, 0, 0, 0 }, 234 + {LLCC_MODPE, 29, 64, 1, 1, 0xF000, 0x0, 0, 0, 0, 1, 0, 0, 0 }, 235 + {LLCC_APTCM, 30, 1024, 3, 1, 0x0, 0xF0, 1, 0, 0, 1, 0, 0, 0 }, 236 + {LLCC_WRCACHE, 31, 512, 1, 1, 0xFFFF, 0x0, 0, 0, 0, 0, 1, 0, 0 }, 237 + {LLCC_CVPFW, 17, 512, 1, 1, 0xFFFF, 0x0, 0, 0, 0, 1, 0, 0, 0 }, 238 + {LLCC_CPUSS1, 3, 1024, 1, 1, 0xFFFF, 0x0, 0, 0, 0, 1, 0, 0, 0 }, 239 + {LLCC_CAMEXP0, 4, 256, 3, 1, 
0xFFFF, 0x0, 0, 0, 0, 1, 0, 0, 0 }, 240 + {LLCC_CPUMTE, 23, 256, 1, 1, 0x0FFF, 0x0, 0, 0, 0, 0, 1, 0, 0 }, 241 + {LLCC_CPUHWT, 5, 512, 1, 1, 0xFFFF, 0x0, 0, 0, 0, 1, 1, 0, 0 }, 242 + {LLCC_CAMEXP1, 27, 256, 3, 1, 0xFFFF, 0x0, 0, 0, 0, 1, 0, 0, 0 }, 243 + {LLCC_AENPU, 8, 2048, 1, 1, 0xFFFF, 0x0, 0, 0, 0, 0, 0, 0, 0 }, 244 + }; 245 + 246 + static const u32 llcc_v1_2_reg_offset[] = { 247 + [LLCC_COMMON_HW_INFO] = 0x00030000, 248 + [LLCC_COMMON_STATUS0] = 0x0003000c, 249 + }; 250 + 251 + static const u32 llcc_v21_reg_offset[] = { 252 + [LLCC_COMMON_HW_INFO] = 0x00034000, 253 + [LLCC_COMMON_STATUS0] = 0x0003400c, 254 + }; 255 + 229 256 static const struct qcom_llcc_config sc7180_cfg = { 230 257 .sct_data = sc7180_data, 231 258 .size = ARRAY_SIZE(sc7180_data), 232 259 .need_llcc_cfg = true, 260 + .reg_offset = llcc_v1_2_reg_offset, 233 261 }; 234 262 235 263 static const struct qcom_llcc_config sc7280_cfg = { 236 264 .sct_data = sc7280_data, 237 265 .size = ARRAY_SIZE(sc7280_data), 238 266 .need_llcc_cfg = true, 267 + .reg_offset = llcc_v1_2_reg_offset, 239 268 }; 240 269 241 270 static const struct qcom_llcc_config sdm845_cfg = { 242 271 .sct_data = sdm845_data, 243 272 .size = ARRAY_SIZE(sdm845_data), 244 273 .need_llcc_cfg = false, 274 + .reg_offset = llcc_v1_2_reg_offset, 245 275 }; 246 276 247 277 static const struct qcom_llcc_config sm6350_cfg = { 248 278 .sct_data = sm6350_data, 249 279 .size = ARRAY_SIZE(sm6350_data), 280 + .need_llcc_cfg = true, 281 + .reg_offset = llcc_v1_2_reg_offset, 250 282 }; 251 283 252 284 static const struct qcom_llcc_config sm8150_cfg = { 253 285 .sct_data = sm8150_data, 254 286 .size = ARRAY_SIZE(sm8150_data), 287 + .need_llcc_cfg = true, 288 + .reg_offset = llcc_v1_2_reg_offset, 255 289 }; 256 290 257 291 static const struct qcom_llcc_config sm8250_cfg = { 258 292 .sct_data = sm8250_data, 259 293 .size = ARRAY_SIZE(sm8250_data), 294 + .need_llcc_cfg = true, 295 + .reg_offset = llcc_v1_2_reg_offset, 260 296 }; 261 297 262 298 static 
const struct qcom_llcc_config sm8350_cfg = { 263 299 .sct_data = sm8350_data, 264 300 .size = ARRAY_SIZE(sm8350_data), 301 + .need_llcc_cfg = true, 302 + .reg_offset = llcc_v1_2_reg_offset, 303 + }; 304 + 305 + static const struct qcom_llcc_config sm8450_cfg = { 306 + .sct_data = sm8450_data, 307 + .size = ARRAY_SIZE(sm8450_data), 308 + .need_llcc_cfg = true, 309 + .reg_offset = llcc_v21_reg_offset, 265 310 }; 266 311 267 312 static struct llcc_drv_data *drv_data = (void *) -EPROBE_DEFER; ··· 567 504 return ret; 568 505 } 569 506 570 - if (drv_data->major_version == 2) { 507 + if (drv_data->version >= LLCC_VERSION_2_0_0_0) { 571 508 u32 wren; 572 509 573 510 wren = config->write_scid_en << config->slice_id; 574 511 ret = regmap_update_bits(drv_data->bcast_regmap, LLCC_TRP_WRSC_EN, 575 512 BIT(config->slice_id), wren); 513 + if (ret) 514 + return ret; 515 + } 516 + 517 + if (drv_data->version >= LLCC_VERSION_2_1_0_0) { 518 + u32 wr_cache_en; 519 + 520 + wr_cache_en = config->write_scid_cacheable_en << config->slice_id; 521 + ret = regmap_update_bits(drv_data->bcast_regmap, LLCC_TRP_WRSC_CACHEABLE_EN, 522 + BIT(config->slice_id), wr_cache_en); 576 523 if (ret) 577 524 return ret; 578 525 } ··· 671 598 goto err; 672 599 } 673 600 674 - /* Extract major version of the IP */ 675 - ret = regmap_read(drv_data->bcast_regmap, LLCC_COMMON_HW_INFO, &version); 601 + cfg = of_device_get_match_data(&pdev->dev); 602 + 603 + /* Extract version of the IP */ 604 + ret = regmap_read(drv_data->bcast_regmap, cfg->reg_offset[LLCC_COMMON_HW_INFO], 605 + &version); 676 606 if (ret) 677 607 goto err; 678 608 679 - drv_data->major_version = FIELD_GET(LLCC_MAJOR_VERSION_MASK, version); 609 + drv_data->version = version; 680 610 681 - ret = regmap_read(drv_data->regmap, LLCC_COMMON_STATUS0, 682 - &num_banks); 611 + ret = regmap_read(drv_data->regmap, cfg->reg_offset[LLCC_COMMON_STATUS0], 612 + &num_banks); 683 613 if (ret) 684 614 goto err; 685 615 ··· 690 614 num_banks >>= LLCC_LB_CNT_SHIFT; 
691 615 drv_data->num_banks = num_banks; 692 616 693 - cfg = of_device_get_match_data(&pdev->dev); 694 617 llcc_cfg = cfg->sct_data; 695 618 sz = cfg->size; 696 619 ··· 707 632 for (i = 0; i < num_banks; i++) 708 633 drv_data->offsets[i] = i * BANK_OFFSET_STRIDE; 709 634 710 - drv_data->bitmap = devm_kcalloc(dev, 711 - BITS_TO_LONGS(drv_data->max_slices), sizeof(unsigned long), 712 - GFP_KERNEL); 635 + drv_data->bitmap = devm_bitmap_zalloc(dev, drv_data->max_slices, 636 + GFP_KERNEL); 713 637 if (!drv_data->bitmap) { 714 638 ret = -ENOMEM; 715 639 goto err; ··· 746 672 { .compatible = "qcom,sm8150-llcc", .data = &sm8150_cfg }, 747 673 { .compatible = "qcom,sm8250-llcc", .data = &sm8250_cfg }, 748 674 { .compatible = "qcom,sm8350-llcc", .data = &sm8350_cfg }, 675 + { .compatible = "qcom,sm8450-llcc", .data = &sm8450_cfg }, 749 676 { } 750 677 }; 751 678
+151 -81
drivers/soc/qcom/mdt_loader.c
··· 31 31 return true; 32 32 } 33 33 34 + static ssize_t mdt_load_split_segment(void *ptr, const struct elf32_phdr *phdrs, 35 + unsigned int segment, const char *fw_name, 36 + struct device *dev) 37 + { 38 + const struct elf32_phdr *phdr = &phdrs[segment]; 39 + const struct firmware *seg_fw; 40 + char *seg_name; 41 + ssize_t ret; 42 + 43 + if (strlen(fw_name) < 4) 44 + return -EINVAL; 45 + 46 + seg_name = kstrdup(fw_name, GFP_KERNEL); 47 + if (!seg_name) 48 + return -ENOMEM; 49 + 50 + sprintf(seg_name + strlen(fw_name) - 3, "b%02d", segment); 51 + ret = request_firmware_into_buf(&seg_fw, seg_name, dev, 52 + ptr, phdr->p_filesz); 53 + if (ret) { 54 + dev_err(dev, "error %zd loading %s\n", ret, seg_name); 55 + kfree(seg_name); 56 + return ret; 57 + } 58 + 59 + if (seg_fw->size != phdr->p_filesz) { 60 + dev_err(dev, 61 + "failed to load segment %d from truncated file %s\n", 62 + segment, seg_name); 63 + ret = -EINVAL; 64 + } 65 + 66 + release_firmware(seg_fw); 67 + kfree(seg_name); 68 + 69 + return ret; 70 + } 71 + 34 72 /** 35 73 * qcom_mdt_get_size() - acquire size of the memory region needed to load mdt 36 74 * @fw: firmware object for the mdt file ··· 121 83 * 122 84 * Return: pointer to data, or ERR_PTR() 123 85 */ 124 - void *qcom_mdt_read_metadata(const struct firmware *fw, size_t *data_len) 86 + void *qcom_mdt_read_metadata(const struct firmware *fw, size_t *data_len, 87 + const char *fw_name, struct device *dev) 125 88 { 126 89 const struct elf32_phdr *phdrs; 127 90 const struct elf32_hdr *ehdr; 91 + unsigned int hash_segment = 0; 128 92 size_t hash_offset; 129 93 size_t hash_size; 130 94 size_t ehdr_size; 95 + unsigned int i; 96 + ssize_t ret; 131 97 void *data; 132 98 133 99 ehdr = (struct elf32_hdr *)fw->data; ··· 143 101 if (phdrs[0].p_type == PT_LOAD) 144 102 return ERR_PTR(-EINVAL); 145 103 146 - if ((phdrs[1].p_flags & QCOM_MDT_TYPE_MASK) != QCOM_MDT_TYPE_HASH) 104 + for (i = 1; i < ehdr->e_phnum; i++) { 105 + if ((phdrs[i].p_flags & 
QCOM_MDT_TYPE_MASK) == QCOM_MDT_TYPE_HASH) { 106 + hash_segment = i; 107 + break; 108 + } 109 + } 110 + 111 + if (!hash_segment) { 112 + dev_err(dev, "no hash segment found in %s\n", fw_name); 147 113 return ERR_PTR(-EINVAL); 114 + } 148 115 149 116 ehdr_size = phdrs[0].p_filesz; 150 - hash_size = phdrs[1].p_filesz; 117 + hash_size = phdrs[hash_segment].p_filesz; 151 118 152 119 data = kmalloc(ehdr_size + hash_size, GFP_KERNEL); 153 120 if (!data) 154 121 return ERR_PTR(-ENOMEM); 155 122 156 - /* Is the header and hash already packed */ 157 - if (ehdr_size + hash_size == fw->size) 158 - hash_offset = phdrs[0].p_filesz; 159 - else 160 - hash_offset = phdrs[1].p_offset; 161 - 123 + /* Copy ELF header */ 162 124 memcpy(data, fw->data, ehdr_size); 163 - memcpy(data + ehdr_size, fw->data + hash_offset, hash_size); 125 + 126 + if (ehdr_size + hash_size == fw->size) { 127 + /* Firmware is split and hash is packed following the ELF header */ 128 + hash_offset = phdrs[0].p_filesz; 129 + memcpy(data + ehdr_size, fw->data + hash_offset, hash_size); 130 + } else if (phdrs[hash_segment].p_offset + hash_size <= fw->size) { 131 + /* Hash is in its own segment, but within the loaded file */ 132 + hash_offset = phdrs[hash_segment].p_offset; 133 + memcpy(data + ehdr_size, fw->data + hash_offset, hash_size); 134 + } else { 135 + /* Hash is in its own segment, beyond the loaded file */ 136 + ret = mdt_load_split_segment(data + ehdr_size, phdrs, hash_segment, fw_name, dev); 137 + if (ret) { 138 + kfree(data); 139 + return ERR_PTR(ret); 140 + } 141 + } 164 142 165 143 *data_len = ehdr_size + hash_size; 166 144 ··· 188 126 } 189 127 EXPORT_SYMBOL_GPL(qcom_mdt_read_metadata); 190 128 129 + /** 130 + * qcom_mdt_pas_init() - initialize PAS region for firmware loading 131 + * @dev: device handle to associate resources with 132 + * @fw: firmware object for the mdt file 133 + * @firmware: name of the firmware, for construction of segment file names 134 + * @pas_id: PAS identifier 135 + * 
@mem_phys: physical address of allocated memory region 136 + * @ctx: PAS metadata context, to be released by caller 137 + * 138 + * Returns 0 on success, negative errno otherwise. 139 + */ 140 + int qcom_mdt_pas_init(struct device *dev, const struct firmware *fw, 141 + const char *fw_name, int pas_id, phys_addr_t mem_phys, 142 + struct qcom_scm_pas_metadata *ctx) 143 + { 144 + const struct elf32_phdr *phdrs; 145 + const struct elf32_phdr *phdr; 146 + const struct elf32_hdr *ehdr; 147 + phys_addr_t min_addr = PHYS_ADDR_MAX; 148 + phys_addr_t max_addr = 0; 149 + size_t metadata_len; 150 + void *metadata; 151 + int ret; 152 + int i; 153 + 154 + ehdr = (struct elf32_hdr *)fw->data; 155 + phdrs = (struct elf32_phdr *)(ehdr + 1); 156 + 157 + for (i = 0; i < ehdr->e_phnum; i++) { 158 + phdr = &phdrs[i]; 159 + 160 + if (!mdt_phdr_valid(phdr)) 161 + continue; 162 + 163 + if (phdr->p_paddr < min_addr) 164 + min_addr = phdr->p_paddr; 165 + 166 + if (phdr->p_paddr + phdr->p_memsz > max_addr) 167 + max_addr = ALIGN(phdr->p_paddr + phdr->p_memsz, SZ_4K); 168 + } 169 + 170 + metadata = qcom_mdt_read_metadata(fw, &metadata_len, fw_name, dev); 171 + if (IS_ERR(metadata)) { 172 + ret = PTR_ERR(metadata); 173 + dev_err(dev, "error %d reading firmware %s metadata\n", ret, fw_name); 174 + goto out; 175 + } 176 + 177 + ret = qcom_scm_pas_init_image(pas_id, metadata, metadata_len, ctx); 178 + kfree(metadata); 179 + if (ret) { 180 + /* Invalid firmware metadata */ 181 + dev_err(dev, "error %d initializing firmware %s\n", ret, fw_name); 182 + goto out; 183 + } 184 + 185 + ret = qcom_scm_pas_mem_setup(pas_id, mem_phys, max_addr - min_addr); 186 + if (ret) { 187 + /* Unable to set up relocation */ 188 + dev_err(dev, "error %d setting up firmware %s\n", ret, fw_name); 189 + goto out; 190 + } 191 + 192 + out: 193 + return ret; 194 + } 195 + EXPORT_SYMBOL_GPL(qcom_mdt_pas_init); 196 + 191 197 static int __qcom_mdt_load(struct device *dev, const struct firmware *fw, 192 - const char *firmware, 
int pas_id, void *mem_region, 198 + const char *fw_name, int pas_id, void *mem_region, 193 199 phys_addr_t mem_phys, size_t mem_size, 194 200 phys_addr_t *reloc_base, bool pas_init) 195 201 { 196 202 const struct elf32_phdr *phdrs; 197 203 const struct elf32_phdr *phdr; 198 204 const struct elf32_hdr *ehdr; 199 - const struct firmware *seg_fw; 200 205 phys_addr_t mem_reloc; 201 206 phys_addr_t min_addr = PHYS_ADDR_MAX; 202 - phys_addr_t max_addr = 0; 203 - size_t metadata_len; 204 - size_t fw_name_len; 205 207 ssize_t offset; 206 - void *metadata; 207 - char *fw_name; 208 208 bool relocate = false; 209 209 void *ptr; 210 210 int ret = 0; ··· 277 153 278 154 ehdr = (struct elf32_hdr *)fw->data; 279 155 phdrs = (struct elf32_phdr *)(ehdr + 1); 280 - 281 - fw_name_len = strlen(firmware); 282 - if (fw_name_len <= 4) 283 - return -EINVAL; 284 - 285 - fw_name = kstrdup(firmware, GFP_KERNEL); 286 - if (!fw_name) 287 - return -ENOMEM; 288 - 289 - if (pas_init) { 290 - metadata = qcom_mdt_read_metadata(fw, &metadata_len); 291 - if (IS_ERR(metadata)) { 292 - ret = PTR_ERR(metadata); 293 - dev_err(dev, "error %d reading firmware %s metadata\n", 294 - ret, fw_name); 295 - goto out; 296 - } 297 - 298 - ret = qcom_scm_pas_init_image(pas_id, metadata, metadata_len); 299 - 300 - kfree(metadata); 301 - if (ret) { 302 - /* Invalid firmware metadata */ 303 - dev_err(dev, "error %d initializing firmware %s\n", 304 - ret, fw_name); 305 - goto out; 306 - } 307 - } 308 156 309 157 for (i = 0; i < ehdr->e_phnum; i++) { 310 158 phdr = &phdrs[i]; ··· 289 193 290 194 if (phdr->p_paddr < min_addr) 291 195 min_addr = phdr->p_paddr; 292 - 293 - if (phdr->p_paddr + phdr->p_memsz > max_addr) 294 - max_addr = ALIGN(phdr->p_paddr + phdr->p_memsz, SZ_4K); 295 196 } 296 197 297 198 if (relocate) { 298 - if (pas_init) { 299 - ret = qcom_scm_pas_mem_setup(pas_id, mem_phys, 300 - max_addr - min_addr); 301 - if (ret) { 302 - /* Unable to set up relocation */ 303 - dev_err(dev, "error %d setting up 
firmware %s\n", 304 - ret, fw_name); 305 - goto out; 306 - } 307 - } 308 - 309 199 /* 310 200 * The image is relocatable, so offset each segment based on 311 201 * the lowest segment address. ··· 328 246 329 247 ptr = mem_region + offset; 330 248 331 - if (phdr->p_filesz && phdr->p_offset < fw->size) { 249 + if (phdr->p_filesz && phdr->p_offset < fw->size && 250 + phdr->p_offset + phdr->p_filesz <= fw->size) { 332 251 /* Firmware is large enough to be non-split */ 333 252 if (phdr->p_offset + phdr->p_filesz > fw->size) { 334 253 dev_err(dev, "file %s segment %d would be truncated\n", ··· 341 258 memcpy(ptr, fw->data + phdr->p_offset, phdr->p_filesz); 342 259 } else if (phdr->p_filesz) { 343 260 /* Firmware not large enough, load split-out segments */ 344 - sprintf(fw_name + fw_name_len - 3, "b%02d", i); 345 - ret = request_firmware_into_buf(&seg_fw, fw_name, dev, 346 - ptr, phdr->p_filesz); 347 - if (ret) { 348 - dev_err(dev, "error %d loading %s\n", 349 - ret, fw_name); 261 + ret = mdt_load_split_segment(ptr, phdrs, i, fw_name, dev); 262 + if (ret) 350 263 break; 351 - } 352 - 353 - if (seg_fw->size != phdr->p_filesz) { 354 - dev_err(dev, 355 - "failed to load segment %d from truncated file %s\n", 356 - i, fw_name); 357 - release_firmware(seg_fw); 358 - ret = -EINVAL; 359 - break; 360 - } 361 - 362 - release_firmware(seg_fw); 363 264 } 364 265 365 266 if (phdr->p_memsz > phdr->p_filesz) ··· 352 285 353 286 if (reloc_base) 354 287 *reloc_base = mem_reloc; 355 - 356 - out: 357 - kfree(fw_name); 358 288 359 289 return ret; 360 290 } ··· 374 310 phys_addr_t mem_phys, size_t mem_size, 375 311 phys_addr_t *reloc_base) 376 312 { 313 + int ret; 314 + 315 + ret = qcom_mdt_pas_init(dev, fw, firmware, pas_id, mem_phys, NULL); 316 + if (ret) 317 + return ret; 318 + 377 319 return __qcom_mdt_load(dev, fw, firmware, pas_id, mem_region, mem_phys, 378 320 mem_size, reloc_base, true); 379 321 }
+1
drivers/soc/qcom/ocmem.c
··· 206 206 ocmem = platform_get_drvdata(pdev); 207 207 if (!ocmem) { 208 208 dev_err(dev, "Cannot get ocmem\n"); 209 + put_device(&pdev->dev); 209 210 return ERR_PTR(-ENODEV); 210 211 } 211 212 return ocmem;
+6 -2
drivers/soc/qcom/qcom_aoss.c
··· 451 451 452 452 qmp = platform_get_drvdata(pdev); 453 453 454 - return qmp ? qmp : ERR_PTR(-EPROBE_DEFER); 454 + if (!qmp) { 455 + put_device(&pdev->dev); 456 + return ERR_PTR(-EPROBE_DEFER); 457 + } 458 + return qmp; 455 459 } 456 460 EXPORT_SYMBOL(qmp_get); 457 461 ··· 501 497 } 502 498 503 499 irq = platform_get_irq(pdev, 0); 504 - ret = devm_request_irq(&pdev->dev, irq, qmp_intr, IRQF_ONESHOT, 500 + ret = devm_request_irq(&pdev->dev, irq, qmp_intr, 0, 505 501 "aoss-qmp", qmp); 506 502 if (ret < 0) { 507 503 dev_err(&pdev->dev, "failed to request interrupt\n");
+20
drivers/soc/qcom/rpmpd.c
··· 138 138 .max_state = RPM_SMD_LEVEL_TURBO, 139 139 }; 140 140 141 + /* msm8226 RPM Power Domains */ 142 + DEFINE_RPMPD_PAIR(msm8226, vddcx, vddcx_ao, SMPA, CORNER, 1); 143 + DEFINE_RPMPD_VFC(msm8226, vddcx_vfc, SMPA, 1); 144 + 145 + static struct rpmpd *msm8226_rpmpds[] = { 146 + [MSM8226_VDDCX] = &msm8226_vddcx, 147 + [MSM8226_VDDCX_AO] = &msm8226_vddcx_ao, 148 + [MSM8226_VDDCX_VFC] = &msm8226_vddcx_vfc, 149 + }; 150 + 151 + static const struct rpmpd_desc msm8226_desc = { 152 + .rpmpds = msm8226_rpmpds, 153 + .num_pds = ARRAY_SIZE(msm8226_rpmpds), 154 + .max_state = MAX_CORNER_RPMPD_STATE, 155 + }; 156 + 141 157 /* msm8939 RPM Power Domains */ 142 158 DEFINE_RPMPD_PAIR(msm8939, vddmd, vddmd_ao, SMPA, CORNER, 1); 143 159 DEFINE_RPMPD_VFC(msm8939, vddmd_vfc, SMPA, 1); ··· 452 436 453 437 static const struct of_device_id rpmpd_match_table[] = { 454 438 { .compatible = "qcom,mdm9607-rpmpd", .data = &mdm9607_desc }, 439 + { .compatible = "qcom,msm8226-rpmpd", .data = &msm8226_desc }, 455 440 { .compatible = "qcom,msm8916-rpmpd", .data = &msm8916_desc }, 456 441 { .compatible = "qcom,msm8939-rpmpd", .data = &msm8939_desc }, 457 442 { .compatible = "qcom,msm8953-rpmpd", .data = &msm8953_desc }, ··· 627 610 628 611 data->domains = devm_kcalloc(&pdev->dev, num, sizeof(*data->domains), 629 612 GFP_KERNEL); 613 + if (!data->domains) 614 + return -ENOMEM; 615 + 630 616 data->num_domains = num; 631 617 632 618 for (i = 0; i < num; i++) {
+12
drivers/soc/qcom/socinfo.c
··· 104 104 [36] = "PM8009", 105 105 [38] = "PM8150C", 106 106 [41] = "SMB2351", 107 + [47] = "PMK8350", 108 + [48] = "PM8350", 109 + [49] = "PM8350C", 110 + [50] = "PM8350B", 111 + [51] = "PMR735A", 112 + [52] = "PMR735B", 113 + [58] = "PM8450", 114 + [65] = "PM8010", 107 115 }; 108 116 #endif /* CONFIG_DEBUG_FS */ 109 117 ··· 322 314 { 422, "IPQ6010" }, 323 315 { 425, "SC7180" }, 324 316 { 434, "SM6350" }, 317 + { 439, "SM8350" }, 318 + { 449, "SC8280XP" }, 325 319 { 453, "IPQ6005" }, 326 320 { 455, "QRB5165" }, 327 321 { 457, "SM8450" }, 328 322 { 459, "SM7225" }, 323 + { 460, "SA8540P" }, 324 + { 480, "SM8450" }, 329 325 }; 330 326 331 327 static const char *socinfo_machine(struct device *dev, unsigned int id)
drivers/soc/renesas/Kconfig (+12)
···
 	select SYS_SUPPORTS_SH_TMU
 	select SYSC_RMOBILE
 
+config ARCH_RZG2L
+	bool
+	select PM
+	select PM_GENERIC_DOMAINS
+
 config ARCH_RZN1
 	bool
 	select ARM_AMBA
···
 config ARCH_R9A07G044
 	bool "ARM64 Platform support for RZ/G2L"
+	select ARCH_RZG2L
 	help
 	  This enables support for the Renesas RZ/G2L SoC variants.
+
+config ARCH_R9A07G054
+	bool "ARM64 Platform support for RZ/V2L"
+	select ARCH_RZG2L
+	help
+	  This enables support for the Renesas RZ/V2L SoC variants.
 
 endif # ARM64
drivers/soc/renesas/renesas-soc.c (+46 -22)
···
 	.name	= "RZ/G2L",
 };
 
+static const struct renesas_family fam_rzv2l __initconst __maybe_unused = {
+	.name	= "RZ/V2L",
+};
+
 static const struct renesas_family fam_shmobile __initconst __maybe_unused = {
 	.name	= "SH-Mobile",
 	.reg	= 0xe600101c,	/* CCCR (Common Chip Code Register) */
···
 static const struct renesas_soc soc_rz_g2l __initconst __maybe_unused = {
 	.family	= &fam_rzg2l,
 	.id	= 0x841c447,
+};
+
+static const struct renesas_soc soc_rz_v2l __initconst __maybe_unused = {
+	.family	= &fam_rzv2l,
+	.id	= 0x8447447,
 };
 
 static const struct renesas_soc soc_rcar_m1a __initconst __maybe_unused = {
···
 #if defined(CONFIG_ARCH_R9A07G044)
 	{ .compatible = "renesas,r9a07g044", .data = &soc_rz_g2l },
 #endif
+#if defined(CONFIG_ARCH_R9A07G054)
+	{ .compatible = "renesas,r9a07g054", .data = &soc_rz_v2l },
+#endif
 #ifdef CONFIG_ARCH_SH73A0
 	{ .compatible = "renesas,sh73a0", .data = &soc_shmobile_ag5 },
 #endif
···
 static const struct of_device_id renesas_ids[] __initconst = {
 	{ .compatible = "renesas,bsid", .data = &id_bsid },
 	{ .compatible = "renesas,r9a07g044-sysc", .data = &id_rzg2l },
+	{ .compatible = "renesas,r9a07g054-sysc", .data = &id_rzg2l },
 	{ .compatible = "renesas,prr", .data = &id_prr },
 	{ /* sentinel */ }
 };
···
 	const struct renesas_soc *soc;
 	const struct renesas_id *id;
 	void __iomem *chipid = NULL;
+	const char *rev_prefix = "";
 	struct soc_device *soc_dev;
 	struct device_node *np;
 	const char *soc_id;
+	int ret;
 
 	match = of_match_node(renesas_socs, of_root);
 	if (!match)
···
 		chipid = ioremap(family->reg, 4);
 	}
 
+	soc_dev_attr = kzalloc(sizeof(*soc_dev_attr), GFP_KERNEL);
+	if (!soc_dev_attr)
+		return -ENOMEM;
+
+	np = of_find_node_by_path("/");
+	of_property_read_string(np, "model", &soc_dev_attr->machine);
+	of_node_put(np);
+
+	soc_dev_attr->family = kstrdup_const(family->name, GFP_KERNEL);
+	soc_dev_attr->soc_id = kstrdup_const(soc_id, GFP_KERNEL);
+
 	if (chipid) {
 		product = readl(chipid + id->offset);
 		iounmap(chipid);
···
 		eshi = ((product >> 4) & 0x0f) + 1;
 		eslo = product & 0xf;
+		soc_dev_attr->revision = kasprintf(GFP_KERNEL, "ES%u.%u",
+						   eshi, eslo);
+	} else if (id == &id_rzg2l) {
+		eshi = ((product >> 28) & 0x0f);
+		soc_dev_attr->revision = kasprintf(GFP_KERNEL, "%u",
+						   eshi);
+		rev_prefix = "Rev ";
 	}
 
 	if (soc->id &&
 	    ((product & id->mask) >> __ffs(id->mask)) != soc->id) {
 		pr_warn("SoC mismatch (product = 0x%x)\n", product);
-		return -ENODEV;
+		ret = -ENODEV;
+		goto free_soc_dev_attr;
 	}
 	}
 
-	soc_dev_attr = kzalloc(sizeof(*soc_dev_attr), GFP_KERNEL);
-	if (!soc_dev_attr)
-		return -ENOMEM;
-
-	np = of_find_node_by_path("/");
-	of_property_read_string(np, "model", &soc_dev_attr->machine);
-	of_node_put(np);
-
-	soc_dev_attr->family = kstrdup_const(family->name, GFP_KERNEL);
-	soc_dev_attr->soc_id = kstrdup_const(soc_id, GFP_KERNEL);
-	if (eshi)
-		soc_dev_attr->revision = kasprintf(GFP_KERNEL, "ES%u.%u", eshi,
-						   eslo);
-
-	pr_info("Detected Renesas %s %s %s\n", soc_dev_attr->family,
-		soc_dev_attr->soc_id, soc_dev_attr->revision ?: "");
+	pr_info("Detected Renesas %s %s %s%s\n", soc_dev_attr->family,
+		soc_dev_attr->soc_id, rev_prefix, soc_dev_attr->revision ?: "");
 
 	soc_dev = soc_device_register(soc_dev_attr);
 	if (IS_ERR(soc_dev)) {
-		kfree(soc_dev_attr->revision);
-		kfree_const(soc_dev_attr->soc_id);
-		kfree_const(soc_dev_attr->family);
-		kfree(soc_dev_attr);
-		return PTR_ERR(soc_dev);
+		ret = PTR_ERR(soc_dev);
+		goto free_soc_dev_attr;
 	}
 
 	return 0;
+
+free_soc_dev_attr:
+	kfree(soc_dev_attr->revision);
+	kfree_const(soc_dev_attr->soc_id);
+	kfree_const(soc_dev_attr->family);
+	kfree(soc_dev_attr);
+	return ret;
 }
 early_initcall(renesas_soc_init);
drivers/soc/tegra/fuse/fuse-tegra.c (+21 -3)
···
 // SPDX-License-Identifier: GPL-2.0-only
 /*
- * Copyright (c) 2013-2014, NVIDIA CORPORATION. All rights reserved.
+ * Copyright (c) 2013-2021, NVIDIA CORPORATION. All rights reserved.
  */
 
 #include <linux/clk.h>
···
 		.bit_offset = 0,
 		.nbits = 32,
 	}, {
+		.name = "gcplex-config-fuse",
+		.offset = 0x1c8,
+		.bytes = 4,
+		.bit_offset = 0,
+		.nbits = 32,
+	}, {
 		.name = "tsensor-realignment",
 		.offset = 0x1fc,
 		.bytes = 4,
···
 		.bytes = 4,
 		.bit_offset = 0,
 		.nbits = 32,
+	}, {
+		.name = "pdi0",
+		.offset = 0x300,
+		.bytes = 4,
+		.bit_offset = 0,
+		.nbits = 32,
+	}, {
+		.name = "pdi1",
+		.offset = 0x304,
+		.bytes = 4,
+		.bit_offset = 0,
+		.nbits = 32,
 	},
 };
 
 static void tegra_fuse_restore(void *base)
 {
+	fuse->base = (void __iomem *)base;
 	fuse->clk = NULL;
-	fuse->base = base;
 }
 
 static int tegra_fuse_probe(struct platform_device *pdev)
···
 	struct resource *res;
 	int err;
 
-	err = devm_add_action(&pdev->dev, tegra_fuse_restore, base);
+	err = devm_add_action(&pdev->dev, tegra_fuse_restore, (void __force *)base);
 	if (err)
 		return err;
drivers/soc/tegra/pmc.c (+11 -5)
···
 /*
  * drivers/soc/tegra/pmc.c
  *
  * Copyright (c) 2010 Google, Inc
- * Copyright (c) 2018-2020, NVIDIA CORPORATION. All rights reserved.
+ * Copyright (c) 2018-2022, NVIDIA CORPORATION. All rights reserved.
  *
  * Author:
  *	Colin Cross <ccross@google.com>
···
 #include <dt-bindings/pinctrl/pinctrl-tegra-io-pad.h>
 #include <dt-bindings/gpio/tegra186-gpio.h>
 #include <dt-bindings/gpio/tegra194-gpio.h>
+#include <dt-bindings/gpio/tegra234-gpio.h>
 #include <dt-bindings/soc/tegra-pmc.h>
 
 #define PMC_CNTRL 0x0
···
 }
 
 static const struct tegra_pmc_soc tegra20_pmc_soc = {
-	.supports_core_domain = false,
+	.supports_core_domain = true,
 	.num_powergates = ARRAY_SIZE(tegra20_powergates),
 	.powergates = tegra20_powergates,
 	.num_cpu_powergates = 0,
···
 };
 
 static const struct tegra_pmc_soc tegra30_pmc_soc = {
-	.supports_core_domain = false,
+	.supports_core_domain = true,
 	.num_powergates = ARRAY_SIZE(tegra30_powergates),
 	.powergates = tegra30_powergates,
 	.num_cpu_powergates = ARRAY_SIZE(tegra30_cpu_powergates),
···
 	"FUSECRC",
 };
 
+static const struct tegra_wake_event tegra234_wake_events[] = {
+	TEGRA_WAKE_GPIO("power", 29, 1, TEGRA234_AON_GPIO(EE, 4)),
+	TEGRA_WAKE_IRQ("rtc", 73, 10),
+};
+
 static const struct tegra_pmc_soc tegra234_pmc_soc = {
 	.supports_core_domain = false,
 	.num_powergates = 0,
···
 	.num_reset_sources = ARRAY_SIZE(tegra234_reset_sources),
 	.reset_levels = tegra186_reset_levels,
 	.num_reset_levels = ARRAY_SIZE(tegra186_reset_levels),
-	.num_wake_events = 0,
-	.wake_events = NULL,
+	.num_wake_events = ARRAY_SIZE(tegra234_wake_events),
+	.wake_events = tegra234_wake_events,
 	.pmc_clks_data = NULL,
 	.num_pmc_clks = 0,
 	.has_blink_output = false,
drivers/soc/ti/k3-ringacc.c (+6 -9)
···
 				  sizeof(*ringacc->rings) *
 				  ringacc->num_rings,
 				  GFP_KERNEL);
-	ringacc->rings_inuse = devm_kcalloc(dev,
-					    BITS_TO_LONGS(ringacc->num_rings),
-					    sizeof(unsigned long), GFP_KERNEL);
-	ringacc->proxy_inuse = devm_kcalloc(dev,
-					    BITS_TO_LONGS(ringacc->num_proxies),
-					    sizeof(unsigned long), GFP_KERNEL);
+	ringacc->rings_inuse = devm_bitmap_zalloc(dev, ringacc->num_rings,
+						  GFP_KERNEL);
+	ringacc->proxy_inuse = devm_bitmap_zalloc(dev, ringacc->num_proxies,
+						  GFP_KERNEL);
 
 	if (!ringacc->rings || !ringacc->rings_inuse || !ringacc->proxy_inuse)
 		return -ENOMEM;
···
 				  sizeof(*ringacc->rings) *
 				  ringacc->num_rings * 2,
 				  GFP_KERNEL);
-	ringacc->rings_inuse = devm_kcalloc(dev,
-					    BITS_TO_LONGS(ringacc->num_rings),
-					    sizeof(unsigned long), GFP_KERNEL);
+	ringacc->rings_inuse = devm_bitmap_zalloc(dev, ringacc->num_rings,
+						  GFP_KERNEL);
 
 	if (!ringacc->rings || !ringacc->rings_inuse)
 		return ERR_PTR(-ENOMEM);
drivers/soc/ti/k3-socinfo.c (+1)
···
 	{ 0xBB6D, "J7200" },
 	{ 0xBB38, "AM64X" },
 	{ 0xBB75, "J721S2"},
+	{ 0xBB7E, "AM62X" },
 };
 
 static int
drivers/soc/ti/smartreflex.c (+7 -6)
···
 {
 	struct omap_sr *sr_info;
 	struct omap_sr_data *pdata = pdev->dev.platform_data;
-	struct resource *mem, *irq;
+	struct resource *mem;
 	struct dentry *nvalue_dir;
 	int i, ret = 0;
···
 	if (IS_ERR(sr_info->base))
 		return PTR_ERR(sr_info->base);
 
-	irq = platform_get_resource(pdev, IORESOURCE_IRQ, 0);
+	ret = platform_get_irq_optional(pdev, 0);
+	if (ret < 0 && ret != -ENXIO)
+		return dev_err_probe(&pdev->dev, ret, "failed to get IRQ resource\n");
+	if (ret > 0)
+		sr_info->irq = ret;
 
 	sr_info->fck = devm_clk_get(pdev->dev.parent, "fck");
 	if (IS_ERR(sr_info->fck))
···
 	sr_info->senp_avgweight = pdata->senp_avgweight;
 	sr_info->autocomp_active = false;
 	sr_info->ip_type = pdata->ip_type;
-
-	if (irq)
-		sr_info->irq = irq->start;
 
 	sr_set_clk_length(sr_info);
···
 	}
 
-	return ret;
+	return 0;
 
 err_debugfs:
 	debugfs_remove_recursive(sr_info->dbg_dir);
drivers/soc/ti/wkup_m3_ipc.c (+2 -2)
···
 		return PTR_ERR(m3_ipc->ipc_mem_base);
 
 	irq = platform_get_irq(pdev, 0);
-	if (!irq) {
+	if (irq < 0) {
 		dev_err(&pdev->dev, "no irq resource\n");
-		return -ENXIO;
+		return irq;
 	}
 
 	ret = devm_request_irq(dev, irq, wkup_m3_txev_handler,
drivers/tee/amdtee/call.c (+1 -1)
···
 }
 
 static DEFINE_MUTEX(ta_refcount_mutex);
-static struct list_head ta_list = LIST_HEAD_INIT(ta_list);
+static LIST_HEAD(ta_list);
 
 static u32 get_ta_refcount(u32 ta_handle)
 {
drivers/tee/amdtee/shm_pool.c (+16 -39)
···
 #include <linux/psp-sev.h>
 #include "amdtee_private.h"
 
-static int pool_op_alloc(struct tee_shm_pool_mgr *poolm, struct tee_shm *shm,
-			 size_t size)
+static int pool_op_alloc(struct tee_shm_pool *pool, struct tee_shm *shm,
+			 size_t size, size_t align)
 {
 	unsigned int order = get_order(size);
 	unsigned long va;
 	int rc;
 
+	/*
+	 * Ignore alignment since this is already going to be page aligned
+	 * and there's no need for any larger alignment.
+	 */
 	va = __get_free_pages(GFP_KERNEL | __GFP_ZERO, order);
 	if (!va)
 		return -ENOMEM;
···
 	return 0;
 }
 
-static void pool_op_free(struct tee_shm_pool_mgr *poolm, struct tee_shm *shm)
+static void pool_op_free(struct tee_shm_pool *pool, struct tee_shm *shm)
 {
 	/* Unmap the shared memory from TEE */
 	amdtee_unmap_shmem(shm);
···
 	shm->kaddr = NULL;
 }
 
-static void pool_op_destroy_poolmgr(struct tee_shm_pool_mgr *poolm)
+static void pool_op_destroy_pool(struct tee_shm_pool *pool)
 {
-	kfree(poolm);
+	kfree(pool);
 }
 
-static const struct tee_shm_pool_mgr_ops pool_ops = {
+static const struct tee_shm_pool_ops pool_ops = {
 	.alloc = pool_op_alloc,
 	.free = pool_op_free,
-	.destroy_poolmgr = pool_op_destroy_poolmgr,
+	.destroy_pool = pool_op_destroy_pool,
 };
-
-static struct tee_shm_pool_mgr *pool_mem_mgr_alloc(void)
-{
-	struct tee_shm_pool_mgr *mgr = kzalloc(sizeof(*mgr), GFP_KERNEL);
-
-	if (!mgr)
-		return ERR_PTR(-ENOMEM);
-
-	mgr->ops = &pool_ops;
-
-	return mgr;
-}
 
 struct tee_shm_pool *amdtee_config_shm(void)
 {
-	struct tee_shm_pool_mgr *priv_mgr;
-	struct tee_shm_pool_mgr *dmabuf_mgr;
-	void *rc;
+	struct tee_shm_pool *pool = kzalloc(sizeof(*pool), GFP_KERNEL);
 
-	rc = pool_mem_mgr_alloc();
-	if (IS_ERR(rc))
-		return rc;
-	priv_mgr = rc;
+	if (!pool)
+		return ERR_PTR(-ENOMEM);
 
-	rc = pool_mem_mgr_alloc();
-	if (IS_ERR(rc)) {
-		tee_shm_pool_mgr_destroy(priv_mgr);
-		return rc;
-	}
-	dmabuf_mgr = rc;
+	pool->ops = &pool_ops;
 
-	rc = tee_shm_pool_alloc(priv_mgr, dmabuf_mgr);
-	if (IS_ERR(rc)) {
-		tee_shm_pool_mgr_destroy(priv_mgr);
-		tee_shm_pool_mgr_destroy(dmabuf_mgr);
-	}
-
-	return rc;
+	return pool;
 }
drivers/tee/optee/Kconfig (-8)
···
 	help
 	  This implements the OP-TEE Trusted Execution Environment (TEE)
 	  driver.
-
-config OPTEE_SHM_NUM_PRIV_PAGES
-	int "Private Shared Memory Pages"
-	default 1
-	depends on OPTEE
-	help
-	  This sets the number of private shared memory pages to be
-	  used by OP-TEE TEE driver.
drivers/tee/optee/call.c (+1 -1)
···
 	if (optee->rpc_arg_count)
 		sz += OPTEE_MSG_GET_ARG_SIZE(optee->rpc_arg_count);
 
-	shm = tee_shm_alloc(ctx, sz, TEE_SHM_MAPPED | TEE_SHM_PRIV);
+	shm = tee_shm_alloc_priv_buf(ctx, sz);
 	if (IS_ERR(shm))
 		return shm;
drivers/tee/optee/core.c (+17 -4)
···
 #include <linux/workqueue.h>
 #include "optee_private.h"
 
-int optee_pool_op_alloc_helper(struct tee_shm_pool_mgr *poolm,
-			       struct tee_shm *shm, size_t size,
+int optee_pool_op_alloc_helper(struct tee_shm_pool *pool, struct tee_shm *shm,
+			       size_t size, size_t align,
 			       int (*shm_register)(struct tee_context *ctx,
 						   struct tee_shm *shm,
 						   struct page **pages,
···
 	struct page *page;
 	int rc = 0;
 
+	/*
+	 * Ignore alignment since this is already going to be page aligned
+	 * and there's no need for any larger alignment.
+	 */
 	page = alloc_pages(GFP_KERNEL | __GFP_ZERO, order);
 	if (!page)
 		return -ENOMEM;
···
 		for (i = 0; i < nr_pages; i++)
 			pages[i] = page + i;
 
-		shm->flags |= TEE_SHM_REGISTER;
 		rc = shm_register(shm->ctx, shm, pages, nr_pages,
 				  (unsigned long)shm->kaddr);
 		kfree(pages);
···
 	return 0;
 
 err:
-	__free_pages(page, order);
+	free_pages((unsigned long)shm->kaddr, order);
 	return rc;
+}
+
+void optee_pool_op_free_helper(struct tee_shm_pool *pool, struct tee_shm *shm,
+			       int (*shm_unregister)(struct tee_context *ctx,
+						     struct tee_shm *shm))
+{
+	if (shm_unregister)
+		shm_unregister(shm->ctx, shm);
+	free_pages((unsigned long)shm->kaddr, get_order(shm->size));
+	shm->kaddr = NULL;
 }
 
 static void optee_bus_scan(struct work_struct *work)
drivers/tee/optee/device.c (+2 -3)
···
 	if (rc < 0 || !shm_size)
 		goto out_sess;
 
-	device_shm = tee_shm_alloc(ctx, shm_size,
-				   TEE_SHM_MAPPED | TEE_SHM_DMA_BUF);
+	device_shm = tee_shm_alloc_kernel_buf(ctx, shm_size);
 	if (IS_ERR(device_shm)) {
-		pr_err("tee_shm_alloc failed\n");
+		pr_err("tee_shm_alloc_kernel_buf failed\n");
 		rc = PTR_ERR(device_shm);
 		goto out_sess;
 	}
drivers/tee/optee/ffa_abi.c (+17 -46)
···
  * The main function is optee_ffa_shm_pool_alloc_pages().
  */
 
-static int pool_ffa_op_alloc(struct tee_shm_pool_mgr *poolm,
-			     struct tee_shm *shm, size_t size)
+static int pool_ffa_op_alloc(struct tee_shm_pool *pool,
+			     struct tee_shm *shm, size_t size, size_t align)
 {
-	return optee_pool_op_alloc_helper(poolm, shm, size,
+	return optee_pool_op_alloc_helper(pool, shm, size, align,
 					  optee_ffa_shm_register);
 }
 
-static void pool_ffa_op_free(struct tee_shm_pool_mgr *poolm,
+static void pool_ffa_op_free(struct tee_shm_pool *pool,
 			     struct tee_shm *shm)
 {
-	optee_ffa_shm_unregister(shm->ctx, shm);
-	free_pages((unsigned long)shm->kaddr, get_order(shm->size));
-	shm->kaddr = NULL;
+	optee_pool_op_free_helper(pool, shm, optee_ffa_shm_unregister);
 }
 
-static void pool_ffa_op_destroy_poolmgr(struct tee_shm_pool_mgr *poolm)
+static void pool_ffa_op_destroy_pool(struct tee_shm_pool *pool)
 {
-	kfree(poolm);
+	kfree(pool);
 }
 
-static const struct tee_shm_pool_mgr_ops pool_ffa_ops = {
+static const struct tee_shm_pool_ops pool_ffa_ops = {
 	.alloc = pool_ffa_op_alloc,
 	.free = pool_ffa_op_free,
-	.destroy_poolmgr = pool_ffa_op_destroy_poolmgr,
+	.destroy_pool = pool_ffa_op_destroy_pool,
 };
 
 /**
···
  * This pool is used with OP-TEE over FF-A. In this case command buffers
  * and such are allocated from kernel's own memory.
  */
-static struct tee_shm_pool_mgr *optee_ffa_shm_pool_alloc_pages(void)
+static struct tee_shm_pool *optee_ffa_shm_pool_alloc_pages(void)
 {
-	struct tee_shm_pool_mgr *mgr = kzalloc(sizeof(*mgr), GFP_KERNEL);
+	struct tee_shm_pool *pool = kzalloc(sizeof(*pool), GFP_KERNEL);
 
-	if (!mgr)
+	if (!pool)
 		return ERR_PTR(-ENOMEM);
 
-	mgr->ops = &pool_ffa_ops;
+	pool->ops = &pool_ffa_ops;
 
-	return mgr;
+	return pool;
 }
 
 /*
···
 		shm = optee_rpc_cmd_alloc_suppl(ctx, arg->params[0].u.value.b);
 		break;
 	case OPTEE_RPC_SHM_TYPE_KERNEL:
-		shm = tee_shm_alloc(optee->ctx, arg->params[0].u.value.b,
-				    TEE_SHM_MAPPED | TEE_SHM_PRIV);
+		shm = tee_shm_alloc_priv_buf(optee->ctx,
+					     arg->params[0].u.value.b);
 		break;
 	default:
 		arg->ret = TEEC_ERROR_BAD_PARAMETERS;
···
 	return true;
 }
 
-static struct tee_shm_pool *optee_ffa_config_dyn_shm(void)
-{
-	struct tee_shm_pool_mgr *priv_mgr;
-	struct tee_shm_pool_mgr *dmabuf_mgr;
-	void *rc;
-
-	rc = optee_ffa_shm_pool_alloc_pages();
-	if (IS_ERR(rc))
-		return rc;
-	priv_mgr = rc;
-
-	rc = optee_ffa_shm_pool_alloc_pages();
-	if (IS_ERR(rc)) {
-		tee_shm_pool_mgr_destroy(priv_mgr);
-		return rc;
-	}
-	dmabuf_mgr = rc;
-
-	rc = tee_shm_pool_alloc(priv_mgr, dmabuf_mgr);
-	if (IS_ERR(rc)) {
-		tee_shm_pool_mgr_destroy(priv_mgr);
-		tee_shm_pool_mgr_destroy(dmabuf_mgr);
-	}
-
-	return rc;
-}
-
 static void optee_ffa_get_version(struct tee_device *teedev,
 				  struct tee_ioctl_version_data *vers)
 {
···
 	if (!optee)
 		return -ENOMEM;
 
-	pool = optee_ffa_config_dyn_shm();
+	pool = optee_ffa_shm_pool_alloc_pages();
 	if (IS_ERR(pool)) {
 		rc = PTR_ERR(pool);
 		goto err_free_optee;
drivers/tee/optee/optee_private.h (+5 -2)
···
 int optee_enumerate_devices(u32 func);
 void optee_unregister_devices(void);
 
-int optee_pool_op_alloc_helper(struct tee_shm_pool_mgr *poolm,
-			       struct tee_shm *shm, size_t size,
+int optee_pool_op_alloc_helper(struct tee_shm_pool *pool, struct tee_shm *shm,
+			       size_t size, size_t align,
 			       int (*shm_register)(struct tee_context *ctx,
 						   struct tee_shm *shm,
 						   struct page **pages,
 						   size_t num_pages,
 						   unsigned long start));
+void optee_pool_op_free_helper(struct tee_shm_pool *pool, struct tee_shm *shm,
+			       int (*shm_unregister)(struct tee_context *ctx,
+						     struct tee_shm *shm));
 
 void optee_remove_common(struct optee *optee);
drivers/tee/optee/smc_abi.c (+37 -88)
···
  * 6. Driver initialization.
  */
 
-#define OPTEE_SHM_NUM_PRIV_PAGES	CONFIG_OPTEE_SHM_NUM_PRIV_PAGES
+/*
+ * A typical OP-TEE private shm allocation is 224 bytes (argument struct
+ * with 6 parameters, needed for open session). So with an alignment of 512
+ * we'll waste a bit more than 50%. However, it's only expected that we'll
+ * have a handful of these structs allocated at a time. Most memory will
+ * be allocated aligned to the page size, So all in all this should scale
+ * up and down quite well.
+ */
+#define OPTEE_MIN_STATIC_POOL_ALIGN	9 /* 512 bytes aligned */
 
 /*
  * 1. Convert between struct tee_param and struct optee_msg_param
···
 	case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT:
 	case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_OUTPUT:
 	case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INOUT:
-		if (tee_shm_is_registered(p->u.memref.shm))
+		if (tee_shm_is_dynamic(p->u.memref.shm))
 			rc = to_msg_param_reg_mem(mp, p);
 		else
 			rc = to_msg_param_tmp_mem(mp, p);
···
  * The main function is optee_shm_pool_alloc_pages().
  */
 
-static int pool_op_alloc(struct tee_shm_pool_mgr *poolm,
-			 struct tee_shm *shm, size_t size)
+static int pool_op_alloc(struct tee_shm_pool *pool,
+			 struct tee_shm *shm, size_t size, size_t align)
 {
 	/*
 	 * Shared memory private to the OP-TEE driver doesn't need
 	 * to be registered with OP-TEE.
 	 */
 	if (shm->flags & TEE_SHM_PRIV)
-		return optee_pool_op_alloc_helper(poolm, shm, size, NULL);
+		return optee_pool_op_alloc_helper(pool, shm, size, align, NULL);
 
-	return optee_pool_op_alloc_helper(poolm, shm, size, optee_shm_register);
+	return optee_pool_op_alloc_helper(pool, shm, size, align,
+					  optee_shm_register);
 }
 
-static void pool_op_free(struct tee_shm_pool_mgr *poolm,
+static void pool_op_free(struct tee_shm_pool *pool,
 			 struct tee_shm *shm)
 {
 	if (!(shm->flags & TEE_SHM_PRIV))
-		optee_shm_unregister(shm->ctx, shm);
-
-	free_pages((unsigned long)shm->kaddr, get_order(shm->size));
-	shm->kaddr = NULL;
+		optee_pool_op_free_helper(pool, shm, optee_shm_unregister);
+	else
+		optee_pool_op_free_helper(pool, shm, NULL);
 }
 
-static void pool_op_destroy_poolmgr(struct tee_shm_pool_mgr *poolm)
+static void pool_op_destroy_pool(struct tee_shm_pool *pool)
 {
-	kfree(poolm);
+	kfree(pool);
 }
 
-static const struct tee_shm_pool_mgr_ops pool_ops = {
+static const struct tee_shm_pool_ops pool_ops = {
 	.alloc = pool_op_alloc,
 	.free = pool_op_free,
-	.destroy_poolmgr = pool_op_destroy_poolmgr,
+	.destroy_pool = pool_op_destroy_pool,
 };
 
 /**
···
  * This pool is used when OP-TEE supports dymanic SHM. In this case
  * command buffers and such are allocated from kernel's own memory.
  */
-static struct tee_shm_pool_mgr *optee_shm_pool_alloc_pages(void)
+static struct tee_shm_pool *optee_shm_pool_alloc_pages(void)
 {
-	struct tee_shm_pool_mgr *mgr = kzalloc(sizeof(*mgr), GFP_KERNEL);
+	struct tee_shm_pool *pool = kzalloc(sizeof(*pool), GFP_KERNEL);
 
-	if (!mgr)
+	if (!pool)
 		return ERR_PTR(-ENOMEM);
 
-	mgr->ops = &pool_ops;
+	pool->ops = &pool_ops;
 
-	return mgr;
+	return pool;
 }
 
 /*
···
 		shm = optee_rpc_cmd_alloc_suppl(ctx, sz);
 		break;
 	case OPTEE_RPC_SHM_TYPE_KERNEL:
-		shm = tee_shm_alloc(optee->ctx, sz,
-				    TEE_SHM_MAPPED | TEE_SHM_PRIV);
+		shm = tee_shm_alloc_priv_buf(optee->ctx, sz);
 		break;
 	default:
 		arg->ret = TEEC_ERROR_BAD_PARAMETERS;
···
 
 	sz = tee_shm_get_size(shm);
 
-	if (tee_shm_is_registered(shm)) {
+	if (tee_shm_is_dynamic(shm)) {
 		struct page **pages;
 		u64 *pages_list;
 		size_t page_num;
···
 
 	switch (OPTEE_SMC_RETURN_GET_RPC_FUNC(param->a0)) {
 	case OPTEE_SMC_RPC_FUNC_ALLOC:
-		shm = tee_shm_alloc(optee->ctx, param->a1,
-				    TEE_SHM_MAPPED | TEE_SHM_PRIV);
+		shm = tee_shm_alloc_priv_buf(optee->ctx, param->a1);
 		if (!IS_ERR(shm) && !tee_shm_get_pa(shm, 0, &pa)) {
 			reg_pair_from_64(&param->a1, &param->a2, pa);
 			reg_pair_from_64(&param->a4, &param->a5,
···
 	return true;
 }
 
-static struct tee_shm_pool *optee_config_dyn_shm(void)
-{
-	struct tee_shm_pool_mgr *priv_mgr;
-	struct tee_shm_pool_mgr *dmabuf_mgr;
-	void *rc;
-
-	rc = optee_shm_pool_alloc_pages();
-	if (IS_ERR(rc))
-		return rc;
-	priv_mgr = rc;
-
-	rc = optee_shm_pool_alloc_pages();
-	if (IS_ERR(rc)) {
-		tee_shm_pool_mgr_destroy(priv_mgr);
-		return rc;
-	}
-	dmabuf_mgr = rc;
-
-	rc = tee_shm_pool_alloc(priv_mgr, dmabuf_mgr);
-	if (IS_ERR(rc)) {
-		tee_shm_pool_mgr_destroy(priv_mgr);
-		tee_shm_pool_mgr_destroy(dmabuf_mgr);
-	}
-
-	return rc;
-}
-
 static struct tee_shm_pool *
 optee_config_shm_memremap(optee_invoke_fn *invoke_fn, void **memremaped_shm)
 {
···
 	phys_addr_t begin;
 	phys_addr_t end;
 	void *va;
-	struct tee_shm_pool_mgr *priv_mgr;
-	struct tee_shm_pool_mgr *dmabuf_mgr;
 	void *rc;
-	const int sz = OPTEE_SHM_NUM_PRIV_PAGES * PAGE_SIZE;
 
 	invoke_fn(OPTEE_SMC_GET_SHM_CONFIG, 0, 0, 0, 0, 0, 0, 0, &res.smccc);
 	if (res.result.status != OPTEE_SMC_RETURN_OK) {
···
 	paddr = begin;
 	size = end - begin;
 
-	if (size < 2 * OPTEE_SHM_NUM_PRIV_PAGES * PAGE_SIZE) {
-		pr_err("too small shared memory area\n");
-		return ERR_PTR(-EINVAL);
-	}
-
 	va = memremap(paddr, size, MEMREMAP_WB);
 	if (!va) {
 		pr_err("shared memory ioremap failed\n");
···
 	}
 	vaddr = (unsigned long)va;
 
-	rc = tee_shm_pool_mgr_alloc_res_mem(vaddr, paddr, sz,
-					    3 /* 8 bytes aligned */);
+	rc = tee_shm_pool_alloc_res_mem(vaddr, paddr, size,
+					OPTEE_MIN_STATIC_POOL_ALIGN);
 	if (IS_ERR(rc))
-		goto err_memunmap;
-	priv_mgr = rc;
+		memunmap(va);
+	else
+		*memremaped_shm = va;
 
-	vaddr += sz;
-	paddr += sz;
-	size -= sz;
-
-	rc = tee_shm_pool_mgr_alloc_res_mem(vaddr, paddr, size, PAGE_SHIFT);
-	if (IS_ERR(rc))
-		goto err_free_priv_mgr;
-	dmabuf_mgr = rc;
-
-	rc = tee_shm_pool_alloc(priv_mgr, dmabuf_mgr);
-	if (IS_ERR(rc))
-		goto err_free_dmabuf_mgr;
-
-	*memremaped_shm = va;
-
-	return rc;
-
-err_free_dmabuf_mgr:
-	tee_shm_pool_mgr_destroy(dmabuf_mgr);
-err_free_priv_mgr:
-	tee_shm_pool_mgr_destroy(priv_mgr);
-err_memunmap:
-	memunmap(va);
 	return rc;
 }
···
 	 * Try to use dynamic shared memory if possible
 	 */
 	if (sec_caps & OPTEE_SMC_SEC_CAP_DYNAMIC_SHM)
-		pool = optee_config_dyn_shm();
+		pool = optee_shm_pool_alloc_pages();
 
 	/*
 	 * If dynamic shared memory is not available or failed - try static one
drivers/tee/tee_core.c (+2 -3)
···
 	if (data.flags)
 		return -EINVAL;
 
-	shm = tee_shm_alloc(ctx, data.size, TEE_SHM_MAPPED | TEE_SHM_DMA_BUF);
+	shm = tee_shm_alloc_user_buf(ctx, data.size);
 	if (IS_ERR(shm))
 		return PTR_ERR(shm);
···
 	if (data.flags)
 		return -EINVAL;
 
-	shm = tee_shm_register(ctx, data.addr, data.length,
-			       TEE_SHM_DMA_BUF | TEE_SHM_USER_MAPPED);
+	shm = tee_shm_register_user_buf(ctx, data.addr, data.length);
 	if (IS_ERR(shm))
 		return PTR_ERR(shm);
drivers/tee/tee_private.h (+4 -11)
···
 #include <linux/mutex.h>
 #include <linux/types.h>
 
-/**
- * struct tee_shm_pool - shared memory pool
- * @private_mgr:	pool manager for shared memory only between kernel
- *			and secure world
- * @dma_buf_mgr:	pool manager for shared memory exported to user space
- */
-struct tee_shm_pool {
-	struct tee_shm_pool_mgr *private_mgr;
-	struct tee_shm_pool_mgr *dma_buf_mgr;
-};
-
 #define TEE_DEVICE_FLAG_REGISTERED	0x1
 #define TEE_MAX_DEV_NAME_LEN		32
···
 void teedev_ctx_get(struct tee_context *ctx);
 void teedev_ctx_put(struct tee_context *ctx);
+
+struct tee_shm *tee_shm_alloc_user_buf(struct tee_context *ctx, size_t size);
+struct tee_shm *tee_shm_register_user_buf(struct tee_context *ctx,
+					  unsigned long addr, size_t length);
 
 #endif /*TEE_PRIVATE_H*/
drivers/tee/tee_shm.c (+211 -111)
···
 #include <linux/uio.h>
 #include "tee_private.h"
 
+static void shm_put_kernel_pages(struct page **pages, size_t page_count)
+{
+	size_t n;
+
+	for (n = 0; n < page_count; n++)
+		put_page(pages[n]);
+}
+
+static int shm_get_kernel_pages(unsigned long start, size_t page_count,
+				struct page **pages)
+{
+	struct kvec *kiov;
+	size_t n;
+	int rc;
+
+	kiov = kcalloc(page_count, sizeof(*kiov), GFP_KERNEL);
+	if (!kiov)
+		return -ENOMEM;
+
+	for (n = 0; n < page_count; n++) {
+		kiov[n].iov_base = (void *)(start + n * PAGE_SIZE);
+		kiov[n].iov_len = PAGE_SIZE;
+	}
+
+	rc = get_kernel_pages(kiov, page_count, 0, pages);
+	kfree(kiov);
+
+	return rc;
+}
+
 static void release_registered_pages(struct tee_shm *shm)
 {
 	if (shm->pages) {
-		if (shm->flags & TEE_SHM_USER_MAPPED) {
+		if (shm->flags & TEE_SHM_USER_MAPPED)
 			unpin_user_pages(shm->pages, shm->num_pages);
-		} else {
-			size_t n;
-
-			for (n = 0; n < shm->num_pages; n++)
-				put_page(shm->pages[n]);
-		}
+		else
+			shm_put_kernel_pages(shm->pages, shm->num_pages);
 
 		kfree(shm->pages);
 	}
···
 static void tee_shm_release(struct tee_device *teedev, struct tee_shm *shm)
 {
 	if (shm->flags & TEE_SHM_POOL) {
-		struct tee_shm_pool_mgr *poolm;
-
-		if (shm->flags & TEE_SHM_DMA_BUF)
-			poolm = teedev->pool->dma_buf_mgr;
-		else
-			poolm = teedev->pool->private_mgr;
-
-		poolm->ops->free(poolm, shm);
-	} else if (shm->flags & TEE_SHM_REGISTER) {
+		teedev->pool->ops->free(teedev->pool, shm);
+	} else if (shm->flags & TEE_SHM_DYNAMIC) {
 		int rc = teedev->desc->ops->shm_unregister(shm->ctx, shm);
 
 		if (rc)
···
 	tee_device_put(teedev);
 }
 
-struct tee_shm *tee_shm_alloc(struct tee_context *ctx, size_t size, u32 flags)
+static struct tee_shm *shm_alloc_helper(struct tee_context *ctx, size_t size,
+					size_t align, u32 flags, int id)
 {
 	struct tee_device *teedev = ctx->teedev;
-	struct tee_shm_pool_mgr *poolm = NULL;
 	struct tee_shm *shm;
 	void *ret;
 	int rc;
-
-	if (!(flags & TEE_SHM_MAPPED)) {
-		dev_err(teedev->dev.parent,
-			"only mapped allocations supported\n");
-		return ERR_PTR(-EINVAL);
-	}
-
-	if ((flags & ~(TEE_SHM_MAPPED | TEE_SHM_DMA_BUF | TEE_SHM_PRIV))) {
-		dev_err(teedev->dev.parent, "invalid shm flags 0x%x", flags);
-		return ERR_PTR(-EINVAL);
-	}
 
 	if (!tee_device_get(teedev))
 		return ERR_PTR(-EINVAL);
···
 	}
 
 	refcount_set(&shm->refcount, 1);
-	shm->flags = flags | TEE_SHM_POOL;
-	shm->ctx = ctx;
-	if (flags & TEE_SHM_DMA_BUF)
-		poolm = teedev->pool->dma_buf_mgr;
-	else
-		poolm = teedev->pool->private_mgr;
+	shm->flags = flags;
+	shm->id = id;
 
-	rc = poolm->ops->alloc(poolm, shm, size);
+	/*
+	 * We're assigning this as it is needed if the shm is to be
+	 * registered. If this function returns OK then the caller expected
+	 * to call teedev_ctx_get() or clear shm->ctx in case it's not
+	 * needed any longer.
+	 */
+	shm->ctx = ctx;
+
+	rc = teedev->pool->ops->alloc(teedev->pool, shm, size, align);
 	if (rc) {
 		ret = ERR_PTR(rc);
 		goto err_kfree;
 	}
 
-	if (flags & TEE_SHM_DMA_BUF) {
-		mutex_lock(&teedev->mutex);
-		shm->id = idr_alloc(&teedev->idr, shm, 1, 0, GFP_KERNEL);
-		mutex_unlock(&teedev->mutex);
-		if (shm->id < 0) {
-			ret = ERR_PTR(shm->id);
-			goto err_pool_free;
-		}
-	}
-
 	teedev_ctx_get(ctx);
-
 	return shm;
-err_pool_free:
-	poolm->ops->free(poolm, shm);
 err_kfree:
 	kfree(shm);
 err_dev_put:
 	tee_device_put(teedev);
 	return ret;
 }
-EXPORT_SYMBOL_GPL(tee_shm_alloc);
+
+/**
+ * tee_shm_alloc_user_buf() - Allocate shared memory for user space
+ * @ctx:	Context that allocates the shared memory
+ * @size:	Requested size of shared memory
+ *
+ * Memory allocated as user space shared memory is automatically freed when
+ * the TEE file pointer is closed. The primary usage of this function is
+ * when the TEE driver doesn't support registering ordinary user space
+ * memory.
+ *
+ * @returns a pointer to 'struct tee_shm'
+ */
+struct tee_shm *tee_shm_alloc_user_buf(struct tee_context *ctx, size_t size)
+{
+	u32 flags = TEE_SHM_DYNAMIC | TEE_SHM_POOL;
+	struct tee_device *teedev = ctx->teedev;
+	struct tee_shm *shm;
+	void *ret;
+	int id;
+
+	mutex_lock(&teedev->mutex);
+	id = idr_alloc(&teedev->idr, NULL, 1, 0, GFP_KERNEL);
+	mutex_unlock(&teedev->mutex);
+	if (id < 0)
+		return ERR_PTR(id);
+
+	shm = shm_alloc_helper(ctx, size, PAGE_SIZE, flags, id);
+	if (IS_ERR(shm)) {
+		mutex_lock(&teedev->mutex);
+		idr_remove(&teedev->idr, id);
+		mutex_unlock(&teedev->mutex);
+		return shm;
+	}
+
+	mutex_lock(&teedev->mutex);
+	ret = idr_replace(&teedev->idr, shm, id);
+	mutex_unlock(&teedev->mutex);
+	if (IS_ERR(ret)) {
+		tee_shm_free(shm);
+		return ret;
+	}
+
+	return shm;
+}
 
 /**
  * tee_shm_alloc_kernel_buf() - Allocate shared memory for kernel buffer
···
  */
 struct tee_shm *tee_shm_alloc_kernel_buf(struct tee_context *ctx, size_t size)
 {
-	return tee_shm_alloc(ctx, size, TEE_SHM_MAPPED);
+	u32 flags = TEE_SHM_DYNAMIC | TEE_SHM_POOL;
+
+	return shm_alloc_helper(ctx, size, PAGE_SIZE, flags, -1);
 }
 EXPORT_SYMBOL_GPL(tee_shm_alloc_kernel_buf);
 
-struct tee_shm *tee_shm_register(struct tee_context *ctx, unsigned long addr,
-				 size_t length, u32 flags)
+/**
+ * tee_shm_alloc_priv_buf() - Allocate shared memory for a privately shared
+ *			      kernel buffer
+ * @ctx:	Context that allocates the shared memory
+ * @size:	Requested size of shared memory
+ *
+ * This function returns similar shared memory as
+ * tee_shm_alloc_kernel_buf(), but with the difference that the memory
+ * might not be registered in secure world in case the driver supports
+ * passing memory not registered in
advance. 160 + * 161 + * This function should normally only be used internally in the TEE 162 + * drivers. 163 + * 164 + * @returns a pointer to 'struct tee_shm' 165 + */ 166 + struct tee_shm *tee_shm_alloc_priv_buf(struct tee_context *ctx, size_t size) 167 + { 168 + u32 flags = TEE_SHM_PRIV | TEE_SHM_POOL; 169 + 170 + return shm_alloc_helper(ctx, size, sizeof(long) * 2, flags, -1); 171 + } 172 + EXPORT_SYMBOL_GPL(tee_shm_alloc_priv_buf); 173 + 174 + static struct tee_shm * 175 + register_shm_helper(struct tee_context *ctx, unsigned long addr, 176 + size_t length, u32 flags, int id) 193 177 { 194 178 struct tee_device *teedev = ctx->teedev; 195 - const u32 req_user_flags = TEE_SHM_DMA_BUF | TEE_SHM_USER_MAPPED; 196 - const u32 req_kernel_flags = TEE_SHM_DMA_BUF | TEE_SHM_KERNEL_MAPPED; 197 179 struct tee_shm *shm; 180 + unsigned long start; 181 + size_t num_pages; 198 182 void *ret; 199 183 int rc; 200 - int num_pages; 201 - unsigned long start; 202 - 203 - if (flags != req_user_flags && flags != req_kernel_flags) 204 - return ERR_PTR(-ENOTSUPP); 205 184 206 185 if (!tee_device_get(teedev)) 207 186 return ERR_PTR(-EINVAL); 208 187 209 188 if (!teedev->desc->ops->shm_register || 210 189 !teedev->desc->ops->shm_unregister) { 211 - tee_device_put(teedev); 212 - return ERR_PTR(-ENOTSUPP); 190 + ret = ERR_PTR(-ENOTSUPP); 191 + goto err_dev_put; 213 192 } 214 193 215 194 teedev_ctx_get(ctx); ··· 239 174 shm = kzalloc(sizeof(*shm), GFP_KERNEL); 240 175 if (!shm) { 241 176 ret = ERR_PTR(-ENOMEM); 242 - goto err; 177 + goto err_ctx_put; 243 178 } 244 179 245 180 refcount_set(&shm->refcount, 1); 246 - shm->flags = flags | TEE_SHM_REGISTER; 181 + shm->flags = flags; 247 182 shm->ctx = ctx; 248 - shm->id = -1; 183 + shm->id = id; 249 184 addr = untagged_addr(addr); 250 185 start = rounddown(addr, PAGE_SIZE); 251 186 shm->offset = addr - start; ··· 254 189 shm->pages = kcalloc(num_pages, sizeof(*shm->pages), GFP_KERNEL); 255 190 if (!shm->pages) { 256 191 ret = 
ERR_PTR(-ENOMEM); 257 - goto err; 192 + goto err_free_shm; 258 193 } 259 194 260 - if (flags & TEE_SHM_USER_MAPPED) { 195 + if (flags & TEE_SHM_USER_MAPPED) 261 196 rc = pin_user_pages_fast(start, num_pages, FOLL_WRITE, 262 197 shm->pages); 263 - } else { 264 - struct kvec *kiov; 265 - int i; 266 - 267 - kiov = kcalloc(num_pages, sizeof(*kiov), GFP_KERNEL); 268 - if (!kiov) { 269 - ret = ERR_PTR(-ENOMEM); 270 - goto err; 271 - } 272 - 273 - for (i = 0; i < num_pages; i++) { 274 - kiov[i].iov_base = (void *)(start + i * PAGE_SIZE); 275 - kiov[i].iov_len = PAGE_SIZE; 276 - } 277 - 278 - rc = get_kernel_pages(kiov, num_pages, 0, shm->pages); 279 - kfree(kiov); 280 - } 198 + else 199 + rc = shm_get_kernel_pages(start, num_pages, shm->pages); 281 200 if (rc > 0) 282 201 shm->num_pages = rc; 283 202 if (rc != num_pages) { 284 203 if (rc >= 0) 285 204 rc = -ENOMEM; 286 205 ret = ERR_PTR(rc); 287 - goto err; 288 - } 289 - 290 - mutex_lock(&teedev->mutex); 291 - shm->id = idr_alloc(&teedev->idr, shm, 1, 0, GFP_KERNEL); 292 - mutex_unlock(&teedev->mutex); 293 - 294 - if (shm->id < 0) { 295 - ret = ERR_PTR(shm->id); 296 - goto err; 206 + goto err_put_shm_pages; 297 207 } 298 208 299 209 rc = teedev->desc->ops->shm_register(ctx, shm, shm->pages, 300 210 shm->num_pages, start); 301 211 if (rc) { 302 212 ret = ERR_PTR(rc); 303 - goto err; 213 + goto err_put_shm_pages; 304 214 } 305 215 306 216 return shm; 307 - err: 308 - if (shm) { 309 - if (shm->id >= 0) { 310 - mutex_lock(&teedev->mutex); 311 - idr_remove(&teedev->idr, shm->id); 312 - mutex_unlock(&teedev->mutex); 313 - } 314 - release_registered_pages(shm); 315 - } 217 + err_put_shm_pages: 218 + if (flags & TEE_SHM_USER_MAPPED) 219 + unpin_user_pages(shm->pages, shm->num_pages); 220 + else 221 + shm_put_kernel_pages(shm->pages, shm->num_pages); 222 + kfree(shm->pages); 223 + err_free_shm: 316 224 kfree(shm); 225 + err_ctx_put: 317 226 teedev_ctx_put(ctx); 227 + err_dev_put: 318 228 tee_device_put(teedev); 319 229 return ret; 
320 230 } 321 - EXPORT_SYMBOL_GPL(tee_shm_register); 231 + 232 + /** 233 + * tee_shm_register_user_buf() - Register a userspace shared memory buffer 234 + * @ctx: Context that registers the shared memory 235 + * @addr: The userspace address of the shared buffer 236 + * @length: Length of the shared buffer 237 + * 238 + * @returns a pointer to 'struct tee_shm' 239 + */ 240 + struct tee_shm *tee_shm_register_user_buf(struct tee_context *ctx, 241 + unsigned long addr, size_t length) 242 + { 243 + u32 flags = TEE_SHM_USER_MAPPED | TEE_SHM_DYNAMIC; 244 + struct tee_device *teedev = ctx->teedev; 245 + struct tee_shm *shm; 246 + void *ret; 247 + int id; 248 + 249 + mutex_lock(&teedev->mutex); 250 + id = idr_alloc(&teedev->idr, NULL, 1, 0, GFP_KERNEL); 251 + mutex_unlock(&teedev->mutex); 252 + if (id < 0) 253 + return ERR_PTR(id); 254 + 255 + shm = register_shm_helper(ctx, addr, length, flags, id); 256 + if (IS_ERR(shm)) { 257 + mutex_lock(&teedev->mutex); 258 + idr_remove(&teedev->idr, id); 259 + mutex_unlock(&teedev->mutex); 260 + return shm; 261 + } 262 + 263 + mutex_lock(&teedev->mutex); 264 + ret = idr_replace(&teedev->idr, shm, id); 265 + mutex_unlock(&teedev->mutex); 266 + if (IS_ERR(ret)) { 267 + tee_shm_free(shm); 268 + return ret; 269 + } 270 + 271 + return shm; 272 + } 273 + 274 + /** 275 + * tee_shm_register_kernel_buf() - Register kernel memory to be shared with 276 + * secure world 277 + * @ctx: Context that registers the shared memory 278 + * @addr: The buffer 279 + * @length: Length of the buffer 280 + * 281 + * @returns a pointer to 'struct tee_shm' 282 + */ 283 + 284 + struct tee_shm *tee_shm_register_kernel_buf(struct tee_context *ctx, 285 + void *addr, size_t length) 286 + { 287 + u32 flags = TEE_SHM_DYNAMIC; 288 + 289 + return register_shm_helper(ctx, (unsigned long)addr, length, flags, -1); 290 + } 291 + EXPORT_SYMBOL_GPL(tee_shm_register_kernel_buf); 322 292 323 293 static int tee_shm_fop_release(struct inode *inode, struct file *filp) 324 294 { ··· 
393 293 { 394 294 int fd; 395 295 396 - if (!(shm->flags & TEE_SHM_DMA_BUF)) 296 + if (shm->id < 0) 397 297 return -EINVAL; 398 298 399 299 /* matched by tee_shm_put() in tee_shm_op_release() */ ··· 423 323 */ 424 324 int tee_shm_va2pa(struct tee_shm *shm, void *va, phys_addr_t *pa) 425 325 { 426 - if (!(shm->flags & TEE_SHM_MAPPED)) 326 + if (!shm->kaddr) 427 327 return -EINVAL; 428 328 /* Check that we're in the range of the shm */ 429 329 if ((char *)va < (char *)shm->kaddr) ··· 445 345 */ 446 346 int tee_shm_pa2va(struct tee_shm *shm, phys_addr_t pa, void **va) 447 347 { 448 - if (!(shm->flags & TEE_SHM_MAPPED)) 348 + if (!shm->kaddr) 449 349 return -EINVAL; 450 350 /* Check that we're in the range of the shm */ 451 351 if (pa < shm->paddr) ··· 473 373 */ 474 374 void *tee_shm_get_va(struct tee_shm *shm, size_t offs) 475 375 { 476 - if (!(shm->flags & TEE_SHM_MAPPED)) 376 + if (!shm->kaddr) 477 377 return ERR_PTR(-EINVAL); 478 378 if (offs >= shm->size) 479 379 return ERR_PTR(-EINVAL); ··· 548 448 * the refcount_inc() in tee_shm_get_from_id() never starts 549 449 * from 0. 550 450 */ 551 - if (shm->flags & TEE_SHM_DMA_BUF) 451 + if (shm->id >= 0) 552 452 idr_remove(&teedev->idr, shm->id); 553 453 do_release = true; 554 454 }
+41 -135
drivers/tee/tee_shm_pool.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0-only 2 2 /* 3 - * Copyright (c) 2015, Linaro Limited 3 + * Copyright (c) 2015, 2017, 2022 Linaro Limited 4 4 */ 5 5 #include <linux/device.h> 6 6 #include <linux/dma-buf.h> ··· 9 9 #include <linux/tee_drv.h> 10 10 #include "tee_private.h" 11 11 12 - static int pool_op_gen_alloc(struct tee_shm_pool_mgr *poolm, 13 - struct tee_shm *shm, size_t size) 12 + static int pool_op_gen_alloc(struct tee_shm_pool *pool, struct tee_shm *shm, 13 + size_t size, size_t align) 14 14 { 15 15 unsigned long va; 16 - struct gen_pool *genpool = poolm->private_data; 17 - size_t s = roundup(size, 1 << genpool->min_alloc_order); 16 + struct gen_pool *genpool = pool->private_data; 17 + size_t a = max_t(size_t, align, BIT(genpool->min_alloc_order)); 18 + struct genpool_data_align data = { .align = a }; 19 + size_t s = roundup(size, a); 18 20 19 - va = gen_pool_alloc(genpool, s); 21 + va = gen_pool_alloc_algo(genpool, s, gen_pool_first_fit_align, &data); 20 22 if (!va) 21 23 return -ENOMEM; 22 24 ··· 26 24 shm->kaddr = (void *)va; 27 25 shm->paddr = gen_pool_virt_to_phys(genpool, va); 28 26 shm->size = s; 27 + /* 28 + * This is from a static shared memory pool so no need to register 29 + * each chunk, and no need to unregister later either. 
30 + */ 31 + shm->flags &= ~TEE_SHM_DYNAMIC; 29 32 return 0; 30 33 } 31 34 32 - static void pool_op_gen_free(struct tee_shm_pool_mgr *poolm, 33 - struct tee_shm *shm) 35 + static void pool_op_gen_free(struct tee_shm_pool *pool, struct tee_shm *shm) 34 36 { 35 - gen_pool_free(poolm->private_data, (unsigned long)shm->kaddr, 37 + gen_pool_free(pool->private_data, (unsigned long)shm->kaddr, 36 38 shm->size); 37 39 shm->kaddr = NULL; 38 40 } 39 41 40 - static void pool_op_gen_destroy_poolmgr(struct tee_shm_pool_mgr *poolm) 42 + static void pool_op_gen_destroy_pool(struct tee_shm_pool *pool) 41 43 { 42 - gen_pool_destroy(poolm->private_data); 43 - kfree(poolm); 44 + gen_pool_destroy(pool->private_data); 45 + kfree(pool); 44 46 } 45 47 46 - static const struct tee_shm_pool_mgr_ops pool_ops_generic = { 48 + static const struct tee_shm_pool_ops pool_ops_generic = { 47 49 .alloc = pool_op_gen_alloc, 48 50 .free = pool_op_gen_free, 49 - .destroy_poolmgr = pool_op_gen_destroy_poolmgr, 51 + .destroy_pool = pool_op_gen_destroy_pool, 50 52 }; 51 53 52 - /** 53 - * tee_shm_pool_alloc_res_mem() - Create a shared memory pool from reserved 54 - * memory range 55 - * @priv_info: Information for driver private shared memory pool 56 - * @dmabuf_info: Information for dma-buf shared memory pool 57 - * 58 - * Start and end of pools will must be page aligned. 59 - * 60 - * Allocation with the flag TEE_SHM_DMA_BUF set will use the range supplied 61 - * in @dmabuf, others will use the range provided by @priv. 62 - * 63 - * @returns pointer to a 'struct tee_shm_pool' or an ERR_PTR on failure. 
64 - */ 65 - struct tee_shm_pool * 66 - tee_shm_pool_alloc_res_mem(struct tee_shm_pool_mem_info *priv_info, 67 - struct tee_shm_pool_mem_info *dmabuf_info) 68 - { 69 - struct tee_shm_pool_mgr *priv_mgr; 70 - struct tee_shm_pool_mgr *dmabuf_mgr; 71 - void *rc; 72 - 73 - /* 74 - * Create the pool for driver private shared memory 75 - */ 76 - rc = tee_shm_pool_mgr_alloc_res_mem(priv_info->vaddr, priv_info->paddr, 77 - priv_info->size, 78 - 3 /* 8 byte aligned */); 79 - if (IS_ERR(rc)) 80 - return rc; 81 - priv_mgr = rc; 82 - 83 - /* 84 - * Create the pool for dma_buf shared memory 85 - */ 86 - rc = tee_shm_pool_mgr_alloc_res_mem(dmabuf_info->vaddr, 87 - dmabuf_info->paddr, 88 - dmabuf_info->size, PAGE_SHIFT); 89 - if (IS_ERR(rc)) 90 - goto err_free_priv_mgr; 91 - dmabuf_mgr = rc; 92 - 93 - rc = tee_shm_pool_alloc(priv_mgr, dmabuf_mgr); 94 - if (IS_ERR(rc)) 95 - goto err_free_dmabuf_mgr; 96 - 97 - return rc; 98 - 99 - err_free_dmabuf_mgr: 100 - tee_shm_pool_mgr_destroy(dmabuf_mgr); 101 - err_free_priv_mgr: 102 - tee_shm_pool_mgr_destroy(priv_mgr); 103 - 104 - return rc; 105 - } 106 - EXPORT_SYMBOL_GPL(tee_shm_pool_alloc_res_mem); 107 - 108 - struct tee_shm_pool_mgr *tee_shm_pool_mgr_alloc_res_mem(unsigned long vaddr, 109 - phys_addr_t paddr, 110 - size_t size, 111 - int min_alloc_order) 54 + struct tee_shm_pool *tee_shm_pool_alloc_res_mem(unsigned long vaddr, 55 + phys_addr_t paddr, size_t size, 56 + int min_alloc_order) 112 57 { 113 58 const size_t page_mask = PAGE_SIZE - 1; 114 - struct tee_shm_pool_mgr *mgr; 59 + struct tee_shm_pool *pool; 115 60 int rc; 116 61 117 62 /* Start and end must be page aligned */ 118 63 if (vaddr & page_mask || paddr & page_mask || size & page_mask) 119 64 return ERR_PTR(-EINVAL); 120 65 121 - mgr = kzalloc(sizeof(*mgr), GFP_KERNEL); 122 - if (!mgr) 123 - return ERR_PTR(-ENOMEM); 124 - 125 - mgr->private_data = gen_pool_create(min_alloc_order, -1); 126 - if (!mgr->private_data) { 127 - rc = -ENOMEM; 128 - goto err; 129 - } 130 - 131 - 
gen_pool_set_algo(mgr->private_data, gen_pool_best_fit, NULL); 132 - rc = gen_pool_add_virt(mgr->private_data, vaddr, paddr, size, -1); 133 - if (rc) { 134 - gen_pool_destroy(mgr->private_data); 135 - goto err; 136 - } 137 - 138 - mgr->ops = &pool_ops_generic; 139 - 140 - return mgr; 141 - err: 142 - kfree(mgr); 143 - 144 - return ERR_PTR(rc); 145 - } 146 - EXPORT_SYMBOL_GPL(tee_shm_pool_mgr_alloc_res_mem); 147 - 148 - static bool check_mgr_ops(struct tee_shm_pool_mgr *mgr) 149 - { 150 - return mgr && mgr->ops && mgr->ops->alloc && mgr->ops->free && 151 - mgr->ops->destroy_poolmgr; 152 - } 153 - 154 - struct tee_shm_pool *tee_shm_pool_alloc(struct tee_shm_pool_mgr *priv_mgr, 155 - struct tee_shm_pool_mgr *dmabuf_mgr) 156 - { 157 - struct tee_shm_pool *pool; 158 - 159 - if (!check_mgr_ops(priv_mgr) || !check_mgr_ops(dmabuf_mgr)) 160 - return ERR_PTR(-EINVAL); 161 - 162 66 pool = kzalloc(sizeof(*pool), GFP_KERNEL); 163 67 if (!pool) 164 68 return ERR_PTR(-ENOMEM); 165 69 166 - pool->private_mgr = priv_mgr; 167 - pool->dma_buf_mgr = dmabuf_mgr; 70 + pool->private_data = gen_pool_create(min_alloc_order, -1); 71 + if (!pool->private_data) { 72 + rc = -ENOMEM; 73 + goto err; 74 + } 75 + 76 + rc = gen_pool_add_virt(pool->private_data, vaddr, paddr, size, -1); 77 + if (rc) { 78 + gen_pool_destroy(pool->private_data); 79 + goto err; 80 + } 81 + 82 + pool->ops = &pool_ops_generic; 168 83 169 84 return pool; 170 - } 171 - EXPORT_SYMBOL_GPL(tee_shm_pool_alloc); 172 - 173 - /** 174 - * tee_shm_pool_free() - Free a shared memory pool 175 - * @pool: The shared memory pool to free 176 - * 177 - * There must be no remaining shared memory allocated from this pool when 178 - * this function is called. 
179 - */ 180 - void tee_shm_pool_free(struct tee_shm_pool *pool) 181 - { 182 - if (pool->private_mgr) 183 - tee_shm_pool_mgr_destroy(pool->private_mgr); 184 - if (pool->dma_buf_mgr) 185 - tee_shm_pool_mgr_destroy(pool->dma_buf_mgr); 85 + err: 186 86 kfree(pool); 87 + 88 + return ERR_PTR(rc); 187 89 } 188 - EXPORT_SYMBOL_GPL(tee_shm_pool_free); 90 + EXPORT_SYMBOL_GPL(tee_shm_pool_alloc_res_mem);
+150
include/dt-bindings/clock/fsd-clk.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + /* 3 + * Copyright (c) 2017 - 2022: Samsung Electronics Co., Ltd. 4 + * https://www.samsung.com 5 + * Copyright (c) 2017-2022 Tesla, Inc. 6 + * https://www.tesla.com 7 + * 8 + * The constants defined in this header are being used in dts 9 + * and fsd platform driver. 10 + */ 11 + 12 + #ifndef _DT_BINDINGS_CLOCK_FSD_H 13 + #define _DT_BINDINGS_CLOCK_FSD_H 14 + 15 + /* CMU */ 16 + #define DOUT_CMU_PLL_SHARED0_DIV4 1 17 + #define DOUT_CMU_PERIC_SHARED1DIV36 2 18 + #define DOUT_CMU_PERIC_SHARED0DIV3_TBUCLK 3 19 + #define DOUT_CMU_PERIC_SHARED0DIV20 4 20 + #define DOUT_CMU_PERIC_SHARED1DIV4_DMACLK 5 21 + #define DOUT_CMU_PLL_SHARED0_DIV6 6 22 + #define DOUT_CMU_FSYS0_SHARED1DIV4 7 23 + #define DOUT_CMU_FSYS0_SHARED0DIV4 8 24 + #define DOUT_CMU_FSYS1_SHARED0DIV8 9 25 + #define DOUT_CMU_FSYS1_SHARED0DIV4 10 26 + #define CMU_CPUCL_SWITCH_GATE 11 27 + #define DOUT_CMU_IMEM_TCUCLK 12 28 + #define DOUT_CMU_IMEM_ACLK 13 29 + #define DOUT_CMU_IMEM_DMACLK 14 30 + #define GAT_CMU_FSYS0_SHARED0DIV4 15 31 + #define CMU_NR_CLK 16 32 + 33 + /* PERIC */ 34 + #define PERIC_SCLK_UART0 1 35 + #define PERIC_PCLK_UART0 2 36 + #define PERIC_SCLK_UART1 3 37 + #define PERIC_PCLK_UART1 4 38 + #define PERIC_DMA0_IPCLKPORT_ACLK 5 39 + #define PERIC_DMA1_IPCLKPORT_ACLK 6 40 + #define PERIC_PWM0_IPCLKPORT_I_PCLK_S0 7 41 + #define PERIC_PWM1_IPCLKPORT_I_PCLK_S0 8 42 + #define PERIC_PCLK_SPI0 9 43 + #define PERIC_SCLK_SPI0 10 44 + #define PERIC_PCLK_SPI1 11 45 + #define PERIC_SCLK_SPI1 12 46 + #define PERIC_PCLK_SPI2 13 47 + #define PERIC_SCLK_SPI2 14 48 + #define PERIC_PCLK_TDM0 15 49 + #define PERIC_PCLK_HSI2C0 16 50 + #define PERIC_PCLK_HSI2C1 17 51 + #define PERIC_PCLK_HSI2C2 18 52 + #define PERIC_PCLK_HSI2C3 19 53 + #define PERIC_PCLK_HSI2C4 20 54 + #define PERIC_PCLK_HSI2C5 21 55 + #define PERIC_PCLK_HSI2C6 22 56 + #define PERIC_PCLK_HSI2C7 23 57 + #define PERIC_MCAN0_IPCLKPORT_CCLK 24 58 + #define PERIC_MCAN0_IPCLKPORT_PCLK 25 59 + #define 
PERIC_MCAN1_IPCLKPORT_CCLK 26 60 + #define PERIC_MCAN1_IPCLKPORT_PCLK 27 61 + #define PERIC_MCAN2_IPCLKPORT_CCLK 28 62 + #define PERIC_MCAN2_IPCLKPORT_PCLK 29 63 + #define PERIC_MCAN3_IPCLKPORT_CCLK 30 64 + #define PERIC_MCAN3_IPCLKPORT_PCLK 31 65 + #define PERIC_PCLK_ADCIF 32 66 + #define PERIC_EQOS_TOP_IPCLKPORT_CLK_PTP_REF_I 33 67 + #define PERIC_EQOS_TOP_IPCLKPORT_ACLK_I 34 68 + #define PERIC_EQOS_TOP_IPCLKPORT_HCLK_I 35 69 + #define PERIC_EQOS_TOP_IPCLKPORT_RGMII_CLK_I 36 70 + #define PERIC_EQOS_TOP_IPCLKPORT_CLK_RX_I 37 71 + #define PERIC_BUS_D_PERIC_IPCLKPORT_EQOSCLK 38 72 + #define PERIC_BUS_P_PERIC_IPCLKPORT_EQOSCLK 39 73 + #define PERIC_HCLK_TDM0 40 74 + #define PERIC_PCLK_TDM1 41 75 + #define PERIC_HCLK_TDM1 42 76 + #define PERIC_EQOS_PHYRXCLK_MUX 43 77 + #define PERIC_EQOS_PHYRXCLK 44 78 + #define PERIC_DOUT_RGMII_CLK 45 79 + #define PERIC_NR_CLK 46 80 + 81 + /* FSYS0 */ 82 + #define UFS0_MPHY_REFCLK_IXTAL24 1 83 + #define UFS0_MPHY_REFCLK_IXTAL26 2 84 + #define UFS1_MPHY_REFCLK_IXTAL24 3 85 + #define UFS1_MPHY_REFCLK_IXTAL26 4 86 + #define UFS0_TOP0_HCLK_BUS 5 87 + #define UFS0_TOP0_ACLK 6 88 + #define UFS0_TOP0_CLK_UNIPRO 7 89 + #define UFS0_TOP0_FMP_CLK 8 90 + #define UFS1_TOP1_HCLK_BUS 9 91 + #define UFS1_TOP1_ACLK 10 92 + #define UFS1_TOP1_CLK_UNIPRO 11 93 + #define UFS1_TOP1_FMP_CLK 12 94 + #define PCIE_SUBCTRL_INST0_DBI_ACLK_SOC 13 95 + #define PCIE_SUBCTRL_INST0_AUX_CLK_SOC 14 96 + #define PCIE_SUBCTRL_INST0_MSTR_ACLK_SOC 15 97 + #define PCIE_SUBCTRL_INST0_SLV_ACLK_SOC 16 98 + #define FSYS0_EQOS_TOP0_IPCLKPORT_CLK_PTP_REF_I 17 99 + #define FSYS0_EQOS_TOP0_IPCLKPORT_ACLK_I 18 100 + #define FSYS0_EQOS_TOP0_IPCLKPORT_HCLK_I 19 101 + #define FSYS0_EQOS_TOP0_IPCLKPORT_RGMII_CLK_I 20 102 + #define FSYS0_EQOS_TOP0_IPCLKPORT_CLK_RX_I 21 103 + #define FSYS0_DOUT_FSYS0_PERIBUS_GRP 22 104 + #define FSYS0_NR_CLK 23 105 + 106 + /* FSYS1 */ 107 + #define PCIE_LINK0_IPCLKPORT_DBI_ACLK 1 108 + #define PCIE_LINK0_IPCLKPORT_AUX_ACLK 2 109 + #define 
PCIE_LINK0_IPCLKPORT_MSTR_ACLK 3 110 + #define PCIE_LINK0_IPCLKPORT_SLV_ACLK 4 111 + #define PCIE_LINK1_IPCLKPORT_DBI_ACLK 5 112 + #define PCIE_LINK1_IPCLKPORT_AUX_ACLK 6 113 + #define PCIE_LINK1_IPCLKPORT_MSTR_ACLK 7 114 + #define PCIE_LINK1_IPCLKPORT_SLV_ACLK 8 115 + #define FSYS1_NR_CLK 9 116 + 117 + /* IMEM */ 118 + #define IMEM_DMA0_IPCLKPORT_ACLK 1 119 + #define IMEM_DMA1_IPCLKPORT_ACLK 2 120 + #define IMEM_WDT0_IPCLKPORT_PCLK 3 121 + #define IMEM_WDT1_IPCLKPORT_PCLK 4 122 + #define IMEM_WDT2_IPCLKPORT_PCLK 5 123 + #define IMEM_MCT_PCLK 6 124 + #define IMEM_TMU_CPU0_IPCLKPORT_I_CLK_TS 7 125 + #define IMEM_TMU_CPU2_IPCLKPORT_I_CLK_TS 8 126 + #define IMEM_TMU_TOP_IPCLKPORT_I_CLK_TS 9 127 + #define IMEM_TMU_GPU_IPCLKPORT_I_CLK_TS 10 128 + #define IMEM_TMU_GT_IPCLKPORT_I_CLK_TS 11 129 + #define IMEM_NR_CLK 12 130 + 131 + /* MFC */ 132 + #define MFC_MFC_IPCLKPORT_ACLK 1 133 + #define MFC_NR_CLK 2 134 + 135 + /* CAM_CSI */ 136 + #define CAM_CSI0_0_IPCLKPORT_I_ACLK 1 137 + #define CAM_CSI0_1_IPCLKPORT_I_ACLK 2 138 + #define CAM_CSI0_2_IPCLKPORT_I_ACLK 3 139 + #define CAM_CSI0_3_IPCLKPORT_I_ACLK 4 140 + #define CAM_CSI1_0_IPCLKPORT_I_ACLK 5 141 + #define CAM_CSI1_1_IPCLKPORT_I_ACLK 6 142 + #define CAM_CSI1_2_IPCLKPORT_I_ACLK 7 143 + #define CAM_CSI1_3_IPCLKPORT_I_ACLK 8 144 + #define CAM_CSI2_0_IPCLKPORT_I_ACLK 9 145 + #define CAM_CSI2_1_IPCLKPORT_I_ACLK 10 146 + #define CAM_CSI2_2_IPCLKPORT_I_ACLK 11 147 + #define CAM_CSI2_3_IPCLKPORT_I_ACLK 12 148 + #define CAM_CSI_NR_CLK 13 149 + 150 + #endif /*_DT_BINDINGS_CLOCK_FSD_H */
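Constants like the ones defined above are meant to be shared between the FSD device tree and the clock driver. A hypothetical consumer node might look like the fragment below; the node name, unit address, phandle and `clock-names` strings are illustrative only and not taken from the actual fsd.dtsi.

```dts
/* Hypothetical DTS consumer of the PERIC clock ids defined above. */
serial@14180000 {
	clocks = <&clock_peric PERIC_PCLK_UART0>,
		 <&clock_peric PERIC_SCLK_UART0>;
	clock-names = "uart", "clk_uart_baud0";
};
```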
+3
include/dt-bindings/power/imx8mq-power.h
··· 18 18 #define IMX8M_POWER_DOMAIN_MIPI_CSI2 9 19 19 #define IMX8M_POWER_DOMAIN_PCIE2 10 20 20 21 + #define IMX8MQ_VPUBLK_PD_G1 0 22 + #define IMX8MQ_VPUBLK_PD_G2 1 23 + 21 24 #endif
+19
include/dt-bindings/power/meson-s4-power.h
··· 1 + /* SPDX-License-Identifier: (GPL-2.0+ or MIT) */ 2 + /* 3 + * Copyright (c) 2021 Amlogic, Inc. 4 + * Author: Shunzhou Jiang <shunzhou.jiang@amlogic.com> 5 + */ 6 + 7 + #ifndef _DT_BINDINGS_MESON_S4_POWER_H 8 + #define _DT_BINDINGS_MESON_S4_POWER_H 9 + 10 + #define PWRC_S4_DOS_HEVC_ID 0 11 + #define PWRC_S4_DOS_VDEC_ID 1 12 + #define PWRC_S4_VPU_HDMI_ID 2 13 + #define PWRC_S4_USB_COMB_ID 3 14 + #define PWRC_S4_GE2D_ID 4 15 + #define PWRC_S4_ETH_ID 5 16 + #define PWRC_S4_DEMOD_ID 6 17 + #define PWRC_S4_AUDIO_ID 7 18 + 19 + #endif
+32
include/dt-bindings/power/mt8186-power.h
··· 1 + /* SPDX-License-Identifier: (GPL-2.0-only OR BSD-3-Clause) */ 2 + /* 3 + * Copyright (c) 2022 MediaTek Inc. 4 + * Author: Chun-Jie Chen <chun-jie.chen@mediatek.com> 5 + */ 6 + 7 + #ifndef _DT_BINDINGS_POWER_MT8186_POWER_H 8 + #define _DT_BINDINGS_POWER_MT8186_POWER_H 9 + 10 + #define MT8186_POWER_DOMAIN_MFG0 0 11 + #define MT8186_POWER_DOMAIN_MFG1 1 12 + #define MT8186_POWER_DOMAIN_MFG2 2 13 + #define MT8186_POWER_DOMAIN_MFG3 3 14 + #define MT8186_POWER_DOMAIN_SSUSB 4 15 + #define MT8186_POWER_DOMAIN_SSUSB_P1 5 16 + #define MT8186_POWER_DOMAIN_DIS 6 17 + #define MT8186_POWER_DOMAIN_IMG 7 18 + #define MT8186_POWER_DOMAIN_IMG2 8 19 + #define MT8186_POWER_DOMAIN_IPE 9 20 + #define MT8186_POWER_DOMAIN_CAM 10 21 + #define MT8186_POWER_DOMAIN_CAM_RAWA 11 22 + #define MT8186_POWER_DOMAIN_CAM_RAWB 12 23 + #define MT8186_POWER_DOMAIN_VENC 13 24 + #define MT8186_POWER_DOMAIN_VDEC 14 25 + #define MT8186_POWER_DOMAIN_WPE 15 26 + #define MT8186_POWER_DOMAIN_CONN_ON 16 27 + #define MT8186_POWER_DOMAIN_CSIRX_TOP 17 28 + #define MT8186_POWER_DOMAIN_ADSP_AO 18 29 + #define MT8186_POWER_DOMAIN_ADSP_INFRA 19 30 + #define MT8186_POWER_DOMAIN_ADSP_TOP 20 31 + 32 + #endif /* _DT_BINDINGS_POWER_MT8186_POWER_H */
+46
include/dt-bindings/power/mt8195-power.h
··· 1 + /* SPDX-License-Identifier: (GPL-2.0 OR MIT) */ 2 + /* 3 + * Copyright (c) 2021 MediaTek Inc. 4 + * Author: Chun-Jie Chen <chun-jie.chen@mediatek.com> 5 + */ 6 + 7 + #ifndef _DT_BINDINGS_POWER_MT8195_POWER_H 8 + #define _DT_BINDINGS_POWER_MT8195_POWER_H 9 + 10 + #define MT8195_POWER_DOMAIN_PCIE_MAC_P0 0 11 + #define MT8195_POWER_DOMAIN_PCIE_MAC_P1 1 12 + #define MT8195_POWER_DOMAIN_PCIE_PHY 2 13 + #define MT8195_POWER_DOMAIN_SSUSB_PCIE_PHY 3 14 + #define MT8195_POWER_DOMAIN_CSI_RX_TOP 4 15 + #define MT8195_POWER_DOMAIN_ETHER 5 16 + #define MT8195_POWER_DOMAIN_ADSP 6 17 + #define MT8195_POWER_DOMAIN_AUDIO 7 18 + #define MT8195_POWER_DOMAIN_MFG0 8 19 + #define MT8195_POWER_DOMAIN_MFG1 9 20 + #define MT8195_POWER_DOMAIN_MFG2 10 21 + #define MT8195_POWER_DOMAIN_MFG3 11 22 + #define MT8195_POWER_DOMAIN_MFG4 12 23 + #define MT8195_POWER_DOMAIN_MFG5 13 24 + #define MT8195_POWER_DOMAIN_MFG6 14 25 + #define MT8195_POWER_DOMAIN_VPPSYS0 15 26 + #define MT8195_POWER_DOMAIN_VDOSYS0 16 27 + #define MT8195_POWER_DOMAIN_VPPSYS1 17 28 + #define MT8195_POWER_DOMAIN_VDOSYS1 18 29 + #define MT8195_POWER_DOMAIN_DP_TX 19 30 + #define MT8195_POWER_DOMAIN_EPD_TX 20 31 + #define MT8195_POWER_DOMAIN_HDMI_TX 21 32 + #define MT8195_POWER_DOMAIN_WPESYS 22 33 + #define MT8195_POWER_DOMAIN_VDEC0 23 34 + #define MT8195_POWER_DOMAIN_VDEC1 24 35 + #define MT8195_POWER_DOMAIN_VDEC2 25 36 + #define MT8195_POWER_DOMAIN_VENC 26 37 + #define MT8195_POWER_DOMAIN_VENC_CORE1 27 38 + #define MT8195_POWER_DOMAIN_IMG 28 39 + #define MT8195_POWER_DOMAIN_DIP 29 40 + #define MT8195_POWER_DOMAIN_IPE 30 41 + #define MT8195_POWER_DOMAIN_CAM 31 42 + #define MT8195_POWER_DOMAIN_CAM_RAWA 32 43 + #define MT8195_POWER_DOMAIN_CAM_RAWB 33 44 + #define MT8195_POWER_DOMAIN_CAM_MRAW 34 45 + 46 + #endif /* _DT_BINDINGS_POWER_MT8195_POWER_H */
+5
include/dt-bindings/power/qcom-rpmpd.h
··· 139 139 #define MDM9607_VDDMX_AO 4 140 140 #define MDM9607_VDDMX_VFL 5 141 141 142 + /* MSM8226 Power Domain Indexes */ 143 + #define MSM8226_VDDCX 0 144 + #define MSM8226_VDDCX_AO 1 145 + #define MSM8226_VDDCX_VFC 2 146 + 142 147 /* MSM8939 Power Domains */ 143 148 #define MSM8939_VDDMDCX 0 144 149 #define MSM8939_VDDMDCX_AO 1
+5
include/linux/firmware/imx/svc/rm.h
··· 59 59 60 60 #if IS_ENABLED(CONFIG_IMX_SCU) 61 61 bool imx_sc_rm_is_resource_owned(struct imx_sc_ipc *ipc, u16 resource); 62 + int imx_sc_rm_get_resource_owner(struct imx_sc_ipc *ipc, u16 resource, u8 *pt); 62 63 #else 63 64 static inline bool 64 65 imx_sc_rm_is_resource_owned(struct imx_sc_ipc *ipc, u16 resource) 65 66 { 66 67 return true; 68 + } 69 + static inline int imx_sc_rm_get_resource_owner(struct imx_sc_ipc *ipc, u16 resource, u8 *pt) 70 + { 71 + return -EOPNOTSUPP; 67 72 } 68 73 #endif 69 74 #endif
+13 -3
include/linux/qcom_scm.h
··· 63 63 64 64 extern bool qcom_scm_is_available(void); 65 65 66 - extern int qcom_scm_set_cold_boot_addr(void *entry, const cpumask_t *cpus); 67 - extern int qcom_scm_set_warm_boot_addr(void *entry, const cpumask_t *cpus); 66 + extern int qcom_scm_set_cold_boot_addr(void *entry); 67 + extern int qcom_scm_set_warm_boot_addr(void *entry); 68 68 extern void qcom_scm_cpu_power_down(u32 flags); 69 69 extern int qcom_scm_set_remote_state(u32 state, u32 id); 70 70 71 + struct qcom_scm_pas_metadata { 72 + void *ptr; 73 + dma_addr_t phys; 74 + ssize_t size; 75 + }; 76 + 71 77 extern int qcom_scm_pas_init_image(u32 peripheral, const void *metadata, 72 - size_t size); 78 + size_t size, 79 + struct qcom_scm_pas_metadata *ctx); 80 + void qcom_scm_pas_metadata_release(struct qcom_scm_pas_metadata *ctx); 73 81 extern int qcom_scm_pas_mem_setup(u32 peripheral, phys_addr_t addr, 74 82 phys_addr_t size); 75 83 extern int qcom_scm_pas_auth_and_reset(u32 peripheral); ··· 91 83 extern int qcom_scm_restore_sec_cfg(u32 device_id, u32 spare); 92 84 extern int qcom_scm_iommu_secure_ptbl_size(u32 spare, size_t *size); 93 85 extern int qcom_scm_iommu_secure_ptbl_init(u64 addr, u32 size, u32 spare); 86 + extern int qcom_scm_iommu_set_cp_pool_size(u32 spare, u32 size); 94 87 extern int qcom_scm_mem_protect_video_var(u32 cp_start, u32 cp_size, 95 88 u32 cp_nonpixel_start, 96 89 u32 cp_nonpixel_size); ··· 116 107 extern int qcom_scm_hdcp_req(struct qcom_scm_hdcp_req *req, u32 req_cnt, 117 108 u32 *resp); 118 109 110 + extern int qcom_scm_iommu_set_pt_format(u32 sec_id, u32 ctx_num, u32 pt_fmt); 119 111 extern int qcom_scm_qsmmu500_wait_safe_toggle(bool en); 120 112 121 113 extern int qcom_scm_lmh_dcvsh(u32 payload_fn, u32 payload_reg, u32 payload_val,
+15
include/linux/scmi_protocol.h
··· 42 42 43 43 struct scmi_clock_info { 44 44 char name[SCMI_MAX_STR_SIZE]; 45 + unsigned int enable_latency; 45 46 bool rate_discrete; 46 47 union { 47 48 struct { ··· 83 82 u64 rate); 84 83 int (*enable)(const struct scmi_protocol_handle *ph, u32 clk_id); 85 84 int (*disable)(const struct scmi_protocol_handle *ph, u32 clk_id); 85 + int (*enable_atomic)(const struct scmi_protocol_handle *ph, u32 clk_id); 86 + int (*disable_atomic)(const struct scmi_protocol_handle *ph, 87 + u32 clk_id); 86 88 }; 87 89 88 90 /** ··· 616 612 * @devm_protocol_get: devres managed method to acquire a protocol and get specific 617 613 * operations and a dedicated protocol handler 618 614 * @devm_protocol_put: devres managed method to release a protocol 615 + * @is_transport_atomic: method to check if the underlying transport for this 616 + * instance handle is configured to support atomic 617 + * transactions for commands. 618 + * Some users of the SCMI stack in the upper layers could 619 + * be interested to know if they can assume SCMI 620 + * command transactions associated to this handle will 621 + * never sleep and act accordingly. 622 + * An optional atomic threshold value could be returned 623 + * where configured. 619 624 * @notify_ops: pointer to set of notifications related operations 620 625 */ 621 626 struct scmi_handle { ··· 635 622 (*devm_protocol_get)(struct scmi_device *sdev, u8 proto, 636 623 struct scmi_protocol_handle **ph); 637 624 void (*devm_protocol_put)(struct scmi_device *sdev, u8 proto); 625 + bool (*is_transport_atomic)(const struct scmi_handle *handle, 626 + unsigned int *atomic_threshold); 638 627 639 628 const struct scmi_notify_ops *notify_ops; 640 629 };
+133
include/linux/soc/mediatek/infracfg.h
···
 #ifndef __SOC_MEDIATEK_INFRACFG_H
 #define __SOC_MEDIATEK_INFRACFG_H

+#define MT8195_TOP_AXI_PROT_EN_STA1			0x228
+#define MT8195_TOP_AXI_PROT_EN_1_STA1			0x258
+#define MT8195_TOP_AXI_PROT_EN_SET			0x2a0
+#define MT8195_TOP_AXI_PROT_EN_CLR			0x2a4
+#define MT8195_TOP_AXI_PROT_EN_1_SET			0x2a8
+#define MT8195_TOP_AXI_PROT_EN_1_CLR			0x2ac
+#define MT8195_TOP_AXI_PROT_EN_MM_SET			0x2d4
+#define MT8195_TOP_AXI_PROT_EN_MM_CLR			0x2d8
+#define MT8195_TOP_AXI_PROT_EN_MM_STA1			0x2ec
+#define MT8195_TOP_AXI_PROT_EN_2_SET			0x714
+#define MT8195_TOP_AXI_PROT_EN_2_CLR			0x718
+#define MT8195_TOP_AXI_PROT_EN_2_STA1			0x724
+#define MT8195_TOP_AXI_PROT_EN_VDNR_SET			0xb84
+#define MT8195_TOP_AXI_PROT_EN_VDNR_CLR			0xb88
+#define MT8195_TOP_AXI_PROT_EN_VDNR_STA1		0xb90
+#define MT8195_TOP_AXI_PROT_EN_VDNR_1_SET		0xba4
+#define MT8195_TOP_AXI_PROT_EN_VDNR_1_CLR		0xba8
+#define MT8195_TOP_AXI_PROT_EN_VDNR_1_STA1		0xbb0
+#define MT8195_TOP_AXI_PROT_EN_VDNR_2_SET		0xbb8
+#define MT8195_TOP_AXI_PROT_EN_VDNR_2_CLR		0xbbc
+#define MT8195_TOP_AXI_PROT_EN_VDNR_2_STA1		0xbc4
+#define MT8195_TOP_AXI_PROT_EN_SUB_INFRA_VDNR_SET	0xbcc
+#define MT8195_TOP_AXI_PROT_EN_SUB_INFRA_VDNR_CLR	0xbd0
+#define MT8195_TOP_AXI_PROT_EN_SUB_INFRA_VDNR_STA1	0xbd8
+#define MT8195_TOP_AXI_PROT_EN_MM_2_SET			0xdcc
+#define MT8195_TOP_AXI_PROT_EN_MM_2_CLR			0xdd0
+#define MT8195_TOP_AXI_PROT_EN_MM_2_STA1		0xdd8
+
+#define MT8195_TOP_AXI_PROT_EN_VDOSYS0			BIT(6)
+#define MT8195_TOP_AXI_PROT_EN_VPPSYS0			BIT(10)
+#define MT8195_TOP_AXI_PROT_EN_MFG1			BIT(11)
+#define MT8195_TOP_AXI_PROT_EN_MFG1_2ND			GENMASK(22, 21)
+#define MT8195_TOP_AXI_PROT_EN_VPPSYS0_2ND		BIT(23)
+#define MT8195_TOP_AXI_PROT_EN_1_MFG1			GENMASK(20, 19)
+#define MT8195_TOP_AXI_PROT_EN_1_CAM			BIT(22)
+#define MT8195_TOP_AXI_PROT_EN_2_CAM			BIT(0)
+#define MT8195_TOP_AXI_PROT_EN_2_MFG1_2ND		GENMASK(6, 5)
+#define MT8195_TOP_AXI_PROT_EN_2_MFG1			BIT(7)
+#define MT8195_TOP_AXI_PROT_EN_2_AUDIO			(BIT(9) | BIT(11))
+#define MT8195_TOP_AXI_PROT_EN_2_ADSP			(BIT(12) | GENMASK(16, 14))
+#define MT8195_TOP_AXI_PROT_EN_MM_CAM			(BIT(0) | BIT(2) | BIT(4))
+#define MT8195_TOP_AXI_PROT_EN_MM_IPE			BIT(1)
+#define MT8195_TOP_AXI_PROT_EN_MM_IMG			BIT(3)
+#define MT8195_TOP_AXI_PROT_EN_MM_VDOSYS0		GENMASK(21, 17)
+#define MT8195_TOP_AXI_PROT_EN_MM_VPPSYS1		GENMASK(8, 5)
+#define MT8195_TOP_AXI_PROT_EN_MM_VENC			(BIT(9) | BIT(11))
+#define MT8195_TOP_AXI_PROT_EN_MM_VENC_CORE1		(BIT(10) | BIT(12))
+#define MT8195_TOP_AXI_PROT_EN_MM_VDEC0			BIT(13)
+#define MT8195_TOP_AXI_PROT_EN_MM_VDEC1			BIT(14)
+#define MT8195_TOP_AXI_PROT_EN_MM_VDOSYS1_2ND		BIT(22)
+#define MT8195_TOP_AXI_PROT_EN_MM_VPPSYS1_2ND		BIT(23)
+#define MT8195_TOP_AXI_PROT_EN_MM_CAM_2ND		BIT(24)
+#define MT8195_TOP_AXI_PROT_EN_MM_IMG_2ND		BIT(25)
+#define MT8195_TOP_AXI_PROT_EN_MM_VENC_2ND		BIT(26)
+#define MT8195_TOP_AXI_PROT_EN_MM_WPESYS		BIT(27)
+#define MT8195_TOP_AXI_PROT_EN_MM_VDEC0_2ND		BIT(28)
+#define MT8195_TOP_AXI_PROT_EN_MM_VDEC1_2ND		BIT(29)
+#define MT8195_TOP_AXI_PROT_EN_MM_VDOSYS1		GENMASK(31, 30)
+#define MT8195_TOP_AXI_PROT_EN_MM_2_VPPSYS0_2ND		(GENMASK(1, 0) | BIT(4) | BIT(11))
+#define MT8195_TOP_AXI_PROT_EN_MM_2_VENC		BIT(2)
+#define MT8195_TOP_AXI_PROT_EN_MM_2_VENC_CORE1		(BIT(3) | BIT(15))
+#define MT8195_TOP_AXI_PROT_EN_MM_2_CAM			(BIT(5) | BIT(17))
+#define MT8195_TOP_AXI_PROT_EN_MM_2_VPPSYS1		(GENMASK(7, 6) | BIT(18))
+#define MT8195_TOP_AXI_PROT_EN_MM_2_VPPSYS0		GENMASK(9, 8)
+#define MT8195_TOP_AXI_PROT_EN_MM_2_VDOSYS1		BIT(10)
+#define MT8195_TOP_AXI_PROT_EN_MM_2_VDEC2_2ND		BIT(12)
+#define MT8195_TOP_AXI_PROT_EN_MM_2_VDEC0_2ND		BIT(13)
+#define MT8195_TOP_AXI_PROT_EN_MM_2_WPESYS_2ND		BIT(14)
+#define MT8195_TOP_AXI_PROT_EN_MM_2_IPE			BIT(16)
+#define MT8195_TOP_AXI_PROT_EN_MM_2_VDEC2		BIT(21)
+#define MT8195_TOP_AXI_PROT_EN_MM_2_VDEC0		BIT(22)
+#define MT8195_TOP_AXI_PROT_EN_MM_2_WPESYS		GENMASK(24, 23)
+#define MT8195_TOP_AXI_PROT_EN_VDNR_1_EPD_TX		BIT(1)
+#define MT8195_TOP_AXI_PROT_EN_VDNR_1_DP_TX		BIT(2)
+#define MT8195_TOP_AXI_PROT_EN_VDNR_PCIE_MAC_P0		(BIT(11) | BIT(28))
+#define MT8195_TOP_AXI_PROT_EN_VDNR_PCIE_MAC_P1		(BIT(12) | BIT(29))
+#define MT8195_TOP_AXI_PROT_EN_VDNR_1_PCIE_MAC_P0	BIT(13)
+#define MT8195_TOP_AXI_PROT_EN_VDNR_1_PCIE_MAC_P1	BIT(14)
+#define MT8195_TOP_AXI_PROT_EN_SUB_INFRA_VDNR_MFG1	(BIT(17) | BIT(19))
+#define MT8195_TOP_AXI_PROT_EN_SUB_INFRA_VDNR_VPPSYS0	BIT(20)
+#define MT8195_TOP_AXI_PROT_EN_SUB_INFRA_VDNR_VDOSYS0	BIT(21)
+
 #define MT8192_TOP_AXI_PROT_EN_STA1			0x228
 #define MT8192_TOP_AXI_PROT_EN_1_STA1			0x258
 #define MT8192_TOP_AXI_PROT_EN_SET			0x2a0
···
 #define MT8192_TOP_AXI_PROT_EN_MM_2_MDP			BIT(12)
 #define MT8192_TOP_AXI_PROT_EN_MM_2_MDP_2ND		BIT(13)
 #define MT8192_TOP_AXI_PROT_EN_VDNR_CAM			BIT(21)
+
+#define MT8186_TOP_AXI_PROT_EN_SET			(0x2A0)
+#define MT8186_TOP_AXI_PROT_EN_CLR			(0x2A4)
+#define MT8186_TOP_AXI_PROT_EN_STA			(0x228)
+#define MT8186_TOP_AXI_PROT_EN_1_SET			(0x2A8)
+#define MT8186_TOP_AXI_PROT_EN_1_CLR			(0x2AC)
+#define MT8186_TOP_AXI_PROT_EN_1_STA			(0x258)
+#define MT8186_TOP_AXI_PROT_EN_2_SET			(0x2B0)
+#define MT8186_TOP_AXI_PROT_EN_2_CLR			(0x2B4)
+#define MT8186_TOP_AXI_PROT_EN_2_STA			(0x26C)
+#define MT8186_TOP_AXI_PROT_EN_3_SET			(0x2B8)
+#define MT8186_TOP_AXI_PROT_EN_3_CLR			(0x2BC)
+#define MT8186_TOP_AXI_PROT_EN_3_STA			(0x2C8)
+
+/* MFG1 */
+#define MT8186_TOP_AXI_PROT_EN_1_MFG1_STEP1		(GENMASK(28, 27))
+#define MT8186_TOP_AXI_PROT_EN_MFG1_STEP2		(GENMASK(22, 21))
+#define MT8186_TOP_AXI_PROT_EN_MFG1_STEP3		(BIT(25))
+#define MT8186_TOP_AXI_PROT_EN_1_MFG1_STEP4		(BIT(29))
+/* DIS */
+#define MT8186_TOP_AXI_PROT_EN_1_DIS_STEP1		(GENMASK(12, 11))
+#define MT8186_TOP_AXI_PROT_EN_DIS_STEP2		(GENMASK(2, 1) | GENMASK(11, 10))
+/* IMG */
+#define MT8186_TOP_AXI_PROT_EN_1_IMG_STEP1		(BIT(23))
+#define MT8186_TOP_AXI_PROT_EN_1_IMG_STEP2		(BIT(15))
+/* IPE */
+#define MT8186_TOP_AXI_PROT_EN_1_IPE_STEP1		(BIT(24))
+#define MT8186_TOP_AXI_PROT_EN_1_IPE_STEP2		(BIT(16))
+/* CAM */
+#define MT8186_TOP_AXI_PROT_EN_1_CAM_STEP1		(GENMASK(22, 21))
+#define MT8186_TOP_AXI_PROT_EN_1_CAM_STEP2		(GENMASK(14, 13))
+/* VENC */
+#define MT8186_TOP_AXI_PROT_EN_1_VENC_STEP1		(BIT(31))
+#define MT8186_TOP_AXI_PROT_EN_1_VENC_STEP2		(BIT(19))
+/* VDEC */
+#define MT8186_TOP_AXI_PROT_EN_1_VDEC_STEP1		(BIT(30))
+#define MT8186_TOP_AXI_PROT_EN_1_VDEC_STEP2		(BIT(17))
+/* WPE */
+#define MT8186_TOP_AXI_PROT_EN_2_WPE_STEP1		(BIT(17))
+#define MT8186_TOP_AXI_PROT_EN_2_WPE_STEP2		(BIT(16))
+/* CONN_ON */
+#define MT8186_TOP_AXI_PROT_EN_1_CONN_ON_STEP1		(BIT(18))
+#define MT8186_TOP_AXI_PROT_EN_CONN_ON_STEP2		(BIT(14))
+#define MT8186_TOP_AXI_PROT_EN_CONN_ON_STEP3		(BIT(13))
+#define MT8186_TOP_AXI_PROT_EN_CONN_ON_STEP4		(BIT(16))
+/* ADSP_TOP */
+#define MT8186_TOP_AXI_PROT_EN_3_ADSP_TOP_STEP1		(GENMASK(12, 11))
+#define MT8186_TOP_AXI_PROT_EN_3_ADSP_TOP_STEP2		(GENMASK(1, 0))

 #define MT8183_TOP_AXI_PROT_EN_STA1			0x228
 #define MT8183_TOP_AXI_PROT_EN_STA1_1			0x258
···
 #define INFRA_TOPAXI_PROTECTSTA1			0x0228
 #define INFRA_TOPAXI_PROTECTEN_SET			0x0260
 #define INFRA_TOPAXI_PROTECTEN_CLR			0x0264
+
+#define MT8192_INFRA_CTRL				0x290
+#define MT8192_INFRA_CTRL_DISABLE_MFG2ACP		BIT(9)

 #define REG_INFRA_MISC					0xf00
 #define F_DDR_4GB_SUPPORT_EN				BIT(13)
+7 -2
include/linux/soc/qcom/llcc-qcom.h
···
 #define LLCC_WRCACHE		31
 #define LLCC_CVPFW		32
 #define LLCC_CPUSS1		33
+#define LLCC_CAMEXP0		34
+#define LLCC_CPUMTE		35
 #define LLCC_CPUHWT		36
+#define LLCC_MDMCLAD2		37
+#define LLCC_CAMEXP1		38
+#define LLCC_AENPU		45

 /**
  * struct llcc_slice_desc - Cache slice descriptor
···
  * @bitmap: Bit map to track the active slice ids
  * @offsets: Pointer to the bank offsets array
  * @ecc_irq: interrupt for llcc cache error detection and reporting
- * @major_version: Indicates the LLCC major version
+ * @version: Indicates the LLCC version
  */
 struct llcc_drv_data {
 	struct regmap *regmap;
···
 	unsigned long *bitmap;
 	u32 *offsets;
 	int ecc_irq;
-	u32 major_version;
+	u32 version;
 };

 #if IS_ENABLED(CONFIG_QCOM_LLCC)
+15 -2
include/linux/soc/qcom/mdt_loader.h
···
 struct device;
 struct firmware;
+struct qcom_scm_pas_metadata;

 #if IS_ENABLED(CONFIG_QCOM_MDT_LOADER)

 ssize_t qcom_mdt_get_size(const struct firmware *fw);
+int qcom_mdt_pas_init(struct device *dev, const struct firmware *fw,
+		      const char *fw_name, int pas_id, phys_addr_t mem_phys,
+		      struct qcom_scm_pas_metadata *pas_metadata_ctx);
 int qcom_mdt_load(struct device *dev, const struct firmware *fw,
 		  const char *fw_name, int pas_id, void *mem_region,
 		  phys_addr_t mem_phys, size_t mem_size,
···
 		  const char *fw_name, int pas_id, void *mem_region,
 		  phys_addr_t mem_phys, size_t mem_size,
 		  phys_addr_t *reloc_base);
-void *qcom_mdt_read_metadata(const struct firmware *fw, size_t *data_len);
+void *qcom_mdt_read_metadata(const struct firmware *fw, size_t *data_len,
+			     const char *fw_name, struct device *dev);

 #else /* !IS_ENABLED(CONFIG_QCOM_MDT_LOADER) */

 static inline ssize_t qcom_mdt_get_size(const struct firmware *fw)
 {
 	return -ENODEV;
 }

+static inline int qcom_mdt_pas_init(struct device *dev, const struct firmware *fw,
+				    const char *fw_name, int pas_id, phys_addr_t mem_phys,
+				    struct qcom_scm_pas_metadata *pas_metadata_ctx)
+{
+	return -ENODEV;
+}
···
 }

 static inline void *qcom_mdt_read_metadata(const struct firmware *fw,
-					   size_t *data_len)
+					   size_t *data_len, const char *fw_name,
+					   struct device *dev)
 {
 	return ERR_PTR(-ENODEV);
 }
+1 -1
include/linux/soc/ti/ti_sci_protocol.h
···
 static inline struct ti_sci_resource *
 devm_ti_sci_get_resource(const struct ti_sci_handle *handle, struct device *dev,
-			 u32 dev_id, u32 sub_type);
+			 u32 dev_id, u32 sub_type)
 {
 	return ERR_PTR(-EINVAL);
 }
+31 -107
include/linux/tee_drv.h
···
 /* SPDX-License-Identifier: GPL-2.0-only */
 /*
- * Copyright (c) 2015-2016, Linaro Limited
+ * Copyright (c) 2015-2022 Linaro Limited
  */

 #ifndef __TEE_DRV_H
···
  * specific TEE driver.
  */

-#define TEE_SHM_MAPPED		BIT(0)  /* Memory mapped by the kernel */
-#define TEE_SHM_DMA_BUF		BIT(1)  /* Memory with dma-buf handle */
-#define TEE_SHM_EXT_DMA_BUF	BIT(2)  /* Memory with dma-buf handle */
-#define TEE_SHM_REGISTER	BIT(3)  /* Memory registered in secure world */
-#define TEE_SHM_USER_MAPPED	BIT(4)  /* Memory mapped in user space */
-#define TEE_SHM_POOL		BIT(5)  /* Memory allocated from pool */
-#define TEE_SHM_KERNEL_MAPPED	BIT(6)  /* Memory mapped in kernel space */
-#define TEE_SHM_PRIV		BIT(7)  /* Memory private to TEE driver */
+#define TEE_SHM_DYNAMIC		BIT(0)  /* Dynamic shared memory registered */
+					/* in secure world */
+#define TEE_SHM_USER_MAPPED	BIT(1)  /* Memory mapped in user space */
+#define TEE_SHM_POOL		BIT(2)  /* Memory allocated from pool */
+#define TEE_SHM_PRIV		BIT(3)  /* Memory private to TEE driver */

 struct device;
 struct tee_device;
···
 };

 /**
- * struct tee_shm_pool_mgr - shared memory manager
+ * struct tee_shm_pool - shared memory pool
  * @ops:		operations
  * @private_data:	private data for the shared memory manager
  */
-struct tee_shm_pool_mgr {
-	const struct tee_shm_pool_mgr_ops *ops;
+struct tee_shm_pool {
+	const struct tee_shm_pool_ops *ops;
 	void *private_data;
 };

 /**
- * struct tee_shm_pool_mgr_ops - shared memory pool manager operations
+ * struct tee_shm_pool_ops - shared memory pool operations
  * @alloc:		called when allocating shared memory
  * @free:		called when freeing shared memory
- * @destroy_poolmgr:	called when destroying the pool manager
+ * @destroy_pool:	called when destroying the pool
  */
-struct tee_shm_pool_mgr_ops {
-	int (*alloc)(struct tee_shm_pool_mgr *poolmgr, struct tee_shm *shm,
-		     size_t size);
-	void (*free)(struct tee_shm_pool_mgr *poolmgr, struct tee_shm *shm);
-	void (*destroy_poolmgr)(struct tee_shm_pool_mgr *poolmgr);
+struct tee_shm_pool_ops {
+	int (*alloc)(struct tee_shm_pool *pool, struct tee_shm *shm,
+		     size_t size, size_t align);
+	void (*free)(struct tee_shm_pool *pool, struct tee_shm *shm);
+	void (*destroy_pool)(struct tee_shm_pool *pool);
 };
-
-/**
- * tee_shm_pool_alloc() - Create a shared memory pool from shm managers
- * @priv_mgr:	manager for driver private shared memory allocations
- * @dmabuf_mgr:	manager for dma-buf shared memory allocations
- *
- * Allocation with the flag TEE_SHM_DMA_BUF set will use the range supplied
- * in @dmabuf, others will use the range provided by @priv.
- *
- * @returns pointer to a 'struct tee_shm_pool' or an ERR_PTR on failure.
- */
-struct tee_shm_pool *tee_shm_pool_alloc(struct tee_shm_pool_mgr *priv_mgr,
-					struct tee_shm_pool_mgr *dmabuf_mgr);

 /*
- * tee_shm_pool_mgr_alloc_res_mem() - Create a shm manager for reserved
- * memory
+ * tee_shm_pool_alloc_res_mem() - Create a shm manager for reserved memory
  * @vaddr:	Virtual address of start of pool
  * @paddr:	Physical address of start of pool
  * @size:	Size in bytes of the pool
- *
- * @returns pointer to a 'struct tee_shm_pool_mgr' or an ERR_PTR on failure.
- */
-struct tee_shm_pool_mgr *tee_shm_pool_mgr_alloc_res_mem(unsigned long vaddr,
-							phys_addr_t paddr,
-							size_t size,
-							int min_alloc_order);
-
-/**
- * tee_shm_pool_mgr_destroy() - Free a shared memory manager
- */
-static inline void tee_shm_pool_mgr_destroy(struct tee_shm_pool_mgr *poolm)
-{
-	poolm->ops->destroy_poolmgr(poolm);
-}
-
-/**
- * struct tee_shm_pool_mem_info - holds information needed to create a shared
- * memory pool
- * @vaddr:	Virtual address of start of pool
- * @paddr:	Physical address of start of pool
- * @size:	Size in bytes of the pool
- */
-struct tee_shm_pool_mem_info {
-	unsigned long vaddr;
-	phys_addr_t paddr;
-	size_t size;
-};
-
-/**
- * tee_shm_pool_alloc_res_mem() - Create a shared memory pool from reserved
- * memory range
- * @priv_info:	 Information for driver private shared memory pool
- * @dmabuf_info: Information for dma-buf shared memory pool
- *
- * Start and end of pools will must be page aligned.
- *
- * Allocation with the flag TEE_SHM_DMA_BUF set will use the range supplied
- * in @dmabuf, others will use the range provided by @priv.
  *
  * @returns pointer to a 'struct tee_shm_pool' or an ERR_PTR on failure.
  */
-struct tee_shm_pool *
-tee_shm_pool_alloc_res_mem(struct tee_shm_pool_mem_info *priv_info,
-			   struct tee_shm_pool_mem_info *dmabuf_info);
+struct tee_shm_pool *tee_shm_pool_alloc_res_mem(unsigned long vaddr,
+						phys_addr_t paddr, size_t size,
+						int min_alloc_order);

 /**
  * tee_shm_pool_free() - Free a shared memory pool
···
  * The must be no remaining shared memory allocated from this pool when
  * this function is called.
  */
-void tee_shm_pool_free(struct tee_shm_pool *pool);
+static inline void tee_shm_pool_free(struct tee_shm_pool *pool)
+{
+	pool->ops->destroy_pool(pool);
+}

 /**
  * tee_get_drvdata() - Return driver_data pointer
···
  */
 void *tee_get_drvdata(struct tee_device *teedev);

-/**
- * tee_shm_alloc() - Allocate shared memory
- * @ctx:	Context that allocates the shared memory
- * @size:	Requested size of shared memory
- * @flags:	Flags setting properties for the requested shared memory.
- *
- * Memory allocated as global shared memory is automatically freed when the
- * TEE file pointer is closed. The @flags field uses the bits defined by
- * TEE_SHM_* above. TEE_SHM_MAPPED must currently always be set. If
- * TEE_SHM_DMA_BUF global shared memory will be allocated and associated
- * with a dma-buf handle, else driver private memory.
- *
- * @returns a pointer to 'struct tee_shm'
- */
-struct tee_shm *tee_shm_alloc(struct tee_context *ctx, size_t size, u32 flags);
+struct tee_shm *tee_shm_alloc_priv_buf(struct tee_context *ctx, size_t size);
 struct tee_shm *tee_shm_alloc_kernel_buf(struct tee_context *ctx, size_t size);

-/**
- * tee_shm_register() - Register shared memory buffer
- * @ctx:	Context that registers the shared memory
- * @addr:	Address is userspace of the shared buffer
- * @length:	Length of the shared buffer
- * @flags:	Flags setting properties for the requested shared memory.
- *
- * @returns a pointer to 'struct tee_shm'
- */
-struct tee_shm *tee_shm_register(struct tee_context *ctx, unsigned long addr,
-				 size_t length, u32 flags);
+struct tee_shm *tee_shm_register_kernel_buf(struct tee_context *ctx,
+					    void *addr, size_t length);

 /**
- * tee_shm_is_registered() - Check if shared memory object in registered in TEE
+ * tee_shm_is_dynamic() - Check if shared memory object is of the dynamic kind
  * @shm:	Shared memory handle
- * @returns true if object is registered in TEE
+ * @returns true if object is dynamic shared memory
  */
-static inline bool tee_shm_is_registered(struct tee_shm *shm)
+static inline bool tee_shm_is_dynamic(struct tee_shm *shm)
 {
-	return shm && (shm->flags & TEE_SHM_REGISTER);
+	return shm && (shm->flags & TEE_SHM_DYNAMIC);
 }

 /**
+1 -1
include/soc/tegra/bpmp-abi.h
···
 /**
  * @brief Request with MRQ_RESET
  *
  * Used by the sender of an #MRQ_RESET message to request BPMP to
- * assert or or deassert a given reset line.
+ * assert or deassert a given reset line.
  */
 struct mrq_reset_request {
 	/** @brief Reset action to perform (@ref mrq_reset_commands) */
+28
include/trace/events/scmi.h
···
 		  __entry->seq, __entry->poll)
 );

+TRACE_EVENT(scmi_xfer_response_wait,
+	TP_PROTO(int transfer_id, u8 msg_id, u8 protocol_id, u16 seq,
+		 u32 timeout, bool poll),
+	TP_ARGS(transfer_id, msg_id, protocol_id, seq, timeout, poll),
+
+	TP_STRUCT__entry(
+		__field(int, transfer_id)
+		__field(u8, msg_id)
+		__field(u8, protocol_id)
+		__field(u16, seq)
+		__field(u32, timeout)
+		__field(bool, poll)
+	),
+
+	TP_fast_assign(
+		__entry->transfer_id = transfer_id;
+		__entry->msg_id = msg_id;
+		__entry->protocol_id = protocol_id;
+		__entry->seq = seq;
+		__entry->timeout = timeout;
+		__entry->poll = poll;
+	),
+
+	TP_printk("transfer_id=%d msg_id=%u protocol_id=%u seq=%u tmo_ms=%u poll=%u",
+		  __entry->transfer_id, __entry->msg_id, __entry->protocol_id,
+		  __entry->seq, __entry->timeout, __entry->poll)
+);
+
 TRACE_EVENT(scmi_xfer_end,
 	TP_PROTO(int transfer_id, u8 msg_id, u8 protocol_id, u16 seq,
 		 int status),
+9 -14
security/keys/trusted-keys/trusted_tee.c
···
 	memset(&inv_arg, 0, sizeof(inv_arg));
 	memset(&param, 0, sizeof(param));

-	reg_shm_in = tee_shm_register(pvt_data.ctx, (unsigned long)p->key,
-				      p->key_len, TEE_SHM_DMA_BUF |
-				      TEE_SHM_KERNEL_MAPPED);
+	reg_shm_in = tee_shm_register_kernel_buf(pvt_data.ctx, p->key,
+						 p->key_len);
 	if (IS_ERR(reg_shm_in)) {
 		dev_err(pvt_data.dev, "key shm register failed\n");
 		return PTR_ERR(reg_shm_in);
 	}

-	reg_shm_out = tee_shm_register(pvt_data.ctx, (unsigned long)p->blob,
-				       sizeof(p->blob), TEE_SHM_DMA_BUF |
-				       TEE_SHM_KERNEL_MAPPED);
+	reg_shm_out = tee_shm_register_kernel_buf(pvt_data.ctx, p->blob,
+						  sizeof(p->blob));
 	if (IS_ERR(reg_shm_out)) {
 		dev_err(pvt_data.dev, "blob shm register failed\n");
 		ret = PTR_ERR(reg_shm_out);
···
 	memset(&inv_arg, 0, sizeof(inv_arg));
 	memset(&param, 0, sizeof(param));

-	reg_shm_in = tee_shm_register(pvt_data.ctx, (unsigned long)p->blob,
-				      p->blob_len, TEE_SHM_DMA_BUF |
-				      TEE_SHM_KERNEL_MAPPED);
+	reg_shm_in = tee_shm_register_kernel_buf(pvt_data.ctx, p->blob,
+						 p->blob_len);
 	if (IS_ERR(reg_shm_in)) {
 		dev_err(pvt_data.dev, "blob shm register failed\n");
 		return PTR_ERR(reg_shm_in);
 	}

-	reg_shm_out = tee_shm_register(pvt_data.ctx, (unsigned long)p->key,
-				       sizeof(p->key), TEE_SHM_DMA_BUF |
-				       TEE_SHM_KERNEL_MAPPED);
+	reg_shm_out = tee_shm_register_kernel_buf(pvt_data.ctx, p->key,
+						  sizeof(p->key));
 	if (IS_ERR(reg_shm_out)) {
 		dev_err(pvt_data.dev, "key shm register failed\n");
 		ret = PTR_ERR(reg_shm_out);
···
 	memset(&inv_arg, 0, sizeof(inv_arg));
 	memset(&param, 0, sizeof(param));

-	reg_shm = tee_shm_register(pvt_data.ctx, (unsigned long)key, key_len,
-				   TEE_SHM_DMA_BUF | TEE_SHM_KERNEL_MAPPED);
+	reg_shm = tee_shm_register_kernel_buf(pvt_data.ctx, key, key_len);
 	if (IS_ERR(reg_shm)) {
 		dev_err(pvt_data.dev, "key shm register failed\n");
 		return PTR_ERR(reg_shm);