Merge tag 'armsoc-drivers' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc

Pull ARM SoC-related driver updates from Olof Johansson:
"Various driver updates for platforms:

- Nvidia: Fuse support for Tegra194, continued memory controller
pieces for Tegra30

- NXP/FSL: Refactorings of QUICC Engine drivers to support
ARM/ARM64/PPC

- NXP/FSL: i.MX8MP SoC driver pieces

- TI Keystone: ring accelerator driver

- Qualcomm: SCM driver cleanup/refactoring + support for new SoCs.

- Xilinx ZynqMP: feature checking interface for firmware. Mailbox
communication for power management

- Overall support patch set for cpuidle on more complex hierarchies
(PSCI-based)

and misc cleanups and refactorings of Marvell, TI, and other platforms"

* tag 'armsoc-drivers' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc: (166 commits)
drivers: soc: xilinx: Use mailbox IPI callback
dt-bindings: power: reset: xilinx: Add bindings for ipi mailbox
drivers: soc: ti: knav_qmss_queue: Pass lockdep expression to RCU lists
MAINTAINERS: Add brcmstb PCIe controller entry
soc/tegra: fuse: Unmap registers once they are not needed anymore
soc/tegra: fuse: Correct straps' address for older Tegra124 device trees
soc/tegra: fuse: Warn if straps are not ready
soc/tegra: fuse: Cache values of straps and Chip ID registers
memory: tegra30-emc: Correct error message for timed out auto calibration
memory: tegra30-emc: Firm up hardware programming sequence
memory: tegra30-emc: Firm up suspend/resume sequence
soc/tegra: regulators: Do nothing if voltage is unchanged
memory: tegra: Correct reset value of xusb_hostr
soc/tegra: fuse: Add APB DMA dependency for Tegra20
bus: tegra-aconnect: Remove PM_CLK dependency
dt-bindings: mediatek: add MT6765 power dt-bindings
soc: mediatek: cmdq: delete not used define
memory: tegra: Add support for the Tegra194 memory controller
memory: tegra: Only include support for enabled SoCs
memory: tegra: Support DVFS on Tegra186 and later
...

+6592 -3289
+15
Documentation/devicetree/bindings/arm/cpus.yaml
··· 242 243 where voltage is in V, frequency is in MHz. 244 245 qcom,saw: 246 $ref: '/schemas/types.yaml#/definitions/phandle' 247 description: |
··· 242 243 where voltage is in V, frequency is in MHz. 244 245 + power-domains: 246 + $ref: '/schemas/types.yaml#/definitions/phandle-array' 247 + description: 248 + List of phandles and PM domain specifiers, as defined by bindings of the 249 + PM domain provider (see also ../power_domain.txt). 250 + 251 + power-domain-names: 252 + $ref: '/schemas/types.yaml#/definitions/string-array' 253 + description: 254 + A list of power domain name strings sorted in the same order as the 255 + power-domains property. 256 + 257 + For PSCI based platforms, the name corresponding to the index of the PSCI 258 + PM domain provider, must be "psci". 259 + 260 qcom,saw: 261 $ref: '/schemas/types.yaml#/definitions/phandle' 262 description: |
+1 -1
Documentation/devicetree/bindings/arm/msm/qcom,llcc.yaml
··· 47 - | 48 #include <dt-bindings/interrupt-controller/arm-gic.h> 49 50 - cache-controller@1100000 { 51 compatible = "qcom,sdm845-llcc"; 52 reg = <0x1100000 0x200000>, <0x1300000 0x50000> ; 53 reg-names = "llcc_base", "llcc_broadcast_base";
··· 47 - | 48 #include <dt-bindings/interrupt-controller/arm-gic.h> 49 50 + system-cache-controller@1100000 { 51 compatible = "qcom,sdm845-llcc"; 52 reg = <0x1100000 0x200000>, <0x1300000 0x50000> ; 53 reg-names = "llcc_base", "llcc_broadcast_base";
+104
Documentation/devicetree/bindings/arm/psci.yaml
··· 102 [1] Kernel documentation - ARM idle states bindings 103 Documentation/devicetree/bindings/arm/idle-states.txt 104 105 106 required: 107 - compatible ··· 187 188 cpu_on = <0x95c10002>; 189 cpu_off = <0x95c10001>; 190 }; 191 ...
··· 102 [1] Kernel documentation - ARM idle states bindings 103 Documentation/devicetree/bindings/arm/idle-states.txt 104 105 + "#power-domain-cells": 106 + description: 107 + The number of cells in a PM domain specifier as per binding in [3]. 108 + Must be 0 as to represent a single PM domain. 109 + 110 + ARM systems can have multiple cores, sometimes in an hierarchical 111 + arrangement. This often, but not always, maps directly to the processor 112 + power topology of the system. Individual nodes in a topology have their 113 + own specific power states and can be better represented hierarchically. 114 + 115 + For these cases, the definitions of the idle states for the CPUs and the 116 + CPU topology, must conform to the binding in [3]. The idle states 117 + themselves must conform to the binding in [4] and must specify the 118 + arm,psci-suspend-param property. 119 + 120 + It should also be noted that, in PSCI firmware v1.0 the OS-Initiated 121 + (OSI) CPU suspend mode is introduced. Using a hierarchical representation 122 + helps to implement support for OSI mode and OS implementations may choose 123 + to mandate it. 124 + 125 + [3] Documentation/devicetree/bindings/power/power_domain.txt 126 + [4] Documentation/devicetree/bindings/power/domain-idle-state.txt 127 + 128 + power-domains: 129 + $ref: '/schemas/types.yaml#/definitions/phandle-array' 130 + description: 131 + List of phandles and PM domain specifiers, as defined by bindings of the 132 + PM domain provider. 133 134 required: 135 - compatible ··· 159 160 cpu_on = <0x95c10002>; 161 cpu_off = <0x95c10001>; 162 + }; 163 + 164 + - |+ 165 + 166 + // Case 4: CPUs and CPU idle states described using the hierarchical model. 167 + 168 + cpus { 169 + #size-cells = <0>; 170 + #address-cells = <1>; 171 + 172 + CPU0: cpu@0 { 173 + device_type = "cpu"; 174 + compatible = "arm,cortex-a53", "arm,armv8"; 175 + reg = <0x0>; 176 + enable-method = "psci"; 177 + power-domains = <&CPU_PD0>; 178 + power-domain-names = "psci"; 179 + }; 180 + 181 + CPU1: cpu@1 { 182 + device_type = "cpu"; 183 + compatible = "arm,cortex-a57", "arm,armv8"; 184 + reg = <0x100>; 185 + enable-method = "psci"; 186 + power-domains = <&CPU_PD1>; 187 + power-domain-names = "psci"; 188 + }; 189 + 190 + idle-states { 191 + 192 + CPU_PWRDN: cpu-power-down { 193 + compatible = "arm,idle-state"; 194 + arm,psci-suspend-param = <0x0000001>; 195 + entry-latency-us = <10>; 196 + exit-latency-us = <10>; 197 + min-residency-us = <100>; 198 + }; 199 + 200 + CLUSTER_RET: cluster-retention { 201 + compatible = "domain-idle-state"; 202 + arm,psci-suspend-param = <0x1000011>; 203 + entry-latency-us = <500>; 204 + exit-latency-us = <500>; 205 + min-residency-us = <2000>; 206 + }; 207 + 208 + CLUSTER_PWRDN: cluster-power-down { 209 + compatible = "domain-idle-state"; 210 + arm,psci-suspend-param = <0x1000031>; 211 + entry-latency-us = <2000>; 212 + exit-latency-us = <2000>; 213 + min-residency-us = <6000>; 214 + }; 215 + }; 216 + }; 217 + 218 + psci { 219 + compatible = "arm,psci-1.0"; 220 + method = "smc"; 221 + 222 + CPU_PD0: cpu-pd0 { 223 + #power-domain-cells = <0>; 224 + domain-idle-states = <&CPU_PWRDN>; 225 + power-domains = <&CLUSTER_PD>; 226 + }; 227 + 228 + CPU_PD1: cpu-pd1 { 229 + #power-domain-cells = <0>; 230 + domain-idle-states = <&CPU_PWRDN>; 231 + power-domains = <&CLUSTER_PD>; 232 + }; 233 + 234 + CLUSTER_PD: cluster-pd { 235 + #power-domain-cells = <0>; 236 + domain-idle-states = <&CLUSTER_RET>, <&CLUSTER_PWRDN>; 237 + }; 238 }; 239 ...
-148
Documentation/devicetree/bindings/power/qcom,rpmpd.txt
··· 1 - Qualcomm RPM/RPMh Power domains 2 - 3 - For RPM/RPMh Power domains, we communicate a performance state to RPM/RPMh 4 - which then translates it into a corresponding voltage on a rail 5 - 6 - Required Properties: 7 - - compatible: Should be one of the following 8 - * qcom,msm8976-rpmpd: RPM Power domain for the msm8976 family of SoC 9 - * qcom,msm8996-rpmpd: RPM Power domain for the msm8996 family of SoC 10 - * qcom,msm8998-rpmpd: RPM Power domain for the msm8998 family of SoC 11 - * qcom,qcs404-rpmpd: RPM Power domain for the qcs404 family of SoC 12 - * qcom,sdm845-rpmhpd: RPMh Power domain for the sdm845 family of SoC 13 - - #power-domain-cells: number of cells in Power domain specifier 14 - must be 1. 15 - - operating-points-v2: Phandle to the OPP table for the Power domain. 16 - Refer to Documentation/devicetree/bindings/power/power_domain.txt 17 - and Documentation/devicetree/bindings/opp/opp.txt for more details 18 - 19 - Refer to <dt-bindings/power/qcom-rpmpd.h> for the level values for 20 - various OPPs for different platforms as well as Power domain indexes 21 - 22 - Example: rpmh power domain controller and OPP table 23 - 24 - #include <dt-bindings/power/qcom-rpmhpd.h> 25 - 26 - opp-level values specified in the OPP tables for RPMh power domains 27 - should use the RPMH_REGULATOR_LEVEL_* constants from 28 - <dt-bindings/power/qcom-rpmhpd.h> 29 - 30 - rpmhpd: power-controller { 31 - compatible = "qcom,sdm845-rpmhpd"; 32 - #power-domain-cells = <1>; 33 - operating-points-v2 = <&rpmhpd_opp_table>; 34 - 35 - rpmhpd_opp_table: opp-table { 36 - compatible = "operating-points-v2"; 37 - 38 - rpmhpd_opp_ret: opp1 { 39 - opp-level = <RPMH_REGULATOR_LEVEL_RETENTION>; 40 - }; 41 - 42 - rpmhpd_opp_min_svs: opp2 { 43 - opp-level = <RPMH_REGULATOR_LEVEL_MIN_SVS>; 44 - }; 45 - 46 - rpmhpd_opp_low_svs: opp3 { 47 - opp-level = <RPMH_REGULATOR_LEVEL_LOW_SVS>; 48 - }; 49 - 50 - rpmhpd_opp_svs: opp4 { 51 - opp-level = <RPMH_REGULATOR_LEVEL_SVS>; 52 - }; 53 - 54 - rpmhpd_opp_svs_l1: opp5 { 55 - opp-level = <RPMH_REGULATOR_LEVEL_SVS_L1>; 56 - }; 57 - 58 - rpmhpd_opp_nom: opp6 { 59 - opp-level = <RPMH_REGULATOR_LEVEL_NOM>; 60 - }; 61 - 62 - rpmhpd_opp_nom_l1: opp7 { 63 - opp-level = <RPMH_REGULATOR_LEVEL_NOM_L1>; 64 - }; 65 - 66 - rpmhpd_opp_nom_l2: opp8 { 67 - opp-level = <RPMH_REGULATOR_LEVEL_NOM_L2>; 68 - }; 69 - 70 - rpmhpd_opp_turbo: opp9 { 71 - opp-level = <RPMH_REGULATOR_LEVEL_TURBO>; 72 - }; 73 - 74 - rpmhpd_opp_turbo_l1: opp10 { 75 - opp-level = <RPMH_REGULATOR_LEVEL_TURBO_L1>; 76 - }; 77 - }; 78 - }; 79 - 80 - Example: rpm power domain controller and OPP table 81 - 82 - rpmpd: power-controller { 83 - compatible = "qcom,msm8996-rpmpd"; 84 - #power-domain-cells = <1>; 85 - operating-points-v2 = <&rpmpd_opp_table>; 86 - 87 - rpmpd_opp_table: opp-table { 88 - compatible = "operating-points-v2"; 89 - 90 - rpmpd_opp_low: opp1 { 91 - opp-level = <1>; 92 - }; 93 - 94 - rpmpd_opp_ret: opp2 { 95 - opp-level = <2>; 96 - }; 97 - 98 - rpmpd_opp_svs: opp3 { 99 - opp-level = <3>; 100 - }; 101 - 102 - rpmpd_opp_normal: opp4 { 103 - opp-level = <4>; 104 - }; 105 - 106 - rpmpd_opp_high: opp5 { 107 - opp-level = <5>; 108 - }; 109 - 110 - rpmpd_opp_turbo: opp6 { 111 - opp-level = <6>; 112 - }; 113 - }; 114 - }; 115 - 116 - Example: Client/Consumer device using OPP table 117 - 118 - leaky-device0@12350000 { 119 - compatible = "foo,i-leak-current"; 120 - reg = <0x12350000 0x1000>; 121 - power-domains = <&rpmhpd SDM845_MX>; 122 - operating-points-v2 = <&leaky_opp_table>; 123 - }; 124 - 125 - 126 - leaky_opp_table: opp-table { 127 - compatible = "operating-points-v2"; 128 - 129 - opp1 { 130 - opp-hz = /bits/ 64 <144000>; 131 - required-opps = <&rpmhpd_opp_low>; 132 - }; 133 - 134 - opp2 { 135 - opp-hz = /bits/ 64 <400000>; 136 - required-opps = <&rpmhpd_opp_ret>; 137 - }; 138 - 139 - opp3 { 140 - opp-hz = /bits/ 64 <20000000>; 141 - required-opps = <&rpmpd_opp_svs>; 142 - }; 143 - 144 - opp4 { 145 - opp-hz = /bits/ 64 <25000000>; 146 - required-opps = <&rpmpd_opp_normal>; 147 - }; 148 - };
···
+170
Documentation/devicetree/bindings/power/qcom,rpmpd.yaml
···
··· 1 + # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/power/qcom,rpmpd.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: Qualcomm RPM/RPMh Power domains 8 + 9 + maintainers: 10 + - Rajendra Nayak <rnayak@codeaurora.org> 11 + 12 + description: 13 + For RPM/RPMh Power domains, we communicate a performance state to RPM/RPMh 14 + which then translates it into a corresponding voltage on a rail. 15 + 16 + properties: 17 + compatible: 18 + enum: 19 + - qcom,msm8976-rpmpd 20 + - qcom,msm8996-rpmpd 21 + - qcom,msm8998-rpmpd 22 + - qcom,qcs404-rpmpd 23 + - qcom,sc7180-rpmhpd 24 + - qcom,sdm845-rpmhpd 25 + - qcom,sm8150-rpmhpd 26 + 27 + '#power-domain-cells': 28 + const: 1 29 + 30 + operating-points-v2: true 31 + 32 + opp-table: 33 + type: object 34 + 35 + required: 36 + - compatible 37 + - '#power-domain-cells' 38 + - operating-points-v2 39 + 40 + additionalProperties: false 41 + 42 + examples: 43 + - | 44 + 45 + // Example 1 (rpmh power domain controller and OPP table): 46 + 47 + #include <dt-bindings/power/qcom-rpmpd.h> 48 + 49 + rpmhpd: power-controller { 50 + compatible = "qcom,sdm845-rpmhpd"; 51 + #power-domain-cells = <1>; 52 + operating-points-v2 = <&rpmhpd_opp_table>; 53 + 54 + rpmhpd_opp_table: opp-table { 55 + compatible = "operating-points-v2"; 56 + 57 + rpmhpd_opp_ret: opp1 { 58 + opp-level = <RPMH_REGULATOR_LEVEL_RETENTION>; 59 + }; 60 + 61 + rpmhpd_opp_min_svs: opp2 { 62 + opp-level = <RPMH_REGULATOR_LEVEL_MIN_SVS>; 63 + }; 64 + 65 + rpmhpd_opp_low_svs: opp3 { 66 + opp-level = <RPMH_REGULATOR_LEVEL_LOW_SVS>; 67 + }; 68 + 69 + rpmhpd_opp_svs: opp4 { 70 + opp-level = <RPMH_REGULATOR_LEVEL_SVS>; 71 + }; 72 + 73 + rpmhpd_opp_svs_l1: opp5 { 74 + opp-level = <RPMH_REGULATOR_LEVEL_SVS_L1>; 75 + }; 76 + 77 + rpmhpd_opp_nom: opp6 { 78 + opp-level = <RPMH_REGULATOR_LEVEL_NOM>; 79 + }; 80 + 81 + rpmhpd_opp_nom_l1: opp7 { 82 + opp-level = <RPMH_REGULATOR_LEVEL_NOM_L1>; 83 + }; 84 + 85 + rpmhpd_opp_nom_l2: opp8 { 86 + opp-level = <RPMH_REGULATOR_LEVEL_NOM_L2>; 87 + }; 88 + 89 + rpmhpd_opp_turbo: opp9 { 90 + opp-level = <RPMH_REGULATOR_LEVEL_TURBO>; 91 + }; 92 + 93 + rpmhpd_opp_turbo_l1: opp10 { 94 + opp-level = <RPMH_REGULATOR_LEVEL_TURBO_L1>; 95 + }; 96 + }; 97 + }; 98 + 99 + - | 100 + 101 + // Example 2 (rpm power domain controller and OPP table): 102 + 103 + rpmpd: power-controller { 104 + compatible = "qcom,msm8996-rpmpd"; 105 + #power-domain-cells = <1>; 106 + operating-points-v2 = <&rpmpd_opp_table>; 107 + 108 + rpmpd_opp_table: opp-table { 109 + compatible = "operating-points-v2"; 110 + 111 + rpmpd_opp_low: opp1 { 112 + opp-level = <1>; 113 + }; 114 + 115 + rpmpd_opp_ret: opp2 { 116 + opp-level = <2>; 117 + }; 118 + 119 + rpmpd_opp_svs: opp3 { 120 + opp-level = <3>; 121 + }; 122 + 123 + rpmpd_opp_normal: opp4 { 124 + opp-level = <4>; 125 + }; 126 + 127 + rpmpd_opp_high: opp5 { 128 + opp-level = <5>; 129 + }; 130 + 131 + rpmpd_opp_turbo: opp6 { 132 + opp-level = <6>; 133 + }; 134 + }; 135 + }; 136 + 137 + - | 138 + 139 + // Example 3 (Client/Consumer device using OPP table): 140 + 141 + leaky-device0@12350000 { 142 + compatible = "foo,i-leak-current"; 143 + reg = <0x12350000 0x1000>; 144 + power-domains = <&rpmhpd 0>; 145 + operating-points-v2 = <&leaky_opp_table>; 146 + }; 147 + 148 + leaky_opp_table: opp-table { 149 + compatible = "operating-points-v2"; 150 + opp1 { 151 + opp-hz = /bits/ 64 <144000>; 152 + required-opps = <&rpmhpd_opp_low>; 153 + }; 154 + 155 + opp2 { 156 + opp-hz = /bits/ 64 <400000>; 157 + required-opps = <&rpmhpd_opp_ret>; 158 + }; 159 + 160 + opp3 { 161 + opp-hz = /bits/ 64 <20000000>; 162 + required-opps = <&rpmpd_opp_svs>; 163 + }; 164 + 165 + opp4 { 166 + opp-hz = /bits/ 64 <25000000>; 167 + required-opps = <&rpmpd_opp_normal>; 168 + }; 169 + }; 170 + ...
+39 -3
Documentation/devicetree/bindings/power/reset/xlnx,zynqmp-power.txt
··· 8 - compatible: Must contain: "xlnx,zynqmp-power" 9 - interrupts: Interrupt specifier 10 11 - ------- 12 - Example 13 - ------- 14 15 firmware { 16 zynqmp_firmware: zynqmp-firmware { ··· 38 zynqmp_power: zynqmp-power { 39 compatible = "xlnx,zynqmp-power"; 40 interrupts = <0 35 4>; 41 }; 42 }; 43 };
··· 8 - compatible: Must contain: "xlnx,zynqmp-power" 9 - interrupts: Interrupt specifier 10 11 + Optional properties: 12 + - mbox-names : Name given to channels seen in the 'mboxes' property. 13 + "tx" - Mailbox corresponding to transmit path 14 + "rx" - Mailbox corresponding to receive path 15 + - mboxes : Standard property to specify a Mailbox. Each value of 16 + the mboxes property should contain a phandle to the 17 + mailbox controller device node and an args specifier 18 + that will be the phandle to the intended sub-mailbox 19 + child node to be used for communication. See 20 + Documentation/devicetree/bindings/mailbox/mailbox.txt 21 + for more details about the generic mailbox controller 22 + and client driver bindings. Also see 23 + Documentation/devicetree/bindings/mailbox/ \ 24 + xlnx,zynqmp-ipi-mailbox.txt for a typical controller that 25 + is used to communicate with these system controllers. 26 + 27 + -------- 28 + Examples 29 + -------- 30 + 31 + Example with interrupt method: 32 33 firmware { 34 zynqmp_firmware: zynqmp-firmware { ··· 20 zynqmp_power: zynqmp-power { 21 compatible = "xlnx,zynqmp-power"; 22 interrupts = <0 35 4>; 23 + }; 24 + }; 25 + }; 26 + 27 + Example with IPI mailbox method: 28 + 29 + firmware { 30 + zynqmp_firmware: zynqmp-firmware { 31 + compatible = "xlnx,zynqmp-firmware"; 32 + method = "smc"; 33 + 34 + zynqmp_power: zynqmp-power { 35 + compatible = "xlnx,zynqmp-power"; 36 + interrupt-parent = <&gic>; 37 + interrupts = <0 35 4>; 38 + mboxes = <&ipi_mailbox_pmu0 0>, 39 + <&ipi_mailbox_pmu0 1>; 40 + mbox-names = "tx", "rx"; 41 + }; 42 }; 43 };
+37
Documentation/devicetree/bindings/reset/brcm,bcm7216-pcie-sata-rescal.yaml
···
··· 1 + # SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause) 2 + # Copyright 2020 Broadcom 3 + %YAML 1.2 4 + --- 5 + $id: "http://devicetree.org/schemas/reset/brcm,bcm7216-pcie-sata-rescal.yaml#" 6 + $schema: "http://devicetree.org/meta-schemas/core.yaml#" 7 + 8 + title: BCM7216 RESCAL reset controller 9 + 10 + description: This document describes the BCM7216 RESCAL reset controller which is responsible for controlling the reset of the SATA and PCIe0/1 instances on BCM7216. 11 + 12 + maintainers: 13 + - Florian Fainelli <f.fainelli@gmail.com> 14 + - Jim Quinlan <jim2101024@gmail.com> 15 + 16 + properties: 17 + compatible: 18 + const: brcm,bcm7216-pcie-sata-rescal 19 + 20 + reg: 21 + maxItems: 1 22 + 23 + "#reset-cells": 24 + const: 0 25 + 26 + required: 27 + - compatible 28 + - reg 29 + - "#reset-cells" 30 + 31 + examples: 32 + - | 33 + reset-controller@8b2c800 { 34 + compatible = "brcm,bcm7216-pcie-sata-rescal"; 35 + reg = <0x8b2c800 0x10>; 36 + #reset-cells = <0>; 37 + };
+63
Documentation/devicetree/bindings/reset/intel,rcu-gw.yaml
···
··· 1 + # SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/reset/intel,rcu-gw.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: System Reset Controller on Intel Gateway SoCs 8 + 9 + maintainers: 10 + - Dilip Kota <eswara.kota@linux.intel.com> 11 + 12 + properties: 13 + compatible: 14 + enum: 15 + - intel,rcu-lgm 16 + - intel,rcu-xrx200 17 + 18 + reg: 19 + description: Reset controller registers. 20 + maxItems: 1 21 + 22 + intel,global-reset: 23 + description: Global reset register offset and bit offset. 24 + allOf: 25 + - $ref: /schemas/types.yaml#/definitions/uint32-array 26 + - maxItems: 2 27 + 28 + "#reset-cells": 29 + minimum: 2 30 + maximum: 3 31 + description: | 32 + First cell is the reset request register offset. 33 + Second cell is the bit offset in the reset request register. 34 + Third cell is the bit offset in the reset status register. 35 + For the LGM SoC the reset cell count is 2, as the bit offset in the 36 + reset request and reset status registers is the same; it is 37 + 3 for legacy SoCs, where the bit offsets differ. 38 + 39 + required: 40 + - compatible 41 + - reg 42 + - intel,global-reset 43 + - "#reset-cells" 44 + 45 + additionalProperties: false 46 + 47 + examples: 48 + - | 49 + rcu0: reset-controller@e0000000 { 50 + compatible = "intel,rcu-lgm"; 51 + reg = <0xe0000000 0x20000>; 52 + intel,global-reset = <0x10 30>; 53 + #reset-cells = <2>; 54 + }; 55 + 56 + pwm: pwm@e0d00000 { 57 + status = "disabled"; 58 + compatible = "intel,lgm-pwm"; 59 + reg = <0xe0d00000 0x30>; 60 + clocks = <&cgu0 1>; 61 + #pwm-cells = <2>; 62 + resets = <&rcu0 0x30 21>; 63 + };
+32
Documentation/devicetree/bindings/reset/nuvoton,npcm-reset.txt
···
··· 1 + Nuvoton NPCM Reset controller 2 + 3 + Required properties: 4 + - compatible : "nuvoton,npcm750-reset" for NPCM7XX BMC 5 + - reg : specifies physical base address and size of the register. 6 + - #reset-cells: must be set to 2 7 + 8 + Optional property: 9 + - nuvoton,sw-reset-number - Contains the software reset number to restart the SoC. 10 + NPCM7xx contains four software resets, represented by numbers 1 to 4. 11 + 12 + If 'nuvoton,sw-reset-number' is not specified, software reset is disabled. 13 + 14 + Example: 15 + rstc: rstc@f0801000 { 16 + compatible = "nuvoton,npcm750-reset"; 17 + reg = <0xf0801000 0x70>; 18 + #reset-cells = <2>; 19 + nuvoton,sw-reset-number = <2>; 20 + }; 21 + 22 + Specifying reset lines connected to NPCM7XX IP modules 23 + ====================================================== 24 + Example: 25 + 26 + spi0: spi@..... { 27 + ... 28 + resets = <&rstc NPCM7XX_RESET_IPSRST2 NPCM7XX_RESET_PSPI1>; 29 + ... 30 + }; 31 + 32 + The indexes can be found in <dt-bindings/reset/nuvoton,npcm7xx-reset.h>.
+6
Documentation/devicetree/bindings/soc/mediatek/scpsys.txt
··· 11 power/power-domain.yaml. It provides the power domains defined in 12 - include/dt-bindings/power/mt8173-power.h 13 - include/dt-bindings/power/mt6797-power.h 14 - include/dt-bindings/power/mt2701-power.h 15 - include/dt-bindings/power/mt2712-power.h 16 - include/dt-bindings/power/mt7622-power.h ··· 20 - compatible: Should be one of: 21 - "mediatek,mt2701-scpsys" 22 - "mediatek,mt2712-scpsys" 23 - "mediatek,mt6797-scpsys" 24 - "mediatek,mt7622-scpsys" 25 - "mediatek,mt7623-scpsys", "mediatek,mt2701-scpsys": For MT7623 SoC ··· 35 enabled before enabling certain power domains. 36 Required clocks for MT2701 or MT7623: "mm", "mfg", "ethif" 37 Required clocks for MT2712: "mm", "mfg", "venc", "jpgdec", "audio", "vdec" 38 Required clocks for MT6797: "mm", "mfg", "vdec" 39 Required clocks for MT7622 or MT7629: "hif_sel" 40 Required clocks for MT7623A: "ethif"
··· 11 power/power-domain.yaml. It provides the power domains defined in 12 - include/dt-bindings/power/mt8173-power.h 13 - include/dt-bindings/power/mt6797-power.h 14 + - include/dt-bindings/power/mt6765-power.h 15 - include/dt-bindings/power/mt2701-power.h 16 - include/dt-bindings/power/mt2712-power.h 17 - include/dt-bindings/power/mt7622-power.h ··· 19 - compatible: Should be one of: 20 - "mediatek,mt2701-scpsys" 21 - "mediatek,mt2712-scpsys" 22 + - "mediatek,mt6765-scpsys" 23 - "mediatek,mt6797-scpsys" 24 - "mediatek,mt7622-scpsys" 25 - "mediatek,mt7623-scpsys", "mediatek,mt2701-scpsys": For MT7623 SoC ··· 33 enabled before enabling certain power domains. 34 Required clocks for MT2701 or MT7623: "mm", "mfg", "ethif" 35 Required clocks for MT2712: "mm", "mfg", "venc", "jpgdec", "audio", "vdec" 36 + Required clocks for MT6765: MUX: "mm", "mfg" 37 + CG: "mm-0", "mm-1", "mm-2", "mm-3", "isp-0", 38 + "isp-1", "cam-0", "cam-1", "cam-2", 39 + "cam-3","cam-4" 40 Required clocks for MT6797: "mm", "mfg", "vdec" 41 Required clocks for MT7622 or MT7629: "hif_sel" 42 Required clocks for MT7623A: "ethif"
+5
MAINTAINERS
··· 3289 N: bcm2711 3290 N: bcm2835 3291 F: drivers/staging/vc04_services 3292 3293 BROADCOM BCM47XX MIPS ARCHITECTURE 3294 M: Hauke Mehrtens <hauke@hauke-m.de> ··· 3346 F: arch/arm/mm/cache-b15-rac.c 3347 F: arch/arm/include/asm/hardware/cache-b15-rac.h 3348 N: brcmstb 3349 3350 BROADCOM BMIPS CPUFREQ DRIVER 3351 M: Markus Mayer <mmayer@broadcom.com> ··· 16152 F: drivers/firmware/arm_scmi/ 16153 F: drivers/reset/reset-scmi.c 16154 F: include/linux/sc[mp]i_protocol.h 16155 16156 SYSTEM RESET/SHUTDOWN DRIVERS 16157 M: Sebastian Reichel <sre@kernel.org>
··· 3289 N: bcm2711 3290 N: bcm2835 3291 F: drivers/staging/vc04_services 3292 + F: Documentation/devicetree/bindings/pci/brcm,stb-pcie.yaml 3293 + F: drivers/pci/controller/pcie-brcmstb.c 3294 3295 BROADCOM BCM47XX MIPS ARCHITECTURE 3296 M: Hauke Mehrtens <hauke@hauke-m.de> ··· 3344 F: arch/arm/mm/cache-b15-rac.c 3345 F: arch/arm/include/asm/hardware/cache-b15-rac.h 3346 N: brcmstb 3347 + F: Documentation/devicetree/bindings/pci/brcm,stb-pcie.yaml 3348 + F: drivers/pci/controller/pcie-brcmstb.c 3349 3350 BROADCOM BMIPS CPUFREQ DRIVER 3351 M: Markus Mayer <mmayer@broadcom.com> ··· 16148 F: drivers/firmware/arm_scmi/ 16149 F: drivers/reset/reset-scmi.c 16150 F: include/linux/sc[mp]i_protocol.h 16151 + F: include/trace/events/scmi.h 16152 16153 SYSTEM RESET/SHUTDOWN DRIVERS 16154 M: Sebastian Reichel <sre@kernel.org>
+53 -4
arch/arm64/boot/dts/qcom/msm8916.dtsi
··· 102 reg = <0x0>; 103 next-level-cache = <&L2_0>; 104 enable-method = "psci"; 105 - cpu-idle-states = <&CPU_SLEEP_0>; 106 clocks = <&apcs>; 107 operating-points-v2 = <&cpu_opp_table>; 108 #cooling-cells = <2>; 109 }; 110 111 CPU1: cpu@1 { ··· 115 reg = <0x1>; 116 next-level-cache = <&L2_0>; 117 enable-method = "psci"; 118 - cpu-idle-states = <&CPU_SLEEP_0>; 119 clocks = <&apcs>; 120 operating-points-v2 = <&cpu_opp_table>; 121 #cooling-cells = <2>; 122 }; 123 124 CPU2: cpu@2 { ··· 128 reg = <0x2>; 129 next-level-cache = <&L2_0>; 130 enable-method = "psci"; 131 - cpu-idle-states = <&CPU_SLEEP_0>; 132 clocks = <&apcs>; 133 operating-points-v2 = <&cpu_opp_table>; 134 #cooling-cells = <2>; 135 }; 136 137 CPU3: cpu@3 { ··· 141 reg = <0x3>; 142 next-level-cache = <&L2_0>; 143 enable-method = "psci"; 144 - cpu-idle-states = <&CPU_SLEEP_0>; 145 clocks = <&apcs>; 146 operating-points-v2 = <&cpu_opp_table>; 147 #cooling-cells = <2>; 148 }; 149 150 L2_0: l2-cache { ··· 165 min-residency-us = <2000>; 166 local-timer-stop; 167 }; 168 }; 169 }; 170 171 psci { 172 compatible = "arm,psci-1.0"; 173 method = "smc"; 174 }; 175 176 pmu {
··· 102 reg = <0x0>; 103 next-level-cache = <&L2_0>; 104 enable-method = "psci"; 105 clocks = <&apcs>; 106 operating-points-v2 = <&cpu_opp_table>; 107 #cooling-cells = <2>; 108 + power-domains = <&CPU_PD0>; 109 + power-domain-names = "psci"; 110 }; 111 112 CPU1: cpu@1 { ··· 114 reg = <0x1>; 115 next-level-cache = <&L2_0>; 116 enable-method = "psci"; 117 clocks = <&apcs>; 118 operating-points-v2 = <&cpu_opp_table>; 119 #cooling-cells = <2>; 120 + power-domains = <&CPU_PD1>; 121 + power-domain-names = "psci"; 122 }; 123 124 CPU2: cpu@2 { ··· 126 reg = <0x2>; 127 next-level-cache = <&L2_0>; 128 enable-method = "psci"; 129 clocks = <&apcs>; 130 operating-points-v2 = <&cpu_opp_table>; 131 #cooling-cells = <2>; 132 + power-domains = <&CPU_PD2>; 133 + power-domain-names = "psci"; 134 }; 135 136 CPU3: cpu@3 { ··· 138 reg = <0x3>; 139 next-level-cache = <&L2_0>; 140 enable-method = "psci"; 141 clocks = <&apcs>; 142 operating-points-v2 = <&cpu_opp_table>; 143 #cooling-cells = <2>; 144 + power-domains = <&CPU_PD3>; 145 + power-domain-names = "psci"; 146 }; 147 148 L2_0: l2-cache { ··· 161 min-residency-us = <2000>; 162 local-timer-stop; 163 }; 164 + 165 + CLUSTER_RET: cluster-retention { 166 + compatible = "domain-idle-state"; 167 + arm,psci-suspend-param = <0x41000012>; 168 + entry-latency-us = <500>; 169 + exit-latency-us = <500>; 170 + min-residency-us = <2000>; 171 + }; 172 + 173 + CLUSTER_PWRDN: cluster-gdhs { 174 + compatible = "domain-idle-state"; 175 + arm,psci-suspend-param = <0x41000032>; 176 + entry-latency-us = <2000>; 177 + exit-latency-us = <2000>; 178 + min-residency-us = <6000>; 179 + }; 180 }; 181 }; 182 183 psci { 184 compatible = "arm,psci-1.0"; 185 method = "smc"; 186 + 187 + CPU_PD0: cpu-pd0 { 188 + #power-domain-cells = <0>; 189 + power-domains = <&CLUSTER_PD>; 190 + domain-idle-states = <&CPU_SLEEP_0>; 191 + }; 192 + 193 + CPU_PD1: cpu-pd1 { 194 + #power-domain-cells = <0>; 195 + power-domains = <&CLUSTER_PD>; 196 + domain-idle-states = <&CPU_SLEEP_0>; 197 + }; 198 + 199 + CPU_PD2: cpu-pd2 { 200 + #power-domain-cells = <0>; 201 + power-domains = <&CLUSTER_PD>; 202 + domain-idle-states = <&CPU_SLEEP_0>; 203 + }; 204 + 205 + CPU_PD3: cpu-pd3 { 206 + #power-domain-cells = <0>; 207 + power-domains = <&CLUSTER_PD>; 208 + domain-idle-states = <&CPU_SLEEP_0>; 209 + }; 210 + 211 + CLUSTER_PD: cluster-pd { 212 + #power-domain-cells = <0>; 213 + domain-idle-states = <&CLUSTER_RET>, <&CLUSTER_PWRDN>; 214 + }; 215 }; 216 217 pmu {
+1 -171
arch/powerpc/include/asm/cpm.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 */ 2 - #ifndef __CPM_H 3 - #define __CPM_H 4 - 5 - #include <linux/compiler.h> 6 - #include <linux/types.h> 7 - #include <linux/errno.h> 8 - #include <linux/of.h> 9 - #include <soc/fsl/qe/qe.h> 10 - 11 - /* 12 - * SPI Parameter RAM common to QE and CPM. 13 - */ 14 - struct spi_pram { 15 - __be16 rbase; /* Rx Buffer descriptor base address */ 16 - __be16 tbase; /* Tx Buffer descriptor base address */ 17 - u8 rfcr; /* Rx function code */ 18 - u8 tfcr; /* Tx function code */ 19 - __be16 mrblr; /* Max receive buffer length */ 20 - __be32 rstate; /* Internal */ 21 - __be32 rdp; /* Internal */ 22 - __be16 rbptr; /* Internal */ 23 - __be16 rbc; /* Internal */ 24 - __be32 rxtmp; /* Internal */ 25 - __be32 tstate; /* Internal */ 26 - __be32 tdp; /* Internal */ 27 - __be16 tbptr; /* Internal */ 28 - __be16 tbc; /* Internal */ 29 - __be32 txtmp; /* Internal */ 30 - __be32 res; /* Tx temp. */ 31 - __be16 rpbase; /* Relocation pointer (CPM1 only) */ 32 - __be16 res1; /* Reserved */ 33 - }; 34 - 35 - /* 36 - * USB Controller pram common to QE and CPM. 37 - */ 38 - struct usb_ctlr { 39 - u8 usb_usmod; 40 - u8 usb_usadr; 41 - u8 usb_uscom; 42 - u8 res1[1]; 43 - __be16 usb_usep[4]; 44 - u8 res2[4]; 45 - __be16 usb_usber; 46 - u8 res3[2]; 47 - __be16 usb_usbmr; 48 - u8 res4[1]; 49 - u8 usb_usbs; 50 - /* Fields down below are QE-only */ 51 - __be16 usb_ussft; 52 - u8 res5[2]; 53 - __be16 usb_usfrn; 54 - u8 res6[0x22]; 55 - } __attribute__ ((packed)); 56 - 57 - /* 58 - * Function code bits, usually generic to devices. 59 - */ 60 - #ifdef CONFIG_CPM1 61 - #define CPMFCR_GBL ((u_char)0x00) /* Flag doesn't exist in CPM1 */ 62 - #define CPMFCR_TC2 ((u_char)0x00) /* Flag doesn't exist in CPM1 */ 63 - #define CPMFCR_DTB ((u_char)0x00) /* Flag doesn't exist in CPM1 */ 64 - #define CPMFCR_BDB ((u_char)0x00) /* Flag doesn't exist in CPM1 */ 65 - #else 66 - #define CPMFCR_GBL ((u_char)0x20) /* Set memory snooping */ 67 - #define CPMFCR_TC2 ((u_char)0x04) /* Transfer code 2 value */ 68 - #define CPMFCR_DTB ((u_char)0x02) /* Use local bus for data when set */ 69 - #define CPMFCR_BDB ((u_char)0x01) /* Use local bus for BD when set */ 70 - #endif 71 - #define CPMFCR_EB ((u_char)0x10) /* Set big endian byte order */ 72 - 73 - /* Opcodes common to CPM1 and CPM2 74 - */ 75 - #define CPM_CR_INIT_TRX ((ushort)0x0000) 76 - #define CPM_CR_INIT_RX ((ushort)0x0001) 77 - #define CPM_CR_INIT_TX ((ushort)0x0002) 78 - #define CPM_CR_HUNT_MODE ((ushort)0x0003) 79 - #define CPM_CR_STOP_TX ((ushort)0x0004) 80 - #define CPM_CR_GRA_STOP_TX ((ushort)0x0005) 81 - #define CPM_CR_RESTART_TX ((ushort)0x0006) 82 - #define CPM_CR_CLOSE_RX_BD ((ushort)0x0007) 83 - #define CPM_CR_SET_GADDR ((ushort)0x0008) 84 - #define CPM_CR_SET_TIMER ((ushort)0x0008) 85 - #define CPM_CR_STOP_IDMA ((ushort)0x000b) 86 - 87 - /* Buffer descriptors used by many of the CPM protocols. */ 88 - typedef struct cpm_buf_desc { 89 - ushort cbd_sc; /* Status and Control */ 90 - ushort cbd_datlen; /* Data length in buffer */ 91 - uint cbd_bufaddr; /* Buffer address in host memory */ 92 - } cbd_t; 93 - 94 - /* Buffer descriptor control/status used by serial 95 - */ 96 - 97 - #define BD_SC_EMPTY (0x8000) /* Receive is empty */ 98 - #define BD_SC_READY (0x8000) /* Transmit is ready */ 99 - #define BD_SC_WRAP (0x2000) /* Last buffer descriptor */ 100 - #define BD_SC_INTRPT (0x1000) /* Interrupt on change */ 101 - #define BD_SC_LAST (0x0800) /* Last buffer in frame */ 102 - #define BD_SC_TC (0x0400) /* Transmit CRC */ 103 - #define BD_SC_CM (0x0200) /* Continuous mode */ 104 - #define BD_SC_ID (0x0100) /* Rec'd too many idles */ 105 - #define BD_SC_P (0x0100) /* xmt preamble */ 106 - #define BD_SC_BR (0x0020) /* Break received */ 107 - #define BD_SC_FR (0x0010) /* Framing error */ 108 - #define BD_SC_PR (0x0008) /* Parity error */ 109 - #define BD_SC_NAK (0x0004) /* NAK - did not respond */ 110 - #define BD_SC_OV (0x0002) /* Overrun */ 111 - #define BD_SC_UN (0x0002) /* Underrun */ 112 - #define BD_SC_CD (0x0001) /* */ 113 - #define BD_SC_CL (0x0001) /* Collision */ 114 - 115 - /* Buffer descriptor control/status used by Ethernet receive. 116 - * Common to SCC and FCC. 117 - */ 118 - #define BD_ENET_RX_EMPTY (0x8000) 119 - #define BD_ENET_RX_WRAP (0x2000) 120 - #define BD_ENET_RX_INTR (0x1000) 121 - #define BD_ENET_RX_LAST (0x0800) 122 - #define BD_ENET_RX_FIRST (0x0400) 123 - #define BD_ENET_RX_MISS (0x0100) 124 - #define BD_ENET_RX_BC (0x0080) /* FCC Only */ 125 - #define BD_ENET_RX_MC (0x0040) /* FCC Only */ 126 - #define BD_ENET_RX_LG (0x0020) 127 - #define BD_ENET_RX_NO (0x0010) 128 - #define BD_ENET_RX_SH (0x0008) 129 - #define BD_ENET_RX_CR (0x0004) 130 - #define BD_ENET_RX_OV (0x0002) 131 - #define BD_ENET_RX_CL (0x0001) 132 - #define BD_ENET_RX_STATS (0x01ff) /* All status bits */ 133 - 134 - /* Buffer descriptor control/status used by Ethernet transmit. 135 - * Common to SCC and FCC. 136 - */ 137 - #define BD_ENET_TX_READY (0x8000) 138 - #define BD_ENET_TX_PAD (0x4000) 139 - #define BD_ENET_TX_WRAP (0x2000) 140 - #define BD_ENET_TX_INTR (0x1000) 141 - #define BD_ENET_TX_LAST (0x0800) 142 - #define BD_ENET_TX_TC (0x0400) 143 - #define BD_ENET_TX_DEF (0x0200) 144 - #define BD_ENET_TX_HB (0x0100) 145 - #define BD_ENET_TX_LC (0x0080) 146 - #define BD_ENET_TX_RL (0x0040) 147 - #define BD_ENET_TX_RCMASK (0x003c) 148 - #define BD_ENET_TX_UN (0x0002) 149 - #define BD_ENET_TX_CSL (0x0001) 150 - #define BD_ENET_TX_STATS (0x03ff) /* All status bits */ 151 - 152 - /* Buffer descriptor control/status used by Transparent mode SCC. 153 - */ 154 - #define BD_SCC_TX_LAST (0x0800) 155 - 156 - /* Buffer descriptor control/status used by I2C. 157 - */ 158 - #define BD_I2C_START (0x0400) 159 - 160 - #ifdef CONFIG_CPM 161 - int cpm_command(u32 command, u8 opcode); 162 - #else 163 - static inline int cpm_command(u32 command, u8 opcode) 164 - { 165 - return -ENOSYS; 166 - } 167 - #endif /* CONFIG_CPM */ 168 - 169 - int cpm2_gpiochip_add32(struct device *dev); 170 - 171 - #endif
··· 1 + #include <soc/fsl/cpm.h>
+1 -2
arch/powerpc/platforms/83xx/km83xx.c
··· 34 #include <sysdev/fsl_soc.h> 35 #include <sysdev/fsl_pci.h> 36 #include <soc/fsl/qe/qe.h> 37 - #include <soc/fsl/qe/qe_ic.h> 38 39 #include "mpc83xx.h" 40 ··· 177 .name = "mpc83xx-km-platform", 178 .probe = mpc83xx_km_probe, 179 .setup_arch = mpc83xx_km_setup_arch, 180 - .init_IRQ = mpc83xx_ipic_and_qe_init_IRQ, 181 .get_irq = ipic_get_irq, 182 .restart = mpc83xx_restart, 183 .time_init = mpc83xx_time_init,
··· 34 #include <sysdev/fsl_soc.h> 35 #include <sysdev/fsl_pci.h> 36 #include <soc/fsl/qe/qe.h> 37 38 #include "mpc83xx.h" 39 ··· 178 .name = "mpc83xx-km-platform", 179 .probe = mpc83xx_km_probe, 180 .setup_arch = mpc83xx_km_setup_arch, 181 + .init_IRQ = mpc83xx_ipic_init_IRQ, 182 .get_irq = ipic_get_irq, 183 .restart = mpc83xx_restart, 184 .time_init = mpc83xx_time_init,
-23
arch/powerpc/platforms/83xx/misc.c
··· 14 #include <asm/io.h> 15 #include <asm/hw_irq.h> 16 #include <asm/ipic.h> 17 - #include <soc/fsl/qe/qe_ic.h> 18 #include <sysdev/fsl_soc.h> 19 #include <sysdev/fsl_pci.h> 20 ··· 89 */ 90 ipic_set_default_priority(); 91 } 92 - 93 - #ifdef CONFIG_QUICC_ENGINE 94 - void __init mpc83xx_qe_init_IRQ(void) 95 - { 96 - struct device_node *np; 97 - 98 - np = of_find_compatible_node(NULL, NULL, "fsl,qe-ic"); 99 - if (!np) { 100 - np = of_find_node_by_type(NULL, "qeic"); 101 - if (!np) 102 - return; 103 - } 104 - qe_ic_init(np, 0, qe_ic_cascade_low_ipic, qe_ic_cascade_high_ipic); 105 - of_node_put(np); 106 - } 107 - 108 - void __init mpc83xx_ipic_and_qe_init_IRQ(void) 109 - { 110 - mpc83xx_ipic_init_IRQ(); 111 - mpc83xx_qe_init_IRQ(); 112 - } 113 - #endif /* CONFIG_QUICC_ENGINE */ 114 115 static const struct of_device_id of_bus_ids[] __initconst = { 116 { .type = "soc", },
··· 14 #include <asm/io.h> 15 #include <asm/hw_irq.h> 16 #include <asm/ipic.h> 17 #include <sysdev/fsl_soc.h> 18 #include <sysdev/fsl_pci.h> 19 ··· 90 */ 91 ipic_set_default_priority(); 92 } 93 94 static const struct of_device_id of_bus_ids[] __initconst = { 95 { .type = "soc", },
+1 -2
arch/powerpc/platforms/83xx/mpc832x_mds.c
··· 33 #include <sysdev/fsl_soc.h> 34 #include <sysdev/fsl_pci.h> 35 #include <soc/fsl/qe/qe.h> 36 - #include <soc/fsl/qe/qe_ic.h> 37 38 #include "mpc83xx.h" 39 ··· 101 .name = "MPC832x MDS", 102 .probe = mpc832x_sys_probe, 103 .setup_arch = mpc832x_sys_setup_arch, 104 - .init_IRQ = mpc83xx_ipic_and_qe_init_IRQ, 105 .get_irq = ipic_get_irq, 106 .restart = mpc83xx_restart, 107 .time_init = mpc83xx_time_init,
··· 33 #include <sysdev/fsl_soc.h> 34 #include <sysdev/fsl_pci.h> 35 #include <soc/fsl/qe/qe.h> 36 37 #include "mpc83xx.h" 38 ··· 102 .name = "MPC832x MDS", 103 .probe = mpc832x_sys_probe, 104 .setup_arch = mpc832x_sys_setup_arch, 105 + .init_IRQ = mpc83xx_ipic_init_IRQ, 106 .get_irq = ipic_get_irq, 107 .restart = mpc83xx_restart, 108 .time_init = mpc83xx_time_init,
+1 -2
arch/powerpc/platforms/83xx/mpc832x_rdb.c
··· 22 #include <asm/ipic.h> 23 #include <asm/udbg.h> 24 #include <soc/fsl/qe/qe.h> 25 - #include <soc/fsl/qe/qe_ic.h> 26 #include <sysdev/fsl_soc.h> 27 #include <sysdev/fsl_pci.h> 28 ··· 219 .name = "MPC832x RDB", 220 .probe = mpc832x_rdb_probe, 221 .setup_arch = mpc832x_rdb_setup_arch, 222 - .init_IRQ = mpc83xx_ipic_and_qe_init_IRQ, 223 .get_irq = ipic_get_irq, 224 .restart = mpc83xx_restart, 225 .time_init = mpc83xx_time_init,
··· 22 #include <asm/ipic.h> 23 #include <asm/udbg.h> 24 #include <soc/fsl/qe/qe.h> 25 #include <sysdev/fsl_soc.h> 26 #include <sysdev/fsl_pci.h> 27 ··· 220 .name = "MPC832x RDB", 221 .probe = mpc832x_rdb_probe, 222 .setup_arch = mpc832x_rdb_setup_arch, 223 + .init_IRQ = mpc83xx_ipic_init_IRQ, 224 .get_irq = ipic_get_irq, 225 .restart = mpc83xx_restart, 226 .time_init = mpc83xx_time_init,
+1 -2
arch/powerpc/platforms/83xx/mpc836x_mds.c
··· 40 #include <sysdev/fsl_soc.h> 41 #include <sysdev/fsl_pci.h> 42 #include <soc/fsl/qe/qe.h> 43 - #include <soc/fsl/qe/qe_ic.h> 44 45 #include "mpc83xx.h" 46 ··· 201 .name = "MPC836x MDS", 202 .probe = mpc836x_mds_probe, 203 .setup_arch = mpc836x_mds_setup_arch, 204 - .init_IRQ = mpc83xx_ipic_and_qe_init_IRQ, 205 .get_irq = ipic_get_irq, 206 .restart = mpc83xx_restart, 207 .time_init = mpc83xx_time_init,
··· 40 #include <sysdev/fsl_soc.h> 41 #include <sysdev/fsl_pci.h> 42 #include <soc/fsl/qe/qe.h> 43 44 #include "mpc83xx.h" 45 ··· 202 .name = "MPC836x MDS", 203 .probe = mpc836x_mds_probe, 204 .setup_arch = mpc836x_mds_setup_arch, 205 + .init_IRQ = mpc83xx_ipic_init_IRQ, 206 .get_irq = ipic_get_irq, 207 .restart = mpc83xx_restart, 208 .time_init = mpc83xx_time_init,
+1 -2
arch/powerpc/platforms/83xx/mpc836x_rdk.c
··· 17 #include <asm/ipic.h> 18 #include <asm/udbg.h> 19 #include <soc/fsl/qe/qe.h> 20 - #include <soc/fsl/qe/qe_ic.h> 21 #include <sysdev/fsl_soc.h> 22 #include <sysdev/fsl_pci.h> 23 ··· 41 .name = "MPC836x RDK", 42 .probe = mpc836x_rdk_probe, 43 .setup_arch = mpc836x_rdk_setup_arch, 44 - .init_IRQ = mpc83xx_ipic_and_qe_init_IRQ, 45 .get_irq = ipic_get_irq, 46 .restart = mpc83xx_restart, 47 .time_init = mpc83xx_time_init,
··· 17 #include <asm/ipic.h> 18 #include <asm/udbg.h> 19 #include <soc/fsl/qe/qe.h> 20 #include <sysdev/fsl_soc.h> 21 #include <sysdev/fsl_pci.h> 22 ··· 42 .name = "MPC836x RDK", 43 .probe = mpc836x_rdk_probe, 44 .setup_arch = mpc836x_rdk_setup_arch, 45 + .init_IRQ = mpc83xx_ipic_init_IRQ, 46 .get_irq = ipic_get_irq, 47 .restart = mpc83xx_restart, 48 .time_init = mpc83xx_time_init,
-7
arch/powerpc/platforms/83xx/mpc83xx.h
··· 72 extern int mpc834x_usb_cfg(void); 73 extern int mpc831x_usb_cfg(void); 74 extern void mpc83xx_ipic_init_IRQ(void); 75 - #ifdef CONFIG_QUICC_ENGINE 76 - extern void mpc83xx_qe_init_IRQ(void); 77 - extern void mpc83xx_ipic_and_qe_init_IRQ(void); 78 - #else 79 - static inline void __init mpc83xx_qe_init_IRQ(void) {} 80 - #define mpc83xx_ipic_and_qe_init_IRQ mpc83xx_ipic_init_IRQ 81 - #endif /* CONFIG_QUICC_ENGINE */ 82 83 #ifdef CONFIG_PCI 84 extern void mpc83xx_setup_pci(void);
··· 72 extern int mpc834x_usb_cfg(void); 73 extern int mpc831x_usb_cfg(void); 74 extern void mpc83xx_ipic_init_IRQ(void); 75 76 #ifdef CONFIG_PCI 77 extern void mpc83xx_setup_pci(void);
-10
arch/powerpc/platforms/85xx/corenet_generic.c
··· 24 #include <asm/mpic.h> 25 #include <asm/ehv_pic.h> 26 #include <asm/swiotlb.h> 27 - #include <soc/fsl/qe/qe_ic.h> 28 29 #include <linux/of_platform.h> 30 #include <sysdev/fsl_soc.h> ··· 37 unsigned int flags = MPIC_BIG_ENDIAN | MPIC_SINGLE_DEST_CPU | 38 MPIC_NO_RESET; 39 40 - struct device_node *np; 41 - 42 if (ppc_md.get_irq == mpic_get_coreint_irq) 43 flags |= MPIC_ENABLE_COREINT; 44 ··· 44 BUG_ON(mpic == NULL); 45 46 mpic_init(mpic); 47 - 48 - np = of_find_compatible_node(NULL, NULL, "fsl,qe-ic"); 49 - if (np) { 50 - qe_ic_init(np, 0, qe_ic_cascade_low_mpic, 51 - qe_ic_cascade_high_mpic); 52 - of_node_put(np); 53 - } 54 } 55 56 /*
··· 24 #include <asm/mpic.h> 25 #include <asm/ehv_pic.h> 26 #include <asm/swiotlb.h> 27 28 #include <linux/of_platform.h> 29 #include <sysdev/fsl_soc.h> ··· 38 unsigned int flags = MPIC_BIG_ENDIAN | MPIC_SINGLE_DEST_CPU | 39 MPIC_NO_RESET; 40 41 if (ppc_md.get_irq == mpic_get_coreint_irq) 42 flags |= MPIC_ENABLE_COREINT; 43 ··· 47 BUG_ON(mpic == NULL); 48 49 mpic_init(mpic); 50 } 51 52 /*
-27
arch/powerpc/platforms/85xx/mpc85xx_mds.c
··· 44 #include <sysdev/fsl_soc.h> 45 #include <sysdev/fsl_pci.h> 46 #include <soc/fsl/qe/qe.h> 47 - #include <soc/fsl/qe/qe_ic.h> 48 #include <asm/mpic.h> 49 #include <asm/swiotlb.h> 50 #include "smp.h" ··· 267 } 268 } 269 270 - static void __init mpc85xx_mds_qeic_init(void) 271 - { 272 - struct device_node *np; 273 - 274 - np = of_find_compatible_node(NULL, NULL, "fsl,qe"); 275 - if (!of_device_is_available(np)) { 276 - of_node_put(np); 277 - return; 278 - } 279 - 280 - np = of_find_compatible_node(NULL, NULL, "fsl,qe-ic"); 281 - if (!np) { 282 - np = of_find_node_by_type(NULL, "qeic"); 283 - if (!np) 284 - return; 285 - } 286 - 287 - if (machine_is(p1021_mds)) 288 - qe_ic_init(np, 0, qe_ic_cascade_low_mpic, 289 - qe_ic_cascade_high_mpic); 290 - else 291 - qe_ic_init(np, 0, qe_ic_cascade_muxed_mpic, NULL); 292 - of_node_put(np); 293 - } 294 #else 295 static void __init mpc85xx_mds_qe_init(void) { } 296 - static void __init mpc85xx_mds_qeic_init(void) { } 297 #endif /* CONFIG_QUICC_ENGINE */ 298 299 static void __init mpc85xx_mds_setup_arch(void) ··· 338 BUG_ON(mpic == NULL); 339 340 mpic_init(mpic); 341 - mpc85xx_mds_qeic_init(); 342 } 343 344 static int __init mpc85xx_mds_probe(void)
··· 44 #include <sysdev/fsl_soc.h> 45 #include <sysdev/fsl_pci.h> 46 #include <soc/fsl/qe/qe.h> 47 #include <asm/mpic.h> 48 #include <asm/swiotlb.h> 49 #include "smp.h" ··· 268 } 269 } 270 271 #else 272 static void __init mpc85xx_mds_qe_init(void) { } 273 #endif /* CONFIG_QUICC_ENGINE */ 274 275 static void __init mpc85xx_mds_setup_arch(void) ··· 364 BUG_ON(mpic == NULL); 365 366 mpic_init(mpic); 367 } 368 369 static int __init mpc85xx_mds_probe(void)
-17
arch/powerpc/platforms/85xx/mpc85xx_rdb.c
··· 23 #include <asm/udbg.h> 24 #include <asm/mpic.h> 25 #include <soc/fsl/qe/qe.h> 26 - #include <soc/fsl/qe/qe_ic.h> 27 28 #include <sysdev/fsl_soc.h> 29 #include <sysdev/fsl_pci.h> ··· 43 { 44 struct mpic *mpic; 45 46 - #ifdef CONFIG_QUICC_ENGINE 47 - struct device_node *np; 48 - #endif 49 - 50 if (of_machine_is_compatible("fsl,MPC85XXRDB-CAMP")) { 51 mpic = mpic_alloc(NULL, 0, MPIC_NO_RESET | 52 MPIC_BIG_ENDIAN | ··· 57 58 BUG_ON(mpic == NULL); 59 mpic_init(mpic); 60 - 61 - #ifdef CONFIG_QUICC_ENGINE 62 - np = of_find_compatible_node(NULL, NULL, "fsl,qe-ic"); 63 - if (np) { 64 - qe_ic_init(np, 0, qe_ic_cascade_low_mpic, 65 - qe_ic_cascade_high_mpic); 66 - of_node_put(np); 67 - 68 - } else 69 - pr_err("%s: Could not find qe-ic node\n", __func__); 70 - #endif 71 - 72 } 73 74 /*
··· 23 #include <asm/udbg.h> 24 #include <asm/mpic.h> 25 #include <soc/fsl/qe/qe.h> 26 27 #include <sysdev/fsl_soc.h> 28 #include <sysdev/fsl_pci.h> ··· 44 { 45 struct mpic *mpic; 46 47 if (of_machine_is_compatible("fsl,MPC85XXRDB-CAMP")) { 48 mpic = mpic_alloc(NULL, 0, MPIC_NO_RESET | 49 MPIC_BIG_ENDIAN | ··· 62 63 BUG_ON(mpic == NULL); 64 mpic_init(mpic); 65 } 66 67 /*
-15
arch/powerpc/platforms/85xx/twr_p102x.c
··· 19 #include <asm/udbg.h> 20 #include <asm/mpic.h> 21 #include <soc/fsl/qe/qe.h> 22 - #include <soc/fsl/qe/qe_ic.h> 23 24 #include <sysdev/fsl_soc.h> 25 #include <sysdev/fsl_pci.h> ··· 30 { 31 struct mpic *mpic; 32 33 - #ifdef CONFIG_QUICC_ENGINE 34 - struct device_node *np; 35 - #endif 36 - 37 mpic = mpic_alloc(NULL, 0, MPIC_BIG_ENDIAN | 38 MPIC_SINGLE_DEST_CPU, 39 0, 256, " OpenPIC "); 40 41 BUG_ON(mpic == NULL); 42 mpic_init(mpic); 43 - 44 - #ifdef CONFIG_QUICC_ENGINE 45 - np = of_find_compatible_node(NULL, NULL, "fsl,qe-ic"); 46 - if (np) { 47 - qe_ic_init(np, 0, qe_ic_cascade_low_mpic, 48 - qe_ic_cascade_high_mpic); 49 - of_node_put(np); 50 - } else 51 - pr_err("Could not find qe-ic node\n"); 52 - #endif 53 } 54 55 /* ************************************************************************
··· 19 #include <asm/udbg.h> 20 #include <asm/mpic.h> 21 #include <soc/fsl/qe/qe.h> 22 23 #include <sysdev/fsl_soc.h> 24 #include <sysdev/fsl_pci.h> ··· 31 { 32 struct mpic *mpic; 33 34 mpic = mpic_alloc(NULL, 0, MPIC_BIG_ENDIAN | 35 MPIC_SINGLE_DEST_CPU, 36 0, 256, " OpenPIC "); 37 38 BUG_ON(mpic == NULL); 39 mpic_init(mpic); 40 } 41 42 /* ************************************************************************
+38
drivers/base/power/domain.c
··· 2303 EXPORT_SYMBOL_GPL(of_genpd_add_subdomain); 2304 2305 /** 2306 * of_genpd_remove_last - Remove the last PM domain registered for a provider 2307 * @provider: Pointer to device structure associated with provider 2308 *
··· 2303 EXPORT_SYMBOL_GPL(of_genpd_add_subdomain); 2304 2305 /** 2306 + * of_genpd_remove_subdomain - Remove a subdomain from an I/O PM domain. 2307 + * @parent_spec: OF phandle args to use for parent PM domain look-up 2308 + * @subdomain_spec: OF phandle args to use for subdomain look-up 2309 + * 2310 + * Looks-up a parent PM domain and subdomain based upon phandle args 2311 + * provided and removes the subdomain from the parent PM domain. Returns a 2312 + * negative error code on failure. 2313 + */ 2314 + int of_genpd_remove_subdomain(struct of_phandle_args *parent_spec, 2315 + struct of_phandle_args *subdomain_spec) 2316 + { 2317 + struct generic_pm_domain *parent, *subdomain; 2318 + int ret; 2319 + 2320 + mutex_lock(&gpd_list_lock); 2321 + 2322 + parent = genpd_get_from_provider(parent_spec); 2323 + if (IS_ERR(parent)) { 2324 + ret = PTR_ERR(parent); 2325 + goto out; 2326 + } 2327 + 2328 + subdomain = genpd_get_from_provider(subdomain_spec); 2329 + if (IS_ERR(subdomain)) { 2330 + ret = PTR_ERR(subdomain); 2331 + goto out; 2332 + } 2333 + 2334 + ret = pm_genpd_remove_subdomain(parent, subdomain); 2335 + 2336 + out: 2337 + mutex_unlock(&gpd_list_lock); 2338 + 2339 + return ret; 2340 + } 2341 + EXPORT_SYMBOL_GPL(of_genpd_remove_subdomain); 2342 + 2343 + /** 2344 * of_genpd_remove_last - Remove the last PM domain registered for a provider 2345 * @provider: Pointer to device structure associated with provider 2346 *
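For orientation, a minimal sketch of a caller of the new of_genpd_remove_subdomain() helper (illustrative only, mirroring the pattern used by cpuidle-psci-domain.c further down): the child node itself provides a zero-cell PM domain, and its parent is resolved through the node's own power-domains phandle.

	/* Hypothetical caller; "node" is a PM domain provider child node. */
	static int example_unlink_subdomain(struct device_node *node)
	{
		struct of_phandle_args child, parent;
		int ret;

		/* Resolve the parent domain from the power-domains phandle. */
		ret = of_parse_phandle_with_args(node, "power-domains",
						 "#power-domain-cells", 0, &parent);
		if (ret)
			return ret;

		/* The node itself names the subdomain; no specifier cells. */
		child.np = node;
		child.args_count = 0;

		ret = of_genpd_remove_subdomain(&parent, &child);
		of_node_put(parent.np);
		return ret;
	}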
-1
drivers/bus/Kconfig
··· 139 tristate "Tegra ACONNECT Bus Driver" 140 depends on ARCH_TEGRA_210_SOC 141 depends on OF && PM 142 - select PM_CLK 143 help 144 Driver for the Tegra ACONNECT bus which is used to interface with 145 the devices inside the Audio Processing Engine (APE) for Tegra210.
··· 139 tristate "Tegra ACONNECT Bus Driver" 140 depends on ARCH_TEGRA_210_SOC 141 depends on OF && PM 142 help 143 Driver for the Tegra ACONNECT bus which is used to interface with 144 the devices inside the Audio Processing Engine (APE) for Tegra210.
+1 -2
drivers/bus/moxtet.c
··· 102 return 0; 103 } 104 105 - struct bus_type moxtet_bus_type = { 106 .name = "moxtet", 107 .dev_groups = moxtet_dev_groups, 108 .match = moxtet_match, 109 }; 110 - EXPORT_SYMBOL_GPL(moxtet_bus_type); 111 112 int __moxtet_register_driver(struct module *owner, 113 struct moxtet_driver *mdrv)
··· 102 return 0; 103 } 104 105 + static struct bus_type moxtet_bus_type = { 106 .name = "moxtet", 107 .dev_groups = moxtet_dev_groups, 108 .match = moxtet_match, 109 }; 110 111 int __moxtet_register_driver(struct module *owner, 112 struct moxtet_driver *mdrv)
+9 -9
drivers/bus/ti-sysc.c
··· 479 { 480 struct ti_sysc_platform_data *pdata; 481 482 - if (ddata->legacy_mode) 483 return; 484 485 pdata = dev_get_platdata(ddata->dev); ··· 491 { 492 struct ti_sysc_platform_data *pdata; 493 494 - if (ddata->legacy_mode) 495 return; 496 497 pdata = dev_get_platdata(ddata->dev); ··· 509 { 510 ddata->rsts = 511 devm_reset_control_get_optional_shared(ddata->dev, "rstctrl"); 512 - if (IS_ERR(ddata->rsts)) 513 - return PTR_ERR(ddata->rsts); 514 515 - return 0; 516 } 517 518 /** ··· 1214 /* These drivers need to be fixed to not use pm_runtime_irq_safe() */ 1215 SYSC_QUIRK("gpio", 0, 0, 0x10, 0x114, 0x50600801, 0xffff00ff, 1216 SYSC_QUIRK_LEGACY_IDLE | SYSC_QUIRK_OPT_CLKS_IN_RESET), 1217 - SYSC_QUIRK("mmu", 0, 0, 0x10, 0x14, 0x00000020, 0xffffffff, 1218 - SYSC_QUIRK_LEGACY_IDLE), 1219 - SYSC_QUIRK("mmu", 0, 0, 0x10, 0x14, 0x00000030, 0xffffffff, 1220 - SYSC_QUIRK_LEGACY_IDLE), 1221 SYSC_QUIRK("sham", 0, 0x100, 0x110, 0x114, 0x40000c03, 0xffffffff, 1222 SYSC_QUIRK_LEGACY_IDLE), 1223 SYSC_QUIRK("smartreflex", 0, -1, 0x24, -1, 0x00000000, 0xffffffff, ··· 1245 /* Quirks that need to be set based on detected module */ 1246 SYSC_QUIRK("aess", 0, 0, 0x10, -1, 0x40000000, 0xffffffff, 1247 SYSC_MODULE_QUIRK_AESS), 1248 SYSC_QUIRK("hdq1w", 0, 0, 0x14, 0x18, 0x00000006, 0xffffffff, 1249 SYSC_MODULE_QUIRK_HDQ1W), 1250 SYSC_QUIRK("hdq1w", 0, 0, 0x14, 0x18, 0x0000000a, 0xffffffff,
··· 479 { 480 struct ti_sysc_platform_data *pdata; 481 482 + if (ddata->legacy_mode || (ddata->cfg.quirks & SYSC_QUIRK_CLKDM_NOAUTO)) 483 return; 484 485 pdata = dev_get_platdata(ddata->dev); ··· 491 { 492 struct ti_sysc_platform_data *pdata; 493 494 + if (ddata->legacy_mode || (ddata->cfg.quirks & SYSC_QUIRK_CLKDM_NOAUTO)) 495 return; 496 497 pdata = dev_get_platdata(ddata->dev); ··· 509 { 510 ddata->rsts = 511 devm_reset_control_get_optional_shared(ddata->dev, "rstctrl"); 512 513 + return PTR_ERR_OR_ZERO(ddata->rsts); 514 } 515 516 /** ··· 1216 /* These drivers need to be fixed to not use pm_runtime_irq_safe() */ 1217 SYSC_QUIRK("gpio", 0, 0, 0x10, 0x114, 0x50600801, 0xffff00ff, 1218 SYSC_QUIRK_LEGACY_IDLE | SYSC_QUIRK_OPT_CLKS_IN_RESET), 1219 SYSC_QUIRK("sham", 0, 0x100, 0x110, 0x114, 0x40000c03, 0xffffffff, 1220 SYSC_QUIRK_LEGACY_IDLE), 1221 SYSC_QUIRK("smartreflex", 0, -1, 0x24, -1, 0x00000000, 0xffffffff, ··· 1251 /* Quirks that need to be set based on detected module */ 1252 SYSC_QUIRK("aess", 0, 0, 0x10, -1, 0x40000000, 0xffffffff, 1253 SYSC_MODULE_QUIRK_AESS), 1254 + SYSC_QUIRK("dcan", 0x48480000, 0x20, -1, -1, 0xa3170504, 0xffffffff, 1255 + SYSC_QUIRK_CLKDM_NOAUTO), 1256 + SYSC_QUIRK("dwc3", 0x48880000, 0, 0x10, -1, 0x500a0200, 0xffffffff, 1257 + SYSC_QUIRK_CLKDM_NOAUTO), 1258 + SYSC_QUIRK("dwc3", 0x488c0000, 0, 0x10, -1, 0x500a0200, 0xffffffff, 1259 + SYSC_QUIRK_CLKDM_NOAUTO), 1260 SYSC_QUIRK("hdq1w", 0, 0, 0x14, 0x18, 0x00000006, 0xffffffff, 1261 SYSC_MODULE_QUIRK_HDQ1W), 1262 SYSC_QUIRK("hdq1w", 0, 0, 0x14, 0x18, 0x0000000a, 0xffffffff,
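A side note on the sysc_init_resets() hunk above: PTR_ERR_OR_ZERO() (from <linux/err.h>) is a behavior-preserving shorthand for the removed IS_ERR()/PTR_ERR() pair. A minimal sketch of the equivalence, with hypothetical function names:

	#include <linux/err.h>
	#include <linux/reset.h>

	/* Both return the encoded errno for an error pointer, else 0. */
	static int open_coded(struct reset_control *rstc)
	{
		if (IS_ERR(rstc))
			return PTR_ERR(rstc);
		return 0;
	}

	static int with_helper(struct reset_control *rstc)
	{
		return PTR_ERR_OR_ZERO(rstc);
	}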
+1 -1
drivers/clk/clk-scmi.c
··· 176 } 177 178 static const struct scmi_device_id scmi_id_table[] = { 179 - { SCMI_PROTOCOL_CLOCK }, 180 { }, 181 }; 182 MODULE_DEVICE_TABLE(scmi, scmi_id_table);
··· 176 } 177 178 static const struct scmi_device_id scmi_id_table[] = { 179 + { SCMI_PROTOCOL_CLOCK, "clocks" }, 180 { }, 181 }; 182 MODULE_DEVICE_TABLE(scmi, scmi_id_table);
+1 -1
drivers/cpufreq/scmi-cpufreq.c
··· 261 } 262 263 static const struct scmi_device_id scmi_id_table[] = { 264 - { SCMI_PROTOCOL_PERF }, 265 { }, 266 }; 267 MODULE_DEVICE_TABLE(scmi, scmi_id_table);
··· 261 } 262 263 static const struct scmi_device_id scmi_id_table[] = { 264 + { SCMI_PROTOCOL_PERF, "cpufreq" }, 265 { }, 266 }; 267 MODULE_DEVICE_TABLE(scmi, scmi_id_table);
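Both SCMI hunks above follow from scmi_device_id gaining a name field next to the protocol id, so several drivers can bind to distinct named devices that speak the same protocol. A minimal sketch of the resulting match-table pattern (the protocol id is real; the driver name here is illustrative):

	#include <linux/module.h>
	#include <linux/scmi_protocol.h>

	/* A sensor-protocol consumer claiming the device named "hwmon";
	 * another driver could claim the same protocol under another name. */
	static const struct scmi_device_id example_id_table[] = {
		{ SCMI_PROTOCOL_SENSOR, "hwmon" },
		{ },
	};
	MODULE_DEVICE_TABLE(scmi, example_id_table);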
+3 -1
drivers/cpuidle/Makefile
··· 21 obj-$(CONFIG_ARM_AT91_CPUIDLE) += cpuidle-at91.o 22 obj-$(CONFIG_ARM_EXYNOS_CPUIDLE) += cpuidle-exynos.o 23 obj-$(CONFIG_ARM_CPUIDLE) += cpuidle-arm.o 24 - obj-$(CONFIG_ARM_PSCI_CPUIDLE) += cpuidle-psci.o 25 26 ############################################################################### 27 # MIPS drivers
··· 21 obj-$(CONFIG_ARM_AT91_CPUIDLE) += cpuidle-at91.o 22 obj-$(CONFIG_ARM_EXYNOS_CPUIDLE) += cpuidle-exynos.o 23 obj-$(CONFIG_ARM_CPUIDLE) += cpuidle-arm.o 24 + obj-$(CONFIG_ARM_PSCI_CPUIDLE) += cpuidle_psci.o 25 + cpuidle_psci-y := cpuidle-psci.o 26 + cpuidle_psci-$(CONFIG_PM_GENERIC_DOMAINS_OF) += cpuidle-psci-domain.o 27 28 ############################################################################### 29 # MIPS drivers
+308
drivers/cpuidle/cpuidle-psci-domain.c
···
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * PM domains for CPUs via genpd - managed by cpuidle-psci. 4 + * 5 + * Copyright (C) 2019 Linaro Ltd. 6 + * Author: Ulf Hansson <ulf.hansson@linaro.org> 7 + * 8 + */ 9 + 10 + #define pr_fmt(fmt) "CPUidle PSCI: " fmt 11 + 12 + #include <linux/cpu.h> 13 + #include <linux/device.h> 14 + #include <linux/kernel.h> 15 + #include <linux/pm_domain.h> 16 + #include <linux/pm_runtime.h> 17 + #include <linux/psci.h> 18 + #include <linux/slab.h> 19 + #include <linux/string.h> 20 + 21 + #include "cpuidle-psci.h" 22 + 23 + struct psci_pd_provider { 24 + struct list_head link; 25 + struct device_node *node; 26 + }; 27 + 28 + static LIST_HEAD(psci_pd_providers); 29 + static bool osi_mode_enabled __initdata; 30 + 31 + static int psci_pd_power_off(struct generic_pm_domain *pd) 32 + { 33 + struct genpd_power_state *state = &pd->states[pd->state_idx]; 34 + u32 *pd_state; 35 + 36 + if (!state->data) 37 + return 0; 38 + 39 + /* OSI mode is enabled, set the corresponding domain state. */ 40 + pd_state = state->data; 41 + psci_set_domain_state(*pd_state); 42 + 43 + return 0; 44 + } 45 + 46 + static int __init psci_pd_parse_state_nodes(struct genpd_power_state *states, 47 + int state_count) 48 + { 49 + int i, ret; 50 + u32 psci_state, *psci_state_buf; 51 + 52 + for (i = 0; i < state_count; i++) { 53 + ret = psci_dt_parse_state_node(to_of_node(states[i].fwnode), 54 + &psci_state); 55 + if (ret) 56 + goto free_state; 57 + 58 + psci_state_buf = kmalloc(sizeof(u32), GFP_KERNEL); 59 + if (!psci_state_buf) { 60 + ret = -ENOMEM; 61 + goto free_state; 62 + } 63 + *psci_state_buf = psci_state; 64 + states[i].data = psci_state_buf; 65 + } 66 + 67 + return 0; 68 + 69 + free_state: 70 + i--; 71 + for (; i >= 0; i--) 72 + kfree(states[i].data); 73 + return ret; 74 + } 75 + 76 + static int __init psci_pd_parse_states(struct device_node *np, 77 + struct genpd_power_state **states, int *state_count) 78 + { 79 + int ret; 80 + 81 + /* Parse the domain idle states. */ 82 + ret = of_genpd_parse_idle_states(np, states, state_count); 83 + if (ret) 84 + return ret; 85 + 86 + /* Fill out the PSCI specifics for each found state. */ 87 + ret = psci_pd_parse_state_nodes(*states, *state_count); 88 + if (ret) 89 + kfree(*states); 90 + 91 + return ret; 92 + } 93 + 94 + static void psci_pd_free_states(struct genpd_power_state *states, 95 + unsigned int state_count) 96 + { 97 + int i; 98 + 99 + for (i = 0; i < state_count; i++) 100 + kfree(states[i].data); 101 + kfree(states); 102 + } 103 + 104 + static int __init psci_pd_init(struct device_node *np) 105 + { 106 + struct generic_pm_domain *pd; 107 + struct psci_pd_provider *pd_provider; 108 + struct dev_power_governor *pd_gov; 109 + struct genpd_power_state *states = NULL; 110 + int ret = -ENOMEM, state_count = 0; 111 + 112 + pd = kzalloc(sizeof(*pd), GFP_KERNEL); 113 + if (!pd) 114 + goto out; 115 + 116 + pd_provider = kzalloc(sizeof(*pd_provider), GFP_KERNEL); 117 + if (!pd_provider) 118 + goto free_pd; 119 + 120 + pd->name = kasprintf(GFP_KERNEL, "%pOF", np); 121 + if (!pd->name) 122 + goto free_pd_prov; 123 + 124 + /* 125 + * Parse the domain idle states and let genpd manage the state selection 126 + * for those being compatible with "domain-idle-state". 
127 + */ 128 + ret = psci_pd_parse_states(np, &states, &state_count); 129 + if (ret) 130 + goto free_name; 131 + 132 + pd->free_states = psci_pd_free_states; 133 + pd->name = kbasename(pd->name); 134 + pd->power_off = psci_pd_power_off; 135 + pd->states = states; 136 + pd->state_count = state_count; 137 + pd->flags |= GENPD_FLAG_IRQ_SAFE | GENPD_FLAG_CPU_DOMAIN; 138 + 139 + /* Use governor for CPU PM domains if it has some states to manage. */ 140 + pd_gov = state_count > 0 ? &pm_domain_cpu_gov : NULL; 141 + 142 + ret = pm_genpd_init(pd, pd_gov, false); 143 + if (ret) { 144 + psci_pd_free_states(states, state_count); 145 + goto free_name; 146 + } 147 + 148 + ret = of_genpd_add_provider_simple(np, pd); 149 + if (ret) 150 + goto remove_pd; 151 + 152 + pd_provider->node = of_node_get(np); 153 + list_add(&pd_provider->link, &psci_pd_providers); 154 + 155 + pr_debug("init PM domain %s\n", pd->name); 156 + return 0; 157 + 158 + remove_pd: 159 + pm_genpd_remove(pd); 160 + free_name: 161 + kfree(pd->name); 162 + free_pd_prov: 163 + kfree(pd_provider); 164 + free_pd: 165 + kfree(pd); 166 + out: 167 + pr_err("failed to init PM domain ret=%d %pOF\n", ret, np); 168 + return ret; 169 + } 170 + 171 + static void __init psci_pd_remove(void) 172 + { 173 + struct psci_pd_provider *pd_provider, *it; 174 + struct generic_pm_domain *genpd; 175 + 176 + list_for_each_entry_safe(pd_provider, it, &psci_pd_providers, link) { 177 + of_genpd_del_provider(pd_provider->node); 178 + 179 + genpd = of_genpd_remove_last(pd_provider->node); 180 + if (!IS_ERR(genpd)) 181 + kfree(genpd); 182 + 183 + of_node_put(pd_provider->node); 184 + list_del(&pd_provider->link); 185 + kfree(pd_provider); 186 + } 187 + } 188 + 189 + static int __init psci_pd_init_topology(struct device_node *np, bool add) 190 + { 191 + struct device_node *node; 192 + struct of_phandle_args child, parent; 193 + int ret; 194 + 195 + for_each_child_of_node(np, node) { 196 + if (of_parse_phandle_with_args(node, "power-domains", 197 + "#power-domain-cells", 0, &parent)) 198 + continue; 199 + 200 + child.np = node; 201 + child.args_count = 0; 202 + 203 + ret = add ? of_genpd_add_subdomain(&parent, &child) : 204 + of_genpd_remove_subdomain(&parent, &child); 205 + of_node_put(parent.np); 206 + if (ret) { 207 + of_node_put(node); 208 + return ret; 209 + } 210 + } 211 + 212 + return 0; 213 + } 214 + 215 + static int __init psci_pd_add_topology(struct device_node *np) 216 + { 217 + return psci_pd_init_topology(np, true); 218 + } 219 + 220 + static void __init psci_pd_remove_topology(struct device_node *np) 221 + { 222 + psci_pd_init_topology(np, false); 223 + } 224 + 225 + static const struct of_device_id psci_of_match[] __initconst = { 226 + { .compatible = "arm,psci-1.0" }, 227 + {} 228 + }; 229 + 230 + static int __init psci_idle_init_domains(void) 231 + { 232 + struct device_node *np = of_find_matching_node(NULL, psci_of_match); 233 + struct device_node *node; 234 + int ret = 0, pd_count = 0; 235 + 236 + if (!np) 237 + return -ENODEV; 238 + 239 + /* Currently limit the hierarchical topology to be used in OSI mode. */ 240 + if (!psci_has_osi_support()) 241 + goto out; 242 + 243 + /* 244 + * Parse child nodes for the "#power-domain-cells" property and 245 + * initialize a genpd/genpd-of-provider pair when it's found. 
246 + */ 247 + for_each_child_of_node(np, node) { 248 + if (!of_find_property(node, "#power-domain-cells", NULL)) 249 + continue; 250 + 251 + ret = psci_pd_init(node); 252 + if (ret) 253 + goto put_node; 254 + 255 + pd_count++; 256 + } 257 + 258 + /* Bail out if not using the hierarchical CPU topology. */ 259 + if (!pd_count) 260 + goto out; 261 + 262 + /* Link genpd masters/subdomains to model the CPU topology. */ 263 + ret = psci_pd_add_topology(np); 264 + if (ret) 265 + goto remove_pd; 266 + 267 + /* Try to enable OSI mode. */ 268 + ret = psci_set_osi_mode(); 269 + if (ret) { 270 + pr_warn("failed to enable OSI mode: %d\n", ret); 271 + psci_pd_remove_topology(np); 272 + goto remove_pd; 273 + } 274 + 275 + osi_mode_enabled = true; 276 + of_node_put(np); 277 + pr_info("Initialized CPU PM domain topology\n"); 278 + return pd_count; 279 + 280 + put_node: 281 + of_node_put(node); 282 + remove_pd: 283 + if (pd_count) 284 + psci_pd_remove(); 285 + pr_err("failed to create CPU PM domains ret=%d\n", ret); 286 + out: 287 + of_node_put(np); 288 + return ret; 289 + } 290 + subsys_initcall(psci_idle_init_domains); 291 + 292 + struct device __init *psci_dt_attach_cpu(int cpu) 293 + { 294 + struct device *dev; 295 + 296 + if (!osi_mode_enabled) 297 + return NULL; 298 + 299 + dev = dev_pm_domain_attach_by_name(get_cpu_device(cpu), "psci"); 300 + if (IS_ERR_OR_NULL(dev)) 301 + return dev; 302 + 303 + pm_runtime_irq_safe(dev); 304 + if (cpu_online(cpu)) 305 + pm_runtime_get_sync(dev); 306 + 307 + return dev; 308 + }
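The provider above only covers domain setup; at idle time the attached device is driven through runtime PM. A minimal sketch of the intended consumer flow, using only names from this series (see also the cpuidle-psci.c hunks below), with error handling elided:

	/* Sketch: consuming the device returned by psci_dt_attach_cpu(). */
	struct device *pd_dev = psci_dt_attach_cpu(cpu);

	if (!IS_ERR_OR_NULL(pd_dev)) {
		pm_runtime_irq_safe(pd_dev);		/* the idle path runs with IRQs off */
		pm_runtime_put_sync_suspend(pd_dev);	/* vote for powering the domain off */
		/* ... enter the selected idle state ... */
		pm_runtime_get_sync(pd_dev);		/* power the domain back up */
	}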
+136 -25
drivers/cpuidle/cpuidle-psci.c
··· 8 9 #define pr_fmt(fmt) "CPUidle PSCI: " fmt 10 11 #include <linux/cpuidle.h> 12 #include <linux/cpumask.h> 13 #include <linux/cpu_pm.h> ··· 17 #include <linux/of.h> 18 #include <linux/of_device.h> 19 #include <linux/psci.h> 20 #include <linux/slab.h> 21 22 #include <asm/cpuidle.h> 23 24 #include "dt_idle_states.h" 25 26 - static DEFINE_PER_CPU_READ_MOSTLY(u32 *, psci_power_state); 27 28 static int psci_enter_idle_state(struct cpuidle_device *dev, 29 struct cpuidle_driver *drv, int idx) 30 { 31 - u32 *state = __this_cpu_read(psci_power_state); 32 33 - return CPU_PM_CPU_IDLE_ENTER_PARAM(psci_cpu_suspend_enter, 34 - idx, state[idx - 1]); 35 } 36 37 static struct cpuidle_driver psci_idle_driver __initdata = { ··· 143 { }, 144 }; 145 146 - static int __init psci_dt_parse_state_node(struct device_node *np, u32 *state) 147 { 148 int err = of_property_read_u32(np, "arm,psci-suspend-param", state); 149 ··· 160 return 0; 161 } 162 163 - static int __init psci_dt_cpu_init_idle(struct device_node *cpu_node, int cpu) 164 { 165 - int i, ret = 0, count = 0; 166 u32 *psci_states; 167 struct device_node *state_node; 168 169 - /* Count idle states */ 170 - while ((state_node = of_parse_phandle(cpu_node, "cpu-idle-states", 171 - count))) { 172 - count++; 173 - of_node_put(state_node); 174 - } 175 - 176 - if (!count) 177 - return -ENODEV; 178 - 179 - psci_states = kcalloc(count, sizeof(*psci_states), GFP_KERNEL); 180 if (!psci_states) 181 return -ENOMEM; 182 183 - for (i = 0; i < count; i++) { 184 - state_node = of_parse_phandle(cpu_node, "cpu-idle-states", i); 185 ret = psci_dt_parse_state_node(state_node, &psci_states[i]); 186 of_node_put(state_node); 187 ··· 188 pr_debug("psci-power-state %#x index %d\n", psci_states[i], i); 189 } 190 191 - /* Idle states parsed correctly, initialize per-cpu pointer */ 192 - per_cpu(psci_power_state, cpu) = psci_states; 193 return 0; 194 195 free_mem: ··· 222 return ret; 223 } 224 225 - static __init int psci_cpu_init_idle(unsigned int cpu) 226 { 227 struct device_node *cpu_node; 228 int ret; ··· 239 if (!cpu_node) 240 return -ENODEV; 241 242 - ret = psci_dt_cpu_init_idle(cpu_node, cpu); 243 244 of_node_put(cpu_node); 245 ··· 295 /* 296 * Initialize PSCI idle states. 297 */ 298 - ret = psci_cpu_init_idle(cpu); 299 if (ret) { 300 pr_err("CPU %d failed to PSCI idle\n", cpu); 301 goto out_kfree_drv; ··· 331 goto out_fail; 332 } 333 334 return 0; 335 336 out_fail:
··· 8 9 #define pr_fmt(fmt) "CPUidle PSCI: " fmt 10 11 + #include <linux/cpuhotplug.h> 12 #include <linux/cpuidle.h> 13 #include <linux/cpumask.h> 14 #include <linux/cpu_pm.h> ··· 16 #include <linux/of.h> 17 #include <linux/of_device.h> 18 #include <linux/psci.h> 19 + #include <linux/pm_runtime.h> 20 #include <linux/slab.h> 21 22 #include <asm/cpuidle.h> 23 24 + #include "cpuidle-psci.h" 25 #include "dt_idle_states.h" 26 27 + struct psci_cpuidle_data { 28 + u32 *psci_states; 29 + struct device *dev; 30 + }; 31 + 32 + static DEFINE_PER_CPU_READ_MOSTLY(struct psci_cpuidle_data, psci_cpuidle_data); 33 + static DEFINE_PER_CPU(u32, domain_state); 34 + static bool psci_cpuidle_use_cpuhp __initdata; 35 + 36 + void psci_set_domain_state(u32 state) 37 + { 38 + __this_cpu_write(domain_state, state); 39 + } 40 + 41 + static inline u32 psci_get_domain_state(void) 42 + { 43 + return __this_cpu_read(domain_state); 44 + } 45 + 46 + static inline int psci_enter_state(int idx, u32 state) 47 + { 48 + return CPU_PM_CPU_IDLE_ENTER_PARAM(psci_cpu_suspend_enter, idx, state); 49 + } 50 + 51 + static int psci_enter_domain_idle_state(struct cpuidle_device *dev, 52 + struct cpuidle_driver *drv, int idx) 53 + { 54 + struct psci_cpuidle_data *data = this_cpu_ptr(&psci_cpuidle_data); 55 + u32 *states = data->psci_states; 56 + struct device *pd_dev = data->dev; 57 + u32 state; 58 + int ret; 59 + 60 + /* Do runtime PM to manage a hierarchical CPU topology. */ 61 + pm_runtime_put_sync_suspend(pd_dev); 62 + 63 + state = psci_get_domain_state(); 64 + if (!state) 65 + state = states[idx]; 66 + 67 + ret = psci_enter_state(idx, state); 68 + 69 + pm_runtime_get_sync(pd_dev); 70 + 71 + /* Clear the domain state to start fresh when back from idle. */ 72 + psci_set_domain_state(0); 73 + return ret; 74 + } 75 + 76 + static int psci_idle_cpuhp_up(unsigned int cpu) 77 + { 78 + struct device *pd_dev = __this_cpu_read(psci_cpuidle_data.dev); 79 + 80 + if (pd_dev) 81 + pm_runtime_get_sync(pd_dev); 82 + 83 + return 0; 84 + } 85 + 86 + static int psci_idle_cpuhp_down(unsigned int cpu) 87 + { 88 + struct device *pd_dev = __this_cpu_read(psci_cpuidle_data.dev); 89 + 90 + if (pd_dev) { 91 + pm_runtime_put_sync(pd_dev); 92 + /* Clear domain state to start fresh at next online. 
*/ 93 + psci_set_domain_state(0); 94 + } 95 + 96 + return 0; 97 + } 98 + 99 + static void __init psci_idle_init_cpuhp(void) 100 + { 101 + int err; 102 + 103 + if (!psci_cpuidle_use_cpuhp) 104 + return; 105 + 106 + err = cpuhp_setup_state_nocalls(CPUHP_AP_CPU_PM_STARTING, 107 + "cpuidle/psci:online", 108 + psci_idle_cpuhp_up, 109 + psci_idle_cpuhp_down); 110 + if (err) 111 + pr_warn("Failed %d while setup cpuhp state\n", err); 112 + } 113 114 static int psci_enter_idle_state(struct cpuidle_device *dev, 115 struct cpuidle_driver *drv, int idx) 116 { 117 + u32 *state = __this_cpu_read(psci_cpuidle_data.psci_states); 118 119 + return psci_enter_state(idx, state[idx]); 120 } 121 122 static struct cpuidle_driver psci_idle_driver __initdata = { ··· 56 { }, 57 }; 58 59 + int __init psci_dt_parse_state_node(struct device_node *np, u32 *state) 60 { 61 int err = of_property_read_u32(np, "arm,psci-suspend-param", state); 62 ··· 73 return 0; 74 } 75 76 + static int __init psci_dt_cpu_init_idle(struct cpuidle_driver *drv, 77 + struct device_node *cpu_node, 78 + unsigned int state_count, int cpu) 79 { 80 + int i, ret = 0; 81 u32 *psci_states; 82 struct device_node *state_node; 83 + struct psci_cpuidle_data *data = per_cpu_ptr(&psci_cpuidle_data, cpu); 84 85 + state_count++; /* Add WFI state too */ 86 + psci_states = kcalloc(state_count, sizeof(*psci_states), GFP_KERNEL); 87 if (!psci_states) 88 return -ENOMEM; 89 90 + for (i = 1; i < state_count; i++) { 91 + state_node = of_get_cpu_state_node(cpu_node, i - 1); 92 + if (!state_node) 93 + break; 94 + 95 ret = psci_dt_parse_state_node(state_node, &psci_states[i]); 96 of_node_put(state_node); 97 ··· 104 pr_debug("psci-power-state %#x index %d\n", psci_states[i], i); 105 } 106 107 + if (i != state_count) { 108 + ret = -ENODEV; 109 + goto free_mem; 110 + } 111 + 112 + /* Currently limit the hierarchical topology to be used in OSI mode. */ 113 + if (psci_has_osi_support()) { 114 + data->dev = psci_dt_attach_cpu(cpu); 115 + if (IS_ERR(data->dev)) { 116 + ret = PTR_ERR(data->dev); 117 + goto free_mem; 118 + } 119 + 120 + /* 121 + * Using the deepest state for the CPU to trigger a potential 122 + * selection of a shared state for the domain, assumes the 123 + * domain states are all deeper states. 124 + */ 125 + if (data->dev) { 126 + drv->states[state_count - 1].enter = 127 + psci_enter_domain_idle_state; 128 + psci_cpuidle_use_cpuhp = true; 129 + } 130 + } 131 + 132 + /* Idle states parsed correctly, store them in the per-cpu struct. */ 133 + data->psci_states = psci_states; 134 return 0; 135 136 free_mem: ··· 113 return ret; 114 } 115 116 + static __init int psci_cpu_init_idle(struct cpuidle_driver *drv, 117 + unsigned int cpu, unsigned int state_count) 118 { 119 struct device_node *cpu_node; 120 int ret; ··· 129 if (!cpu_node) 130 return -ENODEV; 131 132 + ret = psci_dt_cpu_init_idle(drv, cpu_node, state_count, cpu); 133 134 of_node_put(cpu_node); 135 ··· 185 /* 186 * Initialize PSCI idle states. 187 */ 188 + ret = psci_cpu_init_idle(drv, cpu, ret); 189 if (ret) { 190 pr_err("CPU %d failed to PSCI idle\n", cpu); 191 goto out_kfree_drv; ··· 221 goto out_fail; 222 } 223 224 + psci_idle_init_cpuhp(); 225 return 0; 226 227 out_fail:
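Condensed, the state selection at idle entry now reads as follows (restated from psci_enter_domain_idle_state() above): a domain state installed by the genpd ->power_off() callback takes precedence over the CPU's own table entry for the requested index.

	u32 state = psci_get_domain_state();

	if (!state)
		state = states[idx];	/* fall back to the per-CPU state table */

	ret = psci_enter_state(idx, state);
	psci_set_domain_state(0);	/* start fresh after wakeup */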
+17
drivers/cpuidle/cpuidle-psci.h
···
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + 3 + #ifndef __CPUIDLE_PSCI_H 4 + #define __CPUIDLE_PSCI_H 5 + 6 + struct device_node; 7 + 8 + void psci_set_domain_state(u32 state); 9 + int __init psci_dt_parse_state_node(struct device_node *np, u32 *state); 10 + 11 + #ifdef CONFIG_PM_GENERIC_DOMAINS_OF 12 + struct device __init *psci_dt_attach_cpu(int cpu); 13 + #else 14 + static inline struct device __init *psci_dt_attach_cpu(int cpu) { return NULL; } 15 + #endif 16 + 17 + #endif /* __CPUIDLE_PSCI_H */
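The static inline stub means callers need no #ifdef: with CONFIG_PM_GENERIC_DOMAINS_OF disabled, attaching simply yields NULL. A sketch of the resulting caller pattern (mirroring cpuidle-psci.c; the helper name on the last line is hypothetical):

	struct device *dev = psci_dt_attach_cpu(cpu);

	if (IS_ERR(dev))
		return PTR_ERR(dev);	/* a real attach failure */
	if (dev)
		enable_hierarchical_idle_states();	/* hypothetical: switch to domain idle states */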
+2 -3
drivers/cpuidle/dt_idle_states.c
··· 111 for (cpu = cpumask_next(cpumask_first(cpumask), cpumask); 112 cpu < nr_cpu_ids; cpu = cpumask_next(cpu, cpumask)) { 113 cpu_node = of_cpu_device_node_get(cpu); 114 - curr_state_node = of_parse_phandle(cpu_node, "cpu-idle-states", 115 - idx); 116 if (state_node != curr_state_node) 117 valid = false; 118 ··· 169 cpu_node = of_cpu_device_node_get(cpumask_first(cpumask)); 170 171 for (i = 0; ; i++) { 172 - state_node = of_parse_phandle(cpu_node, "cpu-idle-states", i); 173 if (!state_node) 174 break; 175
··· 111 for (cpu = cpumask_next(cpumask_first(cpumask), cpumask); 112 cpu < nr_cpu_ids; cpu = cpumask_next(cpu, cpumask)) { 113 cpu_node = of_cpu_device_node_get(cpu); 114 + curr_state_node = of_get_cpu_state_node(cpu_node, idx); 115 if (state_node != curr_state_node) 116 valid = false; 117 ··· 170 cpu_node = of_cpu_device_node_get(cpumask_first(cpumask)); 171 172 for (i = 0; ; i++) { 173 + state_node = of_get_cpu_state_node(cpu_node, i); 174 if (!state_node) 175 break; 176
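of_get_cpu_state_node() replaces the open-coded "cpu-idle-states" phandle walk; it is expected to return a CPU's idle-state node by index whether the states are listed flat or reached via the hierarchical power-domain topology. A sketch of the enumeration idiom used above:

	struct device_node *state_node;
	int i;

	for (i = 0; (state_node = of_get_cpu_state_node(cpu_node, i)); i++) {
		/* ... validate or parse state_node ... */
		of_node_put(state_node);
	}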
-8
drivers/firmware/Kconfig
··· 239 depends on ARM || ARM64 240 select RESET_CONTROLLER 241 242 - config QCOM_SCM_32 243 - def_bool y 244 - depends on QCOM_SCM && ARM 245 - 246 - config QCOM_SCM_64 247 - def_bool y 248 - depends on QCOM_SCM && ARM64 249 - 250 config QCOM_SCM_DOWNLOAD_MODE_DEFAULT 251 bool "Qualcomm download mode enabled by default" 252 depends on QCOM_SCM
··· 239 depends on ARM || ARM64 240 select RESET_CONTROLLER 241 242 config QCOM_SCM_DOWNLOAD_MODE_DEFAULT 243 bool "Qualcomm download mode enabled by default" 244 depends on QCOM_SCM
+1 -4
drivers/firmware/Makefile
··· 17 obj-$(CONFIG_FIRMWARE_MEMMAP) += memmap.o 18 obj-$(CONFIG_RASPBERRYPI_FIRMWARE) += raspberrypi.o 19 obj-$(CONFIG_FW_CFG_SYSFS) += qemu_fw_cfg.o 20 - obj-$(CONFIG_QCOM_SCM) += qcom_scm.o 21 - obj-$(CONFIG_QCOM_SCM_64) += qcom_scm-64.o 22 - obj-$(CONFIG_QCOM_SCM_32) += qcom_scm-32.o 23 - CFLAGS_qcom_scm-32.o :=$(call as-instr,.arch armv7-a\n.arch_extension sec,-DREQUIRES_SEC=1) -march=armv7-a 24 obj-$(CONFIG_TI_SCI_PROTOCOL) += ti_sci.o 25 obj-$(CONFIG_TRUSTED_FOUNDATIONS) += trusted_foundations.o 26 obj-$(CONFIG_TURRIS_MOX_RWTM) += turris-mox-rwtm.o
··· 17 obj-$(CONFIG_FIRMWARE_MEMMAP) += memmap.o 18 obj-$(CONFIG_RASPBERRYPI_FIRMWARE) += raspberrypi.o 19 obj-$(CONFIG_FW_CFG_SYSFS) += qemu_fw_cfg.o 20 + obj-$(CONFIG_QCOM_SCM) += qcom_scm.o qcom_scm-smc.o qcom_scm-legacy.o 21 obj-$(CONFIG_TI_SCI_PROTOCOL) += ti_sci.o 22 obj-$(CONFIG_TRUSTED_FOUNDATIONS) += trusted_foundations.o 23 obj-$(CONFIG_TURRIS_MOX_RWTM) += turris-mox-rwtm.o
+26 -3
drivers/firmware/arm_scmi/bus.c
··· 28 return NULL; 29 30 for (; id->protocol_id; id++) 31 - if (id->protocol_id == scmi_dev->protocol_id) 32 - return id; 33 34 return NULL; 35 } ··· 60 return fn(handle); 61 } 62 63 static int scmi_dev_probe(struct device *dev) 64 { 65 struct scmi_driver *scmi_drv = to_scmi_driver(dev->driver); ··· 82 ret = scmi_protocol_init(scmi_dev->protocol_id, scmi_dev->handle); 83 if (ret) 84 return ret; 85 86 return scmi_drv->probe(scmi_dev); 87 } ··· 138 } 139 140 struct scmi_device * 141 - scmi_device_create(struct device_node *np, struct device *parent, int protocol) 142 { 143 int id, retval; 144 struct scmi_device *scmi_dev; ··· 148 if (!scmi_dev) 149 return NULL; 150 151 id = ida_simple_get(&scmi_bus_id, 1, 0, GFP_KERNEL); 152 if (id < 0) { 153 kfree(scmi_dev); 154 return NULL; 155 } ··· 175 176 return scmi_dev; 177 put_dev: 178 put_device(&scmi_dev->dev); 179 ida_simple_remove(&scmi_bus_id, id); 180 return NULL; ··· 183 184 void scmi_device_destroy(struct scmi_device *scmi_dev) 185 { 186 scmi_handle_put(scmi_dev->handle); 187 ida_simple_remove(&scmi_bus_id, scmi_dev->id); 188 device_unregister(&scmi_dev->dev);
··· 28 return NULL; 29 30 for (; id->protocol_id; id++) 31 + if (id->protocol_id == scmi_dev->protocol_id) { 32 + if (!id->name) 33 + return id; 34 + else if (!strcmp(id->name, scmi_dev->name)) 35 + return id; 36 + } 37 38 return NULL; 39 } ··· 56 return fn(handle); 57 } 58 59 + static int scmi_protocol_dummy_init(struct scmi_handle *handle) 60 + { 61 + return 0; 62 + } 63 + 64 static int scmi_dev_probe(struct device *dev) 65 { 66 struct scmi_driver *scmi_drv = to_scmi_driver(dev->driver); ··· 73 ret = scmi_protocol_init(scmi_dev->protocol_id, scmi_dev->handle); 74 if (ret) 75 return ret; 76 + 77 + /* Skip protocol initialisation for additional devices */ 78 + idr_replace(&scmi_protocols, &scmi_protocol_dummy_init, 79 + scmi_dev->protocol_id); 80 81 return scmi_drv->probe(scmi_dev); 82 } ··· 125 } 126 127 struct scmi_device * 128 + scmi_device_create(struct device_node *np, struct device *parent, int protocol, 129 + const char *name) 130 { 131 int id, retval; 132 struct scmi_device *scmi_dev; ··· 134 if (!scmi_dev) 135 return NULL; 136 137 + scmi_dev->name = kstrdup_const(name ?: "unknown", GFP_KERNEL); 138 + if (!scmi_dev->name) { 139 + kfree(scmi_dev); 140 + return NULL; 141 + } 142 + 143 id = ida_simple_get(&scmi_bus_id, 1, 0, GFP_KERNEL); 144 if (id < 0) { 145 + kfree_const(scmi_dev->name); 146 kfree(scmi_dev); 147 return NULL; 148 } ··· 154 155 return scmi_dev; 156 put_dev: 157 + kfree_const(scmi_dev->name); 158 put_device(&scmi_dev->dev); 159 ida_simple_remove(&scmi_bus_id, id); 160 return NULL; ··· 161 162 void scmi_device_destroy(struct scmi_device *scmi_dev) 163 { 164 + kfree_const(scmi_dev->name); 165 scmi_handle_put(scmi_dev->handle); 166 ida_simple_remove(&scmi_bus_id, scmi_dev->id); 167 device_unregister(&scmi_dev->dev);
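With name-based matching, an SCMI driver's id table can pin a particular device name for a protocol, while a NULL name keeps the old protocol-only match. A sketch for a perf-protocol consumer, assuming the "cpufreq" name registered by driver.c below (the scmi_pm_domain.c hunk later in this series shows the same pattern for "genpd"):

	static const struct scmi_device_id scmi_id_table[] = {
		{ SCMI_PROTOCOL_PERF, "cpufreq" },
		{ },
	};
	MODULE_DEVICE_TABLE(scmi, scmi_id_table);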
+2
drivers/firmware/arm_scmi/clock.c
··· 65 }; 66 67 struct clock_info { 68 int num_clocks; 69 int max_async_req; 70 atomic_t cur_async_req; ··· 341 scmi_clock_describe_rates_get(handle, clkid, clk); 342 } 343 344 handle->clk_ops = &clk_ops; 345 handle->clk_priv = cinfo; 346
··· 65 }; 66 67 struct clock_info { 68 + u32 version; 69 int num_clocks; 70 int max_async_req; 71 atomic_t cur_async_req; ··· 340 scmi_clock_describe_rates_get(handle, clkid, clk); 341 } 342 343 + cinfo->version = version; 344 handle->clk_ops = &clk_ops; 345 handle->clk_priv = cinfo; 346
+2
drivers/firmware/arm_scmi/common.h
··· 81 /** 82 * struct scmi_xfer - Structure representing a message flow 83 * 84 * @hdr: Transmit message header 85 * @tx: Transmit message 86 * @rx: Receive message, the buffer should be pre-allocated to store ··· 91 * @async: pointer to delayed response message received event completion 92 */ 93 struct scmi_xfer { 94 struct scmi_msg_hdr hdr; 95 struct scmi_msg tx; 96 struct scmi_msg rx;
··· 81 /** 82 * struct scmi_xfer - Structure representing a message flow 83 * 84 + * @transfer_id: Unique ID for debug & profiling purpose 85 * @hdr: Transmit message header 86 * @tx: Transmit message 87 * @rx: Receive message, the buffer should be pre-allocated to store ··· 90 * @async: pointer to delayed response message received event completion 91 */ 92 struct scmi_xfer { 93 + int transfer_id; 94 struct scmi_msg_hdr hdr; 95 struct scmi_msg tx; 96 struct scmi_msg rx;
+107 -3
drivers/firmware/arm_scmi/driver.c
··· 29 30 #include "common.h" 31 32 #define MSG_ID_MASK GENMASK(7, 0) 33 #define MSG_XTRACT_ID(hdr) FIELD_GET(MSG_ID_MASK, (hdr)) 34 #define MSG_TYPE_MASK GENMASK(9, 8) ··· 64 static LIST_HEAD(scmi_list); 65 /* Protection for the entire list */ 66 static DEFINE_MUTEX(scmi_list_mutex); 67 68 /** 69 * struct scmi_xfers_info - Structure to manage transfer information ··· 309 xfer = &minfo->xfer_block[xfer_id]; 310 xfer->hdr.seq = xfer_id; 311 reinit_completion(&xfer->done); 312 313 return xfer; 314 } ··· 380 381 scmi_fetch_response(xfer, mem); 382 383 if (msg_type == MSG_TYPE_DELAYED_RESP) 384 complete(xfer->async_done); 385 else ··· 449 if (unlikely(!cinfo)) 450 return -EINVAL; 451 452 ret = mbox_send_message(cinfo->chan, xfer); 453 if (ret < 0) { 454 dev_dbg(dev, "mbox send fail %d\n", ret); ··· 491 * received our message. 492 */ 493 mbox_client_txdone(cinfo->chan, ret); 494 495 return ret; 496 } ··· 753 idx = tx ? 0 : 1; 754 idr = tx ? &info->tx_idr : &info->rx_idr; 755 756 if (scmi_mailbox_check(np, idx)) { 757 cinfo = idr_find(idr, SCMI_PROTOCOL_BASE); 758 if (unlikely(!cinfo)) /* Possible only if platform has no Rx */ ··· 826 827 static inline void 828 scmi_create_protocol_device(struct device_node *np, struct scmi_info *info, 829 - int prot_id) 830 { 831 struct scmi_device *sdev; 832 833 - sdev = scmi_device_create(np, info->dev, prot_id); 834 if (!sdev) { 835 dev_err(info->dev, "failed to create %d protocol device\n", 836 prot_id); ··· 845 846 /* setup handle now as the transport is ready */ 847 scmi_set_handle(sdev); 848 } 849 850 static int scmi_probe(struct platform_device *pdev) ··· 949 continue; 950 } 951 952 - scmi_create_protocol_device(child, info, prot_id); 953 } 954 955 return 0; ··· 997 return ret; 998 } 999 1000 static const struct scmi_desc scmi_generic_desc = { 1001 .max_rx_timeout_ms = 30, /* We may increase this if required */ 1002 .max_msg = 20, /* Limited by MBOX_TX_QUEUE_LEN */ ··· 1061 .driver = { 1062 .name = "arm-scmi", 1063 .of_match_table = scmi_of_match, 1064 }, 1065 .probe = scmi_probe, 1066 .remove = scmi_remove,
··· 29 30 #include "common.h" 31 32 + #define CREATE_TRACE_POINTS 33 + #include <trace/events/scmi.h> 34 + 35 #define MSG_ID_MASK GENMASK(7, 0) 36 #define MSG_XTRACT_ID(hdr) FIELD_GET(MSG_ID_MASK, (hdr)) 37 #define MSG_TYPE_MASK GENMASK(9, 8) ··· 61 static LIST_HEAD(scmi_list); 62 /* Protection for the entire list */ 63 static DEFINE_MUTEX(scmi_list_mutex); 64 + /* Track the unique id for the transfers for debug & profiling purpose */ 65 + static atomic_t transfer_last_id; 66 67 /** 68 * struct scmi_xfers_info - Structure to manage transfer information ··· 304 xfer = &minfo->xfer_block[xfer_id]; 305 xfer->hdr.seq = xfer_id; 306 reinit_completion(&xfer->done); 307 + xfer->transfer_id = atomic_inc_return(&transfer_last_id); 308 309 return xfer; 310 } ··· 374 375 scmi_fetch_response(xfer, mem); 376 377 + trace_scmi_rx_done(xfer->transfer_id, xfer->hdr.id, 378 + xfer->hdr.protocol_id, xfer->hdr.seq, 379 + msg_type); 380 + 381 if (msg_type == MSG_TYPE_DELAYED_RESP) 382 complete(xfer->async_done); 383 else ··· 439 if (unlikely(!cinfo)) 440 return -EINVAL; 441 442 + trace_scmi_xfer_begin(xfer->transfer_id, xfer->hdr.id, 443 + xfer->hdr.protocol_id, xfer->hdr.seq, 444 + xfer->hdr.poll_completion); 445 + 446 ret = mbox_send_message(cinfo->chan, xfer); 447 if (ret < 0) { 448 dev_dbg(dev, "mbox send fail %d\n", ret); ··· 477 * received our message. 478 */ 479 mbox_client_txdone(cinfo->chan, ret); 480 + 481 + trace_scmi_xfer_end(xfer->transfer_id, xfer->hdr.id, 482 + xfer->hdr.protocol_id, xfer->hdr.seq, 483 + xfer->hdr.status); 484 485 return ret; 486 } ··· 735 idx = tx ? 0 : 1; 736 idr = tx ? &info->tx_idr : &info->rx_idr; 737 738 + /* check if already allocated, used for multiple device per protocol */ 739 + cinfo = idr_find(idr, prot_id); 740 + if (cinfo) 741 + return 0; 742 + 743 if (scmi_mailbox_check(np, idx)) { 744 cinfo = idr_find(idr, SCMI_PROTOCOL_BASE); 745 if (unlikely(!cinfo)) /* Possible only if platform has no Rx */ ··· 803 804 static inline void 805 scmi_create_protocol_device(struct device_node *np, struct scmi_info *info, 806 + int prot_id, const char *name) 807 { 808 struct scmi_device *sdev; 809 810 + sdev = scmi_device_create(np, info->dev, prot_id, name); 811 if (!sdev) { 812 dev_err(info->dev, "failed to create %d protocol device\n", 813 prot_id); ··· 822 823 /* setup handle now as the transport is ready */ 824 scmi_set_handle(sdev); 825 + } 826 + 827 + #define MAX_SCMI_DEV_PER_PROTOCOL 2 828 + struct scmi_prot_devnames { 829 + int protocol_id; 830 + char *names[MAX_SCMI_DEV_PER_PROTOCOL]; 831 + }; 832 + 833 + static struct scmi_prot_devnames devnames[] = { 834 + { SCMI_PROTOCOL_POWER, { "genpd" },}, 835 + { SCMI_PROTOCOL_PERF, { "cpufreq" },}, 836 + { SCMI_PROTOCOL_CLOCK, { "clocks" },}, 837 + { SCMI_PROTOCOL_SENSOR, { "hwmon" },}, 838 + { SCMI_PROTOCOL_RESET, { "reset" },}, 839 + }; 840 + 841 + static inline void 842 + scmi_create_protocol_devices(struct device_node *np, struct scmi_info *info, 843 + int prot_id) 844 + { 845 + int loop, cnt; 846 + 847 + for (loop = 0; loop < ARRAY_SIZE(devnames); loop++) { 848 + if (devnames[loop].protocol_id != prot_id) 849 + continue; 850 + 851 + for (cnt = 0; cnt < ARRAY_SIZE(devnames[loop].names); cnt++) { 852 + const char *name = devnames[loop].names[cnt]; 853 + 854 + if (name) 855 + scmi_create_protocol_device(np, info, prot_id, 856 + name); 857 + } 858 + } 859 } 860 861 static int scmi_probe(struct platform_device *pdev) ··· 892 continue; 893 } 894 895 + scmi_create_protocol_devices(child, info, prot_id); 896 } 897 898 return 0; ··· 
940 return ret; 941 } 942 943 + static ssize_t protocol_version_show(struct device *dev, 944 + struct device_attribute *attr, char *buf) 945 + { 946 + struct scmi_info *info = dev_get_drvdata(dev); 947 + 948 + return sprintf(buf, "%u.%u\n", info->version.major_ver, 949 + info->version.minor_ver); 950 + } 951 + static DEVICE_ATTR_RO(protocol_version); 952 + 953 + static ssize_t firmware_version_show(struct device *dev, 954 + struct device_attribute *attr, char *buf) 955 + { 956 + struct scmi_info *info = dev_get_drvdata(dev); 957 + 958 + return sprintf(buf, "0x%x\n", info->version.impl_ver); 959 + } 960 + static DEVICE_ATTR_RO(firmware_version); 961 + 962 + static ssize_t vendor_id_show(struct device *dev, 963 + struct device_attribute *attr, char *buf) 964 + { 965 + struct scmi_info *info = dev_get_drvdata(dev); 966 + 967 + return sprintf(buf, "%s\n", info->version.vendor_id); 968 + } 969 + static DEVICE_ATTR_RO(vendor_id); 970 + 971 + static ssize_t sub_vendor_id_show(struct device *dev, 972 + struct device_attribute *attr, char *buf) 973 + { 974 + struct scmi_info *info = dev_get_drvdata(dev); 975 + 976 + return sprintf(buf, "%s\n", info->version.sub_vendor_id); 977 + } 978 + static DEVICE_ATTR_RO(sub_vendor_id); 979 + 980 + static struct attribute *versions_attrs[] = { 981 + &dev_attr_firmware_version.attr, 982 + &dev_attr_protocol_version.attr, 983 + &dev_attr_vendor_id.attr, 984 + &dev_attr_sub_vendor_id.attr, 985 + NULL, 986 + }; 987 + ATTRIBUTE_GROUPS(versions); 988 + 989 static const struct scmi_desc scmi_generic_desc = { 990 .max_rx_timeout_ms = 30, /* We may increase this if required */ 991 .max_msg = 20, /* Limited by MBOX_TX_QUEUE_LEN */ ··· 958 .driver = { 959 .name = "arm-scmi", 960 .of_match_table = scmi_of_match, 961 + .dev_groups = versions_groups, 962 }, 963 .probe = scmi_probe, 964 .remove = scmi_remove,
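The devnames[] table above allows up to MAX_SCMI_DEV_PER_PROTOCOL (two) devices per protocol. A hypothetical second entry would be filled like this; "powercap" is an invented name used purely for illustration and is not part of this series:

	static struct scmi_prot_devnames devnames[] = {
		{ SCMI_PROTOCOL_PERF,  { "cpufreq", "powercap" },},	/* second slot hypothetical */
		{ SCMI_PROTOCOL_POWER, { "genpd" },},
	};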
+2
drivers/firmware/arm_scmi/perf.c
··· 145 }; 146 147 struct scmi_perf_info { 148 int num_domains; 149 bool power_scale_mw; 150 u64 stats_addr; ··· 737 scmi_perf_domain_init_fc(handle, domain, &dom->fc_info); 738 } 739 740 handle->perf_ops = &perf_ops; 741 handle->perf_priv = pinfo; 742
··· 145 }; 146 147 struct scmi_perf_info { 148 + u32 version; 149 int num_domains; 150 bool power_scale_mw; 151 u64 stats_addr; ··· 736 scmi_perf_domain_init_fc(handle, domain, &dom->fc_info); 737 } 738 739 + pinfo->version = version; 740 handle->perf_ops = &perf_ops; 741 handle->perf_priv = pinfo; 742
+2
drivers/firmware/arm_scmi/power.c
··· 50 }; 51 52 struct scmi_power_info { 53 int num_domains; 54 u64 stats_addr; 55 u32 stats_size; ··· 208 scmi_power_domain_attributes_get(handle, domain, dom); 209 } 210 211 handle->power_ops = &power_ops; 212 handle->power_priv = pinfo; 213
··· 50 }; 51 52 struct scmi_power_info { 53 + u32 version; 54 int num_domains; 55 u64 stats_addr; 56 u32 stats_size; ··· 207 scmi_power_domain_attributes_get(handle, domain, dom); 208 } 209 210 + pinfo->version = version; 211 handle->power_ops = &power_ops; 212 handle->power_priv = pinfo; 213
+2
drivers/firmware/arm_scmi/reset.c
··· 48 }; 49 50 struct scmi_reset_info { 51 int num_domains; 52 struct reset_dom_info *dom_info; 53 }; ··· 218 scmi_reset_domain_attributes_get(handle, domain, dom); 219 } 220 221 handle->reset_ops = &reset_ops; 222 handle->reset_priv = pinfo; 223
··· 48 }; 49 50 struct scmi_reset_info { 51 + u32 version; 52 int num_domains; 53 struct reset_dom_info *dom_info; 54 }; ··· 217 scmi_reset_domain_attributes_get(handle, domain, dom); 218 } 219 220 + pinfo->version = version; 221 handle->reset_ops = &reset_ops; 222 handle->reset_priv = pinfo; 223
+1 -1
drivers/firmware/arm_scmi/scmi_pm_domain.c
··· 112 } 113 114 static const struct scmi_device_id scmi_id_table[] = { 115 - { SCMI_PROTOCOL_POWER }, 116 { }, 117 }; 118 MODULE_DEVICE_TABLE(scmi, scmi_id_table);
··· 112 } 113 114 static const struct scmi_device_id scmi_id_table[] = { 115 + { SCMI_PROTOCOL_POWER, "genpd" }, 116 { }, 117 }; 118 MODULE_DEVICE_TABLE(scmi, scmi_id_table);
+2
drivers/firmware/arm_scmi/sensors.c
··· 68 }; 69 70 struct sensors_info { 71 int num_sensors; 72 int max_requests; 73 u64 reg_addr; ··· 295 296 scmi_sensor_description_get(handle, sinfo); 297 298 handle->sensor_ops = &sensor_ops; 299 handle->sensor_priv = sinfo; 300
··· 68 }; 69 70 struct sensors_info { 71 + u32 version; 72 int num_sensors; 73 int max_requests; 74 u64 reg_addr; ··· 294 295 scmi_sensor_description_get(handle, sinfo); 296 297 + sinfo->version = version; 298 handle->sensor_ops = &sensor_ops; 299 handle->sensor_priv = sinfo; 300
+1 -1
drivers/firmware/imx/Kconfig
··· 1 # SPDX-License-Identifier: GPL-2.0-only 2 config IMX_DSP 3 - bool "IMX DSP Protocol driver" 4 depends on IMX_MBOX 5 help 6 This enables DSP IPC protocol between host AP (Linux)
··· 1 # SPDX-License-Identifier: GPL-2.0-only 2 config IMX_DSP 3 + tristate "IMX DSP Protocol driver" 4 depends on IMX_MBOX 5 help 6 This enables DSP IPC protocol between host AP (Linux)
+16 -2
drivers/firmware/psci/psci.c
··· 97 PSCI_1_0_FEATURES_CPU_SUSPEND_PF_MASK; 98 } 99 100 - static inline bool psci_has_osi_support(void) 101 { 102 return psci_cpu_suspend_feature & PSCI_1_0_OS_INITIATED; 103 } ··· 160 static u32 psci_get_version(void) 161 { 162 return invoke_psci_fn(PSCI_0_2_FN_PSCI_VERSION, 0, 0, 0); 163 } 164 165 static int psci_cpu_suspend(u32 state, unsigned long entry_point) ··· 553 if (err) 554 return err; 555 556 - if (psci_has_osi_support()) 557 pr_info("OSI mode supported.\n"); 558 559 return 0; 560 }
··· 97 PSCI_1_0_FEATURES_CPU_SUSPEND_PF_MASK; 98 } 99 100 + bool psci_has_osi_support(void) 101 { 102 return psci_cpu_suspend_feature & PSCI_1_0_OS_INITIATED; 103 } ··· 160 static u32 psci_get_version(void) 161 { 162 return invoke_psci_fn(PSCI_0_2_FN_PSCI_VERSION, 0, 0, 0); 163 + } 164 + 165 + int psci_set_osi_mode(void) 166 + { 167 + int err; 168 + 169 + err = invoke_psci_fn(PSCI_1_0_FN_SET_SUSPEND_MODE, 170 + PSCI_1_0_SUSPEND_MODE_OSI, 0, 0); 171 + return psci_to_linux_errno(err); 172 } 173 174 static int psci_cpu_suspend(u32 state, unsigned long entry_point) ··· 544 if (err) 545 return err; 546 547 + if (psci_has_osi_support()) { 548 pr_info("OSI mode supported.\n"); 549 + 550 + /* Default to PC mode. */ 551 + invoke_psci_fn(PSCI_1_0_FN_SET_SUSPEND_MODE, 552 + PSCI_1_0_SUSPEND_MODE_PC, 0, 0); 553 + } 554 555 return 0; 556 }
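Note the pairing: firmware that advertises OSI support is explicitly defaulted back to PC mode at probe, and only switched over once a CPU PM domain topology exists. The condensed call sequence, as used by cpuidle-psci-domain.c above:

	if (psci_has_osi_support()) {
		ret = psci_set_osi_mode();	/* PSCI_1_0_FN_SET_SUSPEND_MODE(OSI) */
		if (ret)
			pr_warn("failed to enable OSI mode: %d\n", ret);
	}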
-671
drivers/firmware/qcom_scm-32.c
··· 1 - // SPDX-License-Identifier: GPL-2.0-only 2 - /* Copyright (c) 2010,2015, The Linux Foundation. All rights reserved. 3 - * Copyright (C) 2015 Linaro Ltd. 4 - */ 5 - 6 - #include <linux/slab.h> 7 - #include <linux/io.h> 8 - #include <linux/module.h> 9 - #include <linux/mutex.h> 10 - #include <linux/errno.h> 11 - #include <linux/err.h> 12 - #include <linux/qcom_scm.h> 13 - #include <linux/dma-mapping.h> 14 - 15 - #include "qcom_scm.h" 16 - 17 - #define QCOM_SCM_FLAG_COLDBOOT_CPU0 0x00 18 - #define QCOM_SCM_FLAG_COLDBOOT_CPU1 0x01 19 - #define QCOM_SCM_FLAG_COLDBOOT_CPU2 0x08 20 - #define QCOM_SCM_FLAG_COLDBOOT_CPU3 0x20 21 - 22 - #define QCOM_SCM_FLAG_WARMBOOT_CPU0 0x04 23 - #define QCOM_SCM_FLAG_WARMBOOT_CPU1 0x02 24 - #define QCOM_SCM_FLAG_WARMBOOT_CPU2 0x10 25 - #define QCOM_SCM_FLAG_WARMBOOT_CPU3 0x40 26 - 27 - struct qcom_scm_entry { 28 - int flag; 29 - void *entry; 30 - }; 31 - 32 - static struct qcom_scm_entry qcom_scm_wb[] = { 33 - { .flag = QCOM_SCM_FLAG_WARMBOOT_CPU0 }, 34 - { .flag = QCOM_SCM_FLAG_WARMBOOT_CPU1 }, 35 - { .flag = QCOM_SCM_FLAG_WARMBOOT_CPU2 }, 36 - { .flag = QCOM_SCM_FLAG_WARMBOOT_CPU3 }, 37 - }; 38 - 39 - static DEFINE_MUTEX(qcom_scm_lock); 40 - 41 - /** 42 - * struct qcom_scm_command - one SCM command buffer 43 - * @len: total available memory for command and response 44 - * @buf_offset: start of command buffer 45 - * @resp_hdr_offset: start of response buffer 46 - * @id: command to be executed 47 - * @buf: buffer returned from qcom_scm_get_command_buffer() 48 - * 49 - * An SCM command is laid out in memory as follows: 50 - * 51 - * ------------------- <--- struct qcom_scm_command 52 - * | command header | 53 - * ------------------- <--- qcom_scm_get_command_buffer() 54 - * | command buffer | 55 - * ------------------- <--- struct qcom_scm_response and 56 - * | response header | qcom_scm_command_to_response() 57 - * ------------------- <--- qcom_scm_get_response_buffer() 58 - * | response buffer | 59 - * ------------------- 60 - * 61 - * There can be arbitrary padding between the headers and buffers so 62 - * you should always use the appropriate qcom_scm_get_*_buffer() routines 63 - * to access the buffers in a safe manner. 64 - */ 65 - struct qcom_scm_command { 66 - __le32 len; 67 - __le32 buf_offset; 68 - __le32 resp_hdr_offset; 69 - __le32 id; 70 - __le32 buf[0]; 71 - }; 72 - 73 - /** 74 - * struct qcom_scm_response - one SCM response buffer 75 - * @len: total available memory for response 76 - * @buf_offset: start of response data relative to start of qcom_scm_response 77 - * @is_complete: indicates if the command has finished processing 78 - */ 79 - struct qcom_scm_response { 80 - __le32 len; 81 - __le32 buf_offset; 82 - __le32 is_complete; 83 - }; 84 - 85 - /** 86 - * qcom_scm_command_to_response() - Get a pointer to a qcom_scm_response 87 - * @cmd: command 88 - * 89 - * Returns a pointer to a response for a command. 90 - */ 91 - static inline struct qcom_scm_response *qcom_scm_command_to_response( 92 - const struct qcom_scm_command *cmd) 93 - { 94 - return (void *)cmd + le32_to_cpu(cmd->resp_hdr_offset); 95 - } 96 - 97 - /** 98 - * qcom_scm_get_command_buffer() - Get a pointer to a command buffer 99 - * @cmd: command 100 - * 101 - * Returns a pointer to the command buffer of a command. 
102 - */ 103 - static inline void *qcom_scm_get_command_buffer(const struct qcom_scm_command *cmd) 104 - { 105 - return (void *)cmd->buf; 106 - } 107 - 108 - /** 109 - * qcom_scm_get_response_buffer() - Get a pointer to a response buffer 110 - * @rsp: response 111 - * 112 - * Returns a pointer to a response buffer of a response. 113 - */ 114 - static inline void *qcom_scm_get_response_buffer(const struct qcom_scm_response *rsp) 115 - { 116 - return (void *)rsp + le32_to_cpu(rsp->buf_offset); 117 - } 118 - 119 - static u32 smc(u32 cmd_addr) 120 - { 121 - int context_id; 122 - register u32 r0 asm("r0") = 1; 123 - register u32 r1 asm("r1") = (u32)&context_id; 124 - register u32 r2 asm("r2") = cmd_addr; 125 - do { 126 - asm volatile( 127 - __asmeq("%0", "r0") 128 - __asmeq("%1", "r0") 129 - __asmeq("%2", "r1") 130 - __asmeq("%3", "r2") 131 - #ifdef REQUIRES_SEC 132 - ".arch_extension sec\n" 133 - #endif 134 - "smc #0 @ switch to secure world\n" 135 - : "=r" (r0) 136 - : "r" (r0), "r" (r1), "r" (r2) 137 - : "r3", "r12"); 138 - } while (r0 == QCOM_SCM_INTERRUPTED); 139 - 140 - return r0; 141 - } 142 - 143 - /** 144 - * qcom_scm_call() - Send an SCM command 145 - * @dev: struct device 146 - * @svc_id: service identifier 147 - * @cmd_id: command identifier 148 - * @cmd_buf: command buffer 149 - * @cmd_len: length of the command buffer 150 - * @resp_buf: response buffer 151 - * @resp_len: length of the response buffer 152 - * 153 - * Sends a command to the SCM and waits for the command to finish processing. 154 - * 155 - * A note on cache maintenance: 156 - * Note that any buffers that are expected to be accessed by the secure world 157 - * must be flushed before invoking qcom_scm_call and invalidated in the cache 158 - * immediately after qcom_scm_call returns. Cache maintenance on the command 159 - * and response buffers is taken care of by qcom_scm_call; however, callers are 160 - * responsible for any other cached buffers passed over to the secure world. 
161 - */ 162 - static int qcom_scm_call(struct device *dev, u32 svc_id, u32 cmd_id, 163 - const void *cmd_buf, size_t cmd_len, void *resp_buf, 164 - size_t resp_len) 165 - { 166 - int ret; 167 - struct qcom_scm_command *cmd; 168 - struct qcom_scm_response *rsp; 169 - size_t alloc_len = sizeof(*cmd) + cmd_len + sizeof(*rsp) + resp_len; 170 - dma_addr_t cmd_phys; 171 - 172 - cmd = kzalloc(PAGE_ALIGN(alloc_len), GFP_KERNEL); 173 - if (!cmd) 174 - return -ENOMEM; 175 - 176 - cmd->len = cpu_to_le32(alloc_len); 177 - cmd->buf_offset = cpu_to_le32(sizeof(*cmd)); 178 - cmd->resp_hdr_offset = cpu_to_le32(sizeof(*cmd) + cmd_len); 179 - 180 - cmd->id = cpu_to_le32((svc_id << 10) | cmd_id); 181 - if (cmd_buf) 182 - memcpy(qcom_scm_get_command_buffer(cmd), cmd_buf, cmd_len); 183 - 184 - rsp = qcom_scm_command_to_response(cmd); 185 - 186 - cmd_phys = dma_map_single(dev, cmd, alloc_len, DMA_TO_DEVICE); 187 - if (dma_mapping_error(dev, cmd_phys)) { 188 - kfree(cmd); 189 - return -ENOMEM; 190 - } 191 - 192 - mutex_lock(&qcom_scm_lock); 193 - ret = smc(cmd_phys); 194 - if (ret < 0) 195 - ret = qcom_scm_remap_error(ret); 196 - mutex_unlock(&qcom_scm_lock); 197 - if (ret) 198 - goto out; 199 - 200 - do { 201 - dma_sync_single_for_cpu(dev, cmd_phys + sizeof(*cmd) + cmd_len, 202 - sizeof(*rsp), DMA_FROM_DEVICE); 203 - } while (!rsp->is_complete); 204 - 205 - if (resp_buf) { 206 - dma_sync_single_for_cpu(dev, cmd_phys + sizeof(*cmd) + cmd_len + 207 - le32_to_cpu(rsp->buf_offset), 208 - resp_len, DMA_FROM_DEVICE); 209 - memcpy(resp_buf, qcom_scm_get_response_buffer(rsp), 210 - resp_len); 211 - } 212 - out: 213 - dma_unmap_single(dev, cmd_phys, alloc_len, DMA_TO_DEVICE); 214 - kfree(cmd); 215 - return ret; 216 - } 217 - 218 - #define SCM_CLASS_REGISTER (0x2 << 8) 219 - #define SCM_MASK_IRQS BIT(5) 220 - #define SCM_ATOMIC(svc, cmd, n) (((((svc) << 10)|((cmd) & 0x3ff)) << 12) | \ 221 - SCM_CLASS_REGISTER | \ 222 - SCM_MASK_IRQS | \ 223 - (n & 0xf)) 224 - 225 - /** 226 - * qcom_scm_call_atomic1() - Send an atomic SCM command with one argument 227 - * @svc_id: service identifier 228 - * @cmd_id: command identifier 229 - * @arg1: first argument 230 - * 231 - * This shall only be used with commands that are guaranteed to be 232 - * uninterruptable, atomic and SMP safe. 233 - */ 234 - static s32 qcom_scm_call_atomic1(u32 svc, u32 cmd, u32 arg1) 235 - { 236 - int context_id; 237 - 238 - register u32 r0 asm("r0") = SCM_ATOMIC(svc, cmd, 1); 239 - register u32 r1 asm("r1") = (u32)&context_id; 240 - register u32 r2 asm("r2") = arg1; 241 - 242 - asm volatile( 243 - __asmeq("%0", "r0") 244 - __asmeq("%1", "r0") 245 - __asmeq("%2", "r1") 246 - __asmeq("%3", "r2") 247 - #ifdef REQUIRES_SEC 248 - ".arch_extension sec\n" 249 - #endif 250 - "smc #0 @ switch to secure world\n" 251 - : "=r" (r0) 252 - : "r" (r0), "r" (r1), "r" (r2) 253 - : "r3", "r12"); 254 - return r0; 255 - } 256 - 257 - /** 258 - * qcom_scm_call_atomic2() - Send an atomic SCM command with two arguments 259 - * @svc_id: service identifier 260 - * @cmd_id: command identifier 261 - * @arg1: first argument 262 - * @arg2: second argument 263 - * 264 - * This shall only be used with commands that are guaranteed to be 265 - * uninterruptable, atomic and SMP safe. 
266 - */ 267 - static s32 qcom_scm_call_atomic2(u32 svc, u32 cmd, u32 arg1, u32 arg2) 268 - { 269 - int context_id; 270 - 271 - register u32 r0 asm("r0") = SCM_ATOMIC(svc, cmd, 2); 272 - register u32 r1 asm("r1") = (u32)&context_id; 273 - register u32 r2 asm("r2") = arg1; 274 - register u32 r3 asm("r3") = arg2; 275 - 276 - asm volatile( 277 - __asmeq("%0", "r0") 278 - __asmeq("%1", "r0") 279 - __asmeq("%2", "r1") 280 - __asmeq("%3", "r2") 281 - __asmeq("%4", "r3") 282 - #ifdef REQUIRES_SEC 283 - ".arch_extension sec\n" 284 - #endif 285 - "smc #0 @ switch to secure world\n" 286 - : "=r" (r0) 287 - : "r" (r0), "r" (r1), "r" (r2), "r" (r3) 288 - : "r12"); 289 - return r0; 290 - } 291 - 292 - u32 qcom_scm_get_version(void) 293 - { 294 - int context_id; 295 - static u32 version = -1; 296 - register u32 r0 asm("r0"); 297 - register u32 r1 asm("r1"); 298 - 299 - if (version != -1) 300 - return version; 301 - 302 - mutex_lock(&qcom_scm_lock); 303 - 304 - r0 = 0x1 << 8; 305 - r1 = (u32)&context_id; 306 - do { 307 - asm volatile( 308 - __asmeq("%0", "r0") 309 - __asmeq("%1", "r1") 310 - __asmeq("%2", "r0") 311 - __asmeq("%3", "r1") 312 - #ifdef REQUIRES_SEC 313 - ".arch_extension sec\n" 314 - #endif 315 - "smc #0 @ switch to secure world\n" 316 - : "=r" (r0), "=r" (r1) 317 - : "r" (r0), "r" (r1) 318 - : "r2", "r3", "r12"); 319 - } while (r0 == QCOM_SCM_INTERRUPTED); 320 - 321 - version = r1; 322 - mutex_unlock(&qcom_scm_lock); 323 - 324 - return version; 325 - } 326 - EXPORT_SYMBOL(qcom_scm_get_version); 327 - 328 - /** 329 - * qcom_scm_set_cold_boot_addr() - Set the cold boot address for cpus 330 - * @entry: Entry point function for the cpus 331 - * @cpus: The cpumask of cpus that will use the entry point 332 - * 333 - * Set the cold boot address of the cpus. Any cpu outside the supported 334 - * range would be removed from the cpu present mask. 335 - */ 336 - int __qcom_scm_set_cold_boot_addr(void *entry, const cpumask_t *cpus) 337 - { 338 - int flags = 0; 339 - int cpu; 340 - int scm_cb_flags[] = { 341 - QCOM_SCM_FLAG_COLDBOOT_CPU0, 342 - QCOM_SCM_FLAG_COLDBOOT_CPU1, 343 - QCOM_SCM_FLAG_COLDBOOT_CPU2, 344 - QCOM_SCM_FLAG_COLDBOOT_CPU3, 345 - }; 346 - 347 - if (!cpus || (cpus && cpumask_empty(cpus))) 348 - return -EINVAL; 349 - 350 - for_each_cpu(cpu, cpus) { 351 - if (cpu < ARRAY_SIZE(scm_cb_flags)) 352 - flags |= scm_cb_flags[cpu]; 353 - else 354 - set_cpu_present(cpu, false); 355 - } 356 - 357 - return qcom_scm_call_atomic2(QCOM_SCM_SVC_BOOT, QCOM_SCM_BOOT_ADDR, 358 - flags, virt_to_phys(entry)); 359 - } 360 - 361 - /** 362 - * qcom_scm_set_warm_boot_addr() - Set the warm boot address for cpus 363 - * @entry: Entry point function for the cpus 364 - * @cpus: The cpumask of cpus that will use the entry point 365 - * 366 - * Set the Linux entry point for the SCM to transfer control to when coming 367 - * out of a power down. CPU power down may be executed on cpuidle or hotplug. 368 - */ 369 - int __qcom_scm_set_warm_boot_addr(struct device *dev, void *entry, 370 - const cpumask_t *cpus) 371 - { 372 - int ret; 373 - int flags = 0; 374 - int cpu; 375 - struct { 376 - __le32 flags; 377 - __le32 addr; 378 - } cmd; 379 - 380 - /* 381 - * Reassign only if we are switching from hotplug entry point 382 - * to cpuidle entry point or vice versa. 
383 - */ 384 - for_each_cpu(cpu, cpus) { 385 - if (entry == qcom_scm_wb[cpu].entry) 386 - continue; 387 - flags |= qcom_scm_wb[cpu].flag; 388 - } 389 - 390 - /* No change in entry function */ 391 - if (!flags) 392 - return 0; 393 - 394 - cmd.addr = cpu_to_le32(virt_to_phys(entry)); 395 - cmd.flags = cpu_to_le32(flags); 396 - ret = qcom_scm_call(dev, QCOM_SCM_SVC_BOOT, QCOM_SCM_BOOT_ADDR, 397 - &cmd, sizeof(cmd), NULL, 0); 398 - if (!ret) { 399 - for_each_cpu(cpu, cpus) 400 - qcom_scm_wb[cpu].entry = entry; 401 - } 402 - 403 - return ret; 404 - } 405 - 406 - /** 407 - * qcom_scm_cpu_power_down() - Power down the cpu 408 - * @flags - Flags to flush cache 409 - * 410 - * This is an end point to power down cpu. If there was a pending interrupt, 411 - * the control would return from this function, otherwise, the cpu jumps to the 412 - * warm boot entry point set for this cpu upon reset. 413 - */ 414 - void __qcom_scm_cpu_power_down(u32 flags) 415 - { 416 - qcom_scm_call_atomic1(QCOM_SCM_SVC_BOOT, QCOM_SCM_CMD_TERMINATE_PC, 417 - flags & QCOM_SCM_FLUSH_FLAG_MASK); 418 - } 419 - 420 - int __qcom_scm_is_call_available(struct device *dev, u32 svc_id, u32 cmd_id) 421 - { 422 - int ret; 423 - __le32 svc_cmd = cpu_to_le32((svc_id << 10) | cmd_id); 424 - __le32 ret_val = 0; 425 - 426 - ret = qcom_scm_call(dev, QCOM_SCM_SVC_INFO, QCOM_IS_CALL_AVAIL_CMD, 427 - &svc_cmd, sizeof(svc_cmd), &ret_val, 428 - sizeof(ret_val)); 429 - if (ret) 430 - return ret; 431 - 432 - return le32_to_cpu(ret_val); 433 - } 434 - 435 - int __qcom_scm_hdcp_req(struct device *dev, struct qcom_scm_hdcp_req *req, 436 - u32 req_cnt, u32 *resp) 437 - { 438 - if (req_cnt > QCOM_SCM_HDCP_MAX_REQ_CNT) 439 - return -ERANGE; 440 - 441 - return qcom_scm_call(dev, QCOM_SCM_SVC_HDCP, QCOM_SCM_CMD_HDCP, 442 - req, req_cnt * sizeof(*req), resp, sizeof(*resp)); 443 - } 444 - 445 - int __qcom_scm_ocmem_lock(struct device *dev, u32 id, u32 offset, u32 size, 446 - u32 mode) 447 - { 448 - struct ocmem_tz_lock { 449 - __le32 id; 450 - __le32 offset; 451 - __le32 size; 452 - __le32 mode; 453 - } request; 454 - 455 - request.id = cpu_to_le32(id); 456 - request.offset = cpu_to_le32(offset); 457 - request.size = cpu_to_le32(size); 458 - request.mode = cpu_to_le32(mode); 459 - 460 - return qcom_scm_call(dev, QCOM_SCM_OCMEM_SVC, QCOM_SCM_OCMEM_LOCK_CMD, 461 - &request, sizeof(request), NULL, 0); 462 - } 463 - 464 - int __qcom_scm_ocmem_unlock(struct device *dev, u32 id, u32 offset, u32 size) 465 - { 466 - struct ocmem_tz_unlock { 467 - __le32 id; 468 - __le32 offset; 469 - __le32 size; 470 - } request; 471 - 472 - request.id = cpu_to_le32(id); 473 - request.offset = cpu_to_le32(offset); 474 - request.size = cpu_to_le32(size); 475 - 476 - return qcom_scm_call(dev, QCOM_SCM_OCMEM_SVC, QCOM_SCM_OCMEM_UNLOCK_CMD, 477 - &request, sizeof(request), NULL, 0); 478 - } 479 - 480 - void __qcom_scm_init(void) 481 - { 482 - } 483 - 484 - bool __qcom_scm_pas_supported(struct device *dev, u32 peripheral) 485 - { 486 - __le32 out; 487 - __le32 in; 488 - int ret; 489 - 490 - in = cpu_to_le32(peripheral); 491 - ret = qcom_scm_call(dev, QCOM_SCM_SVC_PIL, 492 - QCOM_SCM_PAS_IS_SUPPORTED_CMD, 493 - &in, sizeof(in), 494 - &out, sizeof(out)); 495 - 496 - return ret ? 
false : !!out; 497 - } 498 - 499 - int __qcom_scm_pas_init_image(struct device *dev, u32 peripheral, 500 - dma_addr_t metadata_phys) 501 - { 502 - __le32 scm_ret; 503 - int ret; 504 - struct { 505 - __le32 proc; 506 - __le32 image_addr; 507 - } request; 508 - 509 - request.proc = cpu_to_le32(peripheral); 510 - request.image_addr = cpu_to_le32(metadata_phys); 511 - 512 - ret = qcom_scm_call(dev, QCOM_SCM_SVC_PIL, 513 - QCOM_SCM_PAS_INIT_IMAGE_CMD, 514 - &request, sizeof(request), 515 - &scm_ret, sizeof(scm_ret)); 516 - 517 - return ret ? : le32_to_cpu(scm_ret); 518 - } 519 - 520 - int __qcom_scm_pas_mem_setup(struct device *dev, u32 peripheral, 521 - phys_addr_t addr, phys_addr_t size) 522 - { 523 - __le32 scm_ret; 524 - int ret; 525 - struct { 526 - __le32 proc; 527 - __le32 addr; 528 - __le32 len; 529 - } request; 530 - 531 - request.proc = cpu_to_le32(peripheral); 532 - request.addr = cpu_to_le32(addr); 533 - request.len = cpu_to_le32(size); 534 - 535 - ret = qcom_scm_call(dev, QCOM_SCM_SVC_PIL, 536 - QCOM_SCM_PAS_MEM_SETUP_CMD, 537 - &request, sizeof(request), 538 - &scm_ret, sizeof(scm_ret)); 539 - 540 - return ret ? : le32_to_cpu(scm_ret); 541 - } 542 - 543 - int __qcom_scm_pas_auth_and_reset(struct device *dev, u32 peripheral) 544 - { 545 - __le32 out; 546 - __le32 in; 547 - int ret; 548 - 549 - in = cpu_to_le32(peripheral); 550 - ret = qcom_scm_call(dev, QCOM_SCM_SVC_PIL, 551 - QCOM_SCM_PAS_AUTH_AND_RESET_CMD, 552 - &in, sizeof(in), 553 - &out, sizeof(out)); 554 - 555 - return ret ? : le32_to_cpu(out); 556 - } 557 - 558 - int __qcom_scm_pas_shutdown(struct device *dev, u32 peripheral) 559 - { 560 - __le32 out; 561 - __le32 in; 562 - int ret; 563 - 564 - in = cpu_to_le32(peripheral); 565 - ret = qcom_scm_call(dev, QCOM_SCM_SVC_PIL, 566 - QCOM_SCM_PAS_SHUTDOWN_CMD, 567 - &in, sizeof(in), 568 - &out, sizeof(out)); 569 - 570 - return ret ? : le32_to_cpu(out); 571 - } 572 - 573 - int __qcom_scm_pas_mss_reset(struct device *dev, bool reset) 574 - { 575 - __le32 out; 576 - __le32 in = cpu_to_le32(reset); 577 - int ret; 578 - 579 - ret = qcom_scm_call(dev, QCOM_SCM_SVC_PIL, QCOM_SCM_PAS_MSS_RESET, 580 - &in, sizeof(in), 581 - &out, sizeof(out)); 582 - 583 - return ret ? : le32_to_cpu(out); 584 - } 585 - 586 - int __qcom_scm_set_dload_mode(struct device *dev, bool enable) 587 - { 588 - return qcom_scm_call_atomic2(QCOM_SCM_SVC_BOOT, QCOM_SCM_SET_DLOAD_MODE, 589 - enable ? QCOM_SCM_SET_DLOAD_MODE : 0, 0); 590 - } 591 - 592 - int __qcom_scm_set_remote_state(struct device *dev, u32 state, u32 id) 593 - { 594 - struct { 595 - __le32 state; 596 - __le32 id; 597 - } req; 598 - __le32 scm_ret = 0; 599 - int ret; 600 - 601 - req.state = cpu_to_le32(state); 602 - req.id = cpu_to_le32(id); 603 - 604 - ret = qcom_scm_call(dev, QCOM_SCM_SVC_BOOT, QCOM_SCM_SET_REMOTE_STATE, 605 - &req, sizeof(req), &scm_ret, sizeof(scm_ret)); 606 - 607 - return ret ? 
: le32_to_cpu(scm_ret); 608 - } 609 - 610 - int __qcom_scm_assign_mem(struct device *dev, phys_addr_t mem_region, 611 - size_t mem_sz, phys_addr_t src, size_t src_sz, 612 - phys_addr_t dest, size_t dest_sz) 613 - { 614 - return -ENODEV; 615 - } 616 - 617 - int __qcom_scm_restore_sec_cfg(struct device *dev, u32 device_id, 618 - u32 spare) 619 - { 620 - struct msm_scm_sec_cfg { 621 - __le32 id; 622 - __le32 ctx_bank_num; 623 - } cfg; 624 - int ret, scm_ret = 0; 625 - 626 - cfg.id = cpu_to_le32(device_id); 627 - cfg.ctx_bank_num = cpu_to_le32(spare); 628 - 629 - ret = qcom_scm_call(dev, QCOM_SCM_SVC_MP, QCOM_SCM_RESTORE_SEC_CFG, 630 - &cfg, sizeof(cfg), &scm_ret, sizeof(scm_ret)); 631 - 632 - if (ret || scm_ret) 633 - return ret ? ret : -EINVAL; 634 - 635 - return 0; 636 - } 637 - 638 - int __qcom_scm_iommu_secure_ptbl_size(struct device *dev, u32 spare, 639 - size_t *size) 640 - { 641 - return -ENODEV; 642 - } 643 - 644 - int __qcom_scm_iommu_secure_ptbl_init(struct device *dev, u64 addr, u32 size, 645 - u32 spare) 646 - { 647 - return -ENODEV; 648 - } 649 - 650 - int __qcom_scm_io_readl(struct device *dev, phys_addr_t addr, 651 - unsigned int *val) 652 - { 653 - int ret; 654 - 655 - ret = qcom_scm_call_atomic1(QCOM_SCM_SVC_IO, QCOM_SCM_IO_READ, addr); 656 - if (ret >= 0) 657 - *val = ret; 658 - 659 - return ret < 0 ? ret : 0; 660 - } 661 - 662 - int __qcom_scm_io_writel(struct device *dev, phys_addr_t addr, unsigned int val) 663 - { 664 - return qcom_scm_call_atomic2(QCOM_SCM_SVC_IO, QCOM_SCM_IO_WRITE, 665 - addr, val); 666 - } 667 - 668 - int __qcom_scm_qsmmu500_wait_safe_toggle(struct device *dev, bool enable) 669 - { 670 - return -ENODEV; 671 - }
···
-579
drivers/firmware/qcom_scm-64.c
··· 1 - // SPDX-License-Identifier: GPL-2.0-only 2 - /* Copyright (c) 2015, The Linux Foundation. All rights reserved. 3 - */ 4 - 5 - #include <linux/io.h> 6 - #include <linux/errno.h> 7 - #include <linux/delay.h> 8 - #include <linux/mutex.h> 9 - #include <linux/slab.h> 10 - #include <linux/types.h> 11 - #include <linux/qcom_scm.h> 12 - #include <linux/arm-smccc.h> 13 - #include <linux/dma-mapping.h> 14 - 15 - #include "qcom_scm.h" 16 - 17 - #define QCOM_SCM_FNID(s, c) ((((s) & 0xFF) << 8) | ((c) & 0xFF)) 18 - 19 - #define MAX_QCOM_SCM_ARGS 10 20 - #define MAX_QCOM_SCM_RETS 3 21 - 22 - enum qcom_scm_arg_types { 23 - QCOM_SCM_VAL, 24 - QCOM_SCM_RO, 25 - QCOM_SCM_RW, 26 - QCOM_SCM_BUFVAL, 27 - }; 28 - 29 - #define QCOM_SCM_ARGS_IMPL(num, a, b, c, d, e, f, g, h, i, j, ...) (\ 30 - (((a) & 0x3) << 4) | \ 31 - (((b) & 0x3) << 6) | \ 32 - (((c) & 0x3) << 8) | \ 33 - (((d) & 0x3) << 10) | \ 34 - (((e) & 0x3) << 12) | \ 35 - (((f) & 0x3) << 14) | \ 36 - (((g) & 0x3) << 16) | \ 37 - (((h) & 0x3) << 18) | \ 38 - (((i) & 0x3) << 20) | \ 39 - (((j) & 0x3) << 22) | \ 40 - ((num) & 0xf)) 41 - 42 - #define QCOM_SCM_ARGS(...) QCOM_SCM_ARGS_IMPL(__VA_ARGS__, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) 43 - 44 - /** 45 - * struct qcom_scm_desc 46 - * @arginfo: Metadata describing the arguments in args[] 47 - * @args: The array of arguments for the secure syscall 48 - * @res: The values returned by the secure syscall 49 - */ 50 - struct qcom_scm_desc { 51 - u32 arginfo; 52 - u64 args[MAX_QCOM_SCM_ARGS]; 53 - }; 54 - 55 - static u64 qcom_smccc_convention = -1; 56 - static DEFINE_MUTEX(qcom_scm_lock); 57 - 58 - #define QCOM_SCM_EBUSY_WAIT_MS 30 59 - #define QCOM_SCM_EBUSY_MAX_RETRY 20 60 - 61 - #define N_EXT_QCOM_SCM_ARGS 7 62 - #define FIRST_EXT_ARG_IDX 3 63 - #define N_REGISTER_ARGS (MAX_QCOM_SCM_ARGS - N_EXT_QCOM_SCM_ARGS + 1) 64 - 65 - static void __qcom_scm_call_do(const struct qcom_scm_desc *desc, 66 - struct arm_smccc_res *res, u32 fn_id, 67 - u64 x5, u32 type) 68 - { 69 - u64 cmd; 70 - struct arm_smccc_quirk quirk = { .id = ARM_SMCCC_QUIRK_QCOM_A6 }; 71 - 72 - cmd = ARM_SMCCC_CALL_VAL(type, qcom_smccc_convention, 73 - ARM_SMCCC_OWNER_SIP, fn_id); 74 - 75 - quirk.state.a6 = 0; 76 - 77 - do { 78 - arm_smccc_smc_quirk(cmd, desc->arginfo, desc->args[0], 79 - desc->args[1], desc->args[2], x5, 80 - quirk.state.a6, 0, res, &quirk); 81 - 82 - if (res->a0 == QCOM_SCM_INTERRUPTED) 83 - cmd = res->a0; 84 - 85 - } while (res->a0 == QCOM_SCM_INTERRUPTED); 86 - } 87 - 88 - static void qcom_scm_call_do(const struct qcom_scm_desc *desc, 89 - struct arm_smccc_res *res, u32 fn_id, 90 - u64 x5, bool atomic) 91 - { 92 - int retry_count = 0; 93 - 94 - if (atomic) { 95 - __qcom_scm_call_do(desc, res, fn_id, x5, ARM_SMCCC_FAST_CALL); 96 - return; 97 - } 98 - 99 - do { 100 - mutex_lock(&qcom_scm_lock); 101 - 102 - __qcom_scm_call_do(desc, res, fn_id, x5, 103 - ARM_SMCCC_STD_CALL); 104 - 105 - mutex_unlock(&qcom_scm_lock); 106 - 107 - if (res->a0 == QCOM_SCM_V2_EBUSY) { 108 - if (retry_count++ > QCOM_SCM_EBUSY_MAX_RETRY) 109 - break; 110 - msleep(QCOM_SCM_EBUSY_WAIT_MS); 111 - } 112 - } while (res->a0 == QCOM_SCM_V2_EBUSY); 113 - } 114 - 115 - static int ___qcom_scm_call(struct device *dev, u32 svc_id, u32 cmd_id, 116 - const struct qcom_scm_desc *desc, 117 - struct arm_smccc_res *res, bool atomic) 118 - { 119 - int arglen = desc->arginfo & 0xf; 120 - int i; 121 - u32 fn_id = QCOM_SCM_FNID(svc_id, cmd_id); 122 - u64 x5 = desc->args[FIRST_EXT_ARG_IDX]; 123 - dma_addr_t args_phys = 0; 124 - void *args_virt = NULL; 125 - size_t alloc_len; 126 
- gfp_t flag = atomic ? GFP_ATOMIC : GFP_KERNEL; 127 - 128 - if (unlikely(arglen > N_REGISTER_ARGS)) { 129 - alloc_len = N_EXT_QCOM_SCM_ARGS * sizeof(u64); 130 - args_virt = kzalloc(PAGE_ALIGN(alloc_len), flag); 131 - 132 - if (!args_virt) 133 - return -ENOMEM; 134 - 135 - if (qcom_smccc_convention == ARM_SMCCC_SMC_32) { 136 - __le32 *args = args_virt; 137 - 138 - for (i = 0; i < N_EXT_QCOM_SCM_ARGS; i++) 139 - args[i] = cpu_to_le32(desc->args[i + 140 - FIRST_EXT_ARG_IDX]); 141 - } else { 142 - __le64 *args = args_virt; 143 - 144 - for (i = 0; i < N_EXT_QCOM_SCM_ARGS; i++) 145 - args[i] = cpu_to_le64(desc->args[i + 146 - FIRST_EXT_ARG_IDX]); 147 - } 148 - 149 - args_phys = dma_map_single(dev, args_virt, alloc_len, 150 - DMA_TO_DEVICE); 151 - 152 - if (dma_mapping_error(dev, args_phys)) { 153 - kfree(args_virt); 154 - return -ENOMEM; 155 - } 156 - 157 - x5 = args_phys; 158 - } 159 - 160 - qcom_scm_call_do(desc, res, fn_id, x5, atomic); 161 - 162 - if (args_virt) { 163 - dma_unmap_single(dev, args_phys, alloc_len, DMA_TO_DEVICE); 164 - kfree(args_virt); 165 - } 166 - 167 - if ((long)res->a0 < 0) 168 - return qcom_scm_remap_error(res->a0); 169 - 170 - return 0; 171 - } 172 - 173 - /** 174 - * qcom_scm_call() - Invoke a syscall in the secure world 175 - * @dev: device 176 - * @svc_id: service identifier 177 - * @cmd_id: command identifier 178 - * @desc: Descriptor structure containing arguments and return values 179 - * 180 - * Sends a command to the SCM and waits for the command to finish processing. 181 - * This should *only* be called in pre-emptible context. 182 - */ 183 - static int qcom_scm_call(struct device *dev, u32 svc_id, u32 cmd_id, 184 - const struct qcom_scm_desc *desc, 185 - struct arm_smccc_res *res) 186 - { 187 - might_sleep(); 188 - return ___qcom_scm_call(dev, svc_id, cmd_id, desc, res, false); 189 - } 190 - 191 - /** 192 - * qcom_scm_call_atomic() - atomic variation of qcom_scm_call() 193 - * @dev: device 194 - * @svc_id: service identifier 195 - * @cmd_id: command identifier 196 - * @desc: Descriptor structure containing arguments and return values 197 - * @res: Structure containing results from SMC/HVC call 198 - * 199 - * Sends a command to the SCM and waits for the command to finish processing. 200 - * This can be called in atomic context. 201 - */ 202 - static int qcom_scm_call_atomic(struct device *dev, u32 svc_id, u32 cmd_id, 203 - const struct qcom_scm_desc *desc, 204 - struct arm_smccc_res *res) 205 - { 206 - return ___qcom_scm_call(dev, svc_id, cmd_id, desc, res, true); 207 - } 208 - 209 - /** 210 - * qcom_scm_set_cold_boot_addr() - Set the cold boot address for cpus 211 - * @entry: Entry point function for the cpus 212 - * @cpus: The cpumask of cpus that will use the entry point 213 - * 214 - * Set the cold boot address of the cpus. Any cpu outside the supported 215 - * range would be removed from the cpu present mask. 216 - */ 217 - int __qcom_scm_set_cold_boot_addr(void *entry, const cpumask_t *cpus) 218 - { 219 - return -ENOTSUPP; 220 - } 221 - 222 - /** 223 - * qcom_scm_set_warm_boot_addr() - Set the warm boot address for cpus 224 - * @dev: Device pointer 225 - * @entry: Entry point function for the cpus 226 - * @cpus: The cpumask of cpus that will use the entry point 227 - * 228 - * Set the Linux entry point for the SCM to transfer control to when coming 229 - * out of a power down. CPU power down may be executed on cpuidle or hotplug. 
230 - */ 231 - int __qcom_scm_set_warm_boot_addr(struct device *dev, void *entry, 232 - const cpumask_t *cpus) 233 - { 234 - return -ENOTSUPP; 235 - } 236 - 237 - /** 238 - * qcom_scm_cpu_power_down() - Power down the cpu 239 - * @flags - Flags to flush cache 240 - * 241 - * This is an end point to power down cpu. If there was a pending interrupt, 242 - * the control would return from this function, otherwise, the cpu jumps to the 243 - * warm boot entry point set for this cpu upon reset. 244 - */ 245 - void __qcom_scm_cpu_power_down(u32 flags) 246 - { 247 - } 248 - 249 - int __qcom_scm_is_call_available(struct device *dev, u32 svc_id, u32 cmd_id) 250 - { 251 - int ret; 252 - struct qcom_scm_desc desc = {0}; 253 - struct arm_smccc_res res; 254 - 255 - desc.arginfo = QCOM_SCM_ARGS(1); 256 - desc.args[0] = QCOM_SCM_FNID(svc_id, cmd_id) | 257 - (ARM_SMCCC_OWNER_SIP << ARM_SMCCC_OWNER_SHIFT); 258 - 259 - ret = qcom_scm_call(dev, QCOM_SCM_SVC_INFO, QCOM_IS_CALL_AVAIL_CMD, 260 - &desc, &res); 261 - 262 - return ret ? : res.a1; 263 - } 264 - 265 - int __qcom_scm_hdcp_req(struct device *dev, struct qcom_scm_hdcp_req *req, 266 - u32 req_cnt, u32 *resp) 267 - { 268 - int ret; 269 - struct qcom_scm_desc desc = {0}; 270 - struct arm_smccc_res res; 271 - 272 - if (req_cnt > QCOM_SCM_HDCP_MAX_REQ_CNT) 273 - return -ERANGE; 274 - 275 - desc.args[0] = req[0].addr; 276 - desc.args[1] = req[0].val; 277 - desc.args[2] = req[1].addr; 278 - desc.args[3] = req[1].val; 279 - desc.args[4] = req[2].addr; 280 - desc.args[5] = req[2].val; 281 - desc.args[6] = req[3].addr; 282 - desc.args[7] = req[3].val; 283 - desc.args[8] = req[4].addr; 284 - desc.args[9] = req[4].val; 285 - desc.arginfo = QCOM_SCM_ARGS(10); 286 - 287 - ret = qcom_scm_call(dev, QCOM_SCM_SVC_HDCP, QCOM_SCM_CMD_HDCP, &desc, 288 - &res); 289 - *resp = res.a1; 290 - 291 - return ret; 292 - } 293 - 294 - int __qcom_scm_ocmem_lock(struct device *dev, uint32_t id, uint32_t offset, 295 - uint32_t size, uint32_t mode) 296 - { 297 - return -ENOTSUPP; 298 - } 299 - 300 - int __qcom_scm_ocmem_unlock(struct device *dev, uint32_t id, uint32_t offset, 301 - uint32_t size) 302 - { 303 - return -ENOTSUPP; 304 - } 305 - 306 - void __qcom_scm_init(void) 307 - { 308 - u64 cmd; 309 - struct arm_smccc_res res; 310 - u32 function = QCOM_SCM_FNID(QCOM_SCM_SVC_INFO, QCOM_IS_CALL_AVAIL_CMD); 311 - 312 - /* First try a SMC64 call */ 313 - cmd = ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, ARM_SMCCC_SMC_64, 314 - ARM_SMCCC_OWNER_SIP, function); 315 - 316 - arm_smccc_smc(cmd, QCOM_SCM_ARGS(1), cmd & (~BIT(ARM_SMCCC_TYPE_SHIFT)), 317 - 0, 0, 0, 0, 0, &res); 318 - 319 - if (!res.a0 && res.a1) 320 - qcom_smccc_convention = ARM_SMCCC_SMC_64; 321 - else 322 - qcom_smccc_convention = ARM_SMCCC_SMC_32; 323 - } 324 - 325 - bool __qcom_scm_pas_supported(struct device *dev, u32 peripheral) 326 - { 327 - int ret; 328 - struct qcom_scm_desc desc = {0}; 329 - struct arm_smccc_res res; 330 - 331 - desc.args[0] = peripheral; 332 - desc.arginfo = QCOM_SCM_ARGS(1); 333 - 334 - ret = qcom_scm_call(dev, QCOM_SCM_SVC_PIL, 335 - QCOM_SCM_PAS_IS_SUPPORTED_CMD, 336 - &desc, &res); 337 - 338 - return ret ? 
false : !!res.a1; 339 - } 340 - 341 - int __qcom_scm_pas_init_image(struct device *dev, u32 peripheral, 342 - dma_addr_t metadata_phys) 343 - { 344 - int ret; 345 - struct qcom_scm_desc desc = {0}; 346 - struct arm_smccc_res res; 347 - 348 - desc.args[0] = peripheral; 349 - desc.args[1] = metadata_phys; 350 - desc.arginfo = QCOM_SCM_ARGS(2, QCOM_SCM_VAL, QCOM_SCM_RW); 351 - 352 - ret = qcom_scm_call(dev, QCOM_SCM_SVC_PIL, QCOM_SCM_PAS_INIT_IMAGE_CMD, 353 - &desc, &res); 354 - 355 - return ret ? : res.a1; 356 - } 357 - 358 - int __qcom_scm_pas_mem_setup(struct device *dev, u32 peripheral, 359 - phys_addr_t addr, phys_addr_t size) 360 - { 361 - int ret; 362 - struct qcom_scm_desc desc = {0}; 363 - struct arm_smccc_res res; 364 - 365 - desc.args[0] = peripheral; 366 - desc.args[1] = addr; 367 - desc.args[2] = size; 368 - desc.arginfo = QCOM_SCM_ARGS(3); 369 - 370 - ret = qcom_scm_call(dev, QCOM_SCM_SVC_PIL, QCOM_SCM_PAS_MEM_SETUP_CMD, 371 - &desc, &res); 372 - 373 - return ret ? : res.a1; 374 - } 375 - 376 - int __qcom_scm_pas_auth_and_reset(struct device *dev, u32 peripheral) 377 - { 378 - int ret; 379 - struct qcom_scm_desc desc = {0}; 380 - struct arm_smccc_res res; 381 - 382 - desc.args[0] = peripheral; 383 - desc.arginfo = QCOM_SCM_ARGS(1); 384 - 385 - ret = qcom_scm_call(dev, QCOM_SCM_SVC_PIL, 386 - QCOM_SCM_PAS_AUTH_AND_RESET_CMD, 387 - &desc, &res); 388 - 389 - return ret ? : res.a1; 390 - } 391 - 392 - int __qcom_scm_pas_shutdown(struct device *dev, u32 peripheral) 393 - { 394 - int ret; 395 - struct qcom_scm_desc desc = {0}; 396 - struct arm_smccc_res res; 397 - 398 - desc.args[0] = peripheral; 399 - desc.arginfo = QCOM_SCM_ARGS(1); 400 - 401 - ret = qcom_scm_call(dev, QCOM_SCM_SVC_PIL, QCOM_SCM_PAS_SHUTDOWN_CMD, 402 - &desc, &res); 403 - 404 - return ret ? : res.a1; 405 - } 406 - 407 - int __qcom_scm_pas_mss_reset(struct device *dev, bool reset) 408 - { 409 - struct qcom_scm_desc desc = {0}; 410 - struct arm_smccc_res res; 411 - int ret; 412 - 413 - desc.args[0] = reset; 414 - desc.args[1] = 0; 415 - desc.arginfo = QCOM_SCM_ARGS(2); 416 - 417 - ret = qcom_scm_call(dev, QCOM_SCM_SVC_PIL, QCOM_SCM_PAS_MSS_RESET, &desc, 418 - &res); 419 - 420 - return ret ? : res.a1; 421 - } 422 - 423 - int __qcom_scm_set_remote_state(struct device *dev, u32 state, u32 id) 424 - { 425 - struct qcom_scm_desc desc = {0}; 426 - struct arm_smccc_res res; 427 - int ret; 428 - 429 - desc.args[0] = state; 430 - desc.args[1] = id; 431 - desc.arginfo = QCOM_SCM_ARGS(2); 432 - 433 - ret = qcom_scm_call(dev, QCOM_SCM_SVC_BOOT, QCOM_SCM_SET_REMOTE_STATE, 434 - &desc, &res); 435 - 436 - return ret ? : res.a1; 437 - } 438 - 439 - int __qcom_scm_assign_mem(struct device *dev, phys_addr_t mem_region, 440 - size_t mem_sz, phys_addr_t src, size_t src_sz, 441 - phys_addr_t dest, size_t dest_sz) 442 - { 443 - int ret; 444 - struct qcom_scm_desc desc = {0}; 445 - struct arm_smccc_res res; 446 - 447 - desc.args[0] = mem_region; 448 - desc.args[1] = mem_sz; 449 - desc.args[2] = src; 450 - desc.args[3] = src_sz; 451 - desc.args[4] = dest; 452 - desc.args[5] = dest_sz; 453 - desc.args[6] = 0; 454 - 455 - desc.arginfo = QCOM_SCM_ARGS(7, QCOM_SCM_RO, QCOM_SCM_VAL, 456 - QCOM_SCM_RO, QCOM_SCM_VAL, QCOM_SCM_RO, 457 - QCOM_SCM_VAL, QCOM_SCM_VAL); 458 - 459 - ret = qcom_scm_call(dev, QCOM_SCM_SVC_MP, 460 - QCOM_MEM_PROT_ASSIGN_ID, 461 - &desc, &res); 462 - 463 - return ret ? 
: res.a1; 464 - } 465 - 466 - int __qcom_scm_restore_sec_cfg(struct device *dev, u32 device_id, u32 spare) 467 - { 468 - struct qcom_scm_desc desc = {0}; 469 - struct arm_smccc_res res; 470 - int ret; 471 - 472 - desc.args[0] = device_id; 473 - desc.args[1] = spare; 474 - desc.arginfo = QCOM_SCM_ARGS(2); 475 - 476 - ret = qcom_scm_call(dev, QCOM_SCM_SVC_MP, QCOM_SCM_RESTORE_SEC_CFG, 477 - &desc, &res); 478 - 479 - return ret ? : res.a1; 480 - } 481 - 482 - int __qcom_scm_iommu_secure_ptbl_size(struct device *dev, u32 spare, 483 - size_t *size) 484 - { 485 - struct qcom_scm_desc desc = {0}; 486 - struct arm_smccc_res res; 487 - int ret; 488 - 489 - desc.args[0] = spare; 490 - desc.arginfo = QCOM_SCM_ARGS(1); 491 - 492 - ret = qcom_scm_call(dev, QCOM_SCM_SVC_MP, 493 - QCOM_SCM_IOMMU_SECURE_PTBL_SIZE, &desc, &res); 494 - 495 - if (size) 496 - *size = res.a1; 497 - 498 - return ret ? : res.a2; 499 - } 500 - 501 - int __qcom_scm_iommu_secure_ptbl_init(struct device *dev, u64 addr, u32 size, 502 - u32 spare) 503 - { 504 - struct qcom_scm_desc desc = {0}; 505 - struct arm_smccc_res res; 506 - int ret; 507 - 508 - desc.args[0] = addr; 509 - desc.args[1] = size; 510 - desc.args[2] = spare; 511 - desc.arginfo = QCOM_SCM_ARGS(3, QCOM_SCM_RW, QCOM_SCM_VAL, 512 - QCOM_SCM_VAL); 513 - 514 - ret = qcom_scm_call(dev, QCOM_SCM_SVC_MP, 515 - QCOM_SCM_IOMMU_SECURE_PTBL_INIT, &desc, &res); 516 - 517 - /* the pg table has been initialized already, ignore the error */ 518 - if (ret == -EPERM) 519 - ret = 0; 520 - 521 - return ret; 522 - } 523 - 524 - int __qcom_scm_set_dload_mode(struct device *dev, bool enable) 525 - { 526 - struct qcom_scm_desc desc = {0}; 527 - struct arm_smccc_res res; 528 - 529 - desc.args[0] = QCOM_SCM_SET_DLOAD_MODE; 530 - desc.args[1] = enable ? QCOM_SCM_SET_DLOAD_MODE : 0; 531 - desc.arginfo = QCOM_SCM_ARGS(2); 532 - 533 - return qcom_scm_call(dev, QCOM_SCM_SVC_BOOT, QCOM_SCM_SET_DLOAD_MODE, 534 - &desc, &res); 535 - } 536 - 537 - int __qcom_scm_io_readl(struct device *dev, phys_addr_t addr, 538 - unsigned int *val) 539 - { 540 - struct qcom_scm_desc desc = {0}; 541 - struct arm_smccc_res res; 542 - int ret; 543 - 544 - desc.args[0] = addr; 545 - desc.arginfo = QCOM_SCM_ARGS(1); 546 - 547 - ret = qcom_scm_call(dev, QCOM_SCM_SVC_IO, QCOM_SCM_IO_READ, 548 - &desc, &res); 549 - if (ret >= 0) 550 - *val = res.a1; 551 - 552 - return ret < 0 ? ret : 0; 553 - } 554 - 555 - int __qcom_scm_io_writel(struct device *dev, phys_addr_t addr, unsigned int val) 556 - { 557 - struct qcom_scm_desc desc = {0}; 558 - struct arm_smccc_res res; 559 - 560 - desc.args[0] = addr; 561 - desc.args[1] = val; 562 - desc.arginfo = QCOM_SCM_ARGS(2); 563 - 564 - return qcom_scm_call(dev, QCOM_SCM_SVC_IO, QCOM_SCM_IO_WRITE, 565 - &desc, &res); 566 - } 567 - 568 - int __qcom_scm_qsmmu500_wait_safe_toggle(struct device *dev, bool en) 569 - { 570 - struct qcom_scm_desc desc = {0}; 571 - struct arm_smccc_res res; 572 - 573 - desc.args[0] = QCOM_SCM_CONFIG_ERRATA1_CLIENT_ALL; 574 - desc.args[1] = en; 575 - desc.arginfo = QCOM_SCM_ARGS(2); 576 - 577 - return qcom_scm_call_atomic(dev, QCOM_SCM_SVC_SMMU_PROGRAM, 578 - QCOM_SCM_CONFIG_ERRATA1, &desc, &res); 579 - }
···
+242
drivers/firmware/qcom_scm-legacy.c
···
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + /* Copyright (c) 2010,2015,2019 The Linux Foundation. All rights reserved. 3 + * Copyright (C) 2015 Linaro Ltd. 4 + */ 5 + 6 + #include <linux/slab.h> 7 + #include <linux/io.h> 8 + #include <linux/module.h> 9 + #include <linux/mutex.h> 10 + #include <linux/errno.h> 11 + #include <linux/err.h> 12 + #include <linux/qcom_scm.h> 13 + #include <linux/arm-smccc.h> 14 + #include <linux/dma-mapping.h> 15 + 16 + #include "qcom_scm.h" 17 + 18 + static DEFINE_MUTEX(qcom_scm_lock); 19 + 20 + 21 + /** 22 + * struct arm_smccc_args 23 + * @args: The array of values used in registers in smc instruction 24 + */ 25 + struct arm_smccc_args { 26 + unsigned long args[8]; 27 + }; 28 + 29 + 30 + /** 31 + * struct scm_legacy_command - one SCM command buffer 32 + * @len: total available memory for command and response 33 + * @buf_offset: start of command buffer 34 + * @resp_hdr_offset: start of response buffer 35 + * @id: command to be executed 36 + * @buf: buffer returned from scm_legacy_get_command_buffer() 37 + * 38 + * An SCM command is laid out in memory as follows: 39 + * 40 + * ------------------- <--- struct scm_legacy_command 41 + * | command header | 42 + * ------------------- <--- scm_legacy_get_command_buffer() 43 + * | command buffer | 44 + * ------------------- <--- struct scm_legacy_response and 45 + * | response header | scm_legacy_command_to_response() 46 + * ------------------- <--- scm_legacy_get_response_buffer() 47 + * | response buffer | 48 + * ------------------- 49 + * 50 + * There can be arbitrary padding between the headers and buffers so 51 + * you should always use the appropriate scm_legacy_get_*_buffer() routines 52 + * to access the buffers in a safe manner. 53 + */ 54 + struct scm_legacy_command { 55 + __le32 len; 56 + __le32 buf_offset; 57 + __le32 resp_hdr_offset; 58 + __le32 id; 59 + __le32 buf[0]; 60 + }; 61 + 62 + /** 63 + * struct scm_legacy_response - one SCM response buffer 64 + * @len: total available memory for response 65 + * @buf_offset: start of response data relative to start of scm_legacy_response 66 + * @is_complete: indicates if the command has finished processing 67 + */ 68 + struct scm_legacy_response { 69 + __le32 len; 70 + __le32 buf_offset; 71 + __le32 is_complete; 72 + }; 73 + 74 + /** 75 + * scm_legacy_command_to_response() - Get a pointer to a scm_legacy_response 76 + * @cmd: command 77 + * 78 + * Returns a pointer to a response for a command. 79 + */ 80 + static inline struct scm_legacy_response *scm_legacy_command_to_response( 81 + const struct scm_legacy_command *cmd) 82 + { 83 + return (void *)cmd + le32_to_cpu(cmd->resp_hdr_offset); 84 + } 85 + 86 + /** 87 + * scm_legacy_get_command_buffer() - Get a pointer to a command buffer 88 + * @cmd: command 89 + * 90 + * Returns a pointer to the command buffer of a command. 91 + */ 92 + static inline void *scm_legacy_get_command_buffer( 93 + const struct scm_legacy_command *cmd) 94 + { 95 + return (void *)cmd->buf; 96 + } 97 + 98 + /** 99 + * scm_legacy_get_response_buffer() - Get a pointer to a response buffer 100 + * @rsp: response 101 + * 102 + * Returns a pointer to a response buffer of a response. 
103 + */ 104 + static inline void *scm_legacy_get_response_buffer( 105 + const struct scm_legacy_response *rsp) 106 + { 107 + return (void *)rsp + le32_to_cpu(rsp->buf_offset); 108 + } 109 + 110 + static void __scm_legacy_do(const struct arm_smccc_args *smc, 111 + struct arm_smccc_res *res) 112 + { 113 + do { 114 + arm_smccc_smc(smc->args[0], smc->args[1], smc->args[2], 115 + smc->args[3], smc->args[4], smc->args[5], 116 + smc->args[6], smc->args[7], res); 117 + } while (res->a0 == QCOM_SCM_INTERRUPTED); 118 + } 119 + 120 + /** 121 + * scm_legacy_call() - Sends a command to the SCM and waits for the command to 122 + * finish processing. 123 + * 124 + * A note on cache maintenance: 125 + * Note that any buffers that are expected to be accessed by the secure world 126 + * must be flushed before invoking scm_legacy_call() and invalidated in the 127 + * cache immediately after scm_legacy_call() returns. Cache maintenance on the 128 + * command and response buffers is taken care of by scm_legacy_call(); however, 129 + * callers are responsible for any other cached buffers passed over to the secure world. 130 + */ 131 + int scm_legacy_call(struct device *dev, const struct qcom_scm_desc *desc, 132 + struct qcom_scm_res *res) 133 + { 134 + u8 arglen = desc->arginfo & 0xf; 135 + int ret = 0, context_id; 136 + unsigned int i; 137 + struct scm_legacy_command *cmd; 138 + struct scm_legacy_response *rsp; 139 + struct arm_smccc_args smc = {0}; 140 + struct arm_smccc_res smc_res; 141 + const size_t cmd_len = arglen * sizeof(__le32); 142 + const size_t resp_len = MAX_QCOM_SCM_RETS * sizeof(__le32); 143 + size_t alloc_len = sizeof(*cmd) + cmd_len + sizeof(*rsp) + resp_len; 144 + dma_addr_t cmd_phys; 145 + __le32 *arg_buf; 146 + const __le32 *res_buf; 147 + 148 + cmd = kzalloc(PAGE_ALIGN(alloc_len), GFP_KERNEL); 149 + if (!cmd) 150 + return -ENOMEM; 151 + 152 + cmd->len = cpu_to_le32(alloc_len); 153 + cmd->buf_offset = cpu_to_le32(sizeof(*cmd)); 154 + cmd->resp_hdr_offset = cpu_to_le32(sizeof(*cmd) + cmd_len); 155 + cmd->id = cpu_to_le32(SCM_LEGACY_FNID(desc->svc, desc->cmd)); 156 + 157 + arg_buf = scm_legacy_get_command_buffer(cmd); 158 + for (i = 0; i < arglen; i++) 159 + arg_buf[i] = cpu_to_le32(desc->args[i]); 160 + 161 + rsp = scm_legacy_command_to_response(cmd); 162 + 163 + cmd_phys = dma_map_single(dev, cmd, alloc_len, DMA_TO_DEVICE); 164 + if (dma_mapping_error(dev, cmd_phys)) { 165 + kfree(cmd); 166 + return -ENOMEM; 167 + } 168 + 169 + smc.args[0] = 1; 170 + smc.args[1] = (unsigned long)&context_id; 171 + smc.args[2] = cmd_phys; 172 + 173 + mutex_lock(&qcom_scm_lock); 174 + __scm_legacy_do(&smc, &smc_res); 175 + if (smc_res.a0) 176 + ret = qcom_scm_remap_error(smc_res.a0); 177 + mutex_unlock(&qcom_scm_lock); 178 + if (ret) 179 + goto out; 180 + 181 + do { 182 + dma_sync_single_for_cpu(dev, cmd_phys + sizeof(*cmd) + cmd_len, 183 + sizeof(*rsp), DMA_FROM_DEVICE); 184 + } while (!rsp->is_complete); 185 + 186 + dma_sync_single_for_cpu(dev, cmd_phys + sizeof(*cmd) + cmd_len + 187 + le32_to_cpu(rsp->buf_offset), 188 + resp_len, DMA_FROM_DEVICE); 189 + 190 + if (res) { 191 + res_buf = scm_legacy_get_response_buffer(rsp); 192 + for (i = 0; i < MAX_QCOM_SCM_RETS; i++) 193 + res->result[i] = le32_to_cpu(res_buf[i]); 194 + } 195 + out: 196 + dma_unmap_single(dev, cmd_phys, alloc_len, DMA_TO_DEVICE); 197 + kfree(cmd); 198 + return ret; 199 + } 200 + 201 + #define SCM_LEGACY_ATOMIC_N_REG_ARGS 5 202 + #define SCM_LEGACY_ATOMIC_FIRST_REG_IDX 2 203 + #define SCM_LEGACY_CLASS_REGISTER (0x2 << 8) 204 + #define
SCM_LEGACY_MASK_IRQS BIT(5) 205 + #define SCM_LEGACY_ATOMIC_ID(svc, cmd, n) \ 206 + ((SCM_LEGACY_FNID(svc, cmd) << 12) | \ 207 + SCM_LEGACY_CLASS_REGISTER | \ 208 + SCM_LEGACY_MASK_IRQS | \ 209 + (n & 0xf)) 210 + 211 + /** 212 + * scm_legacy_call_atomic() - Send an atomic SCM command with up to 5 arguments 213 + * and 3 return values 214 + * @desc: SCM call descriptor containing arguments 215 + * @res: SCM call return values 216 + * 217 + * This shall only be used with commands that are guaranteed to be 218 + * uninterruptible, atomic and SMP safe. 219 + */ 220 + int scm_legacy_call_atomic(struct device *unused, 221 + const struct qcom_scm_desc *desc, 222 + struct qcom_scm_res *res) 223 + { 224 + int context_id; 225 + struct arm_smccc_res smc_res; 226 + size_t arglen = desc->arginfo & 0xf; 227 + 228 + BUG_ON(arglen > SCM_LEGACY_ATOMIC_N_REG_ARGS); 229 + 230 + arm_smccc_smc(SCM_LEGACY_ATOMIC_ID(desc->svc, desc->cmd, arglen), 231 + (unsigned long)&context_id, 232 + desc->args[0], desc->args[1], desc->args[2], 233 + desc->args[3], desc->args[4], 0, &smc_res); 234 + 235 + if (res) { 236 + res->result[0] = smc_res.a1; 237 + res->result[1] = smc_res.a2; 238 + res->result[2] = smc_res.a3; 239 + } 240 + 241 + return smc_res.a0; 242 + }
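To make the legacy shared-buffer protocol above concrete, a minimal caller could look like the sketch below. It is illustrative only, not part of the patch: the service and command IDs are placeholders, and in-tree users reach scm_legacy_call() through the qcom_scm_call() dispatcher rather than calling it directly.

/* Hypothetical caller of scm_legacy_call(); svc/cmd values are made up. */
static int example_legacy_query(struct device *dev, u32 input, u32 *output)
{
	struct qcom_scm_desc desc = {
		.svc = 0x01,			/* placeholder service id */
		.cmd = 0x04,			/* placeholder command id */
		.arginfo = QCOM_SCM_ARGS(1),	/* one plain value argument */
		.args[0] = input,
	};
	struct qcom_scm_res res;
	int ret;

	/* scm_legacy_call() marshals args[] into the command buffer,
	 * issues the SMC under qcom_scm_lock and polls is_complete. */
	ret = scm_legacy_call(dev, &desc, &res);
	if (ret)
		return ret;

	*output = res.result[0];
	return 0;
}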
+151
drivers/firmware/qcom_scm-smc.c
···
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + /* Copyright (c) 2015,2019 The Linux Foundation. All rights reserved. 3 + */ 4 + 5 + #include <linux/io.h> 6 + #include <linux/errno.h> 7 + #include <linux/delay.h> 8 + #include <linux/mutex.h> 9 + #include <linux/slab.h> 10 + #include <linux/types.h> 11 + #include <linux/qcom_scm.h> 12 + #include <linux/arm-smccc.h> 13 + #include <linux/dma-mapping.h> 14 + 15 + #include "qcom_scm.h" 16 + 17 + /** 18 + * struct arm_smccc_args 19 + * @args: The array of values used in registers in smc instruction 20 + */ 21 + struct arm_smccc_args { 22 + unsigned long args[8]; 23 + }; 24 + 25 + static DEFINE_MUTEX(qcom_scm_lock); 26 + 27 + #define QCOM_SCM_EBUSY_WAIT_MS 30 28 + #define QCOM_SCM_EBUSY_MAX_RETRY 20 29 + 30 + #define SCM_SMC_N_REG_ARGS 4 31 + #define SCM_SMC_FIRST_EXT_IDX (SCM_SMC_N_REG_ARGS - 1) 32 + #define SCM_SMC_N_EXT_ARGS (MAX_QCOM_SCM_ARGS - SCM_SMC_N_REG_ARGS + 1) 33 + #define SCM_SMC_FIRST_REG_IDX 2 34 + #define SCM_SMC_LAST_REG_IDX (SCM_SMC_FIRST_REG_IDX + SCM_SMC_N_REG_ARGS - 1) 35 + 36 + static void __scm_smc_do_quirk(const struct arm_smccc_args *smc, 37 + struct arm_smccc_res *res) 38 + { 39 + unsigned long a0 = smc->args[0]; 40 + struct arm_smccc_quirk quirk = { .id = ARM_SMCCC_QUIRK_QCOM_A6 }; 41 + 42 + quirk.state.a6 = 0; 43 + 44 + do { 45 + arm_smccc_smc_quirk(a0, smc->args[1], smc->args[2], 46 + smc->args[3], smc->args[4], smc->args[5], 47 + quirk.state.a6, smc->args[7], res, &quirk); 48 + 49 + if (res->a0 == QCOM_SCM_INTERRUPTED) 50 + a0 = res->a0; 51 + 52 + } while (res->a0 == QCOM_SCM_INTERRUPTED); 53 + } 54 + 55 + static void __scm_smc_do(const struct arm_smccc_args *smc, 56 + struct arm_smccc_res *res, bool atomic) 57 + { 58 + int retry_count = 0; 59 + 60 + if (atomic) { 61 + __scm_smc_do_quirk(smc, res); 62 + return; 63 + } 64 + 65 + do { 66 + mutex_lock(&qcom_scm_lock); 67 + 68 + __scm_smc_do_quirk(smc, res); 69 + 70 + mutex_unlock(&qcom_scm_lock); 71 + 72 + if (res->a0 == QCOM_SCM_V2_EBUSY) { 73 + if (retry_count++ > QCOM_SCM_EBUSY_MAX_RETRY) 74 + break; 75 + msleep(QCOM_SCM_EBUSY_WAIT_MS); 76 + } 77 + } while (res->a0 == QCOM_SCM_V2_EBUSY); 78 + } 79 + 80 + int scm_smc_call(struct device *dev, const struct qcom_scm_desc *desc, 81 + struct qcom_scm_res *res, bool atomic) 82 + { 83 + int arglen = desc->arginfo & 0xf; 84 + int i; 85 + dma_addr_t args_phys = 0; 86 + void *args_virt = NULL; 87 + size_t alloc_len; 88 + gfp_t flag = atomic ? GFP_ATOMIC : GFP_KERNEL; 89 + u32 smccc_call_type = atomic ? ARM_SMCCC_FAST_CALL : ARM_SMCCC_STD_CALL; 90 + u32 qcom_smccc_convention = 91 + (qcom_scm_convention == SMC_CONVENTION_ARM_32) ? 
92 + ARM_SMCCC_SMC_32 : ARM_SMCCC_SMC_64; 93 + struct arm_smccc_res smc_res; 94 + struct arm_smccc_args smc = {0}; 95 + 96 + smc.args[0] = ARM_SMCCC_CALL_VAL( 97 + smccc_call_type, 98 + qcom_smccc_convention, 99 + desc->owner, 100 + SCM_SMC_FNID(desc->svc, desc->cmd)); 101 + smc.args[1] = desc->arginfo; 102 + for (i = 0; i < SCM_SMC_N_REG_ARGS; i++) 103 + smc.args[i + SCM_SMC_FIRST_REG_IDX] = desc->args[i]; 104 + 105 + if (unlikely(arglen > SCM_SMC_N_REG_ARGS)) { 106 + alloc_len = SCM_SMC_N_EXT_ARGS * sizeof(u64); 107 + args_virt = kzalloc(PAGE_ALIGN(alloc_len), flag); 108 + 109 + if (!args_virt) 110 + return -ENOMEM; 111 + 112 + if (qcom_smccc_convention == ARM_SMCCC_SMC_32) { 113 + __le32 *args = args_virt; 114 + 115 + for (i = 0; i < SCM_SMC_N_EXT_ARGS; i++) 116 + args[i] = cpu_to_le32(desc->args[i + 117 + SCM_SMC_FIRST_EXT_IDX]); 118 + } else { 119 + __le64 *args = args_virt; 120 + 121 + for (i = 0; i < SCM_SMC_N_EXT_ARGS; i++) 122 + args[i] = cpu_to_le64(desc->args[i + 123 + SCM_SMC_FIRST_EXT_IDX]); 124 + } 125 + 126 + args_phys = dma_map_single(dev, args_virt, alloc_len, 127 + DMA_TO_DEVICE); 128 + 129 + if (dma_mapping_error(dev, args_phys)) { 130 + kfree(args_virt); 131 + return -ENOMEM; 132 + } 133 + 134 + smc.args[SCM_SMC_LAST_REG_IDX] = args_phys; 135 + } 136 + 137 + __scm_smc_do(&smc, &smc_res, atomic); 138 + 139 + if (args_virt) { 140 + dma_unmap_single(dev, args_phys, alloc_len, DMA_TO_DEVICE); 141 + kfree(args_virt); 142 + } 143 + 144 + if (res) { 145 + res->result[0] = smc_res.a1; 146 + res->result[1] = smc_res.a2; 147 + res->result[2] = smc_res.a3; 148 + } 149 + 150 + return (long)smc_res.a0 ? qcom_scm_remap_error(smc_res.a0) : 0; 151 + }
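The register image scm_smc_call() assembles can be restated in one sketch. This is a hedged summary under stated assumptions (SMC64 convention, a standard SIP-owned call, more than four arguments), reusing only identifiers defined in this file and qcom_scm.h; it is not additional patch code.

/* Sketch of the register layout for a hypothetical call with > 4 args. */
static void example_smc_register_layout(const struct qcom_scm_desc *desc,
					dma_addr_t args_phys)
{
	struct arm_smccc_args smc = {0};
	int i;

	/* x0: SMCCC function id - call type, convention, owner, fn id */
	smc.args[0] = ARM_SMCCC_CALL_VAL(ARM_SMCCC_STD_CALL,
					 ARM_SMCCC_SMC_64,
					 ARM_SMCCC_OWNER_SIP,
					 SCM_SMC_FNID(desc->svc, desc->cmd));
	/* x1: arginfo - argument count plus per-argument type bits */
	smc.args[1] = desc->arginfo;
	/* x2..x5: the first four arguments... */
	for (i = 0; i < SCM_SMC_N_REG_ARGS; i++)
		smc.args[i + SCM_SMC_FIRST_REG_IDX] = desc->args[i];
	/* ...except that with more than four arguments x5 is overwritten
	 * with the DMA address of a buffer carrying args[3..9], matching
	 * the arglen > SCM_SMC_N_REG_ARGS branch above. */
	smc.args[SCM_SMC_LAST_REG_IDX] = args_phys;
}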
+680 -196
drivers/firmware/qcom_scm.c
··· 1 // SPDX-License-Identifier: GPL-2.0-only 2 - /* 3 - * Qualcomm SCM driver 4 - * 5 - * Copyright (c) 2010,2015, The Linux Foundation. All rights reserved. 6 * Copyright (C) 2015 Linaro Ltd. 7 */ 8 #include <linux/platform_device.h> ··· 16 #include <linux/of_platform.h> 17 #include <linux/clk.h> 18 #include <linux/reset-controller.h> 19 20 #include "qcom_scm.h" 21 ··· 48 struct qcom_scm_mem_map_info { 49 __le64 mem_addr; 50 __le64 mem_size; 51 }; 52 53 static struct qcom_scm *__scm; ··· 114 clk_disable_unprepare(__scm->bus_clk); 115 } 116 117 - /** 118 - * qcom_scm_set_cold_boot_addr() - Set the cold boot address for cpus 119 - * @entry: Entry point function for the cpus 120 - * @cpus: The cpumask of cpus that will use the entry point 121 - * 122 - * Set the cold boot address of the cpus. Any cpu outside the supported 123 - * range would be removed from the cpu present mask. 124 - */ 125 - int qcom_scm_set_cold_boot_addr(void *entry, const cpumask_t *cpus) 126 { 127 - return __qcom_scm_set_cold_boot_addr(entry, cpus); 128 } 129 - EXPORT_SYMBOL(qcom_scm_set_cold_boot_addr); 130 131 /** 132 * qcom_scm_set_warm_boot_addr() - Set the warm boot address for cpus ··· 261 */ 262 int qcom_scm_set_warm_boot_addr(void *entry, const cpumask_t *cpus) 263 { 264 - return __qcom_scm_set_warm_boot_addr(__scm->dev, entry, cpus); 265 } 266 EXPORT_SYMBOL(qcom_scm_set_warm_boot_addr); 267 268 /** 269 * qcom_scm_cpu_power_down() - Power down the cpu ··· 349 */ 350 void qcom_scm_cpu_power_down(u32 flags) 351 { 352 - __qcom_scm_cpu_power_down(flags); 353 } 354 EXPORT_SYMBOL(qcom_scm_cpu_power_down); 355 356 - /** 357 - * qcom_scm_hdcp_available() - Check if secure environment supports HDCP. 358 - * 359 - * Return true if HDCP is supported, false if not. 360 - */ 361 - bool qcom_scm_hdcp_available(void) 362 { 363 - int ret = qcom_scm_clk_enable(); 364 - 365 - if (ret) 366 - return ret; 367 - 368 - ret = __qcom_scm_is_call_available(__scm->dev, QCOM_SCM_SVC_HDCP, 369 - QCOM_SCM_CMD_HDCP); 370 - 371 - qcom_scm_clk_disable(); 372 - 373 - return ret > 0 ? true : false; 374 - } 375 - EXPORT_SYMBOL(qcom_scm_hdcp_available); 376 - 377 - /** 378 - * qcom_scm_hdcp_req() - Send HDCP request. 379 - * @req: HDCP request array 380 - * @req_cnt: HDCP request array count 381 - * @resp: response buffer passed to SCM 382 - * 383 - * Write HDCP register(s) through SCM. 384 - */ 385 - int qcom_scm_hdcp_req(struct qcom_scm_hdcp_req *req, u32 req_cnt, u32 *resp) 386 - { 387 - int ret = qcom_scm_clk_enable(); 388 - 389 - if (ret) 390 - return ret; 391 - 392 - ret = __qcom_scm_hdcp_req(__scm->dev, req, req_cnt, resp); 393 - qcom_scm_clk_disable(); 394 - return ret; 395 - } 396 - EXPORT_SYMBOL(qcom_scm_hdcp_req); 397 - 398 - /** 399 - * qcom_scm_pas_supported() - Check if the peripheral authentication service is 400 - * available for the given peripherial 401 - * @peripheral: peripheral id 402 - * 403 - * Returns true if PAS is supported for this peripheral, otherwise false. 
404 - */ 405 - bool qcom_scm_pas_supported(u32 peripheral) 406 - { 407 int ret; 408 409 - ret = __qcom_scm_is_call_available(__scm->dev, QCOM_SCM_SVC_PIL, 410 - QCOM_SCM_PAS_IS_SUPPORTED_CMD); 411 - if (ret <= 0) 412 - return false; 413 414 - return __qcom_scm_pas_supported(__scm->dev, peripheral); 415 } 416 - EXPORT_SYMBOL(qcom_scm_pas_supported); 417 418 - /** 419 - * qcom_scm_ocmem_lock_available() - is OCMEM lock/unlock interface available 420 - */ 421 - bool qcom_scm_ocmem_lock_available(void) 422 { 423 - return __qcom_scm_is_call_available(__scm->dev, QCOM_SCM_OCMEM_SVC, 424 - QCOM_SCM_OCMEM_LOCK_CMD); 425 - } 426 - EXPORT_SYMBOL(qcom_scm_ocmem_lock_available); 427 428 - /** 429 - * qcom_scm_ocmem_lock() - call OCMEM lock interface to assign an OCMEM 430 - * region to the specified initiator 431 - * 432 - * @id: tz initiator id 433 - * @offset: OCMEM offset 434 - * @size: OCMEM size 435 - * @mode: access mode (WIDE/NARROW) 436 - */ 437 - int qcom_scm_ocmem_lock(enum qcom_scm_ocmem_client id, u32 offset, u32 size, 438 - u32 mode) 439 - { 440 - return __qcom_scm_ocmem_lock(__scm->dev, id, offset, size, mode); 441 - } 442 - EXPORT_SYMBOL(qcom_scm_ocmem_lock); 443 444 - /** 445 - * qcom_scm_ocmem_unlock() - call OCMEM unlock interface to release an OCMEM 446 - * region from the specified initiator 447 - * 448 - * @id: tz initiator id 449 - * @offset: OCMEM offset 450 - * @size: OCMEM size 451 - */ 452 - int qcom_scm_ocmem_unlock(enum qcom_scm_ocmem_client id, u32 offset, u32 size) 453 - { 454 - return __qcom_scm_ocmem_unlock(__scm->dev, id, offset, size); 455 } 456 - EXPORT_SYMBOL(qcom_scm_ocmem_unlock); 457 458 /** 459 * qcom_scm_pas_init_image() - Initialize peripheral authentication service ··· 434 dma_addr_t mdata_phys; 435 void *mdata_buf; 436 int ret; 437 438 /* 439 * During the scm call memory protection will be enabled for the meta ··· 460 if (ret) 461 goto free_metadata; 462 463 - ret = __qcom_scm_pas_init_image(__scm->dev, peripheral, mdata_phys); 464 465 qcom_scm_clk_disable(); 466 467 free_metadata: 468 dma_free_coherent(__scm->dev, size, mdata_buf, mdata_phys); 469 470 - return ret; 471 } 472 EXPORT_SYMBOL(qcom_scm_pas_init_image); 473 ··· 485 int qcom_scm_pas_mem_setup(u32 peripheral, phys_addr_t addr, phys_addr_t size) 486 { 487 int ret; 488 489 ret = qcom_scm_clk_enable(); 490 if (ret) 491 return ret; 492 493 - ret = __qcom_scm_pas_mem_setup(__scm->dev, peripheral, addr, size); 494 qcom_scm_clk_disable(); 495 496 - return ret; 497 } 498 EXPORT_SYMBOL(qcom_scm_pas_mem_setup); 499 ··· 517 int qcom_scm_pas_auth_and_reset(u32 peripheral) 518 { 519 int ret; 520 521 ret = qcom_scm_clk_enable(); 522 if (ret) 523 return ret; 524 525 - ret = __qcom_scm_pas_auth_and_reset(__scm->dev, peripheral); 526 qcom_scm_clk_disable(); 527 528 - return ret; 529 } 530 EXPORT_SYMBOL(qcom_scm_pas_auth_and_reset); 531 ··· 546 int qcom_scm_pas_shutdown(u32 peripheral) 547 { 548 int ret; 549 550 ret = qcom_scm_clk_enable(); 551 if (ret) 552 return ret; 553 554 - ret = __qcom_scm_pas_shutdown(__scm->dev, peripheral); 555 qcom_scm_clk_disable(); 556 557 - return ret; 558 } 559 EXPORT_SYMBOL(qcom_scm_pas_shutdown); 560 561 static int qcom_scm_pas_reset_assert(struct reset_controller_dev *rcdev, 562 unsigned long idx) ··· 638 .deassert = qcom_scm_pas_reset_deassert, 639 }; 640 641 /** 642 * qcom_scm_restore_sec_cfg_available() - Check if secure environment 643 * supports restore security config interface. 
··· 684 bool qcom_scm_restore_sec_cfg_available(void) 685 { 686 return __qcom_scm_is_call_available(__scm->dev, QCOM_SCM_SVC_MP, 687 - QCOM_SCM_RESTORE_SEC_CFG); 688 } 689 EXPORT_SYMBOL(qcom_scm_restore_sec_cfg_available); 690 691 int qcom_scm_restore_sec_cfg(u32 device_id, u32 spare) 692 { 693 - return __qcom_scm_restore_sec_cfg(__scm->dev, device_id, spare); 694 } 695 EXPORT_SYMBOL(qcom_scm_restore_sec_cfg); 696 697 int qcom_scm_iommu_secure_ptbl_size(u32 spare, size_t *size) 698 { 699 - return __qcom_scm_iommu_secure_ptbl_size(__scm->dev, spare, size); 700 } 701 EXPORT_SYMBOL(qcom_scm_iommu_secure_ptbl_size); 702 703 int qcom_scm_iommu_secure_ptbl_init(u64 addr, u32 size, u32 spare) 704 { 705 - return __qcom_scm_iommu_secure_ptbl_init(__scm->dev, addr, size, spare); 706 } 707 EXPORT_SYMBOL(qcom_scm_iommu_secure_ptbl_init); 708 709 - int qcom_scm_qsmmu500_wait_safe_toggle(bool en) 710 { 711 - return __qcom_scm_qsmmu500_wait_safe_toggle(__scm->dev, en); 712 - } 713 - EXPORT_SYMBOL(qcom_scm_qsmmu500_wait_safe_toggle); 714 - 715 - int qcom_scm_io_readl(phys_addr_t addr, unsigned int *val) 716 - { 717 - return __qcom_scm_io_readl(__scm->dev, addr, val); 718 - } 719 - EXPORT_SYMBOL(qcom_scm_io_readl); 720 - 721 - int qcom_scm_io_writel(phys_addr_t addr, unsigned int val) 722 - { 723 - return __qcom_scm_io_writel(__scm->dev, addr, val); 724 - } 725 - EXPORT_SYMBOL(qcom_scm_io_writel); 726 - 727 - static void qcom_scm_set_download_mode(bool enable) 728 - { 729 - bool avail; 730 - int ret = 0; 731 - 732 - avail = __qcom_scm_is_call_available(__scm->dev, 733 - QCOM_SCM_SVC_BOOT, 734 - QCOM_SCM_SET_DLOAD_MODE); 735 - if (avail) { 736 - ret = __qcom_scm_set_dload_mode(__scm->dev, enable); 737 - } else if (__scm->dload_mode_addr) { 738 - ret = __qcom_scm_io_writel(__scm->dev, __scm->dload_mode_addr, 739 - enable ? QCOM_SCM_SET_DLOAD_MODE : 0); 740 - } else { 741 - dev_err(__scm->dev, 742 - "No available mechanism for setting download mode\n"); 743 - } 744 - 745 - if (ret) 746 - dev_err(__scm->dev, "failed to set download mode: %d\n", ret); 747 - } 748 - 749 - static int qcom_scm_find_dload_address(struct device *dev, u64 *addr) 750 - { 751 - struct device_node *tcsr; 752 - struct device_node *np = dev->of_node; 753 - struct resource res; 754 - u32 offset; 755 int ret; 756 757 - tcsr = of_parse_phandle(np, "qcom,dload-mode", 0); 758 - if (!tcsr) 759 - return 0; 760 761 - ret = of_address_to_resource(tcsr, 0, &res); 762 - of_node_put(tcsr); 763 - if (ret) 764 - return ret; 765 - 766 - ret = of_property_read_u32_index(np, "qcom,dload-mode", 1, &offset); 767 - if (ret < 0) 768 - return ret; 769 - 770 - *addr = res.start + offset; 771 - 772 - return 0; 773 } 774 - 775 - /** 776 - * qcom_scm_is_available() - Checks if SCM is available 777 - */ 778 - bool qcom_scm_is_available(void) 779 - { 780 - return !!__scm; 781 - } 782 - EXPORT_SYMBOL(qcom_scm_is_available); 783 - 784 - int qcom_scm_set_remote_state(u32 state, u32 id) 785 - { 786 - return __qcom_scm_set_remote_state(__scm->dev, state, id); 787 - } 788 - EXPORT_SYMBOL(qcom_scm_set_remote_state); 789 790 /** 791 * qcom_scm_assign_mem() - Make a secure call to reassign memory ownership ··· 867 } 868 EXPORT_SYMBOL(qcom_scm_assign_mem); 869 870 static int qcom_scm_probe(struct platform_device *pdev) 871 { 872 struct qcom_scm *scm; ··· 1115 __scm = scm; 1116 __scm->dev = &pdev->dev; 1117 1118 - __qcom_scm_init(); 1119 1120 /* 1121 * If requested enable "download mode", from this point on warmboot
··· 1 // SPDX-License-Identifier: GPL-2.0-only 2 + /* Copyright (c) 2010,2015,2019 The Linux Foundation. All rights reserved. 3 * Copyright (C) 2015 Linaro Ltd. 4 */ 5 #include <linux/platform_device.h> ··· 19 #include <linux/of_platform.h> 20 #include <linux/clk.h> 21 #include <linux/reset-controller.h> 22 + #include <linux/arm-smccc.h> 23 24 #include "qcom_scm.h" 25 ··· 50 struct qcom_scm_mem_map_info { 51 __le64 mem_addr; 52 __le64 mem_size; 53 + }; 54 + 55 + #define QCOM_SCM_FLAG_COLDBOOT_CPU0 0x00 56 + #define QCOM_SCM_FLAG_COLDBOOT_CPU1 0x01 57 + #define QCOM_SCM_FLAG_COLDBOOT_CPU2 0x08 58 + #define QCOM_SCM_FLAG_COLDBOOT_CPU3 0x20 59 + 60 + #define QCOM_SCM_FLAG_WARMBOOT_CPU0 0x04 61 + #define QCOM_SCM_FLAG_WARMBOOT_CPU1 0x02 62 + #define QCOM_SCM_FLAG_WARMBOOT_CPU2 0x10 63 + #define QCOM_SCM_FLAG_WARMBOOT_CPU3 0x40 64 + 65 + struct qcom_scm_wb_entry { 66 + int flag; 67 + void *entry; 68 + }; 69 + 70 + static struct qcom_scm_wb_entry qcom_scm_wb[] = { 71 + { .flag = QCOM_SCM_FLAG_WARMBOOT_CPU0 }, 72 + { .flag = QCOM_SCM_FLAG_WARMBOOT_CPU1 }, 73 + { .flag = QCOM_SCM_FLAG_WARMBOOT_CPU2 }, 74 + { .flag = QCOM_SCM_FLAG_WARMBOOT_CPU3 }, 75 + }; 76 + 77 + static const char *qcom_scm_convention_names[] = { 78 + [SMC_CONVENTION_UNKNOWN] = "unknown", 79 + [SMC_CONVENTION_ARM_32] = "smc arm 32", 80 + [SMC_CONVENTION_ARM_64] = "smc arm 64", 81 + [SMC_CONVENTION_LEGACY] = "smc legacy", 82 }; 83 84 static struct qcom_scm *__scm; ··· 87 clk_disable_unprepare(__scm->bus_clk); 88 } 89 90 + static int __qcom_scm_is_call_available(struct device *dev, u32 svc_id, 91 + u32 cmd_id); 92 + 93 + enum qcom_scm_convention qcom_scm_convention; 94 + static bool has_queried __read_mostly; 95 + static DEFINE_SPINLOCK(query_lock); 96 + 97 + static void __query_convention(void) 98 { 99 + unsigned long flags; 100 + struct qcom_scm_desc desc = { 101 + .svc = QCOM_SCM_SVC_INFO, 102 + .cmd = QCOM_SCM_INFO_IS_CALL_AVAIL, 103 + .args[0] = SCM_SMC_FNID(QCOM_SCM_SVC_INFO, 104 + QCOM_SCM_INFO_IS_CALL_AVAIL) | 105 + (ARM_SMCCC_OWNER_SIP << ARM_SMCCC_OWNER_SHIFT), 106 + .arginfo = QCOM_SCM_ARGS(1), 107 + .owner = ARM_SMCCC_OWNER_SIP, 108 + }; 109 + struct qcom_scm_res res; 110 + int ret; 111 + 112 + spin_lock_irqsave(&query_lock, flags); 113 + if (has_queried) 114 + goto out; 115 + 116 + qcom_scm_convention = SMC_CONVENTION_ARM_64; 117 + /* The device is not required here: with a single argument there is 118 + * no extended-argument buffer to dma_map_single() to the secure world. */ 119 + ret = scm_smc_call(NULL, &desc, &res, true); 120 + if (!ret && res.result[0] == 1) 121 + goto out; 122 + 123 + qcom_scm_convention = SMC_CONVENTION_ARM_32; 124 + ret = scm_smc_call(NULL, &desc, &res, true); 125 + if (!ret && res.result[0] == 1) 126 + goto out; 127 + 128 + qcom_scm_convention = SMC_CONVENTION_LEGACY; 129 + out: 130 + has_queried = true; 131 + spin_unlock_irqrestore(&query_lock, flags); 132 + pr_info("qcom_scm: convention: %s\n", 133 + qcom_scm_convention_names[qcom_scm_convention]); 134 } 135 + 136 + static inline enum qcom_scm_convention __get_convention(void) 137 + { 138 + if (unlikely(!has_queried)) 139 + __query_convention(); 140 + return qcom_scm_convention; 141 + } 142 + 143 + /** 144 + * qcom_scm_call() - Invoke a syscall in the secure world 145 + * @dev: device 146 + * @desc: Descriptor structure containing arguments and return 147 + * values 148 + * @res: Structure containing results from SMC/HVC call, may be NULL 149 + * 150 + * Sends a command to the SCM and waits for the command to finish processing.
151 + * This should *only* be called in pre-emptible context. 152 + */ 153 + static int qcom_scm_call(struct device *dev, const struct qcom_scm_desc *desc, 154 + struct qcom_scm_res *res) 155 + { 156 + might_sleep(); 157 + switch (__get_convention()) { 158 + case SMC_CONVENTION_ARM_32: 159 + case SMC_CONVENTION_ARM_64: 160 + return scm_smc_call(dev, desc, res, false); 161 + case SMC_CONVENTION_LEGACY: 162 + return scm_legacy_call(dev, desc, res); 163 + default: 164 + pr_err("Unknown current SCM calling convention.\n"); 165 + return -EINVAL; 166 + } 167 + } 168 + 169 + /** 170 + * qcom_scm_call_atomic() - atomic variation of qcom_scm_call() 171 + * @dev: device 172 + * @desc: Descriptor structure containing arguments and return 173 + * values 174 + * @res: Structure containing results from SMC/HVC call, 175 + * may be NULL 176 + * 177 + * Sends a command to the SCM and waits for the command to finish processing. 178 + * This can be called in atomic context. 179 + */ 180 + static int qcom_scm_call_atomic(struct device *dev, 181 + const struct qcom_scm_desc *desc, 182 + struct qcom_scm_res *res) 183 + { 184 + switch (__get_convention()) { 185 + case SMC_CONVENTION_ARM_32: 186 + case SMC_CONVENTION_ARM_64: 187 + return scm_smc_call(dev, desc, res, true); 188 + case SMC_CONVENTION_LEGACY: 189 + return scm_legacy_call_atomic(dev, desc, res); 190 + default: 191 + pr_err("Unknown current SCM calling convention.\n"); 192 + return -EINVAL; 193 + } 194 + } 195 + 196 + static int __qcom_scm_is_call_available(struct device *dev, u32 svc_id, 197 + u32 cmd_id) 198 + { 199 + int ret; 200 + struct qcom_scm_desc desc = { 201 + .svc = QCOM_SCM_SVC_INFO, 202 + .cmd = QCOM_SCM_INFO_IS_CALL_AVAIL, 203 + .owner = ARM_SMCCC_OWNER_SIP, 204 + }; 205 + struct qcom_scm_res res; 206 + 207 + desc.arginfo = QCOM_SCM_ARGS(1); 208 + switch (__get_convention()) { 209 + case SMC_CONVENTION_ARM_32: 210 + case SMC_CONVENTION_ARM_64: 211 + desc.args[0] = SCM_SMC_FNID(svc_id, cmd_id) | 212 + (ARM_SMCCC_OWNER_SIP << ARM_SMCCC_OWNER_SHIFT); 213 + break; 214 + case SMC_CONVENTION_LEGACY: 215 + desc.args[0] = SCM_LEGACY_FNID(svc_id, cmd_id); 216 + break; 217 + default: 218 + pr_err("Unknown SMC convention being used\n"); 219 + return -EINVAL; 220 + } 221 + 222 + ret = qcom_scm_call(dev, &desc, &res); 223 + 224 + return ret ? : res.result[0]; 225 + } 226 227 /** 228 * qcom_scm_set_warm_boot_addr() - Set the warm boot address for cpus ··· 111 */ 112 int qcom_scm_set_warm_boot_addr(void *entry, const cpumask_t *cpus) 113 { 114 + int ret; 115 + int flags = 0; 116 + int cpu; 117 + struct qcom_scm_desc desc = { 118 + .svc = QCOM_SCM_SVC_BOOT, 119 + .cmd = QCOM_SCM_BOOT_SET_ADDR, 120 + .arginfo = QCOM_SCM_ARGS(2), 121 + }; 122 + 123 + /* 124 + * Reassign only if we are switching from hotplug entry point 125 + * to cpuidle entry point or vice versa.
126 + */ 127 + for_each_cpu(cpu, cpus) { 128 + if (entry == qcom_scm_wb[cpu].entry) 129 + continue; 130 + flags |= qcom_scm_wb[cpu].flag; 131 + } 132 + 133 + /* No change in entry function */ 134 + if (!flags) 135 + return 0; 136 + 137 + desc.args[0] = flags; 138 + desc.args[1] = virt_to_phys(entry); 139 + 140 + ret = qcom_scm_call(__scm->dev, &desc, NULL); 141 + if (!ret) { 142 + for_each_cpu(cpu, cpus) 143 + qcom_scm_wb[cpu].entry = entry; 144 + } 145 + 146 + return ret; 147 } 148 EXPORT_SYMBOL(qcom_scm_set_warm_boot_addr); 149 + 150 + /** 151 + * qcom_scm_set_cold_boot_addr() - Set the cold boot address for cpus 152 + * @entry: Entry point function for the cpus 153 + * @cpus: The cpumask of cpus that will use the entry point 154 + * 155 + * Set the cold boot address of the cpus. Any cpu outside the supported 156 + * range would be removed from the cpu present mask. 157 + */ 158 + int qcom_scm_set_cold_boot_addr(void *entry, const cpumask_t *cpus) 159 + { 160 + int flags = 0; 161 + int cpu; 162 + int scm_cb_flags[] = { 163 + QCOM_SCM_FLAG_COLDBOOT_CPU0, 164 + QCOM_SCM_FLAG_COLDBOOT_CPU1, 165 + QCOM_SCM_FLAG_COLDBOOT_CPU2, 166 + QCOM_SCM_FLAG_COLDBOOT_CPU3, 167 + }; 168 + struct qcom_scm_desc desc = { 169 + .svc = QCOM_SCM_SVC_BOOT, 170 + .cmd = QCOM_SCM_BOOT_SET_ADDR, 171 + .arginfo = QCOM_SCM_ARGS(2), 172 + .owner = ARM_SMCCC_OWNER_SIP, 173 + }; 174 + 175 + if (!cpus || (cpus && cpumask_empty(cpus))) 176 + return -EINVAL; 177 + 178 + for_each_cpu(cpu, cpus) { 179 + if (cpu < ARRAY_SIZE(scm_cb_flags)) 180 + flags |= scm_cb_flags[cpu]; 181 + else 182 + set_cpu_present(cpu, false); 183 + } 184 + 185 + desc.args[0] = flags; 186 + desc.args[1] = virt_to_phys(entry); 187 + 188 + return qcom_scm_call_atomic(__scm ? __scm->dev : NULL, &desc, NULL); 189 + } 190 + EXPORT_SYMBOL(qcom_scm_set_cold_boot_addr); 191 192 /** 193 * qcom_scm_cpu_power_down() - Power down the cpu ··· 125 */ 126 void qcom_scm_cpu_power_down(u32 flags) 127 { 128 + struct qcom_scm_desc desc = { 129 + .svc = QCOM_SCM_SVC_BOOT, 130 + .cmd = QCOM_SCM_BOOT_TERMINATE_PC, 131 + .args[0] = flags & QCOM_SCM_FLUSH_FLAG_MASK, 132 + .arginfo = QCOM_SCM_ARGS(1), 133 + .owner = ARM_SMCCC_OWNER_SIP, 134 + }; 135 + 136 + qcom_scm_call_atomic(__scm ? __scm->dev : NULL, &desc, NULL); 137 } 138 EXPORT_SYMBOL(qcom_scm_cpu_power_down); 139 140 + int qcom_scm_set_remote_state(u32 state, u32 id) 141 { 142 + struct qcom_scm_desc desc = { 143 + .svc = QCOM_SCM_SVC_BOOT, 144 + .cmd = QCOM_SCM_BOOT_SET_REMOTE_STATE, 145 + .arginfo = QCOM_SCM_ARGS(2), 146 + .args[0] = state, 147 + .args[1] = id, 148 + .owner = ARM_SMCCC_OWNER_SIP, 149 + }; 150 + struct qcom_scm_res res; 151 int ret; 152 153 + ret = qcom_scm_call(__scm->dev, &desc, &res); 154 155 + return ret ? : res.result[0]; 156 } 157 + EXPORT_SYMBOL(qcom_scm_set_remote_state); 158 159 + static int __qcom_scm_set_dload_mode(struct device *dev, bool enable) 160 { 161 + struct qcom_scm_desc desc = { 162 + .svc = QCOM_SCM_SVC_BOOT, 163 + .cmd = QCOM_SCM_BOOT_SET_DLOAD_MODE, 164 + .arginfo = QCOM_SCM_ARGS(2), 165 + .args[0] = QCOM_SCM_BOOT_SET_DLOAD_MODE, 166 + .owner = ARM_SMCCC_OWNER_SIP, 167 + }; 168 169 + desc.args[1] = enable ? 
QCOM_SCM_BOOT_SET_DLOAD_MODE : 0; 170 171 + return qcom_scm_call(__scm->dev, &desc, NULL); 172 } 173 + 174 + static void qcom_scm_set_download_mode(bool enable) 175 + { 176 + bool avail; 177 + int ret = 0; 178 + 179 + avail = __qcom_scm_is_call_available(__scm->dev, 180 + QCOM_SCM_SVC_BOOT, 181 + QCOM_SCM_BOOT_SET_DLOAD_MODE); 182 + if (avail) { 183 + ret = __qcom_scm_set_dload_mode(__scm->dev, enable); 184 + } else if (__scm->dload_mode_addr) { 185 + ret = qcom_scm_io_writel(__scm->dload_mode_addr, 186 + enable ? QCOM_SCM_BOOT_SET_DLOAD_MODE : 0); 187 + } else { 188 + dev_err(__scm->dev, 189 + "No available mechanism for setting download mode\n"); 190 + } 191 + 192 + if (ret) 193 + dev_err(__scm->dev, "failed to set download mode: %d\n", ret); 194 + } 195 196 /** 197 * qcom_scm_pas_init_image() - Initialize peripheral authentication service ··· 248 dma_addr_t mdata_phys; 249 void *mdata_buf; 250 int ret; 251 + struct qcom_scm_desc desc = { 252 + .svc = QCOM_SCM_SVC_PIL, 253 + .cmd = QCOM_SCM_PIL_PAS_INIT_IMAGE, 254 + .arginfo = QCOM_SCM_ARGS(2, QCOM_SCM_VAL, QCOM_SCM_RW), 255 + .args[0] = peripheral, 256 + .owner = ARM_SMCCC_OWNER_SIP, 257 + }; 258 + struct qcom_scm_res res; 259 260 /* 261 * During the scm call memory protection will be enabled for the meta ··· 266 if (ret) 267 goto free_metadata; 268 269 + desc.args[1] = mdata_phys; 270 + 271 + ret = qcom_scm_call(__scm->dev, &desc, &res); 272 273 qcom_scm_clk_disable(); 274 275 free_metadata: 276 dma_free_coherent(__scm->dev, size, mdata_buf, mdata_phys); 277 278 + return ret ? : res.result[0]; 279 } 280 EXPORT_SYMBOL(qcom_scm_pas_init_image); 281 ··· 289 int qcom_scm_pas_mem_setup(u32 peripheral, phys_addr_t addr, phys_addr_t size) 290 { 291 int ret; 292 + struct qcom_scm_desc desc = { 293 + .svc = QCOM_SCM_SVC_PIL, 294 + .cmd = QCOM_SCM_PIL_PAS_MEM_SETUP, 295 + .arginfo = QCOM_SCM_ARGS(3), 296 + .args[0] = peripheral, 297 + .args[1] = addr, 298 + .args[2] = size, 299 + .owner = ARM_SMCCC_OWNER_SIP, 300 + }; 301 + struct qcom_scm_res res; 302 303 ret = qcom_scm_clk_enable(); 304 if (ret) 305 return ret; 306 307 + ret = qcom_scm_call(__scm->dev, &desc, &res); 308 qcom_scm_clk_disable(); 309 310 + return ret ? : res.result[0]; 311 } 312 EXPORT_SYMBOL(qcom_scm_pas_mem_setup); 313 ··· 311 int qcom_scm_pas_auth_and_reset(u32 peripheral) 312 { 313 int ret; 314 + struct qcom_scm_desc desc = { 315 + .svc = QCOM_SCM_SVC_PIL, 316 + .cmd = QCOM_SCM_PIL_PAS_AUTH_AND_RESET, 317 + .arginfo = QCOM_SCM_ARGS(1), 318 + .args[0] = peripheral, 319 + .owner = ARM_SMCCC_OWNER_SIP, 320 + }; 321 + struct qcom_scm_res res; 322 323 ret = qcom_scm_clk_enable(); 324 if (ret) 325 return ret; 326 327 + ret = qcom_scm_call(__scm->dev, &desc, &res); 328 qcom_scm_clk_disable(); 329 330 + return ret ? : res.result[0]; 331 } 332 EXPORT_SYMBOL(qcom_scm_pas_auth_and_reset); 333 ··· 332 int qcom_scm_pas_shutdown(u32 peripheral) 333 { 334 int ret; 335 + struct qcom_scm_desc desc = { 336 + .svc = QCOM_SCM_SVC_PIL, 337 + .cmd = QCOM_SCM_PIL_PAS_SHUTDOWN, 338 + .arginfo = QCOM_SCM_ARGS(1), 339 + .args[0] = peripheral, 340 + .owner = ARM_SMCCC_OWNER_SIP, 341 + }; 342 + struct qcom_scm_res res; 343 344 ret = qcom_scm_clk_enable(); 345 if (ret) 346 return ret; 347 348 + ret = qcom_scm_call(__scm->dev, &desc, &res); 349 + 350 qcom_scm_clk_disable(); 351 352 + return ret ? 
: res.result[0]; 353 } 354 EXPORT_SYMBOL(qcom_scm_pas_shutdown); 355 + 356 + /** 357 + * qcom_scm_pas_supported() - Check if the peripheral authentication service is 358 + * available for the given peripheral 359 + * @peripheral: peripheral id 360 + * 361 + * Returns true if PAS is supported for this peripheral, otherwise false. 362 + */ 363 + bool qcom_scm_pas_supported(u32 peripheral) 364 + { 365 + int ret; 366 + struct qcom_scm_desc desc = { 367 + .svc = QCOM_SCM_SVC_PIL, 368 + .cmd = QCOM_SCM_PIL_PAS_IS_SUPPORTED, 369 + .arginfo = QCOM_SCM_ARGS(1), 370 + .args[0] = peripheral, 371 + .owner = ARM_SMCCC_OWNER_SIP, 372 + }; 373 + struct qcom_scm_res res; 374 + 375 + ret = __qcom_scm_is_call_available(__scm->dev, QCOM_SCM_SVC_PIL, 376 + QCOM_SCM_PIL_PAS_IS_SUPPORTED); 377 + if (ret <= 0) 378 + return false; 379 + 380 + ret = qcom_scm_call(__scm->dev, &desc, &res); 381 + 382 + return ret ? false : !!res.result[0]; 383 + } 384 + EXPORT_SYMBOL(qcom_scm_pas_supported); 385 + 386 + static int __qcom_scm_pas_mss_reset(struct device *dev, bool reset) 387 + { 388 + struct qcom_scm_desc desc = { 389 + .svc = QCOM_SCM_SVC_PIL, 390 + .cmd = QCOM_SCM_PIL_PAS_MSS_RESET, 391 + .arginfo = QCOM_SCM_ARGS(2), 392 + .args[0] = reset, 393 + .args[1] = 0, 394 + .owner = ARM_SMCCC_OWNER_SIP, 395 + }; 396 + struct qcom_scm_res res; 397 + int ret; 398 + 399 + ret = qcom_scm_call(__scm->dev, &desc, &res); 400 + 401 + return ret ? : res.result[0]; 402 + } 403 404 static int qcom_scm_pas_reset_assert(struct reset_controller_dev *rcdev, 405 unsigned long idx) ··· 367 .deassert = qcom_scm_pas_reset_deassert, 368 }; 369 370 + int qcom_scm_io_readl(phys_addr_t addr, unsigned int *val) 371 + { 372 + struct qcom_scm_desc desc = { 373 + .svc = QCOM_SCM_SVC_IO, 374 + .cmd = QCOM_SCM_IO_READ, 375 + .arginfo = QCOM_SCM_ARGS(1), 376 + .args[0] = addr, 377 + .owner = ARM_SMCCC_OWNER_SIP, 378 + }; 379 + struct qcom_scm_res res; 380 + int ret; 381 + 382 + 383 + ret = qcom_scm_call(__scm->dev, &desc, &res); 384 + if (ret >= 0) 385 + *val = res.result[0]; 386 + 387 + return ret < 0 ? ret : 0; 388 + } 389 + EXPORT_SYMBOL(qcom_scm_io_readl); 390 + 391 + int qcom_scm_io_writel(phys_addr_t addr, unsigned int val) 392 + { 393 + struct qcom_scm_desc desc = { 394 + .svc = QCOM_SCM_SVC_IO, 395 + .cmd = QCOM_SCM_IO_WRITE, 396 + .arginfo = QCOM_SCM_ARGS(2), 397 + .args[0] = addr, 398 + .args[1] = val, 399 + .owner = ARM_SMCCC_OWNER_SIP, 400 + }; 401 + 402 + 403 + return qcom_scm_call(__scm->dev, &desc, NULL); 404 + } 405 + EXPORT_SYMBOL(qcom_scm_io_writel); 406 + 407 /** 408 * qcom_scm_restore_sec_cfg_available() - Check if secure environment 409 * supports restore security config interface. ··· 376 bool qcom_scm_restore_sec_cfg_available(void) 377 { 378 return __qcom_scm_is_call_available(__scm->dev, QCOM_SCM_SVC_MP, 379 + QCOM_SCM_MP_RESTORE_SEC_CFG); 380 } 381 EXPORT_SYMBOL(qcom_scm_restore_sec_cfg_available); 382 383 int qcom_scm_restore_sec_cfg(u32 device_id, u32 spare) 384 { 385 + struct qcom_scm_desc desc = { 386 + .svc = QCOM_SCM_SVC_MP, 387 + .cmd = QCOM_SCM_MP_RESTORE_SEC_CFG, 388 + .arginfo = QCOM_SCM_ARGS(2), 389 + .args[0] = device_id, 390 + .args[1] = spare, 391 + .owner = ARM_SMCCC_OWNER_SIP, 392 + }; 393 + struct qcom_scm_res res; 394 + int ret; 395 + 396 + ret = qcom_scm_call(__scm->dev, &desc, &res); 397 + 398 + return ret ?
: res.result[0]; 399 } 400 EXPORT_SYMBOL(qcom_scm_restore_sec_cfg); 401 402 int qcom_scm_iommu_secure_ptbl_size(u32 spare, size_t *size) 403 { 404 + struct qcom_scm_desc desc = { 405 + .svc = QCOM_SCM_SVC_MP, 406 + .cmd = QCOM_SCM_MP_IOMMU_SECURE_PTBL_SIZE, 407 + .arginfo = QCOM_SCM_ARGS(1), 408 + .args[0] = spare, 409 + .owner = ARM_SMCCC_OWNER_SIP, 410 + }; 411 + struct qcom_scm_res res; 412 + int ret; 413 + 414 + ret = qcom_scm_call(__scm->dev, &desc, &res); 415 + 416 + if (size) 417 + *size = res.result[0]; 418 + 419 + return ret ? : res.result[1]; 420 } 421 EXPORT_SYMBOL(qcom_scm_iommu_secure_ptbl_size); 422 423 int qcom_scm_iommu_secure_ptbl_init(u64 addr, u32 size, u32 spare) 424 { 425 + struct qcom_scm_desc desc = { 426 + .svc = QCOM_SCM_SVC_MP, 427 + .cmd = QCOM_SCM_MP_IOMMU_SECURE_PTBL_INIT, 428 + .arginfo = QCOM_SCM_ARGS(3, QCOM_SCM_RW, QCOM_SCM_VAL, 429 + QCOM_SCM_VAL), 430 + .args[0] = addr, 431 + .args[1] = size, 432 + .args[2] = spare, 433 + .owner = ARM_SMCCC_OWNER_SIP, 434 + }; 435 + int ret; 436 + 437 + desc.args[0] = addr; 438 + desc.args[1] = size; 439 + desc.args[2] = spare; 440 + desc.arginfo = QCOM_SCM_ARGS(3, QCOM_SCM_RW, QCOM_SCM_VAL, 441 + QCOM_SCM_VAL); 442 + 443 + ret = qcom_scm_call(__scm->dev, &desc, NULL); 444 + 445 + /* the pg table has been initialized already, ignore the error */ 446 + if (ret == -EPERM) 447 + ret = 0; 448 + 449 + return ret; 450 } 451 EXPORT_SYMBOL(qcom_scm_iommu_secure_ptbl_init); 452 453 + static int __qcom_scm_assign_mem(struct device *dev, phys_addr_t mem_region, 454 + size_t mem_sz, phys_addr_t src, size_t src_sz, 455 + phys_addr_t dest, size_t dest_sz) 456 { 457 int ret; 458 + struct qcom_scm_desc desc = { 459 + .svc = QCOM_SCM_SVC_MP, 460 + .cmd = QCOM_SCM_MP_ASSIGN, 461 + .arginfo = QCOM_SCM_ARGS(7, QCOM_SCM_RO, QCOM_SCM_VAL, 462 + QCOM_SCM_RO, QCOM_SCM_VAL, QCOM_SCM_RO, 463 + QCOM_SCM_VAL, QCOM_SCM_VAL), 464 + .args[0] = mem_region, 465 + .args[1] = mem_sz, 466 + .args[2] = src, 467 + .args[3] = src_sz, 468 + .args[4] = dest, 469 + .args[5] = dest_sz, 470 + .args[6] = 0, 471 + .owner = ARM_SMCCC_OWNER_SIP, 472 + }; 473 + struct qcom_scm_res res; 474 475 + ret = qcom_scm_call(dev, &desc, &res); 476 477 + return ret ? 
: res.result[0]; 478 } 479 480 /** 481 * qcom_scm_assign_mem() - Make a secure call to reassign memory ownership ··· 561 } 562 EXPORT_SYMBOL(qcom_scm_assign_mem); 563 564 + /** 565 + * qcom_scm_ocmem_lock_available() - is OCMEM lock/unlock interface available 566 + */ 567 + bool qcom_scm_ocmem_lock_available(void) 568 + { 569 + return __qcom_scm_is_call_available(__scm->dev, QCOM_SCM_SVC_OCMEM, 570 + QCOM_SCM_OCMEM_LOCK_CMD); 571 + } 572 + EXPORT_SYMBOL(qcom_scm_ocmem_lock_available); 573 + 574 + /** 575 + * qcom_scm_ocmem_lock() - call OCMEM lock interface to assign an OCMEM 576 + * region to the specified initiator 577 + * 578 + * @id: tz initiator id 579 + * @offset: OCMEM offset 580 + * @size: OCMEM size 581 + * @mode: access mode (WIDE/NARROW) 582 + */ 583 + int qcom_scm_ocmem_lock(enum qcom_scm_ocmem_client id, u32 offset, u32 size, 584 + u32 mode) 585 + { 586 + struct qcom_scm_desc desc = { 587 + .svc = QCOM_SCM_SVC_OCMEM, 588 + .cmd = QCOM_SCM_OCMEM_LOCK_CMD, 589 + .args[0] = id, 590 + .args[1] = offset, 591 + .args[2] = size, 592 + .args[3] = mode, 593 + .arginfo = QCOM_SCM_ARGS(4), 594 + }; 595 + 596 + return qcom_scm_call(__scm->dev, &desc, NULL); 597 + } 598 + EXPORT_SYMBOL(qcom_scm_ocmem_lock); 599 + 600 + /** 601 + * qcom_scm_ocmem_unlock() - call OCMEM unlock interface to release an OCMEM 602 + * region from the specified initiator 603 + * 604 + * @id: tz initiator id 605 + * @offset: OCMEM offset 606 + * @size: OCMEM size 607 + */ 608 + int qcom_scm_ocmem_unlock(enum qcom_scm_ocmem_client id, u32 offset, u32 size) 609 + { 610 + struct qcom_scm_desc desc = { 611 + .svc = QCOM_SCM_SVC_OCMEM, 612 + .cmd = QCOM_SCM_OCMEM_UNLOCK_CMD, 613 + .args[0] = id, 614 + .args[1] = offset, 615 + .args[2] = size, 616 + .arginfo = QCOM_SCM_ARGS(3), 617 + }; 618 + 619 + return qcom_scm_call(__scm->dev, &desc, NULL); 620 + } 621 + EXPORT_SYMBOL(qcom_scm_ocmem_unlock); 622 + 623 + /** 624 + * qcom_scm_hdcp_available() - Check if secure environment supports HDCP. 625 + * 626 + * Return true if HDCP is supported, false if not. 627 + */ 628 + bool qcom_scm_hdcp_available(void) 629 + { 630 + int ret = qcom_scm_clk_enable(); 631 + 632 + if (ret) 633 + return ret; 634 + 635 + ret = __qcom_scm_is_call_available(__scm->dev, QCOM_SCM_SVC_HDCP, 636 + QCOM_SCM_HDCP_INVOKE); 637 + 638 + qcom_scm_clk_disable(); 639 + 640 + return ret > 0 ? true : false; 641 + } 642 + EXPORT_SYMBOL(qcom_scm_hdcp_available); 643 + 644 + /** 645 + * qcom_scm_hdcp_req() - Send HDCP request. 646 + * @req: HDCP request array 647 + * @req_cnt: HDCP request array count 648 + * @resp: response buffer passed to SCM 649 + * 650 + * Write HDCP register(s) through SCM. 
651 + */ 652 + int qcom_scm_hdcp_req(struct qcom_scm_hdcp_req *req, u32 req_cnt, u32 *resp) 653 + { 654 + int ret; 655 + struct qcom_scm_desc desc = { 656 + .svc = QCOM_SCM_SVC_HDCP, 657 + .cmd = QCOM_SCM_HDCP_INVOKE, 658 + .arginfo = QCOM_SCM_ARGS(10), 659 + .args = { 660 + req[0].addr, 661 + req[0].val, 662 + req[1].addr, 663 + req[1].val, 664 + req[2].addr, 665 + req[2].val, 666 + req[3].addr, 667 + req[3].val, 668 + req[4].addr, 669 + req[4].val 670 + }, 671 + .owner = ARM_SMCCC_OWNER_SIP, 672 + }; 673 + struct qcom_scm_res res; 674 + 675 + if (req_cnt > QCOM_SCM_HDCP_MAX_REQ_CNT) 676 + return -ERANGE; 677 + 678 + ret = qcom_scm_clk_enable(); 679 + if (ret) 680 + return ret; 681 + 682 + ret = qcom_scm_call(__scm->dev, &desc, &res); 683 + *resp = res.result[0]; 684 + 685 + qcom_scm_clk_disable(); 686 + 687 + return ret; 688 + } 689 + EXPORT_SYMBOL(qcom_scm_hdcp_req); 690 + 691 + int qcom_scm_qsmmu500_wait_safe_toggle(bool en) 692 + { 693 + struct qcom_scm_desc desc = { 694 + .svc = QCOM_SCM_SVC_SMMU_PROGRAM, 695 + .cmd = QCOM_SCM_SMMU_CONFIG_ERRATA1, 696 + .arginfo = QCOM_SCM_ARGS(2), 697 + .args[0] = QCOM_SCM_SMMU_CONFIG_ERRATA1_CLIENT_ALL, 698 + .args[1] = en, 699 + .owner = ARM_SMCCC_OWNER_SIP, 700 + }; 701 + 702 + 703 + return qcom_scm_call_atomic(__scm->dev, &desc, NULL); 704 + } 705 + EXPORT_SYMBOL(qcom_scm_qsmmu500_wait_safe_toggle); 706 + 707 + static int qcom_scm_find_dload_address(struct device *dev, u64 *addr) 708 + { 709 + struct device_node *tcsr; 710 + struct device_node *np = dev->of_node; 711 + struct resource res; 712 + u32 offset; 713 + int ret; 714 + 715 + tcsr = of_parse_phandle(np, "qcom,dload-mode", 0); 716 + if (!tcsr) 717 + return 0; 718 + 719 + ret = of_address_to_resource(tcsr, 0, &res); 720 + of_node_put(tcsr); 721 + if (ret) 722 + return ret; 723 + 724 + ret = of_property_read_u32_index(np, "qcom,dload-mode", 1, &offset); 725 + if (ret < 0) 726 + return ret; 727 + 728 + *addr = res.start + offset; 729 + 730 + return 0; 731 + } 732 + 733 + /** 734 + * qcom_scm_is_available() - Checks if SCM is available 735 + */ 736 + bool qcom_scm_is_available(void) 737 + { 738 + return !!__scm; 739 + } 740 + EXPORT_SYMBOL(qcom_scm_is_available); 741 + 742 static int qcom_scm_probe(struct platform_device *pdev) 743 { 744 struct qcom_scm *scm; ··· 631 __scm = scm; 632 __scm->dev = &pdev->dev; 633 634 + __query_convention(); 635 636 /* 637 * If requested enable "download mode", from this point on warmboot
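With convention detection and dispatch handled internally, consumers only see the exported helpers above. As a hedged illustration (this snippet is not from the patch, and the register address is a made-up placeholder), a client driver could use the secure I/O accessors like so:

#include <linux/qcom_scm.h>

/* Hypothetical consumer; 0x01fc8000 is a fictitious secured register. */
static int example_poke_secure_reg(void)
{
	phys_addr_t reg = 0x01fc8000;
	unsigned int val;
	int ret;

	if (!qcom_scm_is_available())
		return -EPROBE_DEFER;	/* SCM device not probed yet */

	ret = qcom_scm_io_readl(reg, &val);
	if (ret)
		return ret;

	return qcom_scm_io_writel(reg, val | 0x1);
}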
+99 -75
drivers/firmware/qcom_scm.h
··· 1 /* SPDX-License-Identifier: GPL-2.0-only */ 2 - /* Copyright (c) 2010-2015, The Linux Foundation. All rights reserved. 3 */ 4 #ifndef __QCOM_SCM_INT_H 5 #define __QCOM_SCM_INT_H 6 7 - #define QCOM_SCM_SVC_BOOT 0x1 8 - #define QCOM_SCM_BOOT_ADDR 0x1 9 - #define QCOM_SCM_SET_DLOAD_MODE 0x10 10 - #define QCOM_SCM_BOOT_ADDR_MC 0x11 11 - #define QCOM_SCM_SET_REMOTE_STATE 0xa 12 - extern int __qcom_scm_set_remote_state(struct device *dev, u32 state, u32 id); 13 - extern int __qcom_scm_set_dload_mode(struct device *dev, bool enable); 14 15 - #define QCOM_SCM_FLAG_HLOS 0x01 16 - #define QCOM_SCM_FLAG_COLDBOOT_MC 0x02 17 - #define QCOM_SCM_FLAG_WARMBOOT_MC 0x04 18 - extern int __qcom_scm_set_warm_boot_addr(struct device *dev, void *entry, 19 - const cpumask_t *cpus); 20 - extern int __qcom_scm_set_cold_boot_addr(void *entry, const cpumask_t *cpus); 21 22 - #define QCOM_SCM_CMD_TERMINATE_PC 0x2 23 #define QCOM_SCM_FLUSH_FLAG_MASK 0x3 24 - #define QCOM_SCM_CMD_CORE_HOTPLUGGED 0x10 25 - extern void __qcom_scm_cpu_power_down(u32 flags); 26 27 - #define QCOM_SCM_SVC_IO 0x5 28 - #define QCOM_SCM_IO_READ 0x1 29 - #define QCOM_SCM_IO_WRITE 0x2 30 - extern int __qcom_scm_io_readl(struct device *dev, phys_addr_t addr, unsigned int *val); 31 - extern int __qcom_scm_io_writel(struct device *dev, phys_addr_t addr, unsigned int val); 32 33 - #define QCOM_SCM_SVC_INFO 0x6 34 - #define QCOM_IS_CALL_AVAIL_CMD 0x1 35 - extern int __qcom_scm_is_call_available(struct device *dev, u32 svc_id, 36 - u32 cmd_id); 37 38 #define QCOM_SCM_SVC_HDCP 0x11 39 - #define QCOM_SCM_CMD_HDCP 0x01 40 - extern int __qcom_scm_hdcp_req(struct device *dev, 41 - struct qcom_scm_hdcp_req *req, u32 req_cnt, u32 *resp); 42 43 extern void __qcom_scm_init(void); 44 - 45 - #define QCOM_SCM_OCMEM_SVC 0xf 46 - #define QCOM_SCM_OCMEM_LOCK_CMD 0x1 47 - #define QCOM_SCM_OCMEM_UNLOCK_CMD 0x2 48 - 49 - extern int __qcom_scm_ocmem_lock(struct device *dev, u32 id, u32 offset, 50 - u32 size, u32 mode); 51 - extern int __qcom_scm_ocmem_unlock(struct device *dev, u32 id, u32 offset, 52 - u32 size); 53 - 54 - #define QCOM_SCM_SVC_PIL 0x2 55 - #define QCOM_SCM_PAS_INIT_IMAGE_CMD 0x1 56 - #define QCOM_SCM_PAS_MEM_SETUP_CMD 0x2 57 - #define QCOM_SCM_PAS_AUTH_AND_RESET_CMD 0x5 58 - #define QCOM_SCM_PAS_SHUTDOWN_CMD 0x6 59 - #define QCOM_SCM_PAS_IS_SUPPORTED_CMD 0x7 60 - #define QCOM_SCM_PAS_MSS_RESET 0xa 61 - extern bool __qcom_scm_pas_supported(struct device *dev, u32 peripheral); 62 - extern int __qcom_scm_pas_init_image(struct device *dev, u32 peripheral, 63 - dma_addr_t metadata_phys); 64 - extern int __qcom_scm_pas_mem_setup(struct device *dev, u32 peripheral, 65 - phys_addr_t addr, phys_addr_t size); 66 - extern int __qcom_scm_pas_auth_and_reset(struct device *dev, u32 peripheral); 67 - extern int __qcom_scm_pas_shutdown(struct device *dev, u32 peripheral); 68 - extern int __qcom_scm_pas_mss_reset(struct device *dev, bool reset); 69 70 /* common error codes */ 71 #define QCOM_SCM_V2_EBUSY -12 ··· 138 } 139 return -EINVAL; 140 } 141 - 142 - #define QCOM_SCM_SVC_MP 0xc 143 - #define QCOM_SCM_RESTORE_SEC_CFG 2 144 - extern int __qcom_scm_restore_sec_cfg(struct device *dev, u32 device_id, 145 - u32 spare); 146 - #define QCOM_SCM_IOMMU_SECURE_PTBL_SIZE 3 147 - #define QCOM_SCM_IOMMU_SECURE_PTBL_INIT 4 148 - #define QCOM_SCM_SVC_SMMU_PROGRAM 0x15 149 - #define QCOM_SCM_CONFIG_ERRATA1 0x3 150 - #define QCOM_SCM_CONFIG_ERRATA1_CLIENT_ALL 0x2 151 - extern int __qcom_scm_iommu_secure_ptbl_size(struct device *dev, u32 spare, 152 - size_t *size); 153 - extern 
int __qcom_scm_iommu_secure_ptbl_init(struct device *dev, u64 addr, 154 - u32 size, u32 spare); 155 - extern int __qcom_scm_qsmmu500_wait_safe_toggle(struct device *dev, 156 - bool enable); 157 - #define QCOM_MEM_PROT_ASSIGN_ID 0x16 158 - extern int __qcom_scm_assign_mem(struct device *dev, 159 - phys_addr_t mem_region, size_t mem_sz, 160 - phys_addr_t src, size_t src_sz, 161 - phys_addr_t dest, size_t dest_sz); 162 163 #endif
··· 1 /* SPDX-License-Identifier: GPL-2.0-only */ 2 + /* Copyright (c) 2010-2015,2019 The Linux Foundation. All rights reserved. 3 */ 4 #ifndef __QCOM_SCM_INT_H 5 #define __QCOM_SCM_INT_H 6 7 + enum qcom_scm_convention { 8 + SMC_CONVENTION_UNKNOWN, 9 + SMC_CONVENTION_LEGACY, 10 + SMC_CONVENTION_ARM_32, 11 + SMC_CONVENTION_ARM_64, 12 + }; 13 14 + extern enum qcom_scm_convention qcom_scm_convention; 15 16 + #define MAX_QCOM_SCM_ARGS 10 17 + #define MAX_QCOM_SCM_RETS 3 18 + 19 + enum qcom_scm_arg_types { 20 + QCOM_SCM_VAL, 21 + QCOM_SCM_RO, 22 + QCOM_SCM_RW, 23 + QCOM_SCM_BUFVAL, 24 + }; 25 + 26 + #define QCOM_SCM_ARGS_IMPL(num, a, b, c, d, e, f, g, h, i, j, ...) (\ 27 + (((a) & 0x3) << 4) | \ 28 + (((b) & 0x3) << 6) | \ 29 + (((c) & 0x3) << 8) | \ 30 + (((d) & 0x3) << 10) | \ 31 + (((e) & 0x3) << 12) | \ 32 + (((f) & 0x3) << 14) | \ 33 + (((g) & 0x3) << 16) | \ 34 + (((h) & 0x3) << 18) | \ 35 + (((i) & 0x3) << 20) | \ 36 + (((j) & 0x3) << 22) | \ 37 + ((num) & 0xf)) 38 + 39 + #define QCOM_SCM_ARGS(...) QCOM_SCM_ARGS_IMPL(__VA_ARGS__, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) 40 + 41 + 42 + /** 43 + * struct qcom_scm_desc 44 + * @arginfo: Metadata describing the arguments in args[] 45 + * @args: The array of arguments for the secure syscall 46 + */ 47 + struct qcom_scm_desc { 48 + u32 svc; 49 + u32 cmd; 50 + u32 arginfo; 51 + u64 args[MAX_QCOM_SCM_ARGS]; 52 + u32 owner; 53 + }; 54 + 55 + /** 56 + * struct qcom_scm_res 57 + * @result: The values returned by the secure syscall 58 + */ 59 + struct qcom_scm_res { 60 + u64 result[MAX_QCOM_SCM_RETS]; 61 + }; 62 + 63 + #define SCM_SMC_FNID(s, c) ((((s) & 0xFF) << 8) | ((c) & 0xFF)) 64 + extern int scm_smc_call(struct device *dev, const struct qcom_scm_desc *desc, 65 + struct qcom_scm_res *res, bool atomic); 66 + 67 + #define SCM_LEGACY_FNID(s, c) (((s) << 10) | ((c) & 0x3ff)) 68 + extern int scm_legacy_call_atomic(struct device *dev, 69 + const struct qcom_scm_desc *desc, 70 + struct qcom_scm_res *res); 71 + extern int scm_legacy_call(struct device *dev, const struct qcom_scm_desc *desc, 72 + struct qcom_scm_res *res); 73 + 74 + #define QCOM_SCM_SVC_BOOT 0x01 75 + #define QCOM_SCM_BOOT_SET_ADDR 0x01 76 + #define QCOM_SCM_BOOT_TERMINATE_PC 0x02 77 + #define QCOM_SCM_BOOT_SET_DLOAD_MODE 0x10 78 + #define QCOM_SCM_BOOT_SET_REMOTE_STATE 0x0a 79 #define QCOM_SCM_FLUSH_FLAG_MASK 0x3 80 81 + #define QCOM_SCM_SVC_PIL 0x02 82 + #define QCOM_SCM_PIL_PAS_INIT_IMAGE 0x01 83 + #define QCOM_SCM_PIL_PAS_MEM_SETUP 0x02 84 + #define QCOM_SCM_PIL_PAS_AUTH_AND_RESET 0x05 85 + #define QCOM_SCM_PIL_PAS_SHUTDOWN 0x06 86 + #define QCOM_SCM_PIL_PAS_IS_SUPPORTED 0x07 87 + #define QCOM_SCM_PIL_PAS_MSS_RESET 0x0a 88 89 + #define QCOM_SCM_SVC_IO 0x05 90 + #define QCOM_SCM_IO_READ 0x01 91 + #define QCOM_SCM_IO_WRITE 0x02 92 + 93 + #define QCOM_SCM_SVC_INFO 0x06 94 + #define QCOM_SCM_INFO_IS_CALL_AVAIL 0x01 95 + 96 + #define QCOM_SCM_SVC_MP 0x0c 97 + #define QCOM_SCM_MP_RESTORE_SEC_CFG 0x02 98 + #define QCOM_SCM_MP_IOMMU_SECURE_PTBL_SIZE 0x03 99 + #define QCOM_SCM_MP_IOMMU_SECURE_PTBL_INIT 0x04 100 + #define QCOM_SCM_MP_ASSIGN 0x16 101 + 102 + #define QCOM_SCM_SVC_OCMEM 0x0f 103 + #define QCOM_SCM_OCMEM_LOCK_CMD 0x01 104 + #define QCOM_SCM_OCMEM_UNLOCK_CMD 0x02 105 106 #define QCOM_SCM_SVC_HDCP 0x11 107 + #define QCOM_SCM_HDCP_INVOKE 0x01 108 + 109 + #define QCOM_SCM_SVC_SMMU_PROGRAM 0x15 110 + #define QCOM_SCM_SMMU_CONFIG_ERRATA1 0x03 111 + #define QCOM_SCM_SMMU_CONFIG_ERRATA1_CLIENT_ALL 0x02 112 113 extern void __qcom_scm_init(void); 114 115 /* common error codes */ 116 #define 
QCOM_SCM_V2_EBUSY -12 ··· 93 } 94 return -EINVAL; 95 } 96 97 #endif
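The descriptor-based interface above replaces the old per-call __qcom_scm_*() prototypes: a caller fills in one qcom_scm_desc and the convention backend (legacy, SMC32 or SMC64) marshals it. The arginfo word packs the argument count into its low nibble and one 2-bit type code per argument above that, which is exactly what QCOM_SCM_ARGS() computes. A minimal caller sketch, assuming ARM_SMCCC_OWNER_SIP from linux/arm-smccc.h; the wrapper function itself is hypothetical:

	/*
	 * Hypothetical sketch: a secure register write issued through the
	 * descriptor interface. Both arguments are plain values, so the
	 * QCOM_SCM_ARGS() invocation packs an argument count of 2 with two
	 * QCOM_SCM_VAL type codes.
	 */
	static int example_scm_io_write(struct device *dev, phys_addr_t addr,
					unsigned int val)
	{
		struct qcom_scm_desc desc = {
			.svc = QCOM_SCM_SVC_IO,
			.cmd = QCOM_SCM_IO_WRITE,
			.arginfo = QCOM_SCM_ARGS(2, QCOM_SCM_VAL, QCOM_SCM_VAL),
			.args[0] = addr,
			.args[1] = val,
			.owner = ARM_SMCCC_OWNER_SIP,	/* linux/arm-smccc.h */
		};

		return scm_smc_call(dev, &desc, NULL, true);
	}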
+1 -1
drivers/firmware/turris-mox-rwtm.c
··· 197 rwtm->serial_number = reply->status[1]; 198 rwtm->serial_number <<= 32; 199 rwtm->serial_number |= reply->status[0]; 200 - rwtm->board_version = reply->status[2]; 201 rwtm->ram_size = reply->status[3]; 202 reply_to_mac_addr(rwtm->mac_address1, reply->status[4], 203 reply->status[5]);
··· 197 rwtm->serial_number = reply->status[1]; 198 rwtm->serial_number <<= 32; 199 rwtm->serial_number |= reply->status[0]; 200 + rwtm->board_version = reply->status[2]; 201 rwtm->ram_size = reply->status[3]; 202 reply_to_mac_addr(rwtm->mac_address1, reply->status[4], 203 reply->status[5]);
+43
drivers/firmware/xilinx/zynqmp.c
··· 26 27 static const struct zynqmp_eemi_ops *eemi_ops_tbl; 28 29 static const struct mfd_cell firmware_devs[] = { 30 { 31 .name = "zynqmp_power_controller", ··· 47 case XST_PM_SUCCESS: 48 case XST_PM_DOUBLE_REQ: 49 return 0; 50 case XST_PM_NO_ACCESS: 51 return -EACCES; 52 case XST_PM_ABORT_SUSPEND: ··· 134 } 135 136 /** 137 * zynqmp_pm_invoke_fn() - Invoke the system-level platform management layer 138 * caller function depending on the configuration 139 * @pm_api_id: Requested PM-API call ··· 199 * Make sure to stay in x0 register 200 */ 201 u64 smc_arg[4]; 202 203 smc_arg[0] = PM_SIP_SVC | pm_api_id; 204 smc_arg[1] = ((u64)arg1 << 32) | arg0; ··· 758 np = of_find_compatible_node(NULL, NULL, "xlnx,versal"); 759 if (!np) 760 return 0; 761 } 762 of_node_put(np); 763
··· 26 27 static const struct zynqmp_eemi_ops *eemi_ops_tbl; 28 29 + static bool feature_check_enabled; 30 + static u32 zynqmp_pm_features[PM_API_MAX]; 31 + 32 static const struct mfd_cell firmware_devs[] = { 33 { 34 .name = "zynqmp_power_controller", ··· 44 case XST_PM_SUCCESS: 45 case XST_PM_DOUBLE_REQ: 46 return 0; 47 + case XST_PM_NO_FEATURE: 48 + return -ENOTSUPP; 49 case XST_PM_NO_ACCESS: 50 return -EACCES; 51 case XST_PM_ABORT_SUSPEND: ··· 129 } 130 131 /** 132 + * zynqmp_pm_feature() - Check whether the given feature is supported 133 + * @api_id: API ID to check 134 + * 135 + * Return: Returns status, either success or error+reason 136 + */ 137 + static int zynqmp_pm_feature(u32 api_id) 138 + { 139 + int ret; 140 + u32 ret_payload[PAYLOAD_ARG_CNT]; 141 + u64 smc_arg[2]; 142 + 143 + if (!feature_check_enabled) 144 + return 0; 145 + 146 + /* Return the cached value if the feature has already been checked */ 147 + if (zynqmp_pm_features[api_id] != PM_FEATURE_UNCHECKED) 148 + return zynqmp_pm_features[api_id]; 149 + 150 + smc_arg[0] = PM_SIP_SVC | PM_FEATURE_CHECK; 151 + smc_arg[1] = api_id; 152 + 153 + ret = do_fw_call(smc_arg[0], smc_arg[1], 0, ret_payload); 154 + if (ret) { 155 + zynqmp_pm_features[api_id] = PM_FEATURE_INVALID; 156 + return PM_FEATURE_INVALID; 157 + } 158 + 159 + zynqmp_pm_features[api_id] = ret_payload[1]; 160 + 161 + return zynqmp_pm_features[api_id]; 162 + } 163 + 164 + /** 165 * zynqmp_pm_invoke_fn() - Invoke the system-level platform management layer 166 * caller function depending on the configuration 167 * @pm_api_id: Requested PM-API call ··· 161 * Make sure to stay in x0 register 162 */ 163 u64 smc_arg[4]; 164 + 165 + if (zynqmp_pm_feature(pm_api_id) == PM_FEATURE_INVALID) 166 + return -ENOTSUPP; 167 168 smc_arg[0] = PM_SIP_SVC | pm_api_id; 169 smc_arg[1] = ((u64)arg1 << 32) | arg0; ··· 717 np = of_find_compatible_node(NULL, NULL, "xlnx,versal"); 718 if (!np) 719 return 0; 720 + 721 + feature_check_enabled = true; 722 } 723 of_node_put(np); 724
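The net effect is that each optional PM API costs at most one PM_FEATURE_CHECK round-trip to firmware: the first query fills zynqmp_pm_features[] and later queries are answered from that cache, while zynqmp_pm_invoke_fn() now fails fast with -ENOTSUPP for calls the firmware lacks. A hedged usage sketch, with PM_GET_API_VERSION standing in for any optional API ID:

	/*
	 * Illustrative only: the first call below may trigger a
	 * PM_FEATURE_CHECK SMC, the second is served from the
	 * zynqmp_pm_features[] cache. On non-Versal platforms
	 * feature_check_enabled stays false and both return 0.
	 */
	if (zynqmp_pm_feature(PM_GET_API_VERSION) == PM_FEATURE_INVALID)
		return -ENOTSUPP;		/* firmware lacks this call */

	(void)zynqmp_pm_feature(PM_GET_API_VERSION);	/* cached, no SMC */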
+1 -1
drivers/hwmon/scmi-hwmon.c
··· 259 } 260 261 static const struct scmi_device_id scmi_id_table[] = { 262 - { SCMI_PROTOCOL_SENSOR }, 263 { }, 264 }; 265 MODULE_DEVICE_TABLE(scmi, scmi_id_table);
··· 259 } 260 261 static const struct scmi_device_id scmi_id_table[] = { 262 + { SCMI_PROTOCOL_SENSOR, "hwmon" }, 263 { }, 264 }; 265 MODULE_DEVICE_TABLE(scmi, scmi_id_table);
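The string is the new part: SCMI protocol drivers are now matched on a (protocol, name) pair instead of the protocol number alone, so several drivers can bind to devices created for the same protocol. A sketch of a second, hypothetical consumer of the sensor protocol (the "iio" name is illustrative, not from this series):

	/* Hypothetical second consumer of SCMI_PROTOCOL_SENSOR; the SCMI
	 * core routes each scmi_device to the driver whose name matches
	 * the name the device was created with. */
	static const struct scmi_device_id other_scmi_id_table[] = {
		{ SCMI_PROTOCOL_SENSOR, "iio" },
		{ },
	};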
+1 -4
drivers/mailbox/armada-37xx-rwtm-mailbox.c
··· 143 static int armada_37xx_mbox_probe(struct platform_device *pdev) 144 { 145 struct a37xx_mbox *mbox; 146 - struct resource *regs; 147 struct mbox_chan *chans; 148 int ret; 149 ··· 155 if (!chans) 156 return -ENOMEM; 157 158 - regs = platform_get_resource(pdev, IORESOURCE_MEM, 0); 159 - 160 - mbox->base = devm_ioremap_resource(&pdev->dev, regs); 161 if (IS_ERR(mbox->base)) { 162 dev_err(&pdev->dev, "ioremap failed\n"); 163 return PTR_ERR(mbox->base);
··· 143 static int armada_37xx_mbox_probe(struct platform_device *pdev) 144 { 145 struct a37xx_mbox *mbox; 146 struct mbox_chan *chans; 147 int ret; 148 ··· 156 if (!chans) 157 return -ENOMEM; 158 159 + mbox->base = devm_platform_ioremap_resource(pdev, 0); 160 if (IS_ERR(mbox->base)) { 161 dev_err(&pdev->dev, "ioremap failed\n"); 162 return PTR_ERR(mbox->base);
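The same conversion recurs in the mvebu-devbus and exynos5422-dmc hunks below: devm_platform_ioremap_resource() folds the platform_get_resource()/devm_ioremap_resource() pair into one call. Roughly, the helper amounts to the code it replaces (a sketch for reference, not the canonical definition):

	static void __iomem *ioremap_index(struct platform_device *pdev,
					   unsigned int index)
	{
		struct resource *res;

		/* the two steps the call sites used to spell out */
		res = platform_get_resource(pdev, IORESOURCE_MEM, index);
		return devm_ioremap_resource(&pdev->dev, res);
	}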
+1 -3
drivers/memory/mvebu-devbus.c
··· 267 struct devbus_read_params r; 268 struct devbus_write_params w; 269 struct devbus *devbus; 270 - struct resource *res; 271 struct clk *clk; 272 unsigned long rate; 273 int err; ··· 276 return -ENOMEM; 277 278 devbus->dev = dev; 279 - res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 280 - devbus->base = devm_ioremap_resource(&pdev->dev, res); 281 if (IS_ERR(devbus->base)) 282 return PTR_ERR(devbus->base); 283
··· 267 struct devbus_read_params r; 268 struct devbus_write_params w; 269 struct devbus *devbus; 270 struct clk *clk; 271 unsigned long rate; 272 int err; ··· 277 return -ENOMEM; 278 279 devbus->dev = dev; 280 + devbus->base = devm_platform_ioremap_resource(pdev, 0); 281 if (IS_ERR(devbus->base)) 282 return PTR_ERR(devbus->base); 283
+1 -1
drivers/memory/samsung/Kconfig
··· 8 if SAMSUNG_MC 9 10 config EXYNOS5422_DMC 11 - tristate "EXYNOS5422 Dynamic Memory Controller driver" 12 depends on ARCH_EXYNOS || (COMPILE_TEST && HAS_IOMEM) 13 select DDR 14 depends on DEVFREQ_GOV_SIMPLE_ONDEMAND
··· 8 if SAMSUNG_MC 9 10 config EXYNOS5422_DMC 11 + tristate "Exynos5422 Dynamic Memory Controller driver" 12 depends on ARCH_EXYNOS || (COMPILE_TEST && HAS_IOMEM) 13 select DDR 14 depends on DEVFREQ_GOV_SIMPLE_ONDEMAND
+1 -1
drivers/memory/samsung/exynos-srom.c
··· 3 // Copyright (c) 2015 Samsung Electronics Co., Ltd. 4 // http://www.samsung.com/ 5 // 6 - // EXYNOS - SROM Controller support 7 // Author: Pankaj Dubey <pankaj.dubey@samsung.com> 8 9 #include <linux/io.h>
··· 3 // Copyright (c) 2015 Samsung Electronics Co., Ltd. 4 // http://www.samsung.com/ 5 // 6 + // Exynos - SROM Controller support 7 // Author: Pankaj Dubey <pankaj.dubey@samsung.com> 8 9 #include <linux/io.h>
+2 -5
drivers/memory/samsung/exynos5422-dmc.c
··· 1374 struct device *dev = &pdev->dev; 1375 struct device_node *np = dev->of_node; 1376 struct exynos5_dmc *dmc; 1377 - struct resource *res; 1378 int irq[2]; 1379 1380 dmc = devm_kzalloc(dev, sizeof(*dmc), GFP_KERNEL); ··· 1385 dmc->dev = dev; 1386 platform_set_drvdata(pdev, dmc); 1387 1388 - res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 1389 - dmc->base_drexi0 = devm_ioremap_resource(dev, res); 1390 if (IS_ERR(dmc->base_drexi0)) 1391 return PTR_ERR(dmc->base_drexi0); 1392 1393 - res = platform_get_resource(pdev, IORESOURCE_MEM, 1); 1394 - dmc->base_drexi1 = devm_ioremap_resource(dev, res); 1395 if (IS_ERR(dmc->base_drexi1)) 1396 return PTR_ERR(dmc->base_drexi1); 1397
··· 1374 struct device *dev = &pdev->dev; 1375 struct device_node *np = dev->of_node; 1376 struct exynos5_dmc *dmc; 1377 int irq[2]; 1378 1379 dmc = devm_kzalloc(dev, sizeof(*dmc), GFP_KERNEL); ··· 1386 dmc->dev = dev; 1387 platform_set_drvdata(pdev, dmc); 1388 1389 + dmc->base_drexi0 = devm_platform_ioremap_resource(pdev, 0); 1390 if (IS_ERR(dmc->base_drexi0)) 1391 return PTR_ERR(dmc->base_drexi0); 1392 1393 + dmc->base_drexi1 = devm_platform_ioremap_resource(pdev, 1); 1394 if (IS_ERR(dmc->base_drexi1)) 1395 return PTR_ERR(dmc->base_drexi1); 1396
+2 -1
drivers/memory/tegra/Makefile
··· 13 obj-$(CONFIG_TEGRA20_EMC) += tegra20-emc.o 14 obj-$(CONFIG_TEGRA30_EMC) += tegra30-emc.o 15 obj-$(CONFIG_TEGRA124_EMC) += tegra124-emc.o 16 - obj-$(CONFIG_ARCH_TEGRA_186_SOC) += tegra186.o
··· 13 obj-$(CONFIG_TEGRA20_EMC) += tegra20-emc.o 14 obj-$(CONFIG_TEGRA30_EMC) += tegra30-emc.o 15 obj-$(CONFIG_TEGRA124_EMC) += tegra124-emc.o 16 + obj-$(CONFIG_ARCH_TEGRA_186_SOC) += tegra186.o tegra186-emc.o 17 + obj-$(CONFIG_ARCH_TEGRA_194_SOC) += tegra186.o tegra186-emc.o
+145 -44
drivers/memory/tegra/tegra124-emc.c
··· 467 468 void __iomem *regs; 469 470 enum emc_dram_type dram_type; 471 unsigned int dram_num; 472 473 struct emc_timing last_timing; 474 struct emc_timing *timings; 475 unsigned int num_timings; 476 }; 477 478 /* Timing change sequence functions */ ··· 1006 return NULL; 1007 } 1008 1009 - /* Debugfs entry */ 1010 1011 - static int emc_debug_rate_get(void *data, u64 *rate) 1012 { 1013 - struct clk *c = data; 1014 1015 - *rate = clk_get_rate(c); 1016 1017 - return 0; 1018 } 1019 1020 - static int emc_debug_rate_set(void *data, u64 rate) 1021 - { 1022 - struct clk *c = data; 1023 - 1024 - return clk_set_rate(c, rate); 1025 - } 1026 - 1027 - DEFINE_SIMPLE_ATTRIBUTE(emc_debug_rate_fops, emc_debug_rate_get, 1028 - emc_debug_rate_set, "%lld\n"); 1029 - 1030 - static int emc_debug_supported_rates_show(struct seq_file *s, void *data) 1031 { 1032 struct tegra_emc *emc = s->private; 1033 const char *prefix = ""; 1034 unsigned int i; 1035 1036 for (i = 0; i < emc->num_timings; i++) { 1037 - struct emc_timing *timing = &emc->timings[i]; 1038 - 1039 - seq_printf(s, "%s%lu", prefix, timing->rate); 1040 - 1041 prefix = " "; 1042 } 1043 ··· 1059 return 0; 1060 } 1061 1062 - static int emc_debug_supported_rates_open(struct inode *inode, 1063 - struct file *file) 1064 { 1065 - return single_open(file, emc_debug_supported_rates_show, 1066 inode->i_private); 1067 } 1068 1069 - static const struct file_operations emc_debug_supported_rates_fops = { 1070 - .open = emc_debug_supported_rates_open, 1071 .read = seq_read, 1072 .llseek = seq_lseek, 1073 .release = single_release, 1074 }; 1075 1076 static void emc_debugfs_init(struct device *dev, struct tegra_emc *emc) 1077 { 1078 - struct dentry *root, *file; 1079 - struct clk *clk; 1080 1081 - root = debugfs_create_dir("emc", NULL); 1082 - if (!root) { 1083 dev_err(dev, "failed to create debugfs directory\n"); 1084 return; 1085 } 1086 1087 - clk = clk_get_sys("tegra-clk-debug", "emc"); 1088 - if (IS_ERR(clk)) { 1089 - dev_err(dev, "failed to get debug clock: %ld\n", PTR_ERR(clk)); 1090 - return; 1091 - } 1092 - 1093 - file = debugfs_create_file("rate", S_IRUGO | S_IWUSR, root, clk, 1094 - &emc_debug_rate_fops); 1095 - if (!file) 1096 - dev_err(dev, "failed to create debugfs entry\n"); 1097 - 1098 - file = debugfs_create_file("supported_rates", S_IRUGO, root, emc, 1099 - &emc_debug_supported_rates_fops); 1100 - if (!file) 1101 - dev_err(dev, "failed to create debugfs entry\n"); 1102 } 1103 1104 static int tegra_emc_probe(struct platform_device *pdev)
··· 467 468 void __iomem *regs; 469 470 + struct clk *clk; 471 + 472 enum emc_dram_type dram_type; 473 unsigned int dram_num; 474 475 struct emc_timing last_timing; 476 struct emc_timing *timings; 477 unsigned int num_timings; 478 + 479 + struct { 480 + struct dentry *root; 481 + unsigned long min_rate; 482 + unsigned long max_rate; 483 + } debugfs; 484 }; 485 486 /* Timing change sequence functions */ ··· 998 return NULL; 999 } 1000 1001 + /* 1002 + * debugfs interface 1003 + * 1004 + * The memory controller driver exposes some files in debugfs that can be used 1005 + * to control the EMC frequency. The top-level directory can be found here: 1006 + * 1007 + * /sys/kernel/debug/emc 1008 + * 1009 + * It contains the following files: 1010 + * 1011 + * - available_rates: This file contains a list of valid, space-separated 1012 + * EMC frequencies. 1013 + * 1014 + * - min_rate: Writing a value to this file sets the given frequency as the 1015 + * floor of the permitted range. If this is higher than the currently 1016 + * configured EMC frequency, this will cause the frequency to be 1017 + * increased so that it stays within the valid range. 1018 + * 1019 + * - max_rate: Similarly to the min_rate file, writing a value to this file 1020 + * sets the given frequency as the ceiling of the permitted range. If 1021 + * the value is lower than the currently configured EMC frequency, this 1022 + * will cause the frequency to be decreased so that it stays within the 1023 + * valid range. 1024 + */ 1025 1026 + static bool tegra_emc_validate_rate(struct tegra_emc *emc, unsigned long rate) 1027 + { 1028 + unsigned int i; 1029 1030 + for (i = 0; i < emc->num_timings; i++) 1031 + if (rate == emc->timings[i].rate) 1032 + return true; 1033 1034 + return false; 1035 + } 1036 1037 + static int tegra_emc_debug_available_rates_show(struct seq_file *s, 1038 + void *data) 1039 { 1040 struct tegra_emc *emc = s->private; 1041 const char *prefix = ""; 1042 unsigned int i; 1043 1044 for (i = 0; i < emc->num_timings; i++) { 1045 + seq_printf(s, "%s%lu", prefix, emc->timings[i].rate); 1046 prefix = " "; 1047 } 1048 ··· 1038 return 0; 1039 } 1040 1041 + static int tegra_emc_debug_available_rates_open(struct inode *inode, 1042 + struct file *file) 1043 { 1044 + return single_open(file, tegra_emc_debug_available_rates_show, 1045 inode->i_private); 1046 } 1047 1048 + static const struct file_operations tegra_emc_debug_available_rates_fops = { 1049 + .open = tegra_emc_debug_available_rates_open, 1050 .read = seq_read, 1051 .llseek = seq_lseek, 1052 .release = single_release, 1053 }; 1054 1055 + static int tegra_emc_debug_min_rate_get(void *data, u64 *rate) 1056 + { 1057 + struct tegra_emc *emc = data; 1058 + 1059 + *rate = emc->debugfs.min_rate; 1060 + 1061 + return 0; 1062 + } 1063 + 1064 + static int tegra_emc_debug_min_rate_set(void *data, u64 rate) 1065 + { 1066 + struct tegra_emc *emc = data; 1067 + int err; 1068 + 1069 + if (!tegra_emc_validate_rate(emc, rate)) 1070 + return -EINVAL; 1071 + 1072 + err = clk_set_min_rate(emc->clk, rate); 1073 + if (err < 0) 1074 + return err; 1075 + 1076 + emc->debugfs.min_rate = rate; 1077 + 1078 + return 0; 1079 + } 1080 + 1081 + DEFINE_SIMPLE_ATTRIBUTE(tegra_emc_debug_min_rate_fops, 1082 + tegra_emc_debug_min_rate_get, 1083 + tegra_emc_debug_min_rate_set, "%llu\n"); 1084 + 1085 + static int tegra_emc_debug_max_rate_get(void *data, u64 *rate) 1086 + { 1087 + struct tegra_emc *emc = data; 1088 + 1089 + *rate = emc->debugfs.max_rate; 1090 + 1091 + return 0; 1092 + } 1093 + 1094 + static int 
tegra_emc_debug_max_rate_set(void *data, u64 rate) 1095 + { 1096 + struct tegra_emc *emc = data; 1097 + int err; 1098 + 1099 + if (!tegra_emc_validate_rate(emc, rate)) 1100 + return -EINVAL; 1101 + 1102 + err = clk_set_max_rate(emc->clk, rate); 1103 + if (err < 0) 1104 + return err; 1105 + 1106 + emc->debugfs.max_rate = rate; 1107 + 1108 + return 0; 1109 + } 1110 + 1111 + DEFINE_SIMPLE_ATTRIBUTE(tegra_emc_debug_max_rate_fops, 1112 + tegra_emc_debug_max_rate_get, 1113 + tegra_emc_debug_max_rate_set, "%llu\n"); 1114 + 1115 static void emc_debugfs_init(struct device *dev, struct tegra_emc *emc) 1116 { 1117 + unsigned int i; 1118 + int err; 1119 1120 + emc->clk = devm_clk_get(dev, "emc"); 1121 + if (IS_ERR(emc->clk)) { 1122 + if (PTR_ERR(emc->clk) != -ENODEV) { 1123 + dev_err(dev, "failed to get EMC clock: %ld\n", 1124 + PTR_ERR(emc->clk)); 1125 + return; 1126 + } 1127 + } 1128 + 1129 + emc->debugfs.min_rate = ULONG_MAX; 1130 + emc->debugfs.max_rate = 0; 1131 + 1132 + for (i = 0; i < emc->num_timings; i++) { 1133 + if (emc->timings[i].rate < emc->debugfs.min_rate) 1134 + emc->debugfs.min_rate = emc->timings[i].rate; 1135 + 1136 + if (emc->timings[i].rate > emc->debugfs.max_rate) 1137 + emc->debugfs.max_rate = emc->timings[i].rate; 1138 + } 1139 + 1140 + err = clk_set_rate_range(emc->clk, emc->debugfs.min_rate, 1141 + emc->debugfs.max_rate); 1142 + if (err < 0) { 1143 + dev_err(dev, "failed to set rate range [%lu-%lu] for %pC\n", 1144 + emc->debugfs.min_rate, emc->debugfs.max_rate, 1145 + emc->clk); 1146 + return; 1147 + } 1148 + 1149 + emc->debugfs.root = debugfs_create_dir("emc", NULL); 1150 + if (!emc->debugfs.root) { 1151 dev_err(dev, "failed to create debugfs directory\n"); 1152 return; 1153 } 1154 1155 + debugfs_create_file("available_rates", S_IRUGO, emc->debugfs.root, emc, 1156 + &tegra_emc_debug_available_rates_fops); 1157 + debugfs_create_file("min_rate", S_IRUGO | S_IWUSR, emc->debugfs.root, 1158 + emc, &tegra_emc_debug_min_rate_fops); 1159 + debugfs_create_file("max_rate", S_IRUGO | S_IWUSR, emc->debugfs.root, 1160 + emc, &tegra_emc_debug_max_rate_fops); 1161 } 1162 1163 static int tegra_emc_probe(struct platform_device *pdev)
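As a usage sketch for the interface documented in the comment block above: pick a frequency from available_rates and write it to min_rate or max_rate. This assumes debugfs is mounted at /sys/kernel/debug and that 204000000 appears in available_rates on the target board:

	/* Userspace sketch: pin the EMC floor to 204 MHz. A rate not listed
	 * in available_rates is rejected with EINVAL by the kernel side. */
	#include <stdio.h>

	int main(void)
	{
		FILE *f = fopen("/sys/kernel/debug/emc/min_rate", "w");

		if (!f)
			return 1;

		fprintf(f, "204000000");
		return fclose(f) ? 1 : 0;	/* fclose() flushes the write */
	}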
+293
drivers/memory/tegra/tegra186-emc.c
···
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + /* 3 + * Copyright (C) 2019 NVIDIA CORPORATION. All rights reserved. 4 + */ 5 + 6 + #include <linux/clk.h> 7 + #include <linux/debugfs.h> 8 + #include <linux/module.h> 9 + #include <linux/mod_devicetable.h> 10 + #include <linux/platform_device.h> 11 + 12 + #include <soc/tegra/bpmp.h> 13 + 14 + struct tegra186_emc_dvfs { 15 + unsigned long latency; 16 + unsigned long rate; 17 + }; 18 + 19 + struct tegra186_emc { 20 + struct tegra_bpmp *bpmp; 21 + struct device *dev; 22 + struct clk *clk; 23 + 24 + struct tegra186_emc_dvfs *dvfs; 25 + unsigned int num_dvfs; 26 + 27 + struct { 28 + struct dentry *root; 29 + unsigned long min_rate; 30 + unsigned long max_rate; 31 + } debugfs; 32 + }; 33 + 34 + /* 35 + * debugfs interface 36 + * 37 + * The memory controller driver exposes some files in debugfs that can be used 38 + * to control the EMC frequency. The top-level directory can be found here: 39 + * 40 + * /sys/kernel/debug/emc 41 + * 42 + * It contains the following files: 43 + * 44 + * - available_rates: This file contains a list of valid, space-separated 45 + * EMC frequencies. 46 + * 47 + * - min_rate: Writing a value to this file sets the given frequency as the 48 + * floor of the permitted range. If this is higher than the currently 49 + * configured EMC frequency, this will cause the frequency to be 50 + * increased so that it stays within the valid range. 51 + * 52 + * - max_rate: Similarly to the min_rate file, writing a value to this file 53 + * sets the given frequency as the ceiling of the permitted range. If 54 + * the value is lower than the currently configured EMC frequency, this 55 + * will cause the frequency to be decreased so that it stays within the 56 + * valid range. 57 + */ 58 + 59 + static bool tegra186_emc_validate_rate(struct tegra186_emc *emc, 60 + unsigned long rate) 61 + { 62 + unsigned int i; 63 + 64 + for (i = 0; i < emc->num_dvfs; i++) 65 + if (rate == emc->dvfs[i].rate) 66 + return true; 67 + 68 + return false; 69 + } 70 + 71 + static int tegra186_emc_debug_available_rates_show(struct seq_file *s, 72 + void *data) 73 + { 74 + struct tegra186_emc *emc = s->private; 75 + const char *prefix = ""; 76 + unsigned int i; 77 + 78 + for (i = 0; i < emc->num_dvfs; i++) { 79 + seq_printf(s, "%s%lu", prefix, emc->dvfs[i].rate); 80 + prefix = " "; 81 + } 82 + 83 + seq_puts(s, "\n"); 84 + 85 + return 0; 86 + } 87 + 88 + static int tegra186_emc_debug_available_rates_open(struct inode *inode, 89 + struct file *file) 90 + { 91 + return single_open(file, tegra186_emc_debug_available_rates_show, 92 + inode->i_private); 93 + } 94 + 95 + static const struct file_operations tegra186_emc_debug_available_rates_fops = { 96 + .open = tegra186_emc_debug_available_rates_open, 97 + .read = seq_read, 98 + .llseek = seq_lseek, 99 + .release = single_release, 100 + }; 101 + 102 + static int tegra186_emc_debug_min_rate_get(void *data, u64 *rate) 103 + { 104 + struct tegra186_emc *emc = data; 105 + 106 + *rate = emc->debugfs.min_rate; 107 + 108 + return 0; 109 + } 110 + 111 + static int tegra186_emc_debug_min_rate_set(void *data, u64 rate) 112 + { 113 + struct tegra186_emc *emc = data; 114 + int err; 115 + 116 + if (!tegra186_emc_validate_rate(emc, rate)) 117 + return -EINVAL; 118 + 119 + err = clk_set_min_rate(emc->clk, rate); 120 + if (err < 0) 121 + return err; 122 + 123 + emc->debugfs.min_rate = rate; 124 + 125 + return 0; 126 + } 127 + 128 + DEFINE_SIMPLE_ATTRIBUTE(tegra186_emc_debug_min_rate_fops, 129 + tegra186_emc_debug_min_rate_get, 130 
+ tegra186_emc_debug_min_rate_set, "%llu\n"); 131 + 132 + static int tegra186_emc_debug_max_rate_get(void *data, u64 *rate) 133 + { 134 + struct tegra186_emc *emc = data; 135 + 136 + *rate = emc->debugfs.max_rate; 137 + 138 + return 0; 139 + } 140 + 141 + static int tegra186_emc_debug_max_rate_set(void *data, u64 rate) 142 + { 143 + struct tegra186_emc *emc = data; 144 + int err; 145 + 146 + if (!tegra186_emc_validate_rate(emc, rate)) 147 + return -EINVAL; 148 + 149 + err = clk_set_max_rate(emc->clk, rate); 150 + if (err < 0) 151 + return err; 152 + 153 + emc->debugfs.max_rate = rate; 154 + 155 + return 0; 156 + } 157 + 158 + DEFINE_SIMPLE_ATTRIBUTE(tegra186_emc_debug_max_rate_fops, 159 + tegra186_emc_debug_max_rate_get, 160 + tegra186_emc_debug_max_rate_set, "%llu\n"); 161 + 162 + static int tegra186_emc_probe(struct platform_device *pdev) 163 + { 164 + struct mrq_emc_dvfs_latency_response response; 165 + struct tegra_bpmp_message msg; 166 + struct tegra186_emc *emc; 167 + unsigned int i; 168 + int err; 169 + 170 + emc = devm_kzalloc(&pdev->dev, sizeof(*emc), GFP_KERNEL); 171 + if (!emc) 172 + return -ENOMEM; 173 + 174 + emc->bpmp = tegra_bpmp_get(&pdev->dev); 175 + if (IS_ERR(emc->bpmp)) { 176 + err = PTR_ERR(emc->bpmp); 177 + 178 + if (err != -EPROBE_DEFER) 179 + dev_err(&pdev->dev, "failed to get BPMP: %d\n", err); 180 + 181 + return err; 182 + } 183 + 184 + emc->clk = devm_clk_get(&pdev->dev, "emc"); 185 + if (IS_ERR(emc->clk)) { 186 + err = PTR_ERR(emc->clk); 187 + dev_err(&pdev->dev, "failed to get EMC clock: %d\n", err); 188 + return err; 189 + } 190 + 191 + platform_set_drvdata(pdev, emc); 192 + emc->dev = &pdev->dev; 193 + 194 + memset(&msg, 0, sizeof(msg)); 195 + msg.mrq = MRQ_EMC_DVFS_LATENCY; 196 + msg.tx.data = NULL; 197 + msg.tx.size = 0; 198 + msg.rx.data = &response; 199 + msg.rx.size = sizeof(response); 200 + 201 + err = tegra_bpmp_transfer(emc->bpmp, &msg); 202 + if (err < 0) { 203 + dev_err(&pdev->dev, "failed to get EMC DVFS pairs: %d\n", err); 204 + return err; 205 + } 206 + 207 + emc->debugfs.min_rate = ULONG_MAX; 208 + emc->debugfs.max_rate = 0; 209 + 210 + emc->num_dvfs = response.num_pairs; 211 + 212 + emc->dvfs = devm_kmalloc_array(&pdev->dev, emc->num_dvfs, 213 + sizeof(*emc->dvfs), GFP_KERNEL); 214 + if (!emc->dvfs) 215 + return -ENOMEM; 216 + 217 + dev_dbg(&pdev->dev, "%u DVFS pairs:\n", emc->num_dvfs); 218 + 219 + for (i = 0; i < emc->num_dvfs; i++) { 220 + emc->dvfs[i].rate = response.pairs[i].freq * 1000; 221 + emc->dvfs[i].latency = response.pairs[i].latency; 222 + 223 + if (emc->dvfs[i].rate < emc->debugfs.min_rate) 224 + emc->debugfs.min_rate = emc->dvfs[i].rate; 225 + 226 + if (emc->dvfs[i].rate > emc->debugfs.max_rate) 227 + emc->debugfs.max_rate = emc->dvfs[i].rate; 228 + 229 + dev_dbg(&pdev->dev, " %2u: %lu Hz -> %lu us\n", i, 230 + emc->dvfs[i].rate, emc->dvfs[i].latency); 231 + } 232 + 233 + err = clk_set_rate_range(emc->clk, emc->debugfs.min_rate, 234 + emc->debugfs.max_rate); 235 + if (err < 0) { 236 + dev_err(&pdev->dev, 237 + "failed to set rate range [%lu-%lu] for %pC\n", 238 + emc->debugfs.min_rate, emc->debugfs.max_rate, 239 + emc->clk); 240 + return err; 241 + } 242 + 243 + emc->debugfs.root = debugfs_create_dir("emc", NULL); 244 + if (!emc->debugfs.root) { 245 + dev_err(&pdev->dev, "failed to create debugfs directory\n"); 246 + return 0; 247 + } 248 + 249 + debugfs_create_file("available_rates", S_IRUGO, emc->debugfs.root, 250 + emc, &tegra186_emc_debug_available_rates_fops); 251 + debugfs_create_file("min_rate", S_IRUGO | S_IWUSR, 
emc->debugfs.root, 252 + emc, &tegra186_emc_debug_min_rate_fops); 253 + debugfs_create_file("max_rate", S_IRUGO | S_IWUSR, emc->debugfs.root, 254 + emc, &tegra186_emc_debug_max_rate_fops); 255 + 256 + return 0; 257 + } 258 + 259 + static int tegra186_emc_remove(struct platform_device *pdev) 260 + { 261 + struct tegra186_emc *emc = platform_get_drvdata(pdev); 262 + 263 + debugfs_remove_recursive(emc->debugfs.root); 264 + tegra_bpmp_put(emc->bpmp); 265 + 266 + return 0; 267 + } 268 + 269 + static const struct of_device_id tegra186_emc_of_match[] = { 270 + #if defined(CONFIG_ARCH_TEGRA186_SOC) 271 + { .compatible = "nvidia,tegra186-emc" }, 272 + #endif 273 + #if defined(CONFIG_ARCH_TEGRA194_SOC) 274 + { .compatible = "nvidia,tegra194-emc" }, 275 + #endif 276 + { /* sentinel */ } 277 + }; 278 + MODULE_DEVICE_TABLE(of, tegra186_emc_of_match); 279 + 280 + static struct platform_driver tegra186_emc_driver = { 281 + .driver = { 282 + .name = "tegra186-emc", 283 + .of_match_table = tegra186_emc_of_match, 284 + .suppress_bind_attrs = true, 285 + }, 286 + .probe = tegra186_emc_probe, 287 + .remove = tegra186_emc_remove, 288 + }; 289 + module_platform_driver(tegra186_emc_driver); 290 + 291 + MODULE_AUTHOR("Thierry Reding <treding@nvidia.com>"); 292 + MODULE_DESCRIPTION("NVIDIA Tegra186 External Memory Controller driver"); 293 + MODULE_LICENSE("GPL v2");
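The probe above fetches the (frequency, latency) table from the BPMP; given the freq * 1000 conversion and the "%lu Hz" debug print, frequencies arrive in kHz. A sketch of the response layout as this driver consumes it — the canonical definition lives in the BPMP ABI header (soc/tegra/bpmp-abi.h), and the array bound shown here is an assumption:

	/* Sketch of the MRQ_EMC_DVFS_LATENCY reply; field names follow the
	 * accesses in tegra186_emc_probe() (num_pairs, pairs[i].freq,
	 * pairs[i].latency). The pair-array maximum is illustrative. */
	struct mrq_emc_dvfs_latency_response {
		uint32_t num_pairs;
		struct {
			uint32_t freq;		/* kHz; converted to Hz above */
			uint32_t latency;	/* switch latency at this rate */
		} pairs[14];
	};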
+1039 -32
drivers/memory/tegra/tegra186.c
··· 6 #include <linux/io.h> 7 #include <linux/module.h> 8 #include <linux/mod_devicetable.h> 9 #include <linux/platform_device.h> 10 11 #include <dt-bindings/memory/tegra186-mc.h> 12 13 - struct tegra_mc { 14 - struct device *dev; 15 - void __iomem *regs; 16 - }; 17 18 - struct tegra_mc_client { 19 const char *name; 20 unsigned int sid; 21 struct { ··· 26 } regs; 27 }; 28 29 - static const struct tegra_mc_client tegra186_mc_clients[] = { 30 { 31 .name = "ptcr", 32 .sid = TEGRA186_SID_PASSTHROUGH, ··· 573 }, 574 }; 575 576 static int tegra186_mc_probe(struct platform_device *pdev) 577 { 578 struct resource *res; 579 - struct tegra_mc *mc; 580 - unsigned int i; 581 - int err = 0; 582 583 mc = devm_kzalloc(&pdev->dev, sizeof(*mc), GFP_KERNEL); 584 if (!mc) 585 return -ENOMEM; 586 587 res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 588 mc->regs = devm_ioremap_resource(&pdev->dev, res); ··· 1540 1541 mc->dev = &pdev->dev; 1542 1543 - for (i = 0; i < ARRAY_SIZE(tegra186_mc_clients); i++) { 1544 - const struct tegra_mc_client *client = &tegra186_mc_clients[i]; 1545 - u32 override, security; 1546 - 1547 - override = readl(mc->regs + client->regs.override); 1548 - security = readl(mc->regs + client->regs.security); 1549 - 1550 - dev_dbg(&pdev->dev, "client %s: override: %x security: %x\n", 1551 - client->name, override, security); 1552 - 1553 - dev_dbg(&pdev->dev, "setting SID %u for %s\n", client->sid, 1554 - client->name); 1555 - writel(client->sid, mc->regs + client->regs.override); 1556 - 1557 - override = readl(mc->regs + client->regs.override); 1558 - security = readl(mc->regs + client->regs.security); 1559 - 1560 - dev_dbg(&pdev->dev, "client %s: override: %x security: %x\n", 1561 - client->name, override, security); 1562 - } 1563 1564 platform_set_drvdata(pdev, mc); 1565 1566 - return err; 1567 } 1568 1569 static const struct of_device_id tegra186_mc_of_match[] = { 1570 - { .compatible = "nvidia,tegra186-mc", }, 1571 { /* sentinel */ } 1572 }; 1573 MODULE_DEVICE_TABLE(of, tegra186_mc_of_match); 1574 1575 static struct platform_driver tegra186_mc_driver = { 1576 .driver = { 1577 .name = "tegra186-mc", 1578 .of_match_table = tegra186_mc_of_match, 1579 .suppress_bind_attrs = true, 1580 }, 1581 - .prevent_deferred_probe = true, 1582 .probe = tegra186_mc_probe, 1583 }; 1584 module_platform_driver(tegra186_mc_driver); 1585
··· 6 #include <linux/io.h> 7 #include <linux/module.h> 8 #include <linux/mod_devicetable.h> 9 + #include <linux/of_device.h> 10 #include <linux/platform_device.h> 11 12 + #if defined(CONFIG_ARCH_TEGRA_186_SOC) 13 #include <dt-bindings/memory/tegra186-mc.h> 14 + #endif 15 16 + #if defined(CONFIG_ARCH_TEGRA_194_SOC) 17 + #include <dt-bindings/memory/tegra194-mc.h> 18 + #endif 19 20 + struct tegra186_mc_client { 21 const char *name; 22 unsigned int sid; 23 struct { ··· 24 } regs; 25 }; 26 27 + struct tegra186_mc_soc { 28 + const struct tegra186_mc_client *clients; 29 + unsigned int num_clients; 30 + }; 31 + 32 + struct tegra186_mc { 33 + struct device *dev; 34 + void __iomem *regs; 35 + 36 + const struct tegra186_mc_soc *soc; 37 + }; 38 + 39 + static void tegra186_mc_program_sid(struct tegra186_mc *mc) 40 + { 41 + unsigned int i; 42 + 43 + for (i = 0; i < mc->soc->num_clients; i++) { 44 + const struct tegra186_mc_client *client = &mc->soc->clients[i]; 45 + u32 override, security; 46 + 47 + override = readl(mc->regs + client->regs.override); 48 + security = readl(mc->regs + client->regs.security); 49 + 50 + dev_dbg(mc->dev, "client %s: override: %x security: %x\n", 51 + client->name, override, security); 52 + 53 + dev_dbg(mc->dev, "setting SID %u for %s\n", client->sid, 54 + client->name); 55 + writel(client->sid, mc->regs + client->regs.override); 56 + 57 + override = readl(mc->regs + client->regs.override); 58 + security = readl(mc->regs + client->regs.security); 59 + 60 + dev_dbg(mc->dev, "client %s: override: %x security: %x\n", 61 + client->name, override, security); 62 + } 63 + } 64 + 65 + #if defined(CONFIG_ARCH_TEGRA_186_SOC) 66 + static const struct tegra186_mc_client tegra186_mc_clients[] = { 67 { 68 .name = "ptcr", 69 .sid = TEGRA186_SID_PASSTHROUGH, ··· 532 }, 533 }; 534 535 + static const struct tegra186_mc_soc tegra186_mc_soc = { 536 + .num_clients = ARRAY_SIZE(tegra186_mc_clients), 537 + .clients = tegra186_mc_clients, 538 + }; 539 + #endif 540 + 541 + #if defined(CONFIG_ARCH_TEGRA_194_SOC) 542 + static const struct tegra186_mc_client tegra194_mc_clients[] = { 543 + { 544 + .name = "ptcr", 545 + .sid = TEGRA194_SID_PASSTHROUGH, 546 + .regs = { 547 + .override = 0x000, 548 + .security = 0x004, 549 + }, 550 + }, { 551 + .name = "miu7r", 552 + .sid = TEGRA194_SID_MIU, 553 + .regs = { 554 + .override = 0x008, 555 + .security = 0x00c, 556 + }, 557 + }, { 558 + .name = "miu7w", 559 + .sid = TEGRA194_SID_MIU, 560 + .regs = { 561 + .override = 0x010, 562 + .security = 0x014, 563 + }, 564 + }, { 565 + .name = "hdar", 566 + .sid = TEGRA194_SID_HDA, 567 + .regs = { 568 + .override = 0x0a8, 569 + .security = 0x0ac, 570 + }, 571 + }, { 572 + .name = "host1xdmar", 573 + .sid = TEGRA194_SID_HOST1X, 574 + .regs = { 575 + .override = 0x0b0, 576 + .security = 0x0b4, 577 + }, 578 + }, { 579 + .name = "nvencsrd", 580 + .sid = TEGRA194_SID_NVENC, 581 + .regs = { 582 + .override = 0x0e0, 583 + .security = 0x0e4, 584 + }, 585 + }, { 586 + .name = "satar", 587 + .sid = TEGRA194_SID_SATA, 588 + .regs = { 589 + .override = 0x0f8, 590 + .security = 0x0fc, 591 + }, 592 + }, { 593 + .name = "mpcorer", 594 + .sid = TEGRA194_SID_PASSTHROUGH, 595 + .regs = { 596 + .override = 0x138, 597 + .security = 0x13c, 598 + }, 599 + }, { 600 + .name = "nvencswr", 601 + .sid = TEGRA194_SID_NVENC, 602 + .regs = { 603 + .override = 0x158, 604 + .security = 0x15c, 605 + }, 606 + }, { 607 + .name = "hdaw", 608 + .sid = TEGRA194_SID_HDA, 609 + .regs = { 610 + .override = 0x1a8, 611 + .security = 0x1ac, 612 + }, 613 + }, { 614 
+ .name = "mpcorew", 615 + .sid = TEGRA194_SID_PASSTHROUGH, 616 + .regs = { 617 + .override = 0x1c8, 618 + .security = 0x1cc, 619 + }, 620 + }, { 621 + .name = "sataw", 622 + .sid = TEGRA194_SID_SATA, 623 + .regs = { 624 + .override = 0x1e8, 625 + .security = 0x1ec, 626 + }, 627 + }, { 628 + .name = "ispra", 629 + .sid = TEGRA194_SID_ISP, 630 + .regs = { 631 + .override = 0x220, 632 + .security = 0x224, 633 + }, 634 + }, { 635 + .name = "ispfalr", 636 + .sid = TEGRA194_SID_ISP_FALCON, 637 + .regs = { 638 + .override = 0x228, 639 + .security = 0x22c, 640 + }, 641 + }, { 642 + .name = "ispwa", 643 + .sid = TEGRA194_SID_ISP, 644 + .regs = { 645 + .override = 0x230, 646 + .security = 0x234, 647 + }, 648 + }, { 649 + .name = "ispwb", 650 + .sid = TEGRA194_SID_ISP, 651 + .regs = { 652 + .override = 0x238, 653 + .security = 0x23c, 654 + }, 655 + }, { 656 + .name = "xusb_hostr", 657 + .sid = TEGRA194_SID_XUSB_HOST, 658 + .regs = { 659 + .override = 0x250, 660 + .security = 0x254, 661 + }, 662 + }, { 663 + .name = "xusb_hostw", 664 + .sid = TEGRA194_SID_XUSB_HOST, 665 + .regs = { 666 + .override = 0x258, 667 + .security = 0x25c, 668 + }, 669 + }, { 670 + .name = "xusb_devr", 671 + .sid = TEGRA194_SID_XUSB_DEV, 672 + .regs = { 673 + .override = 0x260, 674 + .security = 0x264, 675 + }, 676 + }, { 677 + .name = "xusb_devw", 678 + .sid = TEGRA194_SID_XUSB_DEV, 679 + .regs = { 680 + .override = 0x268, 681 + .security = 0x26c, 682 + }, 683 + }, { 684 + .name = "sdmmcra", 685 + .sid = TEGRA194_SID_SDMMC1, 686 + .regs = { 687 + .override = 0x300, 688 + .security = 0x304, 689 + }, 690 + }, { 691 + .name = "sdmmcr", 692 + .sid = TEGRA194_SID_SDMMC3, 693 + .regs = { 694 + .override = 0x310, 695 + .security = 0x314, 696 + }, 697 + }, { 698 + .name = "sdmmcrab", 699 + .sid = TEGRA194_SID_SDMMC4, 700 + .regs = { 701 + .override = 0x318, 702 + .security = 0x31c, 703 + }, 704 + }, { 705 + .name = "sdmmcwa", 706 + .sid = TEGRA194_SID_SDMMC1, 707 + .regs = { 708 + .override = 0x320, 709 + .security = 0x324, 710 + }, 711 + }, { 712 + .name = "sdmmcw", 713 + .sid = TEGRA194_SID_SDMMC3, 714 + .regs = { 715 + .override = 0x330, 716 + .security = 0x334, 717 + }, 718 + }, { 719 + .name = "sdmmcwab", 720 + .sid = TEGRA194_SID_SDMMC4, 721 + .regs = { 722 + .override = 0x338, 723 + .security = 0x33c, 724 + }, 725 + }, { 726 + .name = "vicsrd", 727 + .sid = TEGRA194_SID_VIC, 728 + .regs = { 729 + .override = 0x360, 730 + .security = 0x364, 731 + }, 732 + }, { 733 + .name = "vicswr", 734 + .sid = TEGRA194_SID_VIC, 735 + .regs = { 736 + .override = 0x368, 737 + .security = 0x36c, 738 + }, 739 + }, { 740 + .name = "viw", 741 + .sid = TEGRA194_SID_VI, 742 + .regs = { 743 + .override = 0x390, 744 + .security = 0x394, 745 + }, 746 + }, { 747 + .name = "nvdecsrd", 748 + .sid = TEGRA194_SID_NVDEC, 749 + .regs = { 750 + .override = 0x3c0, 751 + .security = 0x3c4, 752 + }, 753 + }, { 754 + .name = "nvdecswr", 755 + .sid = TEGRA194_SID_NVDEC, 756 + .regs = { 757 + .override = 0x3c8, 758 + .security = 0x3cc, 759 + }, 760 + }, { 761 + .name = "aper", 762 + .sid = TEGRA194_SID_APE, 763 + .regs = { 764 + .override = 0x3c0, 765 + .security = 0x3c4, 766 + }, 767 + }, { 768 + .name = "apew", 769 + .sid = TEGRA194_SID_APE, 770 + .regs = { 771 + .override = 0x3d0, 772 + .security = 0x3d4, 773 + }, 774 + }, { 775 + .name = "nvjpgsrd", 776 + .sid = TEGRA194_SID_NVJPG, 777 + .regs = { 778 + .override = 0x3f0, 779 + .security = 0x3f4, 780 + }, 781 + }, { 782 + .name = "nvjpgswr", 783 + .sid = TEGRA194_SID_NVJPG, 784 + .regs = { 785 + .override = 
0x3f0, 786 + .security = 0x3f4, 787 + }, 788 + }, { 789 + .name = "axiapr", 790 + .sid = TEGRA194_SID_PASSTHROUGH, 791 + .regs = { 792 + .override = 0x410, 793 + .security = 0x414, 794 + }, 795 + }, { 796 + .name = "axiapw", 797 + .sid = TEGRA194_SID_PASSTHROUGH, 798 + .regs = { 799 + .override = 0x418, 800 + .security = 0x41c, 801 + }, 802 + }, { 803 + .name = "etrr", 804 + .sid = TEGRA194_SID_ETR, 805 + .regs = { 806 + .override = 0x420, 807 + .security = 0x424, 808 + }, 809 + }, { 810 + .name = "etrw", 811 + .sid = TEGRA194_SID_ETR, 812 + .regs = { 813 + .override = 0x428, 814 + .security = 0x42c, 815 + }, 816 + }, { 817 + .name = "axisr", 818 + .sid = TEGRA194_SID_PASSTHROUGH, 819 + .regs = { 820 + .override = 0x460, 821 + .security = 0x464, 822 + }, 823 + }, { 824 + .name = "axisw", 825 + .sid = TEGRA194_SID_PASSTHROUGH, 826 + .regs = { 827 + .override = 0x468, 828 + .security = 0x46c, 829 + }, 830 + }, { 831 + .name = "eqosr", 832 + .sid = TEGRA194_SID_EQOS, 833 + .regs = { 834 + .override = 0x470, 835 + .security = 0x474, 836 + }, 837 + }, { 838 + .name = "eqosw", 839 + .sid = TEGRA194_SID_EQOS, 840 + .regs = { 841 + .override = 0x478, 842 + .security = 0x47c, 843 + }, 844 + }, { 845 + .name = "ufshcr", 846 + .sid = TEGRA194_SID_UFSHC, 847 + .regs = { 848 + .override = 0x480, 849 + .security = 0x484, 850 + }, 851 + }, { 852 + .name = "ufshcw", 853 + .sid = TEGRA194_SID_UFSHC, 854 + .regs = { 855 + .override = 0x488, 856 + .security = 0x48c, 857 + }, 858 + }, { 859 + .name = "nvdisplayr", 860 + .sid = TEGRA194_SID_NVDISPLAY, 861 + .regs = { 862 + .override = 0x490, 863 + .security = 0x494, 864 + }, 865 + }, { 866 + .name = "bpmpr", 867 + .sid = TEGRA194_SID_BPMP, 868 + .regs = { 869 + .override = 0x498, 870 + .security = 0x49c, 871 + }, 872 + }, { 873 + .name = "bpmpw", 874 + .sid = TEGRA194_SID_BPMP, 875 + .regs = { 876 + .override = 0x4a0, 877 + .security = 0x4a4, 878 + }, 879 + }, { 880 + .name = "bpmpdmar", 881 + .sid = TEGRA194_SID_BPMP, 882 + .regs = { 883 + .override = 0x4a8, 884 + .security = 0x4ac, 885 + }, 886 + }, { 887 + .name = "bpmpdmaw", 888 + .sid = TEGRA194_SID_BPMP, 889 + .regs = { 890 + .override = 0x4b0, 891 + .security = 0x4b4, 892 + }, 893 + }, { 894 + .name = "aonr", 895 + .sid = TEGRA194_SID_AON, 896 + .regs = { 897 + .override = 0x4b8, 898 + .security = 0x4bc, 899 + }, 900 + }, { 901 + .name = "aonw", 902 + .sid = TEGRA194_SID_AON, 903 + .regs = { 904 + .override = 0x4c0, 905 + .security = 0x4c4, 906 + }, 907 + }, { 908 + .name = "aondmar", 909 + .sid = TEGRA194_SID_AON, 910 + .regs = { 911 + .override = 0x4c8, 912 + .security = 0x4cc, 913 + }, 914 + }, { 915 + .name = "aondmaw", 916 + .sid = TEGRA194_SID_AON, 917 + .regs = { 918 + .override = 0x4d0, 919 + .security = 0x4d4, 920 + }, 921 + }, { 922 + .name = "scer", 923 + .sid = TEGRA194_SID_SCE, 924 + .regs = { 925 + .override = 0x4d8, 926 + .security = 0x4dc, 927 + }, 928 + }, { 929 + .name = "scew", 930 + .sid = TEGRA194_SID_SCE, 931 + .regs = { 932 + .override = 0x4e0, 933 + .security = 0x4e4, 934 + }, 935 + }, { 936 + .name = "scedmar", 937 + .sid = TEGRA194_SID_SCE, 938 + .regs = { 939 + .override = 0x4e8, 940 + .security = 0x4ec, 941 + }, 942 + }, { 943 + .name = "scedmaw", 944 + .sid = TEGRA194_SID_SCE, 945 + .regs = { 946 + .override = 0x4f0, 947 + .security = 0x4f4, 948 + }, 949 + }, { 950 + .name = "apedmar", 951 + .sid = TEGRA194_SID_APE, 952 + .regs = { 953 + .override = 0x4f8, 954 + .security = 0x4fc, 955 + }, 956 + }, { 957 + .name = "apedmaw", 958 + .sid = TEGRA194_SID_APE, 959 + .regs = { 960 
+ .override = 0x500, 961 + .security = 0x504, 962 + }, 963 + }, { 964 + .name = "nvdisplayr1", 965 + .sid = TEGRA194_SID_NVDISPLAY, 966 + .regs = { 967 + .override = 0x508, 968 + .security = 0x50c, 969 + }, 970 + }, { 971 + .name = "vicsrd1", 972 + .sid = TEGRA194_SID_VIC, 973 + .regs = { 974 + .override = 0x510, 975 + .security = 0x514, 976 + }, 977 + }, { 978 + .name = "nvdecsrd1", 979 + .sid = TEGRA194_SID_NVDEC, 980 + .regs = { 981 + .override = 0x518, 982 + .security = 0x51c, 983 + }, 984 + }, { 985 + .name = "miu0r", 986 + .sid = TEGRA194_SID_MIU, 987 + .regs = { 988 + .override = 0x530, 989 + .security = 0x534, 990 + }, 991 + }, { 992 + .name = "miu0w", 993 + .sid = TEGRA194_SID_MIU, 994 + .regs = { 995 + .override = 0x538, 996 + .security = 0x53c, 997 + }, 998 + }, { 999 + .name = "miu1r", 1000 + .sid = TEGRA194_SID_MIU, 1001 + .regs = { 1002 + .override = 0x540, 1003 + .security = 0x544, 1004 + }, 1005 + }, { 1006 + .name = "miu1w", 1007 + .sid = TEGRA194_SID_MIU, 1008 + .regs = { 1009 + .override = 0x548, 1010 + .security = 0x54c, 1011 + }, 1012 + }, { 1013 + .name = "miu2r", 1014 + .sid = TEGRA194_SID_MIU, 1015 + .regs = { 1016 + .override = 0x570, 1017 + .security = 0x574, 1018 + }, 1019 + }, { 1020 + .name = "miu2w", 1021 + .sid = TEGRA194_SID_MIU, 1022 + .regs = { 1023 + .override = 0x578, 1024 + .security = 0x57c, 1025 + }, 1026 + }, { 1027 + .name = "miu3r", 1028 + .sid = TEGRA194_SID_MIU, 1029 + .regs = { 1030 + .override = 0x580, 1031 + .security = 0x584, 1032 + }, 1033 + }, { 1034 + .name = "miu3w", 1035 + .sid = TEGRA194_SID_MIU, 1036 + .regs = { 1037 + .override = 0x588, 1038 + .security = 0x58c, 1039 + }, 1040 + }, { 1041 + .name = "miu4r", 1042 + .sid = TEGRA194_SID_MIU, 1043 + .regs = { 1044 + .override = 0x590, 1045 + .security = 0x594, 1046 + }, 1047 + }, { 1048 + .name = "miu4w", 1049 + .sid = TEGRA194_SID_MIU, 1050 + .regs = { 1051 + .override = 0x598, 1052 + .security = 0x59c, 1053 + }, 1054 + }, { 1055 + .name = "dpmur", 1056 + .sid = TEGRA194_SID_PASSTHROUGH, 1057 + .regs = { 1058 + .override = 0x598, 1059 + .security = 0x59c, 1060 + }, 1061 + }, { 1062 + .name = "vifalr", 1063 + .sid = TEGRA194_SID_VI_FALCON, 1064 + .regs = { 1065 + .override = 0x5e0, 1066 + .security = 0x5e4, 1067 + }, 1068 + }, { 1069 + .name = "vifalw", 1070 + .sid = TEGRA194_SID_VI_FALCON, 1071 + .regs = { 1072 + .override = 0x5e8, 1073 + .security = 0x5ec, 1074 + }, 1075 + }, { 1076 + .name = "dla0rda", 1077 + .sid = TEGRA194_SID_NVDLA0, 1078 + .regs = { 1079 + .override = 0x5f0, 1080 + .security = 0x5f4, 1081 + }, 1082 + }, { 1083 + .name = "dla0falrdb", 1084 + .sid = TEGRA194_SID_NVDLA0, 1085 + .regs = { 1086 + .override = 0x5f8, 1087 + .security = 0x5fc, 1088 + }, 1089 + }, { 1090 + .name = "dla0wra", 1091 + .sid = TEGRA194_SID_NVDLA0, 1092 + .regs = { 1093 + .override = 0x600, 1094 + .security = 0x604, 1095 + }, 1096 + }, { 1097 + .name = "dla0falwrb", 1098 + .sid = TEGRA194_SID_NVDLA0, 1099 + .regs = { 1100 + .override = 0x608, 1101 + .security = 0x60c, 1102 + }, 1103 + }, { 1104 + .name = "dla1rda", 1105 + .sid = TEGRA194_SID_NVDLA1, 1106 + .regs = { 1107 + .override = 0x610, 1108 + .security = 0x614, 1109 + }, 1110 + }, { 1111 + .name = "dla1falrdb", 1112 + .sid = TEGRA194_SID_NVDLA1, 1113 + .regs = { 1114 + .override = 0x618, 1115 + .security = 0x61c, 1116 + }, 1117 + }, { 1118 + .name = "dla1wra", 1119 + .sid = TEGRA194_SID_NVDLA1, 1120 + .regs = { 1121 + .override = 0x620, 1122 + .security = 0x624, 1123 + }, 1124 + }, { 1125 + .name = "dla1falwrb", 1126 + .sid = 
TEGRA194_SID_NVDLA1, 1127 + .regs = { 1128 + .override = 0x628, 1129 + .security = 0x62c, 1130 + }, 1131 + }, { 1132 + .name = "pva0rda", 1133 + .sid = TEGRA194_SID_PVA0, 1134 + .regs = { 1135 + .override = 0x630, 1136 + .security = 0x634, 1137 + }, 1138 + }, { 1139 + .name = "pva0rdb", 1140 + .sid = TEGRA194_SID_PVA0, 1141 + .regs = { 1142 + .override = 0x638, 1143 + .security = 0x63c, 1144 + }, 1145 + }, { 1146 + .name = "pva0rdc", 1147 + .sid = TEGRA194_SID_PVA0, 1148 + .regs = { 1149 + .override = 0x640, 1150 + .security = 0x644, 1151 + }, 1152 + }, { 1153 + .name = "pva0wra", 1154 + .sid = TEGRA194_SID_PVA0, 1155 + .regs = { 1156 + .override = 0x648, 1157 + .security = 0x64c, 1158 + }, 1159 + }, { 1160 + .name = "pva0wrb", 1161 + .sid = TEGRA194_SID_PVA0, 1162 + .regs = { 1163 + .override = 0x650, 1164 + .security = 0x654, 1165 + }, 1166 + }, { 1167 + .name = "pva0wrc", 1168 + .sid = TEGRA194_SID_PVA0, 1169 + .regs = { 1170 + .override = 0x658, 1171 + .security = 0x65c, 1172 + }, 1173 + }, { 1174 + .name = "pva1rda", 1175 + .sid = TEGRA194_SID_PVA1, 1176 + .regs = { 1177 + .override = 0x660, 1178 + .security = 0x664, 1179 + }, 1180 + }, { 1181 + .name = "pva1rdb", 1182 + .sid = TEGRA194_SID_PVA1, 1183 + .regs = { 1184 + .override = 0x668, 1185 + .security = 0x66c, 1186 + }, 1187 + }, { 1188 + .name = "pva1rdc", 1189 + .sid = TEGRA194_SID_PVA1, 1190 + .regs = { 1191 + .override = 0x670, 1192 + .security = 0x674, 1193 + }, 1194 + }, { 1195 + .name = "pva1wra", 1196 + .sid = TEGRA194_SID_PVA1, 1197 + .regs = { 1198 + .override = 0x678, 1199 + .security = 0x67c, 1200 + }, 1201 + }, { 1202 + .name = "pva1wrb", 1203 + .sid = TEGRA194_SID_PVA1, 1204 + .regs = { 1205 + .override = 0x680, 1206 + .security = 0x684, 1207 + }, 1208 + }, { 1209 + .name = "pva1wrc", 1210 + .sid = TEGRA194_SID_PVA1, 1211 + .regs = { 1212 + .override = 0x688, 1213 + .security = 0x68c, 1214 + }, 1215 + }, { 1216 + .name = "rcer", 1217 + .sid = TEGRA194_SID_RCE, 1218 + .regs = { 1219 + .override = 0x690, 1220 + .security = 0x694, 1221 + }, 1222 + }, { 1223 + .name = "rcew", 1224 + .sid = TEGRA194_SID_RCE, 1225 + .regs = { 1226 + .override = 0x698, 1227 + .security = 0x69c, 1228 + }, 1229 + }, { 1230 + .name = "rcedmar", 1231 + .sid = TEGRA194_SID_RCE, 1232 + .regs = { 1233 + .override = 0x6a0, 1234 + .security = 0x6a4, 1235 + }, 1236 + }, { 1237 + .name = "rcedmaw", 1238 + .sid = TEGRA194_SID_RCE, 1239 + .regs = { 1240 + .override = 0x6a8, 1241 + .security = 0x6ac, 1242 + }, 1243 + }, { 1244 + .name = "nvenc1srd", 1245 + .sid = TEGRA194_SID_NVENC1, 1246 + .regs = { 1247 + .override = 0x6b0, 1248 + .security = 0x6b4, 1249 + }, 1250 + }, { 1251 + .name = "nvenc1swr", 1252 + .sid = TEGRA194_SID_NVENC1, 1253 + .regs = { 1254 + .override = 0x6b8, 1255 + .security = 0x6bc, 1256 + }, 1257 + }, { 1258 + .name = "pcie0r", 1259 + .sid = TEGRA194_SID_PCIE0, 1260 + .regs = { 1261 + .override = 0x6c0, 1262 + .security = 0x6c4, 1263 + }, 1264 + }, { 1265 + .name = "pcie0w", 1266 + .sid = TEGRA194_SID_PCIE0, 1267 + .regs = { 1268 + .override = 0x6c8, 1269 + .security = 0x6cc, 1270 + }, 1271 + }, { 1272 + .name = "pcie1r", 1273 + .sid = TEGRA194_SID_PCIE1, 1274 + .regs = { 1275 + .override = 0x6d0, 1276 + .security = 0x6d4, 1277 + }, 1278 + }, { 1279 + .name = "pcie1w", 1280 + .sid = TEGRA194_SID_PCIE1, 1281 + .regs = { 1282 + .override = 0x6d8, 1283 + .security = 0x6dc, 1284 + }, 1285 + }, { 1286 + .name = "pcie2ar", 1287 + .sid = TEGRA194_SID_PCIE2, 1288 + .regs = { 1289 + .override = 0x6e0, 1290 + .security = 0x6e4, 1291 + }, 1292 
+ }, { 1293 + .name = "pcie2aw", 1294 + .sid = TEGRA194_SID_PCIE2, 1295 + .regs = { 1296 + .override = 0x6e8, 1297 + .security = 0x6ec, 1298 + }, 1299 + }, { 1300 + .name = "pcie3r", 1301 + .sid = TEGRA194_SID_PCIE3, 1302 + .regs = { 1303 + .override = 0x6f0, 1304 + .security = 0x6f4, 1305 + }, 1306 + }, { 1307 + .name = "pcie3w", 1308 + .sid = TEGRA194_SID_PCIE3, 1309 + .regs = { 1310 + .override = 0x6f8, 1311 + .security = 0x6fc, 1312 + }, 1313 + }, { 1314 + .name = "pcie4r", 1315 + .sid = TEGRA194_SID_PCIE4, 1316 + .regs = { 1317 + .override = 0x700, 1318 + .security = 0x704, 1319 + }, 1320 + }, { 1321 + .name = "pcie4w", 1322 + .sid = TEGRA194_SID_PCIE4, 1323 + .regs = { 1324 + .override = 0x708, 1325 + .security = 0x70c, 1326 + }, 1327 + }, { 1328 + .name = "pcie5r", 1329 + .sid = TEGRA194_SID_PCIE5, 1330 + .regs = { 1331 + .override = 0x710, 1332 + .security = 0x714, 1333 + }, 1334 + }, { 1335 + .name = "pcie5w", 1336 + .sid = TEGRA194_SID_PCIE5, 1337 + .regs = { 1338 + .override = 0x718, 1339 + .security = 0x71c, 1340 + }, 1341 + }, { 1342 + .name = "ispfalw", 1343 + .sid = TEGRA194_SID_ISP_FALCON, 1344 + .regs = { 1345 + .override = 0x720, 1346 + .security = 0x724, 1347 + }, 1348 + }, { 1349 + .name = "dla0rda1", 1350 + .sid = TEGRA194_SID_NVDLA0, 1351 + .regs = { 1352 + .override = 0x748, 1353 + .security = 0x74c, 1354 + }, 1355 + }, { 1356 + .name = "dla1rda1", 1357 + .sid = TEGRA194_SID_NVDLA1, 1358 + .regs = { 1359 + .override = 0x750, 1360 + .security = 0x754, 1361 + }, 1362 + }, { 1363 + .name = "pva0rda1", 1364 + .sid = TEGRA194_SID_PVA0, 1365 + .regs = { 1366 + .override = 0x758, 1367 + .security = 0x75c, 1368 + }, 1369 + }, { 1370 + .name = "pva0rdb1", 1371 + .sid = TEGRA194_SID_PVA0, 1372 + .regs = { 1373 + .override = 0x760, 1374 + .security = 0x764, 1375 + }, 1376 + }, { 1377 + .name = "pva1rda1", 1378 + .sid = TEGRA194_SID_PVA1, 1379 + .regs = { 1380 + .override = 0x768, 1381 + .security = 0x76c, 1382 + }, 1383 + }, { 1384 + .name = "pva1rdb1", 1385 + .sid = TEGRA194_SID_PVA1, 1386 + .regs = { 1387 + .override = 0x770, 1388 + .security = 0x774, 1389 + }, 1390 + }, { 1391 + .name = "pcie5r1", 1392 + .sid = TEGRA194_SID_PCIE5, 1393 + .regs = { 1394 + .override = 0x778, 1395 + .security = 0x77c, 1396 + }, 1397 + }, { 1398 + .name = "nvencsrd1", 1399 + .sid = TEGRA194_SID_NVENC, 1400 + .regs = { 1401 + .override = 0x780, 1402 + .security = 0x784, 1403 + }, 1404 + }, { 1405 + .name = "nvenc1srd1", 1406 + .sid = TEGRA194_SID_NVENC1, 1407 + .regs = { 1408 + .override = 0x788, 1409 + .security = 0x78c, 1410 + }, 1411 + }, { 1412 + .name = "ispra1", 1413 + .sid = TEGRA194_SID_ISP, 1414 + .regs = { 1415 + .override = 0x790, 1416 + .security = 0x794, 1417 + }, 1418 + }, { 1419 + .name = "pcie0r1", 1420 + .sid = TEGRA194_SID_PCIE0, 1421 + .regs = { 1422 + .override = 0x798, 1423 + .security = 0x79c, 1424 + }, 1425 + }, { 1426 + .name = "nvdec1srd", 1427 + .sid = TEGRA194_SID_NVDEC1, 1428 + .regs = { 1429 + .override = 0x7c8, 1430 + .security = 0x7cc, 1431 + }, 1432 + }, { 1433 + .name = "nvdec1srd1", 1434 + .sid = TEGRA194_SID_NVDEC1, 1435 + .regs = { 1436 + .override = 0x7d0, 1437 + .security = 0x7d4, 1438 + }, 1439 + }, { 1440 + .name = "nvdec1swr", 1441 + .sid = TEGRA194_SID_NVDEC1, 1442 + .regs = { 1443 + .override = 0x7d8, 1444 + .security = 0x7dc, 1445 + }, 1446 + }, { 1447 + .name = "miu5r", 1448 + .sid = TEGRA194_SID_MIU, 1449 + .regs = { 1450 + .override = 0x7e0, 1451 + .security = 0x7e4, 1452 + }, 1453 + }, { 1454 + .name = "miu5w", 1455 + .sid = TEGRA194_SID_MIU, 1456 + 
.regs = { 1457 + .override = 0x7e8, 1458 + .security = 0x7ec, 1459 + }, 1460 + }, { 1461 + .name = "miu6r", 1462 + .sid = TEGRA194_SID_MIU, 1463 + .regs = { 1464 + .override = 0x7f0, 1465 + .security = 0x7f4, 1466 + }, 1467 + }, { 1468 + .name = "miu6w", 1469 + .sid = TEGRA194_SID_MIU, 1470 + .regs = { 1471 + .override = 0x7f8, 1472 + .security = 0x7fc, 1473 + }, 1474 + }, 1475 + }; 1476 + 1477 + static const struct tegra186_mc_soc tegra194_mc_soc = { 1478 + .num_clients = ARRAY_SIZE(tegra194_mc_clients), 1479 + .clients = tegra194_mc_clients, 1480 + }; 1481 + #endif 1482 + 1483 static int tegra186_mc_probe(struct platform_device *pdev) 1484 { 1485 + struct tegra186_mc *mc; 1486 struct resource *res; 1487 + int err; 1488 1489 mc = devm_kzalloc(&pdev->dev, sizeof(*mc), GFP_KERNEL); 1490 if (!mc) 1491 return -ENOMEM; 1492 + 1493 + mc->soc = of_device_get_match_data(&pdev->dev); 1494 1495 res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 1496 mc->regs = devm_ioremap_resource(&pdev->dev, res); ··· 550 551 mc->dev = &pdev->dev; 552 553 + err = of_platform_populate(pdev->dev.of_node, NULL, NULL, &pdev->dev); 554 + if (err < 0) 555 + return err; 556 557 platform_set_drvdata(pdev, mc); 558 + tegra186_mc_program_sid(mc); 559 560 + return 0; 561 + } 562 + 563 + static int tegra186_mc_remove(struct platform_device *pdev) 564 + { 565 + struct tegra186_mc *mc = platform_get_drvdata(pdev); 566 + 567 + of_platform_depopulate(mc->dev); 568 + 569 + return 0; 570 } 571 572 static const struct of_device_id tegra186_mc_of_match[] = { 573 + #if defined(CONFIG_ARCH_TEGRA_186_SOC) 574 + { .compatible = "nvidia,tegra186-mc", .data = &tegra186_mc_soc }, 575 + #endif 576 + #if defined(CONFIG_ARCH_TEGRA_194_SOC) 577 + { .compatible = "nvidia,tegra194-mc", .data = &tegra194_mc_soc }, 578 + #endif 579 { /* sentinel */ } 580 }; 581 MODULE_DEVICE_TABLE(of, tegra186_mc_of_match); 582 + 583 + static int tegra186_mc_suspend(struct device *dev) 584 + { 585 + return 0; 586 + } 587 + 588 + static int tegra186_mc_resume(struct device *dev) 589 + { 590 + struct tegra186_mc *mc = dev_get_drvdata(dev); 591 + 592 + tegra186_mc_program_sid(mc); 593 + 594 + return 0; 595 + } 596 + 597 + static const struct dev_pm_ops tegra186_mc_pm_ops = { 598 + SET_SYSTEM_SLEEP_PM_OPS(tegra186_mc_suspend, tegra186_mc_resume) 599 + }; 600 601 static struct platform_driver tegra186_mc_driver = { 602 .driver = { 603 .name = "tegra186-mc", 604 .of_match_table = tegra186_mc_of_match, 605 + .pm = &tegra186_mc_pm_ops, 606 .suppress_bind_attrs = true, 607 }, 608 .probe = tegra186_mc_probe, 609 + .remove = tegra186_mc_remove, 610 }; 611 module_platform_driver(tegra186_mc_driver); 612
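Each memory client owns an (override, security) register pair, and tegra186_mc_program_sid() writes the table's SID into every override register, both at probe and again from the resume hook, since the programming evidently does not survive system sleep. One loop iteration, worked through for the Tegra194 "sdmmcra" entry above:

	/* Worked example for a single table entry: client "sdmmcra" has its
	 * override register at offset 0x300 and carries TEGRA194_SID_SDMMC1. */
	writel(TEGRA194_SID_SDMMC1, mc->regs + 0x300);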
+175
drivers/memory/tegra/tegra20-emc.c
··· 8 #include <linux/clk.h> 9 #include <linux/clk/tegra.h> 10 #include <linux/completion.h> 11 #include <linux/err.h> 12 #include <linux/interrupt.h> 13 #include <linux/io.h> ··· 151 152 struct emc_timing *timings; 153 unsigned int num_timings; 154 }; 155 156 static irqreturn_t tegra_emc_isr(int irq, void *data) ··· 485 return timing->rate; 486 } 487 488 static int tegra_emc_probe(struct platform_device *pdev) 489 { 490 struct device_node *np; ··· 721 err); 722 goto unset_cb; 723 } 724 725 return 0; 726
··· 8 #include <linux/clk.h> 9 #include <linux/clk/tegra.h> 10 #include <linux/completion.h> 11 + #include <linux/debugfs.h> 12 #include <linux/err.h> 13 #include <linux/interrupt.h> 14 #include <linux/io.h> ··· 150 151 struct emc_timing *timings; 152 unsigned int num_timings; 153 + 154 + struct { 155 + struct dentry *root; 156 + unsigned long min_rate; 157 + unsigned long max_rate; 158 + } debugfs; 159 }; 160 161 static irqreturn_t tegra_emc_isr(int irq, void *data) ··· 478 return timing->rate; 479 } 480 481 + /* 482 + * debugfs interface 483 + * 484 + * The memory controller driver exposes some files in debugfs that can be used 485 + * to control the EMC frequency. The top-level directory can be found here: 486 + * 487 + * /sys/kernel/debug/emc 488 + * 489 + * It contains the following files: 490 + * 491 + * - available_rates: This file contains a list of valid, space-separated 492 + * EMC frequencies. 493 + * 494 + * - min_rate: Writing a value to this file sets the given frequency as the 495 + * floor of the permitted range. If this is higher than the currently 496 + * configured EMC frequency, this will cause the frequency to be 497 + * increased so that it stays within the valid range. 498 + * 499 + * - max_rate: Similarly to the min_rate file, writing a value to this file 500 + * sets the given frequency as the ceiling of the permitted range. If 501 + * the value is lower than the currently configured EMC frequency, this 502 + * will cause the frequency to be decreased so that it stays within the 503 + * valid range. 504 + */ 505 + 506 + static bool tegra_emc_validate_rate(struct tegra_emc *emc, unsigned long rate) 507 + { 508 + unsigned int i; 509 + 510 + for (i = 0; i < emc->num_timings; i++) 511 + if (rate == emc->timings[i].rate) 512 + return true; 513 + 514 + return false; 515 + } 516 + 517 + static int tegra_emc_debug_available_rates_show(struct seq_file *s, void *data) 518 + { 519 + struct tegra_emc *emc = s->private; 520 + const char *prefix = ""; 521 + unsigned int i; 522 + 523 + for (i = 0; i < emc->num_timings; i++) { 524 + seq_printf(s, "%s%lu", prefix, emc->timings[i].rate); 525 + prefix = " "; 526 + } 527 + 528 + seq_puts(s, "\n"); 529 + 530 + return 0; 531 + } 532 + 533 + static int tegra_emc_debug_available_rates_open(struct inode *inode, 534 + struct file *file) 535 + { 536 + return single_open(file, tegra_emc_debug_available_rates_show, 537 + inode->i_private); 538 + } 539 + 540 + static const struct file_operations tegra_emc_debug_available_rates_fops = { 541 + .open = tegra_emc_debug_available_rates_open, 542 + .read = seq_read, 543 + .llseek = seq_lseek, 544 + .release = single_release, 545 + }; 546 + 547 + static int tegra_emc_debug_min_rate_get(void *data, u64 *rate) 548 + { 549 + struct tegra_emc *emc = data; 550 + 551 + *rate = emc->debugfs.min_rate; 552 + 553 + return 0; 554 + } 555 + 556 + static int tegra_emc_debug_min_rate_set(void *data, u64 rate) 557 + { 558 + struct tegra_emc *emc = data; 559 + int err; 560 + 561 + if (!tegra_emc_validate_rate(emc, rate)) 562 + return -EINVAL; 563 + 564 + err = clk_set_min_rate(emc->clk, rate); 565 + if (err < 0) 566 + return err; 567 + 568 + emc->debugfs.min_rate = rate; 569 + 570 + return 0; 571 + } 572 + 573 + DEFINE_SIMPLE_ATTRIBUTE(tegra_emc_debug_min_rate_fops, 574 + tegra_emc_debug_min_rate_get, 575 + tegra_emc_debug_min_rate_set, "%llu\n"); 576 + 577 + static int tegra_emc_debug_max_rate_get(void *data, u64 *rate) 578 + { 579 + struct tegra_emc *emc = data; 580 + 581 + *rate = emc->debugfs.max_rate; 582 583 
+ return 0; 584 + } 585 + 586 + static int tegra_emc_debug_max_rate_set(void *data, u64 rate) 587 + { 588 + struct tegra_emc *emc = data; 589 + int err; 590 + 591 + if (!tegra_emc_validate_rate(emc, rate)) 592 + return -EINVAL; 593 + 594 + err = clk_set_max_rate(emc->clk, rate); 595 + if (err < 0) 596 + return err; 597 + 598 + emc->debugfs.max_rate = rate; 599 + 600 + return 0; 601 + } 602 + 603 + DEFINE_SIMPLE_ATTRIBUTE(tegra_emc_debug_max_rate_fops, 604 + tegra_emc_debug_max_rate_get, 605 + tegra_emc_debug_max_rate_set, "%llu\n"); 606 + 607 + static void tegra_emc_debugfs_init(struct tegra_emc *emc) 608 + { 609 + struct device *dev = emc->dev; 610 + unsigned int i; 611 + int err; 612 + 613 + emc->debugfs.min_rate = ULONG_MAX; 614 + emc->debugfs.max_rate = 0; 615 + 616 + for (i = 0; i < emc->num_timings; i++) { 617 + if (emc->timings[i].rate < emc->debugfs.min_rate) 618 + emc->debugfs.min_rate = emc->timings[i].rate; 619 + 620 + if (emc->timings[i].rate > emc->debugfs.max_rate) 621 + emc->debugfs.max_rate = emc->timings[i].rate; 622 + } 623 + 624 + err = clk_set_rate_range(emc->clk, emc->debugfs.min_rate, 625 + emc->debugfs.max_rate); 626 + if (err < 0) { 627 + dev_err(dev, "failed to set rate range [%lu-%lu] for %pC\n", 628 + emc->debugfs.min_rate, emc->debugfs.max_rate, 629 + emc->clk); 630 + } 631 + 632 + emc->debugfs.root = debugfs_create_dir("emc", NULL); 633 + if (!emc->debugfs.root) { 634 + dev_err(emc->dev, "failed to create debugfs directory\n"); 635 + return; 636 + } 637 + 638 + debugfs_create_file("available_rates", S_IRUGO, emc->debugfs.root, 639 + emc, &tegra_emc_debug_available_rates_fops); 640 + debugfs_create_file("min_rate", S_IRUGO | S_IWUSR, emc->debugfs.root, 641 + emc, &tegra_emc_debug_min_rate_fops); 642 + debugfs_create_file("max_rate", S_IRUGO | S_IWUSR, emc->debugfs.root, 643 + emc, &tegra_emc_debug_max_rate_fops); 644 + } 645 + 646 static int tegra_emc_probe(struct platform_device *pdev) 647 { 648 struct device_node *np; ··· 549 err); 550 goto unset_cb; 551 } 552 + 553 + platform_set_drvdata(pdev, emc); 554 + tegra_emc_debugfs_init(emc); 555 556 return 0; 557
+1 -1
drivers/memory/tegra/tegra210.c
··· 436 .reg = 0x37c, 437 .shift = 0, 438 .mask = 0xff, 439 - .def = 0x39, 440 }, 441 }, { 442 .id = 0x4b,
··· 436 .reg = 0x37c, 437 .shift = 0, 438 .mask = 0xff, 439 + .def = 0x7a, 440 }, 441 }, { 442 .id = 0x4b,
+277 -75
drivers/memory/tegra/tegra30-emc.c
··· 12 #include <linux/clk.h> 13 #include <linux/clk/tegra.h> 14 #include <linux/completion.h> 15 #include <linux/delay.h> 16 #include <linux/err.h> 17 #include <linux/interrupt.h> ··· 332 struct clk *clk; 333 void __iomem *regs; 334 unsigned int irq; 335 336 struct emc_timing *timings; 337 unsigned int num_timings; 338 ··· 348 bool vref_cal_toggle : 1; 349 bool zcal_long : 1; 350 bool dll_on : 1; 351 - bool prepared : 1; 352 - bool bad_state : 1; 353 }; 354 355 static irqreturn_t tegra_emc_isr(int irq, void *data) 356 { ··· 426 if (!status) 427 return IRQ_NONE; 428 429 - /* notify about EMC-CAR handshake completion */ 430 - if (status & EMC_CLKCHANGE_COMPLETE_INT) 431 - complete(&emc->clk_handshake_complete); 432 - 433 /* notify about HW problem */ 434 if (status & EMC_REFRESH_OVERFLOW_INT) 435 dev_err_ratelimited(emc->dev, ··· 433 434 /* clear interrupts */ 435 writel_relaxed(status, emc->regs + EMC_INTSTATUS); 436 437 return IRQ_HANDLED; 438 } ··· 511 } 512 513 return preset; 514 - } 515 - 516 - static int emc_seq_update_timing(struct tegra_emc *emc) 517 - { 518 - u32 val; 519 - int err; 520 - 521 - writel_relaxed(EMC_TIMING_UPDATE, emc->regs + EMC_TIMING_CONTROL); 522 - 523 - err = readl_relaxed_poll_timeout_atomic(emc->regs + EMC_STATUS, val, 524 - !(val & EMC_STATUS_TIMING_UPDATE_STALLED), 525 - 1, 200); 526 - if (err) { 527 - dev_err(emc->dev, "failed to update timing: %d\n", err); 528 - return err; 529 - } 530 - 531 - return 0; 532 } 533 534 static int emc_prepare_mc_clk_cfg(struct tegra_emc *emc, unsigned long rate) ··· 639 !(val & EMC_AUTO_CAL_STATUS_ACTIVE), 1, 300); 640 if (err) { 641 dev_err(emc->dev, 642 - "failed to disable auto-cal: %d\n", 643 - err); 644 return err; 645 } 646 ··· 676 677 writel_relaxed(val, emc->regs + EMC_MRS_WAIT_CNT); 678 } 679 - 680 - /* disable interrupt since read access is prohibited after stalling */ 681 - disable_irq(emc->irq); 682 683 /* this read also completes the writes */ 684 val = readl_relaxed(emc->regs + EMC_SEL_DPD_CTRL); ··· 792 emc->regs + EMC_ZQ_CAL); 793 } 794 795 - /* re-enable auto-refresh */ 796 - writel_relaxed(EMC_REFCTRL_ENABLE_ALL(dram_num), 797 - emc->regs + EMC_REFCTRL); 798 - 799 /* flow control marker 3 */ 800 writel_relaxed(0x1, emc->regs + EMC_UNSTALL_RW_AFTER_CLKCHANGE); 801 802 reinit_completion(&emc->clk_handshake_complete); 803 804 - /* interrupt can be re-enabled now */ 805 - enable_irq(emc->irq); 806 - 807 - emc->bad_state = false; 808 - emc->prepared = true; 809 810 return 0; 811 } ··· 811 static int emc_complete_timing_change(struct tegra_emc *emc, 812 unsigned long rate) 813 { 814 - struct emc_timing *timing = emc_find_timing(emc, rate); 815 unsigned long timeout; 816 - int ret; 817 818 timeout = wait_for_completion_timeout(&emc->clk_handshake_complete, 819 msecs_to_jiffies(100)); 820 if (timeout == 0) { 821 dev_err(emc->dev, "emc-car handshake failed\n"); 822 - emc->bad_state = true; 823 return -EIO; 824 } 825 826 - /* restore auto-calibration */ 827 - if (emc->vref_cal_toggle) 828 - writel_relaxed(timing->emc_auto_cal_interval, 829 - emc->regs + EMC_AUTO_CAL_INTERVAL); 830 831 - /* restore dynamic self-refresh */ 832 - if (timing->emc_cfg_dyn_self_ref) { 833 - emc->emc_cfg |= EMC_CFG_DYN_SREF_ENABLE; 834 - writel_relaxed(emc->emc_cfg, emc->regs + EMC_CFG); 835 - } 836 - 837 - /* set number of clocks to wait after each ZQ command */ 838 - if (emc->zcal_long) 839 - writel_relaxed(timing->emc_zcal_cnt_long, 840 - emc->regs + EMC_ZCAL_WAIT_CNT); 841 - 842 - udelay(2); 843 - /* update restored timing */ 844 - ret = 
emc_seq_update_timing(emc); 845 - if (ret) 846 - emc->bad_state = true; 847 - 848 - /* restore early ACK */ 849 - mc_writel(emc->mc, emc->mc_override, MC_EMEM_ARB_OVERRIDE); 850 - 851 - emc->prepared = false; 852 - 853 - return ret; 854 } 855 856 static int emc_unprepare_timing_change(struct tegra_emc *emc, 857 unsigned long rate) 858 { 859 - if (emc->prepared && !emc->bad_state) { 860 /* shouldn't ever happen in practice */ 861 dev_err(emc->dev, "timing configuration can't be reverted\n"); 862 emc->bad_state = true; ··· 847 848 switch (msg) { 849 case PRE_RATE_CHANGE: 850 err = emc_prepare_timing_change(emc, cnd->new_rate); 851 break; 852 853 case ABORT_RATE_CHANGE: ··· 1113 return timing->rate; 1114 } 1115 1116 static int tegra_emc_probe(struct platform_device *pdev) 1117 { 1118 struct platform_device *mc; ··· 1364 } 1365 1366 platform_set_drvdata(pdev, emc); 1367 1368 return 0; 1369 ··· 1377 static int tegra_emc_suspend(struct device *dev) 1378 { 1379 struct tegra_emc *emc = dev_get_drvdata(dev); 1380 1381 - /* 1382 - * Suspending in a bad state will hang machine. The "prepared" var 1383 - * shall be always false here unless it's a kernel bug that caused 1384 - * suspending in a wrong order. 1385 - */ 1386 - if (WARN_ON(emc->prepared) || emc->bad_state) 1387 return -EINVAL; 1388 1389 emc->bad_state = true; ··· 1401 1402 emc_setup_hw(emc); 1403 emc->bad_state = false; 1404 1405 return 0; 1406 }
··· 12 #include <linux/clk.h> 13 #include <linux/clk/tegra.h> 14 #include <linux/completion.h> 15 + #include <linux/debugfs.h> 16 #include <linux/delay.h> 17 #include <linux/err.h> 18 #include <linux/interrupt.h> ··· 331 struct clk *clk; 332 void __iomem *regs; 333 unsigned int irq; 334 + bool bad_state; 335 336 + struct emc_timing *new_timing; 337 struct emc_timing *timings; 338 unsigned int num_timings; 339 ··· 345 bool vref_cal_toggle : 1; 346 bool zcal_long : 1; 347 bool dll_on : 1; 348 + 349 + struct { 350 + struct dentry *root; 351 + unsigned long min_rate; 352 + unsigned long max_rate; 353 + } debugfs; 354 }; 355 + 356 + static int emc_seq_update_timing(struct tegra_emc *emc) 357 + { 358 + u32 val; 359 + int err; 360 + 361 + writel_relaxed(EMC_TIMING_UPDATE, emc->regs + EMC_TIMING_CONTROL); 362 + 363 + err = readl_relaxed_poll_timeout_atomic(emc->regs + EMC_STATUS, val, 364 + !(val & EMC_STATUS_TIMING_UPDATE_STALLED), 365 + 1, 200); 366 + if (err) { 367 + dev_err(emc->dev, "failed to update timing: %d\n", err); 368 + return err; 369 + } 370 + 371 + return 0; 372 + } 373 + 374 + static void emc_complete_clk_change(struct tegra_emc *emc) 375 + { 376 + struct emc_timing *timing = emc->new_timing; 377 + unsigned int dram_num; 378 + bool failed = false; 379 + int err; 380 + 381 + /* re-enable auto-refresh */ 382 + dram_num = tegra_mc_get_emem_device_count(emc->mc); 383 + writel_relaxed(EMC_REFCTRL_ENABLE_ALL(dram_num), 384 + emc->regs + EMC_REFCTRL); 385 + 386 + /* restore auto-calibration */ 387 + if (emc->vref_cal_toggle) 388 + writel_relaxed(timing->emc_auto_cal_interval, 389 + emc->regs + EMC_AUTO_CAL_INTERVAL); 390 + 391 + /* restore dynamic self-refresh */ 392 + if (timing->emc_cfg_dyn_self_ref) { 393 + emc->emc_cfg |= EMC_CFG_DYN_SREF_ENABLE; 394 + writel_relaxed(emc->emc_cfg, emc->regs + EMC_CFG); 395 + } 396 + 397 + /* set number of clocks to wait after each ZQ command */ 398 + if (emc->zcal_long) 399 + writel_relaxed(timing->emc_zcal_cnt_long, 400 + emc->regs + EMC_ZCAL_WAIT_CNT); 401 + 402 + /* wait for writes to settle */ 403 + udelay(2); 404 + 405 + /* update restored timing */ 406 + err = emc_seq_update_timing(emc); 407 + if (err) 408 + failed = true; 409 + 410 + /* restore early ACK */ 411 + mc_writel(emc->mc, emc->mc_override, MC_EMEM_ARB_OVERRIDE); 412 + 413 + WRITE_ONCE(emc->bad_state, failed); 414 + } 415 416 static irqreturn_t tegra_emc_isr(int irq, void *data) 417 { ··· 359 if (!status) 360 return IRQ_NONE; 361 362 /* notify about HW problem */ 363 if (status & EMC_REFRESH_OVERFLOW_INT) 364 dev_err_ratelimited(emc->dev, ··· 370 371 /* clear interrupts */ 372 writel_relaxed(status, emc->regs + EMC_INTSTATUS); 373 + 374 + /* notify about EMC-CAR handshake completion */ 375 + if (status & EMC_CLKCHANGE_COMPLETE_INT) { 376 + if (completion_done(&emc->clk_handshake_complete)) { 377 + dev_err_ratelimited(emc->dev, 378 + "bogus handshake interrupt\n"); 379 + return IRQ_NONE; 380 + } 381 + 382 + emc_complete_clk_change(emc); 383 + complete(&emc->clk_handshake_complete); 384 + } 385 386 return IRQ_HANDLED; 387 } ··· 436 } 437 438 return preset; 439 } 440 441 static int emc_prepare_mc_clk_cfg(struct tegra_emc *emc, unsigned long rate) ··· 582 !(val & EMC_AUTO_CAL_STATUS_ACTIVE), 1, 300); 583 if (err) { 584 dev_err(emc->dev, 585 + "auto-cal finish timeout: %d\n", err); 586 return err; 587 } 588 ··· 620 621 writel_relaxed(val, emc->regs + EMC_MRS_WAIT_CNT); 622 } 623 624 /* this read also completes the writes */ 625 val = readl_relaxed(emc->regs + EMC_SEL_DPD_CTRL); ··· 739 
emc->regs + EMC_ZQ_CAL);
740 + }
741 +
742 /* flow control marker 3 */
743 writel_relaxed(0x1, emc->regs + EMC_UNSTALL_RW_AFTER_CLKCHANGE);
744
745 + /*
746 + * Read and discard an arbitrary MC register (Note: EMC registers
747 + * can't be used) to ensure the register writes are completed.
748 + */
749 + mc_readl(emc->mc, MC_EMEM_ARB_OVERRIDE);
750 +
751 reinit_completion(&emc->clk_handshake_complete);
752
753 + emc->new_timing = timing;
754
755 return 0;
756 }
··· 760 static int emc_complete_timing_change(struct tegra_emc *emc,
761 unsigned long rate)
762 {
763 unsigned long timeout;
764
765 timeout = wait_for_completion_timeout(&emc->clk_handshake_complete,
766 msecs_to_jiffies(100));
767 if (timeout == 0) {
768 dev_err(emc->dev, "emc-car handshake failed\n");
769 return -EIO;
770 }
771
772 + if (READ_ONCE(emc->bad_state))
773 + return -EIO;
774
775 + return 0;
776 }
777
778 static int emc_unprepare_timing_change(struct tegra_emc *emc,
779 unsigned long rate)
780 {
781 + if (!emc->bad_state) {
782 /* shouldn't ever happen in practice */
783 dev_err(emc->dev, "timing configuration can't be reverted\n");
784 emc->bad_state = true;
··· 823
824 switch (msg) {
825 case PRE_RATE_CHANGE:
826 + /*
827 + * Disable interrupt since read accesses are prohibited after
828 + * stalling.
829 + */
830 + disable_irq(emc->irq);
831 err = emc_prepare_timing_change(emc, cnd->new_rate);
832 + enable_irq(emc->irq);
833 break;
834
835 case ABORT_RATE_CHANGE:
··· 1083 return timing->rate;
1084 }
1085
1086 + /*
1087 + * debugfs interface
1088 + *
1089 + * The memory controller driver exposes some files in debugfs that can be used
1090 + * to control the EMC frequency. The top-level directory can be found here:
1091 + *
1092 + * /sys/kernel/debug/emc
1093 + *
1094 + * It contains the following files:
1095 + *
1096 + * - available_rates: This file contains a list of valid, space-separated
1097 + * EMC frequencies.
1098 + *
1099 + * - min_rate: Writing a value to this file sets the given frequency as the
1100 + * floor of the permitted range. If this is higher than the currently
1101 + * configured EMC frequency, this will cause the frequency to be
1102 + * increased so that it stays within the valid range.
1103 + *
1104 + * - max_rate: Similarly to the min_rate file, writing a value to this file
1105 + * sets the given frequency as the ceiling of the permitted range. If
1106 + * the value is lower than the currently configured EMC frequency, this
1107 + * will cause the frequency to be decreased so that it stays within the
1108 + * valid range.
1109 + */ 1110 + 1111 + static bool tegra_emc_validate_rate(struct tegra_emc *emc, unsigned long rate) 1112 + { 1113 + unsigned int i; 1114 + 1115 + for (i = 0; i < emc->num_timings; i++) 1116 + if (rate == emc->timings[i].rate) 1117 + return true; 1118 + 1119 + return false; 1120 + } 1121 + 1122 + static int tegra_emc_debug_available_rates_show(struct seq_file *s, void *data) 1123 + { 1124 + struct tegra_emc *emc = s->private; 1125 + const char *prefix = ""; 1126 + unsigned int i; 1127 + 1128 + for (i = 0; i < emc->num_timings; i++) { 1129 + seq_printf(s, "%s%lu", prefix, emc->timings[i].rate); 1130 + prefix = " "; 1131 + } 1132 + 1133 + seq_puts(s, "\n"); 1134 + 1135 + return 0; 1136 + } 1137 + 1138 + static int tegra_emc_debug_available_rates_open(struct inode *inode, 1139 + struct file *file) 1140 + { 1141 + return single_open(file, tegra_emc_debug_available_rates_show, 1142 + inode->i_private); 1143 + } 1144 + 1145 + static const struct file_operations tegra_emc_debug_available_rates_fops = { 1146 + .open = tegra_emc_debug_available_rates_open, 1147 + .read = seq_read, 1148 + .llseek = seq_lseek, 1149 + .release = single_release, 1150 + }; 1151 + 1152 + static int tegra_emc_debug_min_rate_get(void *data, u64 *rate) 1153 + { 1154 + struct tegra_emc *emc = data; 1155 + 1156 + *rate = emc->debugfs.min_rate; 1157 + 1158 + return 0; 1159 + } 1160 + 1161 + static int tegra_emc_debug_min_rate_set(void *data, u64 rate) 1162 + { 1163 + struct tegra_emc *emc = data; 1164 + int err; 1165 + 1166 + if (!tegra_emc_validate_rate(emc, rate)) 1167 + return -EINVAL; 1168 + 1169 + err = clk_set_min_rate(emc->clk, rate); 1170 + if (err < 0) 1171 + return err; 1172 + 1173 + emc->debugfs.min_rate = rate; 1174 + 1175 + return 0; 1176 + } 1177 + 1178 + DEFINE_SIMPLE_ATTRIBUTE(tegra_emc_debug_min_rate_fops, 1179 + tegra_emc_debug_min_rate_get, 1180 + tegra_emc_debug_min_rate_set, "%llu\n"); 1181 + 1182 + static int tegra_emc_debug_max_rate_get(void *data, u64 *rate) 1183 + { 1184 + struct tegra_emc *emc = data; 1185 + 1186 + *rate = emc->debugfs.max_rate; 1187 + 1188 + return 0; 1189 + } 1190 + 1191 + static int tegra_emc_debug_max_rate_set(void *data, u64 rate) 1192 + { 1193 + struct tegra_emc *emc = data; 1194 + int err; 1195 + 1196 + if (!tegra_emc_validate_rate(emc, rate)) 1197 + return -EINVAL; 1198 + 1199 + err = clk_set_max_rate(emc->clk, rate); 1200 + if (err < 0) 1201 + return err; 1202 + 1203 + emc->debugfs.max_rate = rate; 1204 + 1205 + return 0; 1206 + } 1207 + 1208 + DEFINE_SIMPLE_ATTRIBUTE(tegra_emc_debug_max_rate_fops, 1209 + tegra_emc_debug_max_rate_get, 1210 + tegra_emc_debug_max_rate_set, "%llu\n"); 1211 + 1212 + static void tegra_emc_debugfs_init(struct tegra_emc *emc) 1213 + { 1214 + struct device *dev = emc->dev; 1215 + unsigned int i; 1216 + int err; 1217 + 1218 + emc->debugfs.min_rate = ULONG_MAX; 1219 + emc->debugfs.max_rate = 0; 1220 + 1221 + for (i = 0; i < emc->num_timings; i++) { 1222 + if (emc->timings[i].rate < emc->debugfs.min_rate) 1223 + emc->debugfs.min_rate = emc->timings[i].rate; 1224 + 1225 + if (emc->timings[i].rate > emc->debugfs.max_rate) 1226 + emc->debugfs.max_rate = emc->timings[i].rate; 1227 + } 1228 + 1229 + err = clk_set_rate_range(emc->clk, emc->debugfs.min_rate, 1230 + emc->debugfs.max_rate); 1231 + if (err < 0) { 1232 + dev_err(dev, "failed to set rate range [%lu-%lu] for %pC\n", 1233 + emc->debugfs.min_rate, emc->debugfs.max_rate, 1234 + emc->clk); 1235 + } 1236 + 1237 + emc->debugfs.root = debugfs_create_dir("emc", NULL); 1238 + if (!emc->debugfs.root) { 1239 
+ dev_err(emc->dev, "failed to create debugfs directory\n");
1240 + return;
1241 + }
1242 +
1243 + debugfs_create_file("available_rates", S_IRUGO, emc->debugfs.root,
1244 + emc, &tegra_emc_debug_available_rates_fops);
1245 + debugfs_create_file("min_rate", S_IRUGO | S_IWUSR, emc->debugfs.root,
1246 + emc, &tegra_emc_debug_min_rate_fops);
1247 + debugfs_create_file("max_rate", S_IRUGO | S_IWUSR, emc->debugfs.root,
1248 + emc, &tegra_emc_debug_max_rate_fops);
1249 + }
1250 +
1251 static int tegra_emc_probe(struct platform_device *pdev)
1252 {
1253 struct platform_device *mc;
··· 1169 }
1170
1171 platform_set_drvdata(pdev, emc);
1172 + tegra_emc_debugfs_init(emc);
1173
1174 return 0;
1175
··· 1181 static int tegra_emc_suspend(struct device *dev)
1182 {
1183 struct tegra_emc *emc = dev_get_drvdata(dev);
1184 + int err;
1185
1186 + /* take exclusive control over the clock's rate */
1187 + err = clk_rate_exclusive_get(emc->clk);
1188 + if (err) {
1189 + dev_err(emc->dev, "failed to acquire clk: %d\n", err);
1190 + return err;
1191 + }
1192 +
1193 + /* suspending in a bad state will hang the machine */
1194 + if (WARN(emc->bad_state, "hardware in a bad state\n"))
1195 + return -EINVAL;
1196
1197 emc->bad_state = true;
··· 1201
1202 emc_setup_hw(emc);
1203 emc->bad_state = false;
1204 +
1205 + clk_rate_exclusive_put(emc->clk);
1206
1207 return 0;
1208 }
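The tegra30 rework above moves the post-change programming into the interrupt handler, so the completion now means "result is ready" rather than merely "interrupt seen". The generic shape of that handshake, sketched with placeholder names (my_dev, my_isr, my_start_and_wait):

#include <linux/completion.h>
#include <linux/interrupt.h>
#include <linux/jiffies.h>

struct my_dev {
        struct completion done;
        bool bad_state;
};

static irqreturn_t my_isr(int irq, void *data)
{
        struct my_dev *md = data;

        /* finish the hardware sequence in IRQ context, record the outcome */
        WRITE_ONCE(md->bad_state, false);
        complete(&md->done);

        return IRQ_HANDLED;
}

static int my_start_and_wait(struct my_dev *md)
{
        /* re-arm before kicking the hardware, never after */
        reinit_completion(&md->done);

        /* ... trigger the operation that ends in my_isr() ... */

        if (!wait_for_completion_timeout(&md->done, msecs_to_jiffies(100)))
                return -ETIMEDOUT;

        return READ_ONCE(md->bad_state) ? -EIO : 0;
}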
+1 -1
drivers/net/ethernet/freescale/Kconfig
··· 74 75 config UCC_GETH 76 tristate "Freescale QE Gigabit Ethernet" 77 - depends on QUICC_ENGINE 78 select FSL_PQ_MDIO 79 select PHYLIB 80 ---help---
··· 74 75 config UCC_GETH 76 tristate "Freescale QE Gigabit Ethernet" 77 + depends on QUICC_ENGINE && PPC32 78 select FSL_PQ_MDIO 79 select PHYLIB 80 ---help---
+14 -9
drivers/net/wan/fsl_ucc_hdlc.c
··· 84 int ret, i; 85 void *bd_buffer; 86 dma_addr_t bd_dma_addr; 87 - u32 riptr; 88 - u32 tiptr; 89 u32 gumr; 90 91 ut_info = priv->ut_info; ··· 195 priv->ucc_pram_offset = qe_muram_alloc(sizeof(struct ucc_hdlc_param), 196 ALIGNMENT_OF_UCC_HDLC_PRAM); 197 198 - if (IS_ERR_VALUE(priv->ucc_pram_offset)) { 199 dev_err(priv->dev, "Can not allocate MURAM for hdlc parameter.\n"); 200 ret = -ENOMEM; 201 goto free_tx_bd; ··· 233 234 /* Alloc riptr, tiptr */ 235 riptr = qe_muram_alloc(32, 32); 236 - if (IS_ERR_VALUE(riptr)) { 237 dev_err(priv->dev, "Cannot allocate MURAM mem for Receive internal temp data pointer\n"); 238 ret = -ENOMEM; 239 goto free_tx_skbuff; 240 } 241 242 tiptr = qe_muram_alloc(32, 32); 243 - if (IS_ERR_VALUE(tiptr)) { 244 dev_err(priv->dev, "Cannot allocate MURAM mem for Transmit internal temp data pointer\n"); 245 ret = -ENOMEM; 246 goto free_riptr; 247 } 248 249 /* Set RIPTR, TIPTR */ ··· 628 629 if (howmany < budget) { 630 napi_complete_done(napi, howmany); 631 - qe_setbits32(priv->uccf->p_uccm, 632 - (UCCE_HDLC_RX_EVENTS | UCCE_HDLC_TX_EVENTS) << 16); 633 } 634 635 return howmany; ··· 735 736 static void uhdlc_memclean(struct ucc_hdlc_private *priv) 737 { 738 - qe_muram_free(priv->ucc_pram->riptr); 739 - qe_muram_free(priv->ucc_pram->tiptr); 740 741 if (priv->rx_bd_base) { 742 dma_free_coherent(priv->dev,
··· 84 int ret, i; 85 void *bd_buffer; 86 dma_addr_t bd_dma_addr; 87 + s32 riptr; 88 + s32 tiptr; 89 u32 gumr; 90 91 ut_info = priv->ut_info; ··· 195 priv->ucc_pram_offset = qe_muram_alloc(sizeof(struct ucc_hdlc_param), 196 ALIGNMENT_OF_UCC_HDLC_PRAM); 197 198 + if (priv->ucc_pram_offset < 0) { 199 dev_err(priv->dev, "Can not allocate MURAM for hdlc parameter.\n"); 200 ret = -ENOMEM; 201 goto free_tx_bd; ··· 233 234 /* Alloc riptr, tiptr */ 235 riptr = qe_muram_alloc(32, 32); 236 + if (riptr < 0) { 237 dev_err(priv->dev, "Cannot allocate MURAM mem for Receive internal temp data pointer\n"); 238 ret = -ENOMEM; 239 goto free_tx_skbuff; 240 } 241 242 tiptr = qe_muram_alloc(32, 32); 243 + if (tiptr < 0) { 244 dev_err(priv->dev, "Cannot allocate MURAM mem for Transmit internal temp data pointer\n"); 245 ret = -ENOMEM; 246 goto free_riptr; 247 + } 248 + if (riptr != (u16)riptr || tiptr != (u16)tiptr) { 249 + dev_err(priv->dev, "MURAM allocation out of addressable range\n"); 250 + ret = -ENOMEM; 251 + goto free_tiptr; 252 } 253 254 /* Set RIPTR, TIPTR */ ··· 623 624 if (howmany < budget) { 625 napi_complete_done(napi, howmany); 626 + qe_setbits_be32(priv->uccf->p_uccm, 627 + (UCCE_HDLC_RX_EVENTS | UCCE_HDLC_TX_EVENTS) << 16); 628 } 629 630 return howmany; ··· 730 731 static void uhdlc_memclean(struct ucc_hdlc_private *priv) 732 { 733 + qe_muram_free(ioread16be(&priv->ucc_pram->riptr)); 734 + qe_muram_free(ioread16be(&priv->ucc_pram->tiptr)); 735 736 if (priv->rx_bd_base) { 737 dma_free_coherent(priv->dev,
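This fix works because qe_muram_alloc() now returns a signed offset, with negative values meaning failure, instead of an unsigned value probed with IS_ERR_VALUE(). A condensed sketch of the new convention (alloc_riptr is a hypothetical helper, not part of the patch):

static int alloc_riptr(struct ucc_hdlc_param __iomem *pram)
{
        s32 off = qe_muram_alloc(32, 32);       /* MURAM offset or -errno */

        if (off < 0)
                return -ENOMEM;

        /* the RIPTR field is only 16 bits wide; reject wider offsets */
        if (off != (u16)off) {
                qe_muram_free(off);
                return -ENOMEM;
        }

        iowrite16be(off, &pram->riptr);
        return 0;
}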
+1 -1
drivers/net/wan/fsl_ucc_hdlc.h
··· 98 99 unsigned short tx_ring_size; 100 unsigned short rx_ring_size; 101 - u32 ucc_pram_offset; 102 103 unsigned short encoding; 104 unsigned short parity;
··· 98 99 unsigned short tx_ring_size; 100 unsigned short rx_ring_size; 101 + s32 ucc_pram_offset; 102 103 unsigned short encoding; 104 unsigned short parity;
+36
drivers/of/base.c
··· 416 EXPORT_SYMBOL(of_cpu_node_to_id); 417 418 /** 419 * __of_device_is_compatible() - Check if the node matches given constraints 420 * @device: pointer to node 421 * @compat: required compatible string, NULL or "" for any match
··· 416 EXPORT_SYMBOL(of_cpu_node_to_id);
417
418 /**
419 + * of_get_cpu_state_node - Get CPU's idle state node at the given index
420 + *
421 + * @cpu_node: The device node for the CPU
422 + * @index: The index in the list of the idle states
423 + *
424 + * Two generic methods can be used to describe a CPU's idle states, either via
425 + * a flattened description through the "cpu-idle-states" binding or via the
426 + * hierarchical layout, using the "power-domains" and the "domain-idle-states"
427 + * bindings. This function checks for both and returns the idle state node for
428 + * the requested index.
429 + *
430 + * In case an idle state node is found at @index, the refcount is incremented
431 + * for it, so call of_node_put() on it when done. Returns NULL if not found.
432 + */
433 + struct device_node *of_get_cpu_state_node(struct device_node *cpu_node,
434 + int index)
435 + {
436 + struct of_phandle_args args;
437 + int err;
438 +
439 + err = of_parse_phandle_with_args(cpu_node, "power-domains",
440 + "#power-domain-cells", 0, &args);
441 + if (!err) {
442 + struct device_node *state_node =
443 + of_parse_phandle(args.np, "domain-idle-states", index);
444 +
445 + of_node_put(args.np);
446 + if (state_node)
447 + return state_node;
448 + }
449 +
450 + return of_parse_phandle(cpu_node, "cpu-idle-states", index);
451 + }
452 + EXPORT_SYMBOL(of_get_cpu_state_node);
453 +
454 + /**
455 * __of_device_is_compatible() - Check if the node matches given constraints
456 * @device: pointer to node
457 * @compat: required compatible string, NULL or "" for any match
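A caller never needs to know which of the two bindings a platform uses: asking for increasing indices until the helper returns NULL walks whichever list is present. A minimal sketch (count_idle_states is a hypothetical consumer):

#include <linux/of.h>

static int count_idle_states(struct device_node *cpu_node)
{
        struct device_node *state;
        int i = 0;

        while ((state = of_get_cpu_state_node(cpu_node, i))) {
                of_node_put(state);     /* the helper took a reference for us */
                i++;
        }

        return i;
}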
+24 -1
drivers/reset/Kconfig
··· 49 This enables the reset controller driver for Broadcom STB SoCs using 50 a SUN_TOP_CTRL_SW_INIT style controller. 51 52 config RESET_HSDK 53 bool "Synopsys HSDK Reset Driver" 54 depends on HAS_IOMEM ··· 70 select MFD_SYSCON 71 help 72 This enables the reset controller driver for i.MX7 SoCs. 73 74 config RESET_LANTIQ 75 bool "Lantiq XWAY Reset Driver" if COMPILE_TEST ··· 105 This enables the reset driver for Audio Memory Arbiter of 106 Amlogic's A113 based SoCs 107 108 config RESET_OXNAS 109 bool 110 ··· 122 This enables the reset driver for ImgTec Pistachio SoCs. 123 124 config RESET_QCOM_AOSS 125 - bool "Qcom AOSS Reset Driver" 126 depends on ARCH_QCOM || COMPILE_TEST 127 help 128 This enables the AOSS (always on subsystem) reset driver
··· 49 This enables the reset controller driver for Broadcom STB SoCs using 50 a SUN_TOP_CTRL_SW_INIT style controller. 51 52 + config RESET_BRCMSTB_RESCAL 53 + bool "Broadcom STB RESCAL reset controller" 54 + default ARCH_BRCMSTB || COMPILE_TEST 55 + help 56 + This enables the RESCAL reset controller for SATA, PCIe0, or PCIe1 on 57 + BCM7216. 58 + 59 config RESET_HSDK 60 bool "Synopsys HSDK Reset Driver" 61 depends on HAS_IOMEM ··· 63 select MFD_SYSCON 64 help 65 This enables the reset controller driver for i.MX7 SoCs. 66 + 67 + config RESET_INTEL_GW 68 + bool "Intel Reset Controller Driver" 69 + depends on OF 70 + select REGMAP_MMIO 71 + help 72 + This enables the reset controller driver for Intel Gateway SoCs. 73 + Say Y to control the reset signals provided by reset controller. 74 + Otherwise, say N. 75 76 config RESET_LANTIQ 77 bool "Lantiq XWAY Reset Driver" if COMPILE_TEST ··· 89 This enables the reset driver for Audio Memory Arbiter of 90 Amlogic's A113 based SoCs 91 92 + config RESET_NPCM 93 + bool "NPCM BMC Reset Driver" if COMPILE_TEST 94 + default ARCH_NPCM 95 + help 96 + This enables the reset controller driver for Nuvoton NPCM 97 + BMC SoCs. 98 + 99 config RESET_OXNAS 100 bool 101 ··· 99 This enables the reset driver for ImgTec Pistachio SoCs. 100 101 config RESET_QCOM_AOSS 102 + tristate "Qcom AOSS Reset Driver" 103 depends on ARCH_QCOM || COMPILE_TEST 104 help 105 This enables the AOSS (always on subsystem) reset driver
+3
drivers/reset/Makefile
··· 8 obj-$(CONFIG_RESET_AXS10X) += reset-axs10x.o 9 obj-$(CONFIG_RESET_BERLIN) += reset-berlin.o 10 obj-$(CONFIG_RESET_BRCMSTB) += reset-brcmstb.o 11 obj-$(CONFIG_RESET_HSDK) += reset-hsdk.o 12 obj-$(CONFIG_RESET_IMX7) += reset-imx7.o 13 obj-$(CONFIG_RESET_LANTIQ) += reset-lantiq.o 14 obj-$(CONFIG_RESET_LPC18XX) += reset-lpc18xx.o 15 obj-$(CONFIG_RESET_MESON) += reset-meson.o 16 obj-$(CONFIG_RESET_MESON_AUDIO_ARB) += reset-meson-audio-arb.o 17 obj-$(CONFIG_RESET_OXNAS) += reset-oxnas.o 18 obj-$(CONFIG_RESET_PISTACHIO) += reset-pistachio.o 19 obj-$(CONFIG_RESET_QCOM_AOSS) += reset-qcom-aoss.o
··· 8 obj-$(CONFIG_RESET_AXS10X) += reset-axs10x.o 9 obj-$(CONFIG_RESET_BERLIN) += reset-berlin.o 10 obj-$(CONFIG_RESET_BRCMSTB) += reset-brcmstb.o 11 + obj-$(CONFIG_RESET_BRCMSTB_RESCAL) += reset-brcmstb-rescal.o 12 obj-$(CONFIG_RESET_HSDK) += reset-hsdk.o 13 obj-$(CONFIG_RESET_IMX7) += reset-imx7.o 14 + obj-$(CONFIG_RESET_INTEL_GW) += reset-intel-gw.o 15 obj-$(CONFIG_RESET_LANTIQ) += reset-lantiq.o 16 obj-$(CONFIG_RESET_LPC18XX) += reset-lpc18xx.o 17 obj-$(CONFIG_RESET_MESON) += reset-meson.o 18 obj-$(CONFIG_RESET_MESON_AUDIO_ARB) += reset-meson-audio-arb.o 19 + obj-$(CONFIG_RESET_NPCM) += reset-npcm.o 20 obj-$(CONFIG_RESET_OXNAS) += reset-oxnas.o 21 obj-$(CONFIG_RESET_PISTACHIO) += reset-pistachio.o 22 obj-$(CONFIG_RESET_QCOM_AOSS) += reset-qcom-aoss.o
+17 -16
drivers/reset/core.c
··· 150 return -ENOMEM; 151 152 ret = reset_controller_register(rcdev); 153 - if (!ret) { 154 - *rcdevp = rcdev; 155 - devres_add(dev, rcdevp); 156 - } else { 157 devres_free(rcdevp); 158 } 159 160 return ret; 161 } ··· 788 return ERR_PTR(-ENOMEM); 789 790 rstc = __reset_control_get(dev, id, index, shared, optional, acquired); 791 - if (!IS_ERR_OR_NULL(rstc)) { 792 - *ptr = rstc; 793 - devres_add(dev, ptr); 794 - } else { 795 devres_free(ptr); 796 } 797 798 return rstc; 799 } ··· 921 struct reset_control * 922 devm_reset_control_array_get(struct device *dev, bool shared, bool optional) 923 { 924 - struct reset_control **devres; 925 - struct reset_control *rstc; 926 927 - devres = devres_alloc(devm_reset_control_release, sizeof(*devres), 928 - GFP_KERNEL); 929 - if (!devres) 930 return ERR_PTR(-ENOMEM); 931 932 rstc = of_reset_control_array_get(dev->of_node, shared, optional, true); 933 if (IS_ERR_OR_NULL(rstc)) { 934 - devres_free(devres); 935 return rstc; 936 } 937 938 - *devres = rstc; 939 - devres_add(dev, devres); 940 941 return rstc; 942 }
··· 150 return -ENOMEM; 151 152 ret = reset_controller_register(rcdev); 153 + if (ret) { 154 devres_free(rcdevp); 155 + return ret; 156 } 157 + 158 + *rcdevp = rcdev; 159 + devres_add(dev, rcdevp); 160 161 return ret; 162 } ··· 787 return ERR_PTR(-ENOMEM); 788 789 rstc = __reset_control_get(dev, id, index, shared, optional, acquired); 790 + if (IS_ERR_OR_NULL(rstc)) { 791 devres_free(ptr); 792 + return rstc; 793 } 794 + 795 + *ptr = rstc; 796 + devres_add(dev, ptr); 797 798 return rstc; 799 } ··· 919 struct reset_control * 920 devm_reset_control_array_get(struct device *dev, bool shared, bool optional) 921 { 922 + struct reset_control **ptr, *rstc; 923 924 + ptr = devres_alloc(devm_reset_control_release, sizeof(*ptr), 925 + GFP_KERNEL); 926 + if (!ptr) 927 return ERR_PTR(-ENOMEM); 928 929 rstc = of_reset_control_array_get(dev->of_node, shared, optional, true); 930 if (IS_ERR_OR_NULL(rstc)) { 931 + devres_free(ptr); 932 return rstc; 933 } 934 935 + *ptr = rstc; 936 + devres_add(dev, ptr); 937 938 return rstc; 939 }
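All three helpers in core.c now follow the same devres shape: allocate the slot, attempt the real operation, free the slot on failure, and publish the pointer only on success. The underlying pattern, sketched with a hypothetical my_obj_get()/my_obj_put() pair:

#include <linux/device.h>
#include <linux/err.h>

struct my_obj;
struct my_obj *my_obj_get(struct device *dev);
void my_obj_put(struct my_obj *obj);

static void my_obj_release(struct device *dev, void *res)
{
        my_obj_put(*(struct my_obj **)res);     /* undoes the get below */
}

static struct my_obj *devm_my_obj_get(struct device *dev)
{
        struct my_obj **ptr, *obj;

        ptr = devres_alloc(my_obj_release, sizeof(*ptr), GFP_KERNEL);
        if (!ptr)
                return ERR_PTR(-ENOMEM);

        obj = my_obj_get(dev);
        if (IS_ERR(obj)) {
                devres_free(ptr);       /* nothing registered, nothing to undo */
                return obj;
        }

        *ptr = obj;
        devres_add(dev, ptr);   /* my_obj_release() now runs on driver detach */

        return obj;
}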
+107
drivers/reset/reset-brcmstb-rescal.c
···
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* Copyright (C) 2018-2020 Broadcom */ 3 + 4 + #include <linux/device.h> 5 + #include <linux/iopoll.h> 6 + #include <linux/module.h> 7 + #include <linux/of.h> 8 + #include <linux/platform_device.h> 9 + #include <linux/reset-controller.h> 10 + 11 + #define BRCM_RESCAL_START 0x0 12 + #define BRCM_RESCAL_START_BIT BIT(0) 13 + #define BRCM_RESCAL_CTRL 0x4 14 + #define BRCM_RESCAL_STATUS 0x8 15 + #define BRCM_RESCAL_STATUS_BIT BIT(0) 16 + 17 + struct brcm_rescal_reset { 18 + void __iomem *base; 19 + struct device *dev; 20 + struct reset_controller_dev rcdev; 21 + }; 22 + 23 + static int brcm_rescal_reset_set(struct reset_controller_dev *rcdev, 24 + unsigned long id) 25 + { 26 + struct brcm_rescal_reset *data = 27 + container_of(rcdev, struct brcm_rescal_reset, rcdev); 28 + void __iomem *base = data->base; 29 + u32 reg; 30 + int ret; 31 + 32 + reg = readl(base + BRCM_RESCAL_START); 33 + writel(reg | BRCM_RESCAL_START_BIT, base + BRCM_RESCAL_START); 34 + reg = readl(base + BRCM_RESCAL_START); 35 + if (!(reg & BRCM_RESCAL_START_BIT)) { 36 + dev_err(data->dev, "failed to start SATA/PCIe rescal\n"); 37 + return -EIO; 38 + } 39 + 40 + ret = readl_poll_timeout(base + BRCM_RESCAL_STATUS, reg, 41 + !(reg & BRCM_RESCAL_STATUS_BIT), 100, 1000); 42 + if (ret) { 43 + dev_err(data->dev, "time out on SATA/PCIe rescal\n"); 44 + return ret; 45 + } 46 + 47 + reg = readl(base + BRCM_RESCAL_START); 48 + writel(reg & ~BRCM_RESCAL_START_BIT, base + BRCM_RESCAL_START); 49 + 50 + dev_dbg(data->dev, "SATA/PCIe rescal success\n"); 51 + 52 + return 0; 53 + } 54 + 55 + static int brcm_rescal_reset_xlate(struct reset_controller_dev *rcdev, 56 + const struct of_phandle_args *reset_spec) 57 + { 58 + /* This is needed if #reset-cells == 0. */ 59 + return 0; 60 + } 61 + 62 + static const struct reset_control_ops brcm_rescal_reset_ops = { 63 + .reset = brcm_rescal_reset_set, 64 + }; 65 + 66 + static int brcm_rescal_reset_probe(struct platform_device *pdev) 67 + { 68 + struct brcm_rescal_reset *data; 69 + struct resource *res; 70 + 71 + data = devm_kzalloc(&pdev->dev, sizeof(*data), GFP_KERNEL); 72 + if (!data) 73 + return -ENOMEM; 74 + 75 + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 76 + data->base = devm_ioremap_resource(&pdev->dev, res); 77 + if (IS_ERR(data->base)) 78 + return PTR_ERR(data->base); 79 + 80 + data->rcdev.owner = THIS_MODULE; 81 + data->rcdev.nr_resets = 1; 82 + data->rcdev.ops = &brcm_rescal_reset_ops; 83 + data->rcdev.of_node = pdev->dev.of_node; 84 + data->rcdev.of_xlate = brcm_rescal_reset_xlate; 85 + data->dev = &pdev->dev; 86 + 87 + return devm_reset_controller_register(&pdev->dev, &data->rcdev); 88 + } 89 + 90 + static const struct of_device_id brcm_rescal_reset_of_match[] = { 91 + { .compatible = "brcm,bcm7216-pcie-sata-rescal" }, 92 + { }, 93 + }; 94 + MODULE_DEVICE_TABLE(of, brcm_rescal_reset_of_match); 95 + 96 + static struct platform_driver brcm_rescal_reset_driver = { 97 + .probe = brcm_rescal_reset_probe, 98 + .driver = { 99 + .name = "brcm-rescal-reset", 100 + .of_match_table = brcm_rescal_reset_of_match, 101 + } 102 + }; 103 + module_platform_driver(brcm_rescal_reset_driver); 104 + 105 + MODULE_AUTHOR("Broadcom"); 106 + MODULE_DESCRIPTION("Broadcom SATA/PCIe rescal reset controller"); 107 + MODULE_LICENSE("GPL v2");
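Because the controller exposes a single self-deasserting line and its of_xlate accepts #reset-cells = <0>, a consumer requests the reset by phandle alone and simply fires it. A sketch of the consumer side (the "rescal" lookup name and the shared-get variant are assumptions, not taken from this series):

#include <linux/err.h>
#include <linux/reset.h>

static int my_consumer_probe(struct device *dev)
{
        struct reset_control *rescal;

        rescal = devm_reset_control_get_shared(dev, "rescal");
        if (IS_ERR(rescal))
                return PTR_ERR(rescal);

        /* lands in brcm_rescal_reset_set(), which polls BRCM_RESCAL_STATUS */
        return reset_control_reset(rescal);
}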
+262
drivers/reset/reset-intel-gw.c
···
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Copyright (c) 2019 Intel Corporation. 4 + * Lei Chuanhua <Chuanhua.lei@intel.com> 5 + */ 6 + 7 + #include <linux/bitfield.h> 8 + #include <linux/init.h> 9 + #include <linux/of_device.h> 10 + #include <linux/platform_device.h> 11 + #include <linux/reboot.h> 12 + #include <linux/regmap.h> 13 + #include <linux/reset-controller.h> 14 + 15 + #define RCU_RST_STAT 0x0024 16 + #define RCU_RST_REQ 0x0048 17 + 18 + #define REG_OFFSET GENMASK(31, 16) 19 + #define BIT_OFFSET GENMASK(15, 8) 20 + #define STAT_BIT_OFFSET GENMASK(7, 0) 21 + 22 + #define to_reset_data(x) container_of(x, struct intel_reset_data, rcdev) 23 + 24 + struct intel_reset_soc { 25 + bool legacy; 26 + u32 reset_cell_count; 27 + }; 28 + 29 + struct intel_reset_data { 30 + struct reset_controller_dev rcdev; 31 + struct notifier_block restart_nb; 32 + const struct intel_reset_soc *soc_data; 33 + struct regmap *regmap; 34 + struct device *dev; 35 + u32 reboot_id; 36 + }; 37 + 38 + static const struct regmap_config intel_rcu_regmap_config = { 39 + .name = "intel-reset", 40 + .reg_bits = 32, 41 + .reg_stride = 4, 42 + .val_bits = 32, 43 + .fast_io = true, 44 + }; 45 + 46 + /* 47 + * Reset status register offset relative to 48 + * the reset control register(X) is X + 4 49 + */ 50 + static u32 id_to_reg_and_bit_offsets(struct intel_reset_data *data, 51 + unsigned long id, u32 *rst_req, 52 + u32 *req_bit, u32 *stat_bit) 53 + { 54 + *rst_req = FIELD_GET(REG_OFFSET, id); 55 + *req_bit = FIELD_GET(BIT_OFFSET, id); 56 + 57 + if (data->soc_data->legacy) 58 + *stat_bit = FIELD_GET(STAT_BIT_OFFSET, id); 59 + else 60 + *stat_bit = *req_bit; 61 + 62 + if (data->soc_data->legacy && *rst_req == RCU_RST_REQ) 63 + return RCU_RST_STAT; 64 + else 65 + return *rst_req + 0x4; 66 + } 67 + 68 + static int intel_set_clr_bits(struct intel_reset_data *data, unsigned long id, 69 + bool set) 70 + { 71 + u32 rst_req, req_bit, rst_stat, stat_bit, val; 72 + int ret; 73 + 74 + rst_stat = id_to_reg_and_bit_offsets(data, id, &rst_req, 75 + &req_bit, &stat_bit); 76 + 77 + val = set ? 
BIT(req_bit) : 0; 78 + ret = regmap_update_bits(data->regmap, rst_req, BIT(req_bit), val); 79 + if (ret) 80 + return ret; 81 + 82 + return regmap_read_poll_timeout(data->regmap, rst_stat, val, 83 + set == !!(val & BIT(stat_bit)), 20, 84 + 200); 85 + } 86 + 87 + static int intel_assert_device(struct reset_controller_dev *rcdev, 88 + unsigned long id) 89 + { 90 + struct intel_reset_data *data = to_reset_data(rcdev); 91 + int ret; 92 + 93 + ret = intel_set_clr_bits(data, id, true); 94 + if (ret) 95 + dev_err(data->dev, "Reset assert failed %d\n", ret); 96 + 97 + return ret; 98 + } 99 + 100 + static int intel_deassert_device(struct reset_controller_dev *rcdev, 101 + unsigned long id) 102 + { 103 + struct intel_reset_data *data = to_reset_data(rcdev); 104 + int ret; 105 + 106 + ret = intel_set_clr_bits(data, id, false); 107 + if (ret) 108 + dev_err(data->dev, "Reset deassert failed %d\n", ret); 109 + 110 + return ret; 111 + } 112 + 113 + static int intel_reset_status(struct reset_controller_dev *rcdev, 114 + unsigned long id) 115 + { 116 + struct intel_reset_data *data = to_reset_data(rcdev); 117 + u32 rst_req, req_bit, rst_stat, stat_bit, val; 118 + int ret; 119 + 120 + rst_stat = id_to_reg_and_bit_offsets(data, id, &rst_req, 121 + &req_bit, &stat_bit); 122 + ret = regmap_read(data->regmap, rst_stat, &val); 123 + if (ret) 124 + return ret; 125 + 126 + return !!(val & BIT(stat_bit)); 127 + } 128 + 129 + static const struct reset_control_ops intel_reset_ops = { 130 + .assert = intel_assert_device, 131 + .deassert = intel_deassert_device, 132 + .status = intel_reset_status, 133 + }; 134 + 135 + static int intel_reset_xlate(struct reset_controller_dev *rcdev, 136 + const struct of_phandle_args *spec) 137 + { 138 + struct intel_reset_data *data = to_reset_data(rcdev); 139 + u32 id; 140 + 141 + if (spec->args[1] > 31) 142 + return -EINVAL; 143 + 144 + id = FIELD_PREP(REG_OFFSET, spec->args[0]); 145 + id |= FIELD_PREP(BIT_OFFSET, spec->args[1]); 146 + 147 + if (data->soc_data->legacy) { 148 + if (spec->args[2] > 31) 149 + return -EINVAL; 150 + 151 + id |= FIELD_PREP(STAT_BIT_OFFSET, spec->args[2]); 152 + } 153 + 154 + return id; 155 + } 156 + 157 + static int intel_reset_restart_handler(struct notifier_block *nb, 158 + unsigned long action, void *data) 159 + { 160 + struct intel_reset_data *reset_data; 161 + 162 + reset_data = container_of(nb, struct intel_reset_data, restart_nb); 163 + intel_assert_device(&reset_data->rcdev, reset_data->reboot_id); 164 + 165 + return NOTIFY_DONE; 166 + } 167 + 168 + static int intel_reset_probe(struct platform_device *pdev) 169 + { 170 + struct device_node *np = pdev->dev.of_node; 171 + struct device *dev = &pdev->dev; 172 + struct intel_reset_data *data; 173 + void __iomem *base; 174 + u32 rb_id[3]; 175 + int ret; 176 + 177 + data = devm_kzalloc(dev, sizeof(*data), GFP_KERNEL); 178 + if (!data) 179 + return -ENOMEM; 180 + 181 + data->soc_data = of_device_get_match_data(dev); 182 + if (!data->soc_data) 183 + return -ENODEV; 184 + 185 + base = devm_platform_ioremap_resource(pdev, 0); 186 + if (IS_ERR(base)) 187 + return PTR_ERR(base); 188 + 189 + data->regmap = devm_regmap_init_mmio(dev, base, 190 + &intel_rcu_regmap_config); 191 + if (IS_ERR(data->regmap)) { 192 + dev_err(dev, "regmap initialization failed\n"); 193 + return PTR_ERR(data->regmap); 194 + } 195 + 196 + ret = device_property_read_u32_array(dev, "intel,global-reset", rb_id, 197 + data->soc_data->reset_cell_count); 198 + if (ret) { 199 + dev_err(dev, "Failed to get global reset offset!\n"); 200 + return 
ret;
201 + }
202 +
203 + data->dev = dev;
204 + data->rcdev.of_node = np;
205 + data->rcdev.owner = dev->driver->owner;
206 + data->rcdev.ops = &intel_reset_ops;
207 + data->rcdev.of_xlate = intel_reset_xlate;
208 + data->rcdev.of_reset_n_cells = data->soc_data->reset_cell_count;
209 + ret = devm_reset_controller_register(&pdev->dev, &data->rcdev);
210 + if (ret)
211 + return ret;
212 +
213 + data->reboot_id = FIELD_PREP(REG_OFFSET, rb_id[0]);
214 + data->reboot_id |= FIELD_PREP(BIT_OFFSET, rb_id[1]);
215 +
216 + if (data->soc_data->legacy)
217 + data->reboot_id |= FIELD_PREP(STAT_BIT_OFFSET, rb_id[2]);
218 +
219 + data->restart_nb.notifier_call = intel_reset_restart_handler;
220 + data->restart_nb.priority = 128;
221 + register_restart_handler(&data->restart_nb);
222 +
223 + return 0;
224 + }
225 +
226 + static const struct intel_reset_soc xrx200_data = {
227 + .legacy = true,
228 + .reset_cell_count = 3,
229 + };
230 +
231 + static const struct intel_reset_soc lgm_data = {
232 + .legacy = false,
233 + .reset_cell_count = 2,
234 + };
235 +
236 + static const struct of_device_id intel_reset_match[] = {
237 + { .compatible = "intel,rcu-lgm", .data = &lgm_data },
238 + { .compatible = "intel,rcu-xrx200", .data = &xrx200_data },
239 + {}
240 + };
241 +
242 + static struct platform_driver intel_reset_driver = {
243 + .probe = intel_reset_probe,
244 + .driver = {
245 + .name = "intel-reset",
246 + .of_match_table = intel_reset_match,
247 + },
248 + };
249 +
250 + static int __init intel_reset_init(void)
251 + {
252 + return platform_driver_register(&intel_reset_driver);
253 + }
254 +
255 + /*
256 + * The RCU is a system core entity in the always-on domain; its clocks and
257 + * resources are set up during system core initialization. Most platform-
258 + * and architecture-specific devices also need to perform a reset operation
259 + * as part of their own initialization, so register the RCU driver at
260 + * postcore initcall time.
261 + */
262 + postcore_initcall(intel_reset_init);
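The two- or three-cell specifier is folded into one unsigned long id with bitfield macros and unpacked again on every operation. A worked round trip using the driver's REG_OFFSET/BIT_OFFSET/STAT_BIT_OFFSET masks; the values 0x48/5/7 are purely illustrative:

#include <linux/bitfield.h>
#include <linux/printk.h>

static void intel_gw_id_example(void)
{
        /* legacy 3-cell specifier <0x48 5 7>, as packed by intel_reset_xlate() */
        unsigned long id = FIELD_PREP(REG_OFFSET, 0x48) |
                           FIELD_PREP(BIT_OFFSET, 5) |
                           FIELD_PREP(STAT_BIT_OFFSET, 7);

        u32 rst_req = FIELD_GET(REG_OFFSET, id);        /* 0x48: request register */
        u32 req_bit = FIELD_GET(BIT_OFFSET, id);        /* 5: request bit */
        u32 stat_bit = FIELD_GET(STAT_BIT_OFFSET, id);  /* 7: status bit */

        pr_info("req %#x bit %u stat %u\n", rst_req, req_bit, stat_bit);
}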
+291
drivers/reset/reset-npcm.c
···
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + // Copyright (c) 2019 Nuvoton Technology corporation. 3 + 4 + #include <linux/delay.h> 5 + #include <linux/err.h> 6 + #include <linux/io.h> 7 + #include <linux/init.h> 8 + #include <linux/of.h> 9 + #include <linux/of_device.h> 10 + #include <linux/platform_device.h> 11 + #include <linux/reboot.h> 12 + #include <linux/reset-controller.h> 13 + #include <linux/spinlock.h> 14 + #include <linux/mfd/syscon.h> 15 + #include <linux/regmap.h> 16 + #include <linux/of_address.h> 17 + 18 + /* NPCM7xx GCR registers */ 19 + #define NPCM_MDLR_OFFSET 0x7C 20 + #define NPCM_MDLR_USBD0 BIT(9) 21 + #define NPCM_MDLR_USBD1 BIT(8) 22 + #define NPCM_MDLR_USBD2_4 BIT(21) 23 + #define NPCM_MDLR_USBD5_9 BIT(22) 24 + 25 + #define NPCM_USB1PHYCTL_OFFSET 0x140 26 + #define NPCM_USB2PHYCTL_OFFSET 0x144 27 + #define NPCM_USBXPHYCTL_RS BIT(28) 28 + 29 + /* NPCM7xx Reset registers */ 30 + #define NPCM_SWRSTR 0x14 31 + #define NPCM_SWRST BIT(2) 32 + 33 + #define NPCM_IPSRST1 0x20 34 + #define NPCM_IPSRST1_USBD1 BIT(5) 35 + #define NPCM_IPSRST1_USBD2 BIT(8) 36 + #define NPCM_IPSRST1_USBD3 BIT(25) 37 + #define NPCM_IPSRST1_USBD4 BIT(22) 38 + #define NPCM_IPSRST1_USBD5 BIT(23) 39 + #define NPCM_IPSRST1_USBD6 BIT(24) 40 + 41 + #define NPCM_IPSRST2 0x24 42 + #define NPCM_IPSRST2_USB_HOST BIT(26) 43 + 44 + #define NPCM_IPSRST3 0x34 45 + #define NPCM_IPSRST3_USBD0 BIT(4) 46 + #define NPCM_IPSRST3_USBD7 BIT(5) 47 + #define NPCM_IPSRST3_USBD8 BIT(6) 48 + #define NPCM_IPSRST3_USBD9 BIT(7) 49 + #define NPCM_IPSRST3_USBPHY1 BIT(24) 50 + #define NPCM_IPSRST3_USBPHY2 BIT(25) 51 + 52 + #define NPCM_RC_RESETS_PER_REG 32 53 + #define NPCM_MASK_RESETS GENMASK(4, 0) 54 + 55 + struct npcm_rc_data { 56 + struct reset_controller_dev rcdev; 57 + struct notifier_block restart_nb; 58 + u32 sw_reset_number; 59 + void __iomem *base; 60 + spinlock_t lock; 61 + }; 62 + 63 + #define to_rc_data(p) container_of(p, struct npcm_rc_data, rcdev) 64 + 65 + static int npcm_rc_restart(struct notifier_block *nb, unsigned long mode, 66 + void *cmd) 67 + { 68 + struct npcm_rc_data *rc = container_of(nb, struct npcm_rc_data, 69 + restart_nb); 70 + 71 + writel(NPCM_SWRST << rc->sw_reset_number, rc->base + NPCM_SWRSTR); 72 + mdelay(1000); 73 + 74 + pr_emerg("%s: unable to restart system\n", __func__); 75 + 76 + return NOTIFY_DONE; 77 + } 78 + 79 + static int npcm_rc_setclear_reset(struct reset_controller_dev *rcdev, 80 + unsigned long id, bool set) 81 + { 82 + struct npcm_rc_data *rc = to_rc_data(rcdev); 83 + unsigned int rst_bit = BIT(id & NPCM_MASK_RESETS); 84 + unsigned int ctrl_offset = id >> 8; 85 + unsigned long flags; 86 + u32 stat; 87 + 88 + spin_lock_irqsave(&rc->lock, flags); 89 + stat = readl(rc->base + ctrl_offset); 90 + if (set) 91 + writel(stat | rst_bit, rc->base + ctrl_offset); 92 + else 93 + writel(stat & ~rst_bit, rc->base + ctrl_offset); 94 + spin_unlock_irqrestore(&rc->lock, flags); 95 + 96 + return 0; 97 + } 98 + 99 + static int npcm_rc_assert(struct reset_controller_dev *rcdev, unsigned long id) 100 + { 101 + return npcm_rc_setclear_reset(rcdev, id, true); 102 + } 103 + 104 + static int npcm_rc_deassert(struct reset_controller_dev *rcdev, 105 + unsigned long id) 106 + { 107 + return npcm_rc_setclear_reset(rcdev, id, false); 108 + } 109 + 110 + static int npcm_rc_status(struct reset_controller_dev *rcdev, 111 + unsigned long id) 112 + { 113 + struct npcm_rc_data *rc = to_rc_data(rcdev); 114 + unsigned int rst_bit = BIT(id & NPCM_MASK_RESETS); 115 + unsigned int ctrl_offset = id >> 8; 116 + 117 + 
return (readl(rc->base + ctrl_offset) & rst_bit); 118 + } 119 + 120 + static int npcm_reset_xlate(struct reset_controller_dev *rcdev, 121 + const struct of_phandle_args *reset_spec) 122 + { 123 + unsigned int offset, bit; 124 + 125 + offset = reset_spec->args[0]; 126 + if (offset != NPCM_IPSRST1 && offset != NPCM_IPSRST2 && 127 + offset != NPCM_IPSRST3) { 128 + dev_err(rcdev->dev, "Error reset register (0x%x)\n", offset); 129 + return -EINVAL; 130 + } 131 + bit = reset_spec->args[1]; 132 + if (bit >= NPCM_RC_RESETS_PER_REG) { 133 + dev_err(rcdev->dev, "Error reset number (%d)\n", bit); 134 + return -EINVAL; 135 + } 136 + 137 + return (offset << 8) | bit; 138 + } 139 + 140 + static const struct of_device_id npcm_rc_match[] = { 141 + { .compatible = "nuvoton,npcm750-reset", 142 + .data = (void *)"nuvoton,npcm750-gcr" }, 143 + { } 144 + }; 145 + 146 + /* 147 + * The following procedure should be observed in USB PHY, USB device and 148 + * USB host initialization at BMC boot 149 + */ 150 + static int npcm_usb_reset(struct platform_device *pdev, struct npcm_rc_data *rc) 151 + { 152 + u32 mdlr, iprst1, iprst2, iprst3; 153 + struct device *dev = &pdev->dev; 154 + struct regmap *gcr_regmap; 155 + u32 ipsrst1_bits = 0; 156 + u32 ipsrst2_bits = NPCM_IPSRST2_USB_HOST; 157 + u32 ipsrst3_bits = 0; 158 + const char *gcr_dt; 159 + 160 + gcr_dt = (const char *) 161 + of_match_device(dev->driver->of_match_table, dev)->data; 162 + 163 + gcr_regmap = syscon_regmap_lookup_by_compatible(gcr_dt); 164 + if (IS_ERR(gcr_regmap)) { 165 + dev_err(&pdev->dev, "Failed to find %s\n", gcr_dt); 166 + return PTR_ERR(gcr_regmap); 167 + } 168 + 169 + /* checking which USB device is enabled */ 170 + regmap_read(gcr_regmap, NPCM_MDLR_OFFSET, &mdlr); 171 + if (!(mdlr & NPCM_MDLR_USBD0)) 172 + ipsrst3_bits |= NPCM_IPSRST3_USBD0; 173 + if (!(mdlr & NPCM_MDLR_USBD1)) 174 + ipsrst1_bits |= NPCM_IPSRST1_USBD1; 175 + if (!(mdlr & NPCM_MDLR_USBD2_4)) 176 + ipsrst1_bits |= (NPCM_IPSRST1_USBD2 | 177 + NPCM_IPSRST1_USBD3 | 178 + NPCM_IPSRST1_USBD4); 179 + if (!(mdlr & NPCM_MDLR_USBD0)) { 180 + ipsrst1_bits |= (NPCM_IPSRST1_USBD5 | 181 + NPCM_IPSRST1_USBD6); 182 + ipsrst3_bits |= (NPCM_IPSRST3_USBD7 | 183 + NPCM_IPSRST3_USBD8 | 184 + NPCM_IPSRST3_USBD9); 185 + } 186 + 187 + /* assert reset USB PHY and USB devices */ 188 + iprst1 = readl(rc->base + NPCM_IPSRST1); 189 + iprst2 = readl(rc->base + NPCM_IPSRST2); 190 + iprst3 = readl(rc->base + NPCM_IPSRST3); 191 + 192 + iprst1 |= ipsrst1_bits; 193 + iprst2 |= ipsrst2_bits; 194 + iprst3 |= (ipsrst3_bits | NPCM_IPSRST3_USBPHY1 | 195 + NPCM_IPSRST3_USBPHY2); 196 + 197 + writel(iprst1, rc->base + NPCM_IPSRST1); 198 + writel(iprst2, rc->base + NPCM_IPSRST2); 199 + writel(iprst3, rc->base + NPCM_IPSRST3); 200 + 201 + /* clear USB PHY RS bit */ 202 + regmap_update_bits(gcr_regmap, NPCM_USB1PHYCTL_OFFSET, 203 + NPCM_USBXPHYCTL_RS, 0); 204 + regmap_update_bits(gcr_regmap, NPCM_USB2PHYCTL_OFFSET, 205 + NPCM_USBXPHYCTL_RS, 0); 206 + 207 + /* deassert reset USB PHY */ 208 + iprst3 &= ~(NPCM_IPSRST3_USBPHY1 | NPCM_IPSRST3_USBPHY2); 209 + writel(iprst3, rc->base + NPCM_IPSRST3); 210 + 211 + udelay(50); 212 + 213 + /* set USB PHY RS bit */ 214 + regmap_update_bits(gcr_regmap, NPCM_USB1PHYCTL_OFFSET, 215 + NPCM_USBXPHYCTL_RS, NPCM_USBXPHYCTL_RS); 216 + regmap_update_bits(gcr_regmap, NPCM_USB2PHYCTL_OFFSET, 217 + NPCM_USBXPHYCTL_RS, NPCM_USBXPHYCTL_RS); 218 + 219 + /* deassert reset USB devices*/ 220 + iprst1 &= ~ipsrst1_bits; 221 + iprst2 &= ~ipsrst2_bits; 222 + iprst3 &= ~ipsrst3_bits; 223 + 224 + 
writel(iprst1, rc->base + NPCM_IPSRST1); 225 + writel(iprst2, rc->base + NPCM_IPSRST2); 226 + writel(iprst3, rc->base + NPCM_IPSRST3); 227 + 228 + return 0; 229 + } 230 + 231 + static const struct reset_control_ops npcm_rc_ops = { 232 + .assert = npcm_rc_assert, 233 + .deassert = npcm_rc_deassert, 234 + .status = npcm_rc_status, 235 + }; 236 + 237 + static int npcm_rc_probe(struct platform_device *pdev) 238 + { 239 + struct npcm_rc_data *rc; 240 + int ret; 241 + 242 + rc = devm_kzalloc(&pdev->dev, sizeof(*rc), GFP_KERNEL); 243 + if (!rc) 244 + return -ENOMEM; 245 + 246 + rc->base = devm_platform_ioremap_resource(pdev, 0); 247 + if (IS_ERR(rc->base)) 248 + return PTR_ERR(rc->base); 249 + 250 + spin_lock_init(&rc->lock); 251 + 252 + rc->rcdev.owner = THIS_MODULE; 253 + rc->rcdev.ops = &npcm_rc_ops; 254 + rc->rcdev.of_node = pdev->dev.of_node; 255 + rc->rcdev.of_reset_n_cells = 2; 256 + rc->rcdev.of_xlate = npcm_reset_xlate; 257 + 258 + platform_set_drvdata(pdev, rc); 259 + 260 + ret = devm_reset_controller_register(&pdev->dev, &rc->rcdev); 261 + if (ret) { 262 + dev_err(&pdev->dev, "unable to register device\n"); 263 + return ret; 264 + } 265 + 266 + if (npcm_usb_reset(pdev, rc)) 267 + dev_warn(&pdev->dev, "NPCM USB reset failed, can cause issues with UDC and USB host\n"); 268 + 269 + if (!of_property_read_u32(pdev->dev.of_node, "nuvoton,sw-reset-number", 270 + &rc->sw_reset_number)) { 271 + if (rc->sw_reset_number && rc->sw_reset_number < 5) { 272 + rc->restart_nb.priority = 192, 273 + rc->restart_nb.notifier_call = npcm_rc_restart, 274 + ret = register_restart_handler(&rc->restart_nb); 275 + if (ret) 276 + dev_warn(&pdev->dev, "failed to register restart handler\n"); 277 + } 278 + } 279 + 280 + return ret; 281 + } 282 + 283 + static struct platform_driver npcm_rc_driver = { 284 + .probe = npcm_rc_probe, 285 + .driver = { 286 + .name = "npcm-reset", 287 + .of_match_table = npcm_rc_match, 288 + .suppress_bind_attrs = true, 289 + }, 290 + }; 291 + builtin_platform_driver(npcm_rc_driver);
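The NPCM encoding is simpler: npcm_reset_xlate() packs the register offset into the high bits and the bit index into the low five. Decoding one illustrative specifier by hand (0x24 and 26 correspond to NPCM_IPSRST2 and its USB-host bit in the defines above):

#include <linux/bits.h>
#include <linux/printk.h>

static void npcm_id_example(void)
{
        /* <&rstc 0x24 26>: the xlate returns (0x24 << 8) | 26 == 0x241a */
        unsigned long id = (NPCM_IPSRST2 << 8) | 26;

        unsigned int ctrl_offset = id >> 8;             /* 0x24 */
        u32 rst_bit = BIT(id & NPCM_MASK_RESETS);       /* BIT(26) */

        pr_info("offset %#x mask %#x\n", ctrl_offset, rst_bit);
}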
+2 -1
drivers/reset/reset-qcom-aoss.c
··· 118 { .compatible = "qcom,sdm845-aoss-cc", .data = &sdm845_aoss_desc }, 119 {} 120 }; 121 122 static struct platform_driver qcom_aoss_reset_driver = { 123 .probe = qcom_aoss_reset_probe, ··· 128 }, 129 }; 130 131 - builtin_platform_driver(qcom_aoss_reset_driver); 132 133 MODULE_DESCRIPTION("Qualcomm AOSS Reset Driver"); 134 MODULE_LICENSE("GPL v2");
··· 118 { .compatible = "qcom,sdm845-aoss-cc", .data = &sdm845_aoss_desc }, 119 {} 120 }; 121 + MODULE_DEVICE_TABLE(of, qcom_aoss_reset_of_match); 122 123 static struct platform_driver qcom_aoss_reset_driver = { 124 .probe = qcom_aoss_reset_probe, ··· 127 }, 128 }; 129 130 + module_platform_driver(qcom_aoss_reset_driver); 131 132 MODULE_DESCRIPTION("Qualcomm AOSS Reset Driver"); 133 MODULE_LICENSE("GPL v2");
+1 -1
drivers/reset/reset-scmi.c
··· 108 } 109 110 static const struct scmi_device_id scmi_id_table[] = { 111 - { SCMI_PROTOCOL_RESET }, 112 { }, 113 }; 114 MODULE_DEVICE_TABLE(scmi, scmi_id_table);
··· 108 } 109 110 static const struct scmi_device_id scmi_id_table[] = { 111 + { SCMI_PROTOCOL_RESET, "reset" }, 112 { }, 113 }; 114 MODULE_DEVICE_TABLE(scmi, scmi_id_table);
+8 -5
drivers/reset/reset-uniphier.c
··· 193 #define UNIPHIER_PERI_RESET_FI2C(id, ch) \ 194 UNIPHIER_RESETX((id), 0x114, 24 + (ch)) 195 196 - #define UNIPHIER_PERI_RESET_SCSSI(id) \ 197 - UNIPHIER_RESETX((id), 0x110, 17) 198 199 #define UNIPHIER_PERI_RESET_MCSSI(id) \ 200 UNIPHIER_RESETX((id), 0x114, 14) ··· 209 UNIPHIER_PERI_RESET_I2C(6, 2), 210 UNIPHIER_PERI_RESET_I2C(7, 3), 211 UNIPHIER_PERI_RESET_I2C(8, 4), 212 - UNIPHIER_PERI_RESET_SCSSI(11), 213 UNIPHIER_RESET_END, 214 }; 215 ··· 225 UNIPHIER_PERI_RESET_FI2C(8, 4), 226 UNIPHIER_PERI_RESET_FI2C(9, 5), 227 UNIPHIER_PERI_RESET_FI2C(10, 6), 228 - UNIPHIER_PERI_RESET_SCSSI(11), 229 - UNIPHIER_PERI_RESET_MCSSI(12), 230 UNIPHIER_RESET_END, 231 }; 232
··· 193 #define UNIPHIER_PERI_RESET_FI2C(id, ch) \ 194 UNIPHIER_RESETX((id), 0x114, 24 + (ch)) 195 196 + #define UNIPHIER_PERI_RESET_SCSSI(id, ch) \ 197 + UNIPHIER_RESETX((id), 0x110, 17 + (ch)) 198 199 #define UNIPHIER_PERI_RESET_MCSSI(id) \ 200 UNIPHIER_RESETX((id), 0x114, 14) ··· 209 UNIPHIER_PERI_RESET_I2C(6, 2), 210 UNIPHIER_PERI_RESET_I2C(7, 3), 211 UNIPHIER_PERI_RESET_I2C(8, 4), 212 + UNIPHIER_PERI_RESET_SCSSI(11, 0), 213 UNIPHIER_RESET_END, 214 }; 215 ··· 225 UNIPHIER_PERI_RESET_FI2C(8, 4), 226 UNIPHIER_PERI_RESET_FI2C(9, 5), 227 UNIPHIER_PERI_RESET_FI2C(10, 6), 228 + UNIPHIER_PERI_RESET_SCSSI(11, 0), 229 + UNIPHIER_PERI_RESET_SCSSI(12, 1), 230 + UNIPHIER_PERI_RESET_SCSSI(13, 2), 231 + UNIPHIER_PERI_RESET_SCSSI(14, 3), 232 + UNIPHIER_PERI_RESET_MCSSI(15), 233 UNIPHIER_RESET_END, 234 }; 235
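The SCSSI macro now takes a channel argument and offsets the reset bit by it, which is what lets PXs3 expose four SCSSI channels alongside MCSSI. Expanding two of the new entries by hand makes the effect visible:

/*
 * UNIPHIER_PERI_RESET_SCSSI(id, ch) is UNIPHIER_RESETX((id), 0x110, 17 + (ch)):
 *
 *   UNIPHIER_PERI_RESET_SCSSI(11, 0)  =>  UNIPHIER_RESETX(11, 0x110, 17)
 *   UNIPHIER_PERI_RESET_SCSSI(14, 3)  =>  UNIPHIER_RESETX(14, 0x110, 20)
 *
 * i.e. channel N controls bit 17 + N of register 0x110.
 */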
+22 -8
drivers/soc/bcm/brcmstb/biuctrl.c
··· 63 [CPU_WRITEBACK_CTRL_REG] = -1, 64 }; 65 66 - /* Odd cases, e.g: 7260 */ 67 static const int b53_cpubiuctrl_no_wb_regs[] = { 68 [CPU_CREDIT_REG] = 0x0b0, 69 [CPU_MCP_FLOW_REG] = 0x0b4, ··· 74 [CPU_CREDIT_REG] = 0x0b0, 75 [CPU_MCP_FLOW_REG] = 0x0b4, 76 [CPU_WRITEBACK_CTRL_REG] = 0x22c, 77 }; 78 79 #define NUM_CPU_BIUCTRL_REGS 3 ··· 107 return 0; 108 } 109 110 - static const u32 b53_mach_compat[] = { 111 0x7268, 112 0x7271, 113 0x7278, 114 }; 115 116 - static void __init mcp_b53_set(void) 117 { 118 unsigned int i; 119 u32 reg; 120 121 reg = brcmstb_get_family_id(); 122 123 - for (i = 0; i < ARRAY_SIZE(b53_mach_compat); i++) { 124 - if (BRCM_ID(reg) == b53_mach_compat[i]) 125 break; 126 } 127 128 - if (i == ARRAY_SIZE(b53_mach_compat)) 129 return; 130 131 /* Set all 3 MCP interfaces to 8 credits */ ··· 167 static int __init setup_hifcpubiuctrl_regs(struct device_node *np) 168 { 169 struct device_node *cpu_dn; 170 int ret = 0; 171 172 cpubiuctrl_base = of_iomap(np, 0); ··· 190 cpubiuctrl_regs = b15_cpubiuctrl_regs; 191 else if (of_device_is_compatible(cpu_dn, "brcm,brahma-b53")) 192 cpubiuctrl_regs = b53_cpubiuctrl_regs; 193 else { 194 pr_err("unsupported CPU\n"); 195 ret = -EINVAL; 196 } 197 of_node_put(cpu_dn); 198 199 - if (BRCM_ID(brcmstb_get_family_id()) == 0x7260) 200 cpubiuctrl_regs = b53_cpubiuctrl_no_wb_regs; 201 out: 202 of_node_put(np); ··· 262 return ret; 263 } 264 265 - mcp_b53_set(); 266 #ifdef CONFIG_PM_SLEEP 267 register_syscore_ops(&brcmstb_cpu_credit_syscore_ops); 268 #endif
··· 63 [CPU_WRITEBACK_CTRL_REG] = -1, 64 }; 65 66 + /* Odd cases, e.g: 7260A0 */ 67 static const int b53_cpubiuctrl_no_wb_regs[] = { 68 [CPU_CREDIT_REG] = 0x0b0, 69 [CPU_MCP_FLOW_REG] = 0x0b4, ··· 74 [CPU_CREDIT_REG] = 0x0b0, 75 [CPU_MCP_FLOW_REG] = 0x0b4, 76 [CPU_WRITEBACK_CTRL_REG] = 0x22c, 77 + }; 78 + 79 + static const int a72_cpubiuctrl_regs[] = { 80 + [CPU_CREDIT_REG] = 0x18, 81 + [CPU_MCP_FLOW_REG] = 0x1c, 82 + [CPU_WRITEBACK_CTRL_REG] = 0x20, 83 }; 84 85 #define NUM_CPU_BIUCTRL_REGS 3 ··· 101 return 0; 102 } 103 104 + static const u32 a72_b53_mach_compat[] = { 105 + 0x7211, 106 + 0x7216, 107 + 0x7255, 108 + 0x7260, 109 0x7268, 110 0x7271, 111 0x7278, 112 }; 113 114 + static void __init mcp_a72_b53_set(void) 115 { 116 unsigned int i; 117 u32 reg; 118 119 reg = brcmstb_get_family_id(); 120 121 + for (i = 0; i < ARRAY_SIZE(a72_b53_mach_compat); i++) { 122 + if (BRCM_ID(reg) == a72_b53_mach_compat[i]) 123 break; 124 } 125 126 + if (i == ARRAY_SIZE(a72_b53_mach_compat)) 127 return; 128 129 /* Set all 3 MCP interfaces to 8 credits */ ··· 157 static int __init setup_hifcpubiuctrl_regs(struct device_node *np) 158 { 159 struct device_node *cpu_dn; 160 + u32 family_id; 161 int ret = 0; 162 163 cpubiuctrl_base = of_iomap(np, 0); ··· 179 cpubiuctrl_regs = b15_cpubiuctrl_regs; 180 else if (of_device_is_compatible(cpu_dn, "brcm,brahma-b53")) 181 cpubiuctrl_regs = b53_cpubiuctrl_regs; 182 + else if (of_device_is_compatible(cpu_dn, "arm,cortex-a72")) 183 + cpubiuctrl_regs = a72_cpubiuctrl_regs; 184 else { 185 pr_err("unsupported CPU\n"); 186 ret = -EINVAL; 187 } 188 of_node_put(cpu_dn); 189 190 + family_id = brcmstb_get_family_id(); 191 + if (BRCM_ID(family_id) == 0x7260 && BRCM_REV(family_id) == 0) 192 cpubiuctrl_regs = b53_cpubiuctrl_no_wb_regs; 193 out: 194 of_node_put(np); ··· 248 return ret; 249 } 250 251 + mcp_a72_b53_set(); 252 #ifdef CONFIG_PM_SLEEP 253 register_syscore_ops(&brcmstb_cpu_credit_syscore_ops); 254 #endif
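The 7260 quirk is now limited to the A0 revision by checking both halves of the family id. A short illustration, assuming the usual brcmstb encoding in which BRCM_ID() yields the chip number and BRCM_REV() the low revision byte, with 0x00 meaning A0 (the macros themselves are defined outside this hunk):

/*
 * Before: BRCM_ID(brcmstb_get_family_id()) == 0x7260    -> every 7260 revision
 * After:  BRCM_ID(id) == 0x7260 && BRCM_REV(id) == 0    -> 7260 A0 only
 *
 * matching the "7260A0" wording in the comment on b53_cpubiuctrl_no_wb_regs.
 */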
+2 -1
drivers/soc/fsl/qe/Kconfig
··· 5 6 config QUICC_ENGINE 7 bool "QUICC Engine (QE) framework support" 8 - depends on FSL_SOC && PPC32 9 select GENERIC_ALLOCATOR 10 select CRC32 11 help
··· 5 6 config QUICC_ENGINE 7 bool "QUICC Engine (QE) framework support" 8 + depends on OF && HAS_IOMEM 9 + depends on PPC || ARM || ARM64 || COMPILE_TEST 10 select GENERIC_ALLOCATOR 11 select CRC32 12 help
+19 -17
drivers/soc/fsl/qe/gpio.c
··· 41 container_of(mm_gc, struct qe_gpio_chip, mm_gc); 42 struct qe_pio_regs __iomem *regs = mm_gc->regs; 43 44 - qe_gc->cpdata = in_be32(&regs->cpdata); 45 qe_gc->saved_regs.cpdata = qe_gc->cpdata; 46 - qe_gc->saved_regs.cpdir1 = in_be32(&regs->cpdir1); 47 - qe_gc->saved_regs.cpdir2 = in_be32(&regs->cpdir2); 48 - qe_gc->saved_regs.cppar1 = in_be32(&regs->cppar1); 49 - qe_gc->saved_regs.cppar2 = in_be32(&regs->cppar2); 50 - qe_gc->saved_regs.cpodr = in_be32(&regs->cpodr); 51 } 52 53 static int qe_gpio_get(struct gpio_chip *gc, unsigned int gpio) ··· 56 struct qe_pio_regs __iomem *regs = mm_gc->regs; 57 u32 pin_mask = 1 << (QE_PIO_PINS - 1 - gpio); 58 59 - return !!(in_be32(&regs->cpdata) & pin_mask); 60 } 61 62 static void qe_gpio_set(struct gpio_chip *gc, unsigned int gpio, int val) ··· 74 else 75 qe_gc->cpdata &= ~pin_mask; 76 77 - out_be32(&regs->cpdata, qe_gc->cpdata); 78 79 spin_unlock_irqrestore(&qe_gc->lock, flags); 80 } ··· 101 } 102 } 103 104 - out_be32(&regs->cpdata, qe_gc->cpdata); 105 106 spin_unlock_irqrestore(&qe_gc->lock, flags); 107 } ··· 160 { 161 struct qe_pin *qe_pin; 162 struct gpio_chip *gc; 163 - struct of_mm_gpio_chip *mm_gc; 164 struct qe_gpio_chip *qe_gc; 165 int err; 166 unsigned long flags; ··· 185 goto err0; 186 } 187 188 - mm_gc = to_of_mm_gpio_chip(gc); 189 qe_gc = gpiochip_get_data(gc); 190 191 spin_lock_irqsave(&qe_gc->lock, flags); ··· 253 spin_lock_irqsave(&qe_gc->lock, flags); 254 255 if (second_reg) { 256 - clrsetbits_be32(&regs->cpdir2, mask2, sregs->cpdir2 & mask2); 257 - clrsetbits_be32(&regs->cppar2, mask2, sregs->cppar2 & mask2); 258 } else { 259 - clrsetbits_be32(&regs->cpdir1, mask2, sregs->cpdir1 & mask2); 260 - clrsetbits_be32(&regs->cppar1, mask2, sregs->cppar1 & mask2); 261 } 262 263 if (sregs->cpdata & mask1) ··· 269 else 270 qe_gc->cpdata &= ~mask1; 271 272 - out_be32(&regs->cpdata, qe_gc->cpdata); 273 - clrsetbits_be32(&regs->cpodr, mask1, sregs->cpodr & mask1); 274 275 spin_unlock_irqrestore(&qe_gc->lock, flags); 276 }
··· 41 container_of(mm_gc, struct qe_gpio_chip, mm_gc); 42 struct qe_pio_regs __iomem *regs = mm_gc->regs; 43 44 + qe_gc->cpdata = qe_ioread32be(&regs->cpdata); 45 qe_gc->saved_regs.cpdata = qe_gc->cpdata; 46 + qe_gc->saved_regs.cpdir1 = qe_ioread32be(&regs->cpdir1); 47 + qe_gc->saved_regs.cpdir2 = qe_ioread32be(&regs->cpdir2); 48 + qe_gc->saved_regs.cppar1 = qe_ioread32be(&regs->cppar1); 49 + qe_gc->saved_regs.cppar2 = qe_ioread32be(&regs->cppar2); 50 + qe_gc->saved_regs.cpodr = qe_ioread32be(&regs->cpodr); 51 } 52 53 static int qe_gpio_get(struct gpio_chip *gc, unsigned int gpio) ··· 56 struct qe_pio_regs __iomem *regs = mm_gc->regs; 57 u32 pin_mask = 1 << (QE_PIO_PINS - 1 - gpio); 58 59 + return !!(qe_ioread32be(&regs->cpdata) & pin_mask); 60 } 61 62 static void qe_gpio_set(struct gpio_chip *gc, unsigned int gpio, int val) ··· 74 else 75 qe_gc->cpdata &= ~pin_mask; 76 77 + qe_iowrite32be(qe_gc->cpdata, &regs->cpdata); 78 79 spin_unlock_irqrestore(&qe_gc->lock, flags); 80 } ··· 101 } 102 } 103 104 + qe_iowrite32be(qe_gc->cpdata, &regs->cpdata); 105 106 spin_unlock_irqrestore(&qe_gc->lock, flags); 107 } ··· 160 { 161 struct qe_pin *qe_pin; 162 struct gpio_chip *gc; 163 struct qe_gpio_chip *qe_gc; 164 int err; 165 unsigned long flags; ··· 186 goto err0; 187 } 188 189 qe_gc = gpiochip_get_data(gc); 190 191 spin_lock_irqsave(&qe_gc->lock, flags); ··· 255 spin_lock_irqsave(&qe_gc->lock, flags); 256 257 if (second_reg) { 258 + qe_clrsetbits_be32(&regs->cpdir2, mask2, 259 + sregs->cpdir2 & mask2); 260 + qe_clrsetbits_be32(&regs->cppar2, mask2, 261 + sregs->cppar2 & mask2); 262 } else { 263 + qe_clrsetbits_be32(&regs->cpdir1, mask2, 264 + sregs->cpdir1 & mask2); 265 + qe_clrsetbits_be32(&regs->cppar1, mask2, 266 + sregs->cppar1 & mask2); 267 } 268 269 if (sregs->cpdata & mask1) ··· 267 else 268 qe_gc->cpdata &= ~mask1; 269 270 + qe_iowrite32be(qe_gc->cpdata, &regs->cpdata); 271 + qe_clrsetbits_be32(&regs->cpodr, mask1, sregs->cpodr & mask1); 272 273 spin_unlock_irqrestore(&qe_gc->lock, flags); 274 }
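The conversion replaces the PowerPC-only in_be32()/out_be32()/clrsetbits_be32() accessors with qe_-prefixed wrappers so the same code builds on ARM and ARM64. Their definitions are not part of this hunk; a plausible minimal shape, shown as an illustration only:

#include <linux/io.h>

/* illustrative sketch; the real wrappers live in the QE headers */
static inline u32 qe_ioread32be(const void __iomem *addr)
{
        return ioread32be(addr);
}

static inline void qe_iowrite32be(u32 val, void __iomem *addr)
{
        iowrite32be(val, addr);
}

static inline void qe_clrsetbits_be32(void __iomem *addr, u32 clr, u32 set)
{
        qe_iowrite32be((qe_ioread32be(addr) & ~clr) | set, addr);
}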
+45 -59
drivers/soc/fsl/qe/qe.c
··· 22 #include <linux/module.h> 23 #include <linux/delay.h> 24 #include <linux/ioport.h> 25 #include <linux/crc32.h> 26 #include <linux/mod_devicetable.h> 27 #include <linux/of_platform.h> 28 - #include <asm/irq.h> 29 - #include <asm/page.h> 30 - #include <asm/pgtable.h> 31 #include <soc/fsl/qe/immap_qe.h> 32 #include <soc/fsl/qe/qe.h> 33 - #include <asm/prom.h> 34 - #include <asm/rheap.h> 35 36 static void qe_snums_init(void); 37 static int qe_sdma_init(void); ··· 104 { 105 unsigned long flags; 106 u8 mcn_shift = 0, dev_shift = 0; 107 - u32 ret; 108 109 spin_lock_irqsave(&qe_lock, flags); 110 if (cmd == QE_RESET) { 111 - out_be32(&qe_immr->cp.cecr, (u32) (cmd | QE_CR_FLG)); 112 } else { 113 if (cmd == QE_ASSIGN_PAGE) { 114 /* Here device is the SNUM, not sub-block */ ··· 126 mcn_shift = QE_CR_MCN_NORMAL_SHIFT; 127 } 128 129 - out_be32(&qe_immr->cp.cecdr, cmd_input); 130 - out_be32(&qe_immr->cp.cecr, 131 - (cmd | QE_CR_FLG | ((u32) device << dev_shift) | (u32) 132 - mcn_protocol << mcn_shift)); 133 } 134 135 /* wait for the QE_CR_FLG to clear */ 136 - ret = spin_event_timeout((in_be32(&qe_immr->cp.cecr) & QE_CR_FLG) == 0, 137 - 100, 0); 138 - /* On timeout (e.g. failure), the expression will be false (ret == 0), 139 - otherwise it will be true (ret == 1). */ 140 spin_unlock_irqrestore(&qe_lock, flags); 141 142 - return ret == 1; 143 } 144 EXPORT_SYMBOL(qe_issue_cmd); 145 ··· 159 unsigned int qe_get_brg_clk(void) 160 { 161 struct device_node *qe; 162 - int size; 163 - const u32 *prop; 164 unsigned int mod; 165 166 if (brg_clk) ··· 169 if (!qe) 170 return brg_clk; 171 172 - prop = of_get_property(qe, "brg-frequency", &size); 173 - if (prop && size == sizeof(*prop)) 174 - brg_clk = *prop; 175 176 of_node_put(qe); 177 ··· 189 190 #define PVR_VER_836x 0x8083 191 #define PVR_VER_832x 0x8084 192 193 /* Program the BRG to the given sampling rate and multiplier 194 * ··· 224 /* Errata QE_General4, which affects some MPC832x and MPC836x SOCs, says 225 that the BRG divisor must be even if you're not using divide-by-16 226 mode. */ 227 - if (pvr_version_is(PVR_VER_836x) || pvr_version_is(PVR_VER_832x)) 228 if (!div16 && (divisor & 1) && (divisor > 3)) 229 divisor++; 230 231 tempval = ((divisor - 1) << QE_BRGC_DIVISOR_SHIFT) | 232 QE_BRGC_ENABLE | div16; 233 234 - out_be32(&qe_immr->brg.brgc[brg - QE_BRG1], tempval); 235 236 return 0; 237 } ··· 365 static int qe_sdma_init(void) 366 { 367 struct sdma __iomem *sdma = &qe_immr->sdma; 368 - static unsigned long sdma_buf_offset = (unsigned long)-ENOMEM; 369 - 370 - if (!sdma) 371 - return -ENODEV; 372 373 /* allocate 2 internal temporary buffers (512 bytes size each) for 374 * the SDMA */ 375 - if (IS_ERR_VALUE(sdma_buf_offset)) { 376 sdma_buf_offset = qe_muram_alloc(512 * 2, 4096); 377 - if (IS_ERR_VALUE(sdma_buf_offset)) 378 return -ENOMEM; 379 } 380 381 - out_be32(&sdma->sdebcr, (u32) sdma_buf_offset & QE_SDEBCR_BA_MASK); 382 - out_be32(&sdma->sdmr, (QE_SDMR_GLB_1_MSK | 383 - (0x1 << QE_SDMR_CEN_SHIFT))); 384 385 return 0; 386 } ··· 416 "uploading microcode '%s'\n", ucode->id); 417 418 /* Use auto-increment */ 419 - out_be32(&qe_immr->iram.iadd, be32_to_cpu(ucode->iram_offset) | 420 - QE_IRAM_IADD_AIE | QE_IRAM_IADD_BADDR); 421 422 for (i = 0; i < be32_to_cpu(ucode->count); i++) 423 - out_be32(&qe_immr->iram.idata, be32_to_cpu(code[i])); 424 425 /* Set I-RAM Ready Register */ 426 - out_be32(&qe_immr->iram.iready, be32_to_cpu(QE_IRAM_READY)); 427 } 428 429 /* ··· 508 * If the microcode calls for it, split the I-RAM. 
509 */ 510 if (!firmware->split) 511 - setbits16(&qe_immr->cp.cercr, QE_CP_CERCR_CIR); 512 513 if (firmware->soc.model) 514 printk(KERN_INFO ··· 542 u32 trap = be32_to_cpu(ucode->traps[j]); 543 544 if (trap) 545 - out_be32(&qe_immr->rsp[i].tibcr[j], trap); 546 } 547 548 /* Enable traps */ 549 - out_be32(&qe_immr->rsp[i].eccr, be32_to_cpu(ucode->eccr)); 550 } 551 552 qe_firmware_uploaded = 1; ··· 566 struct qe_firmware_info *qe_get_firmware_info(void) 567 { 568 static int initialized; 569 - struct property *prop; 570 struct device_node *qe; 571 struct device_node *fw = NULL; 572 const char *sprop; 573 - unsigned int i; 574 575 /* 576 * If we haven't checked yet, and a driver hasn't uploaded a firmware ··· 602 strlcpy(qe_firmware_info.id, sprop, 603 sizeof(qe_firmware_info.id)); 604 605 - prop = of_find_property(fw, "extended-modes", NULL); 606 - if (prop && (prop->length == sizeof(u64))) { 607 - const u64 *iprop = prop->value; 608 609 - qe_firmware_info.extended_modes = *iprop; 610 - } 611 - 612 - prop = of_find_property(fw, "virtual-traps", NULL); 613 - if (prop && (prop->length == 32)) { 614 - const u32 *iprop = prop->value; 615 - 616 - for (i = 0; i < ARRAY_SIZE(qe_firmware_info.vtraps); i++) 617 - qe_firmware_info.vtraps[i] = iprop[i]; 618 - } 619 620 of_node_put(fw); 621 ··· 617 unsigned int qe_get_num_of_risc(void) 618 { 619 struct device_node *qe; 620 - int size; 621 unsigned int num_of_risc = 0; 622 - const u32 *prop; 623 624 qe = qe_get_device_node(); 625 if (!qe) 626 return num_of_risc; 627 628 - prop = of_get_property(qe, "fsl,qe-num-riscs", &size); 629 - if (prop && size == sizeof(*prop)) 630 - num_of_risc = *prop; 631 632 of_node_put(qe); 633
··· 22 #include <linux/module.h> 23 #include <linux/delay.h> 24 #include <linux/ioport.h> 25 + #include <linux/iopoll.h> 26 #include <linux/crc32.h> 27 #include <linux/mod_devicetable.h> 28 #include <linux/of_platform.h> 29 #include <soc/fsl/qe/immap_qe.h> 30 #include <soc/fsl/qe/qe.h> 31 32 static void qe_snums_init(void); 33 static int qe_sdma_init(void); ··· 108 { 109 unsigned long flags; 110 u8 mcn_shift = 0, dev_shift = 0; 111 + u32 val; 112 + int ret; 113 114 spin_lock_irqsave(&qe_lock, flags); 115 if (cmd == QE_RESET) { 116 + qe_iowrite32be((u32)(cmd | QE_CR_FLG), &qe_immr->cp.cecr); 117 } else { 118 if (cmd == QE_ASSIGN_PAGE) { 119 /* Here device is the SNUM, not sub-block */ ··· 129 mcn_shift = QE_CR_MCN_NORMAL_SHIFT; 130 } 131 132 + qe_iowrite32be(cmd_input, &qe_immr->cp.cecdr); 133 + qe_iowrite32be((cmd | QE_CR_FLG | ((u32)device << dev_shift) | (u32)mcn_protocol << mcn_shift), 134 + &qe_immr->cp.cecr); 135 } 136 137 /* wait for the QE_CR_FLG to clear */ 138 + ret = readx_poll_timeout_atomic(qe_ioread32be, &qe_immr->cp.cecr, val, 139 + (val & QE_CR_FLG) == 0, 0, 100); 140 + /* On timeout, ret is -ETIMEDOUT, otherwise it will be 0. */ 141 spin_unlock_irqrestore(&qe_lock, flags); 142 143 + return ret == 0; 144 } 145 EXPORT_SYMBOL(qe_issue_cmd); 146 ··· 164 unsigned int qe_get_brg_clk(void) 165 { 166 struct device_node *qe; 167 + u32 brg; 168 unsigned int mod; 169 170 if (brg_clk) ··· 175 if (!qe) 176 return brg_clk; 177 178 + if (!of_property_read_u32(qe, "brg-frequency", &brg)) 179 + brg_clk = brg; 180 181 of_node_put(qe); 182 ··· 196 197 #define PVR_VER_836x 0x8083 198 #define PVR_VER_832x 0x8084 199 + 200 + static bool qe_general4_errata(void) 201 + { 202 + #ifdef CONFIG_PPC32 203 + return pvr_version_is(PVR_VER_836x) || pvr_version_is(PVR_VER_832x); 204 + #endif 205 + return false; 206 + } 207 208 /* Program the BRG to the given sampling rate and multiplier 209 * ··· 223 /* Errata QE_General4, which affects some MPC832x and MPC836x SOCs, says 224 that the BRG divisor must be even if you're not using divide-by-16 225 mode. */ 226 + if (qe_general4_errata()) 227 if (!div16 && (divisor & 1) && (divisor > 3)) 228 divisor++; 229 230 tempval = ((divisor - 1) << QE_BRGC_DIVISOR_SHIFT) | 231 QE_BRGC_ENABLE | div16; 232 233 + qe_iowrite32be(tempval, &qe_immr->brg.brgc[brg - QE_BRG1]); 234 235 return 0; 236 } ··· 364 static int qe_sdma_init(void) 365 { 366 struct sdma __iomem *sdma = &qe_immr->sdma; 367 + static s32 sdma_buf_offset = -ENOMEM; 368 369 /* allocate 2 internal temporary buffers (512 bytes size each) for 370 * the SDMA */ 371 + if (sdma_buf_offset < 0) { 372 sdma_buf_offset = qe_muram_alloc(512 * 2, 4096); 373 + if (sdma_buf_offset < 0) 374 return -ENOMEM; 375 } 376 377 + qe_iowrite32be((u32)sdma_buf_offset & QE_SDEBCR_BA_MASK, 378 + &sdma->sdebcr); 379 + qe_iowrite32be((QE_SDMR_GLB_1_MSK | (0x1 << QE_SDMR_CEN_SHIFT)), 380 + &sdma->sdmr); 381 382 return 0; 383 } ··· 417 "uploading microcode '%s'\n", ucode->id); 418 419 /* Use auto-increment */ 420 + qe_iowrite32be(be32_to_cpu(ucode->iram_offset) | QE_IRAM_IADD_AIE | QE_IRAM_IADD_BADDR, 421 + &qe_immr->iram.iadd); 422 423 for (i = 0; i < be32_to_cpu(ucode->count); i++) 424 + qe_iowrite32be(be32_to_cpu(code[i]), &qe_immr->iram.idata); 425 426 /* Set I-RAM Ready Register */ 427 + qe_iowrite32be(be32_to_cpu(QE_IRAM_READY), &qe_immr->iram.iready); 428 } 429 430 /* ··· 509 * If the microcode calls for it, split the I-RAM. 
510 */ 511 if (!firmware->split) 512 + qe_setbits_be16(&qe_immr->cp.cercr, QE_CP_CERCR_CIR); 513 514 if (firmware->soc.model) 515 printk(KERN_INFO ··· 543 u32 trap = be32_to_cpu(ucode->traps[j]); 544 545 if (trap) 546 + qe_iowrite32be(trap, 547 + &qe_immr->rsp[i].tibcr[j]); 548 } 549 550 /* Enable traps */ 551 + qe_iowrite32be(be32_to_cpu(ucode->eccr), 552 + &qe_immr->rsp[i].eccr); 553 } 554 555 qe_firmware_uploaded = 1; ··· 565 struct qe_firmware_info *qe_get_firmware_info(void) 566 { 567 static int initialized; 568 struct device_node *qe; 569 struct device_node *fw = NULL; 570 const char *sprop; 571 572 /* 573 * If we haven't checked yet, and a driver hasn't uploaded a firmware ··· 603 strlcpy(qe_firmware_info.id, sprop, 604 sizeof(qe_firmware_info.id)); 605 606 + of_property_read_u64(fw, "extended-modes", 607 + &qe_firmware_info.extended_modes); 608 609 + of_property_read_u32_array(fw, "virtual-traps", qe_firmware_info.vtraps, 610 + ARRAY_SIZE(qe_firmware_info.vtraps)); 611 612 of_node_put(fw); 613 ··· 627 unsigned int qe_get_num_of_risc(void) 628 { 629 struct device_node *qe; 630 unsigned int num_of_risc = 0; 631 632 qe = qe_get_device_node(); 633 if (!qe) 634 return num_of_risc; 635 636 + of_property_read_u32(qe, "fsl,qe-num-riscs", &num_of_risc); 637 638 of_node_put(qe); 639
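The qe.c hunk replaces the PowerPC-only spin_event_timeout() busy-wait with readx_poll_timeout_atomic() from <linux/iopoll.h>, which is available on all architectures. A self-contained sketch of that polling pattern (the register and busy flag are hypothetical):

#include <linux/io.h>
#include <linux/iopoll.h>

#define EXAMPLE_BUSY_FLAG 0x00010000    /* hypothetical busy bit */

/* Poll a big-endian register until the busy flag clears, with no delay
 * between reads (atomic variant) and a 100 us timeout. Returns 0 on
 * success or -ETIMEDOUT.
 */
static int example_wait_not_busy(u32 __iomem *reg)
{
        u32 val;

        return readx_poll_timeout_atomic(ioread32be, reg, val,
                                         (val & EXAMPLE_BUSY_FLAG) == 0,
                                         0, 100);
}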
+26 -26
drivers/soc/fsl/qe/qe_common.c
··· 32 33 struct muram_block { 34 struct list_head head; 35 - unsigned long start; 36 int size; 37 }; 38 ··· 110 * @algo: algorithm for alloc. 111 * @data: data for genalloc's algorithm. 112 * 113 - * This function returns an offset into the muram area. 114 */ 115 - static unsigned long cpm_muram_alloc_common(unsigned long size, 116 - genpool_algo_t algo, void *data) 117 { 118 struct muram_block *entry; 119 - unsigned long start; 120 121 - if (!muram_pool && cpm_muram_init()) 122 - goto out2; 123 - 124 - start = gen_pool_alloc_algo(muram_pool, size, algo, data); 125 - if (!start) 126 - goto out2; 127 - start = start - GENPOOL_OFFSET; 128 - memset_io(cpm_muram_addr(start), 0, size); 129 entry = kmalloc(sizeof(*entry), GFP_ATOMIC); 130 if (!entry) 131 - goto out1; 132 entry->start = start; 133 entry->size = size; 134 list_add(&entry->head, &muram_block_list); 135 136 return start; 137 - out1: 138 - gen_pool_free(muram_pool, start, size); 139 - out2: 140 - return (unsigned long)-ENOMEM; 141 } 142 143 /* ··· 141 * @size: number of bytes to allocate 142 * @align: requested alignment, in bytes 143 * 144 - * This function returns an offset into the muram area. 145 * Use cpm_dpram_addr() to get the virtual address of the area. 146 * Use cpm_muram_free() to free the allocation. 147 */ 148 - unsigned long cpm_muram_alloc(unsigned long size, unsigned long align) 149 { 150 - unsigned long start; 151 unsigned long flags; 152 struct genpool_data_align muram_pool_data; 153 ··· 165 * cpm_muram_free - free a chunk of multi-user ram 166 * @offset: The beginning of the chunk as returned by cpm_muram_alloc(). 167 */ 168 - int cpm_muram_free(unsigned long offset) 169 { 170 unsigned long flags; 171 int size; 172 struct muram_block *tmp; 173 174 size = 0; 175 spin_lock_irqsave(&cpm_muram_lock, flags); ··· 186 } 187 gen_pool_free(muram_pool, offset + GENPOOL_OFFSET, size); 188 spin_unlock_irqrestore(&cpm_muram_lock, flags); 189 - return size; 190 } 191 EXPORT_SYMBOL(cpm_muram_free); 192 ··· 193 * cpm_muram_alloc_fixed - reserve a specific region of multi-user ram 194 * @offset: offset of allocation start address 195 * @size: number of bytes to allocate 196 - * This function returns an offset into the muram area 197 * Use cpm_dpram_addr() to get the virtual address of the area. 198 * Use cpm_muram_free() to free the allocation. 199 */ 200 - unsigned long cpm_muram_alloc_fixed(unsigned long offset, unsigned long size) 201 { 202 - unsigned long start; 203 unsigned long flags; 204 struct genpool_data_fixed muram_pool_data_fixed; 205
··· 32 33 struct muram_block { 34 struct list_head head; 35 + s32 start; 36 int size; 37 }; 38 ··· 110 * @algo: algorithm for alloc. 111 * @data: data for genalloc's algorithm. 112 * 113 + * This function returns a non-negative offset into the muram area, or 114 + * a negative errno on failure. 115 */ 116 + static s32 cpm_muram_alloc_common(unsigned long size, 117 + genpool_algo_t algo, void *data) 118 { 119 struct muram_block *entry; 120 + s32 start; 121 122 entry = kmalloc(sizeof(*entry), GFP_ATOMIC); 123 if (!entry) 124 + return -ENOMEM; 125 + start = gen_pool_alloc_algo(muram_pool, size, algo, data); 126 + if (!start) { 127 + kfree(entry); 128 + return -ENOMEM; 129 + } 130 + start = start - GENPOOL_OFFSET; 131 + memset_io(cpm_muram_addr(start), 0, size); 132 entry->start = start; 133 entry->size = size; 134 list_add(&entry->head, &muram_block_list); 135 136 return start; 137 } 138 139 /* ··· 145 * @size: number of bytes to allocate 146 * @align: requested alignment, in bytes 147 * 148 + * This function returns a non-negative offset into the muram area, or 149 + * a negative errno on failure. 150 * Use cpm_dpram_addr() to get the virtual address of the area. 151 * Use cpm_muram_free() to free the allocation. 152 */ 153 + s32 cpm_muram_alloc(unsigned long size, unsigned long align) 154 { 155 + s32 start; 156 unsigned long flags; 157 struct genpool_data_align muram_pool_data; 158 ··· 168 * cpm_muram_free - free a chunk of multi-user ram 169 * @offset: The beginning of the chunk as returned by cpm_muram_alloc(). 170 */ 171 + void cpm_muram_free(s32 offset) 172 { 173 unsigned long flags; 174 int size; 175 struct muram_block *tmp; 176 + 177 + if (offset < 0) 178 + return; 179 180 size = 0; 181 spin_lock_irqsave(&cpm_muram_lock, flags); ··· 186 } 187 gen_pool_free(muram_pool, offset + GENPOOL_OFFSET, size); 188 spin_unlock_irqrestore(&cpm_muram_lock, flags); 189 } 190 EXPORT_SYMBOL(cpm_muram_free); 191 ··· 194 * cpm_muram_alloc_fixed - reserve a specific region of multi-user ram 195 * @offset: offset of allocation start address 196 * @size: number of bytes to allocate 197 + * This function returns @offset if the area was available, a negative 198 + * errno otherwise. 199 * Use cpm_dpram_addr() to get the virtual address of the area. 200 * Use cpm_muram_free() to free the allocation. 201 */ 202 + s32 cpm_muram_alloc_fixed(unsigned long offset, unsigned long size) 203 { 204 + s32 start; 205 unsigned long flags; 206 struct genpool_data_fixed muram_pool_data_fixed; 207
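With the muram allocator now returning s32, error signalling changes from IS_ERR_VALUE() on an unsigned long to a plain sign check, and freeing tolerates negative offsets. A sketch of the resulting calling convention (the surrounding function is hypothetical):

#include <soc/fsl/qe/qe.h>

static int example_use_muram(void)
{
        s32 off = qe_muram_alloc(512, 64);      /* size, alignment */

        if (off < 0)
                return off;                     /* negative errno */

        /* ... access the area via qe_muram_addr(off) ... */

        qe_muram_free(off);     /* also safe if off had stayed negative */
        return 0;
}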
+126 -159
drivers/soc/fsl/qe/qe_ic.c
··· 15 #include <linux/kernel.h> 16 #include <linux/init.h> 17 #include <linux/errno.h> 18 #include <linux/reboot.h> 19 #include <linux/slab.h> 20 #include <linux/stddef.h> ··· 25 #include <linux/spinlock.h> 26 #include <asm/irq.h> 27 #include <asm/io.h> 28 - #include <soc/fsl/qe/qe_ic.h> 29 30 - #include "qe_ic.h" 31 32 static DEFINE_RAW_SPINLOCK(qe_ic_lock); 33 ··· 220 }, 221 }; 222 223 - static inline u32 qe_ic_read(volatile __be32 __iomem * base, unsigned int reg) 224 { 225 - return in_be32(base + (reg >> 2)); 226 } 227 228 - static inline void qe_ic_write(volatile __be32 __iomem * base, unsigned int reg, 229 u32 value) 230 { 231 - out_be32(base + (reg >> 2), value); 232 } 233 234 static inline struct qe_ic *qe_ic_from_irq(unsigned int virq) ··· 330 .xlate = irq_domain_xlate_onetwocell, 331 }; 332 333 - /* Return an interrupt vector or NO_IRQ if no interrupt is pending. */ 334 - unsigned int qe_ic_get_low_irq(struct qe_ic *qe_ic) 335 { 336 int irq; 337 ··· 341 irq = qe_ic_read(qe_ic->regs, QEIC_CIVEC) >> 26; 342 343 if (irq == 0) 344 - return NO_IRQ; 345 346 return irq_linear_revmap(qe_ic->irqhost, irq); 347 } 348 349 - /* Return an interrupt vector or NO_IRQ if no interrupt is pending. */ 350 - unsigned int qe_ic_get_high_irq(struct qe_ic *qe_ic) 351 { 352 int irq; 353 ··· 357 irq = qe_ic_read(qe_ic->regs, QEIC_CHIVEC) >> 26; 358 359 if (irq == 0) 360 - return NO_IRQ; 361 362 return irq_linear_revmap(qe_ic->irqhost, irq); 363 } 364 365 - void __init qe_ic_init(struct device_node *node, unsigned int flags, 366 - void (*low_handler)(struct irq_desc *desc), 367 - void (*high_handler)(struct irq_desc *desc)) 368 { 369 struct qe_ic *qe_ic; 370 struct resource res; 371 - u32 temp = 0, ret, high_active = 0; 372 373 ret = of_address_to_resource(node, 0, &res); 374 if (ret) ··· 434 qe_ic->virq_high = irq_of_parse_and_map(node, 0); 435 qe_ic->virq_low = irq_of_parse_and_map(node, 1); 436 437 - if (qe_ic->virq_low == NO_IRQ) { 438 printk(KERN_ERR "Failed to map QE_IC low IRQ\n"); 439 kfree(qe_ic); 440 return; 441 } 442 - 443 - /* default priority scheme is grouped. If spread mode is */ 444 - /* required, configure cicr accordingly. */ 445 - if (flags & QE_IC_SPREADMODE_GRP_W) 446 - temp |= CICR_GWCC; 447 - if (flags & QE_IC_SPREADMODE_GRP_X) 448 - temp |= CICR_GXCC; 449 - if (flags & QE_IC_SPREADMODE_GRP_Y) 450 - temp |= CICR_GYCC; 451 - if (flags & QE_IC_SPREADMODE_GRP_Z) 452 - temp |= CICR_GZCC; 453 - if (flags & QE_IC_SPREADMODE_GRP_RISCA) 454 - temp |= CICR_GRTA; 455 - if (flags & QE_IC_SPREADMODE_GRP_RISCB) 456 - temp |= CICR_GRTB; 457 - 458 - /* choose destination signal for highest priority interrupt */ 459 - if (flags & QE_IC_HIGH_SIGNAL) { 460 - temp |= (SIGNAL_HIGH << CICR_HPIT_SHIFT); 461 - high_active = 1; 462 } 463 464 - qe_ic_write(qe_ic->regs, QEIC_CICR, temp); 465 466 irq_set_handler_data(qe_ic->virq_low, qe_ic); 467 irq_set_chained_handler(qe_ic->virq_low, low_handler); 468 469 - if (qe_ic->virq_high != NO_IRQ && 470 - qe_ic->virq_high != qe_ic->virq_low) { 471 irq_set_handler_data(qe_ic->virq_high, qe_ic); 472 irq_set_chained_handler(qe_ic->virq_high, high_handler); 473 } 474 } 475 476 - void qe_ic_set_highest_priority(unsigned int virq, int high) 477 { 478 - struct qe_ic *qe_ic = qe_ic_from_irq(virq); 479 - unsigned int src = virq_to_hw(virq); 480 - u32 temp = 0; 481 482 - temp = qe_ic_read(qe_ic->regs, QEIC_CICR); 483 - 484 - temp &= ~CICR_HP_MASK; 485 - temp |= src << CICR_HP_SHIFT; 486 - 487 - temp &= ~CICR_HPIT_MASK; 488 - temp |= (high ? 
SIGNAL_HIGH : SIGNAL_LOW) << CICR_HPIT_SHIFT; 489 - 490 - qe_ic_write(qe_ic->regs, QEIC_CICR, temp); 491 - } 492 - 493 - /* Set Priority level within its group, from 1 to 8 */ 494 - int qe_ic_set_priority(unsigned int virq, unsigned int priority) 495 - { 496 - struct qe_ic *qe_ic = qe_ic_from_irq(virq); 497 - unsigned int src = virq_to_hw(virq); 498 - u32 temp; 499 - 500 - if (priority > 8 || priority == 0) 501 - return -EINVAL; 502 - if (WARN_ONCE(src >= ARRAY_SIZE(qe_ic_info), 503 - "%s: Invalid hw irq number for QEIC\n", __func__)) 504 - return -EINVAL; 505 - if (qe_ic_info[src].pri_reg == 0) 506 - return -EINVAL; 507 - 508 - temp = qe_ic_read(qe_ic->regs, qe_ic_info[src].pri_reg); 509 - 510 - if (priority < 4) { 511 - temp &= ~(0x7 << (32 - priority * 3)); 512 - temp |= qe_ic_info[src].pri_code << (32 - priority * 3); 513 - } else { 514 - temp &= ~(0x7 << (24 - priority * 3)); 515 - temp |= qe_ic_info[src].pri_code << (24 - priority * 3); 516 } 517 - 518 - qe_ic_write(qe_ic->regs, qe_ic_info[src].pri_reg, temp); 519 - 520 return 0; 521 } 522 - 523 - /* Set a QE priority to use high irq, only priority 1~2 can use high irq */ 524 - int qe_ic_set_high_priority(unsigned int virq, unsigned int priority, int high) 525 - { 526 - struct qe_ic *qe_ic = qe_ic_from_irq(virq); 527 - unsigned int src = virq_to_hw(virq); 528 - u32 temp, control_reg = QEIC_CICNR, shift = 0; 529 - 530 - if (priority > 2 || priority == 0) 531 - return -EINVAL; 532 - if (WARN_ONCE(src >= ARRAY_SIZE(qe_ic_info), 533 - "%s: Invalid hw irq number for QEIC\n", __func__)) 534 - return -EINVAL; 535 - 536 - switch (qe_ic_info[src].pri_reg) { 537 - case QEIC_CIPZCC: 538 - shift = CICNR_ZCC1T_SHIFT; 539 - break; 540 - case QEIC_CIPWCC: 541 - shift = CICNR_WCC1T_SHIFT; 542 - break; 543 - case QEIC_CIPYCC: 544 - shift = CICNR_YCC1T_SHIFT; 545 - break; 546 - case QEIC_CIPXCC: 547 - shift = CICNR_XCC1T_SHIFT; 548 - break; 549 - case QEIC_CIPRTA: 550 - shift = CRICR_RTA1T_SHIFT; 551 - control_reg = QEIC_CRICR; 552 - break; 553 - case QEIC_CIPRTB: 554 - shift = CRICR_RTB1T_SHIFT; 555 - control_reg = QEIC_CRICR; 556 - break; 557 - default: 558 - return -EINVAL; 559 - } 560 - 561 - shift += (2 - priority) * 2; 562 - temp = qe_ic_read(qe_ic->regs, control_reg); 563 - temp &= ~(SIGNAL_MASK << shift); 564 - temp |= (high ? SIGNAL_HIGH : SIGNAL_LOW) << shift; 565 - qe_ic_write(qe_ic->regs, control_reg, temp); 566 - 567 - return 0; 568 - } 569 - 570 - static struct bus_type qe_ic_subsys = { 571 - .name = "qe_ic", 572 - .dev_name = "qe_ic", 573 - }; 574 - 575 - static struct device device_qe_ic = { 576 - .id = 0, 577 - .bus = &qe_ic_subsys, 578 - }; 579 - 580 - static int __init init_qe_ic_sysfs(void) 581 - { 582 - int rc; 583 - 584 - printk(KERN_DEBUG "Registering qe_ic with sysfs...\n"); 585 - 586 - rc = subsys_system_register(&qe_ic_subsys, NULL); 587 - if (rc) { 588 - printk(KERN_ERR "Failed registering qe_ic sys class\n"); 589 - return -ENODEV; 590 - } 591 - rc = device_register(&device_qe_ic); 592 - if (rc) { 593 - printk(KERN_ERR "Failed registering qe_ic sys device\n"); 594 - return -ENODEV; 595 - } 596 - return 0; 597 - } 598 - 599 - subsys_initcall(init_qe_ic_sysfs);
··· 15 #include <linux/kernel.h> 16 #include <linux/init.h> 17 #include <linux/errno.h> 18 + #include <linux/irq.h> 19 #include <linux/reboot.h> 20 #include <linux/slab.h> 21 #include <linux/stddef.h> ··· 24 #include <linux/spinlock.h> 25 #include <asm/irq.h> 26 #include <asm/io.h> 27 + #include <soc/fsl/qe/qe.h> 28 29 + #define NR_QE_IC_INTS 64 30 + 31 + /* QE IC registers offset */ 32 + #define QEIC_CICR 0x00 33 + #define QEIC_CIVEC 0x04 34 + #define QEIC_CIPXCC 0x10 35 + #define QEIC_CIPYCC 0x14 36 + #define QEIC_CIPWCC 0x18 37 + #define QEIC_CIPZCC 0x1c 38 + #define QEIC_CIMR 0x20 39 + #define QEIC_CRIMR 0x24 40 + #define QEIC_CIPRTA 0x30 41 + #define QEIC_CIPRTB 0x34 42 + #define QEIC_CHIVEC 0x60 43 + 44 + struct qe_ic { 45 + /* Control registers offset */ 46 + u32 __iomem *regs; 47 + 48 + /* The remapper for this QEIC */ 49 + struct irq_domain *irqhost; 50 + 51 + /* The "linux" controller struct */ 52 + struct irq_chip hc_irq; 53 + 54 + /* VIRQ numbers of QE high/low irqs */ 55 + unsigned int virq_high; 56 + unsigned int virq_low; 57 + }; 58 + 59 + /* 60 + * QE interrupt controller internal structure 61 + */ 62 + struct qe_ic_info { 63 + /* Location of this source at the QIMR register */ 64 + u32 mask; 65 + 66 + /* Mask register offset */ 67 + u32 mask_reg; 68 + 69 + /* 70 + * For grouped interrupts sources - the interrupt code as 71 + * appears at the group priority register 72 + */ 73 + u8 pri_code; 74 + 75 + /* Group priority register offset */ 76 + u32 pri_reg; 77 + }; 78 79 static DEFINE_RAW_SPINLOCK(qe_ic_lock); 80 ··· 171 }, 172 }; 173 174 + static inline u32 qe_ic_read(__be32 __iomem *base, unsigned int reg) 175 { 176 + return qe_ioread32be(base + (reg >> 2)); 177 } 178 179 + static inline void qe_ic_write(__be32 __iomem *base, unsigned int reg, 180 u32 value) 181 { 182 + qe_iowrite32be(value, base + (reg >> 2)); 183 } 184 185 static inline struct qe_ic *qe_ic_from_irq(unsigned int virq) ··· 281 .xlate = irq_domain_xlate_onetwocell, 282 }; 283 284 + /* Return an interrupt vector or 0 if no interrupt is pending. */ 285 + static unsigned int qe_ic_get_low_irq(struct qe_ic *qe_ic) 286 { 287 int irq; 288 ··· 292 irq = qe_ic_read(qe_ic->regs, QEIC_CIVEC) >> 26; 293 294 if (irq == 0) 295 + return 0; 296 297 return irq_linear_revmap(qe_ic->irqhost, irq); 298 } 299 300 + /* Return an interrupt vector or 0 if no interrupt is pending. 
*/ 301 + static unsigned int qe_ic_get_high_irq(struct qe_ic *qe_ic) 302 { 303 int irq; 304 ··· 308 irq = qe_ic_read(qe_ic->regs, QEIC_CHIVEC) >> 26; 309 310 if (irq == 0) 311 + return 0; 312 313 return irq_linear_revmap(qe_ic->irqhost, irq); 314 } 315 316 + static void qe_ic_cascade_low(struct irq_desc *desc) 317 { 318 + struct qe_ic *qe_ic = irq_desc_get_handler_data(desc); 319 + unsigned int cascade_irq = qe_ic_get_low_irq(qe_ic); 320 + struct irq_chip *chip = irq_desc_get_chip(desc); 321 + 322 + if (cascade_irq != 0) 323 + generic_handle_irq(cascade_irq); 324 + 325 + if (chip->irq_eoi) 326 + chip->irq_eoi(&desc->irq_data); 327 + } 328 + 329 + static void qe_ic_cascade_high(struct irq_desc *desc) 330 + { 331 + struct qe_ic *qe_ic = irq_desc_get_handler_data(desc); 332 + unsigned int cascade_irq = qe_ic_get_high_irq(qe_ic); 333 + struct irq_chip *chip = irq_desc_get_chip(desc); 334 + 335 + if (cascade_irq != 0) 336 + generic_handle_irq(cascade_irq); 337 + 338 + if (chip->irq_eoi) 339 + chip->irq_eoi(&desc->irq_data); 340 + } 341 + 342 + static void qe_ic_cascade_muxed_mpic(struct irq_desc *desc) 343 + { 344 + struct qe_ic *qe_ic = irq_desc_get_handler_data(desc); 345 + unsigned int cascade_irq; 346 + struct irq_chip *chip = irq_desc_get_chip(desc); 347 + 348 + cascade_irq = qe_ic_get_high_irq(qe_ic); 349 + if (cascade_irq == 0) 350 + cascade_irq = qe_ic_get_low_irq(qe_ic); 351 + 352 + if (cascade_irq != 0) 353 + generic_handle_irq(cascade_irq); 354 + 355 + chip->irq_eoi(&desc->irq_data); 356 + } 357 + 358 + static void __init qe_ic_init(struct device_node *node) 359 + { 360 + void (*low_handler)(struct irq_desc *desc); 361 + void (*high_handler)(struct irq_desc *desc); 362 struct qe_ic *qe_ic; 363 struct resource res; 364 + u32 ret; 365 366 ret = of_address_to_resource(node, 0, &res); 367 if (ret) ··· 343 qe_ic->virq_high = irq_of_parse_and_map(node, 0); 344 qe_ic->virq_low = irq_of_parse_and_map(node, 1); 345 346 + if (!qe_ic->virq_low) { 347 printk(KERN_ERR "Failed to map QE_IC low IRQ\n"); 348 kfree(qe_ic); 349 return; 350 } 351 + if (qe_ic->virq_high != qe_ic->virq_low) { 352 + low_handler = qe_ic_cascade_low; 353 + high_handler = qe_ic_cascade_high; 354 + } else { 355 + low_handler = qe_ic_cascade_muxed_mpic; 356 + high_handler = NULL; 357 } 358 359 + qe_ic_write(qe_ic->regs, QEIC_CICR, 0); 360 361 irq_set_handler_data(qe_ic->virq_low, qe_ic); 362 irq_set_chained_handler(qe_ic->virq_low, low_handler); 363 364 + if (qe_ic->virq_high && qe_ic->virq_high != qe_ic->virq_low) { 365 irq_set_handler_data(qe_ic->virq_high, qe_ic); 366 irq_set_chained_handler(qe_ic->virq_high, high_handler); 367 } 368 } 369 370 + static int __init qe_ic_of_init(void) 371 { 372 + struct device_node *np; 373 374 + np = of_find_compatible_node(NULL, NULL, "fsl,qe-ic"); 375 + if (!np) { 376 + np = of_find_node_by_type(NULL, "qeic"); 377 + if (!np) 378 + return -ENODEV; 379 } 380 + qe_ic_init(np); 381 + of_node_put(np); 382 return 0; 383 } 384 + subsys_initcall(qe_ic_of_init);
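The interrupt-controller rework also drops the last NO_IRQ uses: irq_of_parse_and_map() returns 0 when there is nothing to map, so the checks become plain truth tests. A minimal sketch of that idiom (the function name is hypothetical):

#include <linux/errno.h>
#include <linux/of_irq.h>

static int example_map_irq(struct device_node *node)
{
        unsigned int virq = irq_of_parse_and_map(node, 0);

        if (!virq)              /* 0 means no/unmappable interrupt */
                return -ENODEV;

        return virq;
}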
-99
drivers/soc/fsl/qe/qe_ic.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0-or-later */ 2 - /* 3 - * drivers/soc/fsl/qe/qe_ic.h 4 - * 5 - * QUICC ENGINE Interrupt Controller Header 6 - * 7 - * Copyright (C) 2006 Freescale Semiconductor, Inc. All rights reserved. 8 - * 9 - * Author: Li Yang <leoli@freescale.com> 10 - * Based on code from Shlomi Gridish <gridish@freescale.com> 11 - */ 12 - #ifndef _POWERPC_SYSDEV_QE_IC_H 13 - #define _POWERPC_SYSDEV_QE_IC_H 14 - 15 - #include <soc/fsl/qe/qe_ic.h> 16 - 17 - #define NR_QE_IC_INTS 64 18 - 19 - /* QE IC registers offset */ 20 - #define QEIC_CICR 0x00 21 - #define QEIC_CIVEC 0x04 22 - #define QEIC_CRIPNR 0x08 23 - #define QEIC_CIPNR 0x0c 24 - #define QEIC_CIPXCC 0x10 25 - #define QEIC_CIPYCC 0x14 26 - #define QEIC_CIPWCC 0x18 27 - #define QEIC_CIPZCC 0x1c 28 - #define QEIC_CIMR 0x20 29 - #define QEIC_CRIMR 0x24 30 - #define QEIC_CICNR 0x28 31 - #define QEIC_CIPRTA 0x30 32 - #define QEIC_CIPRTB 0x34 33 - #define QEIC_CRICR 0x3c 34 - #define QEIC_CHIVEC 0x60 35 - 36 - /* Interrupt priority registers */ 37 - #define CIPCC_SHIFT_PRI0 29 38 - #define CIPCC_SHIFT_PRI1 26 39 - #define CIPCC_SHIFT_PRI2 23 40 - #define CIPCC_SHIFT_PRI3 20 41 - #define CIPCC_SHIFT_PRI4 13 42 - #define CIPCC_SHIFT_PRI5 10 43 - #define CIPCC_SHIFT_PRI6 7 44 - #define CIPCC_SHIFT_PRI7 4 45 - 46 - /* CICR priority modes */ 47 - #define CICR_GWCC 0x00040000 48 - #define CICR_GXCC 0x00020000 49 - #define CICR_GYCC 0x00010000 50 - #define CICR_GZCC 0x00080000 51 - #define CICR_GRTA 0x00200000 52 - #define CICR_GRTB 0x00400000 53 - #define CICR_HPIT_SHIFT 8 54 - #define CICR_HPIT_MASK 0x00000300 55 - #define CICR_HP_SHIFT 24 56 - #define CICR_HP_MASK 0x3f000000 57 - 58 - /* CICNR */ 59 - #define CICNR_WCC1T_SHIFT 20 60 - #define CICNR_ZCC1T_SHIFT 28 61 - #define CICNR_YCC1T_SHIFT 12 62 - #define CICNR_XCC1T_SHIFT 4 63 - 64 - /* CRICR */ 65 - #define CRICR_RTA1T_SHIFT 20 66 - #define CRICR_RTB1T_SHIFT 28 67 - 68 - /* Signal indicator */ 69 - #define SIGNAL_MASK 3 70 - #define SIGNAL_HIGH 2 71 - #define SIGNAL_LOW 0 72 - 73 - struct qe_ic { 74 - /* Control registers offset */ 75 - volatile u32 __iomem *regs; 76 - 77 - /* The remapper for this QEIC */ 78 - struct irq_domain *irqhost; 79 - 80 - /* The "linux" controller struct */ 81 - struct irq_chip hc_irq; 82 - 83 - /* VIRQ numbers of QE high/low irqs */ 84 - unsigned int virq_high; 85 - unsigned int virq_low; 86 - }; 87 - 88 - /* 89 - * QE interrupt controller internal structure 90 - */ 91 - struct qe_ic_info { 92 - u32 mask; /* location of this source at the QIMR register. */ 93 - u32 mask_reg; /* Mask register offset */ 94 - u8 pri_code; /* for grouped interrupts sources - the interrupt 95 - code as appears at the group priority register */ 96 - u32 pri_reg; /* Group priority register offset */ 97 - }; 98 - 99 - #endif /* _POWERPC_SYSDEV_QE_IC_H */
···
+33 -37
drivers/soc/fsl/qe/qe_io.c
··· 18 19 #include <asm/io.h> 20 #include <soc/fsl/qe/qe.h> 21 - #include <asm/prom.h> 22 - #include <sysdev/fsl_soc.h> 23 24 #undef DEBUG 25 ··· 28 { 29 struct resource res; 30 int ret; 31 - const u32 *num_ports; 32 33 /* Map Parallel I/O ports registers */ 34 ret = of_address_to_resource(np, 0, &res); ··· 36 return ret; 37 par_io = ioremap(res.start, resource_size(&res)); 38 39 - num_ports = of_get_property(np, "num-ports", NULL); 40 - if (num_ports) 41 - num_par_io_ports = *num_ports; 42 43 return 0; 44 } ··· 54 pin_mask1bit = (u32) (1 << (QE_PIO_PINS - (pin + 1))); 55 56 /* Set open drain, if required */ 57 - tmp_val = in_be32(&par_io->cpodr); 58 if (open_drain) 59 - out_be32(&par_io->cpodr, pin_mask1bit | tmp_val); 60 else 61 - out_be32(&par_io->cpodr, ~pin_mask1bit & tmp_val); 62 63 /* define direction */ 64 tmp_val = (pin > (QE_PIO_PINS / 2) - 1) ? 65 - in_be32(&par_io->cpdir2) : 66 - in_be32(&par_io->cpdir1); 67 68 /* get all bits mask for 2 bit per port */ 69 pin_mask2bits = (u32) (0x3 << (QE_PIO_PINS - ··· 75 76 /* clear and set 2 bits mask */ 77 if (pin > (QE_PIO_PINS / 2) - 1) { 78 - out_be32(&par_io->cpdir2, 79 - ~pin_mask2bits & tmp_val); 80 tmp_val &= ~pin_mask2bits; 81 - out_be32(&par_io->cpdir2, new_mask2bits | tmp_val); 82 } else { 83 - out_be32(&par_io->cpdir1, 84 - ~pin_mask2bits & tmp_val); 85 tmp_val &= ~pin_mask2bits; 86 - out_be32(&par_io->cpdir1, new_mask2bits | tmp_val); 87 } 88 /* define pin assignment */ 89 tmp_val = (pin > (QE_PIO_PINS / 2) - 1) ? 90 - in_be32(&par_io->cppar2) : 91 - in_be32(&par_io->cppar1); 92 93 new_mask2bits = (u32) (assignment << (QE_PIO_PINS - 94 (pin % (QE_PIO_PINS / 2) + 1) * 2)); 95 /* clear and set 2 bits mask */ 96 if (pin > (QE_PIO_PINS / 2) - 1) { 97 - out_be32(&par_io->cppar2, 98 - ~pin_mask2bits & tmp_val); 99 tmp_val &= ~pin_mask2bits; 100 - out_be32(&par_io->cppar2, new_mask2bits | tmp_val); 101 } else { 102 - out_be32(&par_io->cppar1, 103 - ~pin_mask2bits & tmp_val); 104 tmp_val &= ~pin_mask2bits; 105 - out_be32(&par_io->cppar1, new_mask2bits | tmp_val); 106 } 107 } 108 EXPORT_SYMBOL(__par_io_config_pin); ··· 126 /* calculate pin location */ 127 pin_mask = (u32) (1 << (QE_PIO_PINS - 1 - pin)); 128 129 - tmp_val = in_be32(&par_io[port].cpdata); 130 131 if (val == 0) /* clear */ 132 - out_be32(&par_io[port].cpdata, ~pin_mask & tmp_val); 133 else /* set */ 134 - out_be32(&par_io[port].cpdata, pin_mask | tmp_val); 135 136 return 0; 137 } ··· 140 int par_io_of_config(struct device_node *np) 141 { 142 struct device_node *pio; 143 - const phandle *ph; 144 int pio_map_len; 145 - const unsigned int *pio_map; 146 147 if (par_io == NULL) { 148 printk(KERN_ERR "par_io not initialized\n"); 149 return -1; 150 } 151 152 - ph = of_get_property(np, "pio-handle", NULL); 153 - if (ph == NULL) { 154 printk(KERN_ERR "pio-handle not available\n"); 155 return -1; 156 } 157 - 158 - pio = of_find_node_by_phandle(*ph); 159 160 pio_map = of_get_property(pio, "pio-map", &pio_map_len); 161 if (pio_map == NULL) { ··· 166 } 167 168 while (pio_map_len > 0) { 169 - par_io_config_pin((u8) pio_map[0], (u8) pio_map[1], 170 - (int) pio_map[2], (int) pio_map[3], 171 - (int) pio_map[4], (int) pio_map[5]); 172 pio_map += 6; 173 pio_map_len -= 6; 174 }
··· 18 19 #include <asm/io.h> 20 #include <soc/fsl/qe/qe.h> 21 22 #undef DEBUG 23 ··· 30 { 31 struct resource res; 32 int ret; 33 + u32 num_ports; 34 35 /* Map Parallel I/O ports registers */ 36 ret = of_address_to_resource(np, 0, &res); ··· 38 return ret; 39 par_io = ioremap(res.start, resource_size(&res)); 40 41 + if (!of_property_read_u32(np, "num-ports", &num_ports)) 42 + num_par_io_ports = num_ports; 43 44 return 0; 45 } ··· 57 pin_mask1bit = (u32) (1 << (QE_PIO_PINS - (pin + 1))); 58 59 /* Set open drain, if required */ 60 + tmp_val = qe_ioread32be(&par_io->cpodr); 61 if (open_drain) 62 + qe_iowrite32be(pin_mask1bit | tmp_val, &par_io->cpodr); 63 else 64 + qe_iowrite32be(~pin_mask1bit & tmp_val, &par_io->cpodr); 65 66 /* define direction */ 67 tmp_val = (pin > (QE_PIO_PINS / 2) - 1) ? 68 + qe_ioread32be(&par_io->cpdir2) : 69 + qe_ioread32be(&par_io->cpdir1); 70 71 /* get all bits mask for 2 bit per port */ 72 pin_mask2bits = (u32) (0x3 << (QE_PIO_PINS - ··· 78 79 /* clear and set 2 bits mask */ 80 if (pin > (QE_PIO_PINS / 2) - 1) { 81 + qe_iowrite32be(~pin_mask2bits & tmp_val, &par_io->cpdir2); 82 tmp_val &= ~pin_mask2bits; 83 + qe_iowrite32be(new_mask2bits | tmp_val, &par_io->cpdir2); 84 } else { 85 + qe_iowrite32be(~pin_mask2bits & tmp_val, &par_io->cpdir1); 86 tmp_val &= ~pin_mask2bits; 87 + qe_iowrite32be(new_mask2bits | tmp_val, &par_io->cpdir1); 88 } 89 /* define pin assignment */ 90 tmp_val = (pin > (QE_PIO_PINS / 2) - 1) ? 91 + qe_ioread32be(&par_io->cppar2) : 92 + qe_ioread32be(&par_io->cppar1); 93 94 new_mask2bits = (u32) (assignment << (QE_PIO_PINS - 95 (pin % (QE_PIO_PINS / 2) + 1) * 2)); 96 /* clear and set 2 bits mask */ 97 if (pin > (QE_PIO_PINS / 2) - 1) { 98 + qe_iowrite32be(~pin_mask2bits & tmp_val, &par_io->cppar2); 99 tmp_val &= ~pin_mask2bits; 100 + qe_iowrite32be(new_mask2bits | tmp_val, &par_io->cppar2); 101 } else { 102 + qe_iowrite32be(~pin_mask2bits & tmp_val, &par_io->cppar1); 103 tmp_val &= ~pin_mask2bits; 104 + qe_iowrite32be(new_mask2bits | tmp_val, &par_io->cppar1); 105 } 106 } 107 EXPORT_SYMBOL(__par_io_config_pin); ··· 133 /* calculate pin location */ 134 pin_mask = (u32) (1 << (QE_PIO_PINS - 1 - pin)); 135 136 + tmp_val = qe_ioread32be(&par_io[port].cpdata); 137 138 if (val == 0) /* clear */ 139 + qe_iowrite32be(~pin_mask & tmp_val, &par_io[port].cpdata); 140 else /* set */ 141 + qe_iowrite32be(pin_mask | tmp_val, &par_io[port].cpdata); 142 143 return 0; 144 } ··· 147 int par_io_of_config(struct device_node *np) 148 { 149 struct device_node *pio; 150 int pio_map_len; 151 + const __be32 *pio_map; 152 153 if (par_io == NULL) { 154 printk(KERN_ERR "par_io not initialized\n"); 155 return -1; 156 } 157 158 + pio = of_parse_phandle(np, "pio-handle", 0); 159 + if (pio == NULL) { 160 printk(KERN_ERR "pio-handle not available\n"); 161 return -1; 162 } 163 164 pio_map = of_get_property(pio, "pio-map", &pio_map_len); 165 if (pio_map == NULL) { ··· 176 } 177 178 while (pio_map_len > 0) { 179 + u8 port = be32_to_cpu(pio_map[0]); 180 + u8 pin = be32_to_cpu(pio_map[1]); 181 + int dir = be32_to_cpu(pio_map[2]); 182 + int open_drain = be32_to_cpu(pio_map[3]); 183 + int assignment = be32_to_cpu(pio_map[4]); 184 + int has_irq = be32_to_cpu(pio_map[5]); 185 + 186 + par_io_config_pin(port, pin, dir, open_drain, 187 + assignment, has_irq); 188 pio_map += 6; 189 pio_map_len -= 6; 190 }
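Reading pio-map through of_get_property() yields raw big-endian cells, hence the be32_to_cpu() calls above. Where a helper is preferred, of_property_read_u32_index() performs the conversion internally; a sketch for one cell of a 6-cell entry (the function name is hypothetical):

#include <linux/of.h>

static int example_read_pio_cell(struct device_node *pio,
                                 int entry, int cell, u32 *out)
{
        /* each pio-map entry is 6 u32 cells: port, pin, dir,
         * open_drain, assignment, has_irq */
        return of_property_read_u32_index(pio, "pio-map",
                                          entry * 6 + cell, out);
}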
+4 -4
drivers/soc/fsl/qe/qe_tdm.c
··· 169 &siram[siram_entry_id * 32 + 0x200 + i]); 170 } 171 172 - setbits16(&siram[(siram_entry_id * 32) + (utdm->num_of_ts - 1)], 173 - SIR_LAST); 174 - setbits16(&siram[(siram_entry_id * 32) + 0x200 + (utdm->num_of_ts - 1)], 175 - SIR_LAST); 176 177 /* Set SIxMR register */ 178 sixmr = SIMR_SAD(siram_entry_id);
··· 169 &siram[siram_entry_id * 32 + 0x200 + i]); 170 } 171 172 + qe_setbits_be16(&siram[(siram_entry_id * 32) + (utdm->num_of_ts - 1)], 173 + SIR_LAST); 174 + qe_setbits_be16(&siram[(siram_entry_id * 32) + 0x200 + (utdm->num_of_ts - 1)], 175 + SIR_LAST); 176 177 /* Set SIxMR register */ 178 sixmr = SIMR_SAD(siram_entry_id);
+13 -14
drivers/soc/fsl/qe/ucc.c
··· 15 #include <linux/spinlock.h> 16 #include <linux/export.h> 17 18 - #include <asm/irq.h> 19 #include <asm/io.h> 20 #include <soc/fsl/qe/immap_qe.h> 21 #include <soc/fsl/qe/qe.h> ··· 34 return -EINVAL; 35 36 spin_lock_irqsave(&cmxgcr_lock, flags); 37 - clrsetbits_be32(&qe_immr->qmx.cmxgcr, QE_CMXGCR_MII_ENET_MNG, 38 - ucc_num << QE_CMXGCR_MII_ENET_MNG_SHIFT); 39 spin_unlock_irqrestore(&cmxgcr_lock, flags); 40 41 return 0; ··· 79 return -EINVAL; 80 } 81 82 - clrsetbits_8(guemr, UCC_GUEMR_MODE_MASK, 83 - UCC_GUEMR_SET_RESERVED3 | speed); 84 85 return 0; 86 } ··· 108 get_cmxucr_reg(ucc_num, &cmxucr, &reg_num, &shift); 109 110 if (set) 111 - setbits32(cmxucr, mask << shift); 112 else 113 - clrbits32(cmxucr, mask << shift); 114 115 return 0; 116 } ··· 206 if (mode == COMM_DIR_RX) 207 shift += 4; 208 209 - clrsetbits_be32(cmxucr, QE_CMXUCR_TX_CLK_SRC_MASK << shift, 210 - clock_bits << shift); 211 212 return 0; 213 } ··· 539 cmxs1cr = (tdm_num < 4) ? &qe_mux_reg->cmxsi1cr_l : 540 &qe_mux_reg->cmxsi1cr_h; 541 542 - qe_clrsetbits32(cmxs1cr, QE_CMXUCR_TX_CLK_SRC_MASK << shift, 543 - clock_bits << shift); 544 545 return 0; 546 } ··· 649 650 shift = ucc_get_tdm_sync_shift(mode, tdm_num); 651 652 - qe_clrsetbits32(&qe_mux_reg->cmxsi1syr, 653 - QE_CMXUCR_TX_CLK_SRC_MASK << shift, 654 - source << shift); 655 656 return 0; 657 }
··· 15 #include <linux/spinlock.h> 16 #include <linux/export.h> 17 18 #include <asm/io.h> 19 #include <soc/fsl/qe/immap_qe.h> 20 #include <soc/fsl/qe/qe.h> ··· 35 return -EINVAL; 36 37 spin_lock_irqsave(&cmxgcr_lock, flags); 38 + qe_clrsetbits_be32(&qe_immr->qmx.cmxgcr, QE_CMXGCR_MII_ENET_MNG, 39 + ucc_num << QE_CMXGCR_MII_ENET_MNG_SHIFT); 40 spin_unlock_irqrestore(&cmxgcr_lock, flags); 41 42 return 0; ··· 80 return -EINVAL; 81 } 82 83 + qe_clrsetbits_8(guemr, UCC_GUEMR_MODE_MASK, 84 + UCC_GUEMR_SET_RESERVED3 | speed); 85 86 return 0; 87 } ··· 109 get_cmxucr_reg(ucc_num, &cmxucr, &reg_num, &shift); 110 111 if (set) 112 + qe_setbits_be32(cmxucr, mask << shift); 113 else 114 + qe_clrbits_be32(cmxucr, mask << shift); 115 116 return 0; 117 } ··· 207 if (mode == COMM_DIR_RX) 208 shift += 4; 209 210 + qe_clrsetbits_be32(cmxucr, QE_CMXUCR_TX_CLK_SRC_MASK << shift, 211 + clock_bits << shift); 212 213 return 0; 214 } ··· 540 cmxs1cr = (tdm_num < 4) ? &qe_mux_reg->cmxsi1cr_l : 541 &qe_mux_reg->cmxsi1cr_h; 542 543 + qe_clrsetbits_be32(cmxs1cr, QE_CMXUCR_TX_CLK_SRC_MASK << shift, 544 + clock_bits << shift); 545 546 return 0; 547 } ··· 650 651 shift = ucc_get_tdm_sync_shift(mode, tdm_num); 652 653 + qe_clrsetbits_be32(&qe_mux_reg->cmxsi1syr, 654 + QE_CMXUCR_TX_CLK_SRC_MASK << shift, 655 + source << shift); 656 657 return 0; 658 }
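The ucc.c hunk converts the bitwise read-modify-write helpers to the qe_*bits_be* family so they no longer depend on PowerPC's clrsetbits_be32()/setbits32(). Sketched for the 32-bit case, the operation these helpers perform is (the example_ name is illustrative; the real helpers are in <soc/fsl/qe/qe.h>):

#include <soc/fsl/qe/qe.h>

static inline void example_clrsetbits_be32(__be32 __iomem *addr,
                                           u32 clr, u32 set)
{
        /* read, drop the clr bits, merge the set bits, write back */
        qe_iowrite32be((qe_ioread32be(addr) & ~clr) | set, addr);
}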
+43 -43
drivers/soc/fsl/qe/ucc_fast.c
··· 29 printk(KERN_INFO "Base address: 0x%p\n", uccf->uf_regs); 30 31 printk(KERN_INFO "gumr : addr=0x%p, val=0x%08x\n", 32 - &uccf->uf_regs->gumr, in_be32(&uccf->uf_regs->gumr)); 33 printk(KERN_INFO "upsmr : addr=0x%p, val=0x%08x\n", 34 - &uccf->uf_regs->upsmr, in_be32(&uccf->uf_regs->upsmr)); 35 printk(KERN_INFO "utodr : addr=0x%p, val=0x%04x\n", 36 - &uccf->uf_regs->utodr, in_be16(&uccf->uf_regs->utodr)); 37 printk(KERN_INFO "udsr : addr=0x%p, val=0x%04x\n", 38 - &uccf->uf_regs->udsr, in_be16(&uccf->uf_regs->udsr)); 39 printk(KERN_INFO "ucce : addr=0x%p, val=0x%08x\n", 40 - &uccf->uf_regs->ucce, in_be32(&uccf->uf_regs->ucce)); 41 printk(KERN_INFO "uccm : addr=0x%p, val=0x%08x\n", 42 - &uccf->uf_regs->uccm, in_be32(&uccf->uf_regs->uccm)); 43 printk(KERN_INFO "uccs : addr=0x%p, val=0x%02x\n", 44 - &uccf->uf_regs->uccs, in_8(&uccf->uf_regs->uccs)); 45 printk(KERN_INFO "urfb : addr=0x%p, val=0x%08x\n", 46 - &uccf->uf_regs->urfb, in_be32(&uccf->uf_regs->urfb)); 47 printk(KERN_INFO "urfs : addr=0x%p, val=0x%04x\n", 48 - &uccf->uf_regs->urfs, in_be16(&uccf->uf_regs->urfs)); 49 printk(KERN_INFO "urfet : addr=0x%p, val=0x%04x\n", 50 - &uccf->uf_regs->urfet, in_be16(&uccf->uf_regs->urfet)); 51 printk(KERN_INFO "urfset: addr=0x%p, val=0x%04x\n", 52 - &uccf->uf_regs->urfset, in_be16(&uccf->uf_regs->urfset)); 53 printk(KERN_INFO "utfb : addr=0x%p, val=0x%08x\n", 54 - &uccf->uf_regs->utfb, in_be32(&uccf->uf_regs->utfb)); 55 printk(KERN_INFO "utfs : addr=0x%p, val=0x%04x\n", 56 - &uccf->uf_regs->utfs, in_be16(&uccf->uf_regs->utfs)); 57 printk(KERN_INFO "utfet : addr=0x%p, val=0x%04x\n", 58 - &uccf->uf_regs->utfet, in_be16(&uccf->uf_regs->utfet)); 59 printk(KERN_INFO "utftt : addr=0x%p, val=0x%04x\n", 60 - &uccf->uf_regs->utftt, in_be16(&uccf->uf_regs->utftt)); 61 printk(KERN_INFO "utpt : addr=0x%p, val=0x%04x\n", 62 - &uccf->uf_regs->utpt, in_be16(&uccf->uf_regs->utpt)); 63 printk(KERN_INFO "urtry : addr=0x%p, val=0x%08x\n", 64 - &uccf->uf_regs->urtry, in_be32(&uccf->uf_regs->urtry)); 65 printk(KERN_INFO "guemr : addr=0x%p, val=0x%02x\n", 66 - &uccf->uf_regs->guemr, in_8(&uccf->uf_regs->guemr)); 67 } 68 EXPORT_SYMBOL(ucc_fast_dump_regs); 69 ··· 86 87 void ucc_fast_transmit_on_demand(struct ucc_fast_private * uccf) 88 { 89 - out_be16(&uccf->uf_regs->utodr, UCC_FAST_TOD); 90 } 91 EXPORT_SYMBOL(ucc_fast_transmit_on_demand); 92 ··· 98 uf_regs = uccf->uf_regs; 99 100 /* Enable reception and/or transmission on this UCC. */ 101 - gumr = in_be32(&uf_regs->gumr); 102 if (mode & COMM_DIR_TX) { 103 gumr |= UCC_FAST_GUMR_ENT; 104 uccf->enabled_tx = 1; ··· 107 gumr |= UCC_FAST_GUMR_ENR; 108 uccf->enabled_rx = 1; 109 } 110 - out_be32(&uf_regs->gumr, gumr); 111 } 112 EXPORT_SYMBOL(ucc_fast_enable); 113 ··· 119 uf_regs = uccf->uf_regs; 120 121 /* Disable reception and/or transmission on this UCC. 
*/ 122 - gumr = in_be32(&uf_regs->gumr); 123 if (mode & COMM_DIR_TX) { 124 gumr &= ~UCC_FAST_GUMR_ENT; 125 uccf->enabled_tx = 0; ··· 128 gumr &= ~UCC_FAST_GUMR_ENR; 129 uccf->enabled_rx = 0; 130 } 131 - out_be32(&uf_regs->gumr, gumr); 132 } 133 EXPORT_SYMBOL(ucc_fast_disable); 134 ··· 197 __func__); 198 return -ENOMEM; 199 } 200 201 /* Fill fast UCC structure */ 202 uccf->uf_info = uf_info; ··· 262 gumr |= uf_info->tenc; 263 gumr |= uf_info->tcrc; 264 gumr |= uf_info->mode; 265 - out_be32(&uf_regs->gumr, gumr); 266 267 /* Allocate memory for Tx Virtual Fifo */ 268 uccf->ucc_fast_tx_virtual_fifo_base_offset = 269 qe_muram_alloc(uf_info->utfs, UCC_FAST_VIRT_FIFO_REGS_ALIGNMENT); 270 - if (IS_ERR_VALUE(uccf->ucc_fast_tx_virtual_fifo_base_offset)) { 271 printk(KERN_ERR "%s: cannot allocate MURAM for TX FIFO\n", 272 __func__); 273 - uccf->ucc_fast_tx_virtual_fifo_base_offset = 0; 274 ucc_fast_free(uccf); 275 return -ENOMEM; 276 } ··· 279 qe_muram_alloc(uf_info->urfs + 280 UCC_FAST_RECEIVE_VIRTUAL_FIFO_SIZE_FUDGE_FACTOR, 281 UCC_FAST_VIRT_FIFO_REGS_ALIGNMENT); 282 - if (IS_ERR_VALUE(uccf->ucc_fast_rx_virtual_fifo_base_offset)) { 283 printk(KERN_ERR "%s: cannot allocate MURAM for RX FIFO\n", 284 __func__); 285 - uccf->ucc_fast_rx_virtual_fifo_base_offset = 0; 286 ucc_fast_free(uccf); 287 return -ENOMEM; 288 } 289 290 /* Set Virtual Fifo registers */ 291 - out_be16(&uf_regs->urfs, uf_info->urfs); 292 - out_be16(&uf_regs->urfet, uf_info->urfet); 293 - out_be16(&uf_regs->urfset, uf_info->urfset); 294 - out_be16(&uf_regs->utfs, uf_info->utfs); 295 - out_be16(&uf_regs->utfet, uf_info->utfet); 296 - out_be16(&uf_regs->utftt, uf_info->utftt); 297 /* utfb, urfb are offsets from MURAM base */ 298 - out_be32(&uf_regs->utfb, uccf->ucc_fast_tx_virtual_fifo_base_offset); 299 - out_be32(&uf_regs->urfb, uccf->ucc_fast_rx_virtual_fifo_base_offset); 300 301 /* Mux clocking */ 302 /* Grant Support */ ··· 365 } 366 367 /* Set interrupt mask register at UCC level. */ 368 - out_be32(&uf_regs->uccm, uf_info->uccm_mask); 369 370 /* First, clear anything pending at UCC level, 371 * otherwise, old garbage may come through 372 * as soon as the dam is opened. */ 373 374 /* Writing '1' clears */ 375 - out_be32(&uf_regs->ucce, 0xffffffff); 376 377 *uccf_ret = uccf; 378 return 0; ··· 384 if (!uccf) 385 return; 386 387 - if (uccf->ucc_fast_tx_virtual_fifo_base_offset) 388 - qe_muram_free(uccf->ucc_fast_tx_virtual_fifo_base_offset); 389 - 390 - if (uccf->ucc_fast_rx_virtual_fifo_base_offset) 391 - qe_muram_free(uccf->ucc_fast_rx_virtual_fifo_base_offset); 392 393 if (uccf->uf_regs) 394 iounmap(uccf->uf_regs);
··· 29 printk(KERN_INFO "Base address: 0x%p\n", uccf->uf_regs); 30 31 printk(KERN_INFO "gumr : addr=0x%p, val=0x%08x\n", 32 + &uccf->uf_regs->gumr, qe_ioread32be(&uccf->uf_regs->gumr)); 33 printk(KERN_INFO "upsmr : addr=0x%p, val=0x%08x\n", 34 + &uccf->uf_regs->upsmr, qe_ioread32be(&uccf->uf_regs->upsmr)); 35 printk(KERN_INFO "utodr : addr=0x%p, val=0x%04x\n", 36 + &uccf->uf_regs->utodr, qe_ioread16be(&uccf->uf_regs->utodr)); 37 printk(KERN_INFO "udsr : addr=0x%p, val=0x%04x\n", 38 + &uccf->uf_regs->udsr, qe_ioread16be(&uccf->uf_regs->udsr)); 39 printk(KERN_INFO "ucce : addr=0x%p, val=0x%08x\n", 40 + &uccf->uf_regs->ucce, qe_ioread32be(&uccf->uf_regs->ucce)); 41 printk(KERN_INFO "uccm : addr=0x%p, val=0x%08x\n", 42 + &uccf->uf_regs->uccm, qe_ioread32be(&uccf->uf_regs->uccm)); 43 printk(KERN_INFO "uccs : addr=0x%p, val=0x%02x\n", 44 + &uccf->uf_regs->uccs, qe_ioread8(&uccf->uf_regs->uccs)); 45 printk(KERN_INFO "urfb : addr=0x%p, val=0x%08x\n", 46 + &uccf->uf_regs->urfb, qe_ioread32be(&uccf->uf_regs->urfb)); 47 printk(KERN_INFO "urfs : addr=0x%p, val=0x%04x\n", 48 + &uccf->uf_regs->urfs, qe_ioread16be(&uccf->uf_regs->urfs)); 49 printk(KERN_INFO "urfet : addr=0x%p, val=0x%04x\n", 50 + &uccf->uf_regs->urfet, qe_ioread16be(&uccf->uf_regs->urfet)); 51 printk(KERN_INFO "urfset: addr=0x%p, val=0x%04x\n", 52 + &uccf->uf_regs->urfset, 53 + qe_ioread16be(&uccf->uf_regs->urfset)); 54 printk(KERN_INFO "utfb : addr=0x%p, val=0x%08x\n", 55 + &uccf->uf_regs->utfb, qe_ioread32be(&uccf->uf_regs->utfb)); 56 printk(KERN_INFO "utfs : addr=0x%p, val=0x%04x\n", 57 + &uccf->uf_regs->utfs, qe_ioread16be(&uccf->uf_regs->utfs)); 58 printk(KERN_INFO "utfet : addr=0x%p, val=0x%04x\n", 59 + &uccf->uf_regs->utfet, qe_ioread16be(&uccf->uf_regs->utfet)); 60 printk(KERN_INFO "utftt : addr=0x%p, val=0x%04x\n", 61 + &uccf->uf_regs->utftt, qe_ioread16be(&uccf->uf_regs->utftt)); 62 printk(KERN_INFO "utpt : addr=0x%p, val=0x%04x\n", 63 + &uccf->uf_regs->utpt, qe_ioread16be(&uccf->uf_regs->utpt)); 64 printk(KERN_INFO "urtry : addr=0x%p, val=0x%08x\n", 65 + &uccf->uf_regs->urtry, qe_ioread32be(&uccf->uf_regs->urtry)); 66 printk(KERN_INFO "guemr : addr=0x%p, val=0x%02x\n", 67 + &uccf->uf_regs->guemr, qe_ioread8(&uccf->uf_regs->guemr)); 68 } 69 EXPORT_SYMBOL(ucc_fast_dump_regs); 70 ··· 85 86 void ucc_fast_transmit_on_demand(struct ucc_fast_private * uccf) 87 { 88 + qe_iowrite16be(UCC_FAST_TOD, &uccf->uf_regs->utodr); 89 } 90 EXPORT_SYMBOL(ucc_fast_transmit_on_demand); 91 ··· 97 uf_regs = uccf->uf_regs; 98 99 /* Enable reception and/or transmission on this UCC. */ 100 + gumr = qe_ioread32be(&uf_regs->gumr); 101 if (mode & COMM_DIR_TX) { 102 gumr |= UCC_FAST_GUMR_ENT; 103 uccf->enabled_tx = 1; ··· 106 gumr |= UCC_FAST_GUMR_ENR; 107 uccf->enabled_rx = 1; 108 } 109 + qe_iowrite32be(gumr, &uf_regs->gumr); 110 } 111 EXPORT_SYMBOL(ucc_fast_enable); 112 ··· 118 uf_regs = uccf->uf_regs; 119 120 /* Disable reception and/or transmission on this UCC. 
*/ 121 + gumr = qe_ioread32be(&uf_regs->gumr); 122 if (mode & COMM_DIR_TX) { 123 gumr &= ~UCC_FAST_GUMR_ENT; 124 uccf->enabled_tx = 0; ··· 127 gumr &= ~UCC_FAST_GUMR_ENR; 128 uccf->enabled_rx = 0; 129 } 130 + qe_iowrite32be(gumr, &uf_regs->gumr); 131 } 132 EXPORT_SYMBOL(ucc_fast_disable); 133 ··· 196 __func__); 197 return -ENOMEM; 198 } 199 + uccf->ucc_fast_tx_virtual_fifo_base_offset = -1; 200 + uccf->ucc_fast_rx_virtual_fifo_base_offset = -1; 201 202 /* Fill fast UCC structure */ 203 uccf->uf_info = uf_info; ··· 259 gumr |= uf_info->tenc; 260 gumr |= uf_info->tcrc; 261 gumr |= uf_info->mode; 262 + qe_iowrite32be(gumr, &uf_regs->gumr); 263 264 /* Allocate memory for Tx Virtual Fifo */ 265 uccf->ucc_fast_tx_virtual_fifo_base_offset = 266 qe_muram_alloc(uf_info->utfs, UCC_FAST_VIRT_FIFO_REGS_ALIGNMENT); 267 + if (uccf->ucc_fast_tx_virtual_fifo_base_offset < 0) { 268 printk(KERN_ERR "%s: cannot allocate MURAM for TX FIFO\n", 269 __func__); 270 ucc_fast_free(uccf); 271 return -ENOMEM; 272 } ··· 277 qe_muram_alloc(uf_info->urfs + 278 UCC_FAST_RECEIVE_VIRTUAL_FIFO_SIZE_FUDGE_FACTOR, 279 UCC_FAST_VIRT_FIFO_REGS_ALIGNMENT); 280 + if (uccf->ucc_fast_rx_virtual_fifo_base_offset < 0) { 281 printk(KERN_ERR "%s: cannot allocate MURAM for RX FIFO\n", 282 __func__); 283 ucc_fast_free(uccf); 284 return -ENOMEM; 285 } 286 287 /* Set Virtual Fifo registers */ 288 + qe_iowrite16be(uf_info->urfs, &uf_regs->urfs); 289 + qe_iowrite16be(uf_info->urfet, &uf_regs->urfet); 290 + qe_iowrite16be(uf_info->urfset, &uf_regs->urfset); 291 + qe_iowrite16be(uf_info->utfs, &uf_regs->utfs); 292 + qe_iowrite16be(uf_info->utfet, &uf_regs->utfet); 293 + qe_iowrite16be(uf_info->utftt, &uf_regs->utftt); 294 /* utfb, urfb are offsets from MURAM base */ 295 + qe_iowrite32be(uccf->ucc_fast_tx_virtual_fifo_base_offset, 296 + &uf_regs->utfb); 297 + qe_iowrite32be(uccf->ucc_fast_rx_virtual_fifo_base_offset, 298 + &uf_regs->urfb); 299 300 /* Mux clocking */ 301 /* Grant Support */ ··· 362 } 363 364 /* Set interrupt mask register at UCC level. */ 365 + qe_iowrite32be(uf_info->uccm_mask, &uf_regs->uccm); 366 367 /* First, clear anything pending at UCC level, 368 * otherwise, old garbage may come through 369 * as soon as the dam is opened. */ 370 371 /* Writing '1' clears */ 372 + qe_iowrite32be(0xffffffff, &uf_regs->ucce); 373 374 *uccf_ret = uccf; 375 return 0; ··· 381 if (!uccf) 382 return; 383 384 + qe_muram_free(uccf->ucc_fast_tx_virtual_fifo_base_offset); 385 + qe_muram_free(uccf->ucc_fast_rx_virtual_fifo_base_offset); 386 387 if (uccf->uf_regs) 388 iounmap(uccf->uf_regs);
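Pre-initializing the two FIFO offsets to -1 lets ucc_fast_free() call qe_muram_free() unconditionally, since the allocator's negative-offset convention makes freeing an unallocated handle a no-op. The same shape in miniature (the struct and names are hypothetical):

#include <soc/fsl/qe/qe.h>

struct example_fifos {
        s32 tx_off;
        s32 rx_off;
};

static void example_free_fifos(struct example_fifos *f)
{
        /* both calls are safe even if allocation never happened */
        qe_muram_free(f->tx_off);
        qe_muram_free(f->rx_off);
}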
+28 -32
drivers/soc/fsl/qe/ucc_slow.c
··· 78 us_regs = uccs->us_regs; 79 80 /* Enable reception and/or transmission on this UCC. */ 81 - gumr_l = in_be32(&us_regs->gumr_l); 82 if (mode & COMM_DIR_TX) { 83 gumr_l |= UCC_SLOW_GUMR_L_ENT; 84 uccs->enabled_tx = 1; ··· 87 gumr_l |= UCC_SLOW_GUMR_L_ENR; 88 uccs->enabled_rx = 1; 89 } 90 - out_be32(&us_regs->gumr_l, gumr_l); 91 } 92 EXPORT_SYMBOL(ucc_slow_enable); 93 ··· 99 us_regs = uccs->us_regs; 100 101 /* Disable reception and/or transmission on this UCC. */ 102 - gumr_l = in_be32(&us_regs->gumr_l); 103 if (mode & COMM_DIR_TX) { 104 gumr_l &= ~UCC_SLOW_GUMR_L_ENT; 105 uccs->enabled_tx = 0; ··· 108 gumr_l &= ~UCC_SLOW_GUMR_L_ENR; 109 uccs->enabled_rx = 0; 110 } 111 - out_be32(&us_regs->gumr_l, gumr_l); 112 } 113 EXPORT_SYMBOL(ucc_slow_disable); 114 ··· 154 __func__); 155 return -ENOMEM; 156 } 157 158 /* Fill slow UCC structure */ 159 uccs->us_info = us_info; ··· 182 /* Get PRAM base */ 183 uccs->us_pram_offset = 184 qe_muram_alloc(UCC_SLOW_PRAM_SIZE, ALIGNMENT_OF_UCC_SLOW_PRAM); 185 - if (IS_ERR_VALUE(uccs->us_pram_offset)) { 186 printk(KERN_ERR "%s: cannot allocate MURAM for PRAM", __func__); 187 ucc_slow_free(uccs); 188 return -ENOMEM; ··· 201 return ret; 202 } 203 204 - out_be16(&uccs->us_pram->mrblr, us_info->max_rx_buf_length); 205 206 INIT_LIST_HEAD(&uccs->confQ); 207 ··· 209 uccs->rx_base_offset = 210 qe_muram_alloc(us_info->rx_bd_ring_len * sizeof(struct qe_bd), 211 QE_ALIGNMENT_OF_BD); 212 - if (IS_ERR_VALUE(uccs->rx_base_offset)) { 213 printk(KERN_ERR "%s: cannot allocate %u RX BDs\n", __func__, 214 us_info->rx_bd_ring_len); 215 - uccs->rx_base_offset = 0; 216 ucc_slow_free(uccs); 217 return -ENOMEM; 218 } ··· 219 uccs->tx_base_offset = 220 qe_muram_alloc(us_info->tx_bd_ring_len * sizeof(struct qe_bd), 221 QE_ALIGNMENT_OF_BD); 222 - if (IS_ERR_VALUE(uccs->tx_base_offset)) { 223 printk(KERN_ERR "%s: cannot allocate TX BDs", __func__); 224 - uccs->tx_base_offset = 0; 225 ucc_slow_free(uccs); 226 return -ENOMEM; 227 } ··· 229 bd = uccs->confBd = uccs->tx_bd = qe_muram_addr(uccs->tx_base_offset); 230 for (i = 0; i < us_info->tx_bd_ring_len - 1; i++) { 231 /* clear bd buffer */ 232 - out_be32(&bd->buf, 0); 233 /* set bd status and length */ 234 - out_be32((u32 *) bd, 0); 235 bd++; 236 } 237 /* for last BD set Wrap bit */ 238 - out_be32(&bd->buf, 0); 239 - out_be32((u32 *) bd, cpu_to_be32(T_W)); 240 241 /* Init Rx bds */ 242 bd = uccs->rx_bd = qe_muram_addr(uccs->rx_base_offset); 243 for (i = 0; i < us_info->rx_bd_ring_len - 1; i++) { 244 /* set bd status and length */ 245 - out_be32((u32*)bd, 0); 246 /* clear bd buffer */ 247 - out_be32(&bd->buf, 0); 248 bd++; 249 } 250 /* for last BD set Wrap bit */ 251 - out_be32((u32*)bd, cpu_to_be32(R_W)); 252 - out_be32(&bd->buf, 0); 253 254 /* Set GUMR (For more details see the hardware spec.). 
*/ 255 /* gumr_h */ ··· 270 gumr |= UCC_SLOW_GUMR_H_TXSY; 271 if (us_info->rtsm) 272 gumr |= UCC_SLOW_GUMR_H_RTSM; 273 - out_be32(&us_regs->gumr_h, gumr); 274 275 /* gumr_l */ 276 gumr = us_info->tdcr | us_info->rdcr | us_info->tenc | us_info->renc | ··· 283 gumr |= UCC_SLOW_GUMR_L_TINV; 284 if (us_info->tend) 285 gumr |= UCC_SLOW_GUMR_L_TEND; 286 - out_be32(&us_regs->gumr_l, gumr); 287 288 /* Function code registers */ 289 ··· 293 uccs->us_pram->rbmr = UCC_BMR_BO_BE; 294 295 /* rbase, tbase are offsets from MURAM base */ 296 - out_be16(&uccs->us_pram->rbase, uccs->rx_base_offset); 297 - out_be16(&uccs->us_pram->tbase, uccs->tx_base_offset); 298 299 /* Mux clocking */ 300 /* Grant Support */ ··· 324 } 325 326 /* Set interrupt mask register at UCC level. */ 327 - out_be16(&us_regs->uccm, us_info->uccm_mask); 328 329 /* First, clear anything pending at UCC level, 330 * otherwise, old garbage may come through 331 * as soon as the dam is opened. */ 332 333 /* Writing '1' clears */ 334 - out_be16(&us_regs->ucce, 0xffff); 335 336 /* Issue QE Init command */ 337 if (us_info->init_tx && us_info->init_rx) ··· 353 if (!uccs) 354 return; 355 356 - if (uccs->rx_base_offset) 357 - qe_muram_free(uccs->rx_base_offset); 358 - 359 - if (uccs->tx_base_offset) 360 - qe_muram_free(uccs->tx_base_offset); 361 - 362 - if (uccs->us_pram) 363 - qe_muram_free(uccs->us_pram_offset); 364 365 if (uccs->us_regs) 366 iounmap(uccs->us_regs);
··· 78 us_regs = uccs->us_regs; 79 80 /* Enable reception and/or transmission on this UCC. */ 81 + gumr_l = qe_ioread32be(&us_regs->gumr_l); 82 if (mode & COMM_DIR_TX) { 83 gumr_l |= UCC_SLOW_GUMR_L_ENT; 84 uccs->enabled_tx = 1; ··· 87 gumr_l |= UCC_SLOW_GUMR_L_ENR; 88 uccs->enabled_rx = 1; 89 } 90 + qe_iowrite32be(gumr_l, &us_regs->gumr_l); 91 } 92 EXPORT_SYMBOL(ucc_slow_enable); 93 ··· 99 us_regs = uccs->us_regs; 100 101 /* Disable reception and/or transmission on this UCC. */ 102 + gumr_l = qe_ioread32be(&us_regs->gumr_l); 103 if (mode & COMM_DIR_TX) { 104 gumr_l &= ~UCC_SLOW_GUMR_L_ENT; 105 uccs->enabled_tx = 0; ··· 108 gumr_l &= ~UCC_SLOW_GUMR_L_ENR; 109 uccs->enabled_rx = 0; 110 } 111 + qe_iowrite32be(gumr_l, &us_regs->gumr_l); 112 } 113 EXPORT_SYMBOL(ucc_slow_disable); 114 ··· 154 __func__); 155 return -ENOMEM; 156 } 157 + uccs->rx_base_offset = -1; 158 + uccs->tx_base_offset = -1; 159 + uccs->us_pram_offset = -1; 160 161 /* Fill slow UCC structure */ 162 uccs->us_info = us_info; ··· 179 /* Get PRAM base */ 180 uccs->us_pram_offset = 181 qe_muram_alloc(UCC_SLOW_PRAM_SIZE, ALIGNMENT_OF_UCC_SLOW_PRAM); 182 + if (uccs->us_pram_offset < 0) { 183 printk(KERN_ERR "%s: cannot allocate MURAM for PRAM", __func__); 184 ucc_slow_free(uccs); 185 return -ENOMEM; ··· 198 return ret; 199 } 200 201 + qe_iowrite16be(us_info->max_rx_buf_length, &uccs->us_pram->mrblr); 202 203 INIT_LIST_HEAD(&uccs->confQ); 204 ··· 206 uccs->rx_base_offset = 207 qe_muram_alloc(us_info->rx_bd_ring_len * sizeof(struct qe_bd), 208 QE_ALIGNMENT_OF_BD); 209 + if (uccs->rx_base_offset < 0) { 210 printk(KERN_ERR "%s: cannot allocate %u RX BDs\n", __func__, 211 us_info->rx_bd_ring_len); 212 ucc_slow_free(uccs); 213 return -ENOMEM; 214 } ··· 217 uccs->tx_base_offset = 218 qe_muram_alloc(us_info->tx_bd_ring_len * sizeof(struct qe_bd), 219 QE_ALIGNMENT_OF_BD); 220 + if (uccs->tx_base_offset < 0) { 221 printk(KERN_ERR "%s: cannot allocate TX BDs", __func__); 222 ucc_slow_free(uccs); 223 return -ENOMEM; 224 } ··· 228 bd = uccs->confBd = uccs->tx_bd = qe_muram_addr(uccs->tx_base_offset); 229 for (i = 0; i < us_info->tx_bd_ring_len - 1; i++) { 230 /* clear bd buffer */ 231 + qe_iowrite32be(0, &bd->buf); 232 /* set bd status and length */ 233 + qe_iowrite32be(0, (u32 *)bd); 234 bd++; 235 } 236 /* for last BD set Wrap bit */ 237 + qe_iowrite32be(0, &bd->buf); 238 + qe_iowrite32be(cpu_to_be32(T_W), (u32 *)bd); 239 240 /* Init Rx bds */ 241 bd = uccs->rx_bd = qe_muram_addr(uccs->rx_base_offset); 242 for (i = 0; i < us_info->rx_bd_ring_len - 1; i++) { 243 /* set bd status and length */ 244 + qe_iowrite32be(0, (u32 *)bd); 245 /* clear bd buffer */ 246 + qe_iowrite32be(0, &bd->buf); 247 bd++; 248 } 249 /* for last BD set Wrap bit */ 250 + qe_iowrite32be(cpu_to_be32(R_W), (u32 *)bd); 251 + qe_iowrite32be(0, &bd->buf); 252 253 /* Set GUMR (For more details see the hardware spec.). 
*/ 254 /* gumr_h */ ··· 269 gumr |= UCC_SLOW_GUMR_H_TXSY; 270 if (us_info->rtsm) 271 gumr |= UCC_SLOW_GUMR_H_RTSM; 272 + qe_iowrite32be(gumr, &us_regs->gumr_h); 273 274 /* gumr_l */ 275 gumr = us_info->tdcr | us_info->rdcr | us_info->tenc | us_info->renc | ··· 282 gumr |= UCC_SLOW_GUMR_L_TINV; 283 if (us_info->tend) 284 gumr |= UCC_SLOW_GUMR_L_TEND; 285 + qe_iowrite32be(gumr, &us_regs->gumr_l); 286 287 /* Function code registers */ 288 ··· 292 uccs->us_pram->rbmr = UCC_BMR_BO_BE; 293 294 /* rbase, tbase are offsets from MURAM base */ 295 + qe_iowrite16be(uccs->rx_base_offset, &uccs->us_pram->rbase); 296 + qe_iowrite16be(uccs->tx_base_offset, &uccs->us_pram->tbase); 297 298 /* Mux clocking */ 299 /* Grant Support */ ··· 323 } 324 325 /* Set interrupt mask register at UCC level. */ 326 + qe_iowrite16be(us_info->uccm_mask, &us_regs->uccm); 327 328 /* First, clear anything pending at UCC level, 329 * otherwise, old garbage may come through 330 * as soon as the dam is opened. */ 331 332 /* Writing '1' clears */ 333 + qe_iowrite16be(0xffff, &us_regs->ucce); 334 335 /* Issue QE Init command */ 336 if (us_info->init_tx && us_info->init_rx) ··· 352 if (!uccs) 353 return; 354 355 + qe_muram_free(uccs->rx_base_offset); 356 + qe_muram_free(uccs->tx_base_offset); 357 + qe_muram_free(uccs->us_pram_offset); 358 359 if (uccs->us_regs) 360 iounmap(uccs->us_regs);
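The ucc_slow rework above is representative of the whole QUICC Engine series in this pull: PowerPC-only accessors (out_be32()/in_be16()) become qe_io*() wrappers so the code can build on ARM and ARM64 as well as PPC. A minimal sketch of the idea, assuming the wrappers reduce to the generic big-endian MMIO helpers (the my_qe_* names are illustrative, not the kernel's):

    #include <linux/io.h>

    /* Sketch: endian-explicit MMIO wrappers in the style of the qe_io*()
     * helpers. On PPC the old out_be32()/in_be32() were used directly;
     * the generic iowrite32be()/ioread32be() forms are portable. */
    static inline u32 my_qe_ioread32be(const void __iomem *addr)
    {
            return ioread32be(addr);
    }

    static inline void my_qe_iowrite32be(u32 val, void __iomem *addr)
    {
            iowrite32be(val, addr);
    }

Note the value-first argument order, mirroring iowrite32be() rather than the old out_be32(addr, val); that is why every call site above is rewritten rather than just renamed.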
+1 -1
drivers/soc/fsl/qe/usb.c
··· 43 44 spin_lock_irqsave(&cmxgcr_lock, flags); 45 46 - clrsetbits_be32(&mux->cmxgcr, QE_CMXGCR_USBCS, val); 47 48 spin_unlock_irqrestore(&cmxgcr_lock, flags); 49
··· 43 44 spin_lock_irqsave(&cmxgcr_lock, flags); 45 46 + qe_clrsetbits_be32(&mux->cmxgcr, QE_CMXGCR_USBCS, val); 47 48 spin_unlock_irqrestore(&cmxgcr_lock, flags); 49
+1 -1
drivers/soc/imx/Kconfig
··· 10 11 config IMX_SCU_SOC 12 bool "i.MX System Controller Unit SoC info support" 13 - depends on IMX_SCU 14 select SOC_BUS 15 help 16 If you say yes here you get support for the NXP i.MX System
··· 10 11 config IMX_SCU_SOC 12 bool "i.MX System Controller Unit SoC info support" 13 + depends on IMX_SCU || COMPILE_TEST 14 select SOC_BUS 15 help 16 If you say yes here you get support for the NXP i.MX System
+9
drivers/soc/imx/soc-imx8.c
··· 142 .soc_revision = imx8mm_soc_revision, 143 }; 144 145 static const struct of_device_id imx8_soc_match[] = { 146 { .compatible = "fsl,imx8mq", .data = &imx8mq_soc_data, }, 147 { .compatible = "fsl,imx8mm", .data = &imx8mm_soc_data, }, 148 { .compatible = "fsl,imx8mn", .data = &imx8mn_soc_data, }, 149 { } 150 }; 151 ··· 209 ret = PTR_ERR(soc_dev); 210 goto free_serial_number; 211 } 212 213 if (IS_ENABLED(CONFIG_ARM_IMX_CPUFREQ_DT)) 214 platform_device_register_simple("imx-cpufreq-dt", -1, NULL, 0);
··· 142 .soc_revision = imx8mm_soc_revision, 143 }; 144 145 + static const struct imx8_soc_data imx8mp_soc_data = { 146 + .name = "i.MX8MP", 147 + .soc_revision = imx8mm_soc_revision, 148 + }; 149 + 150 static const struct of_device_id imx8_soc_match[] = { 151 { .compatible = "fsl,imx8mq", .data = &imx8mq_soc_data, }, 152 { .compatible = "fsl,imx8mm", .data = &imx8mm_soc_data, }, 153 { .compatible = "fsl,imx8mn", .data = &imx8mn_soc_data, }, 154 + { .compatible = "fsl,imx8mp", .data = &imx8mp_soc_data, }, 155 { } 156 }; 157 ··· 203 ret = PTR_ERR(soc_dev); 204 goto free_serial_number; 205 } 206 + 207 + pr_info("SoC: %s revision %s\n", soc_dev_attr->soc_id, 208 + soc_dev_attr->revision); 209 210 if (IS_ENABLED(CONFIG_ARM_IMX_CPUFREQ_DT)) 211 platform_device_register_simple("imx-cpufreq-dt", -1, NULL, 0);
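The new fsl,imx8mp entry is matched against the machine's root compatible at probe time; roughly, the lookup works like this sketch (imx8_get_soc_data() is a hypothetical helper, not part of the driver):

    #include <linux/of.h>

    /* Sketch: resolve the per-SoC data from the match table above by
     * comparing it against the device tree root node. */
    static const struct imx8_soc_data *imx8_get_soc_data(void)
    {
            const struct of_device_id *id;
            struct device_node *root = of_find_node_by_path("/");

            id = of_match_node(imx8_soc_match, root);
            of_node_put(root);

            return id ? id->data : NULL;
    }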
-2
drivers/soc/mediatek/mtk-cmdq-helper.c
··· 12 #define CMDQ_WRITE_ENABLE_MASK BIT(0) 13 #define CMDQ_POLL_ENABLE_MASK BIT(0) 14 #define CMDQ_EOC_IRQ_EN BIT(0) 15 - #define CMDQ_EOC_CMD ((u64)((CMDQ_CODE_EOC << CMDQ_OP_CODE_SHIFT)) \ 16 - << 32 | CMDQ_EOC_IRQ_EN) 17 18 struct cmdq_instruction { 19 union {
··· 12 #define CMDQ_WRITE_ENABLE_MASK BIT(0) 13 #define CMDQ_POLL_ENABLE_MASK BIT(0) 14 #define CMDQ_EOC_IRQ_EN BIT(0) 15 16 struct cmdq_instruction { 17 union {
+15 -15
drivers/soc/qcom/Kconfig
··· 45 neighboring subsystems going up or down. 46 47 config QCOM_GSBI 48 - tristate "QCOM General Serial Bus Interface" 49 - depends on ARCH_QCOM || COMPILE_TEST 50 - select MFD_SYSCON 51 - help 52 - Say y here to enable GSBI support. The GSBI provides control 53 - functions for connecting the underlying serial UART, SPI, and I2C 54 - devices to the output pins. 55 56 config QCOM_LLCC 57 tristate "Qualcomm Technologies, Inc. LLCC driver" ··· 71 depends on ARCH_QCOM 72 select QCOM_SCM 73 help 74 - The On Chip Memory (OCMEM) allocator allows various clients to 75 - allocate memory from OCMEM based on performance, latency and power 76 - requirements. This is typically used by the GPU, camera/video, and 77 - audio components on some Snapdragon SoCs. 78 79 config QCOM_PM 80 bool "Qualcomm Power Management" ··· 198 depends on ARCH_QCOM || COMPILE_TEST 199 depends on RPMSG 200 help 201 - Enable APR IPC protocol support between 202 - application processor and QDSP6. APR is 203 - used by audio driver to configure QDSP6 204 - ASM, ADM and AFE modules. 205 endmenu
··· 45 neighboring subsystems going up or down. 46 47 config QCOM_GSBI 48 + tristate "QCOM General Serial Bus Interface" 49 + depends on ARCH_QCOM || COMPILE_TEST 50 + select MFD_SYSCON 51 + help 52 + Say y here to enable GSBI support. The GSBI provides control 53 + functions for connecting the underlying serial UART, SPI, and I2C 54 + devices to the output pins. 55 56 config QCOM_LLCC 57 tristate "Qualcomm Technologies, Inc. LLCC driver" ··· 71 depends on ARCH_QCOM 72 select QCOM_SCM 73 help 74 + The On Chip Memory (OCMEM) allocator allows various clients to 75 + allocate memory from OCMEM based on performance, latency and power 76 + requirements. This is typically used by the GPU, camera/video, and 77 + audio components on some Snapdragon SoCs. 78 79 config QCOM_PM 80 bool "Qualcomm Power Management" ··· 198 depends on ARCH_QCOM || COMPILE_TEST 199 depends on RPMSG 200 help 201 + Enable APR IPC protocol support between 202 + application processor and QDSP6. APR is 203 + used by audio driver to configure QDSP6 204 + ASM, ADM and AFE modules. 205 endmenu
+6 -2
drivers/soc/qcom/qmi_interface.c
··· 655 656 qmi->sock = qmi_sock_create(qmi, &qmi->sq); 657 if (IS_ERR(qmi->sock)) { 658 - pr_err("failed to create QMI socket\n"); 659 - ret = PTR_ERR(qmi->sock); 660 goto err_destroy_wq; 661 } 662
··· 655 656 qmi->sock = qmi_sock_create(qmi, &qmi->sq); 657 if (IS_ERR(qmi->sock)) { 658 + if (PTR_ERR(qmi->sock) == -EAFNOSUPPORT) { 659 + ret = -EPROBE_DEFER; 660 + } else { 661 + pr_err("failed to create QMI socket\n"); 662 + ret = PTR_ERR(qmi->sock); 663 + } 664 goto err_destroy_wq; 665 } 666
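Mapping -EAFNOSUPPORT to -EPROBE_DEFER lets callers of qmi_handle_init() probe before the AF_QIPCRTR socket family has been registered; the driver core simply retries the probe later. A sketch of the consumer side, where my_qmi_client, the empty handler table and the 64-byte receive buffer size are illustrative assumptions:

    #include <linux/platform_device.h>
    #include <linux/soc/qcom/qmi.h>

    struct my_qmi_client {
            struct qmi_handle qmi;
    };

    /* Empty, sentinel-terminated message handler table. */
    static const struct qmi_msg_handler my_handlers[] = { {} };

    static int my_qmi_client_probe(struct platform_device *pdev)
    {
            struct my_qmi_client *client;
            int ret;

            client = devm_kzalloc(&pdev->dev, sizeof(*client), GFP_KERNEL);
            if (!client)
                    return -ENOMEM;

            /* With the change above this returns -EPROBE_DEFER, instead of
             * failing hard, when the qrtr address family is not up yet. */
            ret = qmi_handle_init(&client->qmi, 64, NULL, my_handlers);
            if (ret < 0)
                    return ret;

            platform_set_drvdata(pdev, client);
            return 0;
    }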
+56
drivers/soc/qcom/rpmhpd.c
··· 93 94 static struct rpmhpd sdm845_mx_ao = { 95 .pd = { .name = "mx_ao", }, 96 .peer = &sdm845_mx, 97 .res_name = "mx.lvl", 98 }; ··· 108 109 static struct rpmhpd sdm845_cx_ao = { 110 .pd = { .name = "cx_ao", }, 111 .peer = &sdm845_cx, 112 .parent = &sdm845_mx_ao.pd, 113 .res_name = "cx.lvl", ··· 131 .num_pds = ARRAY_SIZE(sdm845_rpmhpds), 132 }; 133 134 static const struct of_device_id rpmhpd_match_table[] = { 135 { .compatible = "qcom,sdm845-rpmhpd", .data = &sdm845_desc }, 136 { } 137 }; 138
··· 93 94 static struct rpmhpd sdm845_mx_ao = { 95 .pd = { .name = "mx_ao", }, 96 + .active_only = true, 97 .peer = &sdm845_mx, 98 .res_name = "mx.lvl", 99 }; ··· 107 108 static struct rpmhpd sdm845_cx_ao = { 109 .pd = { .name = "cx_ao", }, 110 + .active_only = true, 111 .peer = &sdm845_cx, 112 .parent = &sdm845_mx_ao.pd, 113 .res_name = "cx.lvl", ··· 129 .num_pds = ARRAY_SIZE(sdm845_rpmhpds), 130 }; 131 132 + /* SM8150 RPMH powerdomains */ 133 + 134 + static struct rpmhpd sm8150_mmcx_ao; 135 + static struct rpmhpd sm8150_mmcx = { 136 + .pd = { .name = "mmcx", }, 137 + .peer = &sm8150_mmcx_ao, 138 + .res_name = "mmcx.lvl", 139 + }; 140 + 141 + static struct rpmhpd sm8150_mmcx_ao = { 142 + .pd = { .name = "mmcx_ao", }, 143 + .active_only = true, 144 + .peer = &sm8150_mmcx, 145 + .res_name = "mmcx.lvl", 146 + }; 147 + 148 + static struct rpmhpd *sm8150_rpmhpds[] = { 149 + [SM8150_MSS] = &sdm845_mss, 150 + [SM8150_EBI] = &sdm845_ebi, 151 + [SM8150_LMX] = &sdm845_lmx, 152 + [SM8150_LCX] = &sdm845_lcx, 153 + [SM8150_GFX] = &sdm845_gfx, 154 + [SM8150_MX] = &sdm845_mx, 155 + [SM8150_MX_AO] = &sdm845_mx_ao, 156 + [SM8150_CX] = &sdm845_cx, 157 + [SM8150_CX_AO] = &sdm845_cx_ao, 158 + [SM8150_MMCX] = &sm8150_mmcx, 159 + [SM8150_MMCX_AO] = &sm8150_mmcx_ao, 160 + }; 161 + 162 + static const struct rpmhpd_desc sm8150_desc = { 163 + .rpmhpds = sm8150_rpmhpds, 164 + .num_pds = ARRAY_SIZE(sm8150_rpmhpds), 165 + }; 166 + 167 + /* SC7180 RPMH powerdomains */ 168 + static struct rpmhpd *sc7180_rpmhpds[] = { 169 + [SC7180_CX] = &sdm845_cx, 170 + [SC7180_CX_AO] = &sdm845_cx_ao, 171 + [SC7180_GFX] = &sdm845_gfx, 172 + [SC7180_MX] = &sdm845_mx, 173 + [SC7180_MX_AO] = &sdm845_mx_ao, 174 + [SC7180_LMX] = &sdm845_lmx, 175 + [SC7180_LCX] = &sdm845_lcx, 176 + [SC7180_MSS] = &sdm845_mss, 177 + }; 178 + 179 + static const struct rpmhpd_desc sc7180_desc = { 180 + .rpmhpds = sc7180_rpmhpds, 181 + .num_pds = ARRAY_SIZE(sc7180_rpmhpds), 182 + }; 183 + 184 static const struct of_device_id rpmhpd_match_table[] = { 185 + { .compatible = "qcom,sc7180-rpmhpd", .data = &sc7180_desc }, 186 { .compatible = "qcom,sdm845-rpmhpd", .data = &sdm845_desc }, 187 + { .compatible = "qcom,sm8150-rpmhpd", .data = &sm8150_desc }, 188 { } 189 }; 190
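The SM8150 and SC7180 tables reuse the SDM845 domain objects wholesale; only MMCX is new, and its mmcx_ao twin votes on the same mmcx.lvl RPMh resource for the active-only vote set. Consumers reference entries by their dt-bindings index, which the provider resolves against the table along the lines of this hypothetical helper:

    #include <linux/err.h>

    /* Sketch: map a DT power-domain cell index (e.g. SM8150_MMCX) to an
     * entry of the per-SoC table; the probe path registers one genpd per
     * such entry. */
    static struct rpmhpd *rpmhpd_lookup(const struct rpmhpd_desc *desc,
                                        unsigned int index)
    {
            if (index >= desc->num_pds)
                    return ERR_PTR(-EINVAL);

            return desc->rpmhpds[index];
    }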
+9 -5
drivers/soc/renesas/Kconfig
··· 192 help 193 This enables support for the Renesas RZ/G2E SoC. 194 195 config ARCH_R8A7795 196 bool "Renesas R-Car H3 SoC Platform" 197 select ARCH_RCAR_GEN3 198 select SYSC_R8A7795 199 help 200 This enables support for the Renesas R-Car H3 SoC. 201 202 config ARCH_R8A77960 203 - bool 204 select ARCH_RCAR_GEN3 205 select SYSC_R8A77960 206 - 207 - config ARCH_R8A7796 208 - bool "Renesas R-Car M3-W SoC Platform" 209 - select ARCH_R8A77960 210 help 211 This enables support for the Renesas R-Car M3-W SoC. 212
··· 192 help 193 This enables support for the Renesas RZ/G2E SoC. 194 195 + config ARCH_R8A77950 196 + bool 197 + 198 + config ARCH_R8A77951 199 + bool 200 + 201 config ARCH_R8A7795 202 bool "Renesas R-Car H3 SoC Platform" 203 + select ARCH_R8A77950 204 + select ARCH_R8A77951 205 select ARCH_RCAR_GEN3 206 select SYSC_R8A7795 207 help 208 This enables support for the Renesas R-Car H3 SoC. 209 210 config ARCH_R8A77960 211 + bool "Renesas R-Car M3-W SoC Platform" 212 select ARCH_RCAR_GEN3 213 select SYSC_R8A77960 214 help 215 This enables support for the Renesas R-Car M3-W SoC. 216
+1 -1
drivers/soc/renesas/rcar-rst.c
··· 21 22 struct rst_config { 23 unsigned int modemr; /* Mode Monitoring Register Offset */ 24 - int (*configure)(void *base); /* Platform specific configuration */ 25 }; 26 27 static const struct rst_config rcar_rst_gen1 __initconst = {
··· 21 22 struct rst_config { 23 unsigned int modemr; /* Mode Monitoring Register Offset */ 24 + int (*configure)(void __iomem *base); /* Platform specific config */ 25 }; 26 27 static const struct rst_config rcar_rst_gen1 __initconst = {
+1 -1
drivers/soc/samsung/Kconfig
··· 1 # SPDX-License-Identifier: GPL-2.0 2 # 3 - # SAMSUNG SoC drivers 4 # 5 menuconfig SOC_SAMSUNG 6 bool "Samsung SoC driver support" if COMPILE_TEST
··· 1 # SPDX-License-Identifier: GPL-2.0 2 # 3 + # Samsung SoC drivers 4 # 5 menuconfig SOC_SAMSUNG 6 bool "Samsung SoC driver support" if COMPILE_TEST
+1 -1
drivers/soc/samsung/exynos-chipid.c
··· 3 * Copyright (c) 2019 Samsung Electronics Co., Ltd. 4 * http://www.samsung.com/ 5 * 6 - * EXYNOS - CHIP ID support 7 * Author: Pankaj Dubey <pankaj.dubey@samsung.com> 8 * Author: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com> 9 */
··· 3 * Copyright (c) 2019 Samsung Electronics Co., Ltd. 4 * http://www.samsung.com/ 5 * 6 + * Exynos - CHIP ID support 7 * Author: Pankaj Dubey <pankaj.dubey@samsung.com> 8 * Author: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com> 9 */
+2 -4
drivers/soc/samsung/exynos-pmu.c
··· 3 // Copyright (c) 2011-2014 Samsung Electronics Co., Ltd. 4 // http://www.samsung.com/ 5 // 6 - // EXYNOS - CPU PMU(Power Management Unit) support 7 8 #include <linux/of.h> 9 #include <linux/of_address.h> ··· 110 static int exynos_pmu_probe(struct platform_device *pdev) 111 { 112 struct device *dev = &pdev->dev; 113 - struct resource *res; 114 115 - res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 116 - pmu_base_addr = devm_ioremap_resource(dev, res); 117 if (IS_ERR(pmu_base_addr)) 118 return PTR_ERR(pmu_base_addr); 119
··· 3 // Copyright (c) 2011-2014 Samsung Electronics Co., Ltd. 4 // http://www.samsung.com/ 5 // 6 + // Exynos - CPU PMU(Power Management Unit) support 7 8 #include <linux/of.h> 9 #include <linux/of_address.h> ··· 110 static int exynos_pmu_probe(struct platform_device *pdev) 111 { 112 struct device *dev = &pdev->dev; 113 114 + pmu_base_addr = devm_platform_ioremap_resource(pdev, 0); 115 if (IS_ERR(pmu_base_addr)) 116 return PTR_ERR(pmu_base_addr); 117
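devm_platform_ioremap_resource(pdev, 0), as used above, is simply the fused form of the two-step pattern it replaces; for reference:

    #include <linux/io.h>
    #include <linux/platform_device.h>

    /* Equivalent open-coded form of devm_platform_ioremap_resource(pdev, 0). */
    static void __iomem *my_map_first_mem(struct platform_device *pdev)
    {
            struct resource *res;

            res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
            return devm_ioremap_resource(&pdev->dev, res);
    }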
+1 -1
drivers/soc/samsung/exynos-pmu.h
··· 3 * Copyright (c) 2015 Samsung Electronics Co., Ltd. 4 * http://www.samsung.com 5 * 6 - * Header for EXYNOS PMU Driver support 7 */ 8 9 #ifndef __EXYNOS_PMU_H
··· 3 * Copyright (c) 2015 Samsung Electronics Co., Ltd. 4 * http://www.samsung.com 5 * 6 + * Header for Exynos PMU Driver support 7 */ 8 9 #ifndef __EXYNOS_PMU_H
+1 -1
drivers/soc/samsung/exynos3250-pmu.c
··· 3 // Copyright (c) 2011-2015 Samsung Electronics Co., Ltd. 4 // http://www.samsung.com/ 5 // 6 - // EXYNOS3250 - CPU PMU (Power Management Unit) support 7 8 #include <linux/soc/samsung/exynos-regs-pmu.h> 9 #include <linux/soc/samsung/exynos-pmu.h>
··· 3 // Copyright (c) 2011-2015 Samsung Electronics Co., Ltd. 4 // http://www.samsung.com/ 5 // 6 + // Exynos3250 - CPU PMU (Power Management Unit) support 7 8 #include <linux/soc/samsung/exynos-regs-pmu.h> 9 #include <linux/soc/samsung/exynos-pmu.h>
+1 -1
drivers/soc/samsung/exynos4-pmu.c
··· 3 // Copyright (c) 2011-2015 Samsung Electronics Co., Ltd. 4 // http://www.samsung.com/ 5 // 6 - // EXYNOS4 - CPU PMU(Power Management Unit) support 7 8 #include <linux/soc/samsung/exynos-regs-pmu.h> 9 #include <linux/soc/samsung/exynos-pmu.h>
··· 3 // Copyright (c) 2011-2015 Samsung Electronics Co., Ltd. 4 // http://www.samsung.com/ 5 // 6 + // Exynos4 - CPU PMU(Power Management Unit) support 7 8 #include <linux/soc/samsung/exynos-regs-pmu.h> 9 #include <linux/soc/samsung/exynos-pmu.h>
+1 -1
drivers/soc/samsung/exynos5250-pmu.c
··· 3 // Copyright (c) 2011-2015 Samsung Electronics Co., Ltd. 4 // http://www.samsung.com/ 5 // 6 - // EXYNOS5250 - CPU PMU (Power Management Unit) support 7 8 #include <linux/soc/samsung/exynos-regs-pmu.h> 9 #include <linux/soc/samsung/exynos-pmu.h>
··· 3 // Copyright (c) 2011-2015 Samsung Electronics Co., Ltd. 4 // http://www.samsung.com/ 5 // 6 + // Exynos5250 - CPU PMU (Power Management Unit) support 7 8 #include <linux/soc/samsung/exynos-regs-pmu.h> 9 #include <linux/soc/samsung/exynos-pmu.h>
+1 -1
drivers/soc/samsung/exynos5420-pmu.c
··· 3 // Copyright (c) 2011-2015 Samsung Electronics Co., Ltd. 4 // http://www.samsung.com/ 5 // 6 - // EXYNOS5420 - CPU PMU (Power Management Unit) support 7 8 #include <linux/pm.h> 9 #include <linux/soc/samsung/exynos-regs-pmu.h>
··· 3 // Copyright (c) 2011-2015 Samsung Electronics Co., Ltd. 4 // http://www.samsung.com/ 5 // 6 + // Exynos5420 - CPU PMU (Power Management Unit) support 7 8 #include <linux/pm.h> 9 #include <linux/soc/samsung/exynos-regs-pmu.h>
+1
drivers/soc/tegra/Kconfig
··· 126 def_bool y 127 depends on ARCH_TEGRA 128 select SOC_BUS 129 130 config SOC_TEGRA_FLOWCTRL 131 bool
··· 126 def_bool y 127 depends on ARCH_TEGRA 128 select SOC_BUS 129 + select TEGRA20_APB_DMA if ARCH_TEGRA_2x_SOC 130 131 config SOC_TEGRA_FLOWCTRL 132 bool
+3
drivers/soc/tegra/fuse/fuse-tegra.c
··· 49 }; 50 51 static const struct of_device_id tegra_fuse_match[] = { 52 #ifdef CONFIG_ARCH_TEGRA_186_SOC 53 { .compatible = "nvidia,tegra186-efuse", .data = &tegra186_fuse_soc }, 54 #endif
··· 49 }; 50 51 static const struct of_device_id tegra_fuse_match[] = { 52 + #ifdef CONFIG_ARCH_TEGRA_194_SOC 53 + { .compatible = "nvidia,tegra194-efuse", .data = &tegra194_fuse_soc }, 54 + #endif 55 #ifdef CONFIG_ARCH_TEGRA_186_SOC 56 { .compatible = "nvidia,tegra186-efuse", .data = &tegra186_fuse_soc }, 57 #endif
+29
drivers/soc/tegra/fuse/fuse-tegra30.c
··· 320 .num_lookups = ARRAY_SIZE(tegra186_fuse_lookups), 321 }; 322 #endif
··· 320 .num_lookups = ARRAY_SIZE(tegra186_fuse_lookups), 321 }; 322 #endif 323 + 324 + #if defined(CONFIG_ARCH_TEGRA_194_SOC) 325 + static const struct nvmem_cell_lookup tegra194_fuse_lookups[] = { 326 + { 327 + .nvmem_name = "fuse", 328 + .cell_name = "xusb-pad-calibration", 329 + .dev_id = "3520000.padctl", 330 + .con_id = "calibration", 331 + }, { 332 + .nvmem_name = "fuse", 333 + .cell_name = "xusb-pad-calibration-ext", 334 + .dev_id = "3520000.padctl", 335 + .con_id = "calibration-ext", 336 + }, 337 + }; 338 + 339 + static const struct tegra_fuse_info tegra194_fuse_info = { 340 + .read = tegra30_fuse_read, 341 + .size = 0x300, 342 + .spare = 0x280, 343 + }; 344 + 345 + const struct tegra_fuse_soc tegra194_fuse_soc = { 346 + .init = tegra30_fuse_init, 347 + .info = &tegra194_fuse_info, 348 + .lookups = tegra194_fuse_lookups, 349 + .num_lookups = ARRAY_SIZE(tegra194_fuse_lookups), 350 + }; 351 + #endif
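The lookups above wire fuse cells to the XUSB pad controller by device name and con_id; a consumer then reads them through the nvmem API. A sketch with a hypothetical read_calibration() helper:

    #include <linux/nvmem-consumer.h>
    #include <linux/slab.h>

    static int read_calibration(struct device *dev, u32 *value)
    {
            struct nvmem_cell *cell;
            size_t len;
            void *buf;

            /* "calibration" is the con_id from the lookup table above. */
            cell = nvmem_cell_get(dev, "calibration");
            if (IS_ERR(cell))
                    return PTR_ERR(cell);

            buf = nvmem_cell_read(cell, &len);
            nvmem_cell_put(cell);
            if (IS_ERR(buf))
                    return PTR_ERR(buf);

            if (len >= sizeof(*value))
                    memcpy(value, buf, sizeof(*value));
            kfree(buf);

            return len >= sizeof(*value) ? 0 : -EINVAL;
    }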
+4
drivers/soc/tegra/fuse/fuse.h
··· 108 extern const struct tegra_fuse_soc tegra186_fuse_soc; 109 #endif 110 111 #endif
··· 108 extern const struct tegra_fuse_soc tegra186_fuse_soc; 109 #endif 110 111 + #ifdef CONFIG_ARCH_TEGRA_194_SOC 112 + extern const struct tegra_fuse_soc tegra194_fuse_soc; 113 + #endif 114 + 115 #endif
+20 -14
drivers/soc/tegra/fuse/tegra-apbmisc.c
··· 21 #define PMC_STRAPPING_OPT_A_RAM_CODE_MASK_SHORT \ 22 (0x3 << PMC_STRAPPING_OPT_A_RAM_CODE_SHIFT) 23 24 - static void __iomem *apbmisc_base; 25 - static void __iomem *strapping_base; 26 static bool long_ram_code; 27 28 u32 tegra_read_chipid(void) 29 { 30 - if (!apbmisc_base) { 31 - WARN(1, "Tegra Chip ID not yet available\n"); 32 - return 0; 33 - } 34 35 - return readl_relaxed(apbmisc_base + 4); 36 } 37 38 u8 tegra_get_chip_id(void) ··· 39 40 u32 tegra_read_straps(void) 41 { 42 - if (strapping_base) 43 - return readl_relaxed(strapping_base); 44 - else 45 - return 0; 46 } 47 48 u32 tegra_read_ram_code(void) ··· 59 static const struct of_device_id apbmisc_match[] __initconst = { 60 { .compatible = "nvidia,tegra20-apbmisc", }, 61 { .compatible = "nvidia,tegra186-misc", }, 62 {}, 63 }; 64 ··· 100 101 void __init tegra_init_apbmisc(void) 102 { 103 struct resource apbmisc, straps; 104 struct device_node *np; 105 ··· 121 apbmisc.flags = IORESOURCE_MEM; 122 123 /* strapping options */ 124 - if (tegra_get_chip_id() == TEGRA124) { 125 straps.start = 0x7000e864; 126 straps.end = 0x7000e867; 127 } else { ··· 158 } 159 160 apbmisc_base = ioremap(apbmisc.start, resource_size(&apbmisc)); 161 - if (!apbmisc_base) 162 pr_err("failed to map APBMISC registers\n"); 163 164 strapping_base = ioremap(straps.start, resource_size(&straps)); 165 - if (!strapping_base) 166 pr_err("failed to map strapping options registers\n"); 167 168 long_ram_code = of_property_read_bool(np, "nvidia,long-ram-code"); 169 }
··· 21 #define PMC_STRAPPING_OPT_A_RAM_CODE_MASK_SHORT \ 22 (0x3 << PMC_STRAPPING_OPT_A_RAM_CODE_SHIFT) 23 24 static bool long_ram_code; 25 + static u32 strapping; 26 + static u32 chipid; 27 28 u32 tegra_read_chipid(void) 29 { 30 + WARN(!chipid, "Tegra APB MISC not yet available\n"); 31 32 + return chipid; 33 } 34 35 u8 tegra_get_chip_id(void) ··· 42 43 u32 tegra_read_straps(void) 44 { 45 + WARN(!chipid, "Tegra APB MISC not yet available\n"); 46 + 47 + return strapping; 48 } 49 50 u32 tegra_read_ram_code(void) ··· 63 64 static const struct of_device_id apbmisc_match[] __initconst = { 65 { .compatible = "nvidia,tegra20-apbmisc", }, 66 { .compatible = "nvidia,tegra186-misc", }, 67 + { .compatible = "nvidia,tegra194-misc", }, 68 {}, 69 }; ··· 103 104 void __init tegra_init_apbmisc(void) 105 { 106 + void __iomem *apbmisc_base, *strapping_base; 107 struct resource apbmisc, straps; 108 struct device_node *np; ··· 123 apbmisc.flags = IORESOURCE_MEM; 124 125 /* strapping options */ 126 + if (of_machine_is_compatible("nvidia,tegra124")) { 127 straps.start = 0x7000e864; 128 straps.end = 0x7000e867; 129 } else { ··· 160 } 161 162 apbmisc_base = ioremap(apbmisc.start, resource_size(&apbmisc)); 163 + if (!apbmisc_base) { 164 pr_err("failed to map APBMISC registers\n"); 165 + } else { 166 + chipid = readl_relaxed(apbmisc_base + 4); 167 + iounmap(apbmisc_base); 168 + } 169 170 strapping_base = ioremap(straps.start, resource_size(&straps)); 171 + if (!strapping_base) { 172 pr_err("failed to map strapping options registers\n"); 173 + } else { 174 + strapping = readl_relaxed(strapping_base); 175 + iounmap(strapping_base); 176 + } 177 178 long_ram_code = of_property_read_bool(np, "nvidia,long-ram-code"); 179 }
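The rework above maps the APB MISC and strapping registers just long enough to latch their values into statics, then unmaps them, so tegra_read_chipid()/tegra_read_straps() never touch hardware after early init. The same pattern in isolation (cache_id() and the register offset are illustrative):

    #include <linux/init.h>
    #include <linux/io.h>

    static u32 cached_id;

    /* Map, latch, unmap: no __iomem pointer survives early init. */
    static void __init cache_id(phys_addr_t base, size_t size)
    {
            void __iomem *regs = ioremap(base, size);

            if (!regs)
                    return;

            cached_id = readl_relaxed(regs + 4); /* offset 4: hypothetical ID reg */
            iounmap(regs);
    }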
+7 -1
drivers/soc/tegra/regulators-tegra20.c
··· 162 core_target_uV = max(rtc_uV - max_spread, core_target_uV); 163 } 164 165 err = regulator_set_voltage_rdev(core_rdev, 166 core_target_uV, 167 core_max_uV, ··· 173 return err; 174 175 core_uV = core_target_uV; 176 - 177 if (rtc_uV < rtc_min_uV) { 178 rtc_target_uV = min(rtc_uV + max_spread, rtc_min_uV); 179 rtc_target_uV = min(core_uV + max_spread, rtc_target_uV); ··· 181 rtc_target_uV = max(rtc_uV - max_spread, rtc_min_uV); 182 rtc_target_uV = max(core_uV - max_spread, rtc_target_uV); 183 } 184 185 err = regulator_set_voltage_rdev(rtc_rdev, 186 rtc_target_uV,
··· 162 core_target_uV = max(rtc_uV - max_spread, core_target_uV); 163 } 164 165 + if (core_uV == core_target_uV) 166 + goto update_rtc; 167 + 168 err = regulator_set_voltage_rdev(core_rdev, 169 core_target_uV, 170 core_max_uV, ··· 170 return err; 171 172 core_uV = core_target_uV; 173 + update_rtc: 174 if (rtc_uV < rtc_min_uV) { 175 rtc_target_uV = min(rtc_uV + max_spread, rtc_min_uV); 176 rtc_target_uV = min(core_uV + max_spread, rtc_target_uV); ··· 178 rtc_target_uV = max(rtc_uV - max_spread, rtc_min_uV); 179 rtc_target_uV = max(core_uV - max_spread, rtc_target_uV); 180 } 181 + 182 + if (rtc_uV == rtc_target_uV) 183 + continue; 184 185 err = regulator_set_voltage_rdev(rtc_rdev, 186 rtc_target_uV,
+6
drivers/soc/tegra/regulators-tegra30.c
··· 209 cpu_target_uV = max(core_uV - max_spread, cpu_target_uV); 210 } 211 212 err = regulator_set_voltage_rdev(cpu_rdev, 213 cpu_target_uV, 214 cpu_max_uV, ··· 233 } else { 234 core_target_uV = max(core_target_uV, core_uV - core_max_step); 235 } 236 237 err = regulator_set_voltage_rdev(core_rdev, 238 core_target_uV,
··· 209 cpu_target_uV = max(core_uV - max_spread, cpu_target_uV); 210 } 211 212 + if (cpu_uV == cpu_target_uV) 213 + goto update_core; 214 + 215 err = regulator_set_voltage_rdev(cpu_rdev, 216 cpu_target_uV, 217 cpu_max_uV, ··· 230 } else { 231 core_target_uV = max(core_target_uV, core_uV - core_max_step); 232 } 233 + 234 + if (core_uV == core_target_uV) 235 + continue; 236 237 err = regulator_set_voltage_rdev(core_rdev, 238 core_target_uV,
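Both the Tegra20 and Tegra30 couplers now short-circuit when a regulator is already at its target, avoiding redundant traffic for no-op regulator_set_voltage_rdev() calls. The guard, reduced to a sketch (maybe_set() is hypothetical):

    #include <linux/regulator/coupler.h>
    #include <linux/suspend.h>

    /* Skip the call entirely when the regulator is already on target. */
    static int maybe_set(struct regulator_dev *rdev, int cur_uV,
                         int target_uV, int max_uV)
    {
            if (cur_uV == target_uV)
                    return 0;

            return regulator_set_voltage_rdev(rdev, target_uV, max_uV,
                                              PM_SUSPEND_ON);
    }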
+5 -2
drivers/soc/ti/knav_qmss_queue.c
··· 25 26 static struct knav_device *kdev; 27 static DEFINE_MUTEX(knav_dev_lock); 28 29 /* Queue manager register indices in DTS */ 30 #define KNAV_QUEUE_PEEK_REG_INDEX 0 ··· 54 #define knav_queue_idx_to_inst(kdev, idx) \ 55 (kdev->instances + (idx << kdev->inst_shift)) 56 57 - #define for_each_handle_rcu(qh, inst) \ 58 - list_for_each_entry_rcu(qh, &inst->handles, list) 59 60 #define for_each_instance(idx, inst, kdev) \ 61 for (idx = 0, inst = kdev->instances; \
··· 25 26 static struct knav_device *kdev; 27 static DEFINE_MUTEX(knav_dev_lock); 28 + #define knav_dev_lock_held() \ 29 + lockdep_is_held(&knav_dev_lock) 30 31 /* Queue manager register indices in DTS */ 32 #define KNAV_QUEUE_PEEK_REG_INDEX 0 ··· 52 #define knav_queue_idx_to_inst(kdev, idx) \ 53 (kdev->instances + (idx << kdev->inst_shift)) 54 55 + #define for_each_handle_rcu(qh, inst) \ 56 + list_for_each_entry_rcu(qh, &inst->handles, list, \ 57 + knav_dev_lock_held()) 58 59 #define for_each_instance(idx, inst, kdev) \ 60 for (idx = 0, inst = kdev->instances; \
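The extra lockdep expression tells PROVE_RCU that these list walks are also legal under knav_dev_lock, without rcu_read_lock() being held. A self-contained sketch of the same pattern (my_* names are placeholders):

    #include <linux/mutex.h>
    #include <linux/rculist.h>

    static DEFINE_MUTEX(my_lock);
    static LIST_HEAD(my_list);

    struct my_item {
            struct list_head list;
            int val;
    };

    static int my_sum(void)
    {
            struct my_item *it;
            int sum = 0;

            mutex_lock(&my_lock);
            /* The fourth argument tells PROVE_RCU that holding my_lock is
             * also a legitimate way to walk this RCU-protected list. */
            list_for_each_entry_rcu(it, &my_list, list,
                                    lockdep_is_held(&my_lock))
                    sum += it->val;
            mutex_unlock(&my_lock);

            return sum;
    }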
+5 -1
drivers/soc/xilinx/Kconfig
··· 21 bool "Enable Xilinx Zynq MPSoC Power Management driver" 22 depends on PM && ARCH_ZYNQMP 23 default y 24 help 25 Say yes to enable power management support for ZynqMP SoC. 26 This driver uses the firmware driver as an interface for power 27 management requests to firmware. It registers an ISR to handle 28 - power management callbacks from firmware. 29 If in doubt, say N. 30 31 config ZYNQMP_PM_DOMAINS
··· 21 bool "Enable Xilinx Zynq MPSoC Power Management driver" 22 depends on PM && ARCH_ZYNQMP 23 default y 24 + select MAILBOX 25 + select ZYNQMP_IPI_MBOX 26 help 27 Say yes to enable power management support for ZynqMP SoC. 28 This driver uses the firmware driver as an interface for power 29 management requests to firmware. It registers an ISR to handle 30 + power management callbacks from firmware. It also registers a mailbox 31 + client to handle these callbacks over IPI. 32 + 33 If in doubt, say N. 34 35 config ZYNQMP_PM_DOMAINS
+106 -12
drivers/soc/xilinx/zynqmp_power.c
··· 2 /* 3 * Xilinx Zynq MPSoC Power Management 4 * 5 - * Copyright (C) 2014-2018 Xilinx, Inc. 6 * 7 * Davorin Mista <davorin.mista@aggios.com> 8 * Jolly Shah <jollys@xilinx.com> ··· 16 #include <linux/suspend.h> 17 18 #include <linux/firmware/xlnx-zynqmp.h> 19 20 enum pm_suspend_mode { 21 PM_SUSPEND_MODE_FIRST = 0, ··· 46 }; 47 48 static enum pm_suspend_mode suspend_mode = PM_SUSPEND_MODE_STD; 49 - static const struct zynqmp_eemi_ops *eemi_ops; 50 51 enum pm_api_cb_id { 52 PM_INIT_SUSPEND_CB = 30, ··· 80 } 81 82 return IRQ_HANDLED; 83 } 84 85 static ssize_t suspend_mode_show(struct device *dev, ··· 180 { 181 int ret, irq; 182 u32 pm_api_version; 183 184 eemi_ops = zynqmp_pm_get_eemi_ops(); 185 if (IS_ERR(eemi_ops)) ··· 196 if (pm_api_version < ZYNQMP_PM_VERSION) 197 return -ENODEV; 198 199 - irq = platform_get_irq(pdev, 0); 200 - if (irq <= 0) 201 - return -ENXIO; 202 203 - ret = devm_request_threaded_irq(&pdev->dev, irq, NULL, zynqmp_pm_isr, 204 - IRQF_NO_SUSPEND | IRQF_ONESHOT, 205 - dev_name(&pdev->dev), &pdev->dev); 206 - if (ret) { 207 - dev_err(&pdev->dev, "devm_request_threaded_irq '%d' failed " 208 - "with %d\n", irq, ret); 209 - return ret; 210 } 211 212 ret = sysfs_create_file(&pdev->dev.kobj, &dev_attr_suspend_mode.attr); ··· 250 static int zynqmp_pm_remove(struct platform_device *pdev) 251 { 252 sysfs_remove_file(&pdev->dev.kobj, &dev_attr_suspend_mode.attr); 253 254 return 0; 255 }
··· 2 /* 3 * Xilinx Zynq MPSoC Power Management 4 * 5 + * Copyright (C) 2014-2019 Xilinx, Inc. 6 * 7 * Davorin Mista <davorin.mista@aggios.com> 8 * Jolly Shah <jollys@xilinx.com> ··· 16 #include <linux/suspend.h> 17 18 #include <linux/firmware/xlnx-zynqmp.h> 19 + #include <linux/mailbox/zynqmp-ipi-message.h> 20 + 21 + /** 22 + * struct zynqmp_pm_work_struct - Wrapper for struct work_struct 23 + * @callback_work: Work structure 24 + * @args: Callback arguments 25 + */ 26 + struct zynqmp_pm_work_struct { 27 + struct work_struct callback_work; 28 + u32 args[CB_ARG_CNT]; 29 + }; 30 + 31 + static struct zynqmp_pm_work_struct *zynqmp_pm_init_suspend_work; 32 + static struct mbox_chan *rx_chan; 33 + static const struct zynqmp_eemi_ops *eemi_ops; 34 35 enum pm_suspend_mode { 36 PM_SUSPEND_MODE_FIRST = 0, ··· 31 }; 32 33 static enum pm_suspend_mode suspend_mode = PM_SUSPEND_MODE_STD; 34 35 enum pm_api_cb_id { 36 PM_INIT_SUSPEND_CB = 30, ··· 66 } 67 68 return IRQ_HANDLED; 69 + } 70 + 71 + static void ipi_receive_callback(struct mbox_client *cl, void *data) 72 + { 73 + struct zynqmp_ipi_message *msg = (struct zynqmp_ipi_message *)data; 74 + u32 payload[CB_PAYLOAD_SIZE]; 75 + int ret; 76 + 77 + memcpy(payload, msg->data, min(msg->len, sizeof(payload))); 78 + /* First element is callback API ID, others are callback arguments */ 79 + if (payload[0] == PM_INIT_SUSPEND_CB) { 80 + if (work_pending(&zynqmp_pm_init_suspend_work->callback_work)) 81 + return; 82 + 83 + /* Copy callback arguments into work's structure */ 84 + memcpy(zynqmp_pm_init_suspend_work->args, &payload[1], 85 + sizeof(zynqmp_pm_init_suspend_work->args)); 86 + 87 + queue_work(system_unbound_wq, 88 + &zynqmp_pm_init_suspend_work->callback_work); 89 + 90 + /* Send NULL message to mbox controller to ack the message */ 91 + ret = mbox_send_message(rx_chan, NULL); 92 + if (ret) 93 + pr_err("IPI ack failed. Error %d\n", ret); 94 + } 95 + } 96 + 97 + /** 98 + * zynqmp_pm_init_suspend_work_fn - Initialize suspend 99 + * @work: Pointer to work_struct 100 + * 101 + * Bottom-half of PM callback IRQ handler.
102 + */ 103 + static void zynqmp_pm_init_suspend_work_fn(struct work_struct *work) 104 + { 105 + struct zynqmp_pm_work_struct *pm_work = 106 + container_of(work, struct zynqmp_pm_work_struct, callback_work); 107 + 108 + if (pm_work->args[0] == SUSPEND_SYSTEM_SHUTDOWN) { 109 + orderly_poweroff(true); 110 + } else if (pm_work->args[0] == SUSPEND_POWER_REQUEST) { 111 + pm_suspend(PM_SUSPEND_MEM); 112 + } else { 113 + pr_err("%s Unsupported InitSuspendCb reason code %d.\n", 114 + __func__, pm_work->args[0]); 115 + } 116 } 117 118 static ssize_t suspend_mode_show(struct device *dev, ··· 119 { 120 int ret, irq; 121 u32 pm_api_version; 122 + struct mbox_client *client; 123 124 eemi_ops = zynqmp_pm_get_eemi_ops(); 125 if (IS_ERR(eemi_ops)) ··· 134 if (pm_api_version < ZYNQMP_PM_VERSION) 135 return -ENODEV; 136 137 + if (of_find_property(pdev->dev.of_node, "mboxes", NULL)) { 138 + zynqmp_pm_init_suspend_work = 139 + devm_kzalloc(&pdev->dev, 140 + sizeof(struct zynqmp_pm_work_struct), 141 + GFP_KERNEL); 142 + if (!zynqmp_pm_init_suspend_work) 143 + return -ENOMEM; 144 145 + INIT_WORK(&zynqmp_pm_init_suspend_work->callback_work, 146 + zynqmp_pm_init_suspend_work_fn); 147 + client = devm_kzalloc(&pdev->dev, sizeof(*client), GFP_KERNEL); 148 + if (!client) 149 + return -ENOMEM; 150 + 151 + client->dev = &pdev->dev; 152 + client->rx_callback = ipi_receive_callback; 153 + 154 + rx_chan = mbox_request_channel_byname(client, "rx"); 155 + if (IS_ERR(rx_chan)) { 156 + dev_err(&pdev->dev, "Failed to request rx channel\n"); 157 + return PTR_ERR(rx_chan); 158 + } 159 + } else if (of_find_property(pdev->dev.of_node, "interrupts", NULL)) { 160 + irq = platform_get_irq(pdev, 0); 161 + if (irq <= 0) 162 + return -ENXIO; 163 + 164 + ret = devm_request_threaded_irq(&pdev->dev, irq, NULL, 165 + zynqmp_pm_isr, 166 + IRQF_NO_SUSPEND | IRQF_ONESHOT, 167 + dev_name(&pdev->dev), 168 + &pdev->dev); 169 + if (ret) { 170 + dev_err(&pdev->dev, "devm_request_threaded_irq '%d' " 171 + "failed with %d\n", irq, ret); 172 + return ret; 173 + } 174 + } else { 175 + dev_err(&pdev->dev, "Required property not found in DT node\n"); 176 + return -ENOENT; 177 } 178 179 ret = sysfs_create_file(&pdev->dev.kobj, &dev_attr_suspend_mode.attr); ··· 159 static int zynqmp_pm_remove(struct platform_device *pdev) 160 { 161 sysfs_remove_file(&pdev->dev.kobj, &dev_attr_suspend_mode.attr); 162 + 163 + if (rx_chan) 164 + mbox_free_channel(rx_chan); 165 166 return 0; 167 }
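The mailbox rx callback can run in atomic context, while both orderly_poweroff() and pm_suspend() may sleep, hence the handoff to system_unbound_wq above. The skeleton of that pattern (the my_* names are placeholders):

    #include <linux/mailbox_client.h>
    #include <linux/suspend.h>
    #include <linux/workqueue.h>

    static void my_work_fn(struct work_struct *work)
    {
            pm_suspend(PM_SUSPEND_MEM);     /* may sleep */
    }

    static DECLARE_WORK(my_work, my_work_fn);

    static void my_rx_callback(struct mbox_client *cl, void *data)
    {
            /* Atomic context: just schedule the sleeping work and return. */
            if (!work_pending(&my_work))
                    queue_work(system_unbound_wq, &my_work);
    }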
+63 -88
drivers/tee/optee/core.c
··· 534 arm_smccc_hvc(a0, a1, a2, a3, a4, a5, a6, a7, res); 535 } 536 537 - static optee_invoke_fn *get_invoke_func(struct device_node *np) 538 { 539 const char *method; 540 541 - pr_info("probing for conduit method from DT.\n"); 542 543 - if (of_property_read_string(np, "method", &method)) { 544 pr_warn("missing \"method\" property\n"); 545 return ERR_PTR(-ENXIO); 546 } ··· 554 return ERR_PTR(-EINVAL); 555 } 556 557 - static struct optee *optee_probe(struct device_node *np) 558 { 559 optee_invoke_fn *invoke_fn; 560 struct tee_shm_pool *pool = ERR_PTR(-EINVAL); ··· 594 u32 sec_caps; 595 int rc; 596 597 - invoke_fn = get_invoke_func(np); 598 if (IS_ERR(invoke_fn)) 599 - return (void *)invoke_fn; 600 601 if (!optee_msg_api_uid_is_optee_api(invoke_fn)) { 602 pr_warn("api uid mismatch\n"); 603 - return ERR_PTR(-EINVAL); 604 } 605 606 optee_msg_get_os_revision(invoke_fn); 607 608 if (!optee_msg_api_revision_is_compatible(invoke_fn)) { 609 pr_warn("api revision mismatch\n"); 610 - return ERR_PTR(-EINVAL); 611 } 612 613 if (!optee_msg_exchange_capabilities(invoke_fn, &sec_caps)) { 614 pr_warn("capabilities mismatch\n"); 615 - return ERR_PTR(-EINVAL); 616 } 617 618 /* ··· 628 pool = optee_config_shm_memremap(invoke_fn, &memremaped_shm); 629 630 if (IS_ERR(pool)) 631 - return (void *)pool; 632 633 optee = kzalloc(sizeof(*optee), GFP_KERNEL); 634 if (!optee) { ··· 673 if (optee->sec_caps & OPTEE_SMC_SEC_CAP_DYNAMIC_SHM) 674 pr_info("dynamic shared memory is enabled\n"); 675 676 - return optee; 677 err: 678 if (optee) { 679 /* ··· 698 tee_shm_pool_free(pool); 699 if (memremaped_shm) 700 memunmap(memremaped_shm); 701 - return ERR_PTR(rc); 702 } 703 704 - static void optee_remove(struct optee *optee) 705 - { 706 - /* 707 - * Ask OP-TEE to free all cached shared memory objects to decrease 708 - * reference counters and also avoid wild pointers in secure world 709 - * into the old shared memory range. 710 - */ 711 - optee_disable_shm_cache(optee); 712 - 713 - /* 714 - * The two devices has to be unregistered before we can free the 715 - * other resources. 
716 - */ 717 - tee_device_unregister(optee->supp_teedev); 718 - tee_device_unregister(optee->teedev); 719 - 720 - tee_shm_pool_free(optee->pool); 721 - if (optee->memremaped_shm) 722 - memunmap(optee->memremaped_shm); 723 - optee_wait_queue_exit(&optee->wait_queue); 724 - optee_supp_uninit(&optee->supp); 725 - mutex_destroy(&optee->call_queue.mutex); 726 - 727 - kfree(optee); 728 - } 729 - 730 - static const struct of_device_id optee_match[] = { 731 { .compatible = "linaro,optee-tz" }, 732 {}, 733 }; 734 735 - static struct optee *optee_svc; 736 - 737 - static int __init optee_driver_init(void) 738 - { 739 - struct device_node *fw_np = NULL; 740 - struct device_node *np = NULL; 741 - struct optee *optee = NULL; 742 - int rc = 0; 743 - 744 - /* Node is supposed to be below /firmware */ 745 - fw_np = of_find_node_by_name(NULL, "firmware"); 746 - if (!fw_np) 747 - return -ENODEV; 748 - 749 - np = of_find_matching_node(fw_np, optee_match); 750 - if (!np || !of_device_is_available(np)) { 751 - of_node_put(np); 752 - return -ENODEV; 753 - } 754 - 755 - optee = optee_probe(np); 756 - of_node_put(np); 757 - 758 - if (IS_ERR(optee)) 759 - return PTR_ERR(optee); 760 - 761 - rc = optee_enumerate_devices(); 762 - if (rc) { 763 - optee_remove(optee); 764 - return rc; 765 - } 766 - 767 - pr_info("initialized driver\n"); 768 - 769 - optee_svc = optee; 770 - 771 - return 0; 772 - } 773 - module_init(optee_driver_init); 774 - 775 - static void __exit optee_driver_exit(void) 776 - { 777 - struct optee *optee = optee_svc; 778 - 779 - optee_svc = NULL; 780 - if (optee) 781 - optee_remove(optee); 782 - } 783 - module_exit(optee_driver_exit); 784 785 MODULE_AUTHOR("Linaro"); 786 MODULE_DESCRIPTION("OP-TEE driver"); 787 MODULE_SUPPORTED_DEVICE(""); 788 MODULE_VERSION("1.0"); 789 MODULE_LICENSE("GPL v2");
··· 534 arm_smccc_hvc(a0, a1, a2, a3, a4, a5, a6, a7, res); 535 } 536 537 + static optee_invoke_fn *get_invoke_func(struct device *dev) 538 { 539 const char *method; 540 541 + pr_info("probing for conduit method.\n"); 542 543 + if (device_property_read_string(dev, "method", &method)) { 544 pr_warn("missing \"method\" property\n"); 545 return ERR_PTR(-ENXIO); 546 } ··· 554 return ERR_PTR(-EINVAL); 555 } 556 557 + static int optee_remove(struct platform_device *pdev) 558 + { 559 + struct optee *optee = platform_get_drvdata(pdev); 560 + 561 + /* 562 + * Ask OP-TEE to free all cached shared memory objects to decrease 563 + * reference counters and also avoid wild pointers in secure world 564 + * into the old shared memory range. 565 + */ 566 + optee_disable_shm_cache(optee); 567 + 568 + /* 569 + * The two devices have to be unregistered before we can free the 570 + * other resources. 571 + */ 572 + tee_device_unregister(optee->supp_teedev); 573 + tee_device_unregister(optee->teedev); 574 + 575 + tee_shm_pool_free(optee->pool); 576 + if (optee->memremaped_shm) 577 + memunmap(optee->memremaped_shm); 578 + optee_wait_queue_exit(&optee->wait_queue); 579 + optee_supp_uninit(&optee->supp); 580 + mutex_destroy(&optee->call_queue.mutex); 581 + 582 + kfree(optee); 583 + 584 + return 0; 585 + } 586 + 587 + static int optee_probe(struct platform_device *pdev) 588 { 589 optee_invoke_fn *invoke_fn; 590 struct tee_shm_pool *pool = ERR_PTR(-EINVAL); ··· 564 u32 sec_caps; 565 int rc; 566 567 + invoke_fn = get_invoke_func(&pdev->dev); 568 if (IS_ERR(invoke_fn)) 569 + return PTR_ERR(invoke_fn); 570 571 if (!optee_msg_api_uid_is_optee_api(invoke_fn)) { 572 pr_warn("api uid mismatch\n"); 573 + return -EINVAL; 574 } 575 576 optee_msg_get_os_revision(invoke_fn); 577 578 if (!optee_msg_api_revision_is_compatible(invoke_fn)) { 579 pr_warn("api revision mismatch\n"); 580 + return -EINVAL; 581 } 582 583 if (!optee_msg_exchange_capabilities(invoke_fn, &sec_caps)) { 584 pr_warn("capabilities mismatch\n"); 585 + return -EINVAL; 586 } 587 588 /* ··· 598 pool = optee_config_shm_memremap(invoke_fn, &memremaped_shm); 599 600 if (IS_ERR(pool)) 601 + return PTR_ERR(pool); 602 603 optee = kzalloc(sizeof(*optee), GFP_KERNEL); 604 if (!optee) { ··· 643 if (optee->sec_caps & OPTEE_SMC_SEC_CAP_DYNAMIC_SHM) 644 pr_info("dynamic shared memory is enabled\n"); 645 646 + platform_set_drvdata(pdev, optee); 647 + 648 + rc = optee_enumerate_devices(); 649 + if (rc) { 650 + optee_remove(pdev); 651 + return rc; 652 + } 653 + 654 + pr_info("initialized driver\n"); 655 + return 0; 656 err: 657 if (optee) { 658 /* ··· 659 tee_shm_pool_free(pool); 660 if (memremaped_shm) 661 memunmap(memremaped_shm); 662 + return rc; 663 } 664 665 + static const struct of_device_id optee_dt_match[] = { 666 { .compatible = "linaro,optee-tz" }, 667 {}, 668 }; 669 + MODULE_DEVICE_TABLE(of, optee_dt_match); 670 671 + static struct platform_driver optee_driver = { 672 + .probe = optee_probe, 673 + .remove = optee_remove, 674 + .driver = { 675 + .name = "optee", 676 + .of_match_table = optee_dt_match, 677 + }, 678 + }; 679 + module_platform_driver(optee_driver); 680 681 MODULE_AUTHOR("Linaro"); 682 MODULE_DESCRIPTION("OP-TEE driver"); 683 MODULE_SUPPORTED_DEVICE(""); 684 MODULE_VERSION("1.0"); 685 MODULE_LICENSE("GPL v2"); 686 + MODULE_ALIAS("platform:optee");
+200 -183
drivers/tty/serial/ucc_uart.c
··· 32 #include <soc/fsl/qe/ucc_slow.h> 33 34 #include <linux/firmware.h> 35 - #include <asm/reg.h> 36 37 /* 38 * The GUMR flag for Soft UART. This would normally be defined in qe.h, ··· 261 struct qe_bd *bdp = qe_port->tx_bd_base; 262 263 while (1) { 264 - if (in_be16(&bdp->status) & BD_SC_READY) 265 /* This BD is not done, so return "not done" */ 266 return 0; 267 268 - if (in_be16(&bdp->status) & BD_SC_WRAP) 269 /* 270 * This BD is done and it's the last one, so return 271 * "done" ··· 311 struct uart_qe_port *qe_port = 312 container_of(port, struct uart_qe_port, port); 313 314 - clrbits16(&qe_port->uccp->uccm, UCC_UART_UCCE_TX); 315 } 316 317 /* ··· 341 /* Pick next descriptor and fill from buffer */ 342 bdp = qe_port->tx_cur; 343 344 - p = qe2cpu_addr(bdp->buf, qe_port); 345 346 *p++ = port->x_char; 347 - out_be16(&bdp->length, 1); 348 - setbits16(&bdp->status, BD_SC_READY); 349 /* Get next BD. */ 350 - if (in_be16(&bdp->status) & BD_SC_WRAP) 351 bdp = qe_port->tx_bd_base; 352 else 353 bdp++; ··· 366 /* Pick next descriptor and fill from buffer */ 367 bdp = qe_port->tx_cur; 368 369 - while (!(in_be16(&bdp->status) & BD_SC_READY) && 370 (xmit->tail != xmit->head)) { 371 count = 0; 372 - p = qe2cpu_addr(bdp->buf, qe_port); 373 while (count < qe_port->tx_fifosize) { 374 *p++ = xmit->buf[xmit->tail]; 375 xmit->tail = (xmit->tail + 1) & (UART_XMIT_SIZE - 1); ··· 379 break; 380 } 381 382 - out_be16(&bdp->length, count); 383 - setbits16(&bdp->status, BD_SC_READY); 384 385 /* Get next BD. */ 386 - if (in_be16(&bdp->status) & BD_SC_WRAP) 387 bdp = qe_port->tx_bd_base; 388 else 389 bdp++; ··· 416 container_of(port, struct uart_qe_port, port); 417 418 /* If we currently are transmitting, then just return */ 419 - if (in_be16(&qe_port->uccp->uccm) & UCC_UART_UCCE_TX) 420 return; 421 422 /* Otherwise, pump the port and start transmission */ 423 if (qe_uart_tx_pump(qe_port)) 424 - setbits16(&qe_port->uccp->uccm, UCC_UART_UCCE_TX); 425 } 426 427 /* ··· 432 struct uart_qe_port *qe_port = 433 container_of(port, struct uart_qe_port, port); 434 435 - clrbits16(&qe_port->uccp->uccm, UCC_UART_UCCE_RX); 436 } 437 438 /* Start or stop sending break signal ··· 471 */ 472 bdp = qe_port->rx_cur; 473 while (1) { 474 - status = in_be16(&bdp->status); 475 476 /* If this one is empty, then we assume we've read them all */ 477 if (status & BD_SC_EMPTY) 478 break; 479 480 /* get number of characters, and check space in RX buffer */ 481 - i = in_be16(&bdp->length); 482 483 /* If we don't have enough room in RX buffer for the entire BD, 484 * then we try later, which will be the next RX interrupt. ··· 489 } 490 491 /* get pointer */ 492 - cp = qe2cpu_addr(bdp->buf, qe_port); 493 494 /* loop through the buffer */ 495 while (i-- > 0) { ··· 509 } 510 511 /* This BD is ready to be used again. Clear status. 
get next */ 512 - clrsetbits_be16(&bdp->status, BD_SC_BR | BD_SC_FR | BD_SC_PR | 513 - BD_SC_OV | BD_SC_ID, BD_SC_EMPTY); 514 - if (in_be16(&bdp->status) & BD_SC_WRAP) 515 bdp = qe_port->rx_bd_base; 516 else 517 bdp++; ··· 569 u16 events; 570 571 /* Clear the interrupts */ 572 - events = in_be16(&uccp->ucce); 573 - out_be16(&uccp->ucce, events); 574 575 if (events & UCC_UART_UCCE_BRKE) 576 uart_handle_break(&qe_port->port); ··· 601 bdp = qe_port->rx_bd_base; 602 qe_port->rx_cur = qe_port->rx_bd_base; 603 for (i = 0; i < (qe_port->rx_nrfifos - 1); i++) { 604 - out_be16(&bdp->status, BD_SC_EMPTY | BD_SC_INTRPT); 605 - out_be32(&bdp->buf, cpu2qe_addr(bd_virt, qe_port)); 606 - out_be16(&bdp->length, 0); 607 bd_virt += qe_port->rx_fifosize; 608 bdp++; 609 } 610 611 /* */ 612 - out_be16(&bdp->status, BD_SC_WRAP | BD_SC_EMPTY | BD_SC_INTRPT); 613 - out_be32(&bdp->buf, cpu2qe_addr(bd_virt, qe_port)); 614 - out_be16(&bdp->length, 0); 615 616 /* Set the physical address of the host memory 617 * buffers in the buffer descriptors, and the ··· 622 qe_port->tx_cur = qe_port->tx_bd_base; 623 bdp = qe_port->tx_bd_base; 624 for (i = 0; i < (qe_port->tx_nrfifos - 1); i++) { 625 - out_be16(&bdp->status, BD_SC_INTRPT); 626 - out_be32(&bdp->buf, cpu2qe_addr(bd_virt, qe_port)); 627 - out_be16(&bdp->length, 0); 628 bd_virt += qe_port->tx_fifosize; 629 bdp++; 630 } 631 632 /* Loopback requires the preamble bit to be set on the first TX BD */ 633 #ifdef LOOPBACK 634 - setbits16(&qe_port->tx_cur->status, BD_SC_P); 635 #endif 636 637 - out_be16(&bdp->status, BD_SC_WRAP | BD_SC_INTRPT); 638 - out_be32(&bdp->buf, cpu2qe_addr(bd_virt, qe_port)); 639 - out_be16(&bdp->length, 0); 640 } 641 642 /* ··· 658 ucc_slow_disable(qe_port->us_private, COMM_DIR_RX_AND_TX); 659 660 /* Program the UCC UART parameter RAM */ 661 - out_8(&uccup->common.rbmr, UCC_BMR_GBL | UCC_BMR_BO_BE); 662 - out_8(&uccup->common.tbmr, UCC_BMR_GBL | UCC_BMR_BO_BE); 663 - out_be16(&uccup->common.mrblr, qe_port->rx_fifosize); 664 - out_be16(&uccup->maxidl, 0x10); 665 - out_be16(&uccup->brkcr, 1); 666 - out_be16(&uccup->parec, 0); 667 - out_be16(&uccup->frmec, 0); 668 - out_be16(&uccup->nosec, 0); 669 - out_be16(&uccup->brkec, 0); 670 - out_be16(&uccup->uaddr[0], 0); 671 - out_be16(&uccup->uaddr[1], 0); 672 - out_be16(&uccup->toseq, 0); 673 for (i = 0; i < 8; i++) 674 - out_be16(&uccup->cchars[i], 0xC000); 675 - out_be16(&uccup->rccm, 0xc0ff); 676 677 /* Configure the GUMR registers for UART */ 678 if (soft_uart) { 679 /* Soft-UART requires a 1X multiplier for TX */ 680 - clrsetbits_be32(&uccp->gumr_l, 681 - UCC_SLOW_GUMR_L_MODE_MASK | UCC_SLOW_GUMR_L_TDCR_MASK | 682 - UCC_SLOW_GUMR_L_RDCR_MASK, 683 - UCC_SLOW_GUMR_L_MODE_UART | UCC_SLOW_GUMR_L_TDCR_1 | 684 - UCC_SLOW_GUMR_L_RDCR_16); 685 686 - clrsetbits_be32(&uccp->gumr_h, UCC_SLOW_GUMR_H_RFW, 687 - UCC_SLOW_GUMR_H_TRX | UCC_SLOW_GUMR_H_TTX); 688 } else { 689 - clrsetbits_be32(&uccp->gumr_l, 690 - UCC_SLOW_GUMR_L_MODE_MASK | UCC_SLOW_GUMR_L_TDCR_MASK | 691 - UCC_SLOW_GUMR_L_RDCR_MASK, 692 - UCC_SLOW_GUMR_L_MODE_UART | UCC_SLOW_GUMR_L_TDCR_16 | 693 - UCC_SLOW_GUMR_L_RDCR_16); 694 695 - clrsetbits_be32(&uccp->gumr_h, 696 - UCC_SLOW_GUMR_H_TRX | UCC_SLOW_GUMR_H_TTX, 697 - UCC_SLOW_GUMR_H_RFW); 698 } 699 700 #ifdef LOOPBACK 701 - clrsetbits_be32(&uccp->gumr_l, UCC_SLOW_GUMR_L_DIAG_MASK, 702 - UCC_SLOW_GUMR_L_DIAG_LOOP); 703 - clrsetbits_be32(&uccp->gumr_h, 704 - UCC_SLOW_GUMR_H_CTSP | UCC_SLOW_GUMR_H_RSYN, 705 - UCC_SLOW_GUMR_H_CDS); 706 #endif 707 708 /* Disable rx interrupts and clear all pending events. 
*/ 709 - out_be16(&uccp->uccm, 0); 710 - out_be16(&uccp->ucce, 0xffff); 711 - out_be16(&uccp->udsr, 0x7e7e); 712 713 /* Initialize UPSMR */ 714 - out_be16(&uccp->upsmr, 0); 715 716 if (soft_uart) { 717 - out_be16(&uccup->supsmr, 0x30); 718 - out_be16(&uccup->res92, 0); 719 - out_be32(&uccup->rx_state, 0); 720 - out_be32(&uccup->rx_cnt, 0); 721 - out_8(&uccup->rx_bitmark, 0); 722 - out_8(&uccup->rx_length, 10); 723 - out_be32(&uccup->dump_ptr, 0x4000); 724 - out_8(&uccup->rx_temp_dlst_qe, 0); 725 - out_be32(&uccup->rx_frame_rem, 0); 726 - out_8(&uccup->rx_frame_rem_size, 0); 727 /* Soft-UART requires TX to be 1X */ 728 - out_8(&uccup->tx_mode, 729 - UCC_UART_TX_STATE_UART | UCC_UART_TX_STATE_X1); 730 - out_be16(&uccup->tx_state, 0); 731 - out_8(&uccup->resD4, 0); 732 - out_be16(&uccup->resD5, 0); 733 734 /* Set UART mode. 735 * Enable receive and transmit. ··· 739 * ... 740 * 6.Receiver must use 16x over sampling 741 */ 742 - clrsetbits_be32(&uccp->gumr_l, 743 - UCC_SLOW_GUMR_L_MODE_MASK | UCC_SLOW_GUMR_L_TDCR_MASK | 744 - UCC_SLOW_GUMR_L_RDCR_MASK, 745 - UCC_SLOW_GUMR_L_MODE_QMC | UCC_SLOW_GUMR_L_TDCR_16 | 746 - UCC_SLOW_GUMR_L_RDCR_16); 747 748 - clrsetbits_be32(&uccp->gumr_h, 749 - UCC_SLOW_GUMR_H_RFW | UCC_SLOW_GUMR_H_RSYN, 750 - UCC_SLOW_GUMR_H_SUART | UCC_SLOW_GUMR_H_TRX | 751 - UCC_SLOW_GUMR_H_TTX | UCC_SLOW_GUMR_H_TFL); 752 753 #ifdef LOOPBACK 754 - clrsetbits_be32(&uccp->gumr_l, UCC_SLOW_GUMR_L_DIAG_MASK, 755 - UCC_SLOW_GUMR_L_DIAG_LOOP); 756 - clrbits32(&uccp->gumr_h, UCC_SLOW_GUMR_H_CTSP | 757 - UCC_SLOW_GUMR_H_CDS); 758 #endif 759 760 cecr_subblock = ucc_slow_get_qe_cr_subblock(qe_port->ucc_num); ··· 794 } 795 796 /* Startup rx-int */ 797 - setbits16(&qe_port->uccp->uccm, UCC_UART_UCCE_RX); 798 ucc_slow_enable(qe_port->us_private, COMM_DIR_RX_AND_TX); 799 800 return 0; ··· 830 831 /* Stop uarts */ 832 ucc_slow_disable(qe_port->us_private, COMM_DIR_RX_AND_TX); 833 - clrbits16(&uccp->uccm, UCC_UART_UCCE_TX | UCC_UART_UCCE_RX); 834 835 /* Shut them really down and reinit buffer descriptors */ 836 ucc_slow_graceful_stop_tx(qe_port->us_private); ··· 850 struct ucc_slow __iomem *uccp = qe_port->uccp; 851 unsigned int baud; 852 unsigned long flags; 853 - u16 upsmr = in_be16(&uccp->upsmr); 854 struct ucc_uart_pram __iomem *uccup = qe_port->uccup; 855 - u16 supsmr = in_be16(&uccup->supsmr); 856 u8 char_length = 2; /* 1 + CL + PEN + 1 + SL */ 857 858 /* Character length programmed into the mode register is the ··· 950 /* Update the per-port timeout. 
*/ 951 uart_update_timeout(port, termios->c_cflag, baud); 952 953 - out_be16(&uccp->upsmr, upsmr); 954 if (soft_uart) { 955 - out_be16(&uccup->supsmr, supsmr); 956 - out_8(&uccup->rx_length, char_length); 957 958 /* Soft-UART requires a 1X multiplier for TX */ 959 qe_setbrg(qe_port->us_info.rx_clock, baud, 16); ··· 1095 .verify_port = qe_uart_verify_port, 1096 }; 1097 1098 /* 1099 * Obtain the SOC model number and revision level 1100 * ··· 1184 release_firmware(fw); 1185 } 1186 1187 static int ucc_uart_probe(struct platform_device *ofdev) 1188 { 1189 struct device_node *np = ofdev->dev.of_node; 1190 - const unsigned int *iprop; /* Integer OF properties */ 1191 const char *sprop; /* String OF properties */ 1192 struct uart_qe_port *qe_port = NULL; 1193 struct resource res; 1194 int ret; 1195 1196 /* 1197 * Determine if we need Soft-UART mode 1198 */ 1199 - if (of_find_property(np, "soft-uart", NULL)) { 1200 - dev_dbg(&ofdev->dev, "using Soft-UART mode\n"); 1201 - soft_uart = 1; 1202 - } 1203 - 1204 - /* 1205 - * If we are using Soft-UART, determine if we need to upload the 1206 - * firmware, too. 1207 - */ 1208 - if (soft_uart) { 1209 - struct qe_firmware_info *qe_fw_info; 1210 - 1211 - qe_fw_info = qe_get_firmware_info(); 1212 - 1213 - /* Check if the firmware has been uploaded. */ 1214 - if (qe_fw_info && strstr(qe_fw_info->id, "Soft-UART")) { 1215 - firmware_loaded = 1; 1216 - } else { 1217 - char filename[32]; 1218 - unsigned int soc; 1219 - unsigned int rev_h; 1220 - unsigned int rev_l; 1221 - 1222 - soc = soc_info(&rev_h, &rev_l); 1223 - if (!soc) { 1224 - dev_err(&ofdev->dev, "unknown CPU model\n"); 1225 - return -ENXIO; 1226 - } 1227 - sprintf(filename, "fsl_qe_ucode_uart_%u_%u%u.bin", 1228 - soc, rev_h, rev_l); 1229 - 1230 - dev_info(&ofdev->dev, "waiting for firmware %s\n", 1231 - filename); 1232 - 1233 - /* 1234 - * We call request_firmware_nowait instead of 1235 - * request_firmware so that the driver can load and 1236 - * initialize the ports without holding up the rest of 1237 - * the kernel. If hotplug support is enabled in the 1238 - * kernel, then we use it. 
1239 - */ 1240 - ret = request_firmware_nowait(THIS_MODULE, 1241 - FW_ACTION_HOTPLUG, filename, &ofdev->dev, 1242 - GFP_KERNEL, &ofdev->dev, uart_firmware_cont); 1243 - if (ret) { 1244 - dev_err(&ofdev->dev, 1245 - "could not load firmware %s\n", 1246 - filename); 1247 - return ret; 1248 - } 1249 - } 1250 - } 1251 1252 qe_port = kzalloc(sizeof(struct uart_qe_port), GFP_KERNEL); 1253 if (!qe_port) { ··· 1286 1287 /* Get the UCC number (device ID) */ 1288 /* UCCs are numbered 1-7 */ 1289 - iprop = of_get_property(np, "cell-index", NULL); 1290 - if (!iprop) { 1291 - iprop = of_get_property(np, "device-id", NULL); 1292 - if (!iprop) { 1293 - dev_err(&ofdev->dev, "UCC is unspecified in " 1294 - "device tree\n"); 1295 ret = -EINVAL; 1296 goto out_free; 1297 } 1298 } 1299 1300 - if ((*iprop < 1) || (*iprop > UCC_MAX_NUM)) { 1301 - dev_err(&ofdev->dev, "no support for UCC%u\n", *iprop); 1302 ret = -ENODEV; 1303 goto out_free; 1304 } 1305 - qe_port->ucc_num = *iprop - 1; 1306 1307 /* 1308 * In the future, we should not require the BRG to be specified in the ··· 1343 } 1344 1345 /* Get the port number, numbered 0-3 */ 1346 - iprop = of_get_property(np, "port-number", NULL); 1347 - if (!iprop) { 1348 dev_err(&ofdev->dev, "missing port-number in device tree\n"); 1349 ret = -EINVAL; 1350 goto out_free; 1351 } 1352 - qe_port->port.line = *iprop; 1353 if (qe_port->port.line >= UCC_MAX_UART) { 1354 dev_err(&ofdev->dev, "port-number must be 0-%u\n", 1355 UCC_MAX_UART - 1); ··· 1378 } 1379 } 1380 1381 - iprop = of_get_property(np, "brg-frequency", NULL); 1382 - if (!iprop) { 1383 dev_err(&ofdev->dev, 1384 "missing brg-frequency in device tree\n"); 1385 ret = -EINVAL; 1386 goto out_np; 1387 } 1388 1389 - if (*iprop) 1390 - qe_port->port.uartclk = *iprop; 1391 else { 1392 /* 1393 * Older versions of U-Boot do not initialize the brg-frequency 1394 * property, so in this case we assume the BRG frequency is 1395 * half the QE bus frequency. 1396 */ 1397 - iprop = of_get_property(np, "bus-frequency", NULL); 1398 - if (!iprop) { 1399 dev_err(&ofdev->dev, 1400 "missing QE bus-frequency in device tree\n"); 1401 ret = -EINVAL; 1402 goto out_np; 1403 } 1404 - if (*iprop) 1405 - qe_port->port.uartclk = *iprop / 2; 1406 else { 1407 dev_err(&ofdev->dev, 1408 "invalid QE bus-frequency in device tree\n");
··· 32 #include <soc/fsl/qe/ucc_slow.h> 33 34 #include <linux/firmware.h> 35 + #include <soc/fsl/cpm.h> 36 + 37 + #ifdef CONFIG_PPC32 38 + #include <asm/reg.h> /* mfspr, SPRN_SVR */ 39 + #endif 40 41 /* 42 * The GUMR flag for Soft UART. This would normally be defined in qe.h, ··· 257 struct qe_bd *bdp = qe_port->tx_bd_base; 258 259 while (1) { 260 + if (qe_ioread16be(&bdp->status) & BD_SC_READY) 261 /* This BD is not done, so return "not done" */ 262 return 0; 263 264 + if (qe_ioread16be(&bdp->status) & BD_SC_WRAP) 265 /* 266 * This BD is done and it's the last one, so return 267 * "done" ··· 307 struct uart_qe_port *qe_port = 308 container_of(port, struct uart_qe_port, port); 309 310 + qe_clrbits_be16(&qe_port->uccp->uccm, UCC_UART_UCCE_TX); 311 } 312 313 /* ··· 337 /* Pick next descriptor and fill from buffer */ 338 bdp = qe_port->tx_cur; 339 340 + p = qe2cpu_addr(be32_to_cpu(bdp->buf), qe_port); 341 342 *p++ = port->x_char; 343 + qe_iowrite16be(1, &bdp->length); 344 + qe_setbits_be16(&bdp->status, BD_SC_READY); 345 /* Get next BD. */ 346 + if (qe_ioread16be(&bdp->status) & BD_SC_WRAP) 347 bdp = qe_port->tx_bd_base; 348 else 349 bdp++; ··· 362 /* Pick next descriptor and fill from buffer */ 363 bdp = qe_port->tx_cur; 364 365 + while (!(qe_ioread16be(&bdp->status) & BD_SC_READY) && 366 (xmit->tail != xmit->head)) { 367 count = 0; 368 + p = qe2cpu_addr(be32_to_cpu(bdp->buf), qe_port); 369 while (count < qe_port->tx_fifosize) { 370 *p++ = xmit->buf[xmit->tail]; 371 xmit->tail = (xmit->tail + 1) & (UART_XMIT_SIZE - 1); ··· 375 break; 376 } 377 378 + qe_iowrite16be(count, &bdp->length); 379 + qe_setbits_be16(&bdp->status, BD_SC_READY); 380 381 /* Get next BD. */ 382 + if (qe_ioread16be(&bdp->status) & BD_SC_WRAP) 383 bdp = qe_port->tx_bd_base; 384 else 385 bdp++; ··· 412 container_of(port, struct uart_qe_port, port); 413 414 /* If we currently are transmitting, then just return */ 415 + if (qe_ioread16be(&qe_port->uccp->uccm) & UCC_UART_UCCE_TX) 416 return; 417 418 /* Otherwise, pump the port and start transmission */ 419 if (qe_uart_tx_pump(qe_port)) 420 + qe_setbits_be16(&qe_port->uccp->uccm, UCC_UART_UCCE_TX); 421 } 422 423 /* ··· 428 struct uart_qe_port *qe_port = 429 container_of(port, struct uart_qe_port, port); 430 431 + qe_clrbits_be16(&qe_port->uccp->uccm, UCC_UART_UCCE_RX); 432 } 433 434 /* Start or stop sending break signal ··· 467 */ 468 bdp = qe_port->rx_cur; 469 while (1) { 470 + status = qe_ioread16be(&bdp->status); 471 472 /* If this one is empty, then we assume we've read them all */ 473 if (status & BD_SC_EMPTY) 474 break; 475 476 /* get number of characters, and check space in RX buffer */ 477 + i = qe_ioread16be(&bdp->length); 478 479 /* If we don't have enough room in RX buffer for the entire BD, 480 * then we try later, which will be the next RX interrupt. ··· 485 } 486 487 /* get pointer */ 488 + cp = qe2cpu_addr(be32_to_cpu(bdp->buf), qe_port); 489 490 /* loop through the buffer */ 491 while (i-- > 0) { ··· 505 } 506 507 /* This BD is ready to be used again. Clear status. 
get next */ 508 + qe_clrsetbits_be16(&bdp->status, 509 + BD_SC_BR | BD_SC_FR | BD_SC_PR | BD_SC_OV | BD_SC_ID, 510 + BD_SC_EMPTY); 511 + if (qe_ioread16be(&bdp->status) & BD_SC_WRAP) 512 bdp = qe_port->rx_bd_base; 513 else 514 bdp++; ··· 564 u16 events; 565 566 /* Clear the interrupts */ 567 + events = qe_ioread16be(&uccp->ucce); 568 + qe_iowrite16be(events, &uccp->ucce); 569 570 if (events & UCC_UART_UCCE_BRKE) 571 uart_handle_break(&qe_port->port); ··· 596 bdp = qe_port->rx_bd_base; 597 qe_port->rx_cur = qe_port->rx_bd_base; 598 for (i = 0; i < (qe_port->rx_nrfifos - 1); i++) { 599 + qe_iowrite16be(BD_SC_EMPTY | BD_SC_INTRPT, &bdp->status); 600 + qe_iowrite32be(cpu2qe_addr(bd_virt, qe_port), &bdp->buf); 601 + qe_iowrite16be(0, &bdp->length); 602 bd_virt += qe_port->rx_fifosize; 603 bdp++; 604 } 605 606 /* */ 607 + qe_iowrite16be(BD_SC_WRAP | BD_SC_EMPTY | BD_SC_INTRPT, &bdp->status); 608 + qe_iowrite32be(cpu2qe_addr(bd_virt, qe_port), &bdp->buf); 609 + qe_iowrite16be(0, &bdp->length); 610 611 /* Set the physical address of the host memory 612 * buffers in the buffer descriptors, and the ··· 617 qe_port->tx_cur = qe_port->tx_bd_base; 618 bdp = qe_port->tx_bd_base; 619 for (i = 0; i < (qe_port->tx_nrfifos - 1); i++) { 620 + qe_iowrite16be(BD_SC_INTRPT, &bdp->status); 621 + qe_iowrite32be(cpu2qe_addr(bd_virt, qe_port), &bdp->buf); 622 + qe_iowrite16be(0, &bdp->length); 623 bd_virt += qe_port->tx_fifosize; 624 bdp++; 625 } 626 627 /* Loopback requires the preamble bit to be set on the first TX BD */ 628 #ifdef LOOPBACK 629 + qe_setbits_be16(&qe_port->tx_cur->status, BD_SC_P); 630 #endif 631 632 + qe_iowrite16be(BD_SC_WRAP | BD_SC_INTRPT, &bdp->status); 633 + qe_iowrite32be(cpu2qe_addr(bd_virt, qe_port), &bdp->buf); 634 + qe_iowrite16be(0, &bdp->length); 635 } 636 637 /* ··· 653 ucc_slow_disable(qe_port->us_private, COMM_DIR_RX_AND_TX); 654 655 /* Program the UCC UART parameter RAM */ 656 + qe_iowrite8(UCC_BMR_GBL | UCC_BMR_BO_BE, &uccup->common.rbmr); 657 + qe_iowrite8(UCC_BMR_GBL | UCC_BMR_BO_BE, &uccup->common.tbmr); 658 + qe_iowrite16be(qe_port->rx_fifosize, &uccup->common.mrblr); 659 + qe_iowrite16be(0x10, &uccup->maxidl); 660 + qe_iowrite16be(1, &uccup->brkcr); 661 + qe_iowrite16be(0, &uccup->parec); 662 + qe_iowrite16be(0, &uccup->frmec); 663 + qe_iowrite16be(0, &uccup->nosec); 664 + qe_iowrite16be(0, &uccup->brkec); 665 + qe_iowrite16be(0, &uccup->uaddr[0]); 666 + qe_iowrite16be(0, &uccup->uaddr[1]); 667 + qe_iowrite16be(0, &uccup->toseq); 668 for (i = 0; i < 8; i++) 669 + qe_iowrite16be(0xC000, &uccup->cchars[i]); 670 + qe_iowrite16be(0xc0ff, &uccup->rccm); 671 672 /* Configure the GUMR registers for UART */ 673 if (soft_uart) { 674 /* Soft-UART requires a 1X multiplier for TX */ 675 + qe_clrsetbits_be32(&uccp->gumr_l, 676 + UCC_SLOW_GUMR_L_MODE_MASK | UCC_SLOW_GUMR_L_TDCR_MASK | UCC_SLOW_GUMR_L_RDCR_MASK, 677 + UCC_SLOW_GUMR_L_MODE_UART | UCC_SLOW_GUMR_L_TDCR_1 | UCC_SLOW_GUMR_L_RDCR_16); 678 679 + qe_clrsetbits_be32(&uccp->gumr_h, UCC_SLOW_GUMR_H_RFW, 680 + UCC_SLOW_GUMR_H_TRX | UCC_SLOW_GUMR_H_TTX); 681 } else { 682 + qe_clrsetbits_be32(&uccp->gumr_l, 683 + UCC_SLOW_GUMR_L_MODE_MASK | UCC_SLOW_GUMR_L_TDCR_MASK | UCC_SLOW_GUMR_L_RDCR_MASK, 684 + UCC_SLOW_GUMR_L_MODE_UART | UCC_SLOW_GUMR_L_TDCR_16 | UCC_SLOW_GUMR_L_RDCR_16); 685 686 + qe_clrsetbits_be32(&uccp->gumr_h, 687 + UCC_SLOW_GUMR_H_TRX | UCC_SLOW_GUMR_H_TTX, 688 + UCC_SLOW_GUMR_H_RFW); 689 } 690 691 #ifdef LOOPBACK 692 + qe_clrsetbits_be32(&uccp->gumr_l, UCC_SLOW_GUMR_L_DIAG_MASK, 693 + UCC_SLOW_GUMR_L_DIAG_LOOP); 694 + 
qe_clrsetbits_be32(&uccp->gumr_h, 695 + UCC_SLOW_GUMR_H_CTSP | UCC_SLOW_GUMR_H_RSYN, 696 + UCC_SLOW_GUMR_H_CDS); 697 #endif 698 699 /* Disable rx interrupts and clear all pending events. */ 700 + qe_iowrite16be(0, &uccp->uccm); 701 + qe_iowrite16be(0xffff, &uccp->ucce); 702 + qe_iowrite16be(0x7e7e, &uccp->udsr); 703 704 /* Initialize UPSMR */ 705 + qe_iowrite16be(0, &uccp->upsmr); 706 707 if (soft_uart) { 708 + qe_iowrite16be(0x30, &uccup->supsmr); 709 + qe_iowrite16be(0, &uccup->res92); 710 + qe_iowrite32be(0, &uccup->rx_state); 711 + qe_iowrite32be(0, &uccup->rx_cnt); 712 + qe_iowrite8(0, &uccup->rx_bitmark); 713 + qe_iowrite8(10, &uccup->rx_length); 714 + qe_iowrite32be(0x4000, &uccup->dump_ptr); 715 + qe_iowrite8(0, &uccup->rx_temp_dlst_qe); 716 + qe_iowrite32be(0, &uccup->rx_frame_rem); 717 + qe_iowrite8(0, &uccup->rx_frame_rem_size); 718 /* Soft-UART requires TX to be 1X */ 719 + qe_iowrite8(UCC_UART_TX_STATE_UART | UCC_UART_TX_STATE_X1, 720 + &uccup->tx_mode); 721 + qe_iowrite16be(0, &uccup->tx_state); 722 + qe_iowrite8(0, &uccup->resD4); 723 + qe_iowrite16be(0, &uccup->resD5); 724 725 /* Set UART mode. 726 * Enable receive and transmit. ··· 738 * ... 739 * 6.Receiver must use 16x over sampling 740 */ 741 + qe_clrsetbits_be32(&uccp->gumr_l, 742 + UCC_SLOW_GUMR_L_MODE_MASK | UCC_SLOW_GUMR_L_TDCR_MASK | UCC_SLOW_GUMR_L_RDCR_MASK, 743 + UCC_SLOW_GUMR_L_MODE_QMC | UCC_SLOW_GUMR_L_TDCR_16 | UCC_SLOW_GUMR_L_RDCR_16); 744 745 + qe_clrsetbits_be32(&uccp->gumr_h, 746 + UCC_SLOW_GUMR_H_RFW | UCC_SLOW_GUMR_H_RSYN, 747 + UCC_SLOW_GUMR_H_SUART | UCC_SLOW_GUMR_H_TRX | UCC_SLOW_GUMR_H_TTX | UCC_SLOW_GUMR_H_TFL); 748 749 #ifdef LOOPBACK 750 + qe_clrsetbits_be32(&uccp->gumr_l, UCC_SLOW_GUMR_L_DIAG_MASK, 751 + UCC_SLOW_GUMR_L_DIAG_LOOP); 752 + qe_clrbits_be32(&uccp->gumr_h, 753 + UCC_SLOW_GUMR_H_CTSP | UCC_SLOW_GUMR_H_CDS); 754 #endif 755 756 cecr_subblock = ucc_slow_get_qe_cr_subblock(qe_port->ucc_num); ··· 796 } 797 798 /* Startup rx-int */ 799 + qe_setbits_be16(&qe_port->uccp->uccm, UCC_UART_UCCE_RX); 800 ucc_slow_enable(qe_port->us_private, COMM_DIR_RX_AND_TX); 801 802 return 0; ··· 832 833 /* Stop uarts */ 834 ucc_slow_disable(qe_port->us_private, COMM_DIR_RX_AND_TX); 835 + qe_clrbits_be16(&uccp->uccm, UCC_UART_UCCE_TX | UCC_UART_UCCE_RX); 836 837 /* Shut them really down and reinit buffer descriptors */ 838 ucc_slow_graceful_stop_tx(qe_port->us_private); ··· 852 struct ucc_slow __iomem *uccp = qe_port->uccp; 853 unsigned int baud; 854 unsigned long flags; 855 + u16 upsmr = qe_ioread16be(&uccp->upsmr); 856 struct ucc_uart_pram __iomem *uccup = qe_port->uccup; 857 + u16 supsmr = qe_ioread16be(&uccup->supsmr); 858 u8 char_length = 2; /* 1 + CL + PEN + 1 + SL */ 859 860 /* Character length programmed into the mode register is the ··· 952 /* Update the per-port timeout. 
*/ 953 uart_update_timeout(port, termios->c_cflag, baud); 954 955 + qe_iowrite16be(upsmr, &uccp->upsmr); 956 if (soft_uart) { 957 + qe_iowrite16be(supsmr, &uccup->supsmr); 958 + qe_iowrite8(char_length, &uccup->rx_length); 959 960 /* Soft-UART requires a 1X multiplier for TX */ 961 qe_setbrg(qe_port->us_info.rx_clock, baud, 16); ··· 1097 .verify_port = qe_uart_verify_port, 1098 }; 1099 1100 + 1101 + #ifdef CONFIG_PPC32 1102 /* 1103 * Obtain the SOC model number and revision level 1104 * ··· 1184 release_firmware(fw); 1185 } 1186 1187 + static int soft_uart_init(struct platform_device *ofdev) 1188 + { 1189 + struct device_node *np = ofdev->dev.of_node; 1190 + struct qe_firmware_info *qe_fw_info; 1191 + int ret; 1192 + 1193 + if (of_find_property(np, "soft-uart", NULL)) { 1194 + dev_dbg(&ofdev->dev, "using Soft-UART mode\n"); 1195 + soft_uart = 1; 1196 + } else { 1197 + return 0; 1198 + } 1199 + 1200 + qe_fw_info = qe_get_firmware_info(); 1201 + 1202 + /* Check if the firmware has been uploaded. */ 1203 + if (qe_fw_info && strstr(qe_fw_info->id, "Soft-UART")) { 1204 + firmware_loaded = 1; 1205 + } else { 1206 + char filename[32]; 1207 + unsigned int soc; 1208 + unsigned int rev_h; 1209 + unsigned int rev_l; 1210 + 1211 + soc = soc_info(&rev_h, &rev_l); 1212 + if (!soc) { 1213 + dev_err(&ofdev->dev, "unknown CPU model\n"); 1214 + return -ENXIO; 1215 + } 1216 + sprintf(filename, "fsl_qe_ucode_uart_%u_%u%u.bin", 1217 + soc, rev_h, rev_l); 1218 + 1219 + dev_info(&ofdev->dev, "waiting for firmware %s\n", 1220 + filename); 1221 + 1222 + /* 1223 + * We call request_firmware_nowait instead of 1224 + * request_firmware so that the driver can load and 1225 + * initialize the ports without holding up the rest of 1226 + * the kernel. If hotplug support is enabled in the 1227 + * kernel, then we use it. 
1228 + */ 1229 + ret = request_firmware_nowait(THIS_MODULE, 1230 + FW_ACTION_HOTPLUG, filename, &ofdev->dev, 1231 + GFP_KERNEL, &ofdev->dev, uart_firmware_cont); 1232 + if (ret) { 1233 + dev_err(&ofdev->dev, 1234 + "could not load firmware %s\n", 1235 + filename); 1236 + return ret; 1237 + } 1238 + } 1239 + return 0; 1240 + } 1241 + 1242 + #else /* !CONFIG_PPC32 */ 1243 + 1244 + static int soft_uart_init(struct platform_device *ofdev) 1245 + { 1246 + return 0; 1247 + } 1248 + 1249 + #endif 1250 + 1251 + 1252 static int ucc_uart_probe(struct platform_device *ofdev) 1253 { 1254 struct device_node *np = ofdev->dev.of_node; 1255 const char *sprop; /* String OF properties */ 1256 struct uart_qe_port *qe_port = NULL; 1257 struct resource res; 1258 + u32 val; 1259 int ret; 1260 1261 /* 1262 * Determine if we need Soft-UART mode 1263 */ 1264 + ret = soft_uart_init(ofdev); 1265 + if (ret) 1266 + return ret; 1267 1268 qe_port = kzalloc(sizeof(struct uart_qe_port), GFP_KERNEL); 1269 if (!qe_port) { ··· 1270 1271 /* Get the UCC number (device ID) */ 1272 /* UCCs are numbered 1-7 */ 1273 + if (of_property_read_u32(np, "cell-index", &val)) { 1274 + if (of_property_read_u32(np, "device-id", &val)) { 1275 + dev_err(&ofdev->dev, "UCC is unspecified in device tree\n"); 1276 ret = -EINVAL; 1277 goto out_free; 1278 } 1279 } 1280 1281 + if (val < 1 || val > UCC_MAX_NUM) { 1282 + dev_err(&ofdev->dev, "no support for UCC%u\n", val); 1283 ret = -ENODEV; 1284 goto out_free; 1285 } 1286 + qe_port->ucc_num = val - 1; 1287 1288 /* 1289 * In the future, we should not require the BRG to be specified in the ··· 1330 } 1331 1332 /* Get the port number, numbered 0-3 */ 1333 + if (of_property_read_u32(np, "port-number", &val)) { 1334 dev_err(&ofdev->dev, "missing port-number in device tree\n"); 1335 ret = -EINVAL; 1336 goto out_free; 1337 } 1338 + qe_port->port.line = val; 1339 if (qe_port->port.line >= UCC_MAX_UART) { 1340 dev_err(&ofdev->dev, "port-number must be 0-%u\n", 1341 UCC_MAX_UART - 1); ··· 1366 } 1367 } 1368 1369 + if (of_property_read_u32(np, "brg-frequency", &val)) { 1370 dev_err(&ofdev->dev, 1371 "missing brg-frequency in device tree\n"); 1372 ret = -EINVAL; 1373 goto out_np; 1374 } 1375 1376 + if (val) 1377 + qe_port->port.uartclk = val; 1378 else { 1379 + if (!IS_ENABLED(CONFIG_PPC32)) { 1380 + dev_err(&ofdev->dev, 1381 + "invalid brg-frequency in device tree\n"); 1382 + ret = -EINVAL; 1383 + goto out_np; 1384 + } 1385 + 1386 /* 1387 * Older versions of U-Boot do not initialize the brg-frequency 1388 * property, so in this case we assume the BRG frequency is 1389 * half the QE bus frequency. 1390 */ 1391 + if (of_property_read_u32(np, "bus-frequency", &val)) { 1392 dev_err(&ofdev->dev, 1393 "missing QE bus-frequency in device tree\n"); 1394 ret = -EINVAL; 1395 goto out_np; 1396 } 1397 + if (val) 1398 + qe_port->port.uartclk = val / 2; 1399 else { 1400 dev_err(&ofdev->dev, 1401 "invalid QE bus-frequency in device tree\n");
+14
include/dt-bindings/power/mt6765-power.h
···
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + #ifndef _DT_BINDINGS_POWER_MT6765_POWER_H 3 + #define _DT_BINDINGS_POWER_MT6765_POWER_H 4 + 5 + #define MT6765_POWER_DOMAIN_CONN 0 6 + #define MT6765_POWER_DOMAIN_MM 1 7 + #define MT6765_POWER_DOMAIN_MFG_ASYNC 2 8 + #define MT6765_POWER_DOMAIN_ISP 3 9 + #define MT6765_POWER_DOMAIN_MFG 4 10 + #define MT6765_POWER_DOMAIN_MFG_CORE0 5 11 + #define MT6765_POWER_DOMAIN_CAM 6 12 + #define MT6765_POWER_DOMAIN_VCODEC 7 13 + 14 + #endif /* _DT_BINDINGS_POWER_MT6765_POWER_H */
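A consumer node selects one of these domains through the MediaTek scpsys provider (e.g. power-domains = <&scpsys MT6765_POWER_DOMAIN_MM>); assuming the single domain is auto-attached at probe, runtime PM then powers it up and down. A sketch under those assumptions:

#include <linux/pm_runtime.h>

/* Assumes the node's sole power-domains entry was attached to dev. */
static int example_use_mm_domain(struct device *dev)
{
        int ret = pm_runtime_get_sync(dev);     /* powers the domain up */

        if (ret < 0) {
                pm_runtime_put_noidle(dev);
                return ret;
        }
        /* ... touch hardware in the MM domain ... */
        pm_runtime_put(dev);                    /* allows power-down */
        return 0;
}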
+24
include/dt-bindings/power/qcom-rpmpd.h
··· 15 #define SDM845_GFX 7 16 #define SDM845_MSS 8 17 18 /* SDM845 Power Domain performance levels */ 19 #define RPMH_REGULATOR_LEVEL_RETENTION 16 20 #define RPMH_REGULATOR_LEVEL_MIN_SVS 48 21 #define RPMH_REGULATOR_LEVEL_LOW_SVS 64 22 #define RPMH_REGULATOR_LEVEL_SVS 128 23 #define RPMH_REGULATOR_LEVEL_SVS_L1 192 24 #define RPMH_REGULATOR_LEVEL_NOM 256 25 #define RPMH_REGULATOR_LEVEL_NOM_L1 320 26 #define RPMH_REGULATOR_LEVEL_NOM_L2 336
··· 15 #define SDM845_GFX 7 16 #define SDM845_MSS 8 17 18 + /* SM8150 Power Domain Indexes */ 19 + #define SM8150_MSS 0 20 + #define SM8150_EBI 1 21 + #define SM8150_LMX 2 22 + #define SM8150_LCX 3 23 + #define SM8150_GFX 4 24 + #define SM8150_MX 5 25 + #define SM8150_MX_AO 6 26 + #define SM8150_CX 7 27 + #define SM8150_CX_AO 8 28 + #define SM8150_MMCX 9 29 + #define SM8150_MMCX_AO 10 30 + 31 + /* SC7180 Power Domain Indexes */ 32 + #define SC7180_CX 0 33 + #define SC7180_CX_AO 1 34 + #define SC7180_GFX 2 35 + #define SC7180_MX 3 36 + #define SC7180_MX_AO 4 37 + #define SC7180_LMX 5 38 + #define SC7180_LCX 6 39 + #define SC7180_MSS 7 40 + 41 /* SDM845 Power Domain performance levels */ 42 #define RPMH_REGULATOR_LEVEL_RETENTION 16 43 #define RPMH_REGULATOR_LEVEL_MIN_SVS 48 44 #define RPMH_REGULATOR_LEVEL_LOW_SVS 64 45 #define RPMH_REGULATOR_LEVEL_SVS 128 46 #define RPMH_REGULATOR_LEVEL_SVS_L1 192 47 + #define RPMH_REGULATOR_LEVEL_SVS_L2 224 48 #define RPMH_REGULATOR_LEVEL_NOM 256 49 #define RPMH_REGULATOR_LEVEL_NOM_L1 320 50 #define RPMH_REGULATOR_LEVEL_NOM_L2 336
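The RPMH_REGULATOR_LEVEL_* values are performance-state magnitudes consumed through genpd rather than register bits. A minimal sketch, assuming the device is already attached to one of these RPMh domains:

#include <linux/pm_domain.h>
#include <dt-bindings/power/qcom-rpmpd.h>

/* Vote the attached domain up to at least the nominal corner;
 * setting state 0 later drops the vote again. */
static int example_vote_nominal(struct device *dev)
{
        return dev_pm_genpd_set_performance_state(dev,
                                                  RPMH_REGULATOR_LEVEL_NOM);
}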
+91
include/dt-bindings/reset/nuvoton,npcm7xx-reset.h
···
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + // Copyright (c) 2019 Nuvoton Technology corporation. 3 + 4 + #ifndef _DT_BINDINGS_NPCM7XX_RESET_H 5 + #define _DT_BINDINGS_NPCM7XX_RESET_H 6 + 7 + #define NPCM7XX_RESET_IPSRST1 0x20 8 + #define NPCM7XX_RESET_IPSRST2 0x24 9 + #define NPCM7XX_RESET_IPSRST3 0x34 10 + 11 + /* Reset lines on IP1 reset module (NPCM7XX_RESET_IPSRST1) */ 12 + #define NPCM7XX_RESET_FIU3 1 13 + #define NPCM7XX_RESET_UDC1 5 14 + #define NPCM7XX_RESET_EMC1 6 15 + #define NPCM7XX_RESET_UART_2_3 7 16 + #define NPCM7XX_RESET_UDC2 8 17 + #define NPCM7XX_RESET_PECI 9 18 + #define NPCM7XX_RESET_AES 10 19 + #define NPCM7XX_RESET_UART_0_1 11 20 + #define NPCM7XX_RESET_MC 12 21 + #define NPCM7XX_RESET_SMB2 13 22 + #define NPCM7XX_RESET_SMB3 14 23 + #define NPCM7XX_RESET_SMB4 15 24 + #define NPCM7XX_RESET_SMB5 16 25 + #define NPCM7XX_RESET_PWM_M0 18 26 + #define NPCM7XX_RESET_TIMER_0_4 19 27 + #define NPCM7XX_RESET_TIMER_5_9 20 28 + #define NPCM7XX_RESET_EMC2 21 29 + #define NPCM7XX_RESET_UDC4 22 30 + #define NPCM7XX_RESET_UDC5 23 31 + #define NPCM7XX_RESET_UDC6 24 32 + #define NPCM7XX_RESET_UDC3 25 33 + #define NPCM7XX_RESET_ADC 27 34 + #define NPCM7XX_RESET_SMB6 28 35 + #define NPCM7XX_RESET_SMB7 29 36 + #define NPCM7XX_RESET_SMB0 30 37 + #define NPCM7XX_RESET_SMB1 31 38 + 39 + /* Reset lines on IP2 reset module (NPCM7XX_RESET_IPSRST2) */ 40 + #define NPCM7XX_RESET_MFT0 0 41 + #define NPCM7XX_RESET_MFT1 1 42 + #define NPCM7XX_RESET_MFT2 2 43 + #define NPCM7XX_RESET_MFT3 3 44 + #define NPCM7XX_RESET_MFT4 4 45 + #define NPCM7XX_RESET_MFT5 5 46 + #define NPCM7XX_RESET_MFT6 6 47 + #define NPCM7XX_RESET_MFT7 7 48 + #define NPCM7XX_RESET_MMC 8 49 + #define NPCM7XX_RESET_SDHC 9 50 + #define NPCM7XX_RESET_GFX_SYS 10 51 + #define NPCM7XX_RESET_AHB_PCIBRG 11 52 + #define NPCM7XX_RESET_VDMA 12 53 + #define NPCM7XX_RESET_ECE 13 54 + #define NPCM7XX_RESET_VCD 14 55 + #define NPCM7XX_RESET_OTP 16 56 + #define NPCM7XX_RESET_SIOX1 18 57 + #define NPCM7XX_RESET_SIOX2 19 58 + #define NPCM7XX_RESET_3DES 21 59 + #define NPCM7XX_RESET_PSPI1 22 60 + #define NPCM7XX_RESET_PSPI2 23 61 + #define NPCM7XX_RESET_GMAC2 25 62 + #define NPCM7XX_RESET_USB_HOST 26 63 + #define NPCM7XX_RESET_GMAC1 28 64 + #define NPCM7XX_RESET_CP 31 65 + 66 + /* Reset lines on IP3 reset module (NPCM7XX_RESET_IPSRST3) */ 67 + #define NPCM7XX_RESET_PWM_M1 0 68 + #define NPCM7XX_RESET_SMB12 1 69 + #define NPCM7XX_RESET_SPIX 2 70 + #define NPCM7XX_RESET_SMB13 3 71 + #define NPCM7XX_RESET_UDC0 4 72 + #define NPCM7XX_RESET_UDC7 5 73 + #define NPCM7XX_RESET_UDC8 6 74 + #define NPCM7XX_RESET_UDC9 7 75 + #define NPCM7XX_RESET_PCI_MAILBOX 9 76 + #define NPCM7XX_RESET_SMB14 12 77 + #define NPCM7XX_RESET_SHA 13 78 + #define NPCM7XX_RESET_SEC_ECC 14 79 + #define NPCM7XX_RESET_PCIE_RC 15 80 + #define NPCM7XX_RESET_TIMER_10_14 16 81 + #define NPCM7XX_RESET_RNG 17 82 + #define NPCM7XX_RESET_SMB15 18 83 + #define NPCM7XX_RESET_SMB8 19 84 + #define NPCM7XX_RESET_SMB9 20 85 + #define NPCM7XX_RESET_SMB10 21 86 + #define NPCM7XX_RESET_SMB11 22 87 + #define NPCM7XX_RESET_ESPI 23 88 + #define NPCM7XX_RESET_USB_PHY_1 24 89 + #define NPCM7XX_RESET_USB_PHY_2 25 90 + 91 + #endif
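Each line constant pairs with its IPSRST module register in a two-cell resets property (e.g. resets = <&rstc NPCM7XX_RESET_IPSRST2 NPCM7XX_RESET_SDHC>); the consumer then drives the line through the generic reset API. A minimal sketch, delay value arbitrary:

#include <linux/delay.h>
#include <linux/err.h>
#include <linux/reset.h>

static int example_reset_pulse(struct device *dev)
{
        struct reset_control *rst;

        rst = devm_reset_control_get_exclusive(dev, NULL);
        if (IS_ERR(rst))
                return PTR_ERR(rst);

        reset_control_assert(rst);
        udelay(10);
        return reset_control_deassert(rst);
}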
+1
include/linux/cpuhotplug.h
··· 96 CPUHP_AP_OFFLINE, 97 CPUHP_AP_SCHED_STARTING, 98 CPUHP_AP_RCUTREE_DYING, 99 CPUHP_AP_IRQ_GIC_STARTING, 100 CPUHP_AP_IRQ_HIP04_STARTING, 101 CPUHP_AP_IRQ_ARMADA_XP_STARTING,
··· 96 CPUHP_AP_OFFLINE, 97 CPUHP_AP_SCHED_STARTING, 98 CPUHP_AP_RCUTREE_DYING, 99 + CPUHP_AP_CPU_PM_STARTING, 100 CPUHP_AP_IRQ_GIC_STARTING, 101 CPUHP_AP_IRQ_HIP04_STARTING, 102 CPUHP_AP_IRQ_ARMADA_XP_STARTING,
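CPUHP_AP_CPU_PM_STARTING gives CPU PM a per-CPU setup step that runs on the incoming CPU, ahead of the irqchip STARTING steps listed after it. A minimal sketch with hypothetical callbacks:

#include <linux/cpuhotplug.h>

static int example_cpu_pm_starting(unsigned int cpu)
{
        /* Runs on the hotplugged CPU with interrupts disabled. */
        return 0;
}

static int example_cpu_pm_dying(unsigned int cpu)
{
        return 0;
}

static int example_register(void)
{
        return cpuhp_setup_state(CPUHP_AP_CPU_PM_STARTING,
                                 "example/cpu_pm:starting",
                                 example_cpu_pm_starting,
                                 example_cpu_pm_dying);
}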
+7
include/linux/firmware/xlnx-zynqmp.h
··· 48 #define ZYNQMP_PM_CAPABILITY_WAKEUP 0x4U 49 #define ZYNQMP_PM_CAPABILITY_UNUSABLE 0x8U 50 51 /* 52 * Firmware FPGA Manager flags 53 * XILINX_ZYNQMP_PM_FPGA_FULL: FPGA full reconfiguration ··· 82 PM_CLOCK_GETRATE, 83 PM_CLOCK_SETPARENT, 84 PM_CLOCK_GETPARENT, 85 }; 86 87 /* PMU-FW return status codes */ 88 enum pm_ret_status { 89 XST_PM_SUCCESS = 0, 90 XST_PM_INTERNAL = 2000, 91 XST_PM_CONFLICT, 92 XST_PM_NO_ACCESS,
··· 48 #define ZYNQMP_PM_CAPABILITY_WAKEUP 0x4U 49 #define ZYNQMP_PM_CAPABILITY_UNUSABLE 0x8U 50 51 + /* Feature check status */ 52 + #define PM_FEATURE_INVALID -1 53 + #define PM_FEATURE_UNCHECKED 0 54 + 55 /* 56 * Firmware FPGA Manager flags 57 * XILINX_ZYNQMP_PM_FPGA_FULL: FPGA full reconfiguration ··· 78 PM_CLOCK_GETRATE, 79 PM_CLOCK_SETPARENT, 80 PM_CLOCK_GETPARENT, 81 + PM_FEATURE_CHECK = 63, 82 + PM_API_MAX, 83 }; 84 85 /* PMU-FW return status codes */ 86 enum pm_ret_status { 87 XST_PM_SUCCESS = 0, 88 + XST_PM_NO_FEATURE = 19, 89 XST_PM_INTERNAL = 2000, 90 XST_PM_CONFLICT, 91 XST_PM_NO_ACCESS,
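PM_FEATURE_CHECK plus the PM_FEATURE_UNCHECKED/PM_FEATURE_INVALID markers imply a per-API version cache on the kernel side, so the firmware is queried at most once per API id. A sketch of that caching idea only; firmware_query_version() is a hypothetical stand-in for the real EEMI call:

#include <linux/firmware/xlnx-zynqmp.h>
#include <linux/types.h>

static int firmware_query_version(u32 api_id);  /* hypothetical */

/* Zero-initialized entries read as PM_FEATURE_UNCHECKED (0). */
static int example_feature_cache[PM_API_MAX];

static int example_feature_check(u32 api_id)
{
        int ver;

        if (api_id >= PM_API_MAX)
                return PM_FEATURE_INVALID;

        if (example_feature_cache[api_id] != PM_FEATURE_UNCHECKED)
                return example_feature_cache[api_id];

        ver = firmware_query_version(api_id);
        example_feature_cache[api_id] = ver < 0 ? PM_FEATURE_INVALID : ver;
        return example_feature_cache[api_id];
}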
+8
include/linux/of.h
··· 351 int *lenp); 352 extern struct device_node *of_get_cpu_node(int cpu, unsigned int *thread); 353 extern struct device_node *of_get_next_cpu_node(struct device_node *prev); 354 355 #define for_each_property_of_node(dn, pp) \ 356 for (pp = dn->properties; pp != NULL; pp = pp->next) ··· 763 } 764 765 static inline struct device_node *of_get_next_cpu_node(struct device_node *prev) 766 { 767 return NULL; 768 }
··· 351 int *lenp); 352 extern struct device_node *of_get_cpu_node(int cpu, unsigned int *thread); 353 extern struct device_node *of_get_next_cpu_node(struct device_node *prev); 354 + extern struct device_node *of_get_cpu_state_node(struct device_node *cpu_node, 355 + int index); 356 357 #define for_each_property_of_node(dn, pp) \ 358 for (pp = dn->properties; pp != NULL; pp = pp->next) ··· 761 } 762 763 static inline struct device_node *of_get_next_cpu_node(struct device_node *prev) 764 + { 765 + return NULL; 766 + } 767 + 768 + static inline struct device_node *of_get_cpu_state_node(struct device_node *cpu_node, 769 + int index) 770 { 771 return NULL; 772 }
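of_get_cpu_state_node() hides whether a CPU's idle states come from a flat cpu-idle-states list or from the hierarchical PSCI power-domain layout. A minimal sketch that counts a CPU's states:

#include <linux/of.h>

static int example_count_idle_states(struct device_node *cpu_node)
{
        struct device_node *state;
        int i = 0;

        while ((state = of_get_cpu_state_node(cpu_node, i))) {
                of_node_put(state);
                i++;
        }
        return i;
}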
+1
include/linux/platform_data/ti-sysc.h
··· 49 s8 emufree_shift; 50 }; 51 52 #define SYSC_QUIRK_FORCE_MSTANDBY BIT(20) 53 #define SYSC_MODULE_QUIRK_AESS BIT(19) 54 #define SYSC_MODULE_QUIRK_SGX BIT(18)
··· 49 s8 emufree_shift; 50 }; 51 52 + #define SYSC_QUIRK_CLKDM_NOAUTO BIT(21) 53 #define SYSC_QUIRK_FORCE_MSTANDBY BIT(20) 54 #define SYSC_MODULE_QUIRK_AESS BIT(19) 55 #define SYSC_MODULE_QUIRK_SGX BIT(18)
+8
include/linux/pm_domain.h
··· 284 int of_genpd_add_device(struct of_phandle_args *args, struct device *dev); 285 int of_genpd_add_subdomain(struct of_phandle_args *parent_spec, 286 struct of_phandle_args *subdomain_spec); 287 struct generic_pm_domain *of_genpd_remove_last(struct device_node *np); 288 int of_genpd_parse_idle_states(struct device_node *dn, 289 struct genpd_power_state **states, int *n); ··· 320 321 static inline int of_genpd_add_subdomain(struct of_phandle_args *parent_spec, 322 struct of_phandle_args *subdomain_spec) 323 { 324 return -ENODEV; 325 }
··· 284 int of_genpd_add_device(struct of_phandle_args *args, struct device *dev); 285 int of_genpd_add_subdomain(struct of_phandle_args *parent_spec, 286 struct of_phandle_args *subdomain_spec); 287 + int of_genpd_remove_subdomain(struct of_phandle_args *parent_spec, 288 + struct of_phandle_args *subdomain_spec); 289 struct generic_pm_domain *of_genpd_remove_last(struct device_node *np); 290 int of_genpd_parse_idle_states(struct device_node *dn, 291 struct genpd_power_state **states, int *n); ··· 318 319 static inline int of_genpd_add_subdomain(struct of_phandle_args *parent_spec, 320 struct of_phandle_args *subdomain_spec) 321 + { 322 + return -ENODEV; 323 + } 324 + 325 + static inline int of_genpd_remove_subdomain(struct of_phandle_args *parent_spec, 326 + struct of_phandle_args *subdomain_spec) 327 { 328 return -ENODEV; 329 }
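of_genpd_remove_subdomain() is the teardown counterpart of of_genpd_add_subdomain(), so a provider can undo the parent/child link symmetrically. A minimal sketch, assuming both phandle specs were parsed earlier:

#include <linux/of.h>
#include <linux/pm_domain.h>

static int example_link(struct of_phandle_args *parent,
                        struct of_phandle_args *child)
{
        return of_genpd_add_subdomain(parent, child);
}

static int example_unlink(struct of_phandle_args *parent,
                          struct of_phandle_args *child)
{
        return of_genpd_remove_subdomain(parent, child);
}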
+2
include/linux/psci.h
··· 18 19 int psci_cpu_suspend_enter(u32 state); 20 bool psci_power_state_is_valid(u32 state); 21 22 enum smccc_version { 23 SMCCC_VERSION_1_0,
··· 18 19 int psci_cpu_suspend_enter(u32 state); 20 bool psci_power_state_is_valid(u32 state); 21 + int psci_set_osi_mode(void); 22 + bool psci_has_osi_support(void); 23 24 enum smccc_version { 25 SMCCC_VERSION_1_0,
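The two PSCI additions back the OS-initiated cpuidle support in this merge: check that the firmware advertises OSI, then switch the suspend mode. A minimal sketch:

#include <linux/errno.h>
#include <linux/psci.h>

static int example_enable_osi(void)
{
        if (!psci_has_osi_support())
                return -EOPNOTSUPP;

        return psci_set_osi_mode();
}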
+70 -53
include/linux/qcom_scm.h
··· 1 /* SPDX-License-Identifier: GPL-2.0-only */ 2 - /* Copyright (c) 2010-2015, 2018, The Linux Foundation. All rights reserved. 3 * Copyright (C) 2015 Linaro Ltd. 4 */ 5 #ifndef __QCOM_SCM_H ··· 55 #define QCOM_SCM_PERM_RWX (QCOM_SCM_PERM_RW | QCOM_SCM_PERM_EXEC) 56 57 #if IS_ENABLED(CONFIG_QCOM_SCM) 58 extern int qcom_scm_set_cold_boot_addr(void *entry, const cpumask_t *cpus); 59 extern int qcom_scm_set_warm_boot_addr(void *entry, const cpumask_t *cpus); 60 - extern bool qcom_scm_is_available(void); 61 - extern bool qcom_scm_hdcp_available(void); 62 - extern int qcom_scm_hdcp_req(struct qcom_scm_hdcp_req *req, u32 req_cnt, 63 - u32 *resp); 64 - extern bool qcom_scm_ocmem_lock_available(void); 65 - extern int qcom_scm_ocmem_lock(enum qcom_scm_ocmem_client id, u32 offset, 66 - u32 size, u32 mode); 67 - extern int qcom_scm_ocmem_unlock(enum qcom_scm_ocmem_client id, u32 offset, 68 - u32 size); 69 - extern bool qcom_scm_pas_supported(u32 peripheral); 70 extern int qcom_scm_pas_init_image(u32 peripheral, const void *metadata, 71 size_t size); 72 extern int qcom_scm_pas_mem_setup(u32 peripheral, phys_addr_t addr, 73 phys_addr_t size); 74 extern int qcom_scm_pas_auth_and_reset(u32 peripheral); 75 extern int qcom_scm_pas_shutdown(u32 peripheral); 76 - extern int qcom_scm_assign_mem(phys_addr_t mem_addr, size_t mem_sz, 77 - unsigned int *src, 78 - const struct qcom_scm_vmperm *newvm, 79 - unsigned int dest_cnt); 80 - extern void qcom_scm_cpu_power_down(u32 flags); 81 - extern u32 qcom_scm_get_version(void); 82 - extern int qcom_scm_set_remote_state(u32 state, u32 id); 83 extern bool qcom_scm_restore_sec_cfg_available(void); 84 extern int qcom_scm_restore_sec_cfg(u32 device_id, u32 spare); 85 extern int qcom_scm_iommu_secure_ptbl_size(u32 spare, size_t *size); 86 extern int qcom_scm_iommu_secure_ptbl_init(u64 addr, u32 size, u32 spare); 87 extern int qcom_scm_qsmmu500_wait_safe_toggle(bool en); 88 - extern int qcom_scm_io_readl(phys_addr_t addr, unsigned int *val); 89 - extern int qcom_scm_io_writel(phys_addr_t addr, unsigned int val); 90 #else 91 92 #include <linux/errno.h> 93 94 - static inline 95 - int qcom_scm_set_cold_boot_addr(void *entry, const cpumask_t *cpus) 96 - { 97 - return -ENODEV; 98 - } 99 - static inline 100 - int qcom_scm_set_warm_boot_addr(void *entry, const cpumask_t *cpus) 101 - { 102 - return -ENODEV; 103 - } 104 static inline bool qcom_scm_is_available(void) { return false; } 105 static inline bool qcom_scm_hdcp_available(void) { return false; } 106 static inline int qcom_scm_hdcp_req(struct qcom_scm_hdcp_req *req, u32 req_cnt, 107 - u32 *resp) { return -ENODEV; } 108 - static inline bool qcom_scm_pas_supported(u32 peripheral) { return false; } 109 - static inline int qcom_scm_pas_init_image(u32 peripheral, const void *metadata, 110 - size_t size) { return -ENODEV; } 111 - static inline int qcom_scm_pas_mem_setup(u32 peripheral, phys_addr_t addr, 112 - phys_addr_t size) { return -ENODEV; } 113 - static inline int 114 - qcom_scm_pas_auth_and_reset(u32 peripheral) { return -ENODEV; } 115 - static inline int qcom_scm_pas_shutdown(u32 peripheral) { return -ENODEV; } 116 - static inline int qcom_scm_assign_mem(phys_addr_t mem_addr, size_t mem_sz, 117 - unsigned int *src, 118 - const struct qcom_scm_vmperm *newvm, 119 - unsigned int dest_cnt) { return -ENODEV; } 120 - static inline void qcom_scm_cpu_power_down(u32 flags) {} 121 - static inline u32 qcom_scm_get_version(void) { return 0; } 122 - static inline u32 123 - qcom_scm_set_remote_state(u32 state,u32 id) { return -ENODEV; 
} 124 - static inline int qcom_scm_restore_sec_cfg(u32 device_id, u32 spare) { return -ENODEV; } 125 - static inline int qcom_scm_iommu_secure_ptbl_size(u32 spare, size_t *size) { return -ENODEV; } 126 - static inline int qcom_scm_iommu_secure_ptbl_init(u64 addr, u32 size, u32 spare) { return -ENODEV; } 127 - static inline int qcom_scm_qsmmu500_wait_safe_toggle(bool en) { return -ENODEV; } 128 - static inline int qcom_scm_io_readl(phys_addr_t addr, unsigned int *val) { return -ENODEV; } 129 - static inline int qcom_scm_io_writel(phys_addr_t addr, unsigned int val) { return -ENODEV; } 130 #endif 131 #endif
··· 1 /* SPDX-License-Identifier: GPL-2.0-only */ 2 + /* Copyright (c) 2010-2015, 2018-2019 The Linux Foundation. All rights reserved. 3 * Copyright (C) 2015 Linaro Ltd. 4 */ 5 #ifndef __QCOM_SCM_H ··· 55 #define QCOM_SCM_PERM_RWX (QCOM_SCM_PERM_RW | QCOM_SCM_PERM_EXEC) 56 57 #if IS_ENABLED(CONFIG_QCOM_SCM) 58 + extern bool qcom_scm_is_available(void); 59 + 60 extern int qcom_scm_set_cold_boot_addr(void *entry, const cpumask_t *cpus); 61 extern int qcom_scm_set_warm_boot_addr(void *entry, const cpumask_t *cpus); 62 + extern void qcom_scm_cpu_power_down(u32 flags); 63 + extern int qcom_scm_set_remote_state(u32 state, u32 id); 64 + 65 extern int qcom_scm_pas_init_image(u32 peripheral, const void *metadata, 66 size_t size); 67 extern int qcom_scm_pas_mem_setup(u32 peripheral, phys_addr_t addr, 68 phys_addr_t size); 69 extern int qcom_scm_pas_auth_and_reset(u32 peripheral); 70 extern int qcom_scm_pas_shutdown(u32 peripheral); 71 + extern bool qcom_scm_pas_supported(u32 peripheral); 72 + 73 + extern int qcom_scm_io_readl(phys_addr_t addr, unsigned int *val); 74 + extern int qcom_scm_io_writel(phys_addr_t addr, unsigned int val); 75 + 76 extern bool qcom_scm_restore_sec_cfg_available(void); 77 extern int qcom_scm_restore_sec_cfg(u32 device_id, u32 spare); 78 extern int qcom_scm_iommu_secure_ptbl_size(u32 spare, size_t *size); 79 extern int qcom_scm_iommu_secure_ptbl_init(u64 addr, u32 size, u32 spare); 80 + extern int qcom_scm_assign_mem(phys_addr_t mem_addr, size_t mem_sz, 81 + unsigned int *src, 82 + const struct qcom_scm_vmperm *newvm, 83 + unsigned int dest_cnt); 84 + 85 + extern bool qcom_scm_ocmem_lock_available(void); 86 + extern int qcom_scm_ocmem_lock(enum qcom_scm_ocmem_client id, u32 offset, 87 + u32 size, u32 mode); 88 + extern int qcom_scm_ocmem_unlock(enum qcom_scm_ocmem_client id, u32 offset, 89 + u32 size); 90 + 91 + extern bool qcom_scm_hdcp_available(void); 92 + extern int qcom_scm_hdcp_req(struct qcom_scm_hdcp_req *req, u32 req_cnt, 93 + u32 *resp); 94 + 95 extern int qcom_scm_qsmmu500_wait_safe_toggle(bool en); 96 #else 97 98 #include <linux/errno.h> 99 100 static inline bool qcom_scm_is_available(void) { return false; } 101 + 102 + static inline int qcom_scm_set_cold_boot_addr(void *entry, 103 + const cpumask_t *cpus) { return -ENODEV; } 104 + static inline int qcom_scm_set_warm_boot_addr(void *entry, 105 + const cpumask_t *cpus) { return -ENODEV; } 106 + static inline void qcom_scm_cpu_power_down(u32 flags) {} 107 + static inline u32 qcom_scm_set_remote_state(u32 state,u32 id) 108 + { return -ENODEV; } 109 + 110 + static inline int qcom_scm_pas_init_image(u32 peripheral, const void *metadata, 111 + size_t size) { return -ENODEV; } 112 + static inline int qcom_scm_pas_mem_setup(u32 peripheral, phys_addr_t addr, 113 + phys_addr_t size) { return -ENODEV; } 114 + static inline int qcom_scm_pas_auth_and_reset(u32 peripheral) 115 + { return -ENODEV; } 116 + static inline int qcom_scm_pas_shutdown(u32 peripheral) { return -ENODEV; } 117 + static inline bool qcom_scm_pas_supported(u32 peripheral) { return false; } 118 + 119 + static inline int qcom_scm_io_readl(phys_addr_t addr, unsigned int *val) 120 + { return -ENODEV; } 121 + static inline int qcom_scm_io_writel(phys_addr_t addr, unsigned int val) 122 + { return -ENODEV; } 123 + 124 + static inline bool qcom_scm_restore_sec_cfg_available(void) { return false; } 125 + static inline int qcom_scm_restore_sec_cfg(u32 device_id, u32 spare) 126 + { return -ENODEV; } 127 + static inline int qcom_scm_iommu_secure_ptbl_size(u32 spare, 
size_t *size) 128 + { return -ENODEV; } 129 + static inline int qcom_scm_iommu_secure_ptbl_init(u64 addr, u32 size, u32 spare) 130 + { return -ENODEV; } 131 + static inline int qcom_scm_assign_mem(phys_addr_t mem_addr, size_t mem_sz, 132 + unsigned int *src, const struct qcom_scm_vmperm *newvm, 133 + unsigned int dest_cnt) { return -ENODEV; } 134 + 135 + static inline bool qcom_scm_ocmem_lock_available(void) { return false; } 136 + static inline int qcom_scm_ocmem_lock(enum qcom_scm_ocmem_client id, u32 offset, 137 + u32 size, u32 mode) { return -ENODEV; } 138 + static inline int qcom_scm_ocmem_unlock(enum qcom_scm_ocmem_client id, 139 + u32 offset, u32 size) { return -ENODEV; } 140 + 141 static inline bool qcom_scm_hdcp_available(void) { return false; } 142 static inline int qcom_scm_hdcp_req(struct qcom_scm_hdcp_req *req, u32 req_cnt, 143 + u32 *resp) { return -ENODEV; } 144 + 145 + static inline int qcom_scm_qsmmu500_wait_safe_toggle(bool en) 146 + { return -ENODEV; } 147 #endif 148 #endif
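With every entry point stubbed for !CONFIG_QCOM_SCM, callers stay buildable either way; a common pattern is to gate probe on availability and defer until the SCM driver is up. A minimal sketch:

#include <linux/errno.h>
#include <linux/qcom_scm.h>

static int example_probe_step(void)
{
        if (!qcom_scm_is_available())
                return -EPROBE_DEFER;

        return 0;
}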
+4 -1
include/linux/scmi_protocol.h
··· 257 struct scmi_device { 258 u32 id; 259 u8 protocol_id; 260 struct device dev; 261 struct scmi_handle *handle; 262 }; ··· 265 #define to_scmi_dev(d) container_of(d, struct scmi_device, dev) 266 267 struct scmi_device * 268 - scmi_device_create(struct device_node *np, struct device *parent, int protocol); 269 void scmi_device_destroy(struct scmi_device *scmi_dev); 270 271 struct scmi_device_id { 272 u8 protocol_id; 273 }; 274 275 struct scmi_driver {
··· 257 struct scmi_device { 258 u32 id; 259 u8 protocol_id; 260 + const char *name; 261 struct device dev; 262 struct scmi_handle *handle; 263 }; ··· 264 #define to_scmi_dev(d) container_of(d, struct scmi_device, dev) 265 266 struct scmi_device * 267 + scmi_device_create(struct device_node *np, struct device *parent, int protocol, 268 + const char *name); 269 void scmi_device_destroy(struct scmi_device *scmi_dev); 270 271 struct scmi_device_id { 272 u8 protocol_id; 273 + const char *name; 274 }; 275 276 struct scmi_driver {
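The new name field lets the SCMI core instantiate several named devices per protocol and lets a driver match one specifically. A sketch of an id table; "cpufreq" is an assumed device name:

#include <linux/scmi_protocol.h>

static const struct scmi_device_id example_id_table[] = {
        { SCMI_PROTOCOL_PERF, "cpufreq" },      /* assumed name */
        { },
};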
+1 -1
include/linux/soc/samsung/exynos-pmu.h
··· 3 * Copyright (c) 2014 Samsung Electronics Co., Ltd. 4 * http://www.samsung.com 5 * 6 - * Header for EXYNOS PMU Driver support 7 */ 8 9 #ifndef __LINUX_SOC_EXYNOS_PMU_H
··· 3 * Copyright (c) 2014 Samsung Electronics Co., Ltd. 4 * http://www.samsung.com 5 * 6 + * Header for Exynos PMU Driver support 7 */ 8 9 #ifndef __LINUX_SOC_EXYNOS_PMU_H
+8 -8
include/linux/soc/samsung/exynos-regs-pmu.h
··· 3 * Copyright (c) 2010-2015 Samsung Electronics Co., Ltd. 4 * http://www.samsung.com 5 * 6 - * EXYNOS - Power management unit definition 7 * 8 * Notice: 9 * This is not a list of all Exynos Power Management Unit SFRs. ··· 185 /* Only for S5Pv210 */ 186 #define S5PV210_EINT_WAKEUP_MASK 0xC004 187 188 - /* Only for EXYNOS4210 */ 189 #define S5P_CMU_CLKSTOP_LCD1_LOWPWR 0x1154 190 #define S5P_CMU_RESET_LCD1_LOWPWR 0x1174 191 #define S5P_MODIMIF_MEM_LOWPWR 0x11C4 ··· 193 #define S5P_SATA_MEM_LOWPWR 0x11E4 194 #define S5P_LCD1_LOWPWR 0x1394 195 196 - /* Only for EXYNOS4x12 */ 197 #define S5P_ISP_ARM_LOWPWR 0x1050 198 #define S5P_DIS_IRQ_ISP_ARM_LOCAL_LOWPWR 0x1054 199 #define S5P_DIS_IRQ_ISP_ARM_CENTRAL_LOWPWR 0x1058 ··· 234 #define S5P_SECSS_MEM_OPTION 0x2EC8 235 #define S5P_ROTATOR_MEM_OPTION 0x2F48 236 237 - /* Only for EXYNOS4412 */ 238 #define S5P_ARM_CORE2_LOWPWR 0x1020 239 #define S5P_DIS_IRQ_CORE2 0x1024 240 #define S5P_DIS_IRQ_CENTRAL2 0x1028 ··· 242 #define S5P_DIS_IRQ_CORE3 0x1034 243 #define S5P_DIS_IRQ_CENTRAL3 0x1038 244 245 - /* Only for EXYNOS3XXX */ 246 #define EXYNOS3_ARM_CORE0_SYS_PWR_REG 0x1000 247 #define EXYNOS3_DIS_IRQ_ARM_CORE0_LOCAL_SYS_PWR_REG 0x1004 248 #define EXYNOS3_DIS_IRQ_ARM_CORE0_CENTRAL_SYS_PWR_REG 0x1008 ··· 347 #define EXYNOS3_OPTION_USE_SC_FEEDBACK (1 << 1) 348 #define EXYNOS3_OPTION_SKIP_DEACTIVATE_ACEACP_IN_PWDN (1 << 7) 349 350 - /* For EXYNOS5 */ 351 352 #define EXYNOS5_AUTO_WDTRESET_DISABLE 0x0408 353 #define EXYNOS5_MASK_WDTRESET_REQUEST 0x040C ··· 484 485 #define EXYNOS5420_SWRESET_KFC_SEL 0x3 486 487 - /* Only for EXYNOS5420 */ 488 #define EXYNOS5420_L2RSTDISABLE_VALUE BIT(3) 489 490 #define EXYNOS5420_LPI_MASK 0x0004 ··· 645 | EXYNOS5420_KFC_USE_STANDBY_WFI2 \ 646 | EXYNOS5420_KFC_USE_STANDBY_WFI3) 647 648 - /* For EXYNOS5433 */ 649 #define EXYNOS5433_EINT_WAKEUP_MASK (0x060C) 650 #define EXYNOS5433_USBHOST30_PHY_CONTROL (0x0728) 651 #define EXYNOS5433_PAD_RETENTION_AUD_OPTION (0x3028)
··· 3 * Copyright (c) 2010-2015 Samsung Electronics Co., Ltd. 4 * http://www.samsung.com 5 * 6 + * Exynos - Power management unit definition 7 * 8 * Notice: 9 * This is not a list of all Exynos Power Management Unit SFRs. ··· 185 /* Only for S5Pv210 */ 186 #define S5PV210_EINT_WAKEUP_MASK 0xC004 187 188 + /* Only for Exynos4210 */ 189 #define S5P_CMU_CLKSTOP_LCD1_LOWPWR 0x1154 190 #define S5P_CMU_RESET_LCD1_LOWPWR 0x1174 191 #define S5P_MODIMIF_MEM_LOWPWR 0x11C4 ··· 193 #define S5P_SATA_MEM_LOWPWR 0x11E4 194 #define S5P_LCD1_LOWPWR 0x1394 195 196 + /* Only for Exynos4x12 */ 197 #define S5P_ISP_ARM_LOWPWR 0x1050 198 #define S5P_DIS_IRQ_ISP_ARM_LOCAL_LOWPWR 0x1054 199 #define S5P_DIS_IRQ_ISP_ARM_CENTRAL_LOWPWR 0x1058 ··· 234 #define S5P_SECSS_MEM_OPTION 0x2EC8 235 #define S5P_ROTATOR_MEM_OPTION 0x2F48 236 237 + /* Only for Exynos4412 */ 238 #define S5P_ARM_CORE2_LOWPWR 0x1020 239 #define S5P_DIS_IRQ_CORE2 0x1024 240 #define S5P_DIS_IRQ_CENTRAL2 0x1028 ··· 242 #define S5P_DIS_IRQ_CORE3 0x1034 243 #define S5P_DIS_IRQ_CENTRAL3 0x1038 244 245 + /* Only for Exynos3XXX */ 246 #define EXYNOS3_ARM_CORE0_SYS_PWR_REG 0x1000 247 #define EXYNOS3_DIS_IRQ_ARM_CORE0_LOCAL_SYS_PWR_REG 0x1004 248 #define EXYNOS3_DIS_IRQ_ARM_CORE0_CENTRAL_SYS_PWR_REG 0x1008 ··· 347 #define EXYNOS3_OPTION_USE_SC_FEEDBACK (1 << 1) 348 #define EXYNOS3_OPTION_SKIP_DEACTIVATE_ACEACP_IN_PWDN (1 << 7) 349 350 + /* For Exynos5 */ 351 352 #define EXYNOS5_AUTO_WDTRESET_DISABLE 0x0408 353 #define EXYNOS5_MASK_WDTRESET_REQUEST 0x040C ··· 484 485 #define EXYNOS5420_SWRESET_KFC_SEL 0x3 486 487 + /* Only for Exynos5420 */ 488 #define EXYNOS5420_L2RSTDISABLE_VALUE BIT(3) 489 490 #define EXYNOS5420_LPI_MASK 0x0004 ··· 645 | EXYNOS5420_KFC_USE_STANDBY_WFI2 \ 646 | EXYNOS5420_KFC_USE_STANDBY_WFI3) 647 648 + /* For Exynos5433 */ 649 #define EXYNOS5433_EINT_WAKEUP_MASK (0x060C) 650 #define EXYNOS5433_USBHOST30_PHY_CONTROL (0x0728) 651 #define EXYNOS5433_PAD_RETENTION_AUD_OPTION (0x3028)
+171
include/soc/fsl/cpm.h
···
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + #ifndef __CPM_H 3 + #define __CPM_H 4 + 5 + #include <linux/compiler.h> 6 + #include <linux/types.h> 7 + #include <linux/errno.h> 8 + #include <linux/of.h> 9 + #include <soc/fsl/qe/qe.h> 10 + 11 + /* 12 + * SPI Parameter RAM common to QE and CPM. 13 + */ 14 + struct spi_pram { 15 + __be16 rbase; /* Rx Buffer descriptor base address */ 16 + __be16 tbase; /* Tx Buffer descriptor base address */ 17 + u8 rfcr; /* Rx function code */ 18 + u8 tfcr; /* Tx function code */ 19 + __be16 mrblr; /* Max receive buffer length */ 20 + __be32 rstate; /* Internal */ 21 + __be32 rdp; /* Internal */ 22 + __be16 rbptr; /* Internal */ 23 + __be16 rbc; /* Internal */ 24 + __be32 rxtmp; /* Internal */ 25 + __be32 tstate; /* Internal */ 26 + __be32 tdp; /* Internal */ 27 + __be16 tbptr; /* Internal */ 28 + __be16 tbc; /* Internal */ 29 + __be32 txtmp; /* Internal */ 30 + __be32 res; /* Tx temp. */ 31 + __be16 rpbase; /* Relocation pointer (CPM1 only) */ 32 + __be16 res1; /* Reserved */ 33 + }; 34 + 35 + /* 36 + * USB Controller pram common to QE and CPM. 37 + */ 38 + struct usb_ctlr { 39 + u8 usb_usmod; 40 + u8 usb_usadr; 41 + u8 usb_uscom; 42 + u8 res1[1]; 43 + __be16 usb_usep[4]; 44 + u8 res2[4]; 45 + __be16 usb_usber; 46 + u8 res3[2]; 47 + __be16 usb_usbmr; 48 + u8 res4[1]; 49 + u8 usb_usbs; 50 + /* Fields down below are QE-only */ 51 + __be16 usb_ussft; 52 + u8 res5[2]; 53 + __be16 usb_usfrn; 54 + u8 res6[0x22]; 55 + } __attribute__ ((packed)); 56 + 57 + /* 58 + * Function code bits, usually generic to devices. 59 + */ 60 + #ifdef CONFIG_CPM1 61 + #define CPMFCR_GBL ((u_char)0x00) /* Flag doesn't exist in CPM1 */ 62 + #define CPMFCR_TC2 ((u_char)0x00) /* Flag doesn't exist in CPM1 */ 63 + #define CPMFCR_DTB ((u_char)0x00) /* Flag doesn't exist in CPM1 */ 64 + #define CPMFCR_BDB ((u_char)0x00) /* Flag doesn't exist in CPM1 */ 65 + #else 66 + #define CPMFCR_GBL ((u_char)0x20) /* Set memory snooping */ 67 + #define CPMFCR_TC2 ((u_char)0x04) /* Transfer code 2 value */ 68 + #define CPMFCR_DTB ((u_char)0x02) /* Use local bus for data when set */ 69 + #define CPMFCR_BDB ((u_char)0x01) /* Use local bus for BD when set */ 70 + #endif 71 + #define CPMFCR_EB ((u_char)0x10) /* Set big endian byte order */ 72 + 73 + /* Opcodes common to CPM1 and CPM2 74 + */ 75 + #define CPM_CR_INIT_TRX ((ushort)0x0000) 76 + #define CPM_CR_INIT_RX ((ushort)0x0001) 77 + #define CPM_CR_INIT_TX ((ushort)0x0002) 78 + #define CPM_CR_HUNT_MODE ((ushort)0x0003) 79 + #define CPM_CR_STOP_TX ((ushort)0x0004) 80 + #define CPM_CR_GRA_STOP_TX ((ushort)0x0005) 81 + #define CPM_CR_RESTART_TX ((ushort)0x0006) 82 + #define CPM_CR_CLOSE_RX_BD ((ushort)0x0007) 83 + #define CPM_CR_SET_GADDR ((ushort)0x0008) 84 + #define CPM_CR_SET_TIMER ((ushort)0x0008) 85 + #define CPM_CR_STOP_IDMA ((ushort)0x000b) 86 + 87 + /* Buffer descriptors used by many of the CPM protocols. 
*/ 88 + typedef struct cpm_buf_desc { 89 + ushort cbd_sc; /* Status and Control */ 90 + ushort cbd_datlen; /* Data length in buffer */ 91 + uint cbd_bufaddr; /* Buffer address in host memory */ 92 + } cbd_t; 93 + 94 + /* Buffer descriptor control/status used by serial 95 + */ 96 + 97 + #define BD_SC_EMPTY (0x8000) /* Receive is empty */ 98 + #define BD_SC_READY (0x8000) /* Transmit is ready */ 99 + #define BD_SC_WRAP (0x2000) /* Last buffer descriptor */ 100 + #define BD_SC_INTRPT (0x1000) /* Interrupt on change */ 101 + #define BD_SC_LAST (0x0800) /* Last buffer in frame */ 102 + #define BD_SC_TC (0x0400) /* Transmit CRC */ 103 + #define BD_SC_CM (0x0200) /* Continuous mode */ 104 + #define BD_SC_ID (0x0100) /* Rec'd too many idles */ 105 + #define BD_SC_P (0x0100) /* xmt preamble */ 106 + #define BD_SC_BR (0x0020) /* Break received */ 107 + #define BD_SC_FR (0x0010) /* Framing error */ 108 + #define BD_SC_PR (0x0008) /* Parity error */ 109 + #define BD_SC_NAK (0x0004) /* NAK - did not respond */ 110 + #define BD_SC_OV (0x0002) /* Overrun */ 111 + #define BD_SC_UN (0x0002) /* Underrun */ 112 + #define BD_SC_CD (0x0001) /* */ 113 + #define BD_SC_CL (0x0001) /* Collision */ 114 + 115 + /* Buffer descriptor control/status used by Ethernet receive. 116 + * Common to SCC and FCC. 117 + */ 118 + #define BD_ENET_RX_EMPTY (0x8000) 119 + #define BD_ENET_RX_WRAP (0x2000) 120 + #define BD_ENET_RX_INTR (0x1000) 121 + #define BD_ENET_RX_LAST (0x0800) 122 + #define BD_ENET_RX_FIRST (0x0400) 123 + #define BD_ENET_RX_MISS (0x0100) 124 + #define BD_ENET_RX_BC (0x0080) /* FCC Only */ 125 + #define BD_ENET_RX_MC (0x0040) /* FCC Only */ 126 + #define BD_ENET_RX_LG (0x0020) 127 + #define BD_ENET_RX_NO (0x0010) 128 + #define BD_ENET_RX_SH (0x0008) 129 + #define BD_ENET_RX_CR (0x0004) 130 + #define BD_ENET_RX_OV (0x0002) 131 + #define BD_ENET_RX_CL (0x0001) 132 + #define BD_ENET_RX_STATS (0x01ff) /* All status bits */ 133 + 134 + /* Buffer descriptor control/status used by Ethernet transmit. 135 + * Common to SCC and FCC. 136 + */ 137 + #define BD_ENET_TX_READY (0x8000) 138 + #define BD_ENET_TX_PAD (0x4000) 139 + #define BD_ENET_TX_WRAP (0x2000) 140 + #define BD_ENET_TX_INTR (0x1000) 141 + #define BD_ENET_TX_LAST (0x0800) 142 + #define BD_ENET_TX_TC (0x0400) 143 + #define BD_ENET_TX_DEF (0x0200) 144 + #define BD_ENET_TX_HB (0x0100) 145 + #define BD_ENET_TX_LC (0x0080) 146 + #define BD_ENET_TX_RL (0x0040) 147 + #define BD_ENET_TX_RCMASK (0x003c) 148 + #define BD_ENET_TX_UN (0x0002) 149 + #define BD_ENET_TX_CSL (0x0001) 150 + #define BD_ENET_TX_STATS (0x03ff) /* All status bits */ 151 + 152 + /* Buffer descriptor control/status used by Transparent mode SCC. 153 + */ 154 + #define BD_SCC_TX_LAST (0x0800) 155 + 156 + /* Buffer descriptor control/status used by I2C. 157 + */ 158 + #define BD_I2C_START (0x0400) 159 + 160 + #ifdef CONFIG_CPM 161 + int cpm_command(u32 command, u8 opcode); 162 + #else 163 + static inline int cpm_command(u32 command, u8 opcode) 164 + { 165 + return -ENOSYS; 166 + } 167 + #endif /* CONFIG_CPM */ 168 + 169 + int cpm2_gpiochip_add32(struct device *dev); 170 + 171 + #endif
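The BD_SC_* values above carry the receive-side error bits shared by CPM and QE serial. A minimal sketch testing them (plain field access for brevity; memory-mapped descriptors go through the I/O accessors):

#include <linux/types.h>
#include <soc/fsl/cpm.h>

/* True if the received BD reports break, framing, parity or overrun. */
static bool example_rx_bd_has_error(const cbd_t *bd)
{
        return bd->cbd_sc & (BD_SC_BR | BD_SC_FR | BD_SC_PR | BD_SC_OV);
}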
+37 -22
include/soc/fsl/qe/qe.h
··· 17 #include <linux/spinlock.h> 18 #include <linux/errno.h> 19 #include <linux/err.h> 20 - #include <asm/cpm.h> 21 #include <soc/fsl/qe/immap_qe.h> 22 #include <linux/of.h> 23 #include <linux/of_address.h> ··· 98 int cpm_muram_init(void); 99 100 #if defined(CONFIG_CPM) || defined(CONFIG_QUICC_ENGINE) 101 - unsigned long cpm_muram_alloc(unsigned long size, unsigned long align); 102 - int cpm_muram_free(unsigned long offset); 103 - unsigned long cpm_muram_alloc_fixed(unsigned long offset, unsigned long size); 104 void __iomem *cpm_muram_addr(unsigned long offset); 105 unsigned long cpm_muram_offset(void __iomem *addr); 106 dma_addr_t cpm_muram_dma(void __iomem *addr); 107 #else 108 - static inline unsigned long cpm_muram_alloc(unsigned long size, 109 - unsigned long align) 110 { 111 return -ENOSYS; 112 } 113 114 - static inline int cpm_muram_free(unsigned long offset) 115 { 116 - return -ENOSYS; 117 } 118 119 - static inline unsigned long cpm_muram_alloc_fixed(unsigned long offset, 120 - unsigned long size) 121 { 122 return -ENOSYS; 123 } ··· 240 #define qe_muram_offset cpm_muram_offset 241 #define qe_muram_dma cpm_muram_dma 242 243 - #define qe_setbits32(_addr, _v) iowrite32be(ioread32be(_addr) | (_v), (_addr)) 244 - #define qe_clrbits32(_addr, _v) iowrite32be(ioread32be(_addr) & ~(_v), (_addr)) 245 246 - #define qe_setbits16(_addr, _v) iowrite16be(ioread16be(_addr) | (_v), (_addr)) 247 - #define qe_clrbits16(_addr, _v) iowrite16be(ioread16be(_addr) & ~(_v), (_addr)) 248 249 - #define qe_setbits8(_addr, _v) iowrite8(ioread8(_addr) | (_v), (_addr)) 250 - #define qe_clrbits8(_addr, _v) iowrite8(ioread8(_addr) & ~(_v), (_addr)) 251 252 - #define qe_clrsetbits32(addr, clear, set) \ 253 - iowrite32be((ioread32be(addr) & ~(clear)) | (set), (addr)) 254 - #define qe_clrsetbits16(addr, clear, set) \ 255 - iowrite16be((ioread16be(addr) & ~(clear)) | (set), (addr)) 256 - #define qe_clrsetbits8(addr, clear, set) \ 257 - iowrite8((ioread8(addr) & ~(clear)) | (set), (addr)) 258 259 /* Structure that defines QE firmware binary files. 260 *
··· 17 #include <linux/spinlock.h> 18 #include <linux/errno.h> 19 #include <linux/err.h> 20 + #include <soc/fsl/cpm.h> 21 #include <soc/fsl/qe/immap_qe.h> 22 #include <linux/of.h> 23 #include <linux/of_address.h> ··· 98 int cpm_muram_init(void); 99 100 #if defined(CONFIG_CPM) || defined(CONFIG_QUICC_ENGINE) 101 + s32 cpm_muram_alloc(unsigned long size, unsigned long align); 102 + void cpm_muram_free(s32 offset); 103 + s32 cpm_muram_alloc_fixed(unsigned long offset, unsigned long size); 104 void __iomem *cpm_muram_addr(unsigned long offset); 105 unsigned long cpm_muram_offset(void __iomem *addr); 106 dma_addr_t cpm_muram_dma(void __iomem *addr); 107 #else 108 + static inline s32 cpm_muram_alloc(unsigned long size, 109 + unsigned long align) 110 { 111 return -ENOSYS; 112 } 113 114 + static inline void cpm_muram_free(s32 offset) 115 { 116 } 117 118 + static inline s32 cpm_muram_alloc_fixed(unsigned long offset, 119 + unsigned long size) 120 { 121 return -ENOSYS; 122 } ··· 241 #define qe_muram_offset cpm_muram_offset 242 #define qe_muram_dma cpm_muram_dma 243 244 + #ifdef CONFIG_PPC32 245 + #define qe_iowrite8(val, addr) out_8(addr, val) 246 + #define qe_iowrite16be(val, addr) out_be16(addr, val) 247 + #define qe_iowrite32be(val, addr) out_be32(addr, val) 248 + #define qe_ioread8(addr) in_8(addr) 249 + #define qe_ioread16be(addr) in_be16(addr) 250 + #define qe_ioread32be(addr) in_be32(addr) 251 + #else 252 + #define qe_iowrite8(val, addr) iowrite8(val, addr) 253 + #define qe_iowrite16be(val, addr) iowrite16be(val, addr) 254 + #define qe_iowrite32be(val, addr) iowrite32be(val, addr) 255 + #define qe_ioread8(addr) ioread8(addr) 256 + #define qe_ioread16be(addr) ioread16be(addr) 257 + #define qe_ioread32be(addr) ioread32be(addr) 258 + #endif 259 260 + #define qe_setbits_be32(_addr, _v) qe_iowrite32be(qe_ioread32be(_addr) | (_v), (_addr)) 261 + #define qe_clrbits_be32(_addr, _v) qe_iowrite32be(qe_ioread32be(_addr) & ~(_v), (_addr)) 262 263 + #define qe_setbits_be16(_addr, _v) qe_iowrite16be(qe_ioread16be(_addr) | (_v), (_addr)) 264 + #define qe_clrbits_be16(_addr, _v) qe_iowrite16be(qe_ioread16be(_addr) & ~(_v), (_addr)) 265 266 + #define qe_setbits_8(_addr, _v) qe_iowrite8(qe_ioread8(_addr) | (_v), (_addr)) 267 + #define qe_clrbits_8(_addr, _v) qe_iowrite8(qe_ioread8(_addr) & ~(_v), (_addr)) 268 + 269 + #define qe_clrsetbits_be32(addr, clear, set) \ 270 + qe_iowrite32be((qe_ioread32be(addr) & ~(clear)) | (set), (addr)) 271 + #define qe_clrsetbits_be16(addr, clear, set) \ 272 + qe_iowrite16be((qe_ioread16be(addr) & ~(clear)) | (set), (addr)) 273 + #define qe_clrsetbits_8(addr, clear, set) \ 274 + qe_iowrite8((qe_ioread8(addr) & ~(clear)) | (set), (addr)) 275 276 /* Structure that defines QE firmware binary files. 277 *
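These qe_io*/qe_*bits_be* wrappers are the core of the series: on PPC32 they resolve to the native big-endian MMIO helpers (out_be16() and friends), elsewhere to the generic iowriteNbe()/ioreadNbe(), so one driver source builds on ARM, ARM64 and PPC alike. A sketch with assumed mask values:

#include <soc/fsl/qe/qe.h>

/* One portable read-modify-write; 0x00ff/0x0010 stand in for real
 * clear/set masks. */
static void example_set_mode(__be16 __iomem *mode_reg)
{
        qe_clrsetbits_be16(mode_reg, 0x00ff, 0x0010);
}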
-135
include/soc/fsl/qe/qe_ic.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0-or-later */ 2 - /* 3 - * Copyright (C) 2006 Freescale Semiconductor, Inc. All rights reserved. 4 - * 5 - * Authors: Shlomi Gridish <gridish@freescale.com> 6 - * Li Yang <leoli@freescale.com> 7 - * 8 - * Description: 9 - * QE IC external definitions and structure. 10 - */ 11 - #ifndef _ASM_POWERPC_QE_IC_H 12 - #define _ASM_POWERPC_QE_IC_H 13 - 14 - #include <linux/irq.h> 15 - 16 - struct device_node; 17 - struct qe_ic; 18 - 19 - #define NUM_OF_QE_IC_GROUPS 6 20 - 21 - /* Flags when we init the QE IC */ 22 - #define QE_IC_SPREADMODE_GRP_W 0x00000001 23 - #define QE_IC_SPREADMODE_GRP_X 0x00000002 24 - #define QE_IC_SPREADMODE_GRP_Y 0x00000004 25 - #define QE_IC_SPREADMODE_GRP_Z 0x00000008 26 - #define QE_IC_SPREADMODE_GRP_RISCA 0x00000010 27 - #define QE_IC_SPREADMODE_GRP_RISCB 0x00000020 28 - 29 - #define QE_IC_LOW_SIGNAL 0x00000100 30 - #define QE_IC_HIGH_SIGNAL 0x00000200 31 - 32 - #define QE_IC_GRP_W_PRI0_DEST_SIGNAL_HIGH 0x00001000 33 - #define QE_IC_GRP_W_PRI1_DEST_SIGNAL_HIGH 0x00002000 34 - #define QE_IC_GRP_X_PRI0_DEST_SIGNAL_HIGH 0x00004000 35 - #define QE_IC_GRP_X_PRI1_DEST_SIGNAL_HIGH 0x00008000 36 - #define QE_IC_GRP_Y_PRI0_DEST_SIGNAL_HIGH 0x00010000 37 - #define QE_IC_GRP_Y_PRI1_DEST_SIGNAL_HIGH 0x00020000 38 - #define QE_IC_GRP_Z_PRI0_DEST_SIGNAL_HIGH 0x00040000 39 - #define QE_IC_GRP_Z_PRI1_DEST_SIGNAL_HIGH 0x00080000 40 - #define QE_IC_GRP_RISCA_PRI0_DEST_SIGNAL_HIGH 0x00100000 41 - #define QE_IC_GRP_RISCA_PRI1_DEST_SIGNAL_HIGH 0x00200000 42 - #define QE_IC_GRP_RISCB_PRI0_DEST_SIGNAL_HIGH 0x00400000 43 - #define QE_IC_GRP_RISCB_PRI1_DEST_SIGNAL_HIGH 0x00800000 44 - #define QE_IC_GRP_W_DEST_SIGNAL_SHIFT (12) 45 - 46 - /* QE interrupt sources groups */ 47 - enum qe_ic_grp_id { 48 - QE_IC_GRP_W = 0, /* QE interrupt controller group W */ 49 - QE_IC_GRP_X, /* QE interrupt controller group X */ 50 - QE_IC_GRP_Y, /* QE interrupt controller group Y */ 51 - QE_IC_GRP_Z, /* QE interrupt controller group Z */ 52 - QE_IC_GRP_RISCA, /* QE interrupt controller RISC group A */ 53 - QE_IC_GRP_RISCB /* QE interrupt controller RISC group B */ 54 - }; 55 - 56 - #ifdef CONFIG_QUICC_ENGINE 57 - void qe_ic_init(struct device_node *node, unsigned int flags, 58 - void (*low_handler)(struct irq_desc *desc), 59 - void (*high_handler)(struct irq_desc *desc)); 60 - unsigned int qe_ic_get_low_irq(struct qe_ic *qe_ic); 61 - unsigned int qe_ic_get_high_irq(struct qe_ic *qe_ic); 62 - #else 63 - static inline void qe_ic_init(struct device_node *node, unsigned int flags, 64 - void (*low_handler)(struct irq_desc *desc), 65 - void (*high_handler)(struct irq_desc *desc)) 66 - {} 67 - static inline unsigned int qe_ic_get_low_irq(struct qe_ic *qe_ic) 68 - { return 0; } 69 - static inline unsigned int qe_ic_get_high_irq(struct qe_ic *qe_ic) 70 - { return 0; } 71 - #endif /* CONFIG_QUICC_ENGINE */ 72 - 73 - void qe_ic_set_highest_priority(unsigned int virq, int high); 74 - int qe_ic_set_priority(unsigned int virq, unsigned int priority); 75 - int qe_ic_set_high_priority(unsigned int virq, unsigned int priority, int high); 76 - 77 - static inline void qe_ic_cascade_low_ipic(struct irq_desc *desc) 78 - { 79 - struct qe_ic *qe_ic = irq_desc_get_handler_data(desc); 80 - unsigned int cascade_irq = qe_ic_get_low_irq(qe_ic); 81 - 82 - if (cascade_irq != NO_IRQ) 83 - generic_handle_irq(cascade_irq); 84 - } 85 - 86 - static inline void qe_ic_cascade_high_ipic(struct irq_desc *desc) 87 - { 88 - struct qe_ic *qe_ic = irq_desc_get_handler_data(desc); 89 - unsigned int cascade_irq = 
qe_ic_get_high_irq(qe_ic); 90 - 91 - if (cascade_irq != NO_IRQ) 92 - generic_handle_irq(cascade_irq); 93 - } 94 - 95 - static inline void qe_ic_cascade_low_mpic(struct irq_desc *desc) 96 - { 97 - struct qe_ic *qe_ic = irq_desc_get_handler_data(desc); 98 - unsigned int cascade_irq = qe_ic_get_low_irq(qe_ic); 99 - struct irq_chip *chip = irq_desc_get_chip(desc); 100 - 101 - if (cascade_irq != NO_IRQ) 102 - generic_handle_irq(cascade_irq); 103 - 104 - chip->irq_eoi(&desc->irq_data); 105 - } 106 - 107 - static inline void qe_ic_cascade_high_mpic(struct irq_desc *desc) 108 - { 109 - struct qe_ic *qe_ic = irq_desc_get_handler_data(desc); 110 - unsigned int cascade_irq = qe_ic_get_high_irq(qe_ic); 111 - struct irq_chip *chip = irq_desc_get_chip(desc); 112 - 113 - if (cascade_irq != NO_IRQ) 114 - generic_handle_irq(cascade_irq); 115 - 116 - chip->irq_eoi(&desc->irq_data); 117 - } 118 - 119 - static inline void qe_ic_cascade_muxed_mpic(struct irq_desc *desc) 120 - { 121 - struct qe_ic *qe_ic = irq_desc_get_handler_data(desc); 122 - unsigned int cascade_irq; 123 - struct irq_chip *chip = irq_desc_get_chip(desc); 124 - 125 - cascade_irq = qe_ic_get_high_irq(qe_ic); 126 - if (cascade_irq == NO_IRQ) 127 - cascade_irq = qe_ic_get_low_irq(qe_ic); 128 - 129 - if (cascade_irq != NO_IRQ) 130 - generic_handle_irq(cascade_irq); 131 - 132 - chip->irq_eoi(&desc->irq_data); 133 - } 134 - 135 - #endif /* _ASM_POWERPC_QE_IC_H */
···
+2 -2
include/soc/fsl/qe/ucc_fast.h
··· 188 int stopped_tx; /* Whether channel has been stopped for Tx 189 (STOP_TX, etc.) */ 190 int stopped_rx; /* Whether channel has been stopped for Rx */ 191 - u32 ucc_fast_tx_virtual_fifo_base_offset;/* pointer to base of Tx 192 virtual fifo */ 193 - u32 ucc_fast_rx_virtual_fifo_base_offset;/* pointer to base of Rx 194 virtual fifo */ 195 #ifdef STATISTICS 196 u32 tx_frames; /* Transmitted frames counter. */
··· 188 int stopped_tx; /* Whether channel has been stopped for Tx 189 (STOP_TX, etc.) */ 190 int stopped_rx; /* Whether channel has been stopped for Rx */ 191 + s32 ucc_fast_tx_virtual_fifo_base_offset;/* pointer to base of Tx 192 virtual fifo */ 193 + s32 ucc_fast_rx_virtual_fifo_base_offset;/* pointer to base of Rx 194 virtual fifo */ 195 #ifdef STATISTICS 196 u32 tx_frames; /* Transmitted frames counter. */
+3 -3
include/soc/fsl/qe/ucc_slow.h
··· 185 struct ucc_slow_info *us_info; 186 struct ucc_slow __iomem *us_regs; /* Ptr to memory map of UCC regs */ 187 struct ucc_slow_pram *us_pram; /* a pointer to the parameter RAM */ 188 - u32 us_pram_offset; 189 int enabled_tx; /* Whether channel is enabled for Tx (ENT) */ 190 int enabled_rx; /* Whether channel is enabled for Rx (ENR) */ 191 int stopped_tx; /* Whether channel has been stopped for Tx ··· 194 struct list_head confQ; /* frames passed to chip waiting for tx */ 195 u32 first_tx_bd_mask; /* mask is used in Tx routine to save status 196 and length for first BD in a frame */ 197 - u32 tx_base_offset; /* first BD in Tx BD table offset (In MURAM) */ 198 - u32 rx_base_offset; /* first BD in Rx BD table offset (In MURAM) */ 199 struct qe_bd *confBd; /* next BD for confirm after Tx */ 200 struct qe_bd *tx_bd; /* next BD for new Tx request */ 201 struct qe_bd *rx_bd; /* next BD to collect after Rx */
··· 185 struct ucc_slow_info *us_info; 186 struct ucc_slow __iomem *us_regs; /* Ptr to memory map of UCC regs */ 187 struct ucc_slow_pram *us_pram; /* a pointer to the parameter RAM */ 188 + s32 us_pram_offset; 189 int enabled_tx; /* Whether channel is enabled for Tx (ENT) */ 190 int enabled_rx; /* Whether channel is enabled for Rx (ENR) */ 191 int stopped_tx; /* Whether channel has been stopped for Tx ··· 194 struct list_head confQ; /* frames passed to chip waiting for tx */ 195 u32 first_tx_bd_mask; /* mask is used in Tx routine to save status 196 and length for first BD in a frame */ 197 + s32 tx_base_offset; /* first BD in Tx BD table offset (In MURAM) */ 198 + s32 rx_base_offset; /* first BD in Rx BD table offset (In MURAM) */ 199 struct qe_bd *confBd; /* next BD for confirm after Tx */ 200 struct qe_bd *tx_bd; /* next BD for new Tx request */ 201 struct qe_bd *rx_bd; /* next BD to collect after Rx */
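The u32 to s32 switch for these offsets follows the new cpm_muram_alloc() signature: failure is now a plain negative errno instead of an IS_ERR_VALUE()-style huge unsigned value. A minimal sketch (alignment value assumed):

#include <soc/fsl/qe/qe.h>

static int example_alloc_bd_table(s32 *base_offset, unsigned long size)
{
        s32 offset = cpm_muram_alloc(size, 8);

        if (offset < 0)
                return offset;  /* negative errno from the allocator */

        *base_offset = offset;  /* e.g. stored in tx_base_offset */
        return 0;
}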
+90
include/trace/events/scmi.h
···
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + #undef TRACE_SYSTEM 3 + #define TRACE_SYSTEM scmi 4 + 5 + #if !defined(_TRACE_SCMI_H) || defined(TRACE_HEADER_MULTI_READ) 6 + #define _TRACE_SCMI_H 7 + 8 + #include <linux/tracepoint.h> 9 + 10 + TRACE_EVENT(scmi_xfer_begin, 11 + TP_PROTO(int transfer_id, u8 msg_id, u8 protocol_id, u16 seq, 12 + bool poll), 13 + TP_ARGS(transfer_id, msg_id, protocol_id, seq, poll), 14 + 15 + TP_STRUCT__entry( 16 + __field(int, transfer_id) 17 + __field(u8, msg_id) 18 + __field(u8, protocol_id) 19 + __field(u16, seq) 20 + __field(bool, poll) 21 + ), 22 + 23 + TP_fast_assign( 24 + __entry->transfer_id = transfer_id; 25 + __entry->msg_id = msg_id; 26 + __entry->protocol_id = protocol_id; 27 + __entry->seq = seq; 28 + __entry->poll = poll; 29 + ), 30 + 31 + TP_printk("transfer_id=%d msg_id=%u protocol_id=%u seq=%u poll=%u", 32 + __entry->transfer_id, __entry->msg_id, __entry->protocol_id, 33 + __entry->seq, __entry->poll) 34 + ); 35 + 36 + TRACE_EVENT(scmi_xfer_end, 37 + TP_PROTO(int transfer_id, u8 msg_id, u8 protocol_id, u16 seq, 38 + u32 status), 39 + TP_ARGS(transfer_id, msg_id, protocol_id, seq, status), 40 + 41 + TP_STRUCT__entry( 42 + __field(int, transfer_id) 43 + __field(u8, msg_id) 44 + __field(u8, protocol_id) 45 + __field(u16, seq) 46 + __field(u32, status) 47 + ), 48 + 49 + TP_fast_assign( 50 + __entry->transfer_id = transfer_id; 51 + __entry->msg_id = msg_id; 52 + __entry->protocol_id = protocol_id; 53 + __entry->seq = seq; 54 + __entry->status = status; 55 + ), 56 + 57 + TP_printk("transfer_id=%d msg_id=%u protocol_id=%u seq=%u status=%u", 58 + __entry->transfer_id, __entry->msg_id, __entry->protocol_id, 59 + __entry->seq, __entry->status) 60 + ); 61 + 62 + TRACE_EVENT(scmi_rx_done, 63 + TP_PROTO(int transfer_id, u8 msg_id, u8 protocol_id, u16 seq, 64 + u8 msg_type), 65 + TP_ARGS(transfer_id, msg_id, protocol_id, seq, msg_type), 66 + 67 + TP_STRUCT__entry( 68 + __field(int, transfer_id) 69 + __field(u8, msg_id) 70 + __field(u8, protocol_id) 71 + __field(u16, seq) 72 + __field(u8, msg_type) 73 + ), 74 + 75 + TP_fast_assign( 76 + __entry->transfer_id = transfer_id; 77 + __entry->msg_id = msg_id; 78 + __entry->protocol_id = protocol_id; 79 + __entry->seq = seq; 80 + __entry->msg_type = msg_type; 81 + ), 82 + 83 + TP_printk("transfer_id=%d msg_id=%u protocol_id=%u seq=%u msg_type=%u", 84 + __entry->transfer_id, __entry->msg_id, __entry->protocol_id, 85 + __entry->seq, __entry->msg_type) 86 + ); 87 + #endif /* _TRACE_SCMI_H */ 88 + 89 + /* This part must be outside protection */ 90 + #include <trace/define_trace.h>
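These tracepoints bracket each SCMI transfer; they are typically enabled from tracefs (events/scmi/) and, in the transport code, fired through the generated hooks with the transfer header fields. A hypothetical wrapper showing the kernel-side call:

#include <linux/types.h>
#include <trace/events/scmi.h>

static void example_mark_begin(int transfer_id, u8 msg_id,
                               u8 protocol_id, u16 seq, bool poll)
{
        trace_scmi_xfer_begin(transfer_id, msg_id, protocol_id, seq, poll);
}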