Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'armsoc-drivers' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc

Pull ARM SoC driver updates from Kevin Hilman:
"Some of these are for drivers/soc, where we're now putting
SoC-specific drivers these days. Some are for other driver subsystems
where we have received acks from the appropriate maintainers.

Some highlights:

- simple-mfd: document DT bindings and misc updates
- migrate mach-berlin to simple-mfd for clock, pinctrl and reset
- memory: support for Tegra132 SoC
- memory: introduce tegra EMC driver for scaling memory frequency
- misc. updates for ARM CCI and CCN busses"

* tag 'armsoc-drivers' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc: (48 commits)
drivers: soc: sunxi: Introduce SoC driver to map SRAMs
arm-cci: Add aliases for PMU events
arm-cci: Add CCI-500 PMU support
arm-cci: Sanitise CCI400 PMU driver specific code
arm-cci: Abstract handling for CCI events
arm-cci: Abstract out the PMU counter details
arm-cci: Cleanup PMU driver code
arm-cci: Do not enable CCI-400 PMU by default
firmware: qcom: scm: Add HDCP Support
ARM: berlin: add an ADC node for the BG2Q
ARM: berlin: remove useless chip and system ctrl compatibles
clk: berlin: drop direct of_iomap of nodes reg property
ARM: berlin: move BG2Q clock node
ARM: berlin: move BG2CD clock node
ARM: berlin: move BG2 clock node
clk: berlin: prepare simple-mfd conversion
pinctrl: berlin: drop SoC stub provided regmap
ARM: berlin: move pinctrl to simple-mfd nodes
pinctrl: berlin: prepare to use regmap provided by syscon
reset: berlin: drop arch_initcall initialization
...

+4847 -1070
+10 -7
Documentation/arm/CCN.txt
···
     Cycle counter is described by a "type" value 0xff and does
     not require any other settings.
 
+    The driver also provides a "cpumask" sysfs attribute, which contains
+    a single CPU ID, of the processor which will be used to handle all
+    the CCN PMU events. It is recommended that the user space tools
+    request the events on this processor (if not, the perf_event->cpu value
+    will be overwritten anyway). In case of this processor being offlined,
+    the events are migrated to another one and the attribute is updated.
+
     Example of perf tool use:
 
     / # perf list | grep ccn
       ccn/cycles/                                    [Kernel PMU event]
     <...>
-      ccn/xp_valid_flit/                             [Kernel PMU event]
+      ccn/xp_valid_flit,xp=?,port=?,vc=?,dir=?/      [Kernel PMU event]
     <...>
 
-    / # perf stat -C 0 -e ccn/cycles/,ccn/xp_valid_flit,xp=1,port=0,vc=1,dir=1/ \
+    / # perf stat -a -e ccn/cycles/,ccn/xp_valid_flit,xp=1,port=0,vc=1,dir=1/ \
            sleep 1
 
     The driver does not support sampling, therefore "perf record" will
-    not work. Also notice that only single cpu is being selected
-    ("-C 0") - this is because perf framework does not support
-    "non-CPU related" counters (yet?) so system-wide session ("-a")
-    would try (and in most cases fail) to set up the same event
-    per each CPU.
+    not work. Per-task (without "-a") perf sessions are not supported.
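As context for the "cpumask" attribute described in the hunk above: per-PMU attributes live under the standard perf sysfs layout, so a user-space tool can read the advertised CPU before opening events. The helper names below are illustrative, not part of the patch; only the sysfs path layout is taken from the standard perf ABI.

```python
def parse_ccn_cpumask(text: str) -> int:
    """Parse the contents of the PMU's "cpumask" attribute.

    For the CCN PMU the attribute holds a single CPU ID (per the
    documentation above), so a plain int conversion suffices.
    """
    return int(text.strip())


def ccn_cpumask_path(pmu: str = "ccn") -> str:
    # Standard location for per-PMU attributes in sysfs; the PMU name
    # "ccn" is an assumption and may differ on a given system.
    return "/sys/bus/event_source/devices/%s/cpumask" % pmu
```

On real hardware one would read `ccn_cpumask_path()` and pass the result to `perf stat -C`; here only the parsing is shown.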
+3 -1
Documentation/devicetree/bindings/arm/cci.txt
···
 	- compatible
 		Usage: required
 		Value type: <string>
-		Definition: must be set to
+		Definition: must contain one of the following:
 			    "arm,cci-400"
+			    "arm,cci-500"
 
 	- reg
 		Usage: required
···
 			 "arm,cci-400-pmu,r1"
 			 "arm,cci-400-pmu"  - DEPRECATED, permitted only where OS has
 					      secure acces to CCI registers
+			 "arm,cci-500-pmu,r0"
 	- reg:
 		Usage: required
 		Value type: Integer cells. A register entry, expressed
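A hypothetical node sketch illustrating the extended binding above. The addresses, register sizes, and interrupt specifiers are made up for illustration; consult the SoC's memory map and the full binding document for real values.

```dts
cci@2c090000 {
	compatible = "arm,cci-500";
	#address-cells = <1>;
	#size-cells = <1>;
	reg = <0x2c090000 0x1000>;
	ranges = <0x0 0x2c090000 0x90000>;

	pmu@10000 {
		compatible = "arm,cci-500-pmu,r0";
		reg = <0x10000 0x80000>;
		interrupts = <0 101 4>, <0 102 4>;
	};
};
```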
+82 -2
Documentation/devicetree/bindings/memory-controllers/nvidia,tegra-mc.txt
···
 NVIDIA Tegra Memory Controller device tree bindings
 ===================================================
 
+memory-controller node
+----------------------
+
 Required properties:
 - compatible: Should be "nvidia,tegra<chip>-mc"
 - reg: Physical base address and length of the controller's registers.
···
 This device implements an IOMMU that complies with the generic IOMMU binding.
 See ../iommu/iommu.txt for details.
 
-Example:
---------
+emc-timings subnode
+-------------------
 
+The node should contain a "emc-timings" subnode for each supported RAM type
+(see field RAM_CODE in register PMC_STRAPPING_OPT_A).
+
+Required properties for "emc-timings" nodes :
+- nvidia,ram-code : Should contain the value of RAM_CODE this timing set is
+  used for.
+
+timing subnode
+--------------
+
+Each "emc-timings" node should contain a subnode for every supported EMC
+clock rate.
+
+Required properties for timing nodes :
+- clock-frequency : Should contain the memory clock rate in Hz.
+- nvidia,emem-configuration : Values to be written to the EMEM register block.
+  For the Tegra124 SoC (see section "15.6.1 MC Registers" in the TRM), these
+  are the registers whose values need to be specified, according to the board
+  documentation:
+
+	MC_EMEM_ARB_CFG
+	MC_EMEM_ARB_OUTSTANDING_REQ
+	MC_EMEM_ARB_TIMING_RCD
+	MC_EMEM_ARB_TIMING_RP
+	MC_EMEM_ARB_TIMING_RC
+	MC_EMEM_ARB_TIMING_RAS
+	MC_EMEM_ARB_TIMING_FAW
+	MC_EMEM_ARB_TIMING_RRD
+	MC_EMEM_ARB_TIMING_RAP2PRE
+	MC_EMEM_ARB_TIMING_WAP2PRE
+	MC_EMEM_ARB_TIMING_R2R
+	MC_EMEM_ARB_TIMING_W2W
+	MC_EMEM_ARB_TIMING_R2W
+	MC_EMEM_ARB_TIMING_W2R
+	MC_EMEM_ARB_DA_TURNS
+	MC_EMEM_ARB_DA_COVERS
+	MC_EMEM_ARB_MISC0
+	MC_EMEM_ARB_MISC1
+	MC_EMEM_ARB_RING1_THROTTLE
+
+Example SoC include file:
+
+/ {
 	mc: memory-controller@0,70019000 {
 		compatible = "nvidia,tegra124-mc";
 		reg = <0x0 0x70019000 0x0 0x1000>;
···
 		...
 		iommus = <&mc TEGRA_SWGROUP_SDMMC1A>;
 	};
+};
+
+Example board file:
+
+/ {
+	memory-controller@0,70019000 {
+		emc-timings-3 {
+			nvidia,ram-code = <3>;
+
+			timing-12750000 {
+				clock-frequency = <12750000>;
+
+				nvidia,emem-configuration = <
+					0x40040001 /* MC_EMEM_ARB_CFG */
+					0x8000000a /* MC_EMEM_ARB_OUTSTANDING_REQ */
+					0x00000001 /* MC_EMEM_ARB_TIMING_RCD */
+					0x00000001 /* MC_EMEM_ARB_TIMING_RP */
+					0x00000002 /* MC_EMEM_ARB_TIMING_RC */
+					0x00000000 /* MC_EMEM_ARB_TIMING_RAS */
+					0x00000002 /* MC_EMEM_ARB_TIMING_FAW */
+					0x00000001 /* MC_EMEM_ARB_TIMING_RRD */
+					0x00000002 /* MC_EMEM_ARB_TIMING_RAP2PRE */
+					0x00000008 /* MC_EMEM_ARB_TIMING_WAP2PRE */
+					0x00000003 /* MC_EMEM_ARB_TIMING_R2R */
+					0x00000002 /* MC_EMEM_ARB_TIMING_W2W */
+					0x00000003 /* MC_EMEM_ARB_TIMING_R2W */
+					0x00000006 /* MC_EMEM_ARB_TIMING_W2R */
+					0x06030203 /* MC_EMEM_ARB_DA_TURNS */
+					0x000a0402 /* MC_EMEM_ARB_DA_COVERS */
+					0x77e30303 /* MC_EMEM_ARB_MISC0 */
+					0x70000f03 /* MC_EMEM_ARB_MISC1 */
+					0x001f0000 /* MC_EMEM_ARB_RING1_THROTTLE */
+				>;
+			};
+		};
+	};
+};
+374
Documentation/devicetree/bindings/memory-controllers/tegra-emc.txt
··· 1 + NVIDIA Tegra124 SoC EMC (external memory controller) 2 + ==================================================== 3 + 4 + Required properties : 5 + - compatible : Should be "nvidia,tegra124-emc". 6 + - reg : physical base address and length of the controller's registers. 7 + - nvidia,memory-controller : phandle of the MC driver. 8 + 9 + The node should contain a "emc-timings" subnode for each supported RAM type 10 + (see field RAM_CODE in register PMC_STRAPPING_OPT_A), with its unit address 11 + being its RAM_CODE. 12 + 13 + Required properties for "emc-timings" nodes : 14 + - nvidia,ram-code : Should contain the value of RAM_CODE this timing set is 15 + used for. 16 + 17 + Each "emc-timings" node should contain a "timing" subnode for every supported 18 + EMC clock rate. The "timing" subnodes should have the clock rate in Hz as 19 + their unit address. 20 + 21 + Required properties for "timing" nodes : 22 + - clock-frequency : Should contain the memory clock rate in Hz. 23 + - The following properties contain EMC timing characterization values 24 + (specified in the board documentation) : 25 + - nvidia,emc-auto-cal-config : EMC_AUTO_CAL_CONFIG 26 + - nvidia,emc-auto-cal-config2 : EMC_AUTO_CAL_CONFIG2 27 + - nvidia,emc-auto-cal-config3 : EMC_AUTO_CAL_CONFIG3 28 + - nvidia,emc-auto-cal-interval : EMC_AUTO_CAL_INTERVAL 29 + - nvidia,emc-bgbias-ctl0 : EMC_BGBIAS_CTL0 30 + - nvidia,emc-cfg : EMC_CFG 31 + - nvidia,emc-cfg-2 : EMC_CFG_2 32 + - nvidia,emc-ctt-term-ctrl : EMC_CTT_TERM_CTRL 33 + - nvidia,emc-mode-1 : Mode Register 1 34 + - nvidia,emc-mode-2 : Mode Register 2 35 + - nvidia,emc-mode-4 : Mode Register 4 36 + - nvidia,emc-mode-reset : Mode Register 0 37 + - nvidia,emc-mrs-wait-cnt : EMC_MRS_WAIT_CNT 38 + - nvidia,emc-sel-dpd-ctrl : EMC_SEL_DPD_CTRL 39 + - nvidia,emc-xm2dqspadctrl2 : EMC_XM2DQSPADCTRL2 40 + - nvidia,emc-zcal-cnt-long : EMC_ZCAL_WAIT_CNT after clock change 41 + - nvidia,emc-zcal-interval : EMC_ZCAL_INTERVAL 42 + - nvidia,emc-configuration : 
EMC timing characterization data. These are the 43 + registers (see section "15.6.2 EMC Registers" in the TRM) whose values need to 44 + be specified, according to the board documentation: 45 + 46 + EMC_RC 47 + EMC_RFC 48 + EMC_RFC_SLR 49 + EMC_RAS 50 + EMC_RP 51 + EMC_R2W 52 + EMC_W2R 53 + EMC_R2P 54 + EMC_W2P 55 + EMC_RD_RCD 56 + EMC_WR_RCD 57 + EMC_RRD 58 + EMC_REXT 59 + EMC_WEXT 60 + EMC_WDV 61 + EMC_WDV_MASK 62 + EMC_QUSE 63 + EMC_QUSE_WIDTH 64 + EMC_IBDLY 65 + EMC_EINPUT 66 + EMC_EINPUT_DURATION 67 + EMC_PUTERM_EXTRA 68 + EMC_PUTERM_WIDTH 69 + EMC_PUTERM_ADJ 70 + EMC_CDB_CNTL_1 71 + EMC_CDB_CNTL_2 72 + EMC_CDB_CNTL_3 73 + EMC_QRST 74 + EMC_QSAFE 75 + EMC_RDV 76 + EMC_RDV_MASK 77 + EMC_REFRESH 78 + EMC_BURST_REFRESH_NUM 79 + EMC_PRE_REFRESH_REQ_CNT 80 + EMC_PDEX2WR 81 + EMC_PDEX2RD 82 + EMC_PCHG2PDEN 83 + EMC_ACT2PDEN 84 + EMC_AR2PDEN 85 + EMC_RW2PDEN 86 + EMC_TXSR 87 + EMC_TXSRDLL 88 + EMC_TCKE 89 + EMC_TCKESR 90 + EMC_TPD 91 + EMC_TFAW 92 + EMC_TRPAB 93 + EMC_TCLKSTABLE 94 + EMC_TCLKSTOP 95 + EMC_TREFBW 96 + EMC_FBIO_CFG6 97 + EMC_ODT_WRITE 98 + EMC_ODT_READ 99 + EMC_FBIO_CFG5 100 + EMC_CFG_DIG_DLL 101 + EMC_CFG_DIG_DLL_PERIOD 102 + EMC_DLL_XFORM_DQS0 103 + EMC_DLL_XFORM_DQS1 104 + EMC_DLL_XFORM_DQS2 105 + EMC_DLL_XFORM_DQS3 106 + EMC_DLL_XFORM_DQS4 107 + EMC_DLL_XFORM_DQS5 108 + EMC_DLL_XFORM_DQS6 109 + EMC_DLL_XFORM_DQS7 110 + EMC_DLL_XFORM_DQS8 111 + EMC_DLL_XFORM_DQS9 112 + EMC_DLL_XFORM_DQS10 113 + EMC_DLL_XFORM_DQS11 114 + EMC_DLL_XFORM_DQS12 115 + EMC_DLL_XFORM_DQS13 116 + EMC_DLL_XFORM_DQS14 117 + EMC_DLL_XFORM_DQS15 118 + EMC_DLL_XFORM_QUSE0 119 + EMC_DLL_XFORM_QUSE1 120 + EMC_DLL_XFORM_QUSE2 121 + EMC_DLL_XFORM_QUSE3 122 + EMC_DLL_XFORM_QUSE4 123 + EMC_DLL_XFORM_QUSE5 124 + EMC_DLL_XFORM_QUSE6 125 + EMC_DLL_XFORM_QUSE7 126 + EMC_DLL_XFORM_ADDR0 127 + EMC_DLL_XFORM_ADDR1 128 + EMC_DLL_XFORM_ADDR2 129 + EMC_DLL_XFORM_ADDR3 130 + EMC_DLL_XFORM_ADDR4 131 + EMC_DLL_XFORM_ADDR5 132 + EMC_DLL_XFORM_QUSE8 133 + EMC_DLL_XFORM_QUSE9 134 + 
EMC_DLL_XFORM_QUSE10 135 + EMC_DLL_XFORM_QUSE11 136 + EMC_DLL_XFORM_QUSE12 137 + EMC_DLL_XFORM_QUSE13 138 + EMC_DLL_XFORM_QUSE14 139 + EMC_DLL_XFORM_QUSE15 140 + EMC_DLI_TRIM_TXDQS0 141 + EMC_DLI_TRIM_TXDQS1 142 + EMC_DLI_TRIM_TXDQS2 143 + EMC_DLI_TRIM_TXDQS3 144 + EMC_DLI_TRIM_TXDQS4 145 + EMC_DLI_TRIM_TXDQS5 146 + EMC_DLI_TRIM_TXDQS6 147 + EMC_DLI_TRIM_TXDQS7 148 + EMC_DLI_TRIM_TXDQS8 149 + EMC_DLI_TRIM_TXDQS9 150 + EMC_DLI_TRIM_TXDQS10 151 + EMC_DLI_TRIM_TXDQS11 152 + EMC_DLI_TRIM_TXDQS12 153 + EMC_DLI_TRIM_TXDQS13 154 + EMC_DLI_TRIM_TXDQS14 155 + EMC_DLI_TRIM_TXDQS15 156 + EMC_DLL_XFORM_DQ0 157 + EMC_DLL_XFORM_DQ1 158 + EMC_DLL_XFORM_DQ2 159 + EMC_DLL_XFORM_DQ3 160 + EMC_DLL_XFORM_DQ4 161 + EMC_DLL_XFORM_DQ5 162 + EMC_DLL_XFORM_DQ6 163 + EMC_DLL_XFORM_DQ7 164 + EMC_XM2CMDPADCTRL 165 + EMC_XM2CMDPADCTRL4 166 + EMC_XM2CMDPADCTRL5 167 + EMC_XM2DQPADCTRL2 168 + EMC_XM2DQPADCTRL3 169 + EMC_XM2CLKPADCTRL 170 + EMC_XM2CLKPADCTRL2 171 + EMC_XM2COMPPADCTRL 172 + EMC_XM2VTTGENPADCTRL 173 + EMC_XM2VTTGENPADCTRL2 174 + EMC_XM2VTTGENPADCTRL3 175 + EMC_XM2DQSPADCTRL3 176 + EMC_XM2DQSPADCTRL4 177 + EMC_XM2DQSPADCTRL5 178 + EMC_XM2DQSPADCTRL6 179 + EMC_DSR_VTTGEN_DRV 180 + EMC_TXDSRVTTGEN 181 + EMC_FBIO_SPARE 182 + EMC_ZCAL_WAIT_CNT 183 + EMC_MRS_WAIT_CNT2 184 + EMC_CTT 185 + EMC_CTT_DURATION 186 + EMC_CFG_PIPE 187 + EMC_DYN_SELF_REF_CONTROL 188 + EMC_QPOP 189 + 190 + Example SoC include file: 191 + 192 + / { 193 + emc@0,7001b000 { 194 + compatible = "nvidia,tegra124-emc"; 195 + reg = <0x0 0x7001b000 0x0 0x1000>; 196 + 197 + nvidia,memory-controller = <&mc>; 198 + }; 199 + }; 200 + 201 + Example board file: 202 + 203 + / { 204 + emc@0,7001b000 { 205 + emc-timings-3 { 206 + nvidia,ram-code = <3>; 207 + 208 + timing-12750000 { 209 + clock-frequency = <12750000>; 210 + 211 + nvidia,emc-zcal-cnt-long = <0x00000042>; 212 + nvidia,emc-auto-cal-interval = <0x001fffff>; 213 + nvidia,emc-ctt-term-ctrl = <0x00000802>; 214 + nvidia,emc-cfg = <0x73240000>; 215 + nvidia,emc-cfg-2 = 
<0x000008c5>; 216 + nvidia,emc-sel-dpd-ctrl = <0x00040128>; 217 + nvidia,emc-bgbias-ctl0 = <0x00000008>; 218 + nvidia,emc-auto-cal-config = <0xa1430000>; 219 + nvidia,emc-auto-cal-config2 = <0x00000000>; 220 + nvidia,emc-auto-cal-config3 = <0x00000000>; 221 + nvidia,emc-mode-reset = <0x80001221>; 222 + nvidia,emc-mode-1 = <0x80100003>; 223 + nvidia,emc-mode-2 = <0x80200008>; 224 + nvidia,emc-mode-4 = <0x00000000>; 225 + 226 + nvidia,emc-configuration = < 227 + 0x00000000 /* EMC_RC */ 228 + 0x00000003 /* EMC_RFC */ 229 + 0x00000000 /* EMC_RFC_SLR */ 230 + 0x00000000 /* EMC_RAS */ 231 + 0x00000000 /* EMC_RP */ 232 + 0x00000004 /* EMC_R2W */ 233 + 0x0000000a /* EMC_W2R */ 234 + 0x00000003 /* EMC_R2P */ 235 + 0x0000000b /* EMC_W2P */ 236 + 0x00000000 /* EMC_RD_RCD */ 237 + 0x00000000 /* EMC_WR_RCD */ 238 + 0x00000003 /* EMC_RRD */ 239 + 0x00000003 /* EMC_REXT */ 240 + 0x00000000 /* EMC_WEXT */ 241 + 0x00000006 /* EMC_WDV */ 242 + 0x00000006 /* EMC_WDV_MASK */ 243 + 0x00000006 /* EMC_QUSE */ 244 + 0x00000002 /* EMC_QUSE_WIDTH */ 245 + 0x00000000 /* EMC_IBDLY */ 246 + 0x00000005 /* EMC_EINPUT */ 247 + 0x00000005 /* EMC_EINPUT_DURATION */ 248 + 0x00010000 /* EMC_PUTERM_EXTRA */ 249 + 0x00000003 /* EMC_PUTERM_WIDTH */ 250 + 0x00000000 /* EMC_PUTERM_ADJ */ 251 + 0x00000000 /* EMC_CDB_CNTL_1 */ 252 + 0x00000000 /* EMC_CDB_CNTL_2 */ 253 + 0x00000000 /* EMC_CDB_CNTL_3 */ 254 + 0x00000004 /* EMC_QRST */ 255 + 0x0000000c /* EMC_QSAFE */ 256 + 0x0000000d /* EMC_RDV */ 257 + 0x0000000f /* EMC_RDV_MASK */ 258 + 0x00000060 /* EMC_REFRESH */ 259 + 0x00000000 /* EMC_BURST_REFRESH_NUM */ 260 + 0x00000018 /* EMC_PRE_REFRESH_REQ_CNT */ 261 + 0x00000002 /* EMC_PDEX2WR */ 262 + 0x00000002 /* EMC_PDEX2RD */ 263 + 0x00000001 /* EMC_PCHG2PDEN */ 264 + 0x00000000 /* EMC_ACT2PDEN */ 265 + 0x00000007 /* EMC_AR2PDEN */ 266 + 0x0000000f /* EMC_RW2PDEN */ 267 + 0x00000005 /* EMC_TXSR */ 268 + 0x00000005 /* EMC_TXSRDLL */ 269 + 0x00000004 /* EMC_TCKE */ 270 + 0x00000005 /* EMC_TCKESR */ 271 + 
0x00000004 /* EMC_TPD */ 272 + 0x00000000 /* EMC_TFAW */ 273 + 0x00000000 /* EMC_TRPAB */ 274 + 0x00000005 /* EMC_TCLKSTABLE */ 275 + 0x00000005 /* EMC_TCLKSTOP */ 276 + 0x00000064 /* EMC_TREFBW */ 277 + 0x00000000 /* EMC_FBIO_CFG6 */ 278 + 0x00000000 /* EMC_ODT_WRITE */ 279 + 0x00000000 /* EMC_ODT_READ */ 280 + 0x106aa298 /* EMC_FBIO_CFG5 */ 281 + 0x002c00a0 /* EMC_CFG_DIG_DLL */ 282 + 0x00008000 /* EMC_CFG_DIG_DLL_PERIOD */ 283 + 0x00064000 /* EMC_DLL_XFORM_DQS0 */ 284 + 0x00064000 /* EMC_DLL_XFORM_DQS1 */ 285 + 0x00064000 /* EMC_DLL_XFORM_DQS2 */ 286 + 0x00064000 /* EMC_DLL_XFORM_DQS3 */ 287 + 0x00064000 /* EMC_DLL_XFORM_DQS4 */ 288 + 0x00064000 /* EMC_DLL_XFORM_DQS5 */ 289 + 0x00064000 /* EMC_DLL_XFORM_DQS6 */ 290 + 0x00064000 /* EMC_DLL_XFORM_DQS7 */ 291 + 0x00064000 /* EMC_DLL_XFORM_DQS8 */ 292 + 0x00064000 /* EMC_DLL_XFORM_DQS9 */ 293 + 0x00064000 /* EMC_DLL_XFORM_DQS10 */ 294 + 0x00064000 /* EMC_DLL_XFORM_DQS11 */ 295 + 0x00064000 /* EMC_DLL_XFORM_DQS12 */ 296 + 0x00064000 /* EMC_DLL_XFORM_DQS13 */ 297 + 0x00064000 /* EMC_DLL_XFORM_DQS14 */ 298 + 0x00064000 /* EMC_DLL_XFORM_DQS15 */ 299 + 0x00000000 /* EMC_DLL_XFORM_QUSE0 */ 300 + 0x00000000 /* EMC_DLL_XFORM_QUSE1 */ 301 + 0x00000000 /* EMC_DLL_XFORM_QUSE2 */ 302 + 0x00000000 /* EMC_DLL_XFORM_QUSE3 */ 303 + 0x00000000 /* EMC_DLL_XFORM_QUSE4 */ 304 + 0x00000000 /* EMC_DLL_XFORM_QUSE5 */ 305 + 0x00000000 /* EMC_DLL_XFORM_QUSE6 */ 306 + 0x00000000 /* EMC_DLL_XFORM_QUSE7 */ 307 + 0x00000000 /* EMC_DLL_XFORM_ADDR0 */ 308 + 0x00000000 /* EMC_DLL_XFORM_ADDR1 */ 309 + 0x00000000 /* EMC_DLL_XFORM_ADDR2 */ 310 + 0x00000000 /* EMC_DLL_XFORM_ADDR3 */ 311 + 0x00000000 /* EMC_DLL_XFORM_ADDR4 */ 312 + 0x00000000 /* EMC_DLL_XFORM_ADDR5 */ 313 + 0x00000000 /* EMC_DLL_XFORM_QUSE8 */ 314 + 0x00000000 /* EMC_DLL_XFORM_QUSE9 */ 315 + 0x00000000 /* EMC_DLL_XFORM_QUSE10 */ 316 + 0x00000000 /* EMC_DLL_XFORM_QUSE11 */ 317 + 0x00000000 /* EMC_DLL_XFORM_QUSE12 */ 318 + 0x00000000 /* EMC_DLL_XFORM_QUSE13 */ 319 + 0x00000000 /* 
EMC_DLL_XFORM_QUSE14 */ 320 + 0x00000000 /* EMC_DLL_XFORM_QUSE15 */ 321 + 0x00000000 /* EMC_DLI_TRIM_TXDQS0 */ 322 + 0x00000000 /* EMC_DLI_TRIM_TXDQS1 */ 323 + 0x00000000 /* EMC_DLI_TRIM_TXDQS2 */ 324 + 0x00000000 /* EMC_DLI_TRIM_TXDQS3 */ 325 + 0x00000000 /* EMC_DLI_TRIM_TXDQS4 */ 326 + 0x00000000 /* EMC_DLI_TRIM_TXDQS5 */ 327 + 0x00000000 /* EMC_DLI_TRIM_TXDQS6 */ 328 + 0x00000000 /* EMC_DLI_TRIM_TXDQS7 */ 329 + 0x00000000 /* EMC_DLI_TRIM_TXDQS8 */ 330 + 0x00000000 /* EMC_DLI_TRIM_TXDQS9 */ 331 + 0x00000000 /* EMC_DLI_TRIM_TXDQS10 */ 332 + 0x00000000 /* EMC_DLI_TRIM_TXDQS11 */ 333 + 0x00000000 /* EMC_DLI_TRIM_TXDQS12 */ 334 + 0x00000000 /* EMC_DLI_TRIM_TXDQS13 */ 335 + 0x00000000 /* EMC_DLI_TRIM_TXDQS14 */ 336 + 0x00000000 /* EMC_DLI_TRIM_TXDQS15 */ 337 + 0x000fc000 /* EMC_DLL_XFORM_DQ0 */ 338 + 0x000fc000 /* EMC_DLL_XFORM_DQ1 */ 339 + 0x000fc000 /* EMC_DLL_XFORM_DQ2 */ 340 + 0x000fc000 /* EMC_DLL_XFORM_DQ3 */ 341 + 0x0000fc00 /* EMC_DLL_XFORM_DQ4 */ 342 + 0x0000fc00 /* EMC_DLL_XFORM_DQ5 */ 343 + 0x0000fc00 /* EMC_DLL_XFORM_DQ6 */ 344 + 0x0000fc00 /* EMC_DLL_XFORM_DQ7 */ 345 + 0x10000280 /* EMC_XM2CMDPADCTRL */ 346 + 0x00000000 /* EMC_XM2CMDPADCTRL4 */ 347 + 0x00111111 /* EMC_XM2CMDPADCTRL5 */ 348 + 0x00000000 /* EMC_XM2DQPADCTRL2 */ 349 + 0x00000000 /* EMC_XM2DQPADCTRL3 */ 350 + 0x77ffc081 /* EMC_XM2CLKPADCTRL */ 351 + 0x00000e0e /* EMC_XM2CLKPADCTRL2 */ 352 + 0x81f1f108 /* EMC_XM2COMPPADCTRL */ 353 + 0x07070004 /* EMC_XM2VTTGENPADCTRL */ 354 + 0x0000003f /* EMC_XM2VTTGENPADCTRL2 */ 355 + 0x016eeeee /* EMC_XM2VTTGENPADCTRL3 */ 356 + 0x51451400 /* EMC_XM2DQSPADCTRL3 */ 357 + 0x00514514 /* EMC_XM2DQSPADCTRL4 */ 358 + 0x00514514 /* EMC_XM2DQSPADCTRL5 */ 359 + 0x51451400 /* EMC_XM2DQSPADCTRL6 */ 360 + 0x0000003f /* EMC_DSR_VTTGEN_DRV */ 361 + 0x00000007 /* EMC_TXDSRVTTGEN */ 362 + 0x00000000 /* EMC_FBIO_SPARE */ 363 + 0x00000042 /* EMC_ZCAL_WAIT_CNT */ 364 + 0x000e000e /* EMC_MRS_WAIT_CNT2 */ 365 + 0x00000000 /* EMC_CTT */ 366 + 0x00000003 /* EMC_CTT_DURATION */ 367 
+ 0x0000f2f3 /* EMC_CFG_PIPE */ 368 + 0x800001c5 /* EMC_DYN_SELF_REF_CONTROL */ 369 + 0x0000000a /* EMC_QPOP */ 370 + >; 371 + }; 372 + }; 373 + }; 374 + };
+41
Documentation/devicetree/bindings/mfd/mfd.txt
+Multi-Function Devices (MFD)
+
+These devices comprise a nexus for heterogeneous hardware blocks containing
+more than one non-unique yet varying hardware functionality.
+
+A typical MFD can be:
+
+- A mixed signal ASIC on an external bus, sometimes a PMIC (Power Management
+  Integrated Circuit) that is manufactured in a lower technology node (rough
+  silicon) that handles analog drivers for things like audio amplifiers, LED
+  drivers, level shifters, PHY (physical interfaces to things like USB or
+  ethernet), regulators etc.
+
+- A range of memory registers containing "miscellaneous system registers" also
+  known as a system controller "syscon" or any other memory range containing a
+  mix of unrelated hardware devices.
+
+Optional properties:
+
+- compatible : "simple-mfd" - this signifies that the operating system should
+  consider all subnodes of the MFD device as separate devices akin to how
+  "simple-bus" indicates when to see subnodes as children for a simple
+  memory-mapped bus. For more complex devices, when the nexus driver has to
+  probe registers to figure out what child devices exist etc, this should not
+  be used. In the latter case the child devices will be determined by the
+  operating system.
+
+Example:
+
+foo@1000 {
+	compatible = "syscon", "simple-mfd";
+	reg = <0x01000 0x1000>;
+
+	led@08.0 {
+		compatible = "register-bit-led";
+		offset = <0x08>;
+		mask = <0x01>;
+		label = "myled";
+		default-state = "on";
+	};
+};
+2
Documentation/devicetree/bindings/misc/nvidia,tegra20-apbmisc.txt
···
   The second entry gives the physical address and length of the
   registers indicating the strapping options.
 
+Optional properties:
+- nvidia,long-ram-code: If present, the RAM code is long (4 bit). If not, short (2 bit).
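The property added above changes only the width of the RAM code read out of the strapping registers: 4 bits when `nvidia,long-ram-code` is present, 2 bits otherwise. A minimal sketch of that decode, where the field offset `RAM_CODE_SHIFT` is an assumption made for illustration (check the SoC's TRM), not something stated by the binding:

```python
# Assumed field offset of RAM_CODE within PMC_STRAPPING_OPT_A; for
# illustration only, not taken from the binding text above.
RAM_CODE_SHIFT = 4


def ram_code(strapping_opt_a: int, long_ram_code: bool) -> int:
    """Extract the RAM code from a raw strapping-register value.

    The width follows the "nvidia,long-ram-code" property: 4 bits when
    the property is present, 2 bits otherwise.
    """
    width = 4 if long_ram_code else 2
    mask = (1 << width) - 1
    return (strapping_opt_a >> RAM_CODE_SHIFT) & mask
```

With a strapping value of `0xf0`, the long form keeps all four field bits while the short form keeps only the low two.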
+72
Documentation/devicetree/bindings/soc/sunxi/sram.txt
+Allwinner SoC SRAM controllers
+------------------------------
+
+The SRAM controller found on most Allwinner devices is represented by
+a regular node for the SRAM controller itself, with sub-nodes
+representing the SRAM handled by the SRAM controller.
+
+Controller Node
+---------------
+
+Required properties:
+- compatible : "allwinner,sun4i-a10-sram-controller"
+- reg : sram controller register offset + length
+
+SRAM nodes
+----------
+
+Each SRAM is described using the mmio-sram bindings documented in
+Documentation/devicetree/bindings/misc/sram.txt
+
+Each SRAM will have SRAM sections that are going to be handled by the
+SRAM controller as subnodes. These sections are, once again, represented
+following the representation described in the mmio-sram binding.
+
+The valid section compatibles are:
+- allwinner,sun4i-a10-sram-a3-a4
+- allwinner,sun4i-a10-sram-d
+
+Devices using SRAM sections
+---------------------------
+
+Some devices need to request the SRAM controller to map an SRAM for
+their exclusive use.
+
+The relationship between such a device and an SRAM section is
+expressed through the allwinner,sram property, that will take a
+phandle and an argument.
+
+The valid values for this argument are:
+- 0: CPU
+- 1: Device
+
+Example
+-------
+sram-controller@01c00000 {
+	compatible = "allwinner,sun4i-a10-sram-controller";
+	reg = <0x01c00000 0x30>;
+	#address-cells = <1>;
+	#size-cells = <1>;
+	ranges;
+
+	sram_a: sram@00000000 {
+		compatible = "mmio-sram";
+		reg = <0x00000000 0xc000>;
+		#address-cells = <1>;
+		#size-cells = <1>;
+		ranges = <0 0x00000000 0xc000>;
+
+		emac_sram: sram-section@8000 {
+			compatible = "allwinner,sun4i-a10-sram-a3-a4";
+			reg = <0x8000 0x4000>;
+			status = "disabled";
+		};
+	};
+};
+
+emac: ethernet@01c0b000 {
+	compatible = "allwinner,sun4i-a10-emac";
+	...
+
+	allwinner,sram = <&emac_sram 1>;
+};
+1 -1
arch/arm/boot/dts/arm-realview-pb1176.dts
···
 		ranges;
 
 		syscon: syscon@10000000 {
-			compatible = "arm,realview-pb1176-syscon", "syscon";
+			compatible = "arm,realview-pb1176-syscon", "syscon", "simple-mfd";
 			reg = <0x10000000 0x1000>;
 
 			led@08.0 {
+51 -37
arch/arm/boot/dts/berlin2.dtsi
··· 84 84 sdhci0: sdhci@ab0000 { 85 85 compatible = "mrvl,pxav3-mmc"; 86 86 reg = <0xab0000 0x200>; 87 - clocks = <&chip CLKID_SDIO0XIN>, <&chip CLKID_SDIO0>; 87 + clocks = <&chip_clk CLKID_SDIO0XIN>, <&chip_clk CLKID_SDIO0>; 88 88 clock-names = "io", "core"; 89 89 interrupts = <GIC_SPI 17 IRQ_TYPE_LEVEL_HIGH>; 90 90 status = "disabled"; ··· 93 93 sdhci1: sdhci@ab0800 { 94 94 compatible = "mrvl,pxav3-mmc"; 95 95 reg = <0xab0800 0x200>; 96 - clocks = <&chip CLKID_SDIO1XIN>, <&chip CLKID_SDIO1>; 96 + clocks = <&chip_clk CLKID_SDIO1XIN>, <&chip_clk CLKID_SDIO1>; 97 97 clock-names = "io", "core"; 98 98 interrupts = <GIC_SPI 20 IRQ_TYPE_LEVEL_HIGH>; 99 99 status = "disabled"; ··· 103 103 compatible = "mrvl,pxav3-mmc"; 104 104 reg = <0xab1000 0x200>; 105 105 interrupts = <GIC_SPI 28 IRQ_TYPE_LEVEL_HIGH>; 106 - clocks = <&chip CLKID_NFC_ECC>, <&chip CLKID_NFC>; 106 + clocks = <&chip_clk CLKID_NFC_ECC>, <&chip_clk CLKID_NFC>; 107 107 clock-names = "io", "core"; 108 108 pinctrl-0 = <&emmc_pmux>; 109 109 pinctrl-names = "default"; ··· 133 133 compatible = "arm,cortex-a9-twd-timer"; 134 134 reg = <0xad0600 0x20>; 135 135 interrupts = <GIC_PPI 13 (GIC_CPU_MASK_SIMPLE(2) | IRQ_TYPE_LEVEL_HIGH)>; 136 - clocks = <&chip CLKID_TWD>; 136 + clocks = <&chip_clk CLKID_TWD>; 137 137 }; 138 138 139 139 eth1: ethernet@b90000 { 140 140 compatible = "marvell,pxa168-eth"; 141 141 reg = <0xb90000 0x10000>; 142 - clocks = <&chip CLKID_GETH1>; 142 + clocks = <&chip_clk CLKID_GETH1>; 143 143 interrupts = <GIC_SPI 24 IRQ_TYPE_LEVEL_HIGH>; 144 144 /* set by bootloader */ 145 145 local-mac-address = [00 00 00 00 00 00]; ··· 162 162 eth0: ethernet@e50000 { 163 163 compatible = "marvell,pxa168-eth"; 164 164 reg = <0xe50000 0x10000>; 165 - clocks = <&chip CLKID_GETH0>; 165 + clocks = <&chip_clk CLKID_GETH0>; 166 166 interrupts = <GIC_SPI 8 IRQ_TYPE_LEVEL_HIGH>; 167 167 /* set by bootloader */ 168 168 local-mac-address = [00 00 00 00 00 00]; ··· 261 261 compatible = "snps,dw-apb-timer"; 262 262 reg = 
<0x2c00 0x14>; 263 263 interrupts = <8>; 264 - clocks = <&chip CLKID_CFG>; 264 + clocks = <&chip_clk CLKID_CFG>; 265 265 clock-names = "timer"; 266 266 status = "okay"; 267 267 }; ··· 270 270 compatible = "snps,dw-apb-timer"; 271 271 reg = <0x2c14 0x14>; 272 272 interrupts = <9>; 273 - clocks = <&chip CLKID_CFG>; 273 + clocks = <&chip_clk CLKID_CFG>; 274 274 clock-names = "timer"; 275 275 status = "okay"; 276 276 }; ··· 279 279 compatible = "snps,dw-apb-timer"; 280 280 reg = <0x2c28 0x14>; 281 281 interrupts = <10>; 282 - clocks = <&chip CLKID_CFG>; 282 + clocks = <&chip_clk CLKID_CFG>; 283 283 clock-names = "timer"; 284 284 status = "disabled"; 285 285 }; ··· 288 288 compatible = "snps,dw-apb-timer"; 289 289 reg = <0x2c3c 0x14>; 290 290 interrupts = <11>; 291 - clocks = <&chip CLKID_CFG>; 291 + clocks = <&chip_clk CLKID_CFG>; 292 292 clock-names = "timer"; 293 293 status = "disabled"; 294 294 }; ··· 297 297 compatible = "snps,dw-apb-timer"; 298 298 reg = <0x2c50 0x14>; 299 299 interrupts = <12>; 300 - clocks = <&chip CLKID_CFG>; 300 + clocks = <&chip_clk CLKID_CFG>; 301 301 clock-names = "timer"; 302 302 status = "disabled"; 303 303 }; ··· 306 306 compatible = "snps,dw-apb-timer"; 307 307 reg = <0x2c64 0x14>; 308 308 interrupts = <13>; 309 - clocks = <&chip CLKID_CFG>; 309 + clocks = <&chip_clk CLKID_CFG>; 310 310 clock-names = "timer"; 311 311 status = "disabled"; 312 312 }; ··· 315 315 compatible = "snps,dw-apb-timer"; 316 316 reg = <0x2c78 0x14>; 317 317 interrupts = <14>; 318 - clocks = <&chip CLKID_CFG>; 318 + clocks = <&chip_clk CLKID_CFG>; 319 319 clock-names = "timer"; 320 320 status = "disabled"; 321 321 }; ··· 324 324 compatible = "snps,dw-apb-timer"; 325 325 reg = <0x2c8c 0x14>; 326 326 interrupts = <15>; 327 - clocks = <&chip CLKID_CFG>; 327 + clocks = <&chip_clk CLKID_CFG>; 328 328 clock-names = "timer"; 329 329 status = "disabled"; 330 330 }; ··· 343 343 compatible = "marvell,berlin2-ahci", "generic-ahci"; 344 344 reg = <0xe90000 0x1000>; 345 345 
interrupts = <GIC_SPI 7 IRQ_TYPE_LEVEL_HIGH>; 346 - clocks = <&chip CLKID_SATA>; 346 + clocks = <&chip_clk CLKID_SATA>; 347 347 #address-cells = <1>; 348 348 #size-cells = <0>; 349 349 ··· 363 363 sata_phy: phy@e900a0 { 364 364 compatible = "marvell,berlin2-sata-phy"; 365 365 reg = <0xe900a0 0x200>; 366 - clocks = <&chip CLKID_SATA>; 366 + clocks = <&chip_clk CLKID_SATA>; 367 367 #address-cells = <1>; 368 368 #size-cells = <0>; 369 369 #phy-cells = <1>; ··· 379 379 }; 380 380 381 381 chip: chip-control@ea0000 { 382 - compatible = "marvell,berlin2-chip-ctrl"; 383 - #clock-cells = <1>; 384 - #reset-cells = <2>; 382 + compatible = "simple-mfd", "syscon"; 385 383 reg = <0xea0000 0x400>; 386 - clocks = <&refclk>; 387 - clock-names = "refclk"; 388 384 389 - emmc_pmux: emmc-pmux { 390 - groups = "G26"; 391 - function = "emmc"; 385 + chip_clk: clock { 386 + compatible = "marvell,berlin2-clk"; 387 + #clock-cells = <1>; 388 + clocks = <&refclk>; 389 + clock-names = "refclk"; 390 + }; 391 + 392 + soc_pinctrl: pin-controller { 393 + compatible = "marvell,berlin2-soc-pinctrl"; 394 + 395 + emmc_pmux: emmc-pmux { 396 + groups = "G26"; 397 + function = "emmc"; 398 + }; 399 + }; 400 + 401 + chip_rst: reset { 402 + compatible = "marvell,berlin2-reset"; 403 + #reset-cells = <2>; 392 404 }; 393 405 }; 394 406 ··· 482 470 }; 483 471 484 472 sysctrl: system-controller@d000 { 485 - compatible = "marvell,berlin2-system-ctrl"; 473 + compatible = "simple-mfd", "syscon"; 486 474 reg = <0xd000 0x100>; 487 475 488 - uart0_pmux: uart0-pmux { 489 - groups = "GSM4"; 490 - function = "uart0"; 491 - }; 476 + sys_pinctrl: pin-controller { 477 + compatible = "marvell,berlin2-system-pinctrl"; 478 + uart0_pmux: uart0-pmux { 479 + groups = "GSM4"; 480 + function = "uart0"; 481 + }; 492 482 493 - uart1_pmux: uart1-pmux { 494 - groups = "GSM5"; 495 - function = "uart1"; 496 - }; 497 - 498 - uart2_pmux: uart2-pmux { 499 - groups = "GSM3"; 500 - function = "uart2"; 483 + uart1_pmux: uart1-pmux { 484 + 
groups = "GSM5"; 485 + function = "uart1"; 486 + }; 487 + uart2_pmux: uart2-pmux { 488 + groups = "GSM3"; 489 + function = "uart2"; 490 + }; 501 491 }; 502 492 }; 503 493
+41 -25
arch/arm/boot/dts/berlin2cd.dtsi
··· 81 81 sdhci0: sdhci@ab0000 { 82 82 compatible = "mrvl,pxav3-mmc"; 83 83 reg = <0xab0000 0x200>; 84 - clocks = <&chip CLKID_SDIO0XIN>, <&chip CLKID_SDIO0>; 84 + clocks = <&chip_clk CLKID_SDIO0XIN>, <&chip_clk CLKID_SDIO0>; 85 85 clock-names = "io", "core"; 86 86 interrupts = <GIC_SPI 17 IRQ_TYPE_LEVEL_HIGH>; 87 87 status = "disabled"; ··· 105 105 compatible = "arm,cortex-a9-twd-timer"; 106 106 reg = <0xad0600 0x20>; 107 107 interrupts = <GIC_PPI 13 (GIC_CPU_MASK_SIMPLE(1) | IRQ_TYPE_LEVEL_HIGH)>; 108 - clocks = <&chip CLKID_TWD>; 108 + clocks = <&chip_clk CLKID_TWD>; 109 109 }; 110 110 111 111 usb_phy0: usb-phy@b74000 { 112 112 compatible = "marvell,berlin2cd-usb-phy"; 113 113 reg = <0xb74000 0x128>; 114 114 #phy-cells = <0>; 115 - resets = <&chip 0x178 23>; 115 + resets = <&chip_rst 0x178 23>; 116 116 status = "disabled"; 117 117 }; 118 118 ··· 120 120 compatible = "marvell,berlin2cd-usb-phy"; 121 121 reg = <0xb78000 0x128>; 122 122 #phy-cells = <0>; 123 - resets = <&chip 0x178 24>; 123 + resets = <&chip_rst 0x178 24>; 124 124 status = "disabled"; 125 125 }; 126 126 127 127 eth1: ethernet@b90000 { 128 128 compatible = "marvell,pxa168-eth"; 129 129 reg = <0xb90000 0x10000>; 130 - clocks = <&chip CLKID_GETH1>; 130 + clocks = <&chip_clk CLKID_GETH1>; 131 131 interrupts = <GIC_SPI 24 IRQ_TYPE_LEVEL_HIGH>; 132 132 /* set by bootloader */ 133 133 local-mac-address = [00 00 00 00 00 00]; ··· 145 145 eth0: ethernet@e50000 { 146 146 compatible = "marvell,pxa168-eth"; 147 147 reg = <0xe50000 0x10000>; 148 - clocks = <&chip CLKID_GETH0>; 148 + clocks = <&chip_clk CLKID_GETH0>; 149 149 interrupts = <GIC_SPI 8 IRQ_TYPE_LEVEL_HIGH>; 150 150 /* set by bootloader */ 151 151 local-mac-address = [00 00 00 00 00 00]; ··· 244 244 compatible = "snps,dw-apb-timer"; 245 245 reg = <0x2c00 0x14>; 246 246 interrupts = <8>; 247 - clocks = <&chip CLKID_CFG>; 247 + clocks = <&chip_clk CLKID_CFG>; 248 248 clock-names = "timer"; 249 249 status = "okay"; 250 250 }; ··· 253 253 compatible = 
"snps,dw-apb-timer"; 254 254 reg = <0x2c14 0x14>; 255 255 interrupts = <9>; 256 - clocks = <&chip CLKID_CFG>; 256 + clocks = <&chip_clk CLKID_CFG>; 257 257 clock-names = "timer"; 258 258 status = "okay"; 259 259 }; ··· 262 262 compatible = "snps,dw-apb-timer"; 263 263 reg = <0x2c28 0x14>; 264 264 interrupts = <10>; 265 - clocks = <&chip CLKID_CFG>; 265 + clocks = <&chip_clk CLKID_CFG>; 266 266 clock-names = "timer"; 267 267 status = "disabled"; 268 268 }; ··· 271 271 compatible = "snps,dw-apb-timer"; 272 272 reg = <0x2c3c 0x14>; 273 273 interrupts = <11>; 274 - clocks = <&chip CLKID_CFG>; 274 + clocks = <&chip_clk CLKID_CFG>; 275 275 clock-names = "timer"; 276 276 status = "disabled"; 277 277 }; ··· 280 280 compatible = "snps,dw-apb-timer"; 281 281 reg = <0x2c50 0x14>; 282 282 interrupts = <12>; 283 - clocks = <&chip CLKID_CFG>; 283 + clocks = <&chip_clk CLKID_CFG>; 284 284 clock-names = "timer"; 285 285 status = "disabled"; 286 286 }; ··· 289 289 compatible = "snps,dw-apb-timer"; 290 290 reg = <0x2c64 0x14>; 291 291 interrupts = <13>; 292 - clocks = <&chip CLKID_CFG>; 292 + clocks = <&chip_clk CLKID_CFG>; 293 293 clock-names = "timer"; 294 294 status = "disabled"; 295 295 }; ··· 298 298 compatible = "snps,dw-apb-timer"; 299 299 reg = <0x2c78 0x14>; 300 300 interrupts = <14>; 301 - clocks = <&chip CLKID_CFG>; 301 + clocks = <&chip_clk CLKID_CFG>; 302 302 clock-names = "timer"; 303 303 status = "disabled"; 304 304 }; ··· 307 307 compatible = "snps,dw-apb-timer"; 308 308 reg = <0x2c8c 0x14>; 309 309 interrupts = <15>; 310 - clocks = <&chip CLKID_CFG>; 310 + clocks = <&chip_clk CLKID_CFG>; 311 311 clock-names = "timer"; 312 312 status = "disabled"; 313 313 }; ··· 323 323 }; 324 324 325 325 chip: chip-control@ea0000 { 326 - compatible = "marvell,berlin2cd-chip-ctrl"; 327 - #clock-cells = <1>; 328 - #reset-cells = <2>; 326 + compatible = "simple-mfd", "syscon"; 329 327 reg = <0xea0000 0x400>; 330 - clocks = <&refclk>; 331 - clock-names = "refclk"; 332 328 333 - 
uart0_pmux: uart0-pmux { 334 - groups = "G6"; 335 - function = "uart0"; 329 + chip_clk: clock { 330 + compatible = "marvell,berlin2-clk"; 331 + #clock-cells = <1>; 332 + clocks = <&refclk>; 333 + clock-names = "refclk"; 334 + }; 335 + 336 + soc_pinctrl: pin-controller { 337 + compatible = "marvell,berlin2cd-soc-pinctrl"; 338 + 339 + uart0_pmux: uart0-pmux { 340 + groups = "G6"; 341 + function = "uart0"; 342 + }; 343 + }; 344 + 345 + chip_rst: reset { 346 + compatible = "marvell,berlin2-reset"; 347 + #reset-cells = <2>; 336 348 }; 337 349 }; 338 350 ··· 352 340 compatible = "chipidea,usb2"; 353 341 reg = <0xed0000 0x200>; 354 342 interrupts = <GIC_SPI 11 IRQ_TYPE_LEVEL_HIGH>; 355 - clocks = <&chip CLKID_USB0>; 343 + clocks = <&chip_clk CLKID_USB0>; 356 344 phys = <&usb_phy0>; 357 345 phy-names = "usb-phy"; 358 346 status = "disabled"; ··· 362 350 compatible = "chipidea,usb2"; 363 351 reg = <0xee0000 0x200>; 364 352 interrupts = <GIC_SPI 12 IRQ_TYPE_LEVEL_HIGH>; 365 - clocks = <&chip CLKID_USB1>; 353 + clocks = <&chip_clk CLKID_USB1>; 366 354 phys = <&usb_phy1>; 367 355 phy-names = "usb-phy"; 368 356 status = "disabled"; ··· 429 417 }; 430 418 431 419 sysctrl: system-controller@d000 { 432 - compatible = "marvell,berlin2cd-system-ctrl"; 420 + compatible = "simple-mfd", "syscon"; 433 421 reg = <0xd000 0x100>; 422 + 423 + sys_pinctrl: pin-controller { 424 + compatible = "marvell,berlin2cd-system-pinctrl"; 425 + }; 434 426 }; 435 427 436 428 sic: interrupt-controller@e000 {
+73 -51
arch/arm/boot/dts/berlin2q.dtsi
··· 102 102 sdhci0: sdhci@ab0000 { 103 103 compatible = "mrvl,pxav3-mmc"; 104 104 reg = <0xab0000 0x200>; 105 - clocks = <&chip CLKID_SDIO1XIN>; 105 + clocks = <&chip_clk CLKID_SDIO1XIN>; 106 106 interrupts = <GIC_SPI 17 IRQ_TYPE_LEVEL_HIGH>; 107 107 status = "disabled"; 108 108 }; ··· 110 110 sdhci1: sdhci@ab0800 { 111 111 compatible = "mrvl,pxav3-mmc"; 112 112 reg = <0xab0800 0x200>; 113 - clocks = <&chip CLKID_SDIO1XIN>; 113 + clocks = <&chip_clk CLKID_SDIO1XIN>; 114 114 interrupts = <GIC_SPI 20 IRQ_TYPE_LEVEL_HIGH>; 115 115 status = "disabled"; 116 116 }; ··· 119 119 compatible = "mrvl,pxav3-mmc"; 120 120 reg = <0xab1000 0x200>; 121 121 interrupts = <GIC_SPI 28 IRQ_TYPE_LEVEL_HIGH>; 122 - clocks = <&chip CLKID_NFC_ECC>, <&chip CLKID_NFC>; 122 + clocks = <&chip_clk CLKID_NFC_ECC>, <&chip_clk CLKID_NFC>; 123 123 clock-names = "io", "core"; 124 124 status = "disabled"; 125 125 }; ··· 140 140 local-timer@ad0600 { 141 141 compatible = "arm,cortex-a9-twd-timer"; 142 142 reg = <0xad0600 0x20>; 143 - clocks = <&chip CLKID_TWD>; 143 + clocks = <&chip_clk CLKID_TWD>; 144 144 interrupts = <GIC_PPI 13 (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_LEVEL_HIGH)>; 145 145 }; 146 146 ··· 155 155 compatible = "marvell,berlin2-usb-phy"; 156 156 reg = <0xa2f400 0x128>; 157 157 #phy-cells = <0>; 158 - resets = <&chip 0x104 14>; 158 + resets = <&chip_rst 0x104 14>; 159 159 status = "disabled"; 160 160 }; 161 161 ··· 163 163 compatible = "chipidea,usb2"; 164 164 reg = <0xa30000 0x10000>; 165 165 interrupts = <GIC_SPI 52 IRQ_TYPE_LEVEL_HIGH>; 166 - clocks = <&chip CLKID_USB2>; 166 + clocks = <&chip_clk CLKID_USB2>; 167 167 phys = <&usb_phy2>; 168 168 phy-names = "usb-phy"; 169 169 status = "disabled"; ··· 173 173 compatible = "marvell,berlin2-usb-phy"; 174 174 reg = <0xb74000 0x128>; 175 175 #phy-cells = <0>; 176 - resets = <&chip 0x104 12>; 176 + resets = <&chip_rst 0x104 12>; 177 177 status = "disabled"; 178 178 }; 179 179 ··· 181 181 compatible = "marvell,berlin2-usb-phy"; 182 182 reg = 
<0xb78000 0x128>; 183 183 #phy-cells = <0>; 184 - resets = <&chip 0x104 13>; 184 + resets = <&chip_rst 0x104 13>; 185 185 status = "disabled"; 186 186 }; 187 187 188 188 eth0: ethernet@b90000 { 189 189 compatible = "marvell,pxa168-eth"; 190 190 reg = <0xb90000 0x10000>; 191 - clocks = <&chip CLKID_GETH0>; 191 + clocks = <&chip_clk CLKID_GETH0>; 192 192 interrupts = <GIC_SPI 24 IRQ_TYPE_LEVEL_HIGH>; 193 193 /* set by bootloader */ 194 194 local-mac-address = [00 00 00 00 00 00]; ··· 295 295 reg = <0x1400 0x100>; 296 296 interrupt-parent = <&aic>; 297 297 interrupts = <4>; 298 - clocks = <&chip CLKID_CFG>; 298 + clocks = <&chip_clk CLKID_CFG>; 299 299 pinctrl-0 = <&twsi0_pmux>; 300 300 pinctrl-names = "default"; 301 301 status = "disabled"; ··· 308 308 reg = <0x1800 0x100>; 309 309 interrupt-parent = <&aic>; 310 310 interrupts = <5>; 311 - clocks = <&chip CLKID_CFG>; 311 + clocks = <&chip_clk CLKID_CFG>; 312 312 pinctrl-0 = <&twsi1_pmux>; 313 313 pinctrl-names = "default"; 314 314 status = "disabled"; ··· 317 317 timer0: timer@2c00 { 318 318 compatible = "snps,dw-apb-timer"; 319 319 reg = <0x2c00 0x14>; 320 - clocks = <&chip CLKID_CFG>; 320 + clocks = <&chip_clk CLKID_CFG>; 321 321 clock-names = "timer"; 322 322 interrupts = <8>; 323 323 }; ··· 325 325 timer1: timer@2c14 { 326 326 compatible = "snps,dw-apb-timer"; 327 327 reg = <0x2c14 0x14>; 328 - clocks = <&chip CLKID_CFG>; 328 + clocks = <&chip_clk CLKID_CFG>; 329 329 clock-names = "timer"; 330 330 }; 331 331 332 332 timer2: timer@2c28 { 333 333 compatible = "snps,dw-apb-timer"; 334 334 reg = <0x2c28 0x14>; 335 - clocks = <&chip CLKID_CFG>; 335 + clocks = <&chip_clk CLKID_CFG>; 336 336 clock-names = "timer"; 337 337 status = "disabled"; 338 338 }; ··· 340 340 timer3: timer@2c3c { 341 341 compatible = "snps,dw-apb-timer"; 342 342 reg = <0x2c3c 0x14>; 343 - clocks = <&chip CLKID_CFG>; 343 + clocks = <&chip_clk CLKID_CFG>; 344 344 clock-names = "timer"; 345 345 status = "disabled"; 346 346 }; ··· 348 348 timer4: 
timer@2c50 { 349 349 compatible = "snps,dw-apb-timer"; 350 350 reg = <0x2c50 0x14>; 351 - clocks = <&chip CLKID_CFG>; 351 + clocks = <&chip_clk CLKID_CFG>; 352 352 clock-names = "timer"; 353 353 status = "disabled"; 354 354 }; ··· 356 356 timer5: timer@2c64 { 357 357 compatible = "snps,dw-apb-timer"; 358 358 reg = <0x2c64 0x14>; 359 - clocks = <&chip CLKID_CFG>; 359 + clocks = <&chip_clk CLKID_CFG>; 360 360 clock-names = "timer"; 361 361 status = "disabled"; 362 362 }; ··· 364 364 timer6: timer@2c78 { 365 365 compatible = "snps,dw-apb-timer"; 366 366 reg = <0x2c78 0x14>; 367 - clocks = <&chip CLKID_CFG>; 367 + clocks = <&chip_clk CLKID_CFG>; 368 368 clock-names = "timer"; 369 369 status = "disabled"; 370 370 }; ··· 372 372 timer7: timer@2c8c { 373 373 compatible = "snps,dw-apb-timer"; 374 374 reg = <0x2c8c 0x14>; 375 - clocks = <&chip CLKID_CFG>; 375 + clocks = <&chip_clk CLKID_CFG>; 376 376 clock-names = "timer"; 377 377 status = "disabled"; 378 378 }; ··· 388 388 }; 389 389 390 390 chip: chip-control@ea0000 { 391 - compatible = "marvell,berlin2q-chip-ctrl"; 392 - #clock-cells = <1>; 393 - #reset-cells = <2>; 391 + compatible = "simple-mfd", "syscon"; 394 392 reg = <0xea0000 0x400>, <0xdd0170 0x10>; 395 - clocks = <&refclk>; 396 - clock-names = "refclk"; 397 393 398 - twsi0_pmux: twsi0-pmux { 399 - groups = "G6"; 400 - function = "twsi0"; 394 + chip_clk: clock { 395 + compatible = "marvell,berlin2q-clk"; 396 + #clock-cells = <1>; 397 + clocks = <&refclk>; 398 + clock-names = "refclk"; 401 399 }; 402 400 403 - twsi1_pmux: twsi1-pmux { 404 - groups = "G7"; 405 - function = "twsi1"; 401 + soc_pinctrl: pin-controller { 402 + compatible = "marvell,berlin2q-soc-pinctrl"; 403 + 404 + twsi0_pmux: twsi0-pmux { 405 + groups = "G6"; 406 + function = "twsi0"; 407 + }; 408 + 409 + twsi1_pmux: twsi1-pmux { 410 + groups = "G7"; 411 + function = "twsi1"; 412 + }; 413 + }; 414 + 415 + chip_rst: reset { 416 + compatible = "marvell,berlin2-reset"; 417 + #reset-cells = <2>; 406 418 
}; 407 419 }; 408 420 ··· 422 410 compatible = "marvell,berlin2q-ahci", "generic-ahci"; 423 411 reg = <0xe90000 0x1000>; 424 412 interrupts = <GIC_SPI 7 IRQ_TYPE_LEVEL_HIGH>; 425 - clocks = <&chip CLKID_SATA>; 413 + clocks = <&chip_clk CLKID_SATA>; 426 414 #address-cells = <1>; 427 415 #size-cells = <0>; 428 416 ··· 442 430 sata_phy: phy@e900a0 { 443 431 compatible = "marvell,berlin2q-sata-phy"; 444 432 reg = <0xe900a0 0x200>; 445 - clocks = <&chip CLKID_SATA>; 433 + clocks = <&chip_clk CLKID_SATA>; 446 434 #address-cells = <1>; 447 435 #size-cells = <0>; 448 436 #phy-cells = <1>; ··· 461 449 compatible = "chipidea,usb2"; 462 450 reg = <0xed0000 0x10000>; 463 451 interrupts = <GIC_SPI 11 IRQ_TYPE_LEVEL_HIGH>; 464 - clocks = <&chip CLKID_USB0>; 452 + clocks = <&chip_clk CLKID_USB0>; 465 453 phys = <&usb_phy0>; 466 454 phy-names = "usb-phy"; 467 455 status = "disabled"; ··· 471 459 compatible = "chipidea,usb2"; 472 460 reg = <0xee0000 0x10000>; 473 461 interrupts = <GIC_SPI 12 IRQ_TYPE_LEVEL_HIGH>; 474 - clocks = <&chip CLKID_USB1>; 462 + clocks = <&chip_clk CLKID_USB1>; 475 463 phys = <&usb_phy1>; 476 464 phy-names = "usb-phy"; 477 465 status = "disabled"; ··· 566 554 }; 567 555 568 556 sysctrl: pin-controller@d000 { 569 - compatible = "marvell,berlin2q-system-ctrl"; 557 + compatible = "simple-mfd", "syscon"; 570 558 reg = <0xd000 0x100>; 571 559 572 - uart0_pmux: uart0-pmux { 573 - groups = "GSM12"; 574 - function = "uart0"; 560 + sys_pinctrl: pin-controller { 561 + compatible = "marvell,berlin2q-system-pinctrl"; 562 + 563 + uart0_pmux: uart0-pmux { 564 + groups = "GSM12"; 565 + function = "uart0"; 566 + }; 567 + 568 + uart1_pmux: uart1-pmux { 569 + groups = "GSM14"; 570 + function = "uart1"; 571 + }; 572 + 573 + twsi2_pmux: twsi2-pmux { 574 + groups = "GSM13"; 575 + function = "twsi2"; 576 + }; 577 + 578 + twsi3_pmux: twsi3-pmux { 579 + groups = "GSM14"; 580 + function = "twsi3"; 581 + }; 575 582 }; 576 583 577 - uart1_pmux: uart1-pmux { 578 - groups = "GSM14"; 
579 - function = "uart1"; 580 - }; 581 - 582 - twsi2_pmux: twsi2-pmux { 583 - groups = "GSM13"; 584 - function = "twsi2"; 585 - }; 586 - 587 - twsi3_pmux: twsi3-pmux { 588 - groups = "GSM14"; 589 - function = "twsi3"; 584 + adc: adc { 585 + compatible = "marvell,berlin2-adc"; 586 + interrupts = <12>, <14>; 587 + interrupt-names = "adc", "tsen"; 590 588 }; 591 589 }; 592 590
+2 -2
arch/arm/boot/dts/integrator.dtsi
··· 6 6 7 7 / { 8 8 core-module@10000000 { 9 - compatible = "arm,core-module-integrator", "syscon"; 9 + compatible = "arm,core-module-integrator", "syscon", "simple-mfd"; 10 10 reg = <0x10000000 0x200>; 11 11 12 12 /* Use core module LED to indicate CPU load */ ··· 95 95 96 96 syscon { 97 97 /* Debug registers mapped as syscon */ 98 - compatible = "syscon"; 98 + compatible = "syscon", "simple-mfd"; 99 99 reg = <0x1a000000 0x10>; 100 100 101 101 led@04.0 {
+1
arch/arm/mach-berlin/Kconfig
··· 6 6 select DW_APB_ICTL 7 7 select DW_APB_TIMER_OF 8 8 select GENERIC_IRQ_CHIP 9 + select MFD_SYSCON 9 10 select PINCTRL 10 11 11 12 if ARCH_BERLIN
+68
arch/arm64/boot/dts/arm/juno-motherboard.dtsi
··· 138 138 clock-output-names = "timerclken0", "timerclken1", "timerclken2", "timerclken3"; 139 139 }; 140 140 141 + apbregs@010000 { 142 + compatible = "syscon", "simple-mfd"; 143 + reg = <0x010000 0x1000>; 144 + 145 + led@08.0 { 146 + compatible = "register-bit-led"; 147 + offset = <0x08>; 148 + mask = <0x01>; 149 + label = "vexpress:0"; 150 + linux,default-trigger = "heartbeat"; 151 + default-state = "on"; 152 + }; 153 + led@08.1 { 154 + compatible = "register-bit-led"; 155 + offset = <0x08>; 156 + mask = <0x02>; 157 + label = "vexpress:1"; 158 + linux,default-trigger = "mmc0"; 159 + default-state = "off"; 160 + }; 161 + led@08.2 { 162 + compatible = "register-bit-led"; 163 + offset = <0x08>; 164 + mask = <0x04>; 165 + label = "vexpress:2"; 166 + linux,default-trigger = "cpu0"; 167 + default-state = "off"; 168 + }; 169 + led@08.3 { 170 + compatible = "register-bit-led"; 171 + offset = <0x08>; 172 + mask = <0x08>; 173 + label = "vexpress:3"; 174 + linux,default-trigger = "cpu1"; 175 + default-state = "off"; 176 + }; 177 + led@08.4 { 178 + compatible = "register-bit-led"; 179 + offset = <0x08>; 180 + mask = <0x10>; 181 + label = "vexpress:4"; 182 + linux,default-trigger = "cpu2"; 183 + default-state = "off"; 184 + }; 185 + led@08.5 { 186 + compatible = "register-bit-led"; 187 + offset = <0x08>; 188 + mask = <0x20>; 189 + label = "vexpress:5"; 190 + linux,default-trigger = "cpu3"; 191 + default-state = "off"; 192 + }; 193 + led@08.6 { 194 + compatible = "register-bit-led"; 195 + offset = <0x08>; 196 + mask = <0x40>; 197 + label = "vexpress:6"; 198 + default-state = "off"; 199 + }; 200 + led@08.7 { 201 + compatible = "register-bit-led"; 202 + offset = <0x08>; 203 + mask = <0x80>; 204 + label = "vexpress:7"; 205 + default-state = "off"; 206 + }; 207 + }; 208 + 141 209 mmci@050000 { 142 210 compatible = "arm,pl180", "arm,primecell"; 143 211 reg = <0x050000 0x1000>;
+6
arch/arm64/configs/defconfig
··· 139 139 CONFIG_MMC_SDHCI=y 140 140 CONFIG_MMC_SDHCI_PLTFM=y 141 141 CONFIG_MMC_SPI=y 142 + CONFIG_NEW_LEDS=y 143 + CONFIG_LEDS_CLASS=y 144 + CONFIG_LEDS_SYSCON=y 145 + CONFIG_LEDS_TRIGGERS=y 146 + CONFIG_LEDS_TRIGGER_HEARTBEAT=y 147 + CONFIG_LEDS_TRIGGER_CPU=y 142 148 CONFIG_RTC_CLASS=y 143 149 CONFIG_RTC_DRV_EFI=y 144 150 CONFIG_RTC_DRV_XGENE=y
+24 -7
drivers/bus/Kconfig
··· 7 7 config ARM_CCI 8 8 bool 9 9 10 + config ARM_CCI_PMU 11 + bool 12 + select ARM_CCI 13 + 10 14 config ARM_CCI400_COMMON 11 15 bool 12 16 select ARM_CCI 13 17 14 18 config ARM_CCI400_PMU 15 19 bool "ARM CCI400 PMU support" 16 - default y 17 - depends on ARM || ARM64 18 - depends on HW_PERF_EVENTS 20 + depends on (ARM && CPU_V7) || ARM64 21 + depends on PERF_EVENTS 19 22 select ARM_CCI400_COMMON 23 + select ARM_CCI_PMU 20 24 help 21 - Support for PMU events monitoring on the ARM CCI cache coherent 22 - interconnect. 23 - 24 - If unsure, say Y 25 + Support for PMU events monitoring on the ARM CCI-400 (cache coherent 26 + interconnect). CCI-400 supports counting events related to the 27 + connected slave/master interfaces. 25 28 26 29 config ARM_CCI400_PORT_CTRL 27 30 bool ··· 33 30 help 34 31 Low level power management driver for CCI400 cache coherent 35 32 interconnect for ARM platforms. 33 + 34 + config ARM_CCI500_PMU 35 + bool "ARM CCI500 PMU support" 36 + default y 37 + depends on (ARM && CPU_V7) || ARM64 38 + depends on PERF_EVENTS 39 + select ARM_CCI_PMU 40 + help 41 + Support for PMU events monitoring on the ARM CCI-500 cache coherent 42 + interconnect. CCI-500 provides 8 independent event counters, which 43 + can count events pertaining to the slave/master interfaces as well 44 + as the internal events to the CCI. 45 + 46 + If unsure, say Y 36 47 37 48 config ARM_CCN 38 49 bool "ARM CCN driver support"
+736 -169
drivers/bus/arm-cci.c
··· 52 52 #ifdef CONFIG_ARM_CCI400_COMMON 53 53 {.compatible = "arm,cci-400", .data = CCI400_PORTS_DATA }, 54 54 #endif 55 + #ifdef CONFIG_ARM_CCI500_PMU 56 + { .compatible = "arm,cci-500", }, 57 + #endif 55 58 {}, 56 59 }; 57 60 58 - #ifdef CONFIG_ARM_CCI400_PMU 61 + #ifdef CONFIG_ARM_CCI_PMU 59 62 60 - #define DRIVER_NAME "CCI-400" 63 + #define DRIVER_NAME "ARM-CCI" 61 64 #define DRIVER_NAME_PMU DRIVER_NAME " PMU" 62 65 63 66 #define CCI_PMCR 0x0100 ··· 80 77 81 78 #define CCI_PMU_OVRFLW_FLAG 1 82 79 83 - #define CCI_PMU_CNTR_BASE(idx) ((idx) * SZ_4K) 80 + #define CCI_PMU_CNTR_SIZE(model) ((model)->cntr_size) 81 + #define CCI_PMU_CNTR_BASE(model, idx) ((idx) * CCI_PMU_CNTR_SIZE(model)) 82 + #define CCI_PMU_CNTR_MASK ((1ULL << 32) -1) 83 + #define CCI_PMU_CNTR_LAST(cci_pmu) (cci_pmu->num_cntrs - 1) 84 84 85 - #define CCI_PMU_CNTR_MASK ((1ULL << 32) -1) 86 - 87 - #define CCI_PMU_EVENT_MASK 0xffUL 88 - #define CCI_PMU_EVENT_SOURCE(event) ((event >> 5) & 0x7) 89 - #define CCI_PMU_EVENT_CODE(event) (event & 0x1f) 90 - 91 - #define CCI_PMU_MAX_HW_EVENTS 5 /* CCI PMU has 4 counters + 1 cycle counter */ 85 + #define CCI_PMU_MAX_HW_CNTRS(model) \ 86 + ((model)->num_hw_cntrs + (model)->fixed_hw_cntrs) 92 87 93 88 /* Types of interfaces that can generate events */ 94 89 enum { 95 90 CCI_IF_SLAVE, 96 91 CCI_IF_MASTER, 92 + #ifdef CONFIG_ARM_CCI500_PMU 93 + CCI_IF_GLOBAL, 94 + #endif 97 95 CCI_IF_MAX, 98 96 }; 99 97 ··· 104 100 }; 105 101 106 102 struct cci_pmu_hw_events { 107 - struct perf_event *events[CCI_PMU_MAX_HW_EVENTS]; 108 - unsigned long used_mask[BITS_TO_LONGS(CCI_PMU_MAX_HW_EVENTS)]; 103 + struct perf_event **events; 104 + unsigned long *used_mask; 109 105 raw_spinlock_t pmu_lock; 110 106 }; 111 107 108 + struct cci_pmu; 109 + /* 110 + * struct cci_pmu_model: 111 + * @fixed_hw_cntrs - Number of fixed event counters 112 + * @num_hw_cntrs - Maximum number of programmable event counters 113 + * @cntr_size - Size of an event counter mapping 114 + */ 112 115 struct 
cci_pmu_model { 113 116 char *name; 117 + u32 fixed_hw_cntrs; 118 + u32 num_hw_cntrs; 119 + u32 cntr_size; 120 + u64 nformat_attrs; 121 + u64 nevent_attrs; 122 + struct dev_ext_attribute *format_attrs; 123 + struct dev_ext_attribute *event_attrs; 114 124 struct event_range event_ranges[CCI_IF_MAX]; 125 + int (*validate_hw_event)(struct cci_pmu *, unsigned long); 126 + int (*get_event_idx)(struct cci_pmu *, struct cci_pmu_hw_events *, unsigned long); 115 127 }; 116 128 117 129 static struct cci_pmu_model cci_pmu_models[]; ··· 136 116 void __iomem *base; 137 117 struct pmu pmu; 138 118 int nr_irqs; 139 - int irqs[CCI_PMU_MAX_HW_EVENTS]; 119 + int *irqs; 140 120 unsigned long active_irqs; 141 121 const struct cci_pmu_model *model; 142 122 struct cci_pmu_hw_events hw_events; 143 123 struct platform_device *plat_device; 144 - int num_events; 124 + int num_cntrs; 145 125 atomic_t active_events; 146 126 struct mutex reserve_mutex; 127 + struct notifier_block cpu_nb; 147 128 cpumask_t cpus; 148 129 }; 149 - static struct cci_pmu *pmu; 150 130 151 131 #define to_cci_pmu(c) (container_of(c, struct cci_pmu, pmu)) 152 132 153 - /* Port ids */ 154 - #define CCI_PORT_S0 0 155 - #define CCI_PORT_S1 1 156 - #define CCI_PORT_S2 2 157 - #define CCI_PORT_S3 3 158 - #define CCI_PORT_S4 4 159 - #define CCI_PORT_M0 5 160 - #define CCI_PORT_M1 6 161 - #define CCI_PORT_M2 7 133 + enum cci_models { 134 + #ifdef CONFIG_ARM_CCI400_PMU 135 + CCI400_R0, 136 + CCI400_R1, 137 + #endif 138 + #ifdef CONFIG_ARM_CCI500_PMU 139 + CCI500_R0, 140 + #endif 141 + CCI_MODEL_MAX 142 + }; 162 143 163 - #define CCI_REV_R0 0 164 - #define CCI_REV_R1 1 165 - #define CCI_REV_R1_PX 5 144 + static ssize_t cci_pmu_format_show(struct device *dev, 145 + struct device_attribute *attr, char *buf); 146 + static ssize_t cci_pmu_event_show(struct device *dev, 147 + struct device_attribute *attr, char *buf); 148 + 149 + #define CCI_EXT_ATTR_ENTRY(_name, _func, _config) \ 150 + { __ATTR(_name, S_IRUGO, _func, NULL), (void 
*)_config } 151 + 152 + #define CCI_FORMAT_EXT_ATTR_ENTRY(_name, _config) \ 153 + CCI_EXT_ATTR_ENTRY(_name, cci_pmu_format_show, (char *)_config) 154 + #define CCI_EVENT_EXT_ATTR_ENTRY(_name, _config) \ 155 + CCI_EXT_ATTR_ENTRY(_name, cci_pmu_event_show, (unsigned long)_config) 156 + 157 + /* CCI400 PMU Specific definitions */ 158 + 159 + #ifdef CONFIG_ARM_CCI400_PMU 160 + 161 + /* Port ids */ 162 + #define CCI400_PORT_S0 0 163 + #define CCI400_PORT_S1 1 164 + #define CCI400_PORT_S2 2 165 + #define CCI400_PORT_S3 3 166 + #define CCI400_PORT_S4 4 167 + #define CCI400_PORT_M0 5 168 + #define CCI400_PORT_M1 6 169 + #define CCI400_PORT_M2 7 170 + 171 + #define CCI400_R1_PX 5 166 172 167 173 /* 168 174 * Instead of an event id to monitor CCI cycles, a dedicated counter is ··· 196 150 * make use of this event in hardware. 197 151 */ 198 152 enum cci400_perf_events { 199 - CCI_PMU_CYCLES = 0xff 153 + CCI400_PMU_CYCLES = 0xff 200 154 }; 201 155 202 - #define CCI_PMU_CYCLE_CNTR_IDX 0 203 - #define CCI_PMU_CNTR0_IDX 1 204 - #define CCI_PMU_CNTR_LAST(cci_pmu) (CCI_PMU_CYCLE_CNTR_IDX + cci_pmu->num_events - 1) 156 + #define CCI400_PMU_CYCLE_CNTR_IDX 0 157 + #define CCI400_PMU_CNTR0_IDX 1 205 158 206 159 /* 207 160 * CCI PMU event id is an 8-bit value made of two parts - bits 7:5 for one of 8 ··· 214 169 * the different revisions and are used to validate the event to be monitored. 
215 170 */ 216 171 217 - #define CCI_REV_R0_SLAVE_PORT_MIN_EV 0x00 218 - #define CCI_REV_R0_SLAVE_PORT_MAX_EV 0x13 219 - #define CCI_REV_R0_MASTER_PORT_MIN_EV 0x14 220 - #define CCI_REV_R0_MASTER_PORT_MAX_EV 0x1a 172 + #define CCI400_PMU_EVENT_MASK 0xffUL 173 + #define CCI400_PMU_EVENT_SOURCE_SHIFT 5 174 + #define CCI400_PMU_EVENT_SOURCE_MASK 0x7 175 + #define CCI400_PMU_EVENT_CODE_SHIFT 0 176 + #define CCI400_PMU_EVENT_CODE_MASK 0x1f 177 + #define CCI400_PMU_EVENT_SOURCE(event) \ 178 + ((event >> CCI400_PMU_EVENT_SOURCE_SHIFT) & \ 179 + CCI400_PMU_EVENT_SOURCE_MASK) 180 + #define CCI400_PMU_EVENT_CODE(event) \ 181 + ((event >> CCI400_PMU_EVENT_CODE_SHIFT) & CCI400_PMU_EVENT_CODE_MASK) 221 182 222 - #define CCI_REV_R1_SLAVE_PORT_MIN_EV 0x00 223 - #define CCI_REV_R1_SLAVE_PORT_MAX_EV 0x14 224 - #define CCI_REV_R1_MASTER_PORT_MIN_EV 0x00 225 - #define CCI_REV_R1_MASTER_PORT_MAX_EV 0x11 183 + #define CCI400_R0_SLAVE_PORT_MIN_EV 0x00 184 + #define CCI400_R0_SLAVE_PORT_MAX_EV 0x13 185 + #define CCI400_R0_MASTER_PORT_MIN_EV 0x14 186 + #define CCI400_R0_MASTER_PORT_MAX_EV 0x1a 226 187 227 - static int pmu_validate_hw_event(unsigned long hw_event) 188 + #define CCI400_R1_SLAVE_PORT_MIN_EV 0x00 189 + #define CCI400_R1_SLAVE_PORT_MAX_EV 0x14 190 + #define CCI400_R1_MASTER_PORT_MIN_EV 0x00 191 + #define CCI400_R1_MASTER_PORT_MAX_EV 0x11 192 + 193 + #define CCI400_CYCLE_EVENT_EXT_ATTR_ENTRY(_name, _config) \ 194 + CCI_EXT_ATTR_ENTRY(_name, cci400_pmu_cycle_event_show, \ 195 + (unsigned long)_config) 196 + 197 + static ssize_t cci400_pmu_cycle_event_show(struct device *dev, 198 + struct device_attribute *attr, char *buf); 199 + 200 + static struct dev_ext_attribute cci400_pmu_format_attrs[] = { 201 + CCI_FORMAT_EXT_ATTR_ENTRY(event, "config:0-4"), 202 + CCI_FORMAT_EXT_ATTR_ENTRY(source, "config:5-7"), 203 + }; 204 + 205 + static struct dev_ext_attribute cci400_r0_pmu_event_attrs[] = { 206 + /* Slave events */ 207 + CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_any, 0x0), 208 + 
CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_device, 0x01), 209 + CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_normal_or_nonshareable, 0x2), 210 + CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_inner_or_outershareable, 0x3), 211 + CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_cache_maintenance, 0x4), 212 + CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_mem_barrier, 0x5), 213 + CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_sync_barrier, 0x6), 214 + CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_dvm_msg, 0x7), 215 + CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_dvm_msg_sync, 0x8), 216 + CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_stall_tt_full, 0x9), 217 + CCI_EVENT_EXT_ATTR_ENTRY(si_r_data_last_hs_snoop, 0xA), 218 + CCI_EVENT_EXT_ATTR_ENTRY(si_r_data_stall_rvalids_h_rready_l, 0xB), 219 + CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_any, 0xC), 220 + CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_device, 0xD), 221 + CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_normal_or_nonshareable, 0xE), 222 + CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_inner_or_outershare_wback_wclean, 0xF), 223 + CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_write_unique, 0x10), 224 + CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_write_line_unique, 0x11), 225 + CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_evict, 0x12), 226 + CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_stall_tt_full, 0x13), 227 + /* Master events */ 228 + CCI_EVENT_EXT_ATTR_ENTRY(mi_retry_speculative_fetch, 0x14), 229 + CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_addr_hazard, 0x15), 230 + CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_id_hazard, 0x16), 231 + CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_tt_full, 0x17), 232 + CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_barrier_hazard, 0x18), 233 + CCI_EVENT_EXT_ATTR_ENTRY(mi_wrq_stall_barrier_hazard, 0x19), 234 + CCI_EVENT_EXT_ATTR_ENTRY(mi_wrq_stall_tt_full, 0x1A), 235 + /* Special event for cycles counter */ 236 + CCI400_CYCLE_EVENT_EXT_ATTR_ENTRY(cycles, 0xff), 237 + }; 238 + 239 + static struct dev_ext_attribute cci400_r1_pmu_event_attrs[] = { 240 + /* Slave events */ 241 + CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_any, 0x0), 242 + CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_device, 0x01), 243 + 
CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_normal_or_nonshareable, 0x2), 244 + CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_inner_or_outershareable, 0x3), 245 + CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_cache_maintenance, 0x4), 246 + CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_mem_barrier, 0x5), 247 + CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_sync_barrier, 0x6), 248 + CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_dvm_msg, 0x7), 249 + CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_dvm_msg_sync, 0x8), 250 + CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_stall_tt_full, 0x9), 251 + CCI_EVENT_EXT_ATTR_ENTRY(si_r_data_last_hs_snoop, 0xA), 252 + CCI_EVENT_EXT_ATTR_ENTRY(si_r_data_stall_rvalids_h_rready_l, 0xB), 253 + CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_any, 0xC), 254 + CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_device, 0xD), 255 + CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_normal_or_nonshareable, 0xE), 256 + CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_inner_or_outershare_wback_wclean, 0xF), 257 + CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_write_unique, 0x10), 258 + CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_write_line_unique, 0x11), 259 + CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_evict, 0x12), 260 + CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_stall_tt_full, 0x13), 261 + CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_stall_slave_id_hazard, 0x14), 262 + /* Master events */ 263 + CCI_EVENT_EXT_ATTR_ENTRY(mi_retry_speculative_fetch, 0x0), 264 + CCI_EVENT_EXT_ATTR_ENTRY(mi_stall_cycle_addr_hazard, 0x1), 265 + CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_master_id_hazard, 0x2), 266 + CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_hi_prio_rtq_full, 0x3), 267 + CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_barrier_hazard, 0x4), 268 + CCI_EVENT_EXT_ATTR_ENTRY(mi_wrq_stall_barrier_hazard, 0x5), 269 + CCI_EVENT_EXT_ATTR_ENTRY(mi_wrq_stall_wtq_full, 0x6), 270 + CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_low_prio_rtq_full, 0x7), 271 + CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_mid_prio_rtq_full, 0x8), 272 + CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_qvn_vn0, 0x9), 273 + CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_qvn_vn1, 0xA), 274 + CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_qvn_vn2, 
0xB), 275 + CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall_qvn_vn3, 0xC), 276 + CCI_EVENT_EXT_ATTR_ENTRY(mi_wrq_stall_qvn_vn0, 0xD), 277 + CCI_EVENT_EXT_ATTR_ENTRY(mi_wrq_stall_qvn_vn1, 0xE), 278 + CCI_EVENT_EXT_ATTR_ENTRY(mi_wrq_stall_qvn_vn2, 0xF), 279 + CCI_EVENT_EXT_ATTR_ENTRY(mi_wrq_stall_qvn_vn3, 0x10), 280 + CCI_EVENT_EXT_ATTR_ENTRY(mi_wrq_unique_or_line_unique_addr_hazard, 0x11), 281 + /* Special event for cycles counter */ 282 + CCI400_CYCLE_EVENT_EXT_ATTR_ENTRY(cycles, 0xff), 283 + }; 284 + 285 + static ssize_t cci400_pmu_cycle_event_show(struct device *dev, 286 + struct device_attribute *attr, char *buf) 228 287 { 229 - u8 ev_source = CCI_PMU_EVENT_SOURCE(hw_event); 230 - u8 ev_code = CCI_PMU_EVENT_CODE(hw_event); 288 + struct dev_ext_attribute *eattr = container_of(attr, 289 + struct dev_ext_attribute, attr); 290 + return snprintf(buf, PAGE_SIZE, "config=0x%lx\n", (unsigned long)eattr->var); 291 + } 292 + 293 + static int cci400_get_event_idx(struct cci_pmu *cci_pmu, 294 + struct cci_pmu_hw_events *hw, 295 + unsigned long cci_event) 296 + { 297 + int idx; 298 + 299 + /* cycles event idx is fixed */ 300 + if (cci_event == CCI400_PMU_CYCLES) { 301 + if (test_and_set_bit(CCI400_PMU_CYCLE_CNTR_IDX, hw->used_mask)) 302 + return -EAGAIN; 303 + 304 + return CCI400_PMU_CYCLE_CNTR_IDX; 305 + } 306 + 307 + for (idx = CCI400_PMU_CNTR0_IDX; idx <= CCI_PMU_CNTR_LAST(cci_pmu); ++idx) 308 + if (!test_and_set_bit(idx, hw->used_mask)) 309 + return idx; 310 + 311 + /* No counters available */ 312 + return -EAGAIN; 313 + } 314 + 315 + static int cci400_validate_hw_event(struct cci_pmu *cci_pmu, unsigned long hw_event) 316 + { 317 + u8 ev_source = CCI400_PMU_EVENT_SOURCE(hw_event); 318 + u8 ev_code = CCI400_PMU_EVENT_CODE(hw_event); 231 319 int if_type; 232 320 233 - if (hw_event & ~CCI_PMU_EVENT_MASK) 321 + if (hw_event & ~CCI400_PMU_EVENT_MASK) 234 322 return -ENOENT; 235 323 324 + if (hw_event == CCI400_PMU_CYCLES) 325 + return hw_event; 326 + 236 327 switch (ev_source) { 237 - 
case CCI_PORT_S0: 238 - case CCI_PORT_S1: 239 - case CCI_PORT_S2: 240 - case CCI_PORT_S3: 241 - case CCI_PORT_S4: 328 + case CCI400_PORT_S0: 329 + case CCI400_PORT_S1: 330 + case CCI400_PORT_S2: 331 + case CCI400_PORT_S3: 332 + case CCI400_PORT_S4: 242 333 /* Slave Interface */ 243 334 if_type = CCI_IF_SLAVE; 244 335 break; 245 - case CCI_PORT_M0: 246 - case CCI_PORT_M1: 247 - case CCI_PORT_M2: 336 + case CCI400_PORT_M0: 337 + case CCI400_PORT_M1: 338 + case CCI400_PORT_M2: 248 339 /* Master Interface */ 249 340 if_type = CCI_IF_MASTER; 250 341 break; ··· 388 207 return -ENOENT; 389 208 } 390 209 391 - if (ev_code >= pmu->model->event_ranges[if_type].min && 392 - ev_code <= pmu->model->event_ranges[if_type].max) 210 + if (ev_code >= cci_pmu->model->event_ranges[if_type].min && 211 + ev_code <= cci_pmu->model->event_ranges[if_type].max) 393 212 return hw_event; 394 213 395 214 return -ENOENT; 396 215 } 397 216 398 - static int probe_cci_revision(void) 217 + static int probe_cci400_revision(void) 399 218 { 400 219 int rev; 401 220 rev = readl_relaxed(cci_ctrl_base + CCI_PID2) & CCI_PID2_REV_MASK; 402 221 rev >>= CCI_PID2_REV_SHIFT; 403 222 404 - if (rev < CCI_REV_R1_PX) 405 - return CCI_REV_R0; 223 + if (rev < CCI400_R1_PX) 224 + return CCI400_R0; 406 225 else 407 - return CCI_REV_R1; 226 + return CCI400_R1; 408 227 } 409 228 410 229 static const struct cci_pmu_model *probe_cci_model(struct platform_device *pdev) 411 230 { 412 231 if (platform_has_secure_cci_access()) 413 - return &cci_pmu_models[probe_cci_revision()]; 232 + return &cci_pmu_models[probe_cci400_revision()]; 414 233 return NULL; 234 + } 235 + #else /* !CONFIG_ARM_CCI400_PMU */ 236 + static inline struct cci_pmu_model *probe_cci_model(struct platform_device *pdev) 237 + { 238 + return NULL; 239 + } 240 + #endif /* CONFIG_ARM_CCI400_PMU */ 241 + 242 + #ifdef CONFIG_ARM_CCI500_PMU 243 + 244 + /* 245 + * CCI500 provides 8 independent event counters that can count 246 + * any of the events available. 
247 + * 248 + * CCI500 PMU event id is an 9-bit value made of two parts. 249 + * bits [8:5] - Source for the event 250 + * 0x0-0x6 - Slave interfaces 251 + * 0x8-0xD - Master interfaces 252 + * 0xf - Global Events 253 + * 0x7,0xe - Reserved 254 + * 255 + * bits [4:0] - Event code (specific to type of interface) 256 + */ 257 + 258 + /* Port ids */ 259 + #define CCI500_PORT_S0 0x0 260 + #define CCI500_PORT_S1 0x1 261 + #define CCI500_PORT_S2 0x2 262 + #define CCI500_PORT_S3 0x3 263 + #define CCI500_PORT_S4 0x4 264 + #define CCI500_PORT_S5 0x5 265 + #define CCI500_PORT_S6 0x6 266 + 267 + #define CCI500_PORT_M0 0x8 268 + #define CCI500_PORT_M1 0x9 269 + #define CCI500_PORT_M2 0xa 270 + #define CCI500_PORT_M3 0xb 271 + #define CCI500_PORT_M4 0xc 272 + #define CCI500_PORT_M5 0xd 273 + 274 + #define CCI500_PORT_GLOBAL 0xf 275 + 276 + #define CCI500_PMU_EVENT_MASK 0x1ffUL 277 + #define CCI500_PMU_EVENT_SOURCE_SHIFT 0x5 278 + #define CCI500_PMU_EVENT_SOURCE_MASK 0xf 279 + #define CCI500_PMU_EVENT_CODE_SHIFT 0x0 280 + #define CCI500_PMU_EVENT_CODE_MASK 0x1f 281 + 282 + #define CCI500_PMU_EVENT_SOURCE(event) \ 283 + ((event >> CCI500_PMU_EVENT_SOURCE_SHIFT) & CCI500_PMU_EVENT_SOURCE_MASK) 284 + #define CCI500_PMU_EVENT_CODE(event) \ 285 + ((event >> CCI500_PMU_EVENT_CODE_SHIFT) & CCI500_PMU_EVENT_CODE_MASK) 286 + 287 + #define CCI500_SLAVE_PORT_MIN_EV 0x00 288 + #define CCI500_SLAVE_PORT_MAX_EV 0x1f 289 + #define CCI500_MASTER_PORT_MIN_EV 0x00 290 + #define CCI500_MASTER_PORT_MAX_EV 0x06 291 + #define CCI500_GLOBAL_PORT_MIN_EV 0x00 292 + #define CCI500_GLOBAL_PORT_MAX_EV 0x0f 293 + 294 + 295 + #define CCI500_GLOBAL_EVENT_EXT_ATTR_ENTRY(_name, _config) \ 296 + CCI_EXT_ATTR_ENTRY(_name, cci500_pmu_global_event_show, \ 297 + (unsigned long) _config) 298 + 299 + static ssize_t cci500_pmu_global_event_show(struct device *dev, 300 + struct device_attribute *attr, char *buf); 301 + 302 + static struct dev_ext_attribute cci500_pmu_format_attrs[] = { 303 + 
CCI_FORMAT_EXT_ATTR_ENTRY(event, "config:0-4"), 304 + CCI_FORMAT_EXT_ATTR_ENTRY(source, "config:5-8"), 305 + }; 306 + 307 + static struct dev_ext_attribute cci500_pmu_event_attrs[] = { 308 + /* Slave events */ 309 + CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_arvalid, 0x0), 310 + CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_dev, 0x1), 311 + CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_nonshareable, 0x2), 312 + CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_shareable_non_alloc, 0x3), 313 + CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_shareable_alloc, 0x4), 314 + CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_invalidate, 0x5), 315 + CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_cache_maint, 0x6), 316 + CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_dvm_msg, 0x7), 317 + CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_rval, 0x8), 318 + CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_hs_rlast_snoop, 0x9), 319 + CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_hs_awalid, 0xA), 320 + CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_dev, 0xB), 321 + CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_non_shareable, 0xC), 322 + CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_share_wb, 0xD), 323 + CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_share_wlu, 0xE), 324 + CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_share_wunique, 0xF), 325 + CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_evict, 0x10), 326 + CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_wrevict, 0x11), 327 + CCI_EVENT_EXT_ATTR_ENTRY(si_w_data_beat, 0x12), 328 + CCI_EVENT_EXT_ATTR_ENTRY(si_srq_acvalid, 0x13), 329 + CCI_EVENT_EXT_ATTR_ENTRY(si_srq_read, 0x14), 330 + CCI_EVENT_EXT_ATTR_ENTRY(si_srq_clean, 0x15), 331 + CCI_EVENT_EXT_ATTR_ENTRY(si_srq_data_transfer_low, 0x16), 332 + CCI_EVENT_EXT_ATTR_ENTRY(si_rrq_stall_arvalid, 0x17), 333 + CCI_EVENT_EXT_ATTR_ENTRY(si_r_data_stall, 0x18), 334 + CCI_EVENT_EXT_ATTR_ENTRY(si_wrq_stall, 0x19), 335 + CCI_EVENT_EXT_ATTR_ENTRY(si_w_data_stall, 0x1A), 336 + CCI_EVENT_EXT_ATTR_ENTRY(si_w_resp_stall, 0x1B), 337 + CCI_EVENT_EXT_ATTR_ENTRY(si_srq_stall, 0x1C), 338 + CCI_EVENT_EXT_ATTR_ENTRY(si_s_data_stall, 0x1D), 339 + CCI_EVENT_EXT_ATTR_ENTRY(si_rq_stall_ot_limit, 0x1E), 340 + 
CCI_EVENT_EXT_ATTR_ENTRY(si_r_stall_arbit, 0x1F), 341 + 342 + /* Master events */ 343 + CCI_EVENT_EXT_ATTR_ENTRY(mi_r_data_beat_any, 0x0), 344 + CCI_EVENT_EXT_ATTR_ENTRY(mi_w_data_beat_any, 0x1), 345 + CCI_EVENT_EXT_ATTR_ENTRY(mi_rrq_stall, 0x2), 346 + CCI_EVENT_EXT_ATTR_ENTRY(mi_r_data_stall, 0x3), 347 + CCI_EVENT_EXT_ATTR_ENTRY(mi_wrq_stall, 0x4), 348 + CCI_EVENT_EXT_ATTR_ENTRY(mi_w_data_stall, 0x5), 349 + CCI_EVENT_EXT_ATTR_ENTRY(mi_w_resp_stall, 0x6), 350 + 351 + /* Global events */ 352 + CCI500_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_access_filter_bank_0_1, 0x0), 353 + CCI500_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_access_filter_bank_2_3, 0x1), 354 + CCI500_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_access_filter_bank_4_5, 0x2), 355 + CCI500_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_access_filter_bank_6_7, 0x3), 356 + CCI500_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_access_miss_filter_bank_0_1, 0x4), 357 + CCI500_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_access_miss_filter_bank_2_3, 0x5), 358 + CCI500_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_access_miss_filter_bank_4_5, 0x6), 359 + CCI500_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_access_miss_filter_bank_6_7, 0x7), 360 + CCI500_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_back_invalidation, 0x8), 361 + CCI500_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_stall_alloc_busy, 0x9), 362 + CCI500_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_stall_tt_full, 0xA), 363 + CCI500_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_wrq, 0xB), 364 + CCI500_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_cd_hs, 0xC), 365 + CCI500_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_rq_stall_addr_hazard, 0xD), 366 + CCI500_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snopp_rq_stall_tt_full, 0xE), 367 + CCI500_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_rq_tzmp1_prot, 0xF), 368 + }; 369 + 370 + static ssize_t cci500_pmu_global_event_show(struct device *dev, 371 + struct device_attribute *attr, char *buf) 372 + { 373 + struct dev_ext_attribute *eattr = container_of(attr, 374 + struct dev_ext_attribute, attr); 375 + /* Global events have single 
fixed source code */ 376 + return snprintf(buf, PAGE_SIZE, "event=0x%lx,source=0x%x\n", 377 + (unsigned long)eattr->var, CCI500_PORT_GLOBAL); 378 + } 379 + 380 + static int cci500_validate_hw_event(struct cci_pmu *cci_pmu, 381 + unsigned long hw_event) 382 + { 383 + u32 ev_source = CCI500_PMU_EVENT_SOURCE(hw_event); 384 + u32 ev_code = CCI500_PMU_EVENT_CODE(hw_event); 385 + int if_type; 386 + 387 + if (hw_event & ~CCI500_PMU_EVENT_MASK) 388 + return -ENOENT; 389 + 390 + switch (ev_source) { 391 + case CCI500_PORT_S0: 392 + case CCI500_PORT_S1: 393 + case CCI500_PORT_S2: 394 + case CCI500_PORT_S3: 395 + case CCI500_PORT_S4: 396 + case CCI500_PORT_S5: 397 + case CCI500_PORT_S6: 398 + if_type = CCI_IF_SLAVE; 399 + break; 400 + case CCI500_PORT_M0: 401 + case CCI500_PORT_M1: 402 + case CCI500_PORT_M2: 403 + case CCI500_PORT_M3: 404 + case CCI500_PORT_M4: 405 + case CCI500_PORT_M5: 406 + if_type = CCI_IF_MASTER; 407 + break; 408 + case CCI500_PORT_GLOBAL: 409 + if_type = CCI_IF_GLOBAL; 410 + break; 411 + default: 412 + return -ENOENT; 413 + } 414 + 415 + if (ev_code >= cci_pmu->model->event_ranges[if_type].min && 416 + ev_code <= cci_pmu->model->event_ranges[if_type].max) 417 + return hw_event; 418 + 419 + return -ENOENT; 420 + } 421 + #endif /* CONFIG_ARM_CCI500_PMU */ 422 + 423 + static ssize_t cci_pmu_format_show(struct device *dev, 424 + struct device_attribute *attr, char *buf) 425 + { 426 + struct dev_ext_attribute *eattr = container_of(attr, 427 + struct dev_ext_attribute, attr); 428 + return snprintf(buf, PAGE_SIZE, "%s\n", (char *)eattr->var); 429 + } 430 + 431 + static ssize_t cci_pmu_event_show(struct device *dev, 432 + struct device_attribute *attr, char *buf) 433 + { 434 + struct dev_ext_attribute *eattr = container_of(attr, 435 + struct dev_ext_attribute, attr); 436 + /* source parameter is mandatory for normal PMU events */ 437 + return snprintf(buf, PAGE_SIZE, "source=?,event=0x%lx\n", 438 + (unsigned long)eattr->var); 415 439 } 416 440 417 441 static 
int pmu_is_valid_counter(struct cci_pmu *cci_pmu, int idx) 418 442 { 419 - return CCI_PMU_CYCLE_CNTR_IDX <= idx && 420 - idx <= CCI_PMU_CNTR_LAST(cci_pmu); 443 + return 0 <= idx && idx <= CCI_PMU_CNTR_LAST(cci_pmu); 421 444 } 422 445 423 - static u32 pmu_read_register(int idx, unsigned int offset) 446 + static u32 pmu_read_register(struct cci_pmu *cci_pmu, int idx, unsigned int offset) 424 447 { 425 - return readl_relaxed(pmu->base + CCI_PMU_CNTR_BASE(idx) + offset); 448 + return readl_relaxed(cci_pmu->base + 449 + CCI_PMU_CNTR_BASE(cci_pmu->model, idx) + offset); 426 450 } 427 451 428 - static void pmu_write_register(u32 value, int idx, unsigned int offset) 452 + static void pmu_write_register(struct cci_pmu *cci_pmu, u32 value, 453 + int idx, unsigned int offset) 429 454 { 430 - return writel_relaxed(value, pmu->base + CCI_PMU_CNTR_BASE(idx) + offset); 455 + return writel_relaxed(value, cci_pmu->base + 456 + CCI_PMU_CNTR_BASE(cci_pmu->model, idx) + offset); 431 457 } 432 458 433 - static void pmu_disable_counter(int idx) 459 + static void pmu_disable_counter(struct cci_pmu *cci_pmu, int idx) 434 460 { 435 - pmu_write_register(0, idx, CCI_PMU_CNTR_CTRL); 461 + pmu_write_register(cci_pmu, 0, idx, CCI_PMU_CNTR_CTRL); 436 462 } 437 463 438 - static void pmu_enable_counter(int idx) 464 + static void pmu_enable_counter(struct cci_pmu *cci_pmu, int idx) 439 465 { 440 - pmu_write_register(1, idx, CCI_PMU_CNTR_CTRL); 466 + pmu_write_register(cci_pmu, 1, idx, CCI_PMU_CNTR_CTRL); 441 467 } 442 468 443 - static void pmu_set_event(int idx, unsigned long event) 469 + static void pmu_set_event(struct cci_pmu *cci_pmu, int idx, unsigned long event) 444 470 { 445 - pmu_write_register(event, idx, CCI_PMU_EVT_SEL); 471 + pmu_write_register(cci_pmu, event, idx, CCI_PMU_EVT_SEL); 446 472 } 447 473 474 + /* 475 + * Returns the number of programmable counters actually implemented 476 + * by the cci 477 + */ 448 478 static u32 pmu_get_max_counters(void) 449 479 { 450 - u32 n_cnts = 
(readl_relaxed(cci_ctrl_base + CCI_PMCR) & 451 - CCI_PMCR_NCNT_MASK) >> CCI_PMCR_NCNT_SHIFT; 452 - 453 - /* add 1 for cycle counter */ 454 - return n_cnts + 1; 480 + return (readl_relaxed(cci_ctrl_base + CCI_PMCR) & 481 + CCI_PMCR_NCNT_MASK) >> CCI_PMCR_NCNT_SHIFT; 455 482 } 456 483 457 484 static int pmu_get_event_idx(struct cci_pmu_hw_events *hw, struct perf_event *event) 458 485 { 459 486 struct cci_pmu *cci_pmu = to_cci_pmu(event->pmu); 460 - struct hw_perf_event *hw_event = &event->hw; 461 - unsigned long cci_event = hw_event->config_base; 487 + unsigned long cci_event = event->hw.config_base; 462 488 int idx; 463 489 464 - if (cci_event == CCI_PMU_CYCLES) { 465 - if (test_and_set_bit(CCI_PMU_CYCLE_CNTR_IDX, hw->used_mask)) 466 - return -EAGAIN; 490 + if (cci_pmu->model->get_event_idx) 491 + return cci_pmu->model->get_event_idx(cci_pmu, hw, cci_event); 467 492 468 - return CCI_PMU_CYCLE_CNTR_IDX; 469 - } 470 - 471 - for (idx = CCI_PMU_CNTR0_IDX; idx <= CCI_PMU_CNTR_LAST(cci_pmu); ++idx) 493 + /* Generic code to find an unused idx from the mask */ 494 + for(idx = 0; idx <= CCI_PMU_CNTR_LAST(cci_pmu); idx++) 472 495 if (!test_and_set_bit(idx, hw->used_mask)) 473 496 return idx; 474 497 ··· 682 297 683 298 static int pmu_map_event(struct perf_event *event) 684 299 { 685 - int mapping; 686 - unsigned long config = event->attr.config; 300 + struct cci_pmu *cci_pmu = to_cci_pmu(event->pmu); 687 301 688 - if (event->attr.type < PERF_TYPE_MAX) 302 + if (event->attr.type < PERF_TYPE_MAX || 303 + !cci_pmu->model->validate_hw_event) 689 304 return -ENOENT; 690 305 691 - if (config == CCI_PMU_CYCLES) 692 - mapping = config; 693 - else 694 - mapping = pmu_validate_hw_event(config); 695 - 696 - return mapping; 306 + return cci_pmu->model->validate_hw_event(cci_pmu, event->attr.config); 697 307 } 698 308 699 309 static int pmu_request_irq(struct cci_pmu *cci_pmu, irq_handler_t handler) ··· 699 319 if (unlikely(!pmu_device)) 700 320 return -ENODEV; 701 321 702 - if 
(pmu->nr_irqs < 1) { 322 + if (cci_pmu->nr_irqs < 1) { 703 323 dev_err(&pmu_device->dev, "no irqs for CCI PMUs defined\n"); 704 324 return -ENODEV; 705 325 } ··· 711 331 * 712 332 * This should allow handling of non-unique interrupt for the counters. 713 333 */ 714 - for (i = 0; i < pmu->nr_irqs; i++) { 715 - int err = request_irq(pmu->irqs[i], handler, IRQF_SHARED, 334 + for (i = 0; i < cci_pmu->nr_irqs; i++) { 335 + int err = request_irq(cci_pmu->irqs[i], handler, IRQF_SHARED, 716 336 "arm-cci-pmu", cci_pmu); 717 337 if (err) { 718 338 dev_err(&pmu_device->dev, "unable to request IRQ%d for ARM CCI PMU counters\n", 719 - pmu->irqs[i]); 339 + cci_pmu->irqs[i]); 720 340 return err; 721 341 } 722 342 723 - set_bit(i, &pmu->active_irqs); 343 + set_bit(i, &cci_pmu->active_irqs); 724 344 } 725 345 726 346 return 0; ··· 730 350 { 731 351 int i; 732 352 733 - for (i = 0; i < pmu->nr_irqs; i++) { 734 - if (!test_and_clear_bit(i, &pmu->active_irqs)) 353 + for (i = 0; i < cci_pmu->nr_irqs; i++) { 354 + if (!test_and_clear_bit(i, &cci_pmu->active_irqs)) 735 355 continue; 736 356 737 - free_irq(pmu->irqs[i], cci_pmu); 357 + free_irq(cci_pmu->irqs[i], cci_pmu); 738 358 } 739 359 } 740 360 ··· 749 369 dev_err(&cci_pmu->plat_device->dev, "Invalid CCI PMU counter %d\n", idx); 750 370 return 0; 751 371 } 752 - value = pmu_read_register(idx, CCI_PMU_CNTR); 372 + value = pmu_read_register(cci_pmu, idx, CCI_PMU_CNTR); 753 373 754 374 return value; 755 375 } ··· 763 383 if (unlikely(!pmu_is_valid_counter(cci_pmu, idx))) 764 384 dev_err(&cci_pmu->plat_device->dev, "Invalid CCI PMU counter %d\n", idx); 765 385 else 766 - pmu_write_register(value, idx, CCI_PMU_CNTR); 386 + pmu_write_register(cci_pmu, value, idx, CCI_PMU_CNTR); 767 387 } 768 388 769 389 static u64 pmu_event_update(struct perf_event *event) ··· 807 427 { 808 428 unsigned long flags; 809 429 struct cci_pmu *cci_pmu = dev; 810 - struct cci_pmu_hw_events *events = &pmu->hw_events; 430 + struct cci_pmu_hw_events *events = 
&cci_pmu->hw_events; 811 431 int idx, handled = IRQ_NONE; 812 432 813 433 raw_spin_lock_irqsave(&events->pmu_lock, flags); ··· 816 436 * This should work regardless of whether we have per-counter overflow 817 437 * interrupt or a combined overflow interrupt. 818 438 */ 819 - for (idx = CCI_PMU_CYCLE_CNTR_IDX; idx <= CCI_PMU_CNTR_LAST(cci_pmu); idx++) { 439 + for (idx = 0; idx <= CCI_PMU_CNTR_LAST(cci_pmu); idx++) { 820 440 struct perf_event *event = events->events[idx]; 821 441 struct hw_perf_event *hw_counter; 822 442 ··· 826 446 hw_counter = &event->hw; 827 447 828 448 /* Did this counter overflow? */ 829 - if (!(pmu_read_register(idx, CCI_PMU_OVRFLW) & 449 + if (!(pmu_read_register(cci_pmu, idx, CCI_PMU_OVRFLW) & 830 450 CCI_PMU_OVRFLW_FLAG)) 831 451 continue; 832 452 833 - pmu_write_register(CCI_PMU_OVRFLW_FLAG, idx, CCI_PMU_OVRFLW); 453 + pmu_write_register(cci_pmu, CCI_PMU_OVRFLW_FLAG, idx, 454 + CCI_PMU_OVRFLW); 834 455 835 456 pmu_event_update(event); 836 457 pmu_event_set_period(event); ··· 873 492 { 874 493 struct cci_pmu *cci_pmu = to_cci_pmu(pmu); 875 494 struct cci_pmu_hw_events *hw_events = &cci_pmu->hw_events; 876 - int enabled = bitmap_weight(hw_events->used_mask, cci_pmu->num_events); 495 + int enabled = bitmap_weight(hw_events->used_mask, cci_pmu->num_cntrs); 877 496 unsigned long flags; 878 497 u32 val; 879 498 ··· 904 523 raw_spin_unlock_irqrestore(&hw_events->pmu_lock, flags); 905 524 } 906 525 526 + /* 527 + * Check if the idx represents a non-programmable counter. 528 + * All the fixed event counters are mapped before the programmable 529 + * counters. 
530 + */ 531 + static bool pmu_fixed_hw_idx(struct cci_pmu *cci_pmu, int idx) 532 + { 533 + return (idx >= 0) && (idx < cci_pmu->model->fixed_hw_cntrs); 534 + } 535 + 907 536 static void cci_pmu_start(struct perf_event *event, int pmu_flags) 908 537 { 909 538 struct cci_pmu *cci_pmu = to_cci_pmu(event->pmu); ··· 938 547 939 548 raw_spin_lock_irqsave(&hw_events->pmu_lock, flags); 940 549 941 - /* Configure the event to count, unless you are counting cycles */ 942 - if (idx != CCI_PMU_CYCLE_CNTR_IDX) 943 - pmu_set_event(idx, hwc->config_base); 550 + /* Configure the counter unless you are counting a fixed event */ 551 + if (!pmu_fixed_hw_idx(cci_pmu, idx)) 552 + pmu_set_event(cci_pmu, idx, hwc->config_base); 944 553 945 554 pmu_event_set_period(event); 946 - pmu_enable_counter(idx); 555 + pmu_enable_counter(cci_pmu, idx); 947 556 948 557 raw_spin_unlock_irqrestore(&hw_events->pmu_lock, flags); 949 558 } ··· 966 575 * We always reprogram the counter, so ignore PERF_EF_UPDATE. See 967 576 * cci_pmu_start() 968 577 */ 969 - pmu_disable_counter(idx); 578 + pmu_disable_counter(cci_pmu, idx); 970 579 pmu_event_update(event); 971 580 hwc->state |= PERF_HES_STOPPED | PERF_HES_UPTODATE; 972 581 } ··· 1046 655 validate_group(struct perf_event *event) 1047 656 { 1048 657 struct perf_event *sibling, *leader = event->group_leader; 658 + struct cci_pmu *cci_pmu = to_cci_pmu(event->pmu); 659 + unsigned long mask[BITS_TO_LONGS(cci_pmu->num_cntrs)]; 1049 660 struct cci_pmu_hw_events fake_pmu = { 1050 661 /* 1051 662 * Initialise the fake PMU. We only need to populate the 1052 663 * used_mask for the purposes of validation. 
1053 664 */ 1054 - .used_mask = { 0 }, 665 + .used_mask = mask, 1055 666 }; 667 + memset(mask, 0, BITS_TO_LONGS(cci_pmu->num_cntrs) * sizeof(unsigned long)); 1056 668 1057 669 if (!validate_event(event->pmu, &fake_pmu, leader)) 1058 670 return -EINVAL; ··· 1173 779 return err; 1174 780 } 1175 781 1176 - static ssize_t pmu_attr_cpumask_show(struct device *dev, 782 + static ssize_t pmu_cpumask_attr_show(struct device *dev, 1177 783 struct device_attribute *attr, char *buf) 1178 784 { 785 + struct dev_ext_attribute *eattr = container_of(attr, 786 + struct dev_ext_attribute, attr); 787 + struct cci_pmu *cci_pmu = eattr->var; 788 + 1179 789 int n = scnprintf(buf, PAGE_SIZE - 1, "%*pbl", 1180 - cpumask_pr_args(&pmu->cpus)); 790 + cpumask_pr_args(&cci_pmu->cpus)); 1181 791 buf[n++] = '\n'; 1182 792 buf[n] = '\0'; 1183 793 return n; 1184 794 } 1185 795 1186 - static DEVICE_ATTR(cpumask, S_IRUGO, pmu_attr_cpumask_show, NULL); 796 + static struct dev_ext_attribute pmu_cpumask_attr = { 797 + __ATTR(cpumask, S_IRUGO, pmu_cpumask_attr_show, NULL), 798 + NULL, /* Populated in cci_pmu_init */ 799 + }; 1187 800 1188 801 static struct attribute *pmu_attrs[] = { 1189 - &dev_attr_cpumask.attr, 802 + &pmu_cpumask_attr.attr.attr, 1190 803 NULL, 1191 804 }; 1192 805 ··· 1201 800 .attrs = pmu_attrs, 1202 801 }; 1203 802 803 + static struct attribute_group pmu_format_attr_group = { 804 + .name = "format", 805 + .attrs = NULL, /* Filled in cci_pmu_init_attrs */ 806 + }; 807 + 808 + static struct attribute_group pmu_event_attr_group = { 809 + .name = "events", 810 + .attrs = NULL, /* Filled in cci_pmu_init_attrs */ 811 + }; 812 + 1204 813 static const struct attribute_group *pmu_attr_groups[] = { 1205 814 &pmu_attr_group, 815 + &pmu_format_attr_group, 816 + &pmu_event_attr_group, 1206 817 NULL 1207 818 }; 819 + 820 + static struct attribute **alloc_attrs(struct platform_device *pdev, 821 + int n, struct dev_ext_attribute *source) 822 + { 823 + int i; 824 + struct attribute **attrs; 825 + 
826 + /* Alloc n + 1 (for terminating NULL) */
827 + attrs = devm_kcalloc(&pdev->dev, n + 1, sizeof(struct attribute *),
828 + GFP_KERNEL);
829 + if (!attrs)
830 + return attrs;
831 + for(i = 0; i < n; i++)
832 + attrs[i] = &source[i].attr.attr;
833 + return attrs;
834 + }
835 +
836 + static int cci_pmu_init_attrs(struct cci_pmu *cci_pmu, struct platform_device *pdev)
837 + {
838 + const struct cci_pmu_model *model = cci_pmu->model;
839 + struct attribute **attrs;
840 +
841 + /*
842 + * All allocations below are managed, hence they don't need to be
843 + * freed explicitly in case of an error.
844 + */
845 +
846 + if (model->nevent_attrs) {
847 + attrs = alloc_attrs(pdev, model->nevent_attrs,
848 + model->event_attrs);
849 + if (!attrs)
850 + return -ENOMEM;
851 + pmu_event_attr_group.attrs = attrs;
852 + }
853 + if (model->nformat_attrs) {
854 + attrs = alloc_attrs(pdev, model->nformat_attrs,
855 + model->format_attrs);
856 + if (!attrs)
857 + return -ENOMEM;
858 + pmu_format_attr_group.attrs = attrs;
859 + }
860 + pmu_cpumask_attr.var = cci_pmu;
861 +
862 + return 0;
863 + }
1208 864
1209 865 static int cci_pmu_init(struct cci_pmu *cci_pmu, struct platform_device *pdev)
1210 866 {
1211 867 char *name = cci_pmu->model->name;
868 + u32 num_cntrs;
869 + int rc;
870 +
871 + rc = cci_pmu_init_attrs(cci_pmu, pdev);
872 + if (rc)
873 + return rc;
874 +
1212 875 cci_pmu->pmu = (struct pmu) {
1213 876 .name = cci_pmu->model->name,
1214 877 .task_ctx_nr = perf_invalid_context,
··· 1288 823 };
1289 824
1290 825 cci_pmu->plat_device = pdev;
1291 - cci_pmu->num_events = pmu_get_max_counters();
826 + num_cntrs = pmu_get_max_counters();
827 + if (num_cntrs > cci_pmu->model->num_hw_cntrs) {
828 + dev_warn(&pdev->dev,
829 + "PMU implements more counters(%d) than supported by"
830 + " the model(%d), truncated.",
831 + num_cntrs, cci_pmu->model->num_hw_cntrs);
832 + num_cntrs = cci_pmu->model->num_hw_cntrs;
833 + }
834 + cci_pmu->num_cntrs = num_cntrs +
cci_pmu->model->fixed_hw_cntrs; 1292 835 1293 836 return perf_pmu_register(&cci_pmu->pmu, name, -1); 1294 837 } ··· 1304 831 static int cci_pmu_cpu_notifier(struct notifier_block *self, 1305 832 unsigned long action, void *hcpu) 1306 833 { 834 + struct cci_pmu *cci_pmu = container_of(self, 835 + struct cci_pmu, cpu_nb); 1307 836 unsigned int cpu = (long)hcpu; 1308 837 unsigned int target; 1309 838 1310 839 switch (action & ~CPU_TASKS_FROZEN) { 1311 840 case CPU_DOWN_PREPARE: 1312 - if (!cpumask_test_and_clear_cpu(cpu, &pmu->cpus)) 841 + if (!cpumask_test_and_clear_cpu(cpu, &cci_pmu->cpus)) 1313 842 break; 1314 843 target = cpumask_any_but(cpu_online_mask, cpu); 1315 844 if (target < 0) // UP, last CPU ··· 1320 845 * TODO: migrate context once core races on event->ctx have 1321 846 * been fixed. 1322 847 */ 1323 - cpumask_set_cpu(target, &pmu->cpus); 848 + cpumask_set_cpu(target, &cci_pmu->cpus); 1324 849 default: 1325 850 break; 1326 851 } ··· 1328 853 return NOTIFY_OK; 1329 854 } 1330 855 1331 - static struct notifier_block cci_pmu_cpu_nb = { 1332 - .notifier_call = cci_pmu_cpu_notifier, 1333 - /* 1334 - * to migrate uncore events, our notifier should be executed 1335 - * before perf core's notifier. 
1336 - */ 1337 - .priority = CPU_PRI_PERF + 1, 1338 - }; 1339 - 1340 856 static struct cci_pmu_model cci_pmu_models[] = { 1341 - [CCI_REV_R0] = { 857 + #ifdef CONFIG_ARM_CCI400_PMU 858 + [CCI400_R0] = { 1342 859 .name = "CCI_400", 860 + .fixed_hw_cntrs = 1, /* Cycle counter */ 861 + .num_hw_cntrs = 4, 862 + .cntr_size = SZ_4K, 863 + .format_attrs = cci400_pmu_format_attrs, 864 + .nformat_attrs = ARRAY_SIZE(cci400_pmu_format_attrs), 865 + .event_attrs = cci400_r0_pmu_event_attrs, 866 + .nevent_attrs = ARRAY_SIZE(cci400_r0_pmu_event_attrs), 1343 867 .event_ranges = { 1344 868 [CCI_IF_SLAVE] = { 1345 - CCI_REV_R0_SLAVE_PORT_MIN_EV, 1346 - CCI_REV_R0_SLAVE_PORT_MAX_EV, 869 + CCI400_R0_SLAVE_PORT_MIN_EV, 870 + CCI400_R0_SLAVE_PORT_MAX_EV, 1347 871 }, 1348 872 [CCI_IF_MASTER] = { 1349 - CCI_REV_R0_MASTER_PORT_MIN_EV, 1350 - CCI_REV_R0_MASTER_PORT_MAX_EV, 873 + CCI400_R0_MASTER_PORT_MIN_EV, 874 + CCI400_R0_MASTER_PORT_MAX_EV, 1351 875 }, 1352 876 }, 877 + .validate_hw_event = cci400_validate_hw_event, 878 + .get_event_idx = cci400_get_event_idx, 1353 879 }, 1354 - [CCI_REV_R1] = { 880 + [CCI400_R1] = { 1355 881 .name = "CCI_400_r1", 882 + .fixed_hw_cntrs = 1, /* Cycle counter */ 883 + .num_hw_cntrs = 4, 884 + .cntr_size = SZ_4K, 885 + .format_attrs = cci400_pmu_format_attrs, 886 + .nformat_attrs = ARRAY_SIZE(cci400_pmu_format_attrs), 887 + .event_attrs = cci400_r1_pmu_event_attrs, 888 + .nevent_attrs = ARRAY_SIZE(cci400_r1_pmu_event_attrs), 1356 889 .event_ranges = { 1357 890 [CCI_IF_SLAVE] = { 1358 - CCI_REV_R1_SLAVE_PORT_MIN_EV, 1359 - CCI_REV_R1_SLAVE_PORT_MAX_EV, 891 + CCI400_R1_SLAVE_PORT_MIN_EV, 892 + CCI400_R1_SLAVE_PORT_MAX_EV, 1360 893 }, 1361 894 [CCI_IF_MASTER] = { 1362 - CCI_REV_R1_MASTER_PORT_MIN_EV, 1363 - CCI_REV_R1_MASTER_PORT_MAX_EV, 895 + CCI400_R1_MASTER_PORT_MIN_EV, 896 + CCI400_R1_MASTER_PORT_MAX_EV, 1364 897 }, 1365 898 }, 899 + .validate_hw_event = cci400_validate_hw_event, 900 + .get_event_idx = cci400_get_event_idx, 1366 901 }, 902 + #endif 903 + 
#ifdef CONFIG_ARM_CCI500_PMU 904 + [CCI500_R0] = { 905 + .name = "CCI_500", 906 + .fixed_hw_cntrs = 0, 907 + .num_hw_cntrs = 8, 908 + .cntr_size = SZ_64K, 909 + .format_attrs = cci500_pmu_format_attrs, 910 + .nformat_attrs = ARRAY_SIZE(cci500_pmu_format_attrs), 911 + .event_attrs = cci500_pmu_event_attrs, 912 + .nevent_attrs = ARRAY_SIZE(cci500_pmu_event_attrs), 913 + .event_ranges = { 914 + [CCI_IF_SLAVE] = { 915 + CCI500_SLAVE_PORT_MIN_EV, 916 + CCI500_SLAVE_PORT_MAX_EV, 917 + }, 918 + [CCI_IF_MASTER] = { 919 + CCI500_MASTER_PORT_MIN_EV, 920 + CCI500_MASTER_PORT_MAX_EV, 921 + }, 922 + [CCI_IF_GLOBAL] = { 923 + CCI500_GLOBAL_PORT_MIN_EV, 924 + CCI500_GLOBAL_PORT_MAX_EV, 925 + }, 926 + }, 927 + .validate_hw_event = cci500_validate_hw_event, 928 + }, 929 + #endif 1367 930 }; 1368 931 1369 932 static const struct of_device_id arm_cci_pmu_matches[] = { 933 + #ifdef CONFIG_ARM_CCI400_PMU 1370 934 { 1371 935 .compatible = "arm,cci-400-pmu", 1372 936 .data = NULL, 1373 937 }, 1374 938 { 1375 939 .compatible = "arm,cci-400-pmu,r0", 1376 - .data = &cci_pmu_models[CCI_REV_R0], 940 + .data = &cci_pmu_models[CCI400_R0], 1377 941 }, 1378 942 { 1379 943 .compatible = "arm,cci-400-pmu,r1", 1380 - .data = &cci_pmu_models[CCI_REV_R1], 944 + .data = &cci_pmu_models[CCI400_R1], 1381 945 }, 946 + #endif 947 + #ifdef CONFIG_ARM_CCI500_PMU 948 + { 949 + .compatible = "arm,cci-500-pmu,r0", 950 + .data = &cci_pmu_models[CCI500_R0], 951 + }, 952 + #endif 1382 953 {}, 1383 954 }; 1384 955 ··· 1453 932 return false; 1454 933 } 1455 934 1456 - static int cci_pmu_probe(struct platform_device *pdev) 935 + static struct cci_pmu *cci_pmu_alloc(struct platform_device *pdev) 1457 936 { 1458 - struct resource *res; 1459 - int i, ret, irq; 937 + struct cci_pmu *cci_pmu; 1460 938 const struct cci_pmu_model *model; 1461 939 940 + /* 941 + * All allocations are devm_* hence we don't have to free 942 + * them explicitly on an error, as it would end up in driver 943 + * detach. 
944 + */ 1462 945 model = get_cci_model(pdev); 1463 946 if (!model) { 1464 947 dev_warn(&pdev->dev, "CCI PMU version not supported\n"); 1465 - return -ENODEV; 948 + return ERR_PTR(-ENODEV); 1466 949 } 1467 950 1468 - pmu = devm_kzalloc(&pdev->dev, sizeof(*pmu), GFP_KERNEL); 1469 - if (!pmu) 1470 - return -ENOMEM; 951 + cci_pmu = devm_kzalloc(&pdev->dev, sizeof(*cci_pmu), GFP_KERNEL); 952 + if (!cci_pmu) 953 + return ERR_PTR(-ENOMEM); 1471 954 1472 - pmu->model = model; 955 + cci_pmu->model = model; 956 + cci_pmu->irqs = devm_kcalloc(&pdev->dev, CCI_PMU_MAX_HW_CNTRS(model), 957 + sizeof(*cci_pmu->irqs), GFP_KERNEL); 958 + if (!cci_pmu->irqs) 959 + return ERR_PTR(-ENOMEM); 960 + cci_pmu->hw_events.events = devm_kcalloc(&pdev->dev, 961 + CCI_PMU_MAX_HW_CNTRS(model), 962 + sizeof(*cci_pmu->hw_events.events), 963 + GFP_KERNEL); 964 + if (!cci_pmu->hw_events.events) 965 + return ERR_PTR(-ENOMEM); 966 + cci_pmu->hw_events.used_mask = devm_kcalloc(&pdev->dev, 967 + BITS_TO_LONGS(CCI_PMU_MAX_HW_CNTRS(model)), 968 + sizeof(*cci_pmu->hw_events.used_mask), 969 + GFP_KERNEL); 970 + if (!cci_pmu->hw_events.used_mask) 971 + return ERR_PTR(-ENOMEM); 972 + 973 + return cci_pmu; 974 + } 975 + 976 + 977 + static int cci_pmu_probe(struct platform_device *pdev) 978 + { 979 + struct resource *res; 980 + struct cci_pmu *cci_pmu; 981 + int i, ret, irq; 982 + 983 + cci_pmu = cci_pmu_alloc(pdev); 984 + if (IS_ERR(cci_pmu)) 985 + return PTR_ERR(cci_pmu); 986 + 1473 987 res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 1474 - pmu->base = devm_ioremap_resource(&pdev->dev, res); 1475 - if (IS_ERR(pmu->base)) 988 + cci_pmu->base = devm_ioremap_resource(&pdev->dev, res); 989 + if (IS_ERR(cci_pmu->base)) 1476 990 return -ENOMEM; 1477 991 1478 992 /* 1479 - * CCI PMU has 5 overflow signals - one per counter; but some may be tied 993 + * CCI PMU has one overflow interrupt per counter; but some may be tied 1480 994 * together to a common interrupt. 
1481 995 */ 1482 - pmu->nr_irqs = 0; 1483 - for (i = 0; i < CCI_PMU_MAX_HW_EVENTS; i++) { 996 + cci_pmu->nr_irqs = 0; 997 + for (i = 0; i < CCI_PMU_MAX_HW_CNTRS(cci_pmu->model); i++) { 1484 998 irq = platform_get_irq(pdev, i); 1485 999 if (irq < 0) 1486 1000 break; 1487 1001 1488 - if (is_duplicate_irq(irq, pmu->irqs, pmu->nr_irqs)) 1002 + if (is_duplicate_irq(irq, cci_pmu->irqs, cci_pmu->nr_irqs)) 1489 1003 continue; 1490 1004 1491 - pmu->irqs[pmu->nr_irqs++] = irq; 1005 + cci_pmu->irqs[cci_pmu->nr_irqs++] = irq; 1492 1006 } 1493 1007 1494 1008 /* 1495 1009 * Ensure that the device tree has as many interrupts as the number 1496 1010 * of counters. 1497 1011 */ 1498 - if (i < CCI_PMU_MAX_HW_EVENTS) { 1012 + if (i < CCI_PMU_MAX_HW_CNTRS(cci_pmu->model)) { 1499 1013 dev_warn(&pdev->dev, "In-correct number of interrupts: %d, should be %d\n", 1500 - i, CCI_PMU_MAX_HW_EVENTS); 1014 + i, CCI_PMU_MAX_HW_CNTRS(cci_pmu->model)); 1501 1015 return -EINVAL; 1502 1016 } 1503 1017 1504 - raw_spin_lock_init(&pmu->hw_events.pmu_lock); 1505 - mutex_init(&pmu->reserve_mutex); 1506 - atomic_set(&pmu->active_events, 0); 1507 - cpumask_set_cpu(smp_processor_id(), &pmu->cpus); 1018 + raw_spin_lock_init(&cci_pmu->hw_events.pmu_lock); 1019 + mutex_init(&cci_pmu->reserve_mutex); 1020 + atomic_set(&cci_pmu->active_events, 0); 1021 + cpumask_set_cpu(smp_processor_id(), &cci_pmu->cpus); 1508 1022 1509 - ret = register_cpu_notifier(&cci_pmu_cpu_nb); 1023 + cci_pmu->cpu_nb = (struct notifier_block) { 1024 + .notifier_call = cci_pmu_cpu_notifier, 1025 + /* 1026 + * to migrate uncore events, our notifier should be executed 1027 + * before perf core's notifier. 
1028 + */ 1029 + .priority = CPU_PRI_PERF + 1, 1030 + }; 1031 + 1032 + ret = register_cpu_notifier(&cci_pmu->cpu_nb); 1510 1033 if (ret) 1511 1034 return ret; 1512 1035 1513 - ret = cci_pmu_init(pmu, pdev); 1514 - if (ret) 1036 + ret = cci_pmu_init(cci_pmu, pdev); 1037 + if (ret) { 1038 + unregister_cpu_notifier(&cci_pmu->cpu_nb); 1515 1039 return ret; 1040 + } 1516 1041 1517 - pr_info("ARM %s PMU driver probed", pmu->model->name); 1042 + pr_info("ARM %s PMU driver probed", cci_pmu->model->name); 1518 1043 return 0; 1519 1044 } 1520 1045 ··· 1599 1032 return platform_driver_register(&cci_platform_driver); 1600 1033 } 1601 1034 1602 - #else /* !CONFIG_ARM_CCI400_PMU */ 1035 + #else /* !CONFIG_ARM_CCI_PMU */ 1603 1036 1604 1037 static int __init cci_platform_init(void) 1605 1038 { 1606 1039 return 0; 1607 1040 } 1608 1041 1609 - #endif /* CONFIG_ARM_CCI400_PMU */ 1042 + #endif /* CONFIG_ARM_CCI_PMU */ 1610 1043 1611 1044 #ifdef CONFIG_ARM_CCI400_PORT_CTRL 1612 1045
+214 -56
drivers/bus/arm-ccn.c
··· 166 166 167 167 struct hrtimer hrtimer; 168 168 169 + cpumask_t cpu; 170 + struct notifier_block cpu_nb; 171 + 169 172 struct pmu pmu; 170 173 }; 171 174 172 175 struct arm_ccn { 173 176 struct device *dev; 174 177 void __iomem *base; 175 - unsigned irq_used:1; 178 + unsigned int irq; 179 + 176 180 unsigned sbas_present:1; 177 181 unsigned sbsx_present:1; 178 182 ··· 216 212 217 213 static void arm_ccn_pmu_config_set(u64 *config, u32 node_xp, u32 type, u32 port) 218 214 { 219 - *config &= ~((0xff << 0) | (0xff << 8) | (0xff << 24)); 215 + *config &= ~((0xff << 0) | (0xff << 8) | (0x3 << 24)); 220 216 *config |= (node_xp << 0) | (type << 8) | (port << 24); 221 217 } 222 218 ··· 340 336 if (event->mask) 341 337 res += snprintf(buf + res, PAGE_SIZE - res, ",mask=0x%x", 342 338 event->mask); 339 + 340 + /* Arguments required by an event */ 341 + switch (event->type) { 342 + case CCN_TYPE_CYCLES: 343 + break; 344 + case CCN_TYPE_XP: 345 + res += snprintf(buf + res, PAGE_SIZE - res, 346 + ",xp=?,port=?,vc=?,dir=?"); 347 + if (event->event == CCN_EVENT_WATCHPOINT) 348 + res += snprintf(buf + res, PAGE_SIZE - res, 349 + ",cmp_l=?,cmp_h=?,mask=?"); 350 + break; 351 + default: 352 + res += snprintf(buf + res, PAGE_SIZE - res, ",node=?"); 353 + break; 354 + } 355 + 343 356 res += snprintf(buf + res, PAGE_SIZE - res, "\n"); 344 357 345 358 return res; ··· 542 521 .attrs = arm_ccn_pmu_cmp_mask_attrs, 543 522 }; 544 523 524 + static ssize_t arm_ccn_pmu_cpumask_show(struct device *dev, 525 + struct device_attribute *attr, char *buf) 526 + { 527 + struct arm_ccn *ccn = pmu_to_arm_ccn(dev_get_drvdata(dev)); 528 + 529 + return cpumap_print_to_pagebuf(true, buf, &ccn->dt.cpu); 530 + } 531 + 532 + static struct device_attribute arm_ccn_pmu_cpumask_attr = 533 + __ATTR(cpumask, S_IRUGO, arm_ccn_pmu_cpumask_show, NULL); 534 + 535 + static struct attribute *arm_ccn_pmu_cpumask_attrs[] = { 536 + &arm_ccn_pmu_cpumask_attr.attr, 537 + NULL, 538 + }; 539 + 540 + static struct 
attribute_group arm_ccn_pmu_cpumask_attr_group = { 541 + .attrs = arm_ccn_pmu_cpumask_attrs, 542 + }; 545 543 546 544 /* 547 545 * Default poll period is 10ms, which is way over the top anyway, ··· 582 542 &arm_ccn_pmu_events_attr_group, 583 543 &arm_ccn_pmu_format_attr_group, 584 544 &arm_ccn_pmu_cmp_mask_attr_group, 545 + &arm_ccn_pmu_cpumask_attr_group, 585 546 NULL 586 547 }; 587 548 ··· 628 587 return 0; 629 588 } 630 589 631 - static void arm_ccn_pmu_event_destroy(struct perf_event *event) 590 + static int arm_ccn_pmu_event_alloc(struct perf_event *event) 591 + { 592 + struct arm_ccn *ccn = pmu_to_arm_ccn(event->pmu); 593 + struct hw_perf_event *hw = &event->hw; 594 + u32 node_xp, type, event_id; 595 + struct arm_ccn_component *source; 596 + int bit; 597 + 598 + node_xp = CCN_CONFIG_NODE(event->attr.config); 599 + type = CCN_CONFIG_TYPE(event->attr.config); 600 + event_id = CCN_CONFIG_EVENT(event->attr.config); 601 + 602 + /* Allocate the cycle counter */ 603 + if (type == CCN_TYPE_CYCLES) { 604 + if (test_and_set_bit(CCN_IDX_PMU_CYCLE_COUNTER, 605 + ccn->dt.pmu_counters_mask)) 606 + return -EAGAIN; 607 + 608 + hw->idx = CCN_IDX_PMU_CYCLE_COUNTER; 609 + ccn->dt.pmu_counters[CCN_IDX_PMU_CYCLE_COUNTER].event = event; 610 + 611 + return 0; 612 + } 613 + 614 + /* Allocate an event counter */ 615 + hw->idx = arm_ccn_pmu_alloc_bit(ccn->dt.pmu_counters_mask, 616 + CCN_NUM_PMU_EVENT_COUNTERS); 617 + if (hw->idx < 0) { 618 + dev_dbg(ccn->dev, "No more counters available!\n"); 619 + return -EAGAIN; 620 + } 621 + 622 + if (type == CCN_TYPE_XP) 623 + source = &ccn->xp[node_xp]; 624 + else 625 + source = &ccn->node[node_xp]; 626 + ccn->dt.pmu_counters[hw->idx].source = source; 627 + 628 + /* Allocate an event source or a watchpoint */ 629 + if (type == CCN_TYPE_XP && event_id == CCN_EVENT_WATCHPOINT) 630 + bit = arm_ccn_pmu_alloc_bit(source->xp.dt_cmp_mask, 631 + CCN_NUM_XP_WATCHPOINTS); 632 + else 633 + bit = arm_ccn_pmu_alloc_bit(source->pmu_events_mask, 634 + 
CCN_NUM_PMU_EVENTS); 635 + if (bit < 0) { 636 + dev_dbg(ccn->dev, "No more event sources/watchpoints on node/XP %d!\n", 637 + node_xp); 638 + clear_bit(hw->idx, ccn->dt.pmu_counters_mask); 639 + return -EAGAIN; 640 + } 641 + hw->config_base = bit; 642 + 643 + ccn->dt.pmu_counters[hw->idx].event = event; 644 + 645 + return 0; 646 + } 647 + 648 + static void arm_ccn_pmu_event_release(struct perf_event *event) 632 649 { 633 650 struct arm_ccn *ccn = pmu_to_arm_ccn(event->pmu); 634 651 struct hw_perf_event *hw = &event->hw; ··· 715 616 struct arm_ccn *ccn; 716 617 struct hw_perf_event *hw = &event->hw; 717 618 u32 node_xp, type, event_id; 718 - int valid, bit; 719 - struct arm_ccn_component *source; 619 + int valid; 720 620 int i; 621 + struct perf_event *sibling; 721 622 722 623 if (event->attr.type != event->pmu->type) 723 624 return -ENOENT; 724 625 725 626 ccn = pmu_to_arm_ccn(event->pmu); 726 - event->destroy = arm_ccn_pmu_event_destroy; 727 627 728 628 if (hw->sample_period) { 729 629 dev_warn(ccn->dev, "Sampling not supported!\n"); ··· 740 642 dev_warn(ccn->dev, "Can't provide per-task data!\n"); 741 643 return -EOPNOTSUPP; 742 644 } 645 + /* 646 + * Many perf core operations (eg. events rotation) operate on a 647 + * single CPU context. This is obvious for CPU PMUs, where one 648 + * expects the same sets of events being observed on all CPUs, 649 + * but can lead to issues for off-core PMUs, like CCN, where each 650 + * event could be theoretically assigned to a different CPU. To 651 + * mitigate this, we enforce CPU assignment to one, selected 652 + * processor (the one described in the "cpumask" attribute). 
653 + */ 654 + event->cpu = cpumask_first(&ccn->dt.cpu); 743 655 744 656 node_xp = CCN_CONFIG_NODE(event->attr.config); 745 657 type = CCN_CONFIG_TYPE(event->attr.config); ··· 819 711 node_xp, type, port); 820 712 } 821 713 822 - /* Allocate the cycle counter */ 823 - if (type == CCN_TYPE_CYCLES) { 824 - if (test_and_set_bit(CCN_IDX_PMU_CYCLE_COUNTER, 825 - ccn->dt.pmu_counters_mask)) 826 - return -EAGAIN; 714 + /* 715 + * We must NOT create groups containing mixed PMUs, although software 716 + * events are acceptable (for example to create a CCN group 717 + * periodically read when a hrtimer aka cpu-clock leader triggers). 718 + */ 719 + if (event->group_leader->pmu != event->pmu && 720 + !is_software_event(event->group_leader)) 721 + return -EINVAL; 827 722 828 - hw->idx = CCN_IDX_PMU_CYCLE_COUNTER; 829 - ccn->dt.pmu_counters[CCN_IDX_PMU_CYCLE_COUNTER].event = event; 830 - 831 - return 0; 832 - } 833 - 834 - /* Allocate an event counter */ 835 - hw->idx = arm_ccn_pmu_alloc_bit(ccn->dt.pmu_counters_mask, 836 - CCN_NUM_PMU_EVENT_COUNTERS); 837 - if (hw->idx < 0) { 838 - dev_warn(ccn->dev, "No more counters available!\n"); 839 - return -EAGAIN; 840 - } 841 - 842 - if (type == CCN_TYPE_XP) 843 - source = &ccn->xp[node_xp]; 844 - else 845 - source = &ccn->node[node_xp]; 846 - ccn->dt.pmu_counters[hw->idx].source = source; 847 - 848 - /* Allocate an event source or a watchpoint */ 849 - if (type == CCN_TYPE_XP && event_id == CCN_EVENT_WATCHPOINT) 850 - bit = arm_ccn_pmu_alloc_bit(source->xp.dt_cmp_mask, 851 - CCN_NUM_XP_WATCHPOINTS); 852 - else 853 - bit = arm_ccn_pmu_alloc_bit(source->pmu_events_mask, 854 - CCN_NUM_PMU_EVENTS); 855 - if (bit < 0) { 856 - dev_warn(ccn->dev, "No more event sources/watchpoints on node/XP %d!\n", 857 - node_xp); 858 - clear_bit(hw->idx, ccn->dt.pmu_counters_mask); 859 - return -EAGAIN; 860 - } 861 - hw->config_base = bit; 862 - 863 - ccn->dt.pmu_counters[hw->idx].event = event; 723 + list_for_each_entry(sibling, 
&event->group_leader->sibling_list, 724 + group_entry) 725 + if (sibling->pmu != event->pmu && 726 + !is_software_event(sibling)) 727 + return -EINVAL; 864 728 865 729 return 0; 866 730 } ··· 915 835 arm_ccn_pmu_read_counter(ccn, hw->idx)); 916 836 hw->state = 0; 917 837 918 - if (!ccn->irq_used) 838 + /* 839 + * Pin the timer, so that the overflows are handled by the chosen 840 + * event->cpu (this is the same one as presented in "cpumask" 841 + * attribute). 842 + */ 843 + if (!ccn->irq) 919 844 hrtimer_start(&ccn->dt.hrtimer, arm_ccn_pmu_timer_period(), 920 - HRTIMER_MODE_REL); 845 + HRTIMER_MODE_REL_PINNED); 921 846 922 847 /* Set the DT bus input, engaging the counter */ 923 848 arm_ccn_pmu_xp_dt_config(event, 1); ··· 937 852 /* Disable counting, setting the DT bus to pass-through mode */ 938 853 arm_ccn_pmu_xp_dt_config(event, 0); 939 854 940 - if (!ccn->irq_used) 855 + if (!ccn->irq) 941 856 hrtimer_cancel(&ccn->dt.hrtimer); 942 857 943 858 /* Let the DT bus drain */ ··· 1099 1014 1100 1015 static int arm_ccn_pmu_event_add(struct perf_event *event, int flags) 1101 1016 { 1017 + int err; 1102 1018 struct hw_perf_event *hw = &event->hw; 1019 + 1020 + err = arm_ccn_pmu_event_alloc(event); 1021 + if (err) 1022 + return err; 1103 1023 1104 1024 arm_ccn_pmu_event_config(event); 1105 1025 ··· 1119 1029 static void arm_ccn_pmu_event_del(struct perf_event *event, int flags) 1120 1030 { 1121 1031 arm_ccn_pmu_event_stop(event, PERF_EF_UPDATE); 1032 + 1033 + arm_ccn_pmu_event_release(event); 1122 1034 } 1123 1035 1124 1036 static void arm_ccn_pmu_event_read(struct perf_event *event) ··· 1171 1079 } 1172 1080 1173 1081 1082 + static int arm_ccn_pmu_cpu_notifier(struct notifier_block *nb, 1083 + unsigned long action, void *hcpu) 1084 + { 1085 + struct arm_ccn_dt *dt = container_of(nb, struct arm_ccn_dt, cpu_nb); 1086 + struct arm_ccn *ccn = container_of(dt, struct arm_ccn, dt); 1087 + unsigned int cpu = (long)hcpu; /* for (long) see kernel/cpu.c */ 1088 + unsigned int 
target; 1089 + 1090 + switch (action & ~CPU_TASKS_FROZEN) { 1091 + case CPU_DOWN_PREPARE: 1092 + if (!cpumask_test_and_clear_cpu(cpu, &dt->cpu)) 1093 + break; 1094 + target = cpumask_any_but(cpu_online_mask, cpu); 1095 + if (target < 0) 1096 + break; 1097 + perf_pmu_migrate_context(&dt->pmu, cpu, target); 1098 + cpumask_set_cpu(target, &dt->cpu); 1099 + WARN_ON(irq_set_affinity(ccn->irq, &dt->cpu) != 0); 1100 + default: 1101 + break; 1102 + } 1103 + 1104 + return NOTIFY_OK; 1105 + } 1106 + 1107 + 1174 1108 static DEFINE_IDA(arm_ccn_pmu_ida); 1175 1109 1176 1110 static int arm_ccn_pmu_init(struct arm_ccn *ccn) 1177 1111 { 1178 1112 int i; 1179 1113 char *name; 1114 + int err; 1180 1115 1181 1116 /* Initialize DT subsystem */ 1182 1117 ccn->dt.base = ccn->base + CCN_REGION_SIZE; ··· 1255 1136 }; 1256 1137 1257 1138 /* No overflow interrupt? Have to use a timer instead. */ 1258 - if (!ccn->irq_used) { 1139 + if (!ccn->irq) { 1259 1140 dev_info(ccn->dev, "No access to interrupts, using timer.\n"); 1260 1141 hrtimer_init(&ccn->dt.hrtimer, CLOCK_MONOTONIC, 1261 1142 HRTIMER_MODE_REL); 1262 1143 ccn->dt.hrtimer.function = arm_ccn_pmu_timer_handler; 1263 1144 } 1264 1145 1265 - return perf_pmu_register(&ccn->dt.pmu, name, -1); 1146 + /* Pick one CPU which we will use to collect data from CCN... */ 1147 + cpumask_set_cpu(smp_processor_id(), &ccn->dt.cpu); 1148 + 1149 + /* 1150 + * ... and change the selection when it goes offline. Priority is 1151 + * picked to have a chance to migrate events before perf is notified. 
1152 + */ 1153 + ccn->dt.cpu_nb.notifier_call = arm_ccn_pmu_cpu_notifier; 1154 + ccn->dt.cpu_nb.priority = CPU_PRI_PERF + 1, 1155 + err = register_cpu_notifier(&ccn->dt.cpu_nb); 1156 + if (err) 1157 + goto error_cpu_notifier; 1158 + 1159 + /* Also make sure that the overflow interrupt is handled by this CPU */ 1160 + if (ccn->irq) { 1161 + err = irq_set_affinity(ccn->irq, &ccn->dt.cpu); 1162 + if (err) { 1163 + dev_err(ccn->dev, "Failed to set interrupt affinity!\n"); 1164 + goto error_set_affinity; 1165 + } 1166 + } 1167 + 1168 + err = perf_pmu_register(&ccn->dt.pmu, name, -1); 1169 + if (err) 1170 + goto error_pmu_register; 1171 + 1172 + return 0; 1173 + 1174 + error_pmu_register: 1175 + error_set_affinity: 1176 + unregister_cpu_notifier(&ccn->dt.cpu_nb); 1177 + error_cpu_notifier: 1178 + ida_simple_remove(&arm_ccn_pmu_ida, ccn->dt.id); 1179 + for (i = 0; i < ccn->num_xps; i++) 1180 + writel(0, ccn->xp[i].base + CCN_XP_DT_CONTROL); 1181 + writel(0, ccn->dt.base + CCN_DT_PMCR); 1182 + return err; 1266 1183 } 1267 1184 1268 1185 static void arm_ccn_pmu_cleanup(struct arm_ccn *ccn) 1269 1186 { 1270 1187 int i; 1271 1188 1189 + irq_set_affinity(ccn->irq, cpu_possible_mask); 1190 + unregister_cpu_notifier(&ccn->dt.cpu_nb); 1272 1191 for (i = 0; i < ccn->num_xps; i++) 1273 1192 writel(0, ccn->xp[i].base + CCN_XP_DT_CONTROL); 1274 1193 writel(0, ccn->dt.base + CCN_DT_PMCR); ··· 1442 1285 { 1443 1286 struct arm_ccn *ccn; 1444 1287 struct resource *res; 1288 + unsigned int irq; 1445 1289 int err; 1446 1290 1447 1291 ccn = devm_kzalloc(&pdev->dev, sizeof(*ccn), GFP_KERNEL); ··· 1467 1309 res = platform_get_resource(pdev, IORESOURCE_IRQ, 0); 1468 1310 if (!res) 1469 1311 return -EINVAL; 1312 + irq = res->start; 1470 1313 1471 1314 /* Check if we can use the interrupt */ 1472 1315 writel(CCN_MN_ERRINT_STATUS__PMU_EVENTS__DISABLE, ··· 1477 1318 /* Can set 'disable' bits, so can acknowledge interrupts */ 1478 1319 writel(CCN_MN_ERRINT_STATUS__PMU_EVENTS__ENABLE, 1479 1320 
ccn->base + CCN_MN_ERRINT_STATUS); 1480 - err = devm_request_irq(ccn->dev, res->start, 1481 - arm_ccn_irq_handler, 0, dev_name(ccn->dev), 1482 - ccn); 1321 + err = devm_request_irq(ccn->dev, irq, arm_ccn_irq_handler, 0, 1322 + dev_name(ccn->dev), ccn); 1483 1323 if (err) 1484 1324 return err; 1485 1325 1486 - ccn->irq_used = 1; 1326 + ccn->irq = irq; 1487 1327 } 1488 1328 1489 1329
+3 -4
drivers/clk/berlin/bg2.c
··· 502 502 503 503 static void __init berlin2_clock_setup(struct device_node *np) 504 504 { 505 + struct device_node *parent_np = of_get_parent(np); 505 506 const char *parent_names[9]; 506 507 struct clk *clk; 507 508 u8 avpll_flags = 0; 508 509 int n; 509 510 510 - gbase = of_iomap(np, 0); 511 + gbase = of_iomap(parent_np, 0); 511 512 if (!gbase) 512 513 return; 513 514 ··· 686 685 bg2_fail: 687 686 iounmap(gbase); 688 687 } 689 - CLK_OF_DECLARE(berlin2_clock, "marvell,berlin2-chip-ctrl", 690 - berlin2_clock_setup); 691 - CLK_OF_DECLARE(berlin2cd_clock, "marvell,berlin2cd-chip-ctrl", 688 + CLK_OF_DECLARE(berlin2_clk, "marvell,berlin2-clk", 692 689 berlin2_clock_setup);
+4 -3
drivers/clk/berlin/bg2q.c
··· 290 290 291 291 static void __init berlin2q_clock_setup(struct device_node *np) 292 292 { 293 + struct device_node *parent_np = of_get_parent(np); 293 294 const char *parent_names[9]; 294 295 struct clk *clk; 295 296 int n; 296 297 297 - gbase = of_iomap(np, 0); 298 + gbase = of_iomap(parent_np, 0); 298 299 if (!gbase) { 299 300 pr_err("%s: Unable to map global base\n", np->full_name); 300 301 return; 301 302 } 302 303 303 304 /* BG2Q CPU PLL is not part of global registers */ 304 - cpupll_base = of_iomap(np, 1); 305 + cpupll_base = of_iomap(parent_np, 1); 305 306 if (!cpupll_base) { 306 307 pr_err("%s: Unable to map cpupll base\n", np->full_name); 307 308 iounmap(gbase); ··· 385 384 iounmap(cpupll_base); 386 385 iounmap(gbase); 387 386 } 388 - CLK_OF_DECLARE(berlin2q_clock, "marvell,berlin2q-chip-ctrl", 387 + CLK_OF_DECLARE(berlin2q_clk, "marvell,berlin2q-clk", 389 388 berlin2q_clock_setup);
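Both berlin conversions replace of_iomap(np, …) with of_iomap(parent_np, …): after the simple-mfd split the clock is a child node without its own reg property, so the setup routine must map the enclosing chip controller's registers instead. A sketch of the resulting BG2Q layout (unit addresses, sizes and the refclk phandle are illustrative, not taken from a real dtsi):

```dts
chip: chip-control@ea0000 {
	compatible = "marvell,berlin2q-chip-ctrl", "syscon", "simple-mfd";
	/* reg 0: global registers; reg 1: CPU PLL (BG2Q only).
	 * berlin2q_clock_setup() reaches these via of_get_parent(). */
	reg = <0xea0000 0x400>, <0xdd0170 0x10>;

	chip_clk: clock {
		compatible = "marvell,berlin2q-clk";
		/* no reg of its own */
		#clock-cells = <1>;
		clocks = <&refclk>;
		clock-names = "refclk";
	};
};
```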
+2 -1
drivers/firmware/Makefile
··· 12 12 obj-$(CONFIG_ISCSI_IBFT) += iscsi_ibft.o 13 13 obj-$(CONFIG_FIRMWARE_MEMMAP) += memmap.o 14 14 obj-$(CONFIG_QCOM_SCM) += qcom_scm.o 15 - CFLAGS_qcom_scm.o :=$(call as-instr,.arch_extension sec,-DREQUIRES_SEC=1) 15 + obj-$(CONFIG_QCOM_SCM) += qcom_scm-32.o 16 + CFLAGS_qcom_scm-32.o :=$(call as-instr,.arch_extension sec,-DREQUIRES_SEC=1) 16 17 17 18 obj-$(CONFIG_GOOGLE_FIRMWARE) += google/ 18 19 obj-$(CONFIG_EFI) += efi/
+503
drivers/firmware/qcom_scm-32.c
··· 1 + /* Copyright (c) 2010,2015, The Linux Foundation. All rights reserved. 2 + * Copyright (C) 2015 Linaro Ltd. 3 + * 4 + * This program is free software; you can redistribute it and/or modify 5 + * it under the terms of the GNU General Public License version 2 and 6 + * only version 2 as published by the Free Software Foundation. 7 + * 8 + * This program is distributed in the hope that it will be useful, 9 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 10 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 11 + * GNU General Public License for more details. 12 + * 13 + * You should have received a copy of the GNU General Public License 14 + * along with this program; if not, write to the Free Software 15 + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 16 + * 02110-1301, USA. 17 + */ 18 + 19 + #include <linux/slab.h> 20 + #include <linux/io.h> 21 + #include <linux/module.h> 22 + #include <linux/mutex.h> 23 + #include <linux/errno.h> 24 + #include <linux/err.h> 25 + #include <linux/qcom_scm.h> 26 + 27 + #include <asm/outercache.h> 28 + #include <asm/cacheflush.h> 29 + 30 + #include "qcom_scm.h" 31 + 32 + #define QCOM_SCM_FLAG_COLDBOOT_CPU0 0x00 33 + #define QCOM_SCM_FLAG_COLDBOOT_CPU1 0x01 34 + #define QCOM_SCM_FLAG_COLDBOOT_CPU2 0x08 35 + #define QCOM_SCM_FLAG_COLDBOOT_CPU3 0x20 36 + 37 + #define QCOM_SCM_FLAG_WARMBOOT_CPU0 0x04 38 + #define QCOM_SCM_FLAG_WARMBOOT_CPU1 0x02 39 + #define QCOM_SCM_FLAG_WARMBOOT_CPU2 0x10 40 + #define QCOM_SCM_FLAG_WARMBOOT_CPU3 0x40 41 + 42 + struct qcom_scm_entry { 43 + int flag; 44 + void *entry; 45 + }; 46 + 47 + static struct qcom_scm_entry qcom_scm_wb[] = { 48 + { .flag = QCOM_SCM_FLAG_WARMBOOT_CPU0 }, 49 + { .flag = QCOM_SCM_FLAG_WARMBOOT_CPU1 }, 50 + { .flag = QCOM_SCM_FLAG_WARMBOOT_CPU2 }, 51 + { .flag = QCOM_SCM_FLAG_WARMBOOT_CPU3 }, 52 + }; 53 + 54 + static DEFINE_MUTEX(qcom_scm_lock); 55 + 56 + /** 57 + * struct qcom_scm_command - one SCM command buffer 58 + * @len: 
total available memory for command and response 59 + * @buf_offset: start of command buffer 60 + * @resp_hdr_offset: start of response buffer 61 + * @id: command to be executed 62 + * @buf: buffer returned from qcom_scm_get_command_buffer() 63 + * 64 + * An SCM command is laid out in memory as follows: 65 + * 66 + * ------------------- <--- struct qcom_scm_command 67 + * | command header | 68 + * ------------------- <--- qcom_scm_get_command_buffer() 69 + * | command buffer | 70 + * ------------------- <--- struct qcom_scm_response and 71 + * | response header | qcom_scm_command_to_response() 72 + * ------------------- <--- qcom_scm_get_response_buffer() 73 + * | response buffer | 74 + * ------------------- 75 + * 76 + * There can be arbitrary padding between the headers and buffers so 77 + * you should always use the appropriate qcom_scm_get_*_buffer() routines 78 + * to access the buffers in a safe manner. 79 + */ 80 + struct qcom_scm_command { 81 + __le32 len; 82 + __le32 buf_offset; 83 + __le32 resp_hdr_offset; 84 + __le32 id; 85 + __le32 buf[0]; 86 + }; 87 + 88 + /** 89 + * struct qcom_scm_response - one SCM response buffer 90 + * @len: total available memory for response 91 + * @buf_offset: start of response data relative to start of qcom_scm_response 92 + * @is_complete: indicates if the command has finished processing 93 + */ 94 + struct qcom_scm_response { 95 + __le32 len; 96 + __le32 buf_offset; 97 + __le32 is_complete; 98 + }; 99 + 100 + /** 101 + * alloc_qcom_scm_command() - Allocate an SCM command 102 + * @cmd_size: size of the command buffer 103 + * @resp_size: size of the response buffer 104 + * 105 + * Allocate an SCM command, including enough room for the command 106 + * and response headers as well as the command and response buffers. 107 + * 108 + * Returns a valid &qcom_scm_command on success or %NULL if the allocation fails. 
109 + */ 110 + static struct qcom_scm_command *alloc_qcom_scm_command(size_t cmd_size, size_t resp_size) 111 + { 112 + struct qcom_scm_command *cmd; 113 + size_t len = sizeof(*cmd) + sizeof(struct qcom_scm_response) + cmd_size + 114 + resp_size; 115 + u32 offset; 116 + 117 + cmd = kzalloc(PAGE_ALIGN(len), GFP_KERNEL); 118 + if (cmd) { 119 + cmd->len = cpu_to_le32(len); 120 + offset = offsetof(struct qcom_scm_command, buf); 121 + cmd->buf_offset = cpu_to_le32(offset); 122 + cmd->resp_hdr_offset = cpu_to_le32(offset + cmd_size); 123 + } 124 + return cmd; 125 + } 126 + 127 + /** 128 + * free_qcom_scm_command() - Free an SCM command 129 + * @cmd: command to free 130 + * 131 + * Free an SCM command. 132 + */ 133 + static inline void free_qcom_scm_command(struct qcom_scm_command *cmd) 134 + { 135 + kfree(cmd); 136 + } 137 + 138 + /** 139 + * qcom_scm_command_to_response() - Get a pointer to a qcom_scm_response 140 + * @cmd: command 141 + * 142 + * Returns a pointer to a response for a command. 143 + */ 144 + static inline struct qcom_scm_response *qcom_scm_command_to_response( 145 + const struct qcom_scm_command *cmd) 146 + { 147 + return (void *)cmd + le32_to_cpu(cmd->resp_hdr_offset); 148 + } 149 + 150 + /** 151 + * qcom_scm_get_command_buffer() - Get a pointer to a command buffer 152 + * @cmd: command 153 + * 154 + * Returns a pointer to the command buffer of a command. 155 + */ 156 + static inline void *qcom_scm_get_command_buffer(const struct qcom_scm_command *cmd) 157 + { 158 + return (void *)cmd->buf; 159 + } 160 + 161 + /** 162 + * qcom_scm_get_response_buffer() - Get a pointer to a response buffer 163 + * @rsp: response 164 + * 165 + * Returns a pointer to a response buffer of a response. 
166 + */ 167 + static inline void *qcom_scm_get_response_buffer(const struct qcom_scm_response *rsp) 168 + { 169 + return (void *)rsp + le32_to_cpu(rsp->buf_offset); 170 + } 171 + 172 + static int qcom_scm_remap_error(int err) 173 + { 174 + pr_err("qcom_scm_call failed with error code %d\n", err); 175 + switch (err) { 176 + case QCOM_SCM_ERROR: 177 + return -EIO; 178 + case QCOM_SCM_EINVAL_ADDR: 179 + case QCOM_SCM_EINVAL_ARG: 180 + return -EINVAL; 181 + case QCOM_SCM_EOPNOTSUPP: 182 + return -EOPNOTSUPP; 183 + case QCOM_SCM_ENOMEM: 184 + return -ENOMEM; 185 + } 186 + return -EINVAL; 187 + } 188 + 189 + static u32 smc(u32 cmd_addr) 190 + { 191 + int context_id; 192 + register u32 r0 asm("r0") = 1; 193 + register u32 r1 asm("r1") = (u32)&context_id; 194 + register u32 r2 asm("r2") = cmd_addr; 195 + do { 196 + asm volatile( 197 + __asmeq("%0", "r0") 198 + __asmeq("%1", "r0") 199 + __asmeq("%2", "r1") 200 + __asmeq("%3", "r2") 201 + #ifdef REQUIRES_SEC 202 + ".arch_extension sec\n" 203 + #endif 204 + "smc #0 @ switch to secure world\n" 205 + : "=r" (r0) 206 + : "r" (r0), "r" (r1), "r" (r2) 207 + : "r3"); 208 + } while (r0 == QCOM_SCM_INTERRUPTED); 209 + 210 + return r0; 211 + } 212 + 213 + static int __qcom_scm_call(const struct qcom_scm_command *cmd) 214 + { 215 + int ret; 216 + u32 cmd_addr = virt_to_phys(cmd); 217 + 218 + /* 219 + * Flush the command buffer so that the secure world sees 220 + * the correct data. 
221 + */ 222 + __cpuc_flush_dcache_area((void *)cmd, cmd->len); 223 + outer_flush_range(cmd_addr, cmd_addr + cmd->len); 224 + 225 + ret = smc(cmd_addr); 226 + if (ret < 0) 227 + ret = qcom_scm_remap_error(ret); 228 + 229 + return ret; 230 + } 231 + 232 + static void qcom_scm_inv_range(unsigned long start, unsigned long end) 233 + { 234 + u32 cacheline_size, ctr; 235 + 236 + asm volatile("mrc p15, 0, %0, c0, c0, 1" : "=r" (ctr)); 237 + cacheline_size = 4 << ((ctr >> 16) & 0xf); 238 + 239 + start = round_down(start, cacheline_size); 240 + end = round_up(end, cacheline_size); 241 + outer_inv_range(start, end); 242 + while (start < end) { 243 + asm ("mcr p15, 0, %0, c7, c6, 1" : : "r" (start) 244 + : "memory"); 245 + start += cacheline_size; 246 + } 247 + dsb(); 248 + isb(); 249 + } 250 + 251 + /** 252 + * qcom_scm_call() - Send an SCM command 253 + * @svc_id: service identifier 254 + * @cmd_id: command identifier 255 + * @cmd_buf: command buffer 256 + * @cmd_len: length of the command buffer 257 + * @resp_buf: response buffer 258 + * @resp_len: length of the response buffer 259 + * 260 + * Sends a command to the SCM and waits for the command to finish processing. 261 + * 262 + * A note on cache maintenance: 263 + * Note that any buffers that are expected to be accessed by the secure world 264 + * must be flushed before invoking qcom_scm_call and invalidated in the cache 265 + * immediately after qcom_scm_call returns. Cache maintenance on the command 266 + * and response buffers is taken care of by qcom_scm_call; however, callers are 267 + * responsible for any other cached buffers passed over to the secure world. 
268 + */ 269 + static int qcom_scm_call(u32 svc_id, u32 cmd_id, const void *cmd_buf, 270 + size_t cmd_len, void *resp_buf, size_t resp_len) 271 + { 272 + int ret; 273 + struct qcom_scm_command *cmd; 274 + struct qcom_scm_response *rsp; 275 + unsigned long start, end; 276 + 277 + cmd = alloc_qcom_scm_command(cmd_len, resp_len); 278 + if (!cmd) 279 + return -ENOMEM; 280 + 281 + cmd->id = cpu_to_le32((svc_id << 10) | cmd_id); 282 + if (cmd_buf) 283 + memcpy(qcom_scm_get_command_buffer(cmd), cmd_buf, cmd_len); 284 + 285 + mutex_lock(&qcom_scm_lock); 286 + ret = __qcom_scm_call(cmd); 287 + mutex_unlock(&qcom_scm_lock); 288 + if (ret) 289 + goto out; 290 + 291 + rsp = qcom_scm_command_to_response(cmd); 292 + start = (unsigned long)rsp; 293 + 294 + do { 295 + qcom_scm_inv_range(start, start + sizeof(*rsp)); 296 + } while (!rsp->is_complete); 297 + 298 + end = (unsigned long)qcom_scm_get_response_buffer(rsp) + resp_len; 299 + qcom_scm_inv_range(start, end); 300 + 301 + if (resp_buf) 302 + memcpy(resp_buf, qcom_scm_get_response_buffer(rsp), resp_len); 303 + out: 304 + free_qcom_scm_command(cmd); 305 + return ret; 306 + } 307 + 308 + #define SCM_CLASS_REGISTER (0x2 << 8) 309 + #define SCM_MASK_IRQS BIT(5) 310 + #define SCM_ATOMIC(svc, cmd, n) (((((svc) << 10)|((cmd) & 0x3ff)) << 12) | \ 311 + SCM_CLASS_REGISTER | \ 312 + SCM_MASK_IRQS | \ 313 + (n & 0xf)) 314 + 315 + /** 316 + * qcom_scm_call_atomic1() - Send an atomic SCM command with one argument 317 + * @svc_id: service identifier 318 + * @cmd_id: command identifier 319 + * @arg1: first argument 320 + * 321 + * This shall only be used with commands that are guaranteed to be 322 + * uninterruptable, atomic and SMP safe. 
323 + */ 324 + static s32 qcom_scm_call_atomic1(u32 svc, u32 cmd, u32 arg1) 325 + { 326 + int context_id; 327 + 328 + register u32 r0 asm("r0") = SCM_ATOMIC(svc, cmd, 1); 329 + register u32 r1 asm("r1") = (u32)&context_id; 330 + register u32 r2 asm("r2") = arg1; 331 + 332 + asm volatile( 333 + __asmeq("%0", "r0") 334 + __asmeq("%1", "r0") 335 + __asmeq("%2", "r1") 336 + __asmeq("%3", "r2") 337 + #ifdef REQUIRES_SEC 338 + ".arch_extension sec\n" 339 + #endif 340 + "smc #0 @ switch to secure world\n" 341 + : "=r" (r0) 342 + : "r" (r0), "r" (r1), "r" (r2) 343 + : "r3"); 344 + return r0; 345 + } 346 + 347 + u32 qcom_scm_get_version(void) 348 + { 349 + int context_id; 350 + static u32 version = -1; 351 + register u32 r0 asm("r0"); 352 + register u32 r1 asm("r1"); 353 + 354 + if (version != -1) 355 + return version; 356 + 357 + mutex_lock(&qcom_scm_lock); 358 + 359 + r0 = 0x1 << 8; 360 + r1 = (u32)&context_id; 361 + do { 362 + asm volatile( 363 + __asmeq("%0", "r0") 364 + __asmeq("%1", "r1") 365 + __asmeq("%2", "r0") 366 + __asmeq("%3", "r1") 367 + #ifdef REQUIRES_SEC 368 + ".arch_extension sec\n" 369 + #endif 370 + "smc #0 @ switch to secure world\n" 371 + : "=r" (r0), "=r" (r1) 372 + : "r" (r0), "r" (r1) 373 + : "r2", "r3"); 374 + } while (r0 == QCOM_SCM_INTERRUPTED); 375 + 376 + version = r1; 377 + mutex_unlock(&qcom_scm_lock); 378 + 379 + return version; 380 + } 381 + EXPORT_SYMBOL(qcom_scm_get_version); 382 + 383 + /* 384 + * Set the cold/warm boot address for one of the CPU cores. 
385 + */ 386 + static int qcom_scm_set_boot_addr(u32 addr, int flags) 387 + { 388 + struct { 389 + __le32 flags; 390 + __le32 addr; 391 + } cmd; 392 + 393 + cmd.addr = cpu_to_le32(addr); 394 + cmd.flags = cpu_to_le32(flags); 395 + return qcom_scm_call(QCOM_SCM_SVC_BOOT, QCOM_SCM_BOOT_ADDR, 396 + &cmd, sizeof(cmd), NULL, 0); 397 + } 398 + 399 + /** 400 + * qcom_scm_set_cold_boot_addr() - Set the cold boot address for cpus 401 + * @entry: Entry point function for the cpus 402 + * @cpus: The cpumask of cpus that will use the entry point 403 + * 404 + * Set the cold boot address of the cpus. Any cpu outside the supported 405 + * range would be removed from the cpu present mask. 406 + */ 407 + int __qcom_scm_set_cold_boot_addr(void *entry, const cpumask_t *cpus) 408 + { 409 + int flags = 0; 410 + int cpu; 411 + int scm_cb_flags[] = { 412 + QCOM_SCM_FLAG_COLDBOOT_CPU0, 413 + QCOM_SCM_FLAG_COLDBOOT_CPU1, 414 + QCOM_SCM_FLAG_COLDBOOT_CPU2, 415 + QCOM_SCM_FLAG_COLDBOOT_CPU3, 416 + }; 417 + 418 + if (!cpus || (cpus && cpumask_empty(cpus))) 419 + return -EINVAL; 420 + 421 + for_each_cpu(cpu, cpus) { 422 + if (cpu < ARRAY_SIZE(scm_cb_flags)) 423 + flags |= scm_cb_flags[cpu]; 424 + else 425 + set_cpu_present(cpu, false); 426 + } 427 + 428 + return qcom_scm_set_boot_addr(virt_to_phys(entry), flags); 429 + } 430 + 431 + /** 432 + * qcom_scm_set_warm_boot_addr() - Set the warm boot address for cpus 433 + * @entry: Entry point function for the cpus 434 + * @cpus: The cpumask of cpus that will use the entry point 435 + * 436 + * Set the Linux entry point for the SCM to transfer control to when coming 437 + * out of a power down. CPU power down may be executed on cpuidle or hotplug. 438 + */ 439 + int __qcom_scm_set_warm_boot_addr(void *entry, const cpumask_t *cpus) 440 + { 441 + int ret; 442 + int flags = 0; 443 + int cpu; 444 + 445 + /* 446 + * Reassign only if we are switching from hotplug entry point 447 + * to cpuidle entry point or vice versa. 
448 + */ 449 + for_each_cpu(cpu, cpus) { 450 + if (entry == qcom_scm_wb[cpu].entry) 451 + continue; 452 + flags |= qcom_scm_wb[cpu].flag; 453 + } 454 + 455 + /* No change in entry function */ 456 + if (!flags) 457 + return 0; 458 + 459 + ret = qcom_scm_set_boot_addr(virt_to_phys(entry), flags); 460 + if (!ret) { 461 + for_each_cpu(cpu, cpus) 462 + qcom_scm_wb[cpu].entry = entry; 463 + } 464 + 465 + return ret; 466 + } 467 + 468 + /** 469 + * qcom_scm_cpu_power_down() - Power down the cpu 470 + * @flags - Flags to flush cache 471 + * 472 + * This is an end point to power down cpu. If there was a pending interrupt, 473 + * the control would return from this function, otherwise, the cpu jumps to the 474 + * warm boot entry point set for this cpu upon reset. 475 + */ 476 + void __qcom_scm_cpu_power_down(u32 flags) 477 + { 478 + qcom_scm_call_atomic1(QCOM_SCM_SVC_BOOT, QCOM_SCM_CMD_TERMINATE_PC, 479 + flags & QCOM_SCM_FLUSH_FLAG_MASK); 480 + } 481 + 482 + int __qcom_scm_is_call_available(u32 svc_id, u32 cmd_id) 483 + { 484 + int ret; 485 + u32 svc_cmd = (svc_id << 10) | cmd_id; 486 + u32 ret_val = 0; 487 + 488 + ret = qcom_scm_call(QCOM_SCM_SVC_INFO, QCOM_IS_CALL_AVAIL_CMD, &svc_cmd, 489 + sizeof(svc_cmd), &ret_val, sizeof(ret_val)); 490 + if (ret) 491 + return ret; 492 + 493 + return ret_val; 494 + } 495 + 496 + int __qcom_scm_hdcp_req(struct qcom_scm_hdcp_req *req, u32 req_cnt, u32 *resp) 497 + { 498 + if (req_cnt > QCOM_SCM_HDCP_MAX_REQ_CNT) 499 + return -ERANGE; 500 + 501 + return qcom_scm_call(QCOM_SCM_SVC_HDCP, QCOM_SCM_CMD_HDCP, 502 + req, req_cnt * sizeof(*req), resp, sizeof(*resp)); 503 + }
+38 -436
drivers/firmware/qcom_scm.c
··· 1 - /* Copyright (c) 2010, Code Aurora Forum. All rights reserved. 1 + /* Copyright (c) 2010,2015, The Linux Foundation. All rights reserved. 2 2 * Copyright (C) 2015 Linaro Ltd. 3 3 * 4 4 * This program is free software; you can redistribute it and/or modify ··· 16 16 * 02110-1301, USA. 17 17 */ 18 18 19 - #include <linux/slab.h> 20 - #include <linux/io.h> 21 - #include <linux/module.h> 22 - #include <linux/mutex.h> 23 - #include <linux/errno.h> 24 - #include <linux/err.h> 19 + #include <linux/cpumask.h> 20 + #include <linux/export.h> 21 + #include <linux/types.h> 25 22 #include <linux/qcom_scm.h> 26 23 27 - #include <asm/outercache.h> 28 - #include <asm/cacheflush.h> 29 - 30 - 31 - #define QCOM_SCM_ENOMEM -5 32 - #define QCOM_SCM_EOPNOTSUPP -4 33 - #define QCOM_SCM_EINVAL_ADDR -3 34 - #define QCOM_SCM_EINVAL_ARG -2 35 - #define QCOM_SCM_ERROR -1 36 - #define QCOM_SCM_INTERRUPTED 1 37 - 38 - #define QCOM_SCM_FLAG_COLDBOOT_CPU0 0x00 39 - #define QCOM_SCM_FLAG_COLDBOOT_CPU1 0x01 40 - #define QCOM_SCM_FLAG_COLDBOOT_CPU2 0x08 41 - #define QCOM_SCM_FLAG_COLDBOOT_CPU3 0x20 42 - 43 - #define QCOM_SCM_FLAG_WARMBOOT_CPU0 0x04 44 - #define QCOM_SCM_FLAG_WARMBOOT_CPU1 0x02 45 - #define QCOM_SCM_FLAG_WARMBOOT_CPU2 0x10 46 - #define QCOM_SCM_FLAG_WARMBOOT_CPU3 0x40 47 - 48 - struct qcom_scm_entry { 49 - int flag; 50 - void *entry; 51 - }; 52 - 53 - static struct qcom_scm_entry qcom_scm_wb[] = { 54 - { .flag = QCOM_SCM_FLAG_WARMBOOT_CPU0 }, 55 - { .flag = QCOM_SCM_FLAG_WARMBOOT_CPU1 }, 56 - { .flag = QCOM_SCM_FLAG_WARMBOOT_CPU2 }, 57 - { .flag = QCOM_SCM_FLAG_WARMBOOT_CPU3 }, 58 - }; 59 - 60 - static DEFINE_MUTEX(qcom_scm_lock); 61 - 62 - /** 63 - * struct qcom_scm_command - one SCM command buffer 64 - * @len: total available memory for command and response 65 - * @buf_offset: start of command buffer 66 - * @resp_hdr_offset: start of response buffer 67 - * @id: command to be executed 68 - * @buf: buffer returned from qcom_scm_get_command_buffer() 69 - * 70 - * An SCM 
command is laid out in memory as follows: 71 - * 72 - * ------------------- <--- struct qcom_scm_command 73 - * | command header | 74 - * ------------------- <--- qcom_scm_get_command_buffer() 75 - * | command buffer | 76 - * ------------------- <--- struct qcom_scm_response and 77 - * | response header | qcom_scm_command_to_response() 78 - * ------------------- <--- qcom_scm_get_response_buffer() 79 - * | response buffer | 80 - * ------------------- 81 - * 82 - * There can be arbitrary padding between the headers and buffers so 83 - * you should always use the appropriate qcom_scm_get_*_buffer() routines 84 - * to access the buffers in a safe manner. 85 - */ 86 - struct qcom_scm_command { 87 - __le32 len; 88 - __le32 buf_offset; 89 - __le32 resp_hdr_offset; 90 - __le32 id; 91 - __le32 buf[0]; 92 - }; 93 - 94 - /** 95 - * struct qcom_scm_response - one SCM response buffer 96 - * @len: total available memory for response 97 - * @buf_offset: start of response data relative to start of qcom_scm_response 98 - * @is_complete: indicates if the command has finished processing 99 - */ 100 - struct qcom_scm_response { 101 - __le32 len; 102 - __le32 buf_offset; 103 - __le32 is_complete; 104 - }; 105 - 106 - /** 107 - * alloc_qcom_scm_command() - Allocate an SCM command 108 - * @cmd_size: size of the command buffer 109 - * @resp_size: size of the response buffer 110 - * 111 - * Allocate an SCM command, including enough room for the command 112 - * and response headers as well as the command and response buffers. 113 - * 114 - * Returns a valid &qcom_scm_command on success or %NULL if the allocation fails. 
115 - */ 116 - static struct qcom_scm_command *alloc_qcom_scm_command(size_t cmd_size, size_t resp_size) 117 - { 118 - struct qcom_scm_command *cmd; 119 - size_t len = sizeof(*cmd) + sizeof(struct qcom_scm_response) + cmd_size + 120 - resp_size; 121 - u32 offset; 122 - 123 - cmd = kzalloc(PAGE_ALIGN(len), GFP_KERNEL); 124 - if (cmd) { 125 - cmd->len = cpu_to_le32(len); 126 - offset = offsetof(struct qcom_scm_command, buf); 127 - cmd->buf_offset = cpu_to_le32(offset); 128 - cmd->resp_hdr_offset = cpu_to_le32(offset + cmd_size); 129 - } 130 - return cmd; 131 - } 132 - 133 - /** 134 - * free_qcom_scm_command() - Free an SCM command 135 - * @cmd: command to free 136 - * 137 - * Free an SCM command. 138 - */ 139 - static inline void free_qcom_scm_command(struct qcom_scm_command *cmd) 140 - { 141 - kfree(cmd); 142 - } 143 - 144 - /** 145 - * qcom_scm_command_to_response() - Get a pointer to a qcom_scm_response 146 - * @cmd: command 147 - * 148 - * Returns a pointer to a response for a command. 149 - */ 150 - static inline struct qcom_scm_response *qcom_scm_command_to_response( 151 - const struct qcom_scm_command *cmd) 152 - { 153 - return (void *)cmd + le32_to_cpu(cmd->resp_hdr_offset); 154 - } 155 - 156 - /** 157 - * qcom_scm_get_command_buffer() - Get a pointer to a command buffer 158 - * @cmd: command 159 - * 160 - * Returns a pointer to the command buffer of a command. 161 - */ 162 - static inline void *qcom_scm_get_command_buffer(const struct qcom_scm_command *cmd) 163 - { 164 - return (void *)cmd->buf; 165 - } 166 - 167 - /** 168 - * qcom_scm_get_response_buffer() - Get a pointer to a response buffer 169 - * @rsp: response 170 - * 171 - * Returns a pointer to a response buffer of a response. 
172 - */ 173 - static inline void *qcom_scm_get_response_buffer(const struct qcom_scm_response *rsp) 174 - { 175 - return (void *)rsp + le32_to_cpu(rsp->buf_offset); 176 - } 177 - 178 - static int qcom_scm_remap_error(int err) 179 - { 180 - pr_err("qcom_scm_call failed with error code %d\n", err); 181 - switch (err) { 182 - case QCOM_SCM_ERROR: 183 - return -EIO; 184 - case QCOM_SCM_EINVAL_ADDR: 185 - case QCOM_SCM_EINVAL_ARG: 186 - return -EINVAL; 187 - case QCOM_SCM_EOPNOTSUPP: 188 - return -EOPNOTSUPP; 189 - case QCOM_SCM_ENOMEM: 190 - return -ENOMEM; 191 - } 192 - return -EINVAL; 193 - } 194 - 195 - static u32 smc(u32 cmd_addr) 196 - { 197 - int context_id; 198 - register u32 r0 asm("r0") = 1; 199 - register u32 r1 asm("r1") = (u32)&context_id; 200 - register u32 r2 asm("r2") = cmd_addr; 201 - do { 202 - asm volatile( 203 - __asmeq("%0", "r0") 204 - __asmeq("%1", "r0") 205 - __asmeq("%2", "r1") 206 - __asmeq("%3", "r2") 207 - #ifdef REQUIRES_SEC 208 - ".arch_extension sec\n" 209 - #endif 210 - "smc #0 @ switch to secure world\n" 211 - : "=r" (r0) 212 - : "r" (r0), "r" (r1), "r" (r2) 213 - : "r3"); 214 - } while (r0 == QCOM_SCM_INTERRUPTED); 215 - 216 - return r0; 217 - } 218 - 219 - static int __qcom_scm_call(const struct qcom_scm_command *cmd) 220 - { 221 - int ret; 222 - u32 cmd_addr = virt_to_phys(cmd); 223 - 224 - /* 225 - * Flush the command buffer so that the secure world sees 226 - * the correct data. 
227 - */ 228 - __cpuc_flush_dcache_area((void *)cmd, cmd->len); 229 - outer_flush_range(cmd_addr, cmd_addr + cmd->len); 230 - 231 - ret = smc(cmd_addr); 232 - if (ret < 0) 233 - ret = qcom_scm_remap_error(ret); 234 - 235 - return ret; 236 - } 237 - 238 - static void qcom_scm_inv_range(unsigned long start, unsigned long end) 239 - { 240 - u32 cacheline_size, ctr; 241 - 242 - asm volatile("mrc p15, 0, %0, c0, c0, 1" : "=r" (ctr)); 243 - cacheline_size = 4 << ((ctr >> 16) & 0xf); 244 - 245 - start = round_down(start, cacheline_size); 246 - end = round_up(end, cacheline_size); 247 - outer_inv_range(start, end); 248 - while (start < end) { 249 - asm ("mcr p15, 0, %0, c7, c6, 1" : : "r" (start) 250 - : "memory"); 251 - start += cacheline_size; 252 - } 253 - dsb(); 254 - isb(); 255 - } 256 - 257 - /** 258 - * qcom_scm_call() - Send an SCM command 259 - * @svc_id: service identifier 260 - * @cmd_id: command identifier 261 - * @cmd_buf: command buffer 262 - * @cmd_len: length of the command buffer 263 - * @resp_buf: response buffer 264 - * @resp_len: length of the response buffer 265 - * 266 - * Sends a command to the SCM and waits for the command to finish processing. 267 - * 268 - * A note on cache maintenance: 269 - * Note that any buffers that are expected to be accessed by the secure world 270 - * must be flushed before invoking qcom_scm_call and invalidated in the cache 271 - * immediately after qcom_scm_call returns. Cache maintenance on the command 272 - * and response buffers is taken care of by qcom_scm_call; however, callers are 273 - * responsible for any other cached buffers passed over to the secure world. 
274 - */ 275 - static int qcom_scm_call(u32 svc_id, u32 cmd_id, const void *cmd_buf, 276 - size_t cmd_len, void *resp_buf, size_t resp_len) 277 - { 278 - int ret; 279 - struct qcom_scm_command *cmd; 280 - struct qcom_scm_response *rsp; 281 - unsigned long start, end; 282 - 283 - cmd = alloc_qcom_scm_command(cmd_len, resp_len); 284 - if (!cmd) 285 - return -ENOMEM; 286 - 287 - cmd->id = cpu_to_le32((svc_id << 10) | cmd_id); 288 - if (cmd_buf) 289 - memcpy(qcom_scm_get_command_buffer(cmd), cmd_buf, cmd_len); 290 - 291 - mutex_lock(&qcom_scm_lock); 292 - ret = __qcom_scm_call(cmd); 293 - mutex_unlock(&qcom_scm_lock); 294 - if (ret) 295 - goto out; 296 - 297 - rsp = qcom_scm_command_to_response(cmd); 298 - start = (unsigned long)rsp; 299 - 300 - do { 301 - qcom_scm_inv_range(start, start + sizeof(*rsp)); 302 - } while (!rsp->is_complete); 303 - 304 - end = (unsigned long)qcom_scm_get_response_buffer(rsp) + resp_len; 305 - qcom_scm_inv_range(start, end); 306 - 307 - if (resp_buf) 308 - memcpy(resp_buf, qcom_scm_get_response_buffer(rsp), resp_len); 309 - out: 310 - free_qcom_scm_command(cmd); 311 - return ret; 312 - } 313 - 314 - #define SCM_CLASS_REGISTER (0x2 << 8) 315 - #define SCM_MASK_IRQS BIT(5) 316 - #define SCM_ATOMIC(svc, cmd, n) (((((svc) << 10)|((cmd) & 0x3ff)) << 12) | \ 317 - SCM_CLASS_REGISTER | \ 318 - SCM_MASK_IRQS | \ 319 - (n & 0xf)) 320 - 321 - /** 322 - * qcom_scm_call_atomic1() - Send an atomic SCM command with one argument 323 - * @svc_id: service identifier 324 - * @cmd_id: command identifier 325 - * @arg1: first argument 326 - * 327 - * This shall only be used with commands that are guaranteed to be 328 - * uninterruptable, atomic and SMP safe. 
329 - */ 330 - static s32 qcom_scm_call_atomic1(u32 svc, u32 cmd, u32 arg1) 331 - { 332 - int context_id; 333 - 334 - register u32 r0 asm("r0") = SCM_ATOMIC(svc, cmd, 1); 335 - register u32 r1 asm("r1") = (u32)&context_id; 336 - register u32 r2 asm("r2") = arg1; 337 - 338 - asm volatile( 339 - __asmeq("%0", "r0") 340 - __asmeq("%1", "r0") 341 - __asmeq("%2", "r1") 342 - __asmeq("%3", "r2") 343 - #ifdef REQUIRES_SEC 344 - ".arch_extension sec\n" 345 - #endif 346 - "smc #0 @ switch to secure world\n" 347 - : "=r" (r0) 348 - : "r" (r0), "r" (r1), "r" (r2) 349 - : "r3"); 350 - return r0; 351 - } 352 - 353 - u32 qcom_scm_get_version(void) 354 - { 355 - int context_id; 356 - static u32 version = -1; 357 - register u32 r0 asm("r0"); 358 - register u32 r1 asm("r1"); 359 - 360 - if (version != -1) 361 - return version; 362 - 363 - mutex_lock(&qcom_scm_lock); 364 - 365 - r0 = 0x1 << 8; 366 - r1 = (u32)&context_id; 367 - do { 368 - asm volatile( 369 - __asmeq("%0", "r0") 370 - __asmeq("%1", "r1") 371 - __asmeq("%2", "r0") 372 - __asmeq("%3", "r1") 373 - #ifdef REQUIRES_SEC 374 - ".arch_extension sec\n" 375 - #endif 376 - "smc #0 @ switch to secure world\n" 377 - : "=r" (r0), "=r" (r1) 378 - : "r" (r0), "r" (r1) 379 - : "r2", "r3"); 380 - } while (r0 == QCOM_SCM_INTERRUPTED); 381 - 382 - version = r1; 383 - mutex_unlock(&qcom_scm_lock); 384 - 385 - return version; 386 - } 387 - EXPORT_SYMBOL(qcom_scm_get_version); 388 - 389 - #define QCOM_SCM_SVC_BOOT 0x1 390 - #define QCOM_SCM_BOOT_ADDR 0x1 391 - /* 392 - * Set the cold/warm boot address for one of the CPU cores. 
393 - */ 394 - static int qcom_scm_set_boot_addr(u32 addr, int flags) 395 - { 396 - struct { 397 - __le32 flags; 398 - __le32 addr; 399 - } cmd; 400 - 401 - cmd.addr = cpu_to_le32(addr); 402 - cmd.flags = cpu_to_le32(flags); 403 - return qcom_scm_call(QCOM_SCM_SVC_BOOT, QCOM_SCM_BOOT_ADDR, 404 - &cmd, sizeof(cmd), NULL, 0); 405 - } 24 + #include "qcom_scm.h" 406 25 407 26 /** 408 27 * qcom_scm_set_cold_boot_addr() - Set the cold boot address for cpus ··· 33 414 */ 34 415 int qcom_scm_set_cold_boot_addr(void *entry, const cpumask_t *cpus) 35 416 { 36 - int flags = 0; 37 - int cpu; 38 - int scm_cb_flags[] = { 39 - QCOM_SCM_FLAG_COLDBOOT_CPU0, 40 - QCOM_SCM_FLAG_COLDBOOT_CPU1, 41 - QCOM_SCM_FLAG_COLDBOOT_CPU2, 42 - QCOM_SCM_FLAG_COLDBOOT_CPU3, 43 - }; 44 - 45 - if (!cpus || (cpus && cpumask_empty(cpus))) 46 - return -EINVAL; 47 - 48 - for_each_cpu(cpu, cpus) { 49 - if (cpu < ARRAY_SIZE(scm_cb_flags)) 50 - flags |= scm_cb_flags[cpu]; 51 - else 52 - set_cpu_present(cpu, false); 53 - } 54 - 55 - return qcom_scm_set_boot_addr(virt_to_phys(entry), flags); 417 + return __qcom_scm_set_cold_boot_addr(entry, cpus); 56 418 } 57 419 EXPORT_SYMBOL(qcom_scm_set_cold_boot_addr); 58 420 ··· 47 447 */ 48 448 int qcom_scm_set_warm_boot_addr(void *entry, const cpumask_t *cpus) 49 449 { 50 - int ret; 51 - int flags = 0; 52 - int cpu; 53 - 54 - /* 55 - * Reassign only if we are switching from hotplug entry point 56 - * to cpuidle entry point or vice versa. 
57 - */ 58 - for_each_cpu(cpu, cpus) { 59 - if (entry == qcom_scm_wb[cpu].entry) 60 - continue; 61 - flags |= qcom_scm_wb[cpu].flag; 62 - } 63 - 64 - /* No change in entry function */ 65 - if (!flags) 66 - return 0; 67 - 68 - ret = qcom_scm_set_boot_addr(virt_to_phys(entry), flags); 69 - if (!ret) { 70 - for_each_cpu(cpu, cpus) 71 - qcom_scm_wb[cpu].entry = entry; 72 - } 73 - 74 - return ret; 450 + return __qcom_scm_set_warm_boot_addr(entry, cpus); 75 451 } 76 452 EXPORT_SYMBOL(qcom_scm_set_warm_boot_addr); 77 - 78 - #define QCOM_SCM_CMD_TERMINATE_PC 0x2 79 - #define QCOM_SCM_FLUSH_FLAG_MASK 0x3 80 453 81 454 /** 82 455 * qcom_scm_cpu_power_down() - Power down the cpu ··· 61 488 */ 62 489 void qcom_scm_cpu_power_down(u32 flags) 63 490 { 64 - qcom_scm_call_atomic1(QCOM_SCM_SVC_BOOT, QCOM_SCM_CMD_TERMINATE_PC, 65 - flags & QCOM_SCM_FLUSH_FLAG_MASK); 491 + __qcom_scm_cpu_power_down(flags); 66 492 } 67 493 EXPORT_SYMBOL(qcom_scm_cpu_power_down); 494 + 495 + /** 496 + * qcom_scm_hdcp_available() - Check if secure environment supports HDCP. 497 + * 498 + * Return true if HDCP is supported, false if not. 499 + */ 500 + bool qcom_scm_hdcp_available(void) 501 + { 502 + int ret; 503 + 504 + ret = __qcom_scm_is_call_available(QCOM_SCM_SVC_HDCP, 505 + QCOM_SCM_CMD_HDCP); 506 + 507 + return (ret > 0) ? true : false; 508 + } 509 + EXPORT_SYMBOL(qcom_scm_hdcp_available); 510 + 511 + /** 512 + * qcom_scm_hdcp_req() - Send HDCP request. 513 + * @req: HDCP request array 514 + * @req_cnt: HDCP request array count 515 + * @resp: response buffer passed to SCM 516 + * 517 + * Write HDCP register(s) through SCM. 518 + */ 519 + int qcom_scm_hdcp_req(struct qcom_scm_hdcp_req *req, u32 req_cnt, u32 *resp) 520 + { 521 + return __qcom_scm_hdcp_req(req, req_cnt, resp); 522 + } 523 + EXPORT_SYMBOL(qcom_scm_hdcp_req);
+47
drivers/firmware/qcom_scm.h
··· 1 + /* Copyright (c) 2010-2015, The Linux Foundation. All rights reserved. 2 + * 3 + * This program is free software; you can redistribute it and/or modify 4 + * it under the terms of the GNU General Public License version 2 and 5 + * only version 2 as published by the Free Software Foundation. 6 + * 7 + * This program is distributed in the hope that it will be useful, 8 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 9 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 10 + * GNU General Public License for more details. 11 + */ 12 + #ifndef __QCOM_SCM_INT_H 13 + #define __QCOM_SCM_INT_H 14 + 15 + #define QCOM_SCM_SVC_BOOT 0x1 16 + #define QCOM_SCM_BOOT_ADDR 0x1 17 + #define QCOM_SCM_BOOT_ADDR_MC 0x11 18 + 19 + #define QCOM_SCM_FLAG_HLOS 0x01 20 + #define QCOM_SCM_FLAG_COLDBOOT_MC 0x02 21 + #define QCOM_SCM_FLAG_WARMBOOT_MC 0x04 22 + extern int __qcom_scm_set_warm_boot_addr(void *entry, const cpumask_t *cpus); 23 + extern int __qcom_scm_set_cold_boot_addr(void *entry, const cpumask_t *cpus); 24 + 25 + #define QCOM_SCM_CMD_TERMINATE_PC 0x2 26 + #define QCOM_SCM_FLUSH_FLAG_MASK 0x3 27 + #define QCOM_SCM_CMD_CORE_HOTPLUGGED 0x10 28 + extern void __qcom_scm_cpu_power_down(u32 flags); 29 + 30 + #define QCOM_SCM_SVC_INFO 0x6 31 + #define QCOM_IS_CALL_AVAIL_CMD 0x1 32 + extern int __qcom_scm_is_call_available(u32 svc_id, u32 cmd_id); 33 + 34 + #define QCOM_SCM_SVC_HDCP 0x11 35 + #define QCOM_SCM_CMD_HDCP 0x01 36 + extern int __qcom_scm_hdcp_req(struct qcom_scm_hdcp_req *req, u32 req_cnt, 37 + u32 *resp); 38 + 39 + /* common error codes */ 40 + #define QCOM_SCM_ENOMEM -5 41 + #define QCOM_SCM_EOPNOTSUPP -4 42 + #define QCOM_SCM_EINVAL_ADDR -3 43 + #define QCOM_SCM_EINVAL_ARG -2 44 + #define QCOM_SCM_ERROR -1 45 + #define QCOM_SCM_INTERRUPTED 1 46 + 47 + #endif
+1 -1
drivers/iommu/Kconfig
··· 219 219 select IOMMU_API 220 220 help 221 221 This driver supports the IOMMU hardware (SMMU) found on NVIDIA Tegra 222 - SoCs (Tegra30 up to Tegra124). 222 + SoCs (Tegra30 up to Tegra132). 223 223 224 224 config EXYNOS_IOMMU 225 225 bool "Exynos IOMMU Support"
+109
drivers/iommu/tegra-smmu.c
··· 7 7 */ 8 8 9 9 #include <linux/bitops.h> 10 + #include <linux/debugfs.h> 10 11 #include <linux/err.h> 11 12 #include <linux/iommu.h> 12 13 #include <linux/kernel.h> ··· 32 31 struct mutex lock; 33 32 34 33 struct list_head list; 34 + 35 + struct dentry *debugfs; 35 36 }; 36 37 37 38 struct tegra_smmu_as { ··· 676 673 } 677 674 } 678 675 676 + static int tegra_smmu_swgroups_show(struct seq_file *s, void *data) 677 + { 678 + struct tegra_smmu *smmu = s->private; 679 + unsigned int i; 680 + u32 value; 681 + 682 + seq_printf(s, "swgroup enabled ASID\n"); 683 + seq_printf(s, "------------------------\n"); 684 + 685 + for (i = 0; i < smmu->soc->num_swgroups; i++) { 686 + const struct tegra_smmu_swgroup *group = &smmu->soc->swgroups[i]; 687 + const char *status; 688 + unsigned int asid; 689 + 690 + value = smmu_readl(smmu, group->reg); 691 + 692 + if (value & SMMU_ASID_ENABLE) 693 + status = "yes"; 694 + else 695 + status = "no"; 696 + 697 + asid = value & SMMU_ASID_MASK; 698 + 699 + seq_printf(s, "%-9s %-7s %#04x\n", group->name, status, 700 + asid); 701 + } 702 + 703 + return 0; 704 + } 705 + 706 + static int tegra_smmu_swgroups_open(struct inode *inode, struct file *file) 707 + { 708 + return single_open(file, tegra_smmu_swgroups_show, inode->i_private); 709 + } 710 + 711 + static const struct file_operations tegra_smmu_swgroups_fops = { 712 + .open = tegra_smmu_swgroups_open, 713 + .read = seq_read, 714 + .llseek = seq_lseek, 715 + .release = single_release, 716 + }; 717 + 718 + static int tegra_smmu_clients_show(struct seq_file *s, void *data) 719 + { 720 + struct tegra_smmu *smmu = s->private; 721 + unsigned int i; 722 + u32 value; 723 + 724 + seq_printf(s, "client enabled\n"); 725 + seq_printf(s, "--------------------\n"); 726 + 727 + for (i = 0; i < smmu->soc->num_clients; i++) { 728 + const struct tegra_mc_client *client = &smmu->soc->clients[i]; 729 + const char *status; 730 + 731 + value = smmu_readl(smmu, client->smmu.reg); 732 + 733 + if (value & 
BIT(client->smmu.bit)) 734 + status = "yes"; 735 + else 736 + status = "no"; 737 + 738 + seq_printf(s, "%-12s %s\n", client->name, status); 739 + } 740 + 741 + return 0; 742 + } 743 + 744 + static int tegra_smmu_clients_open(struct inode *inode, struct file *file) 745 + { 746 + return single_open(file, tegra_smmu_clients_show, inode->i_private); 747 + } 748 + 749 + static const struct file_operations tegra_smmu_clients_fops = { 750 + .open = tegra_smmu_clients_open, 751 + .read = seq_read, 752 + .llseek = seq_lseek, 753 + .release = single_release, 754 + }; 755 + 756 + static void tegra_smmu_debugfs_init(struct tegra_smmu *smmu) 757 + { 758 + smmu->debugfs = debugfs_create_dir("smmu", NULL); 759 + if (!smmu->debugfs) 760 + return; 761 + 762 + debugfs_create_file("swgroups", S_IRUGO, smmu->debugfs, smmu, 763 + &tegra_smmu_swgroups_fops); 764 + debugfs_create_file("clients", S_IRUGO, smmu->debugfs, smmu, 765 + &tegra_smmu_clients_fops); 766 + } 767 + 768 + static void tegra_smmu_debugfs_exit(struct tegra_smmu *smmu) 769 + { 770 + debugfs_remove_recursive(smmu->debugfs); 771 + } 772 + 679 773 struct tegra_smmu *tegra_smmu_probe(struct device *dev, 680 774 const struct tegra_smmu_soc *soc, 681 775 struct tegra_mc *mc) ··· 843 743 if (err < 0) 844 744 return ERR_PTR(err); 845 745 746 + if (IS_ENABLED(CONFIG_DEBUG_FS)) 747 + tegra_smmu_debugfs_init(smmu); 748 + 846 749 return smmu; 750 + } 751 + 752 + void tegra_smmu_remove(struct tegra_smmu *smmu) 753 + { 754 + if (IS_ENABLED(CONFIG_DEBUG_FS)) 755 + tegra_smmu_debugfs_exit(smmu); 847 756 }
+88 -88
drivers/leds/leds-syscon.c
··· 20 20 * MA 02111-1307 USA 21 21 */ 22 22 #include <linux/io.h> 23 + #include <linux/module.h> 23 24 #include <linux/of_device.h> 24 25 #include <linux/of_address.h> 25 26 #include <linux/platform_device.h> ··· 67 66 dev_err(sled->cdev.dev, "error updating LED status\n"); 68 67 } 69 68 70 - static int __init syscon_leds_spawn(struct device_node *np, 71 - struct device *dev, 72 - struct regmap *map) 69 + static int syscon_led_probe(struct platform_device *pdev) 73 70 { 74 - struct device_node *child; 71 + struct device *dev = &pdev->dev; 72 + struct device_node *np = dev->of_node; 73 + struct device *parent; 74 + struct regmap *map; 75 + struct syscon_led *sled; 76 + const char *state; 75 77 int ret; 76 78 77 - for_each_available_child_of_node(np, child) { 78 - struct syscon_led *sled; 79 - const char *state; 80 - 81 - /* Only check for register-bit-leds */ 82 - if (of_property_match_string(child, "compatible", 83 - "register-bit-led") < 0) 84 - continue; 85 - 86 - sled = devm_kzalloc(dev, sizeof(*sled), GFP_KERNEL); 87 - if (!sled) 88 - return -ENOMEM; 89 - 90 - sled->map = map; 91 - 92 - if (of_property_read_u32(child, "offset", &sled->offset)) 93 - return -EINVAL; 94 - if (of_property_read_u32(child, "mask", &sled->mask)) 95 - return -EINVAL; 96 - sled->cdev.name = 97 - of_get_property(child, "label", NULL) ? 
: child->name; 98 - sled->cdev.default_trigger = 99 - of_get_property(child, "linux,default-trigger", NULL); 100 - 101 - state = of_get_property(child, "default-state", NULL); 102 - if (state) { 103 - if (!strcmp(state, "keep")) { 104 - u32 val; 105 - 106 - ret = regmap_read(map, sled->offset, &val); 107 - if (ret < 0) 108 - return ret; 109 - sled->state = !!(val & sled->mask); 110 - } else if (!strcmp(state, "on")) { 111 - sled->state = true; 112 - ret = regmap_update_bits(map, sled->offset, 113 - sled->mask, 114 - sled->mask); 115 - if (ret < 0) 116 - return ret; 117 - } else { 118 - sled->state = false; 119 - ret = regmap_update_bits(map, sled->offset, 120 - sled->mask, 0); 121 - if (ret < 0) 122 - return ret; 123 - } 124 - } 125 - sled->cdev.brightness_set = syscon_led_set; 126 - 127 - ret = led_classdev_register(dev, &sled->cdev); 128 - if (ret < 0) 129 - return ret; 130 - 131 - dev_info(dev, "registered LED %s\n", sled->cdev.name); 79 + parent = dev->parent; 80 + if (!parent) { 81 + dev_err(dev, "no parent for syscon LED\n"); 82 + return -ENODEV; 132 83 } 84 + map = syscon_node_to_regmap(parent->of_node); 85 + if (!map) { 86 + dev_err(dev, "no regmap for syscon LED parent\n"); 87 + return -ENODEV; 88 + } 89 + 90 + sled = devm_kzalloc(dev, sizeof(*sled), GFP_KERNEL); 91 + if (!sled) 92 + return -ENOMEM; 93 + 94 + sled->map = map; 95 + 96 + if (of_property_read_u32(np, "offset", &sled->offset)) 97 + return -EINVAL; 98 + if (of_property_read_u32(np, "mask", &sled->mask)) 99 + return -EINVAL; 100 + sled->cdev.name = 101 + of_get_property(np, "label", NULL) ? 
: np->name; 102 + sled->cdev.default_trigger = 103 + of_get_property(np, "linux,default-trigger", NULL); 104 + 105 + state = of_get_property(np, "default-state", NULL); 106 + if (state) { 107 + if (!strcmp(state, "keep")) { 108 + u32 val; 109 + 110 + ret = regmap_read(map, sled->offset, &val); 111 + if (ret < 0) 112 + return ret; 113 + sled->state = !!(val & sled->mask); 114 + } else if (!strcmp(state, "on")) { 115 + sled->state = true; 116 + ret = regmap_update_bits(map, sled->offset, 117 + sled->mask, 118 + sled->mask); 119 + if (ret < 0) 120 + return ret; 121 + } else { 122 + sled->state = false; 123 + ret = regmap_update_bits(map, sled->offset, 124 + sled->mask, 0); 125 + if (ret < 0) 126 + return ret; 127 + } 128 + } 129 + sled->cdev.brightness_set = syscon_led_set; 130 + 131 + ret = led_classdev_register(dev, &sled->cdev); 132 + if (ret < 0) 133 + return ret; 134 + 135 + platform_set_drvdata(pdev, sled); 136 + dev_info(dev, "registered LED %s\n", sled->cdev.name); 137 + 133 138 return 0; 134 139 } 135 140 136 - static int __init syscon_leds_init(void) 141 + static int syscon_led_remove(struct platform_device *pdev) 137 142 { 138 - struct device_node *np; 143 + struct syscon_led *sled = platform_get_drvdata(pdev); 139 144 140 - for_each_of_allnodes(np) { 141 - struct platform_device *pdev; 142 - struct regmap *map; 143 - int ret; 144 - 145 - if (!of_device_is_compatible(np, "syscon")) 146 - continue; 147 - 148 - map = syscon_node_to_regmap(np); 149 - if (IS_ERR(map)) { 150 - pr_err("error getting regmap for syscon LEDs\n"); 151 - continue; 152 - } 153 - 154 - /* 155 - * If the map is there, the device should be there, we allocate 156 - * memory on the syscon device's behalf here. 
157 - */ 158 - pdev = of_find_device_by_node(np); 159 - if (!pdev) 160 - return -ENODEV; 161 - ret = syscon_leds_spawn(np, &pdev->dev, map); 162 - if (ret) 163 - dev_err(&pdev->dev, "could not spawn syscon LEDs\n"); 164 - } 165 - 145 + led_classdev_unregister(&sled->cdev); 146 + /* Turn it off */ 147 + regmap_update_bits(sled->map, sled->offset, sled->mask, 0); 166 148 return 0; 167 149 } 168 - device_initcall(syscon_leds_init); 150 + 151 + static const struct of_device_id of_syscon_leds_match[] = { 152 + { .compatible = "register-bit-led", }, 153 + {}, 154 + }; 155 + 156 + MODULE_DEVICE_TABLE(of, of_syscon_leds_match); 157 + 158 + static struct platform_driver syscon_led_driver = { 159 + .probe = syscon_led_probe, 160 + .remove = syscon_led_remove, 161 + .driver = { 162 + .name = "leds-syscon", 163 + .of_match_table = of_syscon_leds_match, 164 + }, 165 + }; 166 + module_platform_driver(syscon_led_driver);
+10
drivers/memory/tegra/Kconfig
··· 5 5 help 6 6 This driver supports the Memory Controller (MC) hardware found on 7 7 NVIDIA Tegra SoCs. 8 + 9 + config TEGRA124_EMC 10 + bool "NVIDIA Tegra124 External Memory Controller driver" 11 + default y 12 + depends on TEGRA_MC && ARCH_TEGRA_124_SOC 13 + help 14 + This driver is for the External Memory Controller (EMC) found on 15 + Tegra124 chips. The EMC controls the external DRAM on the board. 16 + This driver is required to change memory timings / clock rate for 17 + external memory.
+3
drivers/memory/tegra/Makefile
··· 3 3 tegra-mc-$(CONFIG_ARCH_TEGRA_3x_SOC) += tegra30.o 4 4 tegra-mc-$(CONFIG_ARCH_TEGRA_114_SOC) += tegra114.o 5 5 tegra-mc-$(CONFIG_ARCH_TEGRA_124_SOC) += tegra124.o 6 + tegra-mc-$(CONFIG_ARCH_TEGRA_132_SOC) += tegra124.o 6 7 7 8 obj-$(CONFIG_TEGRA_MC) += tegra-mc.o 9 + 10 + obj-$(CONFIG_TEGRA124_EMC) += tegra124-emc.o
+141 -2
drivers/memory/tegra/mc.c
··· 13 13 #include <linux/of.h> 14 14 #include <linux/platform_device.h> 15 15 #include <linux/slab.h> 16 + #include <linux/sort.h> 17 + 18 + #include <soc/tegra/fuse.h> 16 19 17 20 #include "mc.h" 18 21 ··· 51 48 #define MC_EMEM_ARB_CFG_CYCLES_PER_UPDATE_MASK 0x1ff 52 49 #define MC_EMEM_ARB_MISC0 0xd8 53 50 51 + #define MC_EMEM_ADR_CFG 0x54 52 + #define MC_EMEM_ADR_CFG_EMEM_NUMDEV BIT(0) 53 + 54 54 static const struct of_device_id tegra_mc_of_match[] = { 55 55 #ifdef CONFIG_ARCH_TEGRA_3x_SOC 56 56 { .compatible = "nvidia,tegra30-mc", .data = &tegra30_mc_soc }, ··· 63 57 #endif 64 58 #ifdef CONFIG_ARCH_TEGRA_124_SOC 65 59 { .compatible = "nvidia,tegra124-mc", .data = &tegra124_mc_soc }, 60 + #endif 61 + #ifdef CONFIG_ARCH_TEGRA_132_SOC 62 + { .compatible = "nvidia,tegra132-mc", .data = &tegra132_mc_soc }, 66 63 #endif 67 64 { } 68 65 }; ··· 96 87 value |= (la->def & la->mask) << la->shift; 97 88 writel(value, mc->regs + la->reg); 98 89 } 90 + 91 + return 0; 92 + } 93 + 94 + void tegra_mc_write_emem_configuration(struct tegra_mc *mc, unsigned long rate) 95 + { 96 + unsigned int i; 97 + struct tegra_mc_timing *timing = NULL; 98 + 99 + for (i = 0; i < mc->num_timings; i++) { 100 + if (mc->timings[i].rate == rate) { 101 + timing = &mc->timings[i]; 102 + break; 103 + } 104 + } 105 + 106 + if (!timing) { 107 + dev_err(mc->dev, "no memory timing registered for rate %lu\n", 108 + rate); 109 + return; 110 + } 111 + 112 + for (i = 0; i < mc->soc->num_emem_regs; ++i) 113 + mc_writel(mc, timing->emem_data[i], mc->soc->emem_regs[i]); 114 + } 115 + 116 + unsigned int tegra_mc_get_emem_device_count(struct tegra_mc *mc) 117 + { 118 + u8 dram_count; 119 + 120 + dram_count = mc_readl(mc, MC_EMEM_ADR_CFG); 121 + dram_count &= MC_EMEM_ADR_CFG_EMEM_NUMDEV; 122 + dram_count++; 123 + 124 + return dram_count; 125 + } 126 + 127 + static int load_one_timing(struct tegra_mc *mc, 128 + struct tegra_mc_timing *timing, 129 + struct device_node *node) 130 + { 131 + int err; 132 + u32 tmp; 133 + 
134 + err = of_property_read_u32(node, "clock-frequency", &tmp); 135 + if (err) { 136 + dev_err(mc->dev, 137 + "timing %s: failed to read rate\n", node->name); 138 + return err; 139 + } 140 + 141 + timing->rate = tmp; 142 + timing->emem_data = devm_kcalloc(mc->dev, mc->soc->num_emem_regs, 143 + sizeof(u32), GFP_KERNEL); 144 + if (!timing->emem_data) 145 + return -ENOMEM; 146 + 147 + err = of_property_read_u32_array(node, "nvidia,emem-configuration", 148 + timing->emem_data, 149 + mc->soc->num_emem_regs); 150 + if (err) { 151 + dev_err(mc->dev, 152 + "timing %s: failed to read EMEM configuration\n", 153 + node->name); 154 + return err; 155 + } 156 + 157 + return 0; 158 + } 159 + 160 + static int load_timings(struct tegra_mc *mc, struct device_node *node) 161 + { 162 + struct device_node *child; 163 + struct tegra_mc_timing *timing; 164 + int child_count = of_get_child_count(node); 165 + int i = 0, err; 166 + 167 + mc->timings = devm_kcalloc(mc->dev, child_count, sizeof(*timing), 168 + GFP_KERNEL); 169 + if (!mc->timings) 170 + return -ENOMEM; 171 + 172 + mc->num_timings = child_count; 173 + 174 + for_each_child_of_node(node, child) { 175 + timing = &mc->timings[i++]; 176 + 177 + err = load_one_timing(mc, timing, child); 178 + if (err) 179 + return err; 180 + } 181 + 182 + return 0; 183 + } 184 + 185 + static int tegra_mc_setup_timings(struct tegra_mc *mc) 186 + { 187 + struct device_node *node; 188 + u32 ram_code, node_ram_code; 189 + int err; 190 + 191 + ram_code = tegra_read_ram_code(); 192 + 193 + mc->num_timings = 0; 194 + 195 + for_each_child_of_node(mc->dev->of_node, node) { 196 + err = of_property_read_u32(node, "nvidia,ram-code", 197 + &node_ram_code); 198 + if (err || (node_ram_code != ram_code)) { 199 + of_node_put(node); 200 + continue; 201 + } 202 + 203 + err = load_timings(mc, node); 204 + if (err) 205 + return err; 206 + of_node_put(node); 207 + break; 208 + } 209 + 210 + if (mc->num_timings == 0) 211 + dev_warn(mc->dev, 212 + "no memory timings for 
RAM code %u registered\n", 213 + ram_code); 99 214 100 215 return 0; 101 216 } ··· 381 248 return err; 382 249 } 383 250 251 + err = tegra_mc_setup_timings(mc); 252 + if (err < 0) { 253 + dev_err(&pdev->dev, "failed to setup timings: %d\n", err); 254 + return err; 255 + } 256 + 384 257 if (IS_ENABLED(CONFIG_TEGRA_IOMMU_SMMU)) { 385 258 mc->smmu = tegra_smmu_probe(&pdev->dev, mc->soc->smmu, mc); 386 259 if (IS_ERR(mc->smmu)) { ··· 412 273 413 274 value = MC_INT_DECERR_MTS | MC_INT_SECERR_SEC | MC_INT_DECERR_VPR | 414 275 MC_INT_INVALID_APB_ASID_UPDATE | MC_INT_INVALID_SMMU_PAGE | 415 - MC_INT_ARBITRATION_EMEM | MC_INT_SECURITY_VIOLATION | 416 - MC_INT_DECERR_EMEM; 276 + MC_INT_SECURITY_VIOLATION | MC_INT_DECERR_EMEM; 277 + 417 278 mc_writel(mc, value, MC_INTMASK); 418 279 419 280 return 0;
+4
drivers/memory/tegra/mc.h
··· 37 37 extern const struct tegra_mc_soc tegra124_mc_soc; 38 38 #endif 39 39 40 + #ifdef CONFIG_ARCH_TEGRA_132_SOC 41 + extern const struct tegra_mc_soc tegra132_mc_soc; 42 + #endif 43 + 40 44 #endif /* MEMORY_TEGRA_MC_H */
+16 -16
drivers/memory/tegra/tegra114.c
··· 896 896 }; 897 897 898 898 static const struct tegra_smmu_swgroup tegra114_swgroups[] = { 899 - { .swgroup = TEGRA_SWGROUP_DC, .reg = 0x240 }, 900 - { .swgroup = TEGRA_SWGROUP_DCB, .reg = 0x244 }, 901 - { .swgroup = TEGRA_SWGROUP_EPP, .reg = 0x248 }, 902 - { .swgroup = TEGRA_SWGROUP_G2, .reg = 0x24c }, 903 - { .swgroup = TEGRA_SWGROUP_AVPC, .reg = 0x23c }, 904 - { .swgroup = TEGRA_SWGROUP_NV, .reg = 0x268 }, 905 - { .swgroup = TEGRA_SWGROUP_HDA, .reg = 0x254 }, 906 - { .swgroup = TEGRA_SWGROUP_HC, .reg = 0x250 }, 907 - { .swgroup = TEGRA_SWGROUP_MSENC, .reg = 0x264 }, 908 - { .swgroup = TEGRA_SWGROUP_PPCS, .reg = 0x270 }, 909 - { .swgroup = TEGRA_SWGROUP_VDE, .reg = 0x27c }, 910 - { .swgroup = TEGRA_SWGROUP_VI, .reg = 0x280 }, 911 - { .swgroup = TEGRA_SWGROUP_ISP, .reg = 0x258 }, 912 - { .swgroup = TEGRA_SWGROUP_XUSB_HOST, .reg = 0x288 }, 913 - { .swgroup = TEGRA_SWGROUP_XUSB_DEV, .reg = 0x28c }, 914 - { .swgroup = TEGRA_SWGROUP_TSEC, .reg = 0x294 }, 899 + { .name = "dc", .swgroup = TEGRA_SWGROUP_DC, .reg = 0x240 }, 900 + { .name = "dcb", .swgroup = TEGRA_SWGROUP_DCB, .reg = 0x244 }, 901 + { .name = "epp", .swgroup = TEGRA_SWGROUP_EPP, .reg = 0x248 }, 902 + { .name = "g2", .swgroup = TEGRA_SWGROUP_G2, .reg = 0x24c }, 903 + { .name = "avpc", .swgroup = TEGRA_SWGROUP_AVPC, .reg = 0x23c }, 904 + { .name = "nv", .swgroup = TEGRA_SWGROUP_NV, .reg = 0x268 }, 905 + { .name = "hda", .swgroup = TEGRA_SWGROUP_HDA, .reg = 0x254 }, 906 + { .name = "hc", .swgroup = TEGRA_SWGROUP_HC, .reg = 0x250 }, 907 + { .name = "msenc", .swgroup = TEGRA_SWGROUP_MSENC, .reg = 0x264 }, 908 + { .name = "ppcs", .swgroup = TEGRA_SWGROUP_PPCS, .reg = 0x270 }, 909 + { .name = "vde", .swgroup = TEGRA_SWGROUP_VDE, .reg = 0x27c }, 910 + { .name = "vi", .swgroup = TEGRA_SWGROUP_VI, .reg = 0x280 }, 911 + { .name = "isp", .swgroup = TEGRA_SWGROUP_ISP, .reg = 0x258 }, 912 + { .name = "xusb_host", .swgroup = TEGRA_SWGROUP_XUSB_HOST, .reg = 0x288 }, 913 + { .name = "xusb_dev", .swgroup = 
TEGRA_SWGROUP_XUSB_DEV, .reg = 0x28c }, 914 + { .name = "tsec", .swgroup = TEGRA_SWGROUP_TSEC, .reg = 0x294 }, 915 915 }; 916 916 917 917 static void tegra114_flush_dcache(struct page *page, unsigned long offset,
+1140
drivers/memory/tegra/tegra124-emc.c
··· 1 + /* 2 + * Copyright (c) 2014, NVIDIA CORPORATION. All rights reserved. 3 + * 4 + * Author: 5 + * Mikko Perttunen <mperttunen@nvidia.com> 6 + * 7 + * This software is licensed under the terms of the GNU General Public 8 + * License version 2, as published by the Free Software Foundation, and 9 + * may be copied, distributed, and modified under those terms. 10 + * 11 + * This program is distributed in the hope that it will be useful, 12 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 13 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 14 + * GNU General Public License for more details. 15 + * 16 + */ 17 + 18 + #include <linux/clk-provider.h> 19 + #include <linux/clk.h> 20 + #include <linux/clkdev.h> 21 + #include <linux/debugfs.h> 22 + #include <linux/delay.h> 23 + #include <linux/of_address.h> 24 + #include <linux/of_platform.h> 25 + #include <linux/platform_device.h> 26 + #include <linux/sort.h> 27 + #include <linux/string.h> 28 + 29 + #include <soc/tegra/emc.h> 30 + #include <soc/tegra/fuse.h> 31 + #include <soc/tegra/mc.h> 32 + 33 + #define EMC_FBIO_CFG5 0x104 34 + #define EMC_FBIO_CFG5_DRAM_TYPE_MASK 0x3 35 + #define EMC_FBIO_CFG5_DRAM_TYPE_SHIFT 0 36 + 37 + #define EMC_INTSTATUS 0x0 38 + #define EMC_INTSTATUS_CLKCHANGE_COMPLETE BIT(4) 39 + 40 + #define EMC_CFG 0xc 41 + #define EMC_CFG_DRAM_CLKSTOP_PD BIT(31) 42 + #define EMC_CFG_DRAM_CLKSTOP_SR BIT(30) 43 + #define EMC_CFG_DRAM_ACPD BIT(29) 44 + #define EMC_CFG_DYN_SREF BIT(28) 45 + #define EMC_CFG_PWR_MASK ((0xF << 28) | BIT(18)) 46 + #define EMC_CFG_DSR_VTTGEN_DRV_EN BIT(18) 47 + 48 + #define EMC_REFCTRL 0x20 49 + #define EMC_REFCTRL_DEV_SEL_SHIFT 0 50 + #define EMC_REFCTRL_ENABLE BIT(31) 51 + 52 + #define EMC_TIMING_CONTROL 0x28 53 + #define EMC_RC 0x2c 54 + #define EMC_RFC 0x30 55 + #define EMC_RAS 0x34 56 + #define EMC_RP 0x38 57 + #define EMC_R2W 0x3c 58 + #define EMC_W2R 0x40 59 + #define EMC_R2P 0x44 60 + #define EMC_W2P 0x48 61 + #define EMC_RD_RCD 0x4c 62 + 
#define EMC_WR_RCD 0x50
#define EMC_RRD 0x54
#define EMC_REXT 0x58
#define EMC_WDV 0x5c
#define EMC_QUSE 0x60
#define EMC_QRST 0x64
#define EMC_QSAFE 0x68
#define EMC_RDV 0x6c
#define EMC_REFRESH 0x70
#define EMC_BURST_REFRESH_NUM 0x74
#define EMC_PDEX2WR 0x78
#define EMC_PDEX2RD 0x7c
#define EMC_PCHG2PDEN 0x80
#define EMC_ACT2PDEN 0x84
#define EMC_AR2PDEN 0x88
#define EMC_RW2PDEN 0x8c
#define EMC_TXSR 0x90
#define EMC_TCKE 0x94
#define EMC_TFAW 0x98
#define EMC_TRPAB 0x9c
#define EMC_TCLKSTABLE 0xa0
#define EMC_TCLKSTOP 0xa4
#define EMC_TREFBW 0xa8
#define EMC_ODT_WRITE 0xb0
#define EMC_ODT_READ 0xb4
#define EMC_WEXT 0xb8
#define EMC_CTT 0xbc
#define EMC_RFC_SLR 0xc0
#define EMC_MRS_WAIT_CNT2 0xc4

#define EMC_MRS_WAIT_CNT 0xc8
#define EMC_MRS_WAIT_CNT_SHORT_WAIT_SHIFT 0
#define EMC_MRS_WAIT_CNT_SHORT_WAIT_MASK \
        (0x3FF << EMC_MRS_WAIT_CNT_SHORT_WAIT_SHIFT)
#define EMC_MRS_WAIT_CNT_LONG_WAIT_SHIFT 16
#define EMC_MRS_WAIT_CNT_LONG_WAIT_MASK \
        (0x3FF << EMC_MRS_WAIT_CNT_LONG_WAIT_SHIFT)

#define EMC_MRS 0xcc
#define EMC_MODE_SET_DLL_RESET BIT(8)
#define EMC_MODE_SET_LONG_CNT BIT(26)
#define EMC_EMRS 0xd0
#define EMC_REF 0xd4
#define EMC_PRE 0xd8

#define EMC_SELF_REF 0xe0
#define EMC_SELF_REF_CMD_ENABLED BIT(0)
#define EMC_SELF_REF_DEV_SEL_SHIFT 30

#define EMC_MRW 0xe8

#define EMC_MRR 0xec
#define EMC_MRR_MA_SHIFT 16
#define LPDDR2_MR4_TEMP_SHIFT 0

#define EMC_XM2DQSPADCTRL3 0xf8
#define EMC_FBIO_SPARE 0x100

#define EMC_FBIO_CFG6 0x114
#define EMC_EMRS2 0x12c
#define EMC_MRW2 0x134
#define EMC_MRW4 0x13c
#define EMC_EINPUT 0x14c
#define EMC_EINPUT_DURATION 0x150
#define EMC_PUTERM_EXTRA 0x154
#define EMC_TCKESR 0x158
#define EMC_TPD 0x15c

#define EMC_AUTO_CAL_CONFIG 0x2a4
#define EMC_AUTO_CAL_CONFIG_AUTO_CAL_START BIT(31)
#define EMC_AUTO_CAL_INTERVAL 0x2a8
#define EMC_AUTO_CAL_STATUS 0x2ac
#define EMC_AUTO_CAL_STATUS_ACTIVE BIT(31)
#define EMC_STATUS 0x2b4
#define EMC_STATUS_TIMING_UPDATE_STALLED BIT(23)

#define EMC_CFG_2 0x2b8
#define EMC_CFG_2_MODE_SHIFT 0
#define EMC_CFG_2_DIS_STP_OB_CLK_DURING_NON_WR BIT(6)

#define EMC_CFG_DIG_DLL 0x2bc
#define EMC_CFG_DIG_DLL_PERIOD 0x2c0
#define EMC_RDV_MASK 0x2cc
#define EMC_WDV_MASK 0x2d0
#define EMC_CTT_DURATION 0x2d8
#define EMC_CTT_TERM_CTRL 0x2dc
#define EMC_ZCAL_INTERVAL 0x2e0
#define EMC_ZCAL_WAIT_CNT 0x2e4

#define EMC_ZQ_CAL 0x2ec
#define EMC_ZQ_CAL_CMD BIT(0)
#define EMC_ZQ_CAL_LONG BIT(4)
#define EMC_ZQ_CAL_LONG_CMD_DEV0 \
        (DRAM_DEV_SEL_0 | EMC_ZQ_CAL_LONG | EMC_ZQ_CAL_CMD)
#define EMC_ZQ_CAL_LONG_CMD_DEV1 \
        (DRAM_DEV_SEL_1 | EMC_ZQ_CAL_LONG | EMC_ZQ_CAL_CMD)

#define EMC_XM2CMDPADCTRL 0x2f0
#define EMC_XM2DQSPADCTRL 0x2f8
#define EMC_XM2DQSPADCTRL2 0x2fc
#define EMC_XM2DQSPADCTRL2_RX_FT_REC_ENABLE BIT(0)
#define EMC_XM2DQSPADCTRL2_VREF_ENABLE BIT(5)
#define EMC_XM2DQPADCTRL 0x300
#define EMC_XM2DQPADCTRL2 0x304
#define EMC_XM2CLKPADCTRL 0x308
#define EMC_XM2COMPPADCTRL 0x30c
#define EMC_XM2VTTGENPADCTRL 0x310
#define EMC_XM2VTTGENPADCTRL2 0x314
#define EMC_XM2VTTGENPADCTRL3 0x318
#define EMC_XM2DQSPADCTRL4 0x320
#define EMC_DLL_XFORM_DQS0 0x328
#define EMC_DLL_XFORM_DQS1 0x32c
#define EMC_DLL_XFORM_DQS2 0x330
#define EMC_DLL_XFORM_DQS3 0x334
#define EMC_DLL_XFORM_DQS4 0x338
#define EMC_DLL_XFORM_DQS5 0x33c
#define EMC_DLL_XFORM_DQS6 0x340
#define EMC_DLL_XFORM_DQS7 0x344
#define EMC_DLL_XFORM_QUSE0 0x348
#define EMC_DLL_XFORM_QUSE1 0x34c
#define EMC_DLL_XFORM_QUSE2 0x350
#define EMC_DLL_XFORM_QUSE3 0x354
#define EMC_DLL_XFORM_QUSE4 0x358
#define EMC_DLL_XFORM_QUSE5 0x35c
#define EMC_DLL_XFORM_QUSE6 0x360
#define EMC_DLL_XFORM_QUSE7 0x364
#define EMC_DLL_XFORM_DQ0 0x368
#define EMC_DLL_XFORM_DQ1 0x36c
#define EMC_DLL_XFORM_DQ2 0x370
#define EMC_DLL_XFORM_DQ3 0x374
#define EMC_DLI_TRIM_TXDQS0 0x3a8
#define EMC_DLI_TRIM_TXDQS1 0x3ac
#define EMC_DLI_TRIM_TXDQS2 0x3b0
#define EMC_DLI_TRIM_TXDQS3 0x3b4
#define EMC_DLI_TRIM_TXDQS4 0x3b8
#define EMC_DLI_TRIM_TXDQS5 0x3bc
#define EMC_DLI_TRIM_TXDQS6 0x3c0
#define EMC_DLI_TRIM_TXDQS7 0x3c4
#define EMC_STALL_THEN_EXE_AFTER_CLKCHANGE 0x3cc
#define EMC_SEL_DPD_CTRL 0x3d8
#define EMC_SEL_DPD_CTRL_DATA_SEL_DPD BIT(8)
#define EMC_SEL_DPD_CTRL_ODT_SEL_DPD BIT(5)
#define EMC_SEL_DPD_CTRL_RESET_SEL_DPD BIT(4)
#define EMC_SEL_DPD_CTRL_CA_SEL_DPD BIT(3)
#define EMC_SEL_DPD_CTRL_CLK_SEL_DPD BIT(2)
#define EMC_SEL_DPD_CTRL_DDR3_MASK \
        ((0xf << 2) | BIT(8))
#define EMC_SEL_DPD_CTRL_MASK \
        ((0x3 << 2) | BIT(5) | BIT(8))
#define EMC_PRE_REFRESH_REQ_CNT 0x3dc
#define EMC_DYN_SELF_REF_CONTROL 0x3e0
#define EMC_TXSRDLL 0x3e4
#define EMC_CCFIFO_ADDR 0x3e8
#define EMC_CCFIFO_DATA 0x3ec
#define EMC_CCFIFO_STATUS 0x3f0
#define EMC_CDB_CNTL_1 0x3f4
#define EMC_CDB_CNTL_2 0x3f8
#define EMC_XM2CLKPADCTRL2 0x3fc
#define EMC_AUTO_CAL_CONFIG2 0x458
#define EMC_AUTO_CAL_CONFIG3 0x45c
#define EMC_IBDLY 0x468
#define EMC_DLL_XFORM_ADDR0 0x46c
#define EMC_DLL_XFORM_ADDR1 0x470
#define EMC_DLL_XFORM_ADDR2 0x474
#define EMC_DSR_VTTGEN_DRV 0x47c
#define EMC_TXDSRVTTGEN 0x480
#define EMC_XM2CMDPADCTRL4 0x484
#define EMC_XM2CMDPADCTRL5 0x488
#define EMC_DLL_XFORM_DQS8 0x4a0
#define EMC_DLL_XFORM_DQS9 0x4a4
#define EMC_DLL_XFORM_DQS10 0x4a8
#define EMC_DLL_XFORM_DQS11 0x4ac
#define EMC_DLL_XFORM_DQS12 0x4b0
#define EMC_DLL_XFORM_DQS13 0x4b4
#define EMC_DLL_XFORM_DQS14 0x4b8
#define EMC_DLL_XFORM_DQS15 0x4bc
#define EMC_DLL_XFORM_QUSE8 0x4c0
#define EMC_DLL_XFORM_QUSE9 0x4c4
#define EMC_DLL_XFORM_QUSE10 0x4c8
#define EMC_DLL_XFORM_QUSE11 0x4cc
#define EMC_DLL_XFORM_QUSE12 0x4d0
#define EMC_DLL_XFORM_QUSE13 0x4d4
#define EMC_DLL_XFORM_QUSE14 0x4d8
#define EMC_DLL_XFORM_QUSE15 0x4dc
#define EMC_DLL_XFORM_DQ4 0x4e0
#define EMC_DLL_XFORM_DQ5 0x4e4
#define EMC_DLL_XFORM_DQ6 0x4e8
#define EMC_DLL_XFORM_DQ7 0x4ec
#define EMC_DLI_TRIM_TXDQS8 0x520
#define EMC_DLI_TRIM_TXDQS9 0x524
#define EMC_DLI_TRIM_TXDQS10 0x528
#define EMC_DLI_TRIM_TXDQS11 0x52c
#define EMC_DLI_TRIM_TXDQS12 0x530
#define EMC_DLI_TRIM_TXDQS13 0x534
#define EMC_DLI_TRIM_TXDQS14 0x538
#define EMC_DLI_TRIM_TXDQS15 0x53c
#define EMC_CDB_CNTL_3 0x540
#define EMC_XM2DQSPADCTRL5 0x544
#define EMC_XM2DQSPADCTRL6 0x548
#define EMC_XM2DQPADCTRL3 0x54c
#define EMC_DLL_XFORM_ADDR3 0x550
#define EMC_DLL_XFORM_ADDR4 0x554
#define EMC_DLL_XFORM_ADDR5 0x558
#define EMC_CFG_PIPE 0x560
#define EMC_QPOP 0x564
#define EMC_QUSE_WIDTH 0x568
#define EMC_PUTERM_WIDTH 0x56c
#define EMC_BGBIAS_CTL0 0x570
#define EMC_BGBIAS_CTL0_BIAS0_DSC_E_PWRD_IBIAS_RX BIT(3)
#define EMC_BGBIAS_CTL0_BIAS0_DSC_E_PWRD_IBIAS_VTTGEN BIT(2)
#define EMC_BGBIAS_CTL0_BIAS0_DSC_E_PWRD BIT(1)
#define EMC_PUTERM_ADJ 0x574

#define DRAM_DEV_SEL_ALL 0
#define DRAM_DEV_SEL_0 (2 << 30)
#define DRAM_DEV_SEL_1 (1 << 30)

#define EMC_CFG_POWER_FEATURES_MASK \
        (EMC_CFG_DYN_SREF | EMC_CFG_DRAM_ACPD | EMC_CFG_DRAM_CLKSTOP_SR | \
        EMC_CFG_DRAM_CLKSTOP_PD | EMC_CFG_DSR_VTTGEN_DRV_EN)
#define EMC_REFCTRL_DEV_SEL(n) (((n > 1) ? 0 : 2) << EMC_REFCTRL_DEV_SEL_SHIFT)
#define EMC_DRAM_DEV_SEL(n) ((n > 1) ? DRAM_DEV_SEL_ALL : DRAM_DEV_SEL_0)

/* Maximum amount of time in us. to wait for changes to become effective */
#define EMC_STATUS_UPDATE_TIMEOUT 1000

enum emc_dram_type {
        DRAM_TYPE_DDR3 = 0,
        DRAM_TYPE_DDR1 = 1,
        DRAM_TYPE_LPDDR3 = 2,
        DRAM_TYPE_DDR2 = 3
};

enum emc_dll_change {
        DLL_CHANGE_NONE,
        DLL_CHANGE_ON,
        DLL_CHANGE_OFF
};

static const unsigned long emc_burst_regs[] = {
        EMC_RC,
        EMC_RFC,
        EMC_RFC_SLR,
        EMC_RAS,
        EMC_RP,
        EMC_R2W,
        EMC_W2R,
        EMC_R2P,
        EMC_W2P,
        EMC_RD_RCD,
        EMC_WR_RCD,
        EMC_RRD,
        EMC_REXT,
        EMC_WEXT,
        EMC_WDV,
        EMC_WDV_MASK,
        EMC_QUSE,
        EMC_QUSE_WIDTH,
        EMC_IBDLY,
        EMC_EINPUT,
        EMC_EINPUT_DURATION,
        EMC_PUTERM_EXTRA,
        EMC_PUTERM_WIDTH,
        EMC_PUTERM_ADJ,
        EMC_CDB_CNTL_1,
        EMC_CDB_CNTL_2,
        EMC_CDB_CNTL_3,
        EMC_QRST,
        EMC_QSAFE,
        EMC_RDV,
        EMC_RDV_MASK,
        EMC_REFRESH,
        EMC_BURST_REFRESH_NUM,
        EMC_PRE_REFRESH_REQ_CNT,
        EMC_PDEX2WR,
        EMC_PDEX2RD,
        EMC_PCHG2PDEN,
        EMC_ACT2PDEN,
        EMC_AR2PDEN,
        EMC_RW2PDEN,
        EMC_TXSR,
        EMC_TXSRDLL,
        EMC_TCKE,
        EMC_TCKESR,
        EMC_TPD,
        EMC_TFAW,
        EMC_TRPAB,
        EMC_TCLKSTABLE,
        EMC_TCLKSTOP,
        EMC_TREFBW,
        EMC_FBIO_CFG6,
        EMC_ODT_WRITE,
        EMC_ODT_READ,
        EMC_FBIO_CFG5,
        EMC_CFG_DIG_DLL,
        EMC_CFG_DIG_DLL_PERIOD,
        EMC_DLL_XFORM_DQS0,
        EMC_DLL_XFORM_DQS1,
        EMC_DLL_XFORM_DQS2,
        EMC_DLL_XFORM_DQS3,
        EMC_DLL_XFORM_DQS4,
        EMC_DLL_XFORM_DQS5,
        EMC_DLL_XFORM_DQS6,
        EMC_DLL_XFORM_DQS7,
        EMC_DLL_XFORM_DQS8,
        EMC_DLL_XFORM_DQS9,
        EMC_DLL_XFORM_DQS10,
        EMC_DLL_XFORM_DQS11,
        EMC_DLL_XFORM_DQS12,
        EMC_DLL_XFORM_DQS13,
        EMC_DLL_XFORM_DQS14,
        EMC_DLL_XFORM_DQS15,
        EMC_DLL_XFORM_QUSE0,
        EMC_DLL_XFORM_QUSE1,
        EMC_DLL_XFORM_QUSE2,
        EMC_DLL_XFORM_QUSE3,
        EMC_DLL_XFORM_QUSE4,
        EMC_DLL_XFORM_QUSE5,
        EMC_DLL_XFORM_QUSE6,
        EMC_DLL_XFORM_QUSE7,
        EMC_DLL_XFORM_ADDR0,
        EMC_DLL_XFORM_ADDR1,
        EMC_DLL_XFORM_ADDR2,
        EMC_DLL_XFORM_ADDR3,
        EMC_DLL_XFORM_ADDR4,
        EMC_DLL_XFORM_ADDR5,
        EMC_DLL_XFORM_QUSE8,
        EMC_DLL_XFORM_QUSE9,
        EMC_DLL_XFORM_QUSE10,
        EMC_DLL_XFORM_QUSE11,
        EMC_DLL_XFORM_QUSE12,
        EMC_DLL_XFORM_QUSE13,
        EMC_DLL_XFORM_QUSE14,
        EMC_DLL_XFORM_QUSE15,
        EMC_DLI_TRIM_TXDQS0,
        EMC_DLI_TRIM_TXDQS1,
        EMC_DLI_TRIM_TXDQS2,
        EMC_DLI_TRIM_TXDQS3,
        EMC_DLI_TRIM_TXDQS4,
        EMC_DLI_TRIM_TXDQS5,
        EMC_DLI_TRIM_TXDQS6,
        EMC_DLI_TRIM_TXDQS7,
        EMC_DLI_TRIM_TXDQS8,
        EMC_DLI_TRIM_TXDQS9,
        EMC_DLI_TRIM_TXDQS10,
        EMC_DLI_TRIM_TXDQS11,
        EMC_DLI_TRIM_TXDQS12,
        EMC_DLI_TRIM_TXDQS13,
        EMC_DLI_TRIM_TXDQS14,
        EMC_DLI_TRIM_TXDQS15,
        EMC_DLL_XFORM_DQ0,
        EMC_DLL_XFORM_DQ1,
        EMC_DLL_XFORM_DQ2,
        EMC_DLL_XFORM_DQ3,
        EMC_DLL_XFORM_DQ4,
        EMC_DLL_XFORM_DQ5,
        EMC_DLL_XFORM_DQ6,
        EMC_DLL_XFORM_DQ7,
        EMC_XM2CMDPADCTRL,
        EMC_XM2CMDPADCTRL4,
        EMC_XM2CMDPADCTRL5,
        EMC_XM2DQPADCTRL2,
        EMC_XM2DQPADCTRL3,
        EMC_XM2CLKPADCTRL,
        EMC_XM2CLKPADCTRL2,
        EMC_XM2COMPPADCTRL,
        EMC_XM2VTTGENPADCTRL,
        EMC_XM2VTTGENPADCTRL2,
        EMC_XM2VTTGENPADCTRL3,
        EMC_XM2DQSPADCTRL3,
        EMC_XM2DQSPADCTRL4,
        EMC_XM2DQSPADCTRL5,
        EMC_XM2DQSPADCTRL6,
        EMC_DSR_VTTGEN_DRV,
        EMC_TXDSRVTTGEN,
        EMC_FBIO_SPARE,
        EMC_ZCAL_WAIT_CNT,
        EMC_MRS_WAIT_CNT2,
        EMC_CTT,
        EMC_CTT_DURATION,
        EMC_CFG_PIPE,
        EMC_DYN_SELF_REF_CONTROL,
        EMC_QPOP
};

struct emc_timing {
        unsigned long rate;

        u32 emc_burst_data[ARRAY_SIZE(emc_burst_regs)];

        u32 emc_auto_cal_config;
        u32 emc_auto_cal_config2;
        u32 emc_auto_cal_config3;
        u32 emc_auto_cal_interval;
        u32 emc_bgbias_ctl0;
        u32 emc_cfg;
        u32 emc_cfg_2;
        u32 emc_ctt_term_ctrl;
        u32 emc_mode_1;
        u32 emc_mode_2;
        u32 emc_mode_4;
        u32 emc_mode_reset;
        u32 emc_mrs_wait_cnt;
        u32 emc_sel_dpd_ctrl;
        u32 emc_xm2dqspadctrl2;
        u32 emc_zcal_cnt_long;
        u32 emc_zcal_interval;
};

struct tegra_emc {
        struct device *dev;

        struct tegra_mc *mc;

        void __iomem *regs;

        enum emc_dram_type dram_type;
        unsigned int dram_num;

        struct emc_timing last_timing;
        struct emc_timing *timings;
        unsigned int num_timings;
};

/* Timing change sequence functions */

static void emc_ccfifo_writel(struct tegra_emc *emc, u32 value,
                              unsigned long offset)
{
        writel(value, emc->regs + EMC_CCFIFO_DATA);
        writel(offset, emc->regs + EMC_CCFIFO_ADDR);
}

static void emc_seq_update_timing(struct tegra_emc *emc)
{
        unsigned int i;
        u32 value;

        writel(1, emc->regs + EMC_TIMING_CONTROL);

        for (i = 0; i < EMC_STATUS_UPDATE_TIMEOUT; ++i) {
                value = readl(emc->regs + EMC_STATUS);
                if ((value & EMC_STATUS_TIMING_UPDATE_STALLED) == 0)
                        return;
                udelay(1);
        }

        dev_err(emc->dev, "timing update timed out\n");
}

static void emc_seq_disable_auto_cal(struct tegra_emc *emc)
{
        unsigned int i;
        u32 value;

        writel(0, emc->regs + EMC_AUTO_CAL_INTERVAL);

        for (i = 0; i < EMC_STATUS_UPDATE_TIMEOUT; ++i) {
                value = readl(emc->regs + EMC_AUTO_CAL_STATUS);
                if ((value & EMC_AUTO_CAL_STATUS_ACTIVE) == 0)
                        return;
                udelay(1);
        }

        dev_err(emc->dev, "auto cal disable timed out\n");
}

static void emc_seq_wait_clkchange(struct tegra_emc *emc)
{
        unsigned int i;
        u32 value;

        for (i = 0; i < EMC_STATUS_UPDATE_TIMEOUT; ++i) {
                value = readl(emc->regs + EMC_INTSTATUS);
                if (value & EMC_INTSTATUS_CLKCHANGE_COMPLETE)
                        return;
                udelay(1);
        }

        dev_err(emc->dev, "clock change timed out\n");
}

static struct emc_timing *tegra_emc_find_timing(struct tegra_emc *emc,
                                                unsigned long rate)
{
        struct emc_timing *timing = NULL;
        unsigned int i;

        for (i = 0; i < emc->num_timings; i++) {
                if (emc->timings[i].rate == rate) {
                        timing = &emc->timings[i];
                        break;
                }
        }

        if (!timing) {
                dev_err(emc->dev, "no timing for rate %lu\n", rate);
                return NULL;
        }

        return timing;
}

int tegra_emc_prepare_timing_change(struct tegra_emc *emc,
                                    unsigned long rate)
{
        struct emc_timing *timing = tegra_emc_find_timing(emc, rate);
        struct emc_timing *last = &emc->last_timing;
        enum emc_dll_change dll_change;
        unsigned int pre_wait = 0;
        u32 val, val2, mask;
        bool update = false;
        unsigned int i;

        if (!timing)
                return -ENOENT;

        if ((last->emc_mode_1 & 0x1) == (timing->emc_mode_1 & 0x1))
                dll_change = DLL_CHANGE_NONE;
        else if (timing->emc_mode_1 & 0x1)
                dll_change = DLL_CHANGE_ON;
        else
                dll_change = DLL_CHANGE_OFF;

        /* Clear CLKCHANGE_COMPLETE interrupts */
        writel(EMC_INTSTATUS_CLKCHANGE_COMPLETE, emc->regs + EMC_INTSTATUS);

        /* Disable dynamic self-refresh */
        val = readl(emc->regs + EMC_CFG);
        if (val & EMC_CFG_PWR_MASK) {
                val &= ~EMC_CFG_POWER_FEATURES_MASK;
                writel(val, emc->regs + EMC_CFG);

                pre_wait = 5;
        }

        /* Disable SEL_DPD_CTRL for clock change */
        if (emc->dram_type == DRAM_TYPE_DDR3)
                mask = EMC_SEL_DPD_CTRL_DDR3_MASK;
        else
                mask = EMC_SEL_DPD_CTRL_MASK;

        val = readl(emc->regs + EMC_SEL_DPD_CTRL);
        if (val & mask) {
                val &= ~mask;
                writel(val, emc->regs + EMC_SEL_DPD_CTRL);
        }

        /* Prepare DQ/DQS for clock change */
        val = readl(emc->regs + EMC_BGBIAS_CTL0);
        val2 = last->emc_bgbias_ctl0;
        if (!(timing->emc_bgbias_ctl0 &
              EMC_BGBIAS_CTL0_BIAS0_DSC_E_PWRD_IBIAS_RX) &&
            (val & EMC_BGBIAS_CTL0_BIAS0_DSC_E_PWRD_IBIAS_RX)) {
                val2 &= ~EMC_BGBIAS_CTL0_BIAS0_DSC_E_PWRD_IBIAS_RX;
                update = true;
        }

        if ((val & EMC_BGBIAS_CTL0_BIAS0_DSC_E_PWRD) ||
            (val & EMC_BGBIAS_CTL0_BIAS0_DSC_E_PWRD_IBIAS_VTTGEN)) {
                update = true;
        }

        if (update) {
                writel(val2, emc->regs + EMC_BGBIAS_CTL0);
                if (pre_wait < 5)
                        pre_wait = 5;
        }

        update = false;
        val = readl(emc->regs + EMC_XM2DQSPADCTRL2);
        if (timing->emc_xm2dqspadctrl2 & EMC_XM2DQSPADCTRL2_VREF_ENABLE &&
            !(val & EMC_XM2DQSPADCTRL2_VREF_ENABLE)) {
                val |= EMC_XM2DQSPADCTRL2_VREF_ENABLE;
                update = true;
        }

        if (timing->emc_xm2dqspadctrl2 & EMC_XM2DQSPADCTRL2_RX_FT_REC_ENABLE &&
            !(val & EMC_XM2DQSPADCTRL2_RX_FT_REC_ENABLE)) {
                val |= EMC_XM2DQSPADCTRL2_RX_FT_REC_ENABLE;
                update = true;
        }

        if (update) {
                writel(val, emc->regs + EMC_XM2DQSPADCTRL2);
                if (pre_wait < 30)
                        pre_wait = 30;
        }

        /* Wait to settle */
        if (pre_wait) {
                emc_seq_update_timing(emc);
                udelay(pre_wait);
        }

        /* Program CTT_TERM control */
        if (last->emc_ctt_term_ctrl != timing->emc_ctt_term_ctrl) {
                emc_seq_disable_auto_cal(emc);
                writel(timing->emc_ctt_term_ctrl,
                       emc->regs + EMC_CTT_TERM_CTRL);
                emc_seq_update_timing(emc);
        }

        /* Program burst shadow registers */
        for (i = 0; i < ARRAY_SIZE(timing->emc_burst_data); ++i)
                writel(timing->emc_burst_data[i],
                       emc->regs + emc_burst_regs[i]);

        writel(timing->emc_xm2dqspadctrl2, emc->regs + EMC_XM2DQSPADCTRL2);
        writel(timing->emc_zcal_interval, emc->regs + EMC_ZCAL_INTERVAL);

        tegra_mc_write_emem_configuration(emc->mc, timing->rate);

        val = timing->emc_cfg & ~EMC_CFG_POWER_FEATURES_MASK;
        emc_ccfifo_writel(emc, val, EMC_CFG);

        /* Program AUTO_CAL_CONFIG */
        if (timing->emc_auto_cal_config2 != last->emc_auto_cal_config2)
                emc_ccfifo_writel(emc, timing->emc_auto_cal_config2,
                                  EMC_AUTO_CAL_CONFIG2);

        if (timing->emc_auto_cal_config3 != last->emc_auto_cal_config3)
                emc_ccfifo_writel(emc, timing->emc_auto_cal_config3,
                                  EMC_AUTO_CAL_CONFIG3);

        if (timing->emc_auto_cal_config != last->emc_auto_cal_config) {
                val = timing->emc_auto_cal_config;
                val &= EMC_AUTO_CAL_CONFIG_AUTO_CAL_START;
                emc_ccfifo_writel(emc, val, EMC_AUTO_CAL_CONFIG);
        }

        /* DDR3: predict MRS long wait count */
        if (emc->dram_type == DRAM_TYPE_DDR3 &&
            dll_change == DLL_CHANGE_ON) {
                u32 cnt = 512;

                if (timing->emc_zcal_interval != 0 &&
                    last->emc_zcal_interval == 0)
                        cnt -= emc->dram_num * 256;

                val = (timing->emc_mrs_wait_cnt
                        & EMC_MRS_WAIT_CNT_SHORT_WAIT_MASK)
                        >> EMC_MRS_WAIT_CNT_SHORT_WAIT_SHIFT;
                if (cnt < val)
                        cnt = val;

                val = timing->emc_mrs_wait_cnt
                        & ~EMC_MRS_WAIT_CNT_LONG_WAIT_MASK;
                val |= (cnt << EMC_MRS_WAIT_CNT_LONG_WAIT_SHIFT)
                        & EMC_MRS_WAIT_CNT_LONG_WAIT_MASK;

                writel(val, emc->regs + EMC_MRS_WAIT_CNT);
        }

        val = timing->emc_cfg_2;
        val &= ~EMC_CFG_2_DIS_STP_OB_CLK_DURING_NON_WR;
        emc_ccfifo_writel(emc, val, EMC_CFG_2);

        /* DDR3: Turn off DLL and enter self-refresh */
        if (emc->dram_type == DRAM_TYPE_DDR3 && dll_change == DLL_CHANGE_OFF)
                emc_ccfifo_writel(emc, timing->emc_mode_1, EMC_EMRS);

        /* Disable refresh controller */
        emc_ccfifo_writel(emc, EMC_REFCTRL_DEV_SEL(emc->dram_num),
                          EMC_REFCTRL);
        if (emc->dram_type == DRAM_TYPE_DDR3)
                emc_ccfifo_writel(emc, EMC_DRAM_DEV_SEL(emc->dram_num) |
                                  EMC_SELF_REF_CMD_ENABLED,
                                  EMC_SELF_REF);

        /* Flow control marker */
        emc_ccfifo_writel(emc, 1, EMC_STALL_THEN_EXE_AFTER_CLKCHANGE);

        /* DDR3: Exit self-refresh */
        if (emc->dram_type == DRAM_TYPE_DDR3)
                emc_ccfifo_writel(emc, EMC_DRAM_DEV_SEL(emc->dram_num),
                                  EMC_SELF_REF);
        emc_ccfifo_writel(emc, EMC_REFCTRL_DEV_SEL(emc->dram_num) |
                          EMC_REFCTRL_ENABLE,
                          EMC_REFCTRL);

        /* Set DRAM mode registers */
        if (emc->dram_type == DRAM_TYPE_DDR3) {
                if (timing->emc_mode_1 != last->emc_mode_1)
                        emc_ccfifo_writel(emc, timing->emc_mode_1, EMC_EMRS);
                if (timing->emc_mode_2 != last->emc_mode_2)
                        emc_ccfifo_writel(emc, timing->emc_mode_2, EMC_EMRS2);

                if ((timing->emc_mode_reset != last->emc_mode_reset) ||
                    dll_change == DLL_CHANGE_ON) {
                        val = timing->emc_mode_reset;
                        if (dll_change == DLL_CHANGE_ON) {
                                val |= EMC_MODE_SET_DLL_RESET;
                                val |= EMC_MODE_SET_LONG_CNT;
                        } else {
                                val &= ~EMC_MODE_SET_DLL_RESET;
                        }
                        emc_ccfifo_writel(emc, val, EMC_MRS);
                }
        } else {
                if (timing->emc_mode_2 != last->emc_mode_2)
                        emc_ccfifo_writel(emc, timing->emc_mode_2, EMC_MRW2);
                if (timing->emc_mode_1 != last->emc_mode_1)
                        emc_ccfifo_writel(emc, timing->emc_mode_1, EMC_MRW);
                if (timing->emc_mode_4 != last->emc_mode_4)
                        emc_ccfifo_writel(emc, timing->emc_mode_4, EMC_MRW4);
        }

        /* Issue ZCAL command if turning ZCAL on */
        if (timing->emc_zcal_interval != 0 &&
            last->emc_zcal_interval == 0) {
                emc_ccfifo_writel(emc, EMC_ZQ_CAL_LONG_CMD_DEV0, EMC_ZQ_CAL);
                if (emc->dram_num > 1)
                        emc_ccfifo_writel(emc, EMC_ZQ_CAL_LONG_CMD_DEV1,
                                          EMC_ZQ_CAL);
        }

        /* Write to RO register to remove stall after change */
        emc_ccfifo_writel(emc, 0, EMC_CCFIFO_STATUS);

        if (timing->emc_cfg_2 & EMC_CFG_2_DIS_STP_OB_CLK_DURING_NON_WR)
                emc_ccfifo_writel(emc, timing->emc_cfg_2, EMC_CFG_2);

        /* Disable AUTO_CAL for clock change */
        emc_seq_disable_auto_cal(emc);

        /* Read register to wait until programming has settled */
        readl(emc->regs + EMC_INTSTATUS);

        return 0;
}

void tegra_emc_complete_timing_change(struct tegra_emc *emc,
                                      unsigned long rate)
{
        struct emc_timing *timing = tegra_emc_find_timing(emc, rate);
        struct emc_timing *last = &emc->last_timing;
        u32 val;

        if (!timing)
                return;

        /* Wait until the state machine has settled */
        emc_seq_wait_clkchange(emc);

        /* Restore AUTO_CAL */
        if (timing->emc_ctt_term_ctrl != last->emc_ctt_term_ctrl)
                writel(timing->emc_auto_cal_interval,
                       emc->regs + EMC_AUTO_CAL_INTERVAL);

        /* Restore dynamic self-refresh */
        if (timing->emc_cfg & EMC_CFG_PWR_MASK)
                writel(timing->emc_cfg, emc->regs + EMC_CFG);

        /* Set ZCAL wait count */
        writel(timing->emc_zcal_cnt_long, emc->regs + EMC_ZCAL_WAIT_CNT);

        /* LPDDR3: Turn off BGBIAS if low frequency */
        if (emc->dram_type == DRAM_TYPE_LPDDR3 &&
            timing->emc_bgbias_ctl0 &
              EMC_BGBIAS_CTL0_BIAS0_DSC_E_PWRD_IBIAS_RX) {
                val = timing->emc_bgbias_ctl0;
                val |= EMC_BGBIAS_CTL0_BIAS0_DSC_E_PWRD_IBIAS_VTTGEN;
                val |= EMC_BGBIAS_CTL0_BIAS0_DSC_E_PWRD;
                writel(val, emc->regs + EMC_BGBIAS_CTL0);
        } else {
                if (emc->dram_type == DRAM_TYPE_DDR3 &&
                    readl(emc->regs + EMC_BGBIAS_CTL0) !=
                      timing->emc_bgbias_ctl0) {
                        writel(timing->emc_bgbias_ctl0,
                               emc->regs + EMC_BGBIAS_CTL0);
                }

                writel(timing->emc_auto_cal_interval,
                       emc->regs + EMC_AUTO_CAL_INTERVAL);
        }

        /* Wait for timing to settle */
        udelay(2);

        /* Reprogram SEL_DPD_CTRL */
        writel(timing->emc_sel_dpd_ctrl, emc->regs + EMC_SEL_DPD_CTRL);
        emc_seq_update_timing(emc);

        emc->last_timing = *timing;
}

/* Initialization and deinitialization */

static void emc_read_current_timing(struct tegra_emc *emc,
                                    struct emc_timing *timing)
{
        unsigned int i;

        for (i = 0; i < ARRAY_SIZE(emc_burst_regs); ++i)
                timing->emc_burst_data[i] =
                        readl(emc->regs + emc_burst_regs[i]);

        timing->emc_cfg = readl(emc->regs + EMC_CFG);

        timing->emc_auto_cal_interval = 0;
        timing->emc_zcal_cnt_long = 0;
        timing->emc_mode_1 = 0;
        timing->emc_mode_2 = 0;
        timing->emc_mode_4 = 0;
        timing->emc_mode_reset = 0;
}

static int emc_init(struct tegra_emc *emc)
{
        emc->dram_type = readl(emc->regs + EMC_FBIO_CFG5);
        emc->dram_type &= EMC_FBIO_CFG5_DRAM_TYPE_MASK;
        emc->dram_type >>= EMC_FBIO_CFG5_DRAM_TYPE_SHIFT;

        emc->dram_num = tegra_mc_get_emem_device_count(emc->mc);

        emc_read_current_timing(emc, &emc->last_timing);

        return 0;
}

static int load_one_timing_from_dt(struct tegra_emc *emc,
                                   struct emc_timing *timing,
                                   struct device_node *node)
{
        u32 value;
        int err;

        err = of_property_read_u32(node, "clock-frequency", &value);
        if (err) {
                dev_err(emc->dev, "timing %s: failed to read rate: %d\n",
                        node->name, err);
                return err;
        }

        timing->rate = value;

        err = of_property_read_u32_array(node, "nvidia,emc-configuration",
                                         timing->emc_burst_data,
                                         ARRAY_SIZE(timing->emc_burst_data));
        if (err) {
                dev_err(emc->dev,
                        "timing %s: failed to read emc burst data: %d\n",
                        node->name, err);
                return err;
        }

#define EMC_READ_PROP(prop, dtprop) { \
        err = of_property_read_u32(node, dtprop, &timing->prop); \
        if (err) { \
                dev_err(emc->dev, "timing %s: failed to read " #prop ": %d\n", \
                        node->name, err); \
                return err; \
        } \
}

        EMC_READ_PROP(emc_auto_cal_config, "nvidia,emc-auto-cal-config")
        EMC_READ_PROP(emc_auto_cal_config2, "nvidia,emc-auto-cal-config2")
        EMC_READ_PROP(emc_auto_cal_config3, "nvidia,emc-auto-cal-config3")
        EMC_READ_PROP(emc_auto_cal_interval, "nvidia,emc-auto-cal-interval")
        EMC_READ_PROP(emc_bgbias_ctl0, "nvidia,emc-bgbias-ctl0")
        EMC_READ_PROP(emc_cfg, "nvidia,emc-cfg")
        EMC_READ_PROP(emc_cfg_2, "nvidia,emc-cfg-2")
        EMC_READ_PROP(emc_ctt_term_ctrl, "nvidia,emc-ctt-term-ctrl")
        EMC_READ_PROP(emc_mode_1, "nvidia,emc-mode-1")
        EMC_READ_PROP(emc_mode_2, "nvidia,emc-mode-2")
        EMC_READ_PROP(emc_mode_4, "nvidia,emc-mode-4")
        EMC_READ_PROP(emc_mode_reset, "nvidia,emc-mode-reset")
        EMC_READ_PROP(emc_mrs_wait_cnt, "nvidia,emc-mrs-wait-cnt")
        EMC_READ_PROP(emc_sel_dpd_ctrl, "nvidia,emc-sel-dpd-ctrl")
        EMC_READ_PROP(emc_xm2dqspadctrl2, "nvidia,emc-xm2dqspadctrl2")
        EMC_READ_PROP(emc_zcal_cnt_long, "nvidia,emc-zcal-cnt-long")
        EMC_READ_PROP(emc_zcal_interval, "nvidia,emc-zcal-interval")

#undef EMC_READ_PROP

        return 0;
}

static int cmp_timings(const void *_a, const void *_b)
{
        const struct emc_timing *a = _a;
        const struct emc_timing *b = _b;

        if (a->rate < b->rate)
                return -1;
        else if (a->rate == b->rate)
                return 0;
        else
                return 1;
}

static int tegra_emc_load_timings_from_dt(struct tegra_emc *emc,
                                          struct device_node *node)
{
        int child_count = of_get_child_count(node);
        struct device_node *child;
        struct emc_timing *timing;
        unsigned int i = 0;
        int err;

        emc->timings = devm_kcalloc(emc->dev, child_count, sizeof(*timing),
                                    GFP_KERNEL);
        if (!emc->timings)
                return -ENOMEM;

        emc->num_timings = child_count;

        for_each_child_of_node(node, child) {
                timing = &emc->timings[i++];

                err = load_one_timing_from_dt(emc, timing, child);
                if (err)
                        return err;
        }

        sort(emc->timings, emc->num_timings, sizeof(*timing), cmp_timings,
             NULL);

        return 0;
}

static const struct of_device_id tegra_emc_of_match[] = {
        { .compatible = "nvidia,tegra124-emc" },
        {}
};

static struct device_node *
tegra_emc_find_node_by_ram_code(struct device_node *node, u32 ram_code)
{
        struct device_node *np;
        int err;

        for_each_child_of_node(node, np) {
                u32 value;

                err = of_property_read_u32(np, "nvidia,ram-code", &value);
                if (err || (value != ram_code)) {
                        of_node_put(np);
                        continue;
                }

                return np;
        }

        return NULL;
}

/* Debugfs entry */

static int emc_debug_rate_get(void *data, u64 *rate)
{
        struct clk *c = data;

        *rate = clk_get_rate(c);

        return 0;
}

static int emc_debug_rate_set(void *data, u64 rate)
{
        struct clk *c = data;

        return clk_set_rate(c, rate);
}

DEFINE_SIMPLE_ATTRIBUTE(emc_debug_rate_fops, emc_debug_rate_get,
                        emc_debug_rate_set, "%lld\n");

static void emc_debugfs_init(struct device *dev)
{
        struct dentry *root, *file;
        struct clk *clk;

        root = debugfs_create_dir("emc", NULL);
        if (!root) {
                dev_err(dev, "failed to create debugfs directory\n");
                return;
        }

        clk = clk_get_sys("tegra-clk-debug", "emc");
        if (IS_ERR(clk)) {
                dev_err(dev, "failed to get debug clock: %ld\n", PTR_ERR(clk));
                return;
        }

        file = debugfs_create_file("rate", S_IRUGO | S_IWUSR, root, clk,
                                   &emc_debug_rate_fops);
        if (!file)
                dev_err(dev, "failed to create debugfs entry\n");
}

static int tegra_emc_probe(struct platform_device *pdev)
{
        struct platform_device *mc;
        struct device_node *np;
        struct tegra_emc *emc;
        struct resource *res;
        u32 ram_code;
        int err;

        emc = devm_kzalloc(&pdev->dev, sizeof(*emc), GFP_KERNEL);
        if (!emc)
                return -ENOMEM;

        emc->dev = &pdev->dev;

        res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
        emc->regs = devm_ioremap_resource(&pdev->dev, res);
        if (IS_ERR(emc->regs))
                return PTR_ERR(emc->regs);

        np = of_parse_phandle(pdev->dev.of_node, "nvidia,memory-controller", 0);
        if (!np) {
                dev_err(&pdev->dev, "could not get memory controller\n");
                return -ENOENT;
        }

        mc = of_find_device_by_node(np);
        if (!mc)
                return -ENOENT;

        of_node_put(np);

        emc->mc = platform_get_drvdata(mc);
        if (!emc->mc)
                return -EPROBE_DEFER;

        ram_code = tegra_read_ram_code();

        np = tegra_emc_find_node_by_ram_code(pdev->dev.of_node, ram_code);
        if (!np) {
                dev_err(&pdev->dev,
                        "no memory timings for RAM code %u found in DT\n",
                        ram_code);
                return -ENOENT;
        }

        err = tegra_emc_load_timings_from_dt(emc, np);

        of_node_put(np);

        if (err)
                return err;

        if (emc->num_timings == 0) {
                dev_err(&pdev->dev,
                        "no memory timings for RAM code %u registered\n",
                        ram_code);
                return -ENOENT;
        }

        err = emc_init(emc);
        if (err) {
                dev_err(&pdev->dev, "EMC initialization failed: %d\n", err);
                return err;
        }

        platform_set_drvdata(pdev, emc);

        if (IS_ENABLED(CONFIG_DEBUG_FS))
                emc_debugfs_init(&pdev->dev);

        return 0;
};

static struct platform_driver tegra_emc_driver = {
        .probe = tegra_emc_probe,
        .driver = {
                .name = "tegra-emc",
                .of_match_table = tegra_emc_of_match,
                .suppress_bind_attrs = true,
        },
};

static int tegra_emc_init(void)
{
        return platform_driver_register(&tegra_emc_driver);
}
subsys_initcall(tegra_emc_init);
+100 -23
drivers/memory/tegra/tegra124.c
···
 #include "mc.h"

+#define MC_EMEM_ARB_CFG 0x90
+#define MC_EMEM_ARB_OUTSTANDING_REQ 0x94
+#define MC_EMEM_ARB_TIMING_RCD 0x98
+#define MC_EMEM_ARB_TIMING_RP 0x9c
+#define MC_EMEM_ARB_TIMING_RC 0xa0
+#define MC_EMEM_ARB_TIMING_RAS 0xa4
+#define MC_EMEM_ARB_TIMING_FAW 0xa8
+#define MC_EMEM_ARB_TIMING_RRD 0xac
+#define MC_EMEM_ARB_TIMING_RAP2PRE 0xb0
+#define MC_EMEM_ARB_TIMING_WAP2PRE 0xb4
+#define MC_EMEM_ARB_TIMING_R2R 0xb8
+#define MC_EMEM_ARB_TIMING_W2W 0xbc
+#define MC_EMEM_ARB_TIMING_R2W 0xc0
+#define MC_EMEM_ARB_TIMING_W2R 0xc4
+#define MC_EMEM_ARB_DA_TURNS 0xd0
+#define MC_EMEM_ARB_DA_COVERS 0xd4
+#define MC_EMEM_ARB_MISC0 0xd8
+#define MC_EMEM_ARB_MISC1 0xdc
+#define MC_EMEM_ARB_RING1_THROTTLE 0xe0
+
+static const unsigned long tegra124_mc_emem_regs[] = {
+        MC_EMEM_ARB_CFG,
+        MC_EMEM_ARB_OUTSTANDING_REQ,
+        MC_EMEM_ARB_TIMING_RCD,
+        MC_EMEM_ARB_TIMING_RP,
+        MC_EMEM_ARB_TIMING_RC,
+        MC_EMEM_ARB_TIMING_RAS,
+        MC_EMEM_ARB_TIMING_FAW,
+        MC_EMEM_ARB_TIMING_RRD,
+        MC_EMEM_ARB_TIMING_RAP2PRE,
+        MC_EMEM_ARB_TIMING_WAP2PRE,
+        MC_EMEM_ARB_TIMING_R2R,
+        MC_EMEM_ARB_TIMING_W2W,
+        MC_EMEM_ARB_TIMING_R2W,
+        MC_EMEM_ARB_TIMING_W2R,
+        MC_EMEM_ARB_DA_TURNS,
+        MC_EMEM_ARB_DA_COVERS,
+        MC_EMEM_ARB_MISC0,
+        MC_EMEM_ARB_MISC1,
+        MC_EMEM_ARB_RING1_THROTTLE
+};
+
 static const struct tegra_mc_client tegra124_mc_clients[] = {
         {
                 .id = 0x00,
···
 };

 static const struct tegra_smmu_swgroup tegra124_swgroups[] = {
-        { .swgroup = TEGRA_SWGROUP_DC, .reg = 0x240 },
-        { .swgroup = TEGRA_SWGROUP_DCB, .reg = 0x244 },
-        { .swgroup = TEGRA_SWGROUP_AFI, .reg = 0x238 },
-        { .swgroup = TEGRA_SWGROUP_AVPC, .reg = 0x23c },
-        { .swgroup = TEGRA_SWGROUP_HDA, .reg = 0x254 },
-        { .swgroup = TEGRA_SWGROUP_HC, .reg = 0x250 },
-        { .swgroup = TEGRA_SWGROUP_MSENC, .reg = 0x264 },
-        { .swgroup = TEGRA_SWGROUP_PPCS, .reg = 0x270 },
-        { .swgroup = TEGRA_SWGROUP_SATA, .reg = 0x274 },
-        { .swgroup = TEGRA_SWGROUP_VDE, .reg = 0x27c },
-        { .swgroup = TEGRA_SWGROUP_ISP2, .reg = 0x258 },
-        { .swgroup = TEGRA_SWGROUP_XUSB_HOST, .reg = 0x288 },
-        { .swgroup = TEGRA_SWGROUP_XUSB_DEV, .reg = 0x28c },
-        { .swgroup = TEGRA_SWGROUP_ISP2B, .reg = 0xaa4 },
-        { .swgroup = TEGRA_SWGROUP_TSEC, .reg = 0x294 },
-        { .swgroup = TEGRA_SWGROUP_A9AVP, .reg = 0x290 },
-        { .swgroup = TEGRA_SWGROUP_GPU, .reg = 0xaac },
-        { .swgroup = TEGRA_SWGROUP_SDMMC1A, .reg = 0xa94 },
-        { .swgroup = TEGRA_SWGROUP_SDMMC2A, .reg = 0xa98 },
-        { .swgroup = TEGRA_SWGROUP_SDMMC3A, .reg = 0xa9c },
-        { .swgroup = TEGRA_SWGROUP_SDMMC4A, .reg = 0xaa0 },
-        { .swgroup = TEGRA_SWGROUP_VIC, .reg = 0x284 },
-        { .swgroup = TEGRA_SWGROUP_VI, .reg = 0x280 },
+        { .name = "dc", .swgroup = TEGRA_SWGROUP_DC, .reg = 0x240 },
+        { .name = "dcb", .swgroup = TEGRA_SWGROUP_DCB, .reg = 0x244 },
+        { .name = "afi", .swgroup = TEGRA_SWGROUP_AFI, .reg = 0x238 },
+        { .name = "avpc", .swgroup = TEGRA_SWGROUP_AVPC, .reg = 0x23c },
+        { .name = "hda", .swgroup = TEGRA_SWGROUP_HDA, .reg = 0x254 },
+        { .name = "hc", .swgroup = TEGRA_SWGROUP_HC, .reg = 0x250 },
+        { .name = "msenc", .swgroup = TEGRA_SWGROUP_MSENC, .reg = 0x264 },
+        { .name = "ppcs", .swgroup = TEGRA_SWGROUP_PPCS, .reg = 0x270 },
+        { .name = "sata", .swgroup = TEGRA_SWGROUP_SATA, .reg = 0x274 },
+        { .name = "vde", .swgroup = TEGRA_SWGROUP_VDE, .reg = 0x27c },
+        { .name = "isp2", .swgroup = TEGRA_SWGROUP_ISP2, .reg = 0x258 },
+        { .name = "xusb_host", .swgroup = TEGRA_SWGROUP_XUSB_HOST, .reg = 0x288 },
+        { .name = "xusb_dev", .swgroup = TEGRA_SWGROUP_XUSB_DEV, .reg = 0x28c },
+        { .name = "isp2b", .swgroup = TEGRA_SWGROUP_ISP2B, .reg = 0xaa4 },
+        { .name = "tsec", .swgroup = TEGRA_SWGROUP_TSEC, .reg = 0x294 },
+        { .name = "a9avp", .swgroup = TEGRA_SWGROUP_A9AVP, .reg = 0x290 },
+        { .name = "gpu", .swgroup = TEGRA_SWGROUP_GPU, .reg = 0xaac },
+        { .name = "sdmmc1a", .swgroup = TEGRA_SWGROUP_SDMMC1A, .reg = 0xa94 },
+        { .name = "sdmmc2a", .swgroup = TEGRA_SWGROUP_SDMMC2A, .reg = 0xa98 },
+        { .name = "sdmmc3a", .swgroup = TEGRA_SWGROUP_SDMMC3A, .reg = 0xa9c },
+        { .name = "sdmmc4a", .swgroup = TEGRA_SWGROUP_SDMMC4A, .reg = 0xaa0 },
+        { .name = "vic", .swgroup = TEGRA_SWGROUP_VIC, .reg = 0x284 },
+        { .name = "vi", .swgroup = TEGRA_SWGROUP_VI, .reg = 0x280 },
 };

 #ifdef CONFIG_ARCH_TEGRA_124_SOC
···
         .num_address_bits = 34,
         .atom_size = 32,
         .smmu = &tegra124_smmu_soc,
+        .emem_regs = tegra124_mc_emem_regs,
+        .num_emem_regs = ARRAY_SIZE(tegra124_mc_emem_regs),
 };
 #endif /* CONFIG_ARCH_TEGRA_124_SOC */
+
+#ifdef CONFIG_ARCH_TEGRA_132_SOC
+static void tegra132_flush_dcache(struct page *page, unsigned long offset,
+                                  size_t size)
+{
+        void *virt = page_address(page) + offset;
+
+        __flush_dcache_area(virt, size);
+}
+
+static const struct tegra_smmu_ops tegra132_smmu_ops = {
+        .flush_dcache = tegra132_flush_dcache,
+};
+
+static const struct tegra_smmu_soc tegra132_smmu_soc = {
+        .clients = tegra124_mc_clients,
+        .num_clients = ARRAY_SIZE(tegra124_mc_clients),
+        .swgroups = tegra124_swgroups,
+        .num_swgroups = ARRAY_SIZE(tegra124_swgroups),
+        .supports_round_robin_arbitration = true,
+        .supports_request_limit = true,
+        .num_asids = 128,
+        .ops = &tegra132_smmu_ops,
+};
+
+const struct tegra_mc_soc tegra132_mc_soc = {
+        .clients = tegra124_mc_clients,
+        .num_clients = ARRAY_SIZE(tegra124_mc_clients),
+        .num_address_bits = 34,
+        .atom_size = 32,
+        .smmu = &tegra132_smmu_soc,
+};
+#endif /* CONFIG_ARCH_TEGRA_132_SOC */
+16 -16
drivers/memory/tegra/tegra30.c
··· 918 918 }; 919 919 920 920 static const struct tegra_smmu_swgroup tegra30_swgroups[] = { 921 - { .swgroup = TEGRA_SWGROUP_DC, .reg = 0x240 }, 922 - { .swgroup = TEGRA_SWGROUP_DCB, .reg = 0x244 }, 923 - { .swgroup = TEGRA_SWGROUP_EPP, .reg = 0x248 }, 924 - { .swgroup = TEGRA_SWGROUP_G2, .reg = 0x24c }, 925 - { .swgroup = TEGRA_SWGROUP_MPE, .reg = 0x264 }, 926 - { .swgroup = TEGRA_SWGROUP_VI, .reg = 0x280 }, 927 - { .swgroup = TEGRA_SWGROUP_AFI, .reg = 0x238 }, 928 - { .swgroup = TEGRA_SWGROUP_AVPC, .reg = 0x23c }, 929 - { .swgroup = TEGRA_SWGROUP_NV, .reg = 0x268 }, 930 - { .swgroup = TEGRA_SWGROUP_NV2, .reg = 0x26c }, 931 - { .swgroup = TEGRA_SWGROUP_HDA, .reg = 0x254 }, 932 - { .swgroup = TEGRA_SWGROUP_HC, .reg = 0x250 }, 933 - { .swgroup = TEGRA_SWGROUP_PPCS, .reg = 0x270 }, 934 - { .swgroup = TEGRA_SWGROUP_SATA, .reg = 0x278 }, 935 - { .swgroup = TEGRA_SWGROUP_VDE, .reg = 0x27c }, 936 - { .swgroup = TEGRA_SWGROUP_ISP, .reg = 0x258 }, 921 + { .name = "dc", .swgroup = TEGRA_SWGROUP_DC, .reg = 0x240 }, 922 + { .name = "dcb", .swgroup = TEGRA_SWGROUP_DCB, .reg = 0x244 }, 923 + { .name = "epp", .swgroup = TEGRA_SWGROUP_EPP, .reg = 0x248 }, 924 + { .name = "g2", .swgroup = TEGRA_SWGROUP_G2, .reg = 0x24c }, 925 + { .name = "mpe", .swgroup = TEGRA_SWGROUP_MPE, .reg = 0x264 }, 926 + { .name = "vi", .swgroup = TEGRA_SWGROUP_VI, .reg = 0x280 }, 927 + { .name = "afi", .swgroup = TEGRA_SWGROUP_AFI, .reg = 0x238 }, 928 + { .name = "avpc", .swgroup = TEGRA_SWGROUP_AVPC, .reg = 0x23c }, 929 + { .name = "nv", .swgroup = TEGRA_SWGROUP_NV, .reg = 0x268 }, 930 + { .name = "nv2", .swgroup = TEGRA_SWGROUP_NV2, .reg = 0x26c }, 931 + { .name = "hda", .swgroup = TEGRA_SWGROUP_HDA, .reg = 0x254 }, 932 + { .name = "hc", .swgroup = TEGRA_SWGROUP_HC, .reg = 0x250 }, 933 + { .name = "ppcs", .swgroup = TEGRA_SWGROUP_PPCS, .reg = 0x270 }, 934 + { .name = "sata", .swgroup = TEGRA_SWGROUP_SATA, .reg = 0x278 }, 935 + { .name = "vde", .swgroup = TEGRA_SWGROUP_VDE, .reg = 0x27c }, 936 + { .name 
= "isp", .swgroup = TEGRA_SWGROUP_ISP, .reg = 0x258 }, 937 937 }; 938 938 939 939 static void tegra30_flush_dcache(struct page *page, unsigned long offset,
+1
drivers/of/platform.c
··· 25 25 26 26 const struct of_device_id of_default_bus_match_table[] = { 27 27 { .compatible = "simple-bus", }, 28 + { .compatible = "simple-mfd", }, 28 29 #ifdef CONFIG_ARM_AMBA 29 30 { .compatible = "arm,amba-bus", }, 30 31 #endif /* CONFIG_ARM_AMBA */
+2 -24
drivers/pinctrl/berlin/berlin-bg2.c
··· 218 218 219 219 static const struct of_device_id berlin2_pinctrl_match[] = { 220 220 { 221 - .compatible = "marvell,berlin2-chip-ctrl", 221 + .compatible = "marvell,berlin2-soc-pinctrl", 222 222 .data = &berlin2_soc_pinctrl_data 223 223 }, 224 224 { 225 - .compatible = "marvell,berlin2-system-ctrl", 225 + .compatible = "marvell,berlin2-system-pinctrl", 226 226 .data = &berlin2_sysmgr_pinctrl_data 227 227 }, 228 228 {} ··· 233 233 { 234 234 const struct of_device_id *match = 235 235 of_match_device(berlin2_pinctrl_match, &pdev->dev); 236 - struct regmap_config *rmconfig; 237 - struct regmap *regmap; 238 - struct resource *res; 239 - void __iomem *base; 240 - 241 - rmconfig = devm_kzalloc(&pdev->dev, sizeof(*rmconfig), GFP_KERNEL); 242 - if (!rmconfig) 243 - return -ENOMEM; 244 - 245 - res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 246 - base = devm_ioremap_resource(&pdev->dev, res); 247 - if (IS_ERR(base)) 248 - return PTR_ERR(base); 249 - 250 - rmconfig->reg_bits = 32, 251 - rmconfig->val_bits = 32, 252 - rmconfig->reg_stride = 4, 253 - rmconfig->max_register = resource_size(res); 254 - 255 - regmap = devm_regmap_init_mmio(&pdev->dev, base, rmconfig); 256 - if (IS_ERR(regmap)) 257 - return PTR_ERR(regmap); 258 236 259 237 return berlin_pinctrl_probe(pdev, match->data); 260 238 }
+2 -24
drivers/pinctrl/berlin/berlin-bg2cd.c
··· 161 161 162 162 static const struct of_device_id berlin2cd_pinctrl_match[] = { 163 163 { 164 - .compatible = "marvell,berlin2cd-chip-ctrl", 164 + .compatible = "marvell,berlin2cd-soc-pinctrl", 165 165 .data = &berlin2cd_soc_pinctrl_data 166 166 }, 167 167 { 168 - .compatible = "marvell,berlin2cd-system-ctrl", 168 + .compatible = "marvell,berlin2cd-system-pinctrl", 169 169 .data = &berlin2cd_sysmgr_pinctrl_data 170 170 }, 171 171 {} ··· 176 176 { 177 177 const struct of_device_id *match = 178 178 of_match_device(berlin2cd_pinctrl_match, &pdev->dev); 179 - struct regmap_config *rmconfig; 180 - struct regmap *regmap; 181 - struct resource *res; 182 - void __iomem *base; 183 - 184 - rmconfig = devm_kzalloc(&pdev->dev, sizeof(*rmconfig), GFP_KERNEL); 185 - if (!rmconfig) 186 - return -ENOMEM; 187 - 188 - res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 189 - base = devm_ioremap_resource(&pdev->dev, res); 190 - if (IS_ERR(base)) 191 - return PTR_ERR(base); 192 - 193 - rmconfig->reg_bits = 32, 194 - rmconfig->val_bits = 32, 195 - rmconfig->reg_stride = 4, 196 - rmconfig->max_register = resource_size(res); 197 - 198 - regmap = devm_regmap_init_mmio(&pdev->dev, base, rmconfig); 199 - if (IS_ERR(regmap)) 200 - return PTR_ERR(regmap); 201 179 202 180 return berlin_pinctrl_probe(pdev, match->data); 203 181 }
+2 -24
drivers/pinctrl/berlin/berlin-bg2q.c
··· 380 380 381 381 static const struct of_device_id berlin2q_pinctrl_match[] = { 382 382 { 383 - .compatible = "marvell,berlin2q-chip-ctrl", 383 + .compatible = "marvell,berlin2q-soc-pinctrl", 384 384 .data = &berlin2q_soc_pinctrl_data, 385 385 }, 386 386 { 387 - .compatible = "marvell,berlin2q-system-ctrl", 387 + .compatible = "marvell,berlin2q-system-pinctrl", 388 388 .data = &berlin2q_sysmgr_pinctrl_data, 389 389 }, 390 390 {} ··· 395 395 { 396 396 const struct of_device_id *match = 397 397 of_match_device(berlin2q_pinctrl_match, &pdev->dev); 398 - struct regmap_config *rmconfig; 399 - struct regmap *regmap; 400 - struct resource *res; 401 - void __iomem *base; 402 - 403 - rmconfig = devm_kzalloc(&pdev->dev, sizeof(*rmconfig), GFP_KERNEL); 404 - if (!rmconfig) 405 - return -ENOMEM; 406 - 407 - res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 408 - base = devm_ioremap_resource(&pdev->dev, res); 409 - if (IS_ERR(base)) 410 - return PTR_ERR(base); 411 - 412 - rmconfig->reg_bits = 32, 413 - rmconfig->val_bits = 32, 414 - rmconfig->reg_stride = 4, 415 - rmconfig->max_register = resource_size(res); 416 - 417 - regmap = devm_regmap_init_mmio(&pdev->dev, base, rmconfig); 418 - if (IS_ERR(regmap)) 419 - return PTR_ERR(regmap); 420 398 421 399 return berlin_pinctrl_probe(pdev, match->data); 422 400 }
+6 -3
drivers/pinctrl/berlin/berlin.c
··· 11 11 */ 12 12 13 13 #include <linux/io.h> 14 + #include <linux/mfd/syscon.h> 14 15 #include <linux/module.h> 15 16 #include <linux/of.h> 16 17 #include <linux/of_address.h> ··· 296 295 const struct berlin_pinctrl_desc *desc) 297 296 { 298 297 struct device *dev = &pdev->dev; 298 + struct device_node *parent_np = of_get_parent(dev->of_node); 299 299 struct berlin_pinctrl *pctrl; 300 300 struct regmap *regmap; 301 301 int ret; 302 302 303 - regmap = dev_get_regmap(&pdev->dev, NULL); 304 - if (!regmap) 305 - return -ENODEV; 303 + regmap = syscon_node_to_regmap(parent_np); 304 + of_node_put(parent_np); 305 + if (IS_ERR(regmap)) 306 + return PTR_ERR(regmap); 306 307 307 308 pctrl = devm_kzalloc(dev, sizeof(*pctrl), GFP_KERNEL); 308 309 if (!pctrl)
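The berlin pinctrl (and the reset and clock blocks converted elsewhere in this pull) now expect to sit as children of a shared register range exposed as a "simple-mfd"/"syscon" node, fetching their regmap from the parent via syscon_node_to_regmap() instead of mapping the registers themselves. An illustrative devicetree fragment of that layout (node names and the address/size values here are hypothetical, not copied from an actual Berlin dts):

```dts
chip: chip-control@ea0000 {
	compatible = "simple-mfd", "syscon";
	reg = <0xea0000 0x400>;

	/* children share the parent syscon regmap */
	soc_pinctrl: pin-controller {
		compatible = "marvell,berlin2q-soc-pinctrl";
	};

	chip_rst: reset {
		compatible = "marvell,berlin2-reset";
		#reset-cells = <2>;
	};
};
```

Because "simple-mfd" was added to of_default_bus_match_table (see the drivers/of/platform.c hunk above), the child nodes are populated as platform devices automatically.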
+27 -45
drivers/reset/reset-berlin.c
··· 11 11 12 12 #include <linux/delay.h> 13 13 #include <linux/io.h> 14 + #include <linux/mfd/syscon.h> 14 15 #include <linux/module.h> 15 16 #include <linux/of.h> 16 17 #include <linux/of_address.h> 17 18 #include <linux/platform_device.h> 19 + #include <linux/regmap.h> 18 20 #include <linux/reset-controller.h> 19 21 #include <linux/slab.h> 20 22 #include <linux/types.h> ··· 27 25 container_of((p), struct berlin_reset_priv, rcdev) 28 26 29 27 struct berlin_reset_priv { 30 - void __iomem *base; 31 - unsigned int size; 28 + struct regmap *regmap; 32 29 struct reset_controller_dev rcdev; 33 30 }; 34 31 ··· 38 37 int offset = id >> 8; 39 38 int mask = BIT(id & 0x1f); 40 39 41 - writel(mask, priv->base + offset); 40 + regmap_write(priv->regmap, offset, mask); 42 41 43 42 /* let the reset be effective */ 44 43 udelay(10); ··· 53 52 static int berlin_reset_xlate(struct reset_controller_dev *rcdev, 54 53 const struct of_phandle_args *reset_spec) 55 54 { 56 - struct berlin_reset_priv *priv = to_berlin_reset_priv(rcdev); 57 55 unsigned offset, bit; 58 56 59 57 if (WARN_ON(reset_spec->args_count != rcdev->of_reset_n_cells)) ··· 61 61 offset = reset_spec->args[0]; 62 62 bit = reset_spec->args[1]; 63 63 64 - if (offset >= priv->size) 65 - return -EINVAL; 66 - 67 64 if (bit >= BERLIN_MAX_RESETS) 68 65 return -EINVAL; 69 66 70 67 return (offset << 8) | bit; 71 68 } 72 69 73 - static int __berlin_reset_init(struct device_node *np) 70 + static int berlin2_reset_probe(struct platform_device *pdev) 74 71 { 72 + struct device_node *parent_np = of_get_parent(pdev->dev.of_node); 75 73 struct berlin_reset_priv *priv; 76 - struct resource res; 77 - resource_size_t size; 78 - int ret; 79 74 80 - priv = kzalloc(sizeof(*priv), GFP_KERNEL); 75 + priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL); 81 76 if (!priv) 82 77 return -ENOMEM; 83 78 84 - ret = of_address_to_resource(np, 0, &res); 85 - if (ret) 86 - goto err; 87 - 88 - size = resource_size(&res); 89 - priv->base = 
ioremap(res.start, size); 90 - if (!priv->base) { 91 - ret = -ENOMEM; 92 - goto err; 93 - } 94 - priv->size = size; 79 + priv->regmap = syscon_node_to_regmap(parent_np); 80 + of_node_put(parent_np); 81 + if (IS_ERR(priv->regmap)) 82 + return PTR_ERR(priv->regmap); 95 83 96 84 priv->rcdev.owner = THIS_MODULE; 97 85 priv->rcdev.ops = &berlin_reset_ops; 98 - priv->rcdev.of_node = np; 86 + priv->rcdev.of_node = pdev->dev.of_node; 99 87 priv->rcdev.of_reset_n_cells = 2; 100 88 priv->rcdev.of_xlate = berlin_reset_xlate; 101 89 102 90 reset_controller_register(&priv->rcdev); 103 91 104 92 return 0; 105 - 106 - err: 107 - kfree(priv); 108 - return ret; 109 93 } 110 94 111 - static const struct of_device_id berlin_reset_of_match[] __initconst = { 112 - { .compatible = "marvell,berlin2-chip-ctrl" }, 113 - { .compatible = "marvell,berlin2cd-chip-ctrl" }, 114 - { .compatible = "marvell,berlin2q-chip-ctrl" }, 95 + static const struct of_device_id berlin_reset_dt_match[] = { 96 + { .compatible = "marvell,berlin2-reset" }, 115 97 { }, 116 98 }; 99 + MODULE_DEVICE_TABLE(of, berlin_reset_dt_match); 117 100 118 - static int __init berlin_reset_init(void) 119 - { 120 - struct device_node *np; 121 - int ret; 101 + static struct platform_driver berlin_reset_driver = { 102 + .probe = berlin2_reset_probe, 103 + .driver = { 104 + .name = "berlin2-reset", 105 + .of_match_table = berlin_reset_dt_match, 106 + }, 107 + }; 108 + module_platform_driver(berlin_reset_driver); 122 109 123 - for_each_matching_node(np, berlin_reset_of_match) { 124 - ret = __berlin_reset_init(np); 125 - if (ret) 126 - return ret; 127 - } 128 - 129 - return 0; 130 - } 131 - arch_initcall(berlin_reset_init); 110 + MODULE_AUTHOR("Antoine Tenart <antoine.tenart@free-electrons.com>"); 111 + MODULE_AUTHOR("Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>"); 112 + MODULE_DESCRIPTION("Marvell Berlin reset driver"); 113 + MODULE_LICENSE("GPL");
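The two-cell reset specifier survives the regmap conversion unchanged: berlin_reset_xlate() still packs the register offset and bit number into a single id, which berlin_reset_reset() later unpacks into a write offset and bit mask. A standalone sketch of that packing (plain userspace C, mirroring the expressions in the driver):

```c
#include <assert.h>
#include <stdint.h>

/* Pack a (register offset, bit) reset specifier the way
 * berlin_reset_xlate() does: offset in the high bits, bit in the low byte. */
static uint32_t berlin_pack_reset(uint32_t offset, uint32_t bit)
{
	return (offset << 8) | bit;
}

/* Unpack as berlin_reset_reset() does: recover the register offset
 * to write and the single-bit mask to write into it. */
static void berlin_unpack_reset(uint32_t id, uint32_t *offset, uint32_t *mask)
{
	*offset = id >> 8;
	*mask = 1u << (id & 0x1f);
}
```

Note that after dropping the private mapping, the xlate callback can no longer range-check the offset against the resource size; it only validates the bit number against BERLIN_MAX_RESETS.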
+1
drivers/soc/Kconfig
··· 2 2 3 3 source "drivers/soc/mediatek/Kconfig" 4 4 source "drivers/soc/qcom/Kconfig" 5 + source "drivers/soc/sunxi/Kconfig" 5 6 source "drivers/soc/ti/Kconfig" 6 7 source "drivers/soc/versatile/Kconfig" 7 8
+1
drivers/soc/Makefile
··· 4 4 5 5 obj-$(CONFIG_ARCH_MEDIATEK) += mediatek/ 6 6 obj-$(CONFIG_ARCH_QCOM) += qcom/ 7 + obj-$(CONFIG_ARCH_SUNXI) += sunxi/ 7 8 obj-$(CONFIG_ARCH_TEGRA) += tegra/ 8 9 obj-$(CONFIG_SOC_TI) += ti/ 9 10 obj-$(CONFIG_PLAT_VERSATILE) += versatile/
+7
drivers/soc/qcom/Kconfig
··· 10 10 functions for connecting the underlying serial UART, SPI, and I2C 11 11 devices to the output pins. 12 12 13 + config QCOM_PM 14 + bool "Qualcomm Power Management" 15 + depends on ARCH_QCOM && !ARM64 16 + help 17 + QCOM platform-specific power driver to manage cores and L2 low power 18 + modes. It interfaces with various system drivers to put the cores in 19 + low power modes.
+1
drivers/soc/qcom/Makefile
··· 1 1 obj-$(CONFIG_QCOM_GSBI) += qcom_gsbi.o 2 + obj-$(CONFIG_QCOM_PM) += spm.o
+385
drivers/soc/qcom/spm.c
··· 1 + /* 2 + * Copyright (c) 2011-2014, The Linux Foundation. All rights reserved. 3 + * Copyright (c) 2014,2015, Linaro Ltd. 4 + * 5 + * This program is free software; you can redistribute it and/or modify 6 + * it under the terms of the GNU General Public License version 2 and 7 + * only version 2 as published by the Free Software Foundation. 8 + * 9 + * This program is distributed in the hope that it will be useful, 10 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 11 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 12 + * GNU General Public License for more details. 13 + */ 14 + 15 + #include <linux/module.h> 16 + #include <linux/kernel.h> 17 + #include <linux/init.h> 18 + #include <linux/io.h> 19 + #include <linux/slab.h> 20 + #include <linux/of.h> 21 + #include <linux/of_address.h> 22 + #include <linux/of_device.h> 23 + #include <linux/err.h> 24 + #include <linux/platform_device.h> 25 + #include <linux/cpuidle.h> 26 + #include <linux/cpu_pm.h> 27 + #include <linux/qcom_scm.h> 28 + 29 + #include <asm/cpuidle.h> 30 + #include <asm/proc-fns.h> 31 + #include <asm/suspend.h> 32 + 33 + #define MAX_PMIC_DATA 2 34 + #define MAX_SEQ_DATA 64 35 + #define SPM_CTL_INDEX 0x7f 36 + #define SPM_CTL_INDEX_SHIFT 4 37 + #define SPM_CTL_EN BIT(0) 38 + 39 + enum pm_sleep_mode { 40 + PM_SLEEP_MODE_STBY, 41 + PM_SLEEP_MODE_RET, 42 + PM_SLEEP_MODE_SPC, 43 + PM_SLEEP_MODE_PC, 44 + PM_SLEEP_MODE_NR, 45 + }; 46 + 47 + enum spm_reg { 48 + SPM_REG_CFG, 49 + SPM_REG_SPM_CTL, 50 + SPM_REG_DLY, 51 + SPM_REG_PMIC_DLY, 52 + SPM_REG_PMIC_DATA_0, 53 + SPM_REG_PMIC_DATA_1, 54 + SPM_REG_VCTL, 55 + SPM_REG_SEQ_ENTRY, 56 + SPM_REG_SPM_STS, 57 + SPM_REG_PMIC_STS, 58 + SPM_REG_NR, 59 + }; 60 + 61 + struct spm_reg_data { 62 + const u8 *reg_offset; 63 + u32 spm_cfg; 64 + u32 spm_dly; 65 + u32 pmic_dly; 66 + u32 pmic_data[MAX_PMIC_DATA]; 67 + u8 seq[MAX_SEQ_DATA]; 68 + u8 start_index[PM_SLEEP_MODE_NR]; 69 + }; 70 + 71 + struct spm_driver_data { 72 + void __iomem 
*reg_base; 73 + const struct spm_reg_data *reg_data; 74 + }; 75 + 76 + static const u8 spm_reg_offset_v2_1[SPM_REG_NR] = { 77 + [SPM_REG_CFG] = 0x08, 78 + [SPM_REG_SPM_CTL] = 0x30, 79 + [SPM_REG_DLY] = 0x34, 80 + [SPM_REG_SEQ_ENTRY] = 0x80, 81 + }; 82 + 83 + /* SPM register data for 8974, 8084 */ 84 + static const struct spm_reg_data spm_reg_8974_8084_cpu = { 85 + .reg_offset = spm_reg_offset_v2_1, 86 + .spm_cfg = 0x1, 87 + .spm_dly = 0x3C102800, 88 + .seq = { 0x03, 0x0B, 0x0F, 0x00, 0x20, 0x80, 0x10, 0xE8, 0x5B, 0x03, 89 + 0x3B, 0xE8, 0x5B, 0x82, 0x10, 0x0B, 0x30, 0x06, 0x26, 0x30, 90 + 0x0F }, 91 + .start_index[PM_SLEEP_MODE_STBY] = 0, 92 + .start_index[PM_SLEEP_MODE_SPC] = 3, 93 + }; 94 + 95 + static const u8 spm_reg_offset_v1_1[SPM_REG_NR] = { 96 + [SPM_REG_CFG] = 0x08, 97 + [SPM_REG_SPM_CTL] = 0x20, 98 + [SPM_REG_PMIC_DLY] = 0x24, 99 + [SPM_REG_PMIC_DATA_0] = 0x28, 100 + [SPM_REG_PMIC_DATA_1] = 0x2C, 101 + [SPM_REG_SEQ_ENTRY] = 0x80, 102 + }; 103 + 104 + /* SPM register data for 8064 */ 105 + static const struct spm_reg_data spm_reg_8064_cpu = { 106 + .reg_offset = spm_reg_offset_v1_1, 107 + .spm_cfg = 0x1F, 108 + .pmic_dly = 0x02020004, 109 + .pmic_data[0] = 0x0084009C, 110 + .pmic_data[1] = 0x00A4001C, 111 + .seq = { 0x03, 0x0F, 0x00, 0x24, 0x54, 0x10, 0x09, 0x03, 0x01, 112 + 0x10, 0x54, 0x30, 0x0C, 0x24, 0x30, 0x0F }, 113 + .start_index[PM_SLEEP_MODE_STBY] = 0, 114 + .start_index[PM_SLEEP_MODE_SPC] = 2, 115 + }; 116 + 117 + static DEFINE_PER_CPU(struct spm_driver_data *, cpu_spm_drv); 118 + 119 + typedef int (*idle_fn)(int); 120 + static DEFINE_PER_CPU(idle_fn*, qcom_idle_ops); 121 + 122 + static inline void spm_register_write(struct spm_driver_data *drv, 123 + enum spm_reg reg, u32 val) 124 + { 125 + if (drv->reg_data->reg_offset[reg]) 126 + writel_relaxed(val, drv->reg_base + 127 + drv->reg_data->reg_offset[reg]); 128 + } 129 + 130 + /* Ensure a guaranteed write, before return */ 131 + static inline void spm_register_write_sync(struct spm_driver_data 
*drv, 132 + enum spm_reg reg, u32 val) 133 + { 134 + u32 ret; 135 + 136 + if (!drv->reg_data->reg_offset[reg]) 137 + return; 138 + 139 + do { 140 + writel_relaxed(val, drv->reg_base + 141 + drv->reg_data->reg_offset[reg]); 142 + ret = readl_relaxed(drv->reg_base + 143 + drv->reg_data->reg_offset[reg]); 144 + if (ret == val) 145 + break; 146 + cpu_relax(); 147 + } while (1); 148 + } 149 + 150 + static inline u32 spm_register_read(struct spm_driver_data *drv, 151 + enum spm_reg reg) 152 + { 153 + return readl_relaxed(drv->reg_base + drv->reg_data->reg_offset[reg]); 154 + } 155 + 156 + static void spm_set_low_power_mode(struct spm_driver_data *drv, 157 + enum pm_sleep_mode mode) 158 + { 159 + u32 start_index; 160 + u32 ctl_val; 161 + 162 + start_index = drv->reg_data->start_index[mode]; 163 + 164 + ctl_val = spm_register_read(drv, SPM_REG_SPM_CTL); 165 + ctl_val &= ~(SPM_CTL_INDEX << SPM_CTL_INDEX_SHIFT); 166 + ctl_val |= start_index << SPM_CTL_INDEX_SHIFT; 167 + ctl_val |= SPM_CTL_EN; 168 + spm_register_write_sync(drv, SPM_REG_SPM_CTL, ctl_val); 169 + } 170 + 171 + static int qcom_pm_collapse(unsigned long int unused) 172 + { 173 + qcom_scm_cpu_power_down(QCOM_SCM_CPU_PWR_DOWN_L2_ON); 174 + 175 + /* 176 + * Returns here only if there was a pending interrupt and we did not 177 + * power down as a result. 178 + */ 179 + return -1; 180 + } 181 + 182 + static int qcom_cpu_spc(int cpu) 183 + { 184 + int ret; 185 + struct spm_driver_data *drv = per_cpu(cpu_spm_drv, cpu); 186 + 187 + spm_set_low_power_mode(drv, PM_SLEEP_MODE_SPC); 188 + ret = cpu_suspend(0, qcom_pm_collapse); 189 + /* 190 + * ARM common code executes WFI without calling into our driver and 191 + * if the SPM mode is not reset, then we may accidently power down the 192 + * cpu when we intended only to gate the cpu clock. 193 + * Ensure the state is set to standby before returning. 
194 + */ 195 + spm_set_low_power_mode(drv, PM_SLEEP_MODE_STBY); 196 + 197 + return ret; 198 + } 199 + 200 + static int qcom_idle_enter(int cpu, unsigned long index) 201 + { 202 + return per_cpu(qcom_idle_ops, cpu)[index](cpu); 203 + } 204 + 205 + static const struct of_device_id qcom_idle_state_match[] __initconst = { 206 + { .compatible = "qcom,idle-state-spc", .data = qcom_cpu_spc }, 207 + { }, 208 + }; 209 + 210 + static int __init qcom_cpuidle_init(struct device_node *cpu_node, int cpu) 211 + { 212 + const struct of_device_id *match_id; 213 + struct device_node *state_node; 214 + int i; 215 + int state_count = 1; 216 + idle_fn idle_fns[CPUIDLE_STATE_MAX]; 217 + idle_fn *fns; 218 + cpumask_t mask; 219 + bool use_scm_power_down = false; 220 + 221 + for (i = 0; ; i++) { 222 + state_node = of_parse_phandle(cpu_node, "cpu-idle-states", i); 223 + if (!state_node) 224 + break; 225 + 226 + if (!of_device_is_available(state_node)) 227 + continue; 228 + 229 + if (i == CPUIDLE_STATE_MAX) { 230 + pr_warn("%s: cpuidle states reached max possible\n", 231 + __func__); 232 + break; 233 + } 234 + 235 + match_id = of_match_node(qcom_idle_state_match, state_node); 236 + if (!match_id) 237 + return -ENODEV; 238 + 239 + idle_fns[state_count] = match_id->data; 240 + 241 + /* Check if any of the states allow power down */ 242 + if (match_id->data == qcom_cpu_spc) 243 + use_scm_power_down = true; 244 + 245 + state_count++; 246 + } 247 + 248 + if (state_count == 1) 249 + goto check_spm; 250 + 251 + fns = devm_kcalloc(get_cpu_device(cpu), state_count, sizeof(*fns), 252 + GFP_KERNEL); 253 + if (!fns) 254 + return -ENOMEM; 255 + 256 + for (i = 1; i < state_count; i++) 257 + fns[i] = idle_fns[i]; 258 + 259 + if (use_scm_power_down) { 260 + /* We have atleast one power down mode */ 261 + cpumask_clear(&mask); 262 + cpumask_set_cpu(cpu, &mask); 263 + qcom_scm_set_warm_boot_addr(cpu_resume, &mask); 264 + } 265 + 266 + per_cpu(qcom_idle_ops, cpu) = fns; 267 + 268 + /* 269 + * SPM probe for the 
cpu should have happened by now, if the 270 + * SPM device does not exist, return -ENXIO to indicate that the 271 + * cpu does not support idle states. 272 + */ 273 + check_spm: 274 + return per_cpu(cpu_spm_drv, cpu) ? 0 : -ENXIO; 275 + } 276 + 277 + static struct cpuidle_ops qcom_cpuidle_ops __initdata = { 278 + .suspend = qcom_idle_enter, 279 + .init = qcom_cpuidle_init, 280 + }; 281 + 282 + CPUIDLE_METHOD_OF_DECLARE(qcom_idle_v1, "qcom,kpss-acc-v1", &qcom_cpuidle_ops); 283 + CPUIDLE_METHOD_OF_DECLARE(qcom_idle_v2, "qcom,kpss-acc-v2", &qcom_cpuidle_ops); 284 + 285 + static struct spm_driver_data *spm_get_drv(struct platform_device *pdev, 286 + int *spm_cpu) 287 + { 288 + struct spm_driver_data *drv = NULL; 289 + struct device_node *cpu_node, *saw_node; 290 + int cpu; 291 + bool found; 292 + 293 + for_each_possible_cpu(cpu) { 294 + cpu_node = of_cpu_device_node_get(cpu); 295 + if (!cpu_node) 296 + continue; 297 + saw_node = of_parse_phandle(cpu_node, "qcom,saw", 0); 298 + found = (saw_node == pdev->dev.of_node); 299 + of_node_put(saw_node); 300 + of_node_put(cpu_node); 301 + if (found) 302 + break; 303 + } 304 + 305 + if (found) { 306 + drv = devm_kzalloc(&pdev->dev, sizeof(*drv), GFP_KERNEL); 307 + if (drv) 308 + *spm_cpu = cpu; 309 + } 310 + 311 + return drv; 312 + } 313 + 314 + static const struct of_device_id spm_match_table[] = { 315 + { .compatible = "qcom,msm8974-saw2-v2.1-cpu", 316 + .data = &spm_reg_8974_8084_cpu }, 317 + { .compatible = "qcom,apq8084-saw2-v2.1-cpu", 318 + .data = &spm_reg_8974_8084_cpu }, 319 + { .compatible = "qcom,apq8064-saw2-v1.1-cpu", 320 + .data = &spm_reg_8064_cpu }, 321 + { }, 322 + }; 323 + 324 + static int spm_dev_probe(struct platform_device *pdev) 325 + { 326 + struct spm_driver_data *drv; 327 + struct resource *res; 328 + const struct of_device_id *match_id; 329 + void __iomem *addr; 330 + int cpu; 331 + 332 + drv = spm_get_drv(pdev, &cpu); 333 + if (!drv) 334 + return -EINVAL; 335 + 336 + res = platform_get_resource(pdev, 
IORESOURCE_MEM, 0); 337 + drv->reg_base = devm_ioremap_resource(&pdev->dev, res); 338 + if (IS_ERR(drv->reg_base)) 339 + return PTR_ERR(drv->reg_base); 340 + 341 + match_id = of_match_node(spm_match_table, pdev->dev.of_node); 342 + if (!match_id) 343 + return -ENODEV; 344 + 345 + drv->reg_data = match_id->data; 346 + 347 + /* Write the SPM sequences first.. */ 348 + addr = drv->reg_base + drv->reg_data->reg_offset[SPM_REG_SEQ_ENTRY]; 349 + __iowrite32_copy(addr, drv->reg_data->seq, 350 + ARRAY_SIZE(drv->reg_data->seq) / 4); 351 + 352 + /* 353 + * ..and then the control registers. 354 + * On some SoC if the control registers are written first and if the 355 + * CPU was held in reset, the reset signal could trigger the SPM state 356 + * machine, before the sequences are completely written. 357 + */ 358 + spm_register_write(drv, SPM_REG_CFG, drv->reg_data->spm_cfg); 359 + spm_register_write(drv, SPM_REG_DLY, drv->reg_data->spm_dly); 360 + spm_register_write(drv, SPM_REG_PMIC_DLY, drv->reg_data->pmic_dly); 361 + spm_register_write(drv, SPM_REG_PMIC_DATA_0, 362 + drv->reg_data->pmic_data[0]); 363 + spm_register_write(drv, SPM_REG_PMIC_DATA_1, 364 + drv->reg_data->pmic_data[1]); 365 + 366 + /* Set up Standby as the default low power mode */ 367 + spm_set_low_power_mode(drv, PM_SLEEP_MODE_STBY); 368 + 369 + per_cpu(cpu_spm_drv, cpu) = drv; 370 + 371 + return 0; 372 + } 373 + 374 + static struct platform_driver spm_driver = { 375 + .probe = spm_dev_probe, 376 + .driver = { 377 + .name = "saw", 378 + .of_match_table = spm_match_table, 379 + }, 380 + }; 381 + module_platform_driver(spm_driver); 382 + 383 + MODULE_LICENSE("GPL v2"); 384 + MODULE_DESCRIPTION("SAW power controller driver"); 385 + MODULE_ALIAS("platform:saw");
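spm_set_low_power_mode() above selects a microcode sequence by rewriting only the start-index field of SPM_CTL and setting the enable bit, leaving all other bits intact. With the field layout from the driver's defines (7-bit index at shift 4, enable at bit 0), the read-modify-write reduces to the following self-contained helper:

```c
#include <assert.h>
#include <stdint.h>

#define SPM_CTL_INDEX		0x7f
#define SPM_CTL_INDEX_SHIFT	4
#define SPM_CTL_EN		(1u << 0)

/* Mirror of the ctl_val computation in spm_set_low_power_mode():
 * clear the old index field, insert the new start index, set enable. */
static uint32_t spm_ctl_for_index(uint32_t old_ctl, uint32_t start_index)
{
	uint32_t ctl = old_ctl;

	ctl &= ~(SPM_CTL_INDEX << SPM_CTL_INDEX_SHIFT);
	ctl |= start_index << SPM_CTL_INDEX_SHIFT;
	ctl |= SPM_CTL_EN;
	return ctl;
}
```

The driver then writes this value with spm_register_write_sync(), which re-reads the register until the write is observed, since enabling the state machine must be guaranteed before the CPU executes WFI.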
+10
drivers/soc/sunxi/Kconfig
··· 1 + # 2 + # Allwinner sunXi SoC drivers 3 + # 4 + config SUNXI_SRAM 5 + bool 6 + default ARCH_SUNXI 7 + help 8 + Say y here to enable the SRAM controller support. This 9 + device is responsible for mapping the SRAM in the sunXi SoCs 10 + either to the CPU/DMA or to the devices.
+1
drivers/soc/sunxi/Makefile
··· 1 + obj-$(CONFIG_SUNXI_SRAM) += sunxi_sram.o
+284
drivers/soc/sunxi/sunxi_sram.c
··· 1 + /* 2 + * Allwinner SoCs SRAM Controller Driver 3 + * 4 + * Copyright (C) 2015 Maxime Ripard 5 + * 6 + * Author: Maxime Ripard <maxime.ripard@free-electrons.com> 7 + * 8 + * This file is licensed under the terms of the GNU General Public 9 + * License version 2. This program is licensed "as is" without any 10 + * warranty of any kind, whether express or implied. 11 + */ 12 + 13 + #include <linux/debugfs.h> 14 + #include <linux/io.h> 15 + #include <linux/module.h> 16 + #include <linux/of.h> 17 + #include <linux/of_address.h> 18 + #include <linux/of_device.h> 19 + #include <linux/platform_device.h> 20 + 21 + #include <linux/soc/sunxi/sunxi_sram.h> 22 + 23 + struct sunxi_sram_func { 24 + char *func; 25 + u8 val; 26 + }; 27 + 28 + struct sunxi_sram_data { 29 + char *name; 30 + u8 reg; 31 + u8 offset; 32 + u8 width; 33 + struct sunxi_sram_func *func; 34 + struct list_head list; 35 + }; 36 + 37 + struct sunxi_sram_desc { 38 + struct sunxi_sram_data data; 39 + bool claimed; 40 + }; 41 + 42 + #define SUNXI_SRAM_MAP(_val, _func) \ 43 + { \ 44 + .func = _func, \ 45 + .val = _val, \ 46 + } 47 + 48 + #define SUNXI_SRAM_DATA(_name, _reg, _off, _width, ...) 
\ 49 + { \ 50 + .name = _name, \ 51 + .reg = _reg, \ 52 + .offset = _off, \ 53 + .width = _width, \ 54 + .func = (struct sunxi_sram_func[]){ \ 55 + __VA_ARGS__, { } }, \ 56 + } 57 + 58 + static struct sunxi_sram_desc sun4i_a10_sram_a3_a4 = { 59 + .data = SUNXI_SRAM_DATA("A3-A4", 0x4, 0x4, 2, 60 + SUNXI_SRAM_MAP(0, "cpu"), 61 + SUNXI_SRAM_MAP(1, "emac")), 62 + }; 63 + 64 + static struct sunxi_sram_desc sun4i_a10_sram_d = { 65 + .data = SUNXI_SRAM_DATA("D", 0x4, 0x0, 1, 66 + SUNXI_SRAM_MAP(0, "cpu"), 67 + SUNXI_SRAM_MAP(1, "usb-otg")), 68 + }; 69 + 70 + static const struct of_device_id sunxi_sram_dt_ids[] = { 71 + { 72 + .compatible = "allwinner,sun4i-a10-sram-a3-a4", 73 + .data = &sun4i_a10_sram_a3_a4.data, 74 + }, 75 + { 76 + .compatible = "allwinner,sun4i-a10-sram-d", 77 + .data = &sun4i_a10_sram_d.data, 78 + }, 79 + {} 80 + }; 81 + 82 + static struct device *sram_dev; 83 + static LIST_HEAD(claimed_sram); 84 + static DEFINE_SPINLOCK(sram_lock); 85 + static void __iomem *base; 86 + 87 + static int sunxi_sram_show(struct seq_file *s, void *data) 88 + { 89 + struct device_node *sram_node, *section_node; 90 + const struct sunxi_sram_data *sram_data; 91 + const struct of_device_id *match; 92 + struct sunxi_sram_func *func; 93 + const __be32 *sram_addr_p, *section_addr_p; 94 + u32 val; 95 + 96 + seq_puts(s, "Allwinner sunXi SRAM\n"); 97 + seq_puts(s, "--------------------\n\n"); 98 + 99 + for_each_child_of_node(sram_dev->of_node, sram_node) { 100 + sram_addr_p = of_get_address(sram_node, 0, NULL, NULL); 101 + 102 + seq_printf(s, "sram@%08x\n", 103 + be32_to_cpu(*sram_addr_p)); 104 + 105 + for_each_child_of_node(sram_node, section_node) { 106 + match = of_match_node(sunxi_sram_dt_ids, section_node); 107 + if (!match) 108 + continue; 109 + sram_data = match->data; 110 + 111 + section_addr_p = of_get_address(section_node, 0, 112 + NULL, NULL); 113 + 114 + seq_printf(s, "\tsection@%04x\t(%s)\n", 115 + be32_to_cpu(*section_addr_p), 116 + sram_data->name); 117 + 118 + val = 
readl(base + sram_data->reg);
119 + 			val >>= sram_data->offset;
120 + 			val &= sram_data->width;
121 + 
122 + 			for (func = sram_data->func; func->func; func++) {
123 + 				seq_printf(s, "\t\t%s%c\n", func->func,
124 + 					   func->val == val ? '*' : ' ');
125 + 			}
126 + 		}
127 + 
128 + 		seq_puts(s, "\n");
129 + 	}
130 + 
131 + 	return 0;
132 + }
133 + 
134 + static int sunxi_sram_open(struct inode *inode, struct file *file)
135 + {
136 + 	return single_open(file, sunxi_sram_show, inode->i_private);
137 + }
138 + 
139 + static const struct file_operations sunxi_sram_fops = {
140 + 	.open = sunxi_sram_open,
141 + 	.read = seq_read,
142 + 	.llseek = seq_lseek,
143 + 	.release = single_release,
144 + };
145 + 
146 + static inline struct sunxi_sram_desc *to_sram_desc(const struct sunxi_sram_data *data)
147 + {
148 + 	return container_of(data, struct sunxi_sram_desc, data);
149 + }
150 + 
151 + static const struct sunxi_sram_data *sunxi_sram_of_parse(struct device_node *node,
152 + 							 unsigned int *value)
153 + {
154 + 	const struct of_device_id *match;
155 + 	struct of_phandle_args args;
156 + 	int ret;
157 + 
158 + 	ret = of_parse_phandle_with_fixed_args(node, "allwinner,sram", 1, 0,
159 + 					       &args);
160 + 	if (ret)
161 + 		return ERR_PTR(ret);
162 + 
163 + 	if (!of_device_is_available(args.np)) {
164 + 		ret = -EBUSY;
165 + 		goto err;
166 + 	}
167 + 
168 + 	if (value)
169 + 		*value = args.args[0];
170 + 
171 + 	match = of_match_node(sunxi_sram_dt_ids, args.np);
172 + 	if (!match) {
173 + 		ret = -EINVAL;
174 + 		goto err;
175 + 	}
176 + 
177 + 	of_node_put(args.np);
178 + 	return match->data;
179 + 
180 + err:
181 + 	of_node_put(args.np);
182 + 	return ERR_PTR(ret);
183 + }
184 + 
185 + int sunxi_sram_claim(struct device *dev)
186 + {
187 + 	const struct sunxi_sram_data *sram_data;
188 + 	struct sunxi_sram_desc *sram_desc;
189 + 	unsigned int device;
190 + 	u32 val, mask;
191 + 
192 + 	if (IS_ERR(base))
193 + 		return -EPROBE_DEFER;
194 + 
195 + 	if (!dev || !dev->of_node)
196 + 		return -EINVAL;
197 + 
198 + 	sram_data = sunxi_sram_of_parse(dev->of_node, &device);
199 + 	if (IS_ERR(sram_data))
200 + 		return PTR_ERR(sram_data);
201 + 
202 + 	sram_desc = to_sram_desc(sram_data);
203 + 
204 + 	spin_lock(&sram_lock);
205 + 
206 + 	if (sram_desc->claimed) {
207 + 		spin_unlock(&sram_lock);
208 + 		return -EBUSY;
209 + 	}
210 + 
211 + 	mask = GENMASK(sram_data->offset + sram_data->width, sram_data->offset);
212 + 	val = readl(base + sram_data->reg);
213 + 	val &= ~mask;
214 + 	writel(val | ((device << sram_data->offset) & mask),
215 + 	       base + sram_data->reg);
216 + 
217 + 	spin_unlock(&sram_lock);
218 + 
219 + 	return 0;
220 + }
221 + EXPORT_SYMBOL(sunxi_sram_claim);
222 + 
223 + int sunxi_sram_release(struct device *dev)
224 + {
225 + 	const struct sunxi_sram_data *sram_data;
226 + 	struct sunxi_sram_desc *sram_desc;
227 + 
228 + 	if (!dev || !dev->of_node)
229 + 		return -EINVAL;
230 + 
231 + 	sram_data = sunxi_sram_of_parse(dev->of_node, NULL);
232 + 	if (IS_ERR(sram_data))
233 + 		return -EINVAL;
234 + 
235 + 	sram_desc = to_sram_desc(sram_data);
236 + 
237 + 	spin_lock(&sram_lock);
238 + 	sram_desc->claimed = false;
239 + 	spin_unlock(&sram_lock);
240 + 
241 + 	return 0;
242 + }
243 + EXPORT_SYMBOL(sunxi_sram_release);
244 + 
245 + static int sunxi_sram_probe(struct platform_device *pdev)
246 + {
247 + 	struct resource *res;
248 + 	struct dentry *d;
249 + 
250 + 	sram_dev = &pdev->dev;
251 + 
252 + 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
253 + 	base = devm_ioremap_resource(&pdev->dev, res);
254 + 	if (IS_ERR(base))
255 + 		return PTR_ERR(base);
256 + 
257 + 	of_platform_populate(pdev->dev.of_node, NULL, NULL, &pdev->dev);
258 + 
259 + 	d = debugfs_create_file("sram", S_IRUGO, NULL, NULL,
260 + 				&sunxi_sram_fops);
261 + 	if (!d)
262 + 		return -ENOMEM;
263 + 
264 + 	return 0;
265 + }
266 + 
267 + static const struct of_device_id sunxi_sram_dt_match[] = {
268 + 	{ .compatible = "allwinner,sun4i-a10-sram-controller" },
269 + 	{ },
270 + };
271 + MODULE_DEVICE_TABLE(of, sunxi_sram_dt_match);
272 + 
273 + static struct platform_driver sunxi_sram_driver = {
274 + 	.driver = {
275 + 		.name = "sunxi-sram",
276 + 		.of_match_table = sunxi_sram_dt_match,
277 + 	},
278 + 	.probe = sunxi_sram_probe,
279 + };
280 + module_platform_driver(sunxi_sram_driver);
281 + 
282 + MODULE_AUTHOR("Maxime Ripard <maxime.ripard@free-electrons.com>");
283 + MODULE_DESCRIPTION("Allwinner sunXi SRAM Controller Driver");
284 + MODULE_LICENSE("GPL");
+21
drivers/soc/tegra/fuse/tegra-apbmisc.c
··· 28 28 #define APBMISC_SIZE 0x64
29 29 #define FUSE_SKU_INFO	0x10
30 30 
31 + #define PMC_STRAPPING_OPT_A_RAM_CODE_SHIFT	4
32 + #define PMC_STRAPPING_OPT_A_RAM_CODE_MASK_LONG	\
33 + 	(0xf << PMC_STRAPPING_OPT_A_RAM_CODE_SHIFT)
34 + #define PMC_STRAPPING_OPT_A_RAM_CODE_MASK_SHORT	\
35 + 	(0x3 << PMC_STRAPPING_OPT_A_RAM_CODE_SHIFT)
36 + 
31 37 static void __iomem *apbmisc_base;
32 38 static void __iomem *strapping_base;
39 + static bool long_ram_code;
33 40 
34 41 u32 tegra_read_chipid(void)
35 42 {
··· 59 52 		return readl_relaxed(strapping_base);
60 53 	else
61 54 		return 0;
55 + }
56 + 
57 + u32 tegra_read_ram_code(void)
58 + {
59 + 	u32 straps = tegra_read_straps();
60 + 
61 + 	if (long_ram_code)
62 + 		straps &= PMC_STRAPPING_OPT_A_RAM_CODE_MASK_LONG;
63 + 	else
64 + 		straps &= PMC_STRAPPING_OPT_A_RAM_CODE_MASK_SHORT;
65 + 
66 + 	return straps >> PMC_STRAPPING_OPT_A_RAM_CODE_SHIFT;
62 67 }
63 68 
64 69 static const struct of_device_id apbmisc_match[] __initconst = {
··· 131 112 	strapping_base = of_iomap(np, 1);
132 113 	if (!strapping_base)
133 114 		pr_err("ioremap tegra strapping_base failed\n");
115 + 
116 + 	long_ram_code = of_property_read_bool(np, "nvidia,long-ram-code");
134 117 }
+12 -1
include/linux/qcom_scm.h
··· 1 - /* Copyright (c) 2010-2014, The Linux Foundation. All rights reserved.
1 + /* Copyright (c) 2010-2015, The Linux Foundation. All rights reserved.
2 2  * Copyright (C) 2015 Linaro Ltd.
3 3  *
4 4  * This program is free software; you can redistribute it and/or modify
··· 15 15 
16 16 extern int qcom_scm_set_cold_boot_addr(void *entry, const cpumask_t *cpus);
17 17 extern int qcom_scm_set_warm_boot_addr(void *entry, const cpumask_t *cpus);
18 + 
19 + #define QCOM_SCM_HDCP_MAX_REQ_CNT	5
20 + 
21 + struct qcom_scm_hdcp_req {
22 + 	u32 addr;
23 + 	u32 val;
24 + };
25 + 
26 + extern bool qcom_scm_hdcp_available(void);
27 + extern int qcom_scm_hdcp_req(struct qcom_scm_hdcp_req *req, u32 req_cnt,
28 + 		u32 *resp);
18 29 
19 30 #define QCOM_SCM_CPU_PWR_DOWN_L2_ON	0x0
20 31 #define QCOM_SCM_CPU_PWR_DOWN_L2_OFF	0x1
+19
include/linux/soc/sunxi/sunxi_sram.h
··· 1 + /*
2 +  * Allwinner SoCs SRAM Controller Driver
3 +  *
4 +  * Copyright (C) 2015 Maxime Ripard
5 +  *
6 +  * Author: Maxime Ripard <maxime.ripard@free-electrons.com>
7 +  *
8 +  * This file is licensed under the terms of the GNU General Public
9 +  * License version 2. This program is licensed "as is" without any
10 +  * warranty of any kind, whether express or implied.
11 +  */
12 + 
13 + #ifndef _SUNXI_SRAM_H_
14 + #define _SUNXI_SRAM_H_
15 + 
16 + int sunxi_sram_claim(struct device *dev);
17 + int sunxi_sram_release(struct device *dev);
18 + 
19 + #endif /* _SUNXI_SRAM_H_ */
+19
include/soc/tegra/emc.h
··· 1 + /*
2 +  * Copyright (c) 2014 NVIDIA Corporation. All rights reserved.
3 +  *
4 +  * This program is free software; you can redistribute it and/or modify
5 +  * it under the terms of the GNU General Public License version 2 as
6 +  * published by the Free Software Foundation.
7 +  */
8 + 
9 + #ifndef __SOC_TEGRA_EMC_H__
10 + #define __SOC_TEGRA_EMC_H__
11 + 
12 + struct tegra_emc;
13 + 
14 + int tegra_emc_prepare_timing_change(struct tegra_emc *emc,
15 + 				    unsigned long rate);
16 + void tegra_emc_complete_timing_change(struct tegra_emc *emc,
17 + 				      unsigned long rate);
18 + 
19 + #endif /* __SOC_TEGRA_EMC_H__ */
+1
include/soc/tegra/fuse.h
··· 56 56 };
57 57 
58 58 u32 tegra_read_straps(void);
59 + u32 tegra_read_ram_code(void);
59 60 u32 tegra_read_chipid(void);
60 61 int tegra_fuse_readl(unsigned long offset, u32 *value);
61 62 
+19 -1
include/soc/tegra/mc.h
··· 20 20 	unsigned int bit;
21 21 };
22 22 
23 + struct tegra_mc_timing {
24 + 	unsigned long rate;
25 + 
26 + 	u32 *emem_data;
27 + };
28 + 
23 29 /* latency allowance */
24 30 struct tegra_mc_la {
25 31 	unsigned int reg;
··· 46 40 };
47 41 
48 42 struct tegra_smmu_swgroup {
43 + 	const char *name;
49 44 	unsigned int swgroup;
50 45 	unsigned int reg;
51 46 };
··· 78 71 struct tegra_smmu *tegra_smmu_probe(struct device *dev,
79 72 				    const struct tegra_smmu_soc *soc,
80 73 				    struct tegra_mc *mc);
74 + void tegra_smmu_remove(struct tegra_smmu *smmu);
81 75 #else
82 76 static inline struct tegra_smmu *
83 77 tegra_smmu_probe(struct device *dev, const struct tegra_smmu_soc *soc,
··· 86 78 {
87 79 	return NULL;
88 80 }
81 + 
82 + static inline void tegra_smmu_remove(struct tegra_smmu *smmu)
83 + {
84 + }
89 85 #endif
90 86 
91 87 struct tegra_mc_soc {
92 88 	const struct tegra_mc_client *clients;
93 89 	unsigned int num_clients;
94 90 
95 - 	const unsigned int *emem_regs;
91 + 	const unsigned long *emem_regs;
96 92 	unsigned int num_emem_regs;
97 93 
98 94 	unsigned int num_address_bits;
··· 114 102 
115 103 	const struct tegra_mc_soc *soc;
116 104 	unsigned long tick;
105 + 
106 + 	struct tegra_mc_timing *timings;
107 + 	unsigned int num_timings;
117 108 };
109 + 
110 + void tegra_mc_write_emem_configuration(struct tegra_mc *mc, unsigned long rate);
111 + unsigned int tegra_mc_get_emem_device_count(struct tegra_mc *mc);
118 112 
119 113 #endif /* __SOC_TEGRA_MC_H__ */