Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'armsoc-drivers' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc

Pull ARM SoC driver updates from Olof Johansson:
"Various driver updates for platforms:

- A larger set of work on Tegra 2/3 around memory controller and
regulator features, some fuse cleanups, etc.

- MMP platform drivers, in particular for USB PHY, and other smaller
additions.

- Samsung Exynos 5422 driver for DMC (dynamic memory configuration),
and ASV (adaptive voltage), allowing the platform to run at more
optimal operating points.

- Misc refactorings and support for RZ/G2N and R8A774B1 from Renesas

- Clock/reset control driver for TI/OMAP

- Meson-A1 reset controller support

- Qualcomm sdm845 and sda845 SoC IDs for socinfo"

* tag 'armsoc-drivers' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc: (150 commits)
firmware: arm_scmi: Fix doorbell ring logic for !CONFIG_64BIT
soc: fsl: add RCPM driver
dt-bindings: fsl: rcpm: Add 'little-endian' and update Chassis definition
memory: tegra: Consolidate registers definition into common header
memory: tegra: Ensure timing control debug features are disabled
memory: tegra: Introduce Tegra30 EMC driver
memory: tegra: Do not handle error from wait_for_completion_timeout()
memory: tegra: Increase handshake timeout on Tegra20
memory: tegra: Print a brief info message about EMC timings
memory: tegra: Pre-configure debug register on Tegra20
memory: tegra: Include io.h instead of iopoll.h
memory: tegra: Adapt for Tegra20 clock driver changes
memory: tegra: Don't set EMC rate to maximum on probe for Tegra20
memory: tegra: Add gr2d and gr3d to DRM IOMMU group
memory: tegra: Set DMA mask based on supported address bits
soc: at91: Add Atmel SFR SN (Serial Number) support
memory: atmel-ebi: switch to SPDX license identifiers
memory: atmel-ebi: move NUM_CS definition inside EBI driver
soc: mediatek: Refactor bus protection control
soc: mediatek: Refactor sram control
...

+7617 -945
+2 -2
Documentation/arm/microchip.rst
···

  * Datasheet

-     http://ww1.microchip.com/downloads/en/DeviceDoc/Atmel-11121-32-bit-Cortex-A5-Microcontroller-SAMA5D3_Datasheet.pdf
+     http://ww1.microchip.com/downloads/en/DeviceDoc/Atmel-11121-32-bit-Cortex-A5-Microcontroller-SAMA5D3_Datasheet_B.pdf

  * ARM Cortex-A5 + NEON based SoCs
    - sama5d4 family
···

  * Datasheet

-     http://ww1.microchip.com/downloads/en/DeviceDoc/60001527A.pdf
+     http://ww1.microchip.com/downloads/en/DeviceDoc/SAM-E70-S70-V70-V71-Family-Data-Sheet-DS60001527D.pdf


  Linux kernel information
-41
Documentation/devicetree/bindings/arm/msm/qcom,llcc.txt
···
- == Introduction==
-
- LLCC (Last Level Cache Controller) provides last level of cache memory in SOC,
- that can be shared by multiple clients. Clients here are different cores in the
- SOC, the idea is to minimize the local caches at the clients and migrate to
- common pool of memory. Cache memory is divided into partitions called slices
- which are assigned to clients. Clients can query the slice details, activate
- and deactivate them.
-
- Properties:
- - compatible:
-     Usage: required
-     Value type: <string>
-     Definition: must be "qcom,sdm845-llcc"
-
- - reg:
-     Usage: required
-     Value Type: <prop-encoded-array>
-     Definition: The first element specifies the llcc base start address and
-                 the size of the register region. The second element specifies
-                 the llcc broadcast base address and size of the register region.
-
- - reg-names:
-     Usage: required
-     Value Type: <stringlist>
-     Definition: Register region names. Must be "llcc_base", "llcc_broadcast_base".
-
- - interrupts:
-     Usage: required
-     Definition: The interrupt is associated with the llcc edac device.
-     It's used for llcc cache single and double bit error detection
-     and reporting.
-
- Example:
-
- cache-controller@1100000 {
-     compatible = "qcom,sdm845-llcc";
-     reg = <0x1100000 0x200000>, <0x1300000 0x50000> ;
-     reg-names = "llcc_base", "llcc_broadcast_base";
-     interrupts = <GIC_SPI 582 IRQ_TYPE_LEVEL_HIGH>;
- };
+55
Documentation/devicetree/bindings/arm/msm/qcom,llcc.yaml
···
+ # SPDX-License-Identifier: (GPL-2.0-or-later OR BSD-2-Clause)
+ %YAML 1.2
+ ---
+ $id: http://devicetree.org/schemas/arm/msm/qcom,llcc.yaml#
+ $schema: http://devicetree.org/meta-schemas/core.yaml#
+
+ title: Last Level Cache Controller
+
+ maintainers:
+   - Rishabh Bhatnagar <rishabhb@codeaurora.org>
+   - Sai Prakash Ranjan <saiprakash.ranjan@codeaurora.org>
+
+ description: |
+   LLCC (Last Level Cache Controller) provides last level of cache memory in SoC,
+   that can be shared by multiple clients. Clients here are different cores in the
+   SoC, the idea is to minimize the local caches at the clients and migrate to
+   common pool of memory. Cache memory is divided into partitions called slices
+   which are assigned to clients. Clients can query the slice details, activate
+   and deactivate them.
+
+ properties:
+   compatible:
+     enum:
+       - qcom,sc7180-llcc
+       - qcom,sdm845-llcc
+
+   reg:
+     items:
+       - description: LLCC base register region
+       - description: LLCC broadcast base register region
+
+   reg-names:
+     items:
+       - const: llcc_base
+       - const: llcc_broadcast_base
+
+   interrupts:
+     maxItems: 1
+
+ required:
+   - compatible
+   - reg
+   - reg-names
+   - interrupts
+
+ examples:
+   - |
+     #include <dt-bindings/interrupt-controller/arm-gic.h>
+
+     cache-controller@1100000 {
+       compatible = "qcom,sdm845-llcc";
+       reg = <0x1100000 0x200000>, <0x1300000 0x50000> ;
+       reg-names = "llcc_base", "llcc_broadcast_base";
+       interrupts = <GIC_SPI 582 IRQ_TYPE_LEVEL_HIGH>;
+     };
+29
Documentation/devicetree/bindings/arm/omap/prm-inst.txt
···
+ OMAP PRM instance bindings
+
+ Power and Reset Manager is an IP block on OMAP family of devices which
+ handle the power domains and their current state, and provide reset
+ handling for the domains and/or separate IP blocks under the power domain
+ hierarchy.
+
+ Required properties:
+ - compatible: Must contain one of the following:
+     "ti,am3-prm-inst"
+     "ti,am4-prm-inst"
+     "ti,omap4-prm-inst"
+     "ti,omap5-prm-inst"
+     "ti,dra7-prm-inst"
+   and additionally must contain:
+     "ti,omap-prm-inst"
+ - reg: Contains PRM instance register address range
+        (base address and length)
+
+ Optional properties:
+ - #reset-cells: Should be 1 if the PRM instance in question supports resets.
+
+ Example:
+
+ prm_dsp2: prm@1b00 {
+     compatible = "ti,dra7-prm-inst", "ti,omap-prm-inst";
+     reg = <0x1b00 0x40>;
+     #reset-cells = <1>;
+ };
+15 -1
Documentation/devicetree/bindings/firmware/xilinx/xlnx,zynqmp-firmware.txt
···
  services.

  Required properties:
- - compatible: Must contain: "xlnx,zynqmp-firmware"
+ - compatible: Must contain any of below:
+               "xlnx,zynqmp-firmware" for Zynq Ultrascale+ MPSoC
+               "xlnx,versal-firmware" for Versal
  - method: The method of calling the PM-API firmware layer.
            Permitted values are:
            - "smc" : SMC #0, following the SMCCC
···
  Example
  -------

+ Zynq Ultrascale+ MPSoC
+ ----------------------
  firmware {
      zynqmp_firmware: zynqmp-firmware {
          compatible = "xlnx,zynqmp-firmware";
+         method = "smc";
+         ...
+     };
+ };
+
+ Versal
+ ------
+ firmware {
+     versal_firmware: versal-firmware {
+         compatible = "xlnx,versal-firmware";
          method = "smc";
          ...
      };
+6
Documentation/devicetree/bindings/nvmem/amlogic-efuse.txt
···
  - compatible: should be "amlogic,meson-gxbb-efuse"
  - clocks: phandle to the efuse peripheral clock provided by the
            clock controller.
+ - secure-monitor: phandle to the secure-monitor node

  = Data cells =
  Are child nodes of eFuse, bindings of which as described in
···
      clocks = <&clkc CLKID_EFUSE>;
      #address-cells = <1>;
      #size-cells = <1>;
+     secure-monitor = <&sm>;

      sn: sn@14 {
          reg = <0x14 0x10>;
···
      bid: bid@46 {
          reg = <0x46 0x30>;
      };
+ };
+
+ sm: secure-monitor {
+     compatible = "amlogic,meson-gxbb-sm";
  };

  = Data consumers =
+1
Documentation/devicetree/bindings/power/qcom,rpmpd.txt
···

  Required Properties:
  - compatible: Should be one of the following
+     * qcom,msm8976-rpmpd: RPM Power domain for the msm8976 family of SoC
      * qcom,msm8996-rpmpd: RPM Power domain for the msm8996 family of SoC
      * qcom,msm8998-rpmpd: RPM Power domain for the msm8998 family of SoC
      * qcom,qcs404-rpmpd: RPM Power domain for the qcs404 family of SoC
+2 -1
Documentation/devicetree/bindings/reset/amlogic,meson-axg-audio-arb.txt
···
  disables the access of Audio FIFOs to DDR on AXG based SoC.

  Required properties:
- - compatible: 'amlogic,meson-axg-audio-arb'
+ - compatible: 'amlogic,meson-axg-audio-arb' or
+               'amlogic,meson-sm1-audio-arb'
  - reg: physical base address of the controller and length of memory
        mapped region.
  - clocks: phandle to the fifo peripheral clock provided by the audio
+1
Documentation/devicetree/bindings/reset/amlogic,meson-reset.yaml
···
  - amlogic,meson8b-reset # Reset Controller on Meson8b and compatible SoCs
  - amlogic,meson-gxbb-reset # Reset Controller on GXBB and compatible SoCs
  - amlogic,meson-axg-reset # Reset Controller on AXG and compatible SoCs
+ - amlogic,meson-a1-reset # Reset Controller on A1 and compatible SoCs

  reg:
    maxItems: 1
-52
Documentation/devicetree/bindings/reset/qcom,aoss-reset.txt
···
- Qualcomm AOSS Reset Controller
- ======================================
-
- This binding describes a reset-controller found on AOSS-CC (always on subsystem)
- for Qualcomm SDM845 SoCs.
-
- Required properties:
- - compatible:
-     Usage: required
-     Value type: <string>
-     Definition: must be:
-                 "qcom,sdm845-aoss-cc"
-
- - reg:
-     Usage: required
-     Value type: <prop-encoded-array>
-     Definition: must specify the base address and size of the register
-                 space.
-
- - #reset-cells:
-     Usage: required
-     Value type: <uint>
-     Definition: must be 1; cell entry represents the reset index.
-
- Example:
-
- aoss_reset: reset-controller@c2a0000 {
-     compatible = "qcom,sdm845-aoss-cc";
-     reg = <0xc2a0000 0x31000>;
-     #reset-cells = <1>;
- };
-
- Specifying reset lines connected to IP modules
- ==============================================
-
- Device nodes that need access to reset lines should
- specify them as a reset phandle in their corresponding node as
- specified in reset.txt.
-
- For list of all valid reset indicies see
- <dt-bindings/reset/qcom,sdm845-aoss.h>
-
- Example:
-
- modem-pil@4080000 {
-     ...
-
-     resets = <&aoss_reset AOSS_CC_MSS_RESTART>;
-     reset-names = "mss_restart";
-
-     ...
- };
+47
Documentation/devicetree/bindings/reset/qcom,aoss-reset.yaml
···
+ # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+ %YAML 1.2
+ ---
+ $id: http://devicetree.org/schemas/reset/qcom,aoss-reset.yaml#
+ $schema: http://devicetree.org/meta-schemas/core.yaml#
+
+ title: Qualcomm AOSS Reset Controller
+
+ maintainers:
+   - Sibi Sankar <sibis@codeaurora.org>
+
+ description:
+   The bindings describe the reset-controller found on AOSS-CC (always on
+   subsystem) for Qualcomm Technologies Inc SoCs.
+
+ properties:
+   compatible:
+     oneOf:
+       - description: on SC7180 SoCs the following compatibles must be specified
+         items:
+           - const: "qcom,sc7180-aoss-cc"
+           - const: "qcom,sdm845-aoss-cc"
+
+       - description: on SDM845 SoCs the following compatibles must be specified
+         items:
+           - const: "qcom,sdm845-aoss-cc"
+
+   reg:
+     maxItems: 1
+
+   '#reset-cells':
+     const: 1
+
+ required:
+   - compatible
+   - reg
+   - '#reset-cells'
+
+ additionalProperties: false
+
+ examples:
+   - |
+     aoss_reset: reset-controller@c2a0000 {
+       compatible = "qcom,sdm845-aoss-cc";
+       reg = <0xc2a0000 0x31000>;
+       #reset-cells = <1>;
+     };
-52
Documentation/devicetree/bindings/reset/qcom,pdc-global.txt
···
- PDC Global
- ======================================
-
- This binding describes a reset-controller found on PDC-Global (Power Domain
- Controller) block for Qualcomm Technologies Inc SDM845 SoCs.
-
- Required properties:
- - compatible:
-     Usage: required
-     Value type: <string>
-     Definition: must be:
-                 "qcom,sdm845-pdc-global"
-
- - reg:
-     Usage: required
-     Value type: <prop-encoded-array>
-     Definition: must specify the base address and size of the register
-                 space.
-
- - #reset-cells:
-     Usage: required
-     Value type: <uint>
-     Definition: must be 1; cell entry represents the reset index.
-
- Example:
-
- pdc_reset: reset-controller@b2e0000 {
-     compatible = "qcom,sdm845-pdc-global";
-     reg = <0xb2e0000 0x20000>;
-     #reset-cells = <1>;
- };
-
- PDC reset clients
- ======================================
-
- Device nodes that need access to reset lines should
- specify them as a reset phandle in their corresponding node as
- specified in reset.txt.
-
- For a list of all valid reset indices see
- <dt-bindings/reset/qcom,sdm845-pdc.h>
-
- Example:
-
- modem-pil@4080000 {
-     ...
-
-     resets = <&pdc_reset PDC_MODEM_SYNC_RESET>;
-     reset-names = "pdc_reset";
-
-     ...
- };
+47
Documentation/devicetree/bindings/reset/qcom,pdc-global.yaml
···
+ # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+ %YAML 1.2
+ ---
+ $id: http://devicetree.org/schemas/reset/qcom,pdc-global.yaml#
+ $schema: http://devicetree.org/meta-schemas/core.yaml#
+
+ title: Qualcomm PDC Global
+
+ maintainers:
+   - Sibi Sankar <sibis@codeaurora.org>
+
+ description:
+   The bindings describes the reset-controller found on PDC-Global (Power Domain
+   Controller) block for Qualcomm Technologies Inc SoCs.
+
+ properties:
+   compatible:
+     oneOf:
+       - description: on SC7180 SoCs the following compatibles must be specified
+         items:
+           - const: "qcom,sc7180-pdc-global"
+           - const: "qcom,sdm845-pdc-global"
+
+       - description: on SDM845 SoCs the following compatibles must be specified
+         items:
+           - const: "qcom,sdm845-pdc-global"
+
+   reg:
+     maxItems: 1
+
+   '#reset-cells':
+     const: 1
+
+ required:
+   - compatible
+   - reg
+   - '#reset-cells'
+
+ additionalProperties: false
+
+ examples:
+   - |
+     pdc_reset: reset-controller@b2e0000 {
+       compatible = "qcom,sdm845-pdc-global";
+       reg = <0xb2e0000 0x20000>;
+       #reset-cells = <1>;
+     };
+3 -2
Documentation/devicetree/bindings/reset/uniphier-reset.txt
···
  Required properties:
  - compatible: Should be
      "socionext,uniphier-pro4-usb3-reset" - for Pro4 SoC USB3
+     "socionext,uniphier-pro5-usb3-reset" - for Pro5 SoC USB3
      "socionext,uniphier-pxs2-usb3-reset" - for PXs2 SoC USB3
      "socionext,uniphier-ld20-usb3-reset" - for LD20 SoC USB3
      "socionext,uniphier-pxs3-usb3-reset" - for PXs3 SoC USB3
···
  - clocks: A list of phandles to the clock gate for the glue layer.
      According to the clock-names, appropriate clocks are required.
  - clock-names: Should contain
-     "gio", "link" - for Pro4 SoC
+     "gio", "link" - for Pro4 and Pro5 SoCs
      "link" - for others
  - resets: A list of phandles to the reset control for the glue layer.
      According to the reset-names, appropriate resets are required.
  - reset-names: Should contain
-     "gio", "link" - for Pro4 SoC
+     "gio", "link" - for Pro4 and Pro5 SoCs
      "link" - for others

  Example:
+10 -4
Documentation/devicetree/bindings/soc/fsl/rcpm.txt
···

  Required properites:
  - reg : Offset and length of the register set of the RCPM block.
- - fsl,#rcpm-wakeup-cells : The number of IPPDEXPCR register cells in the
+ - #fsl,rcpm-wakeup-cells : The number of IPPDEXPCR register cells in the
    fsl,rcpm-wakeup property.
  - compatible : Must contain a chip-specific RCPM block compatible string
    and (if applicable) may contain a chassis-version RCPM compatible
···
      * "fsl,qoriq-rcpm-1.0": for chassis 1.0 rcpm
      * "fsl,qoriq-rcpm-2.0": for chassis 2.0 rcpm
      * "fsl,qoriq-rcpm-2.1": for chassis 2.1 rcpm
+     * "fsl,qoriq-rcpm-2.1+": for chassis 2.1+ rcpm

  All references to "1.0" and "2.0" refer to the QorIQ chassis version to
  which the chip complies.
···
  ---------------    -------------------------------
  1.0                p4080, p5020, p5040, p2041, p3041
  2.0                t4240, b4860, b4420
- 2.1                t1040, ls1021
+ 2.1                t1040,
+ 2.1+               ls1021a, ls1012a, ls1043a, ls1046a
+
+ Optional properties:
+ - little-endian : RCPM register block is Little Endian. Without it RCPM
+   will be Big Endian (default case).

  Example:
  The RCPM node for T4240:
      rcpm: global-utilities@e2000 {
          compatible = "fsl,t4240-rcpm", "fsl,qoriq-rcpm-2.0";
          reg = <0xe2000 0x1000>;
-         fsl,#rcpm-wakeup-cells = <2>;
+         #fsl,rcpm-wakeup-cells = <2>;
      };

  * Freescale RCPM Wakeup Source Device Tree Bindings
···

  - fsl,rcpm-wakeup: Consists of a phandle to the rcpm node and the IPPDEXPCR
    register cells. The number of IPPDEXPCR register cells is defined in
-   "fsl,#rcpm-wakeup-cells" in the rcpm node. The first register cell is
+   "#fsl,rcpm-wakeup-cells" in the rcpm node. The first register cell is
    the bit mask that should be set in IPPDEXPCR0, and the second register
    cell is for IPPDEXPCR1, and so on.
+1
Documentation/devicetree/bindings/soc/qcom/qcom,smd-rpm.txt
···
      "qcom,rpm-apq8084"
      "qcom,rpm-msm8916"
      "qcom,rpm-msm8974"
+     "qcom,rpm-msm8976"
      "qcom,rpm-msm8998"
      "qcom,rpm-sdm660"
      "qcom,rpm-qcs404"
+17
MAINTAINERS
···

  ARM/QUALCOMM SUPPORT
  M:	Andy Gross <agross@kernel.org>
+ M:	Bjorn Andersson <bjorn.andersson@linaro.org>
  L:	linux-arm-msm@vger.kernel.org
  S:	Maintained
  F:	Documentation/devicetree/bindings/soc/qcom/
···
  F:	include/linux/dma-direct.h
  F:	include/linux/dma-mapping.h
  F:	include/linux/dma-noncoherent.h
+
+ DMC FREQUENCY DRIVER FOR SAMSUNG EXYNOS5422
+ M:	Lukasz Luba <l.luba@partner.samsung.com>
+ L:	linux-pm@vger.kernel.org
+ L:	linux-samsung-soc@vger.kernel.org
+ S:	Maintained
+ F:	drivers/memory/samsung/exynos5422-dmc.c
+ F:	Documentation/devicetree/bindings/memory-controllers/exynos5422-dmc.txt

  DME1737 HARDWARE MONITOR DRIVER
  M:	Juerg Haefliger <juergh@gmail.com>
···
  F:	arch/arm/mach-mmp/
  F:	linux/soc/mmp/

+ MMP USB PHY DRIVERS
+ R:	Lubomir Rintel <lkundrak@v3.sk>
+ L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
+ S:	Maintained
+ F:	drivers/phy/marvell/phy-mmp3-usb.c
+ F:	drivers/phy/marvell/phy-pxa-usb.c
+
  MMU GATHER AND TLB INVALIDATION
  M:	Will Deacon <will@kernel.org>
  M:	"Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
···
  F:	include/linux/reset.h
  F:	include/linux/reset/
  F:	include/linux/reset-controller.h
+ K:	\b(?:devm_|of_)?reset_control(?:ler_[a-z]+|_[a-z_]+)?\b

  RESTARTABLE SEQUENCES SUPPORT
  M:	Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
+1
arch/arm/mach-omap2/Kconfig
···
	select TI_SYSC
	select OMAP_IRQCHIP
	select CLKSRC_TI_32K
+	select ARCH_HAS_RESET_CONTROLLER
	help
	  Systems based on OMAP2, OMAP3, OMAP4 or OMAP5

+54
drivers/base/power/wakeup.c
···
  EXPORT_SYMBOL_GPL(wakeup_source_unregister);

  /**
+  * wakeup_sources_read_lock - Lock wakeup source list for read.
+  *
+  * Returns an index of srcu lock for struct wakeup_srcu.
+  * This index must be passed to the matching wakeup_sources_read_unlock().
+  */
+ int wakeup_sources_read_lock(void)
+ {
+ 	return srcu_read_lock(&wakeup_srcu);
+ }
+ EXPORT_SYMBOL_GPL(wakeup_sources_read_lock);
+
+ /**
+  * wakeup_sources_read_unlock - Unlock wakeup source list.
+  * @idx: return value from corresponding wakeup_sources_read_lock()
+  */
+ void wakeup_sources_read_unlock(int idx)
+ {
+ 	srcu_read_unlock(&wakeup_srcu, idx);
+ }
+ EXPORT_SYMBOL_GPL(wakeup_sources_read_unlock);
+
+ /**
+  * wakeup_sources_walk_start - Begin a walk on wakeup source list
+  *
+  * Returns first object of the list of wakeup sources.
+  *
+  * Note that to be safe, wakeup sources list needs to be locked by calling
+  * wakeup_source_read_lock() for this.
+  */
+ struct wakeup_source *wakeup_sources_walk_start(void)
+ {
+ 	struct list_head *ws_head = &wakeup_sources;
+
+ 	return list_entry_rcu(ws_head->next, struct wakeup_source, entry);
+ }
+ EXPORT_SYMBOL_GPL(wakeup_sources_walk_start);
+
+ /**
+  * wakeup_sources_walk_next - Get next wakeup source from the list
+  * @ws: Previous wakeup source object
+  *
+  * Note that to be safe, wakeup sources list needs to be locked by calling
+  * wakeup_source_read_lock() for this.
+  */
+ struct wakeup_source *wakeup_sources_walk_next(struct wakeup_source *ws)
+ {
+ 	struct list_head *ws_head = &wakeup_sources;
+
+ 	return list_next_or_null_rcu(ws_head, &ws->entry,
+ 				struct wakeup_source, entry);
+ }
+ EXPORT_SYMBOL_GPL(wakeup_sources_walk_next);
+
+ /**
   * device_wakeup_attach - Attach a wakeup source object to a device object.
   * @dev: Device to handle.
   * @ws: Wakeup source object to attach to @dev.
+3 -2
drivers/bus/Kconfig
···

  config HISILICON_LPC
	bool "Support for ISA I/O space on HiSilicon Hip06/7"
-	depends on ARM64 && (ARCH_HISI || COMPILE_TEST)
-	select INDIRECT_PIO
+	depends on (ARM64 && ARCH_HISI) || (COMPILE_TEST && !ALPHA && !HEXAGON && !PARISC && !C6X)
+	depends on HAS_IOMEM
+	select INDIRECT_PIO if ARM64
	help
	  Driver to enable I/O access to devices attached to the Low Pin
	  Count bus on the HiSilicon Hip06/7 SoC.
+4 -5
drivers/bus/hisi_lpc.c
···
  /* About 10us. This is specific for single IO operations, such as inb */
  #define LPC_PEROP_WAITCNT	100

- static int wait_lpc_idle(unsigned char *mbase, unsigned int waitcnt)
+ static int wait_lpc_idle(void __iomem *mbase, unsigned int waitcnt)
  {
  	u32 status;

···
  	struct hisi_lpc_dev *lpcdev = hostdata;
  	struct lpc_cycle_para iopara;
  	unsigned long addr;
- 	u32 rd_data = 0;
+ 	__le32 rd_data = 0;
  	int ret;

  	if (!lpcdev || !dwidth || dwidth > LPC_MAX_DWIDTH)
···
  	struct lpc_cycle_para iopara;
  	const unsigned char *buf;
  	unsigned long addr;
+ 	__le32 _val = cpu_to_le32(val);

  	if (!lpcdev || !dwidth || dwidth > LPC_MAX_DWIDTH)
  		return;

- 	val = cpu_to_le32(val);
-
- 	buf = (const unsigned char *)&val;
+ 	buf = (const unsigned char *)&_val;
  	addr = hisi_lpc_pio_to_addr(lpcdev, pio);

  	iopara.opflags = FG_INCRADDR_LPC;
+33 -54
drivers/bus/ti-sysc.c
···
  		return -EINVAL;
  	}

+ 	if (ddata->cfg.quirks & SYSC_QUIRK_SWSUP_MSTANDBY)
+ 		best_mode = SYSC_IDLE_NO;
+
  	reg &= ~(SYSC_IDLE_MASK << regbits->midle_shift);
  	reg |= best_mode << regbits->midle_shift;
  	sysc_write(ddata, ddata->offsets[SYSC_SYSCONFIG], reg);
···
  		return ret;
  	}

+ 	if (ddata->cfg.quirks & SYSC_QUIRK_SWSUP_MSTANDBY)
+ 		best_mode = SYSC_IDLE_FORCE;
+
  	reg &= ~(SYSC_IDLE_MASK << regbits->midle_shift);
  	reg |= best_mode << regbits->midle_shift;
  	sysc_write(ddata, ddata->offsets[SYSC_SYSCONFIG], reg);
···
  	struct ti_sysc_platform_data *pdata;
  	int error;

- 	reset_control_deassert(ddata->rsts);
-
  	pdata = dev_get_platdata(ddata->dev);
  	if (!pdata)
  		return 0;
···
  	if (error)
  		dev_err(dev, "%s: could not enable: %i\n",
  			__func__, error);
+
+ 	reset_control_deassert(ddata->rsts);

  	return 0;
  }
···

  	sysc_clkdm_deny_idle(ddata);

- 	reset_control_deassert(ddata->rsts);
-
  	if (sysc_opt_clks_needed(ddata)) {
  		error = sysc_enable_opt_clocks(ddata);
  		if (error)
···
  	error = sysc_enable_main_clocks(ddata);
  	if (error)
  		goto err_opt_clocks;
+
+ 	reset_control_deassert(ddata->rsts);

  	if (ddata->legacy_mode) {
  		error = sysc_runtime_resume_legacy(dev, ddata);
···
  	SYSC_QUIRK("gpu", 0x50000000, 0x14, -1, -1, 0x00010201, 0xffffffff, 0),
  	SYSC_QUIRK("gpu", 0x50000000, 0xfe00, 0xfe10, -1, 0x40000000 , 0xffffffff,
  		   SYSC_MODULE_QUIRK_SGX),
+ 	SYSC_QUIRK("usb_otg_hs", 0, 0x400, 0x404, 0x408, 0x00000050,
+ 		   0xffffffff, SYSC_QUIRK_SWSUP_SIDLE | SYSC_QUIRK_SWSUP_MSTANDBY),
+ 	SYSC_QUIRK("usb_otg_hs", 0, 0, 0x10, -1, 0x4ea2080d, 0xffffffff,
+ 		   SYSC_QUIRK_SWSUP_SIDLE | SYSC_QUIRK_SWSUP_MSTANDBY),
  	SYSC_QUIRK("wdt", 0, 0, 0x10, 0x14, 0x502a0500, 0xfffff0f0,
  		   SYSC_MODULE_QUIRK_WDT),
  	/* Watchdog on am3 and am4 */
···
  	SYSC_QUIRK("usbhstll", 0, 0, 0x10, 0x14, 0x00000008, 0xffffffff, 0),
  	SYSC_QUIRK("usb_host_hs", 0, 0, 0x10, 0x14, 0x50700100, 0xffffffff, 0),
  	SYSC_QUIRK("usb_host_hs", 0, 0, 0x10, -1, 0x50700101, 0xffffffff, 0),
- 	SYSC_QUIRK("usb_otg_hs", 0, 0x400, 0x404, 0x408, 0x00000050,
- 		   0xffffffff, 0),
  	SYSC_QUIRK("vfpe", 0, 0, 0x104, -1, 0x4d001200, 0xffffffff, 0),
  #endif
  };
···
  		return error;
  }

- /**
-  * sysc_rstctrl_reset_deassert - deassert rstctrl reset
-  * @ddata: device driver data
-  * @reset: reset before deassert
-  *
-  * A module can have both OCP softreset control and external rstctrl.
-  * If more complicated rstctrl resets are needed, please handle these
-  * directly from the child device driver and map only the module reset
-  * for the parent interconnect target module device.
-  *
-  * Automatic reset of the module on init can be skipped with the
-  * "ti,no-reset-on-init" device tree property.
-  */
- static int sysc_rstctrl_reset_deassert(struct sysc *ddata, bool reset)
- {
- 	int error;
-
- 	if (!ddata->rsts)
- 		return 0;
-
- 	if (reset) {
- 		error = reset_control_assert(ddata->rsts);
- 		if (error)
- 			return error;
- 	}
-
- 	reset_control_deassert(ddata->rsts);
-
- 	return 0;
- }
-
  /*
   * Note that the caller must ensure the interconnect target module is enabled
   * before calling reset. Otherwise reset will not complete.
···
  static int sysc_init_module(struct sysc *ddata)
  {
  	int error = 0;
- 	bool manage_clocks = true;
-
- 	error = sysc_rstctrl_reset_deassert(ddata, false);
- 	if (error)
- 		return error;
-
- 	if (ddata->cfg.quirks &
- 	    (SYSC_QUIRK_NO_IDLE | SYSC_QUIRK_NO_IDLE_ON_INIT))
- 		manage_clocks = false;

  	error = sysc_clockdomain_init(ddata);
  	if (error)
···
  		goto err_opt_clocks;

  	if (!(ddata->cfg.quirks & SYSC_QUIRK_NO_RESET_ON_INIT)) {
- 		error = sysc_rstctrl_reset_deassert(ddata, true);
+ 		error = reset_control_deassert(ddata->rsts);
  		if (error)
  			goto err_main_clocks;
  	}
···
  	if (ddata->legacy_mode) {
  		error = sysc_legacy_init(ddata);
  		if (error)
- 			goto err_main_clocks;
+ 			goto err_reset;
  	}

  	if (!ddata->legacy_mode) {
  		error = sysc_enable_module(ddata->dev);
  		if (error)
- 			goto err_main_clocks;
+ 			goto err_reset;
  	}

  	error = sysc_reset(ddata);
  	if (error)
  		dev_err(ddata->dev, "Reset failed with %d\n", error);

- 	if (!ddata->legacy_mode && manage_clocks)
+ 	if (error && !ddata->legacy_mode)
  		sysc_disable_module(ddata->dev);

+ err_reset:
+ 	if (error && !(ddata->cfg.quirks & SYSC_QUIRK_NO_RESET_ON_INIT))
+ 		reset_control_assert(ddata->rsts);
+
  err_main_clocks:
- 	if (manage_clocks)
+ 	if (error)
  		sysc_disable_main_clocks(ddata);
  err_opt_clocks:
  	/* No re-enable of clockdomain autoidle to prevent module autoidle */
- 	if (manage_clocks) {
+ 	if (error) {
  		sysc_disable_opt_clocks(ddata);
  		sysc_clkdm_allow_idle(ddata);
  	}
···
  		goto unprepare;
  	}

- 	/* Balance reset counts */
- 	if (ddata->rsts)
+ 	/* Balance use counts as PM runtime should have enabled these all */
+ 	if (!(ddata->cfg.quirks & SYSC_QUIRK_NO_RESET_ON_INIT))
  		reset_control_assert(ddata->rsts);
+
+ 	if (!(ddata->cfg.quirks &
+ 	      (SYSC_QUIRK_NO_IDLE | SYSC_QUIRK_NO_IDLE_ON_INIT))) {
+ 		sysc_disable_main_clocks(ddata);
+ 		sysc_disable_opt_clocks(ddata);
+ 		sysc_clkdm_allow_idle(ddata);
+ 	}

  	sysc_show_registers(ddata);

+1 -1
drivers/firmware/arm_scmi/perf.c
···

  	if (db->mask)
  		val = ioread64_hi_lo(db->addr) & db->mask;
- 	iowrite64_hi_lo(db->set, db->addr);
+ 	iowrite64_hi_lo(db->set | val, db->addr);
  }
  #endif
  }
+1 -1
drivers/firmware/imx/imx-dsp.c
···

  	dev_info(dev, "NXP i.MX DSP IPC initialized\n");

- 	return devm_of_platform_populate(dev);
+ 	return 0;
  out:
  	kfree(chan_name);
  	for (j = 0; j < i; j++) {
+1
drivers/firmware/imx/imx-scu-irq.c
···

  #include <dt-bindings/firmware/imx/rsrc.h>
  #include <linux/firmware/imx/ipc.h>
+ #include <linux/firmware/imx/sci.h>
  #include <linux/mailbox_client.h>

  #define IMX_SC_IRQ_FUNC_ENABLE	1
+23 -1
drivers/firmware/imx/imx-scu.c
···
  	struct imx_sc_rpc_msg *hdr;
  	u32 *data = msg;

+ 	if (!sc_ipc->msg) {
+ 		dev_warn(sc_ipc->dev, "unexpected rx idx %d 0x%08x, ignore!\n",
+ 			 sc_chan->idx, *data);
+ 		return;
+ 	}
+
  	if (sc_chan->idx == 0) {
  		hdr = msg;
  		sc_ipc->rx_size = hdr->size;
···
   */
  int imx_scu_call_rpc(struct imx_sc_ipc *sc_ipc, void *msg, bool have_resp)
  {
+ 	uint8_t saved_svc, saved_func;
  	struct imx_sc_rpc_msg *hdr;
  	int ret;

···
  	mutex_lock(&sc_ipc->lock);
  	reinit_completion(&sc_ipc->done);

- 	sc_ipc->msg = msg;
+ 	if (have_resp) {
+ 		sc_ipc->msg = msg;
+ 		saved_svc = ((struct imx_sc_rpc_msg *)msg)->svc;
+ 		saved_func = ((struct imx_sc_rpc_msg *)msg)->func;
+ 	}
  	sc_ipc->count = 0;
  	ret = imx_scu_ipc_write(sc_ipc, msg);
  	if (ret < 0) {
···
  		/* response status is stored in hdr->func field */
  		hdr = msg;
  		ret = hdr->func;
+ 		/*
+ 		 * Some special SCU firmware APIs do NOT have return value
+ 		 * in hdr->func, but they do have response data, those special
+ 		 * APIs are defined as void function in SCU firmware, so they
+ 		 * should be treated as return success always.
+ 		 */
+ 		if ((saved_svc == IMX_SC_RPC_SVC_MISC) &&
+ 		    (saved_func == IMX_SC_MISC_FUNC_UNIQUE_ID ||
+ 		     saved_func == IMX_SC_MISC_FUNC_GET_BUTTON_STATUS))
+ 			ret = 0;
  	}

  out:
+ 	sc_ipc->msg = NULL;
  	mutex_unlock(&sc_ipc->lock);

  	dev_dbg(sc_ipc->dev, "RPC SVC done\n");
+66 -44
drivers/firmware/meson/meson_sm.c
··· 35 35 struct meson_sm_cmd cmd[]; 36 36 }; 37 37 38 - struct meson_sm_chip gxbb_chip = { 38 + static const struct meson_sm_chip gxbb_chip = { 39 39 .shmem_size = SZ_4K, 40 40 .cmd_shmem_in_base = 0x82000020, 41 41 .cmd_shmem_out_base = 0x82000021, ··· 53 53 void __iomem *sm_shmem_in_base; 54 54 void __iomem *sm_shmem_out_base; 55 55 }; 56 - 57 - static struct meson_sm_firmware fw; 58 56 59 57 static u32 meson_sm_get_cmd(const struct meson_sm_chip *chip, 60 58 unsigned int cmd_index) ··· 88 90 /** 89 91 * meson_sm_call - generic SMC32 call to the secure-monitor 90 92 * 93 + * @fw: Pointer to secure-monitor firmware 91 94 * @cmd_index: Index of the SMC32 function ID 92 95 * @ret: Returned value 93 96 * @arg0: SMC32 Argument 0 ··· 99 100 * 100 101 * Return: 0 on success, a negative value on error 101 102 */ 102 - int meson_sm_call(unsigned int cmd_index, u32 *ret, u32 arg0, 103 - u32 arg1, u32 arg2, u32 arg3, u32 arg4) 103 + int meson_sm_call(struct meson_sm_firmware *fw, unsigned int cmd_index, 104 + u32 *ret, u32 arg0, u32 arg1, u32 arg2, u32 arg3, u32 arg4) 104 105 { 105 106 u32 cmd, lret; 106 107 107 - if (!fw.chip) 108 + if (!fw->chip) 108 109 return -ENOENT; 109 110 110 - cmd = meson_sm_get_cmd(fw.chip, cmd_index); 111 + cmd = meson_sm_get_cmd(fw->chip, cmd_index); 111 112 if (!cmd) 112 113 return -EINVAL; 113 114 ··· 123 124 /** 124 125 * meson_sm_call_read - retrieve data from secure-monitor 125 126 * 127 + * @fw: Pointer to secure-monitor firmware 126 128 * @buffer: Buffer to store the retrieved data 127 129 * @bsize: Size of the buffer 128 130 * @cmd_index: Index of the SMC32 function ID ··· 137 137 * When 0 is returned there is no guarantee about the amount of 138 138 * data read and bsize bytes are copied in buffer. 
139 139 */ 140 - int meson_sm_call_read(void *buffer, unsigned int bsize, unsigned int cmd_index, 141 - u32 arg0, u32 arg1, u32 arg2, u32 arg3, u32 arg4) 140 + int meson_sm_call_read(struct meson_sm_firmware *fw, void *buffer, 141 + unsigned int bsize, unsigned int cmd_index, u32 arg0, 142 + u32 arg1, u32 arg2, u32 arg3, u32 arg4) 142 143 { 143 144 u32 size; 144 145 int ret; 145 146 146 - if (!fw.chip) 147 + if (!fw->chip) 147 148 return -ENOENT; 148 149 149 - if (!fw.chip->cmd_shmem_out_base) 150 + if (!fw->chip->cmd_shmem_out_base) 150 151 return -EINVAL; 151 152 152 - if (bsize > fw.chip->shmem_size) 153 + if (bsize > fw->chip->shmem_size) 153 154 return -EINVAL; 154 155 155 - if (meson_sm_call(cmd_index, &size, arg0, arg1, arg2, arg3, arg4) < 0) 156 + if (meson_sm_call(fw, cmd_index, &size, arg0, arg1, arg2, arg3, arg4) < 0) 156 157 return -EINVAL; 157 158 158 159 if (size > bsize) ··· 165 164 size = bsize; 166 165 167 166 if (buffer) 168 - memcpy(buffer, fw.sm_shmem_out_base, size); 167 + memcpy(buffer, fw->sm_shmem_out_base, size); 169 168 170 169 return ret; 171 170 } ··· 174 173 /** 175 174 * meson_sm_call_write - send data to secure-monitor 176 175 * 176 + * @fw: Pointer to secure-monitor firmware 177 177 * @buffer: Buffer containing data to send 178 178 * @size: Size of the data to send 179 179 * @cmd_index: Index of the SMC32 function ID ··· 186 184 * 187 185 * Return: size of sent data on success, a negative value on error 188 186 */ 189 - int meson_sm_call_write(void *buffer, unsigned int size, unsigned int cmd_index, 190 - u32 arg0, u32 arg1, u32 arg2, u32 arg3, u32 arg4) 187 + int meson_sm_call_write(struct meson_sm_firmware *fw, void *buffer, 188 + unsigned int size, unsigned int cmd_index, u32 arg0, 189 + u32 arg1, u32 arg2, u32 arg3, u32 arg4) 191 190 { 192 191 u32 written; 193 192 194 - if (!fw.chip) 193 + if (!fw->chip) 195 194 return -ENOENT; 196 195 197 - if (size > fw.chip->shmem_size) 196 + if (size > fw->chip->shmem_size) 198 197 return 
-EINVAL; 199 198 200 - if (!fw.chip->cmd_shmem_in_base) 199 + if (!fw->chip->cmd_shmem_in_base) 201 200 return -EINVAL; 202 201 203 - memcpy(fw.sm_shmem_in_base, buffer, size); 202 + memcpy(fw->sm_shmem_in_base, buffer, size); 204 203 205 - if (meson_sm_call(cmd_index, &written, arg0, arg1, arg2, arg3, arg4) < 0) 204 + if (meson_sm_call(fw, cmd_index, &written, arg0, arg1, arg2, arg3, arg4) < 0) 206 205 return -EINVAL; 207 206 208 207 if (!written) ··· 213 210 } 214 211 EXPORT_SYMBOL(meson_sm_call_write); 215 212 213 + /** 214 + * meson_sm_get - get pointer to meson_sm_firmware structure. 215 + * 216 + * @sm_node: Pointer to the secure-monitor Device Tree node. 217 + * 218 + * Return: NULL if the secure-monitor device is not ready. 219 + */ 220 + struct meson_sm_firmware *meson_sm_get(struct device_node *sm_node) 221 + { 222 + struct platform_device *pdev = of_find_device_by_node(sm_node); 223 + 224 + if (!pdev) 225 + return NULL; 226 + 227 + return platform_get_drvdata(pdev); 228 + } 229 + EXPORT_SYMBOL_GPL(meson_sm_get); 230 + 216 231 #define SM_CHIP_ID_LENGTH 119 217 232 #define SM_CHIP_ID_OFFSET 4 218 233 #define SM_CHIP_ID_SIZE 12 ··· 238 217 static ssize_t serial_show(struct device *dev, struct device_attribute *attr, 239 218 char *buf) 240 219 { 220 + struct platform_device *pdev = to_platform_device(dev); 221 + struct meson_sm_firmware *fw; 241 222 uint8_t *id_buf; 242 223 int ret; 224 + 225 + fw = platform_get_drvdata(pdev); 243 226 244 227 id_buf = kmalloc(SM_CHIP_ID_LENGTH, GFP_KERNEL); 245 228 if (!id_buf) 246 229 return -ENOMEM; 247 230 248 - ret = meson_sm_call_read(id_buf, SM_CHIP_ID_LENGTH, SM_GET_CHIP_ID, 231 + ret = meson_sm_call_read(fw, id_buf, SM_CHIP_ID_LENGTH, SM_GET_CHIP_ID, 249 232 0, 0, 0, 0, 0); 250 233 if (ret < 0) { 251 234 kfree(id_buf); 252 235 return ret; 253 236 } 254 237 255 - ret = sprintf(buf, "%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x\n", 256 - id_buf[SM_CHIP_ID_OFFSET + 0], 257 - id_buf[SM_CHIP_ID_OFFSET + 1], 258 -
id_buf[SM_CHIP_ID_OFFSET + 2], 259 - id_buf[SM_CHIP_ID_OFFSET + 3], 260 - id_buf[SM_CHIP_ID_OFFSET + 4], 261 - id_buf[SM_CHIP_ID_OFFSET + 5], 262 - id_buf[SM_CHIP_ID_OFFSET + 6], 263 - id_buf[SM_CHIP_ID_OFFSET + 7], 264 - id_buf[SM_CHIP_ID_OFFSET + 8], 265 - id_buf[SM_CHIP_ID_OFFSET + 9], 266 - id_buf[SM_CHIP_ID_OFFSET + 10], 267 - id_buf[SM_CHIP_ID_OFFSET + 11]); 238 + ret = sprintf(buf, "%12phN\n", &id_buf[SM_CHIP_ID_OFFSET]); 268 239 269 240 kfree(id_buf); 270 241 ··· 281 268 282 269 static int __init meson_sm_probe(struct platform_device *pdev) 283 270 { 271 + struct device *dev = &pdev->dev; 284 272 const struct meson_sm_chip *chip; 273 + struct meson_sm_firmware *fw; 285 274 286 - chip = of_match_device(meson_sm_ids, &pdev->dev)->data; 275 + fw = devm_kzalloc(dev, sizeof(*fw), GFP_KERNEL); 276 + if (!fw) 277 + return -ENOMEM; 278 + 279 + chip = of_match_device(meson_sm_ids, dev)->data; 287 280 288 281 if (chip->cmd_shmem_in_base) { 289 - fw.sm_shmem_in_base = meson_sm_map_shmem(chip->cmd_shmem_in_base, 290 - chip->shmem_size); 291 - if (WARN_ON(!fw.sm_shmem_in_base)) 282 + fw->sm_shmem_in_base = meson_sm_map_shmem(chip->cmd_shmem_in_base, 283 + chip->shmem_size); 284 + if (WARN_ON(!fw->sm_shmem_in_base)) 292 285 goto out; 293 286 } 294 287 295 288 if (chip->cmd_shmem_out_base) { 296 - fw.sm_shmem_out_base = meson_sm_map_shmem(chip->cmd_shmem_out_base, 297 - chip->shmem_size); 298 - if (WARN_ON(!fw.sm_shmem_out_base)) 289 + fw->sm_shmem_out_base = meson_sm_map_shmem(chip->cmd_shmem_out_base, 290 + chip->shmem_size); 291 + if (WARN_ON(!fw->sm_shmem_out_base)) 299 292 goto out_in_base; 300 293 } 301 294 302 - fw.chip = chip; 295 + fw->chip = chip; 296 + 297 + platform_set_drvdata(pdev, fw); 298 + 303 299 pr_info("secure-monitor enabled\n"); 304 300 305 301 if (sysfs_create_group(&pdev->dev.kobj, &meson_sm_sysfs_attr_group)) ··· 317 295 return 0; 318 296 319 297 out_in_base: 320 - iounmap(fw.sm_shmem_in_base); 298 + iounmap(fw->sm_shmem_in_base); 321 299 out: 322 
300 return -EINVAL; 323 301 }
+1 -1
drivers/firmware/tegra/bpmp.c
··· 804 804 } 805 805 806 806 static const struct dev_pm_ops tegra_bpmp_pm_ops = { 807 - .resume_early = tegra_bpmp_resume, 807 + .resume_noirq = tegra_bpmp_resume, 808 808 }; 809 809 810 810 #if IS_ENABLED(CONFIG_ARCH_TEGRA_186_SOC) || \
+6 -2
drivers/firmware/xilinx/zynqmp.c
··· 711 711 int ret; 712 712 713 713 np = of_find_compatible_node(NULL, NULL, "xlnx,zynqmp"); 714 - if (!np) 715 - return 0; 714 + if (!np) { 715 + np = of_find_compatible_node(NULL, NULL, "xlnx,versal"); 716 + if (!np) 717 + return 0; 718 + } 716 719 of_node_put(np); 717 720 718 721 ret = get_set_conduit_method(dev->of_node); ··· 773 770 774 771 static const struct of_device_id zynqmp_firmware_of_match[] = { 775 772 {.compatible = "xlnx,zynqmp-firmware"}, 773 + {.compatible = "xlnx,versal-firmware"}, 776 774 {}, 777 775 }; 778 776 MODULE_DEVICE_TABLE(of, zynqmp_firmware_of_match);
+5 -6
drivers/memory/atmel-ebi.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * EBI driver for Atmel chips 3 4 * inspired by the fsl weim bus driver 4 5 * 5 6 * Copyright (C) 2013 Jean-Jacques Hiblot <jjhiblot@traphandler.com> 6 - * 7 - * This file is licensed under the terms of the GNU General Public 8 - * License version 2. This program is licensed "as is" without any 9 - * warranty of any kind, whether express or implied. 10 7 */ 11 8 12 9 #include <linux/clk.h> ··· 15 18 #include <linux/of_device.h> 16 19 #include <linux/regmap.h> 17 20 #include <soc/at91/atmel-sfr.h> 21 + 22 + #define AT91_EBI_NUM_CS 8 18 23 19 24 struct atmel_ebi_dev_config { 20 25 int cs; ··· 313 314 if (ret) 314 315 return ret; 315 316 316 - if (cs >= AT91_MATRIX_EBI_NUM_CS || 317 + if (cs >= AT91_EBI_NUM_CS || 317 318 !(ebi->caps->available_cs & BIT(cs))) { 318 319 dev_err(dev, "invalid reg property in %pOF\n", np); 319 320 return -EINVAL; ··· 343 344 apply = true; 344 345 345 346 i = 0; 346 - for_each_set_bit(cs, &cslines, AT91_MATRIX_EBI_NUM_CS) { 347 + for_each_set_bit(cs, &cslines, AT91_EBI_NUM_CS) { 347 348 ebid->configs[i].cs = cs; 348 349 349 350 if (apply) {
+101 -63
drivers/memory/brcmstb_dpfe.c
··· 127 127 MSG_COMMAND, 128 128 MSG_ARG_COUNT, 129 129 MSG_ARG0, 130 - MSG_CHKSUM, 131 130 MSG_FIELD_MAX = 16 /* Max number of arguments */ 132 131 }; 133 132 ··· 179 180 }; 180 181 181 182 /* Things we need for as long as we are active. */ 182 - struct private_data { 183 + struct brcmstb_dpfe_priv { 183 184 void __iomem *regs; 184 185 void __iomem *dmem; 185 186 void __iomem *imem; ··· 231 232 }; 232 233 ATTRIBUTE_GROUPS(dpfe_v3); 233 234 234 - /* API v2 firmware commands */ 235 - static const struct dpfe_api dpfe_api_v2 = { 236 - .version = 2, 235 + /* 236 + * Old API v2 firmware commands, as defined in the rev 0.61 specification, we 237 + * use a version set to 1 to denote that it is not compatible with the new API 238 + * v2 and onwards. 239 + */ 240 + static const struct dpfe_api dpfe_api_old_v2 = { 241 + .version = 1, 237 242 .fw_name = "dpfe.bin", 238 243 .sysfs_attrs = dpfe_v2_groups, 239 244 .command = { ··· 246 243 [MSG_COMMAND] = 1, 247 244 [MSG_ARG_COUNT] = 1, 248 245 [MSG_ARG0] = 1, 249 - [MSG_CHKSUM] = 4, 250 246 }, 251 247 [DPFE_CMD_GET_REFRESH] = { 252 248 [MSG_HEADER] = DPFE_MSG_TYPE_COMMAND, 253 249 [MSG_COMMAND] = 2, 254 250 [MSG_ARG_COUNT] = 1, 255 251 [MSG_ARG0] = 1, 256 - [MSG_CHKSUM] = 5, 257 252 }, 258 253 [DPFE_CMD_GET_VENDOR] = { 259 254 [MSG_HEADER] = DPFE_MSG_TYPE_COMMAND, 260 255 [MSG_COMMAND] = 2, 261 256 [MSG_ARG_COUNT] = 1, 262 257 [MSG_ARG0] = 2, 263 - [MSG_CHKSUM] = 6, 258 + }, 259 + } 260 + }; 261 + 262 + /* 263 + * API v2 firmware commands, as defined in the rev 0.8 specification, named new 264 + * v2 here 265 + */ 266 + static const struct dpfe_api dpfe_api_new_v2 = { 267 + .version = 2, 268 + .fw_name = NULL, /* We expect the firmware to have been downloaded! 
*/ 269 + .sysfs_attrs = dpfe_v2_groups, 270 + .command = { 271 + [DPFE_CMD_GET_INFO] = { 272 + [MSG_HEADER] = DPFE_MSG_TYPE_COMMAND, 273 + [MSG_COMMAND] = 0x101, 274 + }, 275 + [DPFE_CMD_GET_REFRESH] = { 276 + [MSG_HEADER] = DPFE_MSG_TYPE_COMMAND, 277 + [MSG_COMMAND] = 0x201, 278 + }, 279 + [DPFE_CMD_GET_VENDOR] = { 280 + [MSG_HEADER] = DPFE_MSG_TYPE_COMMAND, 281 + [MSG_COMMAND] = 0x202, 264 282 }, 265 283 } 266 284 }; ··· 297 273 [MSG_COMMAND] = 0x0101, 298 274 [MSG_ARG_COUNT] = 1, 299 275 [MSG_ARG0] = 1, 300 - [MSG_CHKSUM] = 0x104, 301 276 }, 302 277 [DPFE_CMD_GET_REFRESH] = { 303 278 [MSG_HEADER] = DPFE_MSG_TYPE_COMMAND, 304 279 [MSG_COMMAND] = 0x0202, 305 280 [MSG_ARG_COUNT] = 0, 306 - /* 307 - * This is a bit ugly. Without arguments, the checksum 308 - * follows right after the argument count and not at 309 - * offset MSG_CHKSUM. 310 - */ 311 - [MSG_ARG0] = 0x203, 312 281 }, 313 282 /* There's no GET_VENDOR command in API v3. */ 314 283 }, 315 284 }; 316 285 317 - static bool is_dcpu_enabled(void __iomem *regs) 286 + static bool is_dcpu_enabled(struct brcmstb_dpfe_priv *priv) 318 287 { 319 288 u32 val; 320 289 321 - val = readl_relaxed(regs + REG_DCPU_RESET); 290 + mutex_lock(&priv->lock); 291 + val = readl_relaxed(priv->regs + REG_DCPU_RESET); 292 + mutex_unlock(&priv->lock); 322 293 323 294 return !(val & DCPU_RESET_MASK); 324 295 } 325 296 326 - static void __disable_dcpu(void __iomem *regs) 297 + static void __disable_dcpu(struct brcmstb_dpfe_priv *priv) 327 298 { 328 299 u32 val; 329 300 330 - if (!is_dcpu_enabled(regs)) 301 + if (!is_dcpu_enabled(priv)) 331 302 return; 332 303 304 + mutex_lock(&priv->lock); 305 + 333 306 /* Put DCPU in reset if it's running. 
*/ 334 - val = readl_relaxed(regs + REG_DCPU_RESET); 307 + val = readl_relaxed(priv->regs + REG_DCPU_RESET); 335 308 val |= (1 << DCPU_RESET_SHIFT); 336 - writel_relaxed(val, regs + REG_DCPU_RESET); 309 + writel_relaxed(val, priv->regs + REG_DCPU_RESET); 310 + 311 + mutex_unlock(&priv->lock); 337 312 } 338 313 339 - static void __enable_dcpu(void __iomem *regs) 314 + static void __enable_dcpu(struct brcmstb_dpfe_priv *priv) 340 315 { 316 + void __iomem *regs = priv->regs; 341 317 u32 val; 318 + 319 + mutex_lock(&priv->lock); 342 320 343 321 /* Clear mailbox registers. */ 344 322 writel_relaxed(0, regs + REG_TO_DCPU_MBOX); ··· 355 329 val = readl_relaxed(regs + REG_DCPU_RESET); 356 330 val &= ~(1 << DCPU_RESET_SHIFT); 357 331 writel_relaxed(val, regs + REG_DCPU_RESET); 332 + 333 + mutex_unlock(&priv->lock); 358 334 } 359 335 360 336 static unsigned int get_msg_chksum(const u32 msg[], unsigned int max) ··· 371 343 return sum; 372 344 } 373 345 374 - static void __iomem *get_msg_ptr(struct private_data *priv, u32 response, 346 + static void __iomem *get_msg_ptr(struct brcmstb_dpfe_priv *priv, u32 response, 375 347 char *buf, ssize_t *size) 376 348 { 377 349 unsigned int msg_type; ··· 410 382 return ptr; 411 383 } 412 384 413 - static void __finalize_command(struct private_data *priv) 385 + static void __finalize_command(struct brcmstb_dpfe_priv *priv) 414 386 { 415 387 unsigned int release_mbox; 416 388 ··· 418 390 * It depends on the API version which MBOX register we have to write to 419 391 * to signal we are done. 420 392 */ 421 - release_mbox = (priv->dpfe_api->version < 3) 393 + release_mbox = (priv->dpfe_api->version < 2) 422 394 ? 
REG_TO_HOST_MBOX : REG_TO_DCPU_MBOX; 423 395 writel_relaxed(0, priv->regs + release_mbox); 424 396 } 425 397 426 - static int __send_command(struct private_data *priv, unsigned int cmd, 398 + static int __send_command(struct brcmstb_dpfe_priv *priv, unsigned int cmd, 427 399 u32 result[]) 428 400 { 429 401 const u32 *msg = priv->dpfe_api->command[cmd]; ··· 449 421 return -ETIMEDOUT; 450 422 } 451 423 424 + /* Compute checksum over the message */ 425 + chksum_idx = msg[MSG_ARG_COUNT] + MSG_ARG_COUNT + 1; 426 + chksum = get_msg_chksum(msg, chksum_idx); 427 + 452 428 /* Write command and arguments to message area */ 453 - for (i = 0; i < MSG_FIELD_MAX; i++) 454 - writel_relaxed(msg[i], regs + DCPU_MSG_RAM(i)); 429 + for (i = 0; i < MSG_FIELD_MAX; i++) { 430 + if (i == chksum_idx) 431 + writel_relaxed(chksum, regs + DCPU_MSG_RAM(i)); 432 + else 433 + writel_relaxed(msg[i], regs + DCPU_MSG_RAM(i)); 434 + } 455 435 456 436 /* Tell DCPU there is a command waiting */ 457 437 writel_relaxed(1, regs + REG_TO_DCPU_MBOX); ··· 553 517 554 518 /* Verify checksum by reading back the firmware from co-processor RAM. 
*/ 555 519 static int __verify_fw_checksum(struct init_data *init, 556 - struct private_data *priv, 520 + struct brcmstb_dpfe_priv *priv, 557 521 const struct dpfe_firmware_header *header, 558 522 u32 checksum) 559 523 { ··· 607 571 return 0; 608 572 } 609 573 610 - static int brcmstb_dpfe_download_firmware(struct platform_device *pdev, 611 - struct init_data *init) 574 + static int brcmstb_dpfe_download_firmware(struct brcmstb_dpfe_priv *priv) 612 575 { 613 576 const struct dpfe_firmware_header *header; 614 577 unsigned int dmem_size, imem_size; 615 - struct device *dev = &pdev->dev; 578 + struct device *dev = priv->dev; 616 579 bool is_big_endian = false; 617 - struct private_data *priv; 618 580 const struct firmware *fw; 619 581 const u32 *dmem, *imem; 582 + struct init_data init; 620 583 const void *fw_blob; 621 584 int ret; 622 - 623 - priv = platform_get_drvdata(pdev); 624 585 625 586 /* 626 587 * Skip downloading the firmware if the DCPU is already running and 627 588 * responding to commands. 628 589 */ 629 - if (is_dcpu_enabled(priv->regs)) { 590 + if (is_dcpu_enabled(priv)) { 630 591 u32 response[MSG_FIELD_MAX]; 631 592 632 593 ret = __send_command(priv, DPFE_CMD_GET_INFO, response); ··· 639 606 if (!priv->dpfe_api->fw_name) 640 607 return -ENODEV; 641 608 642 - ret = request_firmware(&fw, priv->dpfe_api->fw_name, dev); 643 - /* request_firmware() prints its own error messages. */ 609 + ret = firmware_request_nowarn(&fw, priv->dpfe_api->fw_name, dev); 610 + /* 611 + * Defer the firmware download if the firmware file couldn't be found. 612 + * The root file system may not be available yet. 613 + */ 644 614 if (ret) 645 - return ret; 615 + return (ret == -ENOENT) ? 
-EPROBE_DEFER : ret; 646 616 647 - ret = __verify_firmware(init, fw); 617 + ret = __verify_firmware(&init, fw); 648 618 if (ret) 649 619 return -EFAULT; 650 620 651 - __disable_dcpu(priv->regs); 621 + __disable_dcpu(priv); 652 622 653 - is_big_endian = init->is_big_endian; 654 - dmem_size = init->dmem_len; 655 - imem_size = init->imem_len; 623 + is_big_endian = init.is_big_endian; 624 + dmem_size = init.dmem_len; 625 + imem_size = init.imem_len; 656 626 657 627 /* At the beginning of the firmware blob is a header. */ 658 628 header = (struct dpfe_firmware_header *)fw->data; ··· 673 637 if (ret) 674 638 return ret; 675 639 676 - ret = __verify_fw_checksum(init, priv, header, init->chksum); 640 + ret = __verify_fw_checksum(&init, priv, header, init.chksum); 677 641 if (ret) 678 642 return ret; 679 643 680 - __enable_dcpu(priv->regs); 644 + __enable_dcpu(priv); 681 645 682 646 return 0; 683 647 } 684 648 685 649 static ssize_t generic_show(unsigned int command, u32 response[], 686 - struct private_data *priv, char *buf) 650 + struct brcmstb_dpfe_priv *priv, char *buf) 687 651 { 688 652 int ret; 689 653 ··· 701 665 char *buf) 702 666 { 703 667 u32 response[MSG_FIELD_MAX]; 704 - struct private_data *priv; 668 + struct brcmstb_dpfe_priv *priv; 705 669 unsigned int info; 706 670 ssize_t ret; 707 671 ··· 724 688 { 725 689 u32 response[MSG_FIELD_MAX]; 726 690 void __iomem *info; 727 - struct private_data *priv; 691 + struct brcmstb_dpfe_priv *priv; 728 692 u8 refresh, sr_abort, ppre, thermal_offs, tuf; 729 693 u32 mr4; 730 694 ssize_t ret; ··· 757 721 const char *buf, size_t count) 758 722 { 759 723 u32 response[MSG_FIELD_MAX]; 760 - struct private_data *priv; 724 + struct brcmstb_dpfe_priv *priv; 761 725 void __iomem *info; 762 726 unsigned long val; 763 727 int ret; ··· 783 747 char *buf) 784 748 { 785 749 u32 response[MSG_FIELD_MAX]; 786 - struct private_data *priv; 750 + struct brcmstb_dpfe_priv *priv; 787 751 void __iomem *info; 788 752 ssize_t ret; 789 753 u32 mr5, 
mr6, mr7, mr8, err; ··· 814 778 char *buf) 815 779 { 816 780 u32 response[MSG_FIELD_MAX]; 817 - struct private_data *priv; 781 + struct brcmstb_dpfe_priv *priv; 818 782 ssize_t ret; 819 783 u32 mr4, mr5, mr6, mr7, mr8, err; 820 784 ··· 836 800 837 801 static int brcmstb_dpfe_resume(struct platform_device *pdev) 838 802 { 839 - struct init_data init; 803 + struct brcmstb_dpfe_priv *priv = platform_get_drvdata(pdev); 840 804 841 - return brcmstb_dpfe_download_firmware(pdev, &init); 805 + return brcmstb_dpfe_download_firmware(priv); 842 806 } 843 807 844 808 static int brcmstb_dpfe_probe(struct platform_device *pdev) 845 809 { 846 810 struct device *dev = &pdev->dev; 847 - struct private_data *priv; 848 - struct init_data init; 811 + struct brcmstb_dpfe_priv *priv; 849 812 struct resource *res; 850 813 int ret; 851 814 852 815 priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL); 853 816 if (!priv) 854 817 return -ENOMEM; 818 + 819 + priv->dev = dev; 855 820 856 821 mutex_init(&priv->lock); 857 822 platform_set_drvdata(pdev, priv); ··· 888 851 return -ENOENT; 889 852 } 890 853 891 - ret = brcmstb_dpfe_download_firmware(pdev, &init); 854 + ret = brcmstb_dpfe_download_firmware(priv); 892 855 if (ret) { 893 - dev_err(dev, "Couldn't download firmware -- %d\n", ret); 856 + if (ret != -EPROBE_DEFER) 857 + dev_err(dev, "Couldn't download firmware -- %d\n", ret); 894 858 return ret; 895 859 } 896 860 ··· 905 867 906 868 static int brcmstb_dpfe_remove(struct platform_device *pdev) 907 869 { 908 - struct private_data *priv = dev_get_drvdata(&pdev->dev); 870 + struct brcmstb_dpfe_priv *priv = dev_get_drvdata(&pdev->dev); 909 871 910 872 sysfs_remove_groups(&pdev->dev.kobj, priv->dpfe_api->sysfs_attrs); 911 873 ··· 914 876 915 877 static const struct of_device_id brcmstb_dpfe_of_match[] = { 916 878 /* Use legacy API v2 for a select number of chips */ 917 - { .compatible = "brcm,bcm7268-dpfe-cpu", .data = &dpfe_api_v2 }, 918 - { .compatible = "brcm,bcm7271-dpfe-cpu", .data = 
&dpfe_api_v2 }, 919 - { .compatible = "brcm,bcm7278-dpfe-cpu", .data = &dpfe_api_v2 }, 920 - { .compatible = "brcm,bcm7211-dpfe-cpu", .data = &dpfe_api_v2 }, 879 + { .compatible = "brcm,bcm7268-dpfe-cpu", .data = &dpfe_api_old_v2 }, 880 + { .compatible = "brcm,bcm7271-dpfe-cpu", .data = &dpfe_api_old_v2 }, 881 + { .compatible = "brcm,bcm7278-dpfe-cpu", .data = &dpfe_api_old_v2 }, 882 + { .compatible = "brcm,bcm7211-dpfe-cpu", .data = &dpfe_api_new_v2 }, 921 883 /* API v3 is the default going forward */ 922 884 { .compatible = "brcm,dpfe-cpu", .data = &dpfe_api_v3 }, 923 885 {}
+1 -4
drivers/memory/emif.c
··· 1613 1613 static int get_emif_reg_values(struct emif_data *emif, u32 freq, 1614 1614 struct emif_regs *regs) 1615 1615 { 1616 - u32 cs1_used, ip_rev, phy_type; 1616 + u32 ip_rev, phy_type; 1617 1617 u32 cl, type; 1618 1618 const struct lpddr2_timings *timings; 1619 1619 const struct lpddr2_min_tck *min_tck; ··· 1621 1621 const struct lpddr2_addressing *addressing; 1622 1622 struct emif_data *emif_for_calc; 1623 1623 struct device *dev; 1624 - const struct emif_custom_configs *custom_configs; 1625 1624 1626 1625 dev = emif->dev; 1627 1626 /* ··· 1638 1639 1639 1640 device_info = emif_for_calc->plat_data->device_info; 1640 1641 type = device_info->type; 1641 - cs1_used = device_info->cs1_used; 1642 1642 ip_rev = emif_for_calc->plat_data->ip_rev; 1643 1643 phy_type = emif_for_calc->plat_data->phy_type; 1644 1644 1645 1645 min_tck = emif_for_calc->plat_data->min_tck; 1646 - custom_configs = emif_for_calc->plat_data->custom_configs; 1647 1646 1648 1647 set_ddr_clk_period(freq); 1649 1648
+61
drivers/memory/jedec_ddr.h
··· 29 29 #define DDR_TYPE_LPDDR2_S4 3 30 30 #define DDR_TYPE_LPDDR2_S2 4 31 31 #define DDR_TYPE_LPDDR2_NVM 5 32 + #define DDR_TYPE_LPDDR3 6 32 33 33 34 /* DDR IO width */ 34 35 #define DDR_IO_WIDTH_4 1 ··· 169 168 extern const struct lpddr2_timings 170 169 lpddr2_jedec_timings[NUM_DDR_TIMING_TABLE_ENTRIES]; 171 170 extern const struct lpddr2_min_tck lpddr2_jedec_min_tck; 171 + 172 + /* 173 + * Structure for timings for LPDDR3 based on LPDDR2 plus additional fields. 174 + * All parameters are in pico seconds(ps) excluding max_freq, min_freq which 175 + * are in Hz. 176 + */ 177 + struct lpddr3_timings { 178 + u32 max_freq; 179 + u32 min_freq; 180 + u32 tRFC; 181 + u32 tRRD; 182 + u32 tRPab; 183 + u32 tRPpb; 184 + u32 tRCD; 185 + u32 tRC; 186 + u32 tRAS; 187 + u32 tWTR; 188 + u32 tWR; 189 + u32 tRTP; 190 + u32 tW2W_C2C; 191 + u32 tR2R_C2C; 192 + u32 tWL; 193 + u32 tDQSCK; 194 + u32 tRL; 195 + u32 tFAW; 196 + u32 tXSR; 197 + u32 tXP; 198 + u32 tCKE; 199 + u32 tCKESR; 200 + u32 tMRD; 201 + }; 202 + 203 + /* 204 + * Min value for some parameters in terms of number of tCK cycles(nCK) 205 + * Please set to zero parameters that are not valid for a given memory 206 + * type 207 + */ 208 + struct lpddr3_min_tck { 209 + u32 tRFC; 210 + u32 tRRD; 211 + u32 tRPab; 212 + u32 tRPpb; 213 + u32 tRCD; 214 + u32 tRC; 215 + u32 tRAS; 216 + u32 tWTR; 217 + u32 tWR; 218 + u32 tRTP; 219 + u32 tW2W_C2C; 220 + u32 tR2R_C2C; 221 + u32 tWL; 222 + u32 tDQSCK; 223 + u32 tRL; 224 + u32 tFAW; 225 + u32 tXSR; 226 + u32 tXP; 227 + u32 tCKE; 228 + u32 tCKESR; 229 + u32 tMRD; 230 + }; 172 231 173 232 #endif /* __JEDEC_DDR_H */
+149
drivers/memory/of_memory.c
··· 3 3 * OpenFirmware helpers for memory drivers 4 4 * 5 5 * Copyright (C) 2012 Texas Instruments, Inc. 6 + * Copyright (C) 2019 Samsung Electronics Co., Ltd. 6 7 */ 7 8 8 9 #include <linux/device.h> ··· 150 149 return lpddr2_jedec_timings; 151 150 } 152 151 EXPORT_SYMBOL(of_get_ddr_timings); 152 + 153 + /** 154 + * of_lpddr3_get_min_tck() - extract min timing values for lpddr3 155 + * @np: pointer to ddr device tree node 156 + * @device: device requesting for min timing values 157 + * 158 + * Populates the lpddr3_min_tck structure by extracting data 159 + * from device tree node. Returns a pointer to the populated 160 + * structure. If any error in populating the structure, returns NULL. 161 + */ 162 + const struct lpddr3_min_tck *of_lpddr3_get_min_tck(struct device_node *np, 163 + struct device *dev) 164 + { 165 + int ret = 0; 166 + struct lpddr3_min_tck *min; 167 + 168 + min = devm_kzalloc(dev, sizeof(*min), GFP_KERNEL); 169 + if (!min) 170 + goto default_min_tck; 171 + 172 + ret |= of_property_read_u32(np, "tRFC-min-tck", &min->tRFC); 173 + ret |= of_property_read_u32(np, "tRRD-min-tck", &min->tRRD); 174 + ret |= of_property_read_u32(np, "tRPab-min-tck", &min->tRPab); 175 + ret |= of_property_read_u32(np, "tRPpb-min-tck", &min->tRPpb); 176 + ret |= of_property_read_u32(np, "tRCD-min-tck", &min->tRCD); 177 + ret |= of_property_read_u32(np, "tRC-min-tck", &min->tRC); 178 + ret |= of_property_read_u32(np, "tRAS-min-tck", &min->tRAS); 179 + ret |= of_property_read_u32(np, "tWTR-min-tck", &min->tWTR); 180 + ret |= of_property_read_u32(np, "tWR-min-tck", &min->tWR); 181 + ret |= of_property_read_u32(np, "tRTP-min-tck", &min->tRTP); 182 + ret |= of_property_read_u32(np, "tW2W-C2C-min-tck", &min->tW2W_C2C); 183 + ret |= of_property_read_u32(np, "tR2R-C2C-min-tck", &min->tR2R_C2C); 184 + ret |= of_property_read_u32(np, "tWL-min-tck", &min->tWL); 185 + ret |= of_property_read_u32(np, "tDQSCK-min-tck", &min->tDQSCK); 186 + ret |= of_property_read_u32(np, "tRL-min-tck", 
&min->tRL); 187 + ret |= of_property_read_u32(np, "tFAW-min-tck", &min->tFAW); 188 + ret |= of_property_read_u32(np, "tXSR-min-tck", &min->tXSR); 189 + ret |= of_property_read_u32(np, "tXP-min-tck", &min->tXP); 190 + ret |= of_property_read_u32(np, "tCKE-min-tck", &min->tCKE); 191 + ret |= of_property_read_u32(np, "tCKESR-min-tck", &min->tCKESR); 192 + ret |= of_property_read_u32(np, "tMRD-min-tck", &min->tMRD); 193 + 194 + if (ret) { 195 + dev_warn(dev, "%s: errors while parsing min-tck values\n", 196 + __func__); 197 + devm_kfree(dev, min); 198 + goto default_min_tck; 199 + } 200 + 201 + return min; 202 + 203 + default_min_tck: 204 + dev_warn(dev, "%s: using default min-tck values\n", __func__); 205 + return NULL; 206 + } 207 + EXPORT_SYMBOL(of_lpddr3_get_min_tck); 208 + 209 + static int of_lpddr3_do_get_timings(struct device_node *np, 210 + struct lpddr3_timings *tim) 211 + { 212 + int ret; 213 + 214 + /* The 'reg' param required since DT has changed, used as 'max-freq' */ 215 + ret = of_property_read_u32(np, "reg", &tim->max_freq); 216 + ret |= of_property_read_u32(np, "min-freq", &tim->min_freq); 217 + ret |= of_property_read_u32(np, "tRFC", &tim->tRFC); 218 + ret |= of_property_read_u32(np, "tRRD", &tim->tRRD); 219 + ret |= of_property_read_u32(np, "tRPab", &tim->tRPab); 220 + ret |= of_property_read_u32(np, "tRPpb", &tim->tRPpb); 221 + ret |= of_property_read_u32(np, "tRCD", &tim->tRCD); 222 + ret |= of_property_read_u32(np, "tRC", &tim->tRC); 223 + ret |= of_property_read_u32(np, "tRAS", &tim->tRAS); 224 + ret |= of_property_read_u32(np, "tWTR", &tim->tWTR); 225 + ret |= of_property_read_u32(np, "tWR", &tim->tWR); 226 + ret |= of_property_read_u32(np, "tRTP", &tim->tRTP); 227 + ret |= of_property_read_u32(np, "tW2W-C2C", &tim->tW2W_C2C); 228 + ret |= of_property_read_u32(np, "tR2R-C2C", &tim->tR2R_C2C); 229 + ret |= of_property_read_u32(np, "tFAW", &tim->tFAW); 230 + ret |= of_property_read_u32(np, "tXSR", &tim->tXSR); 231 + ret |= of_property_read_u32(np, 
"tXP", &tim->tXP); 232 + ret |= of_property_read_u32(np, "tCKE", &tim->tCKE); 233 + ret |= of_property_read_u32(np, "tCKESR", &tim->tCKESR); 234 + ret |= of_property_read_u32(np, "tMRD", &tim->tMRD); 235 + 236 + return ret; 237 + } 238 + 239 + /** 240 + * of_lpddr3_get_ddr_timings() - extracts the lpddr3 timings and updates no of 241 + * frequencies available. 242 + * @np_ddr: Pointer to ddr device tree node 243 + * @dev: Device requesting for ddr timings 244 + * @device_type: Type of ddr 245 + * @nr_frequencies: No of frequencies available for ddr 246 + * (updated by this function) 247 + * 248 + * Populates lpddr3_timings structure by extracting data from device 249 + * tree node. Returns pointer to populated structure. If any error 250 + * while populating, returns NULL. 251 + */ 252 + const struct lpddr3_timings 253 + *of_lpddr3_get_ddr_timings(struct device_node *np_ddr, struct device *dev, 254 + u32 device_type, u32 *nr_frequencies) 255 + { 256 + struct lpddr3_timings *timings = NULL; 257 + u32 arr_sz = 0, i = 0; 258 + struct device_node *np_tim; 259 + char *tim_compat = NULL; 260 + 261 + switch (device_type) { 262 + case DDR_TYPE_LPDDR3: 263 + tim_compat = "jedec,lpddr3-timings"; 264 + break; 265 + default: 266 + dev_warn(dev, "%s: un-supported memory type\n", __func__); 267 + } 268 + 269 + for_each_child_of_node(np_ddr, np_tim) 270 + if (of_device_is_compatible(np_tim, tim_compat)) 271 + arr_sz++; 272 + 273 + if (arr_sz) 274 + timings = devm_kcalloc(dev, arr_sz, sizeof(*timings), 275 + GFP_KERNEL); 276 + 277 + if (!timings) 278 + goto default_timings; 279 + 280 + for_each_child_of_node(np_ddr, np_tim) { 281 + if (of_device_is_compatible(np_tim, tim_compat)) { 282 + if (of_lpddr3_do_get_timings(np_tim, &timings[i])) { 283 + devm_kfree(dev, timings); 284 + goto default_timings; 285 + } 286 + i++; 287 + } 288 + } 289 + 290 + *nr_frequencies = arr_sz; 291 + 292 + return timings; 293 + 294 + default_timings: 295 + dev_warn(dev, "%s: failed to get timings\n", 
__func__); 296 + *nr_frequencies = 0; 297 + return NULL; 298 + } 299 + EXPORT_SYMBOL(of_lpddr3_get_ddr_timings);
+18
drivers/memory/of_memory.h
··· 14 14 extern const struct lpddr2_timings 15 15 *of_get_ddr_timings(struct device_node *np_ddr, struct device *dev, 16 16 u32 device_type, u32 *nr_frequencies); 17 + extern const struct lpddr3_min_tck 18 + *of_lpddr3_get_min_tck(struct device_node *np, struct device *dev); 19 + extern const struct lpddr3_timings 20 + *of_lpddr3_get_ddr_timings(struct device_node *np_ddr, 21 + struct device *dev, u32 device_type, u32 *nr_frequencies); 17 22 #else 18 23 static inline const struct lpddr2_min_tck 19 24 *of_get_min_tck(struct device_node *np, struct device *dev) ··· 29 24 static inline const struct lpddr2_timings 30 25 *of_get_ddr_timings(struct device_node *np_ddr, struct device *dev, 31 26 u32 device_type, u32 *nr_frequencies) 27 + { 28 + return NULL; 29 + } 30 + 31 + static inline const struct lpddr3_min_tck 32 + *of_lpddr3_get_min_tck(struct device_node *np, struct device *dev) 33 + { 34 + return NULL; 35 + } 36 + 37 + static inline const struct lpddr3_timings 38 + *of_lpddr3_get_ddr_timings(struct device_node *np_ddr, 39 + struct device *dev, u32 device_type, u32 *nr_frequencies) 32 40 { 33 41 return NULL; 34 42 }
+13	drivers/memory/samsung/Kconfig
···
 if SAMSUNG_MC

+config EXYNOS5422_DMC
+	tristate "EXYNOS5422 Dynamic Memory Controller driver"
+	depends on ARCH_EXYNOS || (COMPILE_TEST && HAS_IOMEM)
+	select DDR
+	depends on DEVFREQ_GOV_SIMPLE_ONDEMAND
+	depends on (PM_DEVFREQ && PM_DEVFREQ_EVENT)
+	help
+	  This adds the driver for the Exynos5422 DMC (Dynamic Memory
+	  Controller). The driver provides support for Dynamic Voltage and
+	  Frequency Scaling in the DMC and DRAM. It also supports changing
+	  the DRAM timings when running at a different frequency. The
+	  timings are calculated based on memory information in the device
+	  tree.
+
 config EXYNOS_SROM
 	bool "Exynos SROM controller driver" if COMPILE_TEST
 	depends on (ARM && ARCH_EXYNOS) || (COMPILE_TEST && HAS_IOMEM)
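The Kconfig entry above depends on DEVFREQ_GOV_SIMPLE_ONDEMAND: the driver reports a busy/total ratio and the governor compares utilization against up/down thresholds to pick the next frequency. A simplified, self-contained sketch of that decision (the threshold values here are illustrative defaults, not read from this driver):

```c
#include <assert.h>

/* Pick the next OPP index the way an ondemand-style governor does:
 * raise the frequency when utilization exceeds upthreshold, lower it
 * when utilization drops below (upthreshold - downdifferential),
 * otherwise keep the current level. */
static int next_freq_idx(int cur_idx, int max_idx,
			 unsigned long busy, unsigned long total)
{
	const unsigned long upthreshold = 90;	 /* percent, illustrative */
	const unsigned long downdifferential = 5;
	unsigned long util = total ? busy * 100 / total : 0;

	if (util > upthreshold && cur_idx < max_idx)
		return cur_idx + 1;
	if (util < upthreshold - downdifferential && cur_idx > 0)
		return cur_idx - 1;
	return cur_idx;
}
```

The hysteresis band between the two thresholds prevents the governor from oscillating between neighbouring frequencies on a steady load.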
+1	drivers/memory/samsung/Makefile
···
 # SPDX-License-Identifier: GPL-2.0
+obj-$(CONFIG_EXYNOS5422_DMC)	+= exynos5422-dmc.o
 obj-$(CONFIG_EXYNOS_SROM)	+= exynos-srom.o
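The driver added below packs each DRAM timing into a fixed bit range of a timing register via its TIMING_FIELD()/TIMING_VAL2REG() macros. A standalone sketch of that packing, reusing the field positions from the driver's timing_row table (tRAS at bits 0..5, tRC at bits 6..11):

```c
#include <assert.h>
#include <stdint.h>

/* Field descriptor in the spirit of the driver's TIMING_FIELD() macro:
 * each DRAM timing occupies a fixed bit range of one timing register. */
struct timing_field {
	int bit_beg;
	int bit_end;
};

static const struct timing_field f_tRAS = { 0, 5 };
static const struct timing_field f_tRC  = { 6, 11 };

/* Shift a cycle count into its register position (cf. TIMING_VAL2REG). */
static uint32_t timing_val2reg(const struct timing_field *f, uint32_t val)
{
	return val << f->bit_beg;
}
```

ORing the packed fields together builds the full register value, e.g. `timing_val2reg(&f_tRAS, 0x15) | timing_val2reg(&f_tRC, 0x2)` yields 0x95.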
+1550
drivers/memory/samsung/exynos5422-dmc.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Copyright (c) 2019 Samsung Electronics Co., Ltd. 4 + * Author: Lukasz Luba <l.luba@partner.samsung.com> 5 + */ 6 + 7 + #include <linux/clk.h> 8 + #include <linux/devfreq.h> 9 + #include <linux/devfreq-event.h> 10 + #include <linux/device.h> 11 + #include <linux/interrupt.h> 12 + #include <linux/io.h> 13 + #include <linux/mfd/syscon.h> 14 + #include <linux/module.h> 15 + #include <linux/of_device.h> 16 + #include <linux/pm_opp.h> 17 + #include <linux/platform_device.h> 18 + #include <linux/regmap.h> 19 + #include <linux/regulator/consumer.h> 20 + #include <linux/slab.h> 21 + #include "../jedec_ddr.h" 22 + #include "../of_memory.h" 23 + 24 + #define EXYNOS5_DREXI_TIMINGAREF (0x0030) 25 + #define EXYNOS5_DREXI_TIMINGROW0 (0x0034) 26 + #define EXYNOS5_DREXI_TIMINGDATA0 (0x0038) 27 + #define EXYNOS5_DREXI_TIMINGPOWER0 (0x003C) 28 + #define EXYNOS5_DREXI_TIMINGROW1 (0x00E4) 29 + #define EXYNOS5_DREXI_TIMINGDATA1 (0x00E8) 30 + #define EXYNOS5_DREXI_TIMINGPOWER1 (0x00EC) 31 + #define CDREX_PAUSE (0x2091c) 32 + #define CDREX_LPDDR3PHY_CON3 (0x20a20) 33 + #define CDREX_LPDDR3PHY_CLKM_SRC (0x20700) 34 + #define EXYNOS5_TIMING_SET_SWI BIT(28) 35 + #define USE_MX_MSPLL_TIMINGS (1) 36 + #define USE_BPLL_TIMINGS (0) 37 + #define EXYNOS5_AREF_NORMAL (0x2e) 38 + 39 + #define DREX_PPCCLKCON (0x0130) 40 + #define DREX_PEREV2CONFIG (0x013c) 41 + #define DREX_PMNC_PPC (0xE000) 42 + #define DREX_CNTENS_PPC (0xE010) 43 + #define DREX_CNTENC_PPC (0xE020) 44 + #define DREX_INTENS_PPC (0xE030) 45 + #define DREX_INTENC_PPC (0xE040) 46 + #define DREX_FLAG_PPC (0xE050) 47 + #define DREX_PMCNT2_PPC (0xE130) 48 + 49 + /* 50 + * A value for register DREX_PMNC_PPC which should be written to reset 51 + * the cycle counter CCNT (a reference wall clock). It sets zero to the 52 + * CCNT counter. 
53 + */ 54 + #define CC_RESET BIT(2) 55 + 56 + /* 57 + * A value for register DREX_PMNC_PPC which does the reset of all performance 58 + * counters to zero. 59 + */ 60 + #define PPC_COUNTER_RESET BIT(1) 61 + 62 + /* 63 + * Enables all configured counters (including cycle counter). The value should 64 + * be written to the register DREX_PMNC_PPC. 65 + */ 66 + #define PPC_ENABLE BIT(0) 67 + 68 + /* A value for register DREX_PPCCLKCON which enables performance events clock. 69 + * Must be written before first access to the performance counters register 70 + * set, otherwise it could crash. 71 + */ 72 + #define PEREV_CLK_EN BIT(0) 73 + 74 + /* 75 + * Values which are used to enable counters, interrupts or configure flags of 76 + * the performance counters. They configure counter 2 and cycle counter. 77 + */ 78 + #define PERF_CNT2 BIT(2) 79 + #define PERF_CCNT BIT(31) 80 + 81 + /* 82 + * Performance event types which are used for setting the preferred event 83 + * to track in the counters. 84 + * There is a set of different types, the values are from range 0 to 0x6f. 85 + * These settings should be written to the configuration register which manages 86 + * the type of the event (register DREX_PEREV2CONFIG). 87 + */ 88 + #define READ_TRANSFER_CH0 (0x6d) 89 + #define READ_TRANSFER_CH1 (0x6f) 90 + 91 + #define PERF_COUNTER_START_VALUE 0xff000000 92 + #define PERF_EVENT_UP_DOWN_THRESHOLD 900000000ULL 93 + 94 + /** 95 + * struct dmc_opp_table - Operating level desciption 96 + * 97 + * Covers frequency and voltage settings of the DMC operating mode. 98 + */ 99 + struct dmc_opp_table { 100 + u32 freq_hz; 101 + u32 volt_uv; 102 + }; 103 + 104 + /** 105 + * struct exynos5_dmc - main structure describing DMC device 106 + * 107 + * The main structure for the Dynamic Memory Controller which covers clocks, 108 + * memory regions, HW information, parameters and current operating mode. 
109 + */ 110 + struct exynos5_dmc { 111 + struct device *dev; 112 + struct devfreq *df; 113 + struct devfreq_simple_ondemand_data gov_data; 114 + void __iomem *base_drexi0; 115 + void __iomem *base_drexi1; 116 + struct regmap *clk_regmap; 117 + struct mutex lock; 118 + unsigned long curr_rate; 119 + unsigned long curr_volt; 120 + unsigned long bypass_rate; 121 + struct dmc_opp_table *opp; 122 + struct dmc_opp_table opp_bypass; 123 + int opp_count; 124 + u32 timings_arr_size; 125 + u32 *timing_row; 126 + u32 *timing_data; 127 + u32 *timing_power; 128 + const struct lpddr3_timings *timings; 129 + const struct lpddr3_min_tck *min_tck; 130 + u32 bypass_timing_row; 131 + u32 bypass_timing_data; 132 + u32 bypass_timing_power; 133 + struct regulator *vdd_mif; 134 + struct clk *fout_spll; 135 + struct clk *fout_bpll; 136 + struct clk *mout_spll; 137 + struct clk *mout_bpll; 138 + struct clk *mout_mclk_cdrex; 139 + struct clk *mout_mx_mspll_ccore; 140 + struct clk *mx_mspll_ccore_phy; 141 + struct clk *mout_mx_mspll_ccore_phy; 142 + struct devfreq_event_dev **counter; 143 + int num_counters; 144 + u64 last_overflow_ts[2]; 145 + unsigned long load; 146 + unsigned long total; 147 + bool in_irq_mode; 148 + }; 149 + 150 + #define TIMING_FIELD(t_name, t_bit_beg, t_bit_end) \ 151 + { .name = t_name, .bit_beg = t_bit_beg, .bit_end = t_bit_end } 152 + 153 + #define TIMING_VAL2REG(timing, t_val) \ 154 + ({ \ 155 + u32 __val; \ 156 + __val = (t_val) << (timing)->bit_beg; \ 157 + __val; \ 158 + }) 159 + 160 + struct timing_reg { 161 + char *name; 162 + int bit_beg; 163 + int bit_end; 164 + unsigned int val; 165 + }; 166 + 167 + static const struct timing_reg timing_row[] = { 168 + TIMING_FIELD("tRFC", 24, 31), 169 + TIMING_FIELD("tRRD", 20, 23), 170 + TIMING_FIELD("tRP", 16, 19), 171 + TIMING_FIELD("tRCD", 12, 15), 172 + TIMING_FIELD("tRC", 6, 11), 173 + TIMING_FIELD("tRAS", 0, 5), 174 + }; 175 + 176 + static const struct timing_reg timing_data[] = { 177 + TIMING_FIELD("tWTR", 28, 
31), 178 + TIMING_FIELD("tWR", 24, 27), 179 + TIMING_FIELD("tRTP", 20, 23), 180 + TIMING_FIELD("tW2W-C2C", 14, 14), 181 + TIMING_FIELD("tR2R-C2C", 12, 12), 182 + TIMING_FIELD("WL", 8, 11), 183 + TIMING_FIELD("tDQSCK", 4, 7), 184 + TIMING_FIELD("RL", 0, 3), 185 + }; 186 + 187 + static const struct timing_reg timing_power[] = { 188 + TIMING_FIELD("tFAW", 26, 31), 189 + TIMING_FIELD("tXSR", 16, 25), 190 + TIMING_FIELD("tXP", 8, 15), 191 + TIMING_FIELD("tCKE", 4, 7), 192 + TIMING_FIELD("tMRD", 0, 3), 193 + }; 194 + 195 + #define TIMING_COUNT (ARRAY_SIZE(timing_row) + ARRAY_SIZE(timing_data) + \ 196 + ARRAY_SIZE(timing_power)) 197 + 198 + static int exynos5_counters_set_event(struct exynos5_dmc *dmc) 199 + { 200 + int i, ret; 201 + 202 + for (i = 0; i < dmc->num_counters; i++) { 203 + if (!dmc->counter[i]) 204 + continue; 205 + ret = devfreq_event_set_event(dmc->counter[i]); 206 + if (ret < 0) 207 + return ret; 208 + } 209 + return 0; 210 + } 211 + 212 + static int exynos5_counters_enable_edev(struct exynos5_dmc *dmc) 213 + { 214 + int i, ret; 215 + 216 + for (i = 0; i < dmc->num_counters; i++) { 217 + if (!dmc->counter[i]) 218 + continue; 219 + ret = devfreq_event_enable_edev(dmc->counter[i]); 220 + if (ret < 0) 221 + return ret; 222 + } 223 + return 0; 224 + } 225 + 226 + static int exynos5_counters_disable_edev(struct exynos5_dmc *dmc) 227 + { 228 + int i, ret; 229 + 230 + for (i = 0; i < dmc->num_counters; i++) { 231 + if (!dmc->counter[i]) 232 + continue; 233 + ret = devfreq_event_disable_edev(dmc->counter[i]); 234 + if (ret < 0) 235 + return ret; 236 + } 237 + return 0; 238 + } 239 + 240 + /** 241 + * find_target_freq_id() - Finds requested frequency in local DMC configuration 242 + * @dmc: device for which the information is checked 243 + * @target_rate: requested frequency in KHz 244 + * 245 + * Seeks in the local DMC driver structure for the requested frequency value 246 + * and returns index or error value. 
247 + */ 248 + static int find_target_freq_idx(struct exynos5_dmc *dmc, 249 + unsigned long target_rate) 250 + { 251 + int i; 252 + 253 + for (i = dmc->opp_count - 1; i >= 0; i--) 254 + if (dmc->opp[i].freq_hz <= target_rate) 255 + return i; 256 + 257 + return -EINVAL; 258 + } 259 + 260 + /** 261 + * exynos5_switch_timing_regs() - Changes bank register set for DRAM timings 262 + * @dmc: device for which the new settings is going to be applied 263 + * @set: boolean variable passing set value 264 + * 265 + * Changes the register set, which holds timing parameters. 266 + * There is two register sets: 0 and 1. The register set 0 267 + * is used in normal operation when the clock is provided from main PLL. 268 + * The bank register set 1 is used when the main PLL frequency is going to be 269 + * changed and the clock is taken from alternative, stable source. 270 + * This function switches between these banks according to the 271 + * currently used clock source. 272 + */ 273 + static void exynos5_switch_timing_regs(struct exynos5_dmc *dmc, bool set) 274 + { 275 + unsigned int reg; 276 + int ret; 277 + 278 + ret = regmap_read(dmc->clk_regmap, CDREX_LPDDR3PHY_CON3, &reg); 279 + 280 + if (set) 281 + reg |= EXYNOS5_TIMING_SET_SWI; 282 + else 283 + reg &= ~EXYNOS5_TIMING_SET_SWI; 284 + 285 + regmap_write(dmc->clk_regmap, CDREX_LPDDR3PHY_CON3, reg); 286 + } 287 + 288 + /** 289 + * exynos5_init_freq_table() - Initialized PM OPP framework 290 + * @dmc: DMC device for which the frequencies are used for OPP init 291 + * @profile: devfreq device's profile 292 + * 293 + * Populate the devfreq device's OPP table based on current frequency, voltage. 
294 + */ 295 + static int exynos5_init_freq_table(struct exynos5_dmc *dmc, 296 + struct devfreq_dev_profile *profile) 297 + { 298 + int i, ret; 299 + int idx; 300 + unsigned long freq; 301 + 302 + ret = dev_pm_opp_of_add_table(dmc->dev); 303 + if (ret < 0) { 304 + dev_err(dmc->dev, "Failed to get OPP table\n"); 305 + return ret; 306 + } 307 + 308 + dmc->opp_count = dev_pm_opp_get_opp_count(dmc->dev); 309 + 310 + dmc->opp = devm_kmalloc_array(dmc->dev, dmc->opp_count, 311 + sizeof(struct dmc_opp_table), GFP_KERNEL); 312 + if (!dmc->opp) 313 + goto err_opp; 314 + 315 + idx = dmc->opp_count - 1; 316 + for (i = 0, freq = ULONG_MAX; i < dmc->opp_count; i++, freq--) { 317 + struct dev_pm_opp *opp; 318 + 319 + opp = dev_pm_opp_find_freq_floor(dmc->dev, &freq); 320 + if (IS_ERR(opp)) 321 + goto err_opp; 322 + 323 + dmc->opp[idx - i].freq_hz = freq; 324 + dmc->opp[idx - i].volt_uv = dev_pm_opp_get_voltage(opp); 325 + 326 + dev_pm_opp_put(opp); 327 + } 328 + 329 + return 0; 330 + 331 + err_opp: 332 + dev_pm_opp_of_remove_table(dmc->dev); 333 + 334 + return -EINVAL; 335 + } 336 + 337 + /** 338 + * exynos5_set_bypass_dram_timings() - Low-level changes of the DRAM timings 339 + * @dmc: device for which the new settings is going to be applied 340 + * @param: DRAM parameters which passes timing data 341 + * 342 + * Low-level function for changing timings for DRAM memory clocking from 343 + * 'bypass' clock source (fixed frequency @400MHz). 344 + * It uses timing bank registers set 1. 
345 + */ 346 + static void exynos5_set_bypass_dram_timings(struct exynos5_dmc *dmc) 347 + { 348 + writel(EXYNOS5_AREF_NORMAL, 349 + dmc->base_drexi0 + EXYNOS5_DREXI_TIMINGAREF); 350 + 351 + writel(dmc->bypass_timing_row, 352 + dmc->base_drexi0 + EXYNOS5_DREXI_TIMINGROW1); 353 + writel(dmc->bypass_timing_row, 354 + dmc->base_drexi1 + EXYNOS5_DREXI_TIMINGROW1); 355 + writel(dmc->bypass_timing_data, 356 + dmc->base_drexi0 + EXYNOS5_DREXI_TIMINGDATA1); 357 + writel(dmc->bypass_timing_data, 358 + dmc->base_drexi1 + EXYNOS5_DREXI_TIMINGDATA1); 359 + writel(dmc->bypass_timing_power, 360 + dmc->base_drexi0 + EXYNOS5_DREXI_TIMINGPOWER1); 361 + writel(dmc->bypass_timing_power, 362 + dmc->base_drexi1 + EXYNOS5_DREXI_TIMINGPOWER1); 363 + } 364 + 365 + /** 366 + * exynos5_dram_change_timings() - Low-level changes of the DRAM final timings 367 + * @dmc: device for which the new settings is going to be applied 368 + * @target_rate: target frequency of the DMC 369 + * 370 + * Low-level function for changing timings for DRAM memory operating from main 371 + * clock source (BPLL), which can have different frequencies. Thus, each 372 + * frequency must have corresponding timings register values in order to keep 373 + * the needed delays. 374 + * It uses timing bank registers set 0. 
375 + */ 376 + static int exynos5_dram_change_timings(struct exynos5_dmc *dmc, 377 + unsigned long target_rate) 378 + { 379 + int idx; 380 + 381 + for (idx = dmc->opp_count - 1; idx >= 0; idx--) 382 + if (dmc->opp[idx].freq_hz <= target_rate) 383 + break; 384 + 385 + if (idx < 0) 386 + return -EINVAL; 387 + 388 + writel(EXYNOS5_AREF_NORMAL, 389 + dmc->base_drexi0 + EXYNOS5_DREXI_TIMINGAREF); 390 + 391 + writel(dmc->timing_row[idx], 392 + dmc->base_drexi0 + EXYNOS5_DREXI_TIMINGROW0); 393 + writel(dmc->timing_row[idx], 394 + dmc->base_drexi1 + EXYNOS5_DREXI_TIMINGROW0); 395 + writel(dmc->timing_data[idx], 396 + dmc->base_drexi0 + EXYNOS5_DREXI_TIMINGDATA0); 397 + writel(dmc->timing_data[idx], 398 + dmc->base_drexi1 + EXYNOS5_DREXI_TIMINGDATA0); 399 + writel(dmc->timing_power[idx], 400 + dmc->base_drexi0 + EXYNOS5_DREXI_TIMINGPOWER0); 401 + writel(dmc->timing_power[idx], 402 + dmc->base_drexi1 + EXYNOS5_DREXI_TIMINGPOWER0); 403 + 404 + return 0; 405 + } 406 + 407 + /** 408 + * exynos5_dmc_align_target_voltage() - Sets the final voltage for the DMC 409 + * @dmc: device for which it is going to be set 410 + * @target_volt: new voltage which is chosen to be final 411 + * 412 + * Function tries to align voltage to the safe level for 'normal' mode. 413 + * It checks the need of higher voltage and changes the value. The target 414 + * voltage might be lower that currently set and still the system will be 415 + * stable. 
416 + */ 417 + static int exynos5_dmc_align_target_voltage(struct exynos5_dmc *dmc, 418 + unsigned long target_volt) 419 + { 420 + int ret = 0; 421 + 422 + if (dmc->curr_volt <= target_volt) 423 + return 0; 424 + 425 + ret = regulator_set_voltage(dmc->vdd_mif, target_volt, 426 + target_volt); 427 + if (!ret) 428 + dmc->curr_volt = target_volt; 429 + 430 + return ret; 431 + } 432 + 433 + /** 434 + * exynos5_dmc_align_bypass_voltage() - Sets the voltage for the DMC 435 + * @dmc: device for which it is going to be set 436 + * @target_volt: new voltage which is chosen to be final 437 + * 438 + * Function tries to align voltage to the safe level for the 'bypass' mode. 439 + * It checks the need of higher voltage and changes the value. 440 + * The target voltage must not be less than currently needed, because 441 + * for current frequency the device might become unstable. 442 + */ 443 + static int exynos5_dmc_align_bypass_voltage(struct exynos5_dmc *dmc, 444 + unsigned long target_volt) 445 + { 446 + int ret = 0; 447 + unsigned long bypass_volt = dmc->opp_bypass.volt_uv; 448 + 449 + target_volt = max(bypass_volt, target_volt); 450 + 451 + if (dmc->curr_volt >= target_volt) 452 + return 0; 453 + 454 + ret = regulator_set_voltage(dmc->vdd_mif, target_volt, 455 + target_volt); 456 + if (!ret) 457 + dmc->curr_volt = target_volt; 458 + 459 + return ret; 460 + } 461 + 462 + /** 463 + * exynos5_dmc_align_bypass_dram_timings() - Chooses and sets DRAM timings 464 + * @dmc: device for which it is going to be set 465 + * @target_rate: new frequency which is chosen to be final 466 + * 467 + * Function changes the DRAM timings for the temporary 'bypass' mode. 
468 + */ 469 + static int exynos5_dmc_align_bypass_dram_timings(struct exynos5_dmc *dmc, 470 + unsigned long target_rate) 471 + { 472 + int idx = find_target_freq_idx(dmc, target_rate); 473 + 474 + if (idx < 0) 475 + return -EINVAL; 476 + 477 + exynos5_set_bypass_dram_timings(dmc); 478 + 479 + return 0; 480 + } 481 + 482 + /** 483 + * exynos5_dmc_switch_to_bypass_configuration() - Switching to temporary clock 484 + * @dmc: DMC device for which the switching is going to happen 485 + * @target_rate: new frequency which is going to be set as a final 486 + * @target_volt: new voltage which is going to be set as a final 487 + * 488 + * Function configures DMC and clocks for operating in temporary 'bypass' mode. 489 + * This mode is used only temporary but if required, changes voltage and timings 490 + * for DRAM chips. It switches the main clock to stable clock source for the 491 + * period of the main PLL reconfiguration. 492 + */ 493 + static int 494 + exynos5_dmc_switch_to_bypass_configuration(struct exynos5_dmc *dmc, 495 + unsigned long target_rate, 496 + unsigned long target_volt) 497 + { 498 + int ret; 499 + 500 + /* 501 + * Having higher voltage for a particular frequency does not harm 502 + * the chip. Use it for the temporary frequency change when one 503 + * voltage manipulation might be avoided. 504 + */ 505 + ret = exynos5_dmc_align_bypass_voltage(dmc, target_volt); 506 + if (ret) 507 + return ret; 508 + 509 + /* 510 + * Longer delays for DRAM does not cause crash, the opposite does. 511 + */ 512 + ret = exynos5_dmc_align_bypass_dram_timings(dmc, target_rate); 513 + if (ret) 514 + return ret; 515 + 516 + /* 517 + * Delays are long enough, so use them for the new coming clock. 
518 + */ 519 + exynos5_switch_timing_regs(dmc, USE_MX_MSPLL_TIMINGS); 520 + 521 + return ret; 522 + } 523 + 524 + /** 525 + * exynos5_dmc_change_freq_and_volt() - Changes voltage and frequency of the DMC 526 + * using safe procedure 527 + * @dmc: device for which the frequency is going to be changed 528 + * @target_rate: requested new frequency 529 + * @target_volt: requested voltage which corresponds to the new frequency 530 + * 531 + * The DMC frequency change procedure requires a few steps. 532 + * The main requirement is to change the clock source in the clk mux 533 + * for the time of main clock PLL locking. The assumption is that the 534 + * alternative clock source set as parent is stable. 535 + * The second parent's clock frequency is fixed to 400MHz, it is named 'bypass' 536 + * clock. This requires alignment in DRAM timing parameters for the new 537 + * T-period. There is two bank sets for keeping DRAM 538 + * timings: set 0 and set 1. The set 0 is used when main clock source is 539 + * chosen. The 2nd set of regs is used for 'bypass' clock. Switching between 540 + * the two bank sets is part of the process. 541 + * The voltage must also be aligned to the minimum required level. There is 542 + * this intermediate step with switching to 'bypass' parent clock source. 543 + * if the old voltage is lower, it requires an increase of the voltage level. 544 + * The complexity of the voltage manipulation is hidden in low level function. 545 + * In this function there is last alignment of the voltage level at the end. 546 + */ 547 + static int 548 + exynos5_dmc_change_freq_and_volt(struct exynos5_dmc *dmc, 549 + unsigned long target_rate, 550 + unsigned long target_volt) 551 + { 552 + int ret; 553 + 554 + ret = exynos5_dmc_switch_to_bypass_configuration(dmc, target_rate, 555 + target_volt); 556 + if (ret) 557 + return ret; 558 + 559 + /* 560 + * Voltage is set at least to a level needed for this frequency, 561 + * so switching clock source is safe now. 
562 + */ 563 + clk_prepare_enable(dmc->fout_spll); 564 + clk_prepare_enable(dmc->mout_spll); 565 + clk_prepare_enable(dmc->mout_mx_mspll_ccore); 566 + 567 + ret = clk_set_parent(dmc->mout_mclk_cdrex, dmc->mout_mx_mspll_ccore); 568 + if (ret) 569 + goto disable_clocks; 570 + 571 + /* 572 + * We are safe to increase the timings for current bypass frequency. 573 + * Thanks to this the settings will be ready for the upcoming clock 574 + * source change. 575 + */ 576 + exynos5_dram_change_timings(dmc, target_rate); 577 + 578 + clk_set_rate(dmc->fout_bpll, target_rate); 579 + 580 + exynos5_switch_timing_regs(dmc, USE_BPLL_TIMINGS); 581 + 582 + ret = clk_set_parent(dmc->mout_mclk_cdrex, dmc->mout_bpll); 583 + if (ret) 584 + goto disable_clocks; 585 + 586 + /* 587 + * Make sure if the voltage is not from 'bypass' settings and align to 588 + * the right level for power efficiency. 589 + */ 590 + ret = exynos5_dmc_align_target_voltage(dmc, target_volt); 591 + 592 + disable_clocks: 593 + clk_disable_unprepare(dmc->mout_mx_mspll_ccore); 594 + clk_disable_unprepare(dmc->mout_spll); 595 + clk_disable_unprepare(dmc->fout_spll); 596 + 597 + return ret; 598 + } 599 + 600 + /** 601 + * exynos5_dmc_get_volt_freq() - Gets the frequency and voltage from the OPP 602 + * table. 603 + * @dmc: device for which the frequency is going to be changed 604 + * @freq: requested frequency in KHz 605 + * @target_rate: returned frequency which is the same or lower than 606 + * requested 607 + * @target_volt: returned voltage which corresponds to the returned 608 + * frequency 609 + * 610 + * Function gets requested frequency and checks OPP framework for needed 611 + * frequency and voltage. It populates the values 'target_rate' and 612 + * 'target_volt' or returns error value when OPP framework fails. 
613 + */ 614 + static int exynos5_dmc_get_volt_freq(struct exynos5_dmc *dmc, 615 + unsigned long *freq, 616 + unsigned long *target_rate, 617 + unsigned long *target_volt, u32 flags) 618 + { 619 + struct dev_pm_opp *opp; 620 + 621 + opp = devfreq_recommended_opp(dmc->dev, freq, flags); 622 + if (IS_ERR(opp)) 623 + return PTR_ERR(opp); 624 + 625 + *target_rate = dev_pm_opp_get_freq(opp); 626 + *target_volt = dev_pm_opp_get_voltage(opp); 627 + dev_pm_opp_put(opp); 628 + 629 + return 0; 630 + } 631 + 632 + /** 633 + * exynos5_dmc_target() - Function responsible for changing frequency of DMC 634 + * @dev: device for which the frequency is going to be changed 635 + * @freq: requested frequency in KHz 636 + * @flags: flags provided for this frequency change request 637 + * 638 + * An entry function provided to the devfreq framework which provides frequency 639 + * change of the DMC. The function gets the possible rate from OPP table based 640 + * on requested frequency. It calls the next function responsible for the 641 + * frequency and voltage change. In case of failure, does not set 'curr_rate' 642 + * and returns error value to the framework. 
643 + */ 644 + static int exynos5_dmc_target(struct device *dev, unsigned long *freq, 645 + u32 flags) 646 + { 647 + struct exynos5_dmc *dmc = dev_get_drvdata(dev); 648 + unsigned long target_rate = 0; 649 + unsigned long target_volt = 0; 650 + int ret; 651 + 652 + ret = exynos5_dmc_get_volt_freq(dmc, freq, &target_rate, &target_volt, 653 + flags); 654 + 655 + if (ret) 656 + return ret; 657 + 658 + if (target_rate == dmc->curr_rate) 659 + return 0; 660 + 661 + mutex_lock(&dmc->lock); 662 + 663 + ret = exynos5_dmc_change_freq_and_volt(dmc, target_rate, target_volt); 664 + 665 + if (ret) { 666 + mutex_unlock(&dmc->lock); 667 + return ret; 668 + } 669 + 670 + dmc->curr_rate = target_rate; 671 + 672 + mutex_unlock(&dmc->lock); 673 + return 0; 674 + } 675 + 676 + /** 677 + * exynos5_counters_get() - Gets the performance counters values. 678 + * @dmc: device for which the counters are going to be checked 679 + * @load_count: variable which is populated with counter value 680 + * @total_count: variable which is used as 'wall clock' reference 681 + * 682 + * Function which provides performance counters values. It sums up counters for 683 + * two DMC channels. The 'total_count' is used as a reference and max value. 684 + * The ratio 'load_count/total_count' shows the busy percentage [0%, 100%]. 
685 + */ 686 + static int exynos5_counters_get(struct exynos5_dmc *dmc, 687 + unsigned long *load_count, 688 + unsigned long *total_count) 689 + { 690 + unsigned long total = 0; 691 + struct devfreq_event_data event; 692 + int ret, i; 693 + 694 + *load_count = 0; 695 + 696 + /* Take into account only read+write counters, but stop all */ 697 + for (i = 0; i < dmc->num_counters; i++) { 698 + if (!dmc->counter[i]) 699 + continue; 700 + 701 + ret = devfreq_event_get_event(dmc->counter[i], &event); 702 + if (ret < 0) 703 + return ret; 704 + 705 + *load_count += event.load_count; 706 + 707 + if (total < event.total_count) 708 + total = event.total_count; 709 + } 710 + 711 + *total_count = total; 712 + 713 + return 0; 714 + } 715 + 716 + /** 717 + * exynos5_dmc_start_perf_events() - Setup and start performance event counters 718 + * @dmc: device for which the counters are going to be checked 719 + * @beg_value: initial value for the counter 720 + * 721 + * Function which enables needed counters, interrupts and sets initial values 722 + * then starts the counters. 
723 + */ 724 + static void exynos5_dmc_start_perf_events(struct exynos5_dmc *dmc, 725 + u32 beg_value) 726 + { 727 + /* Enable interrupts for counter 2 */ 728 + writel(PERF_CNT2, dmc->base_drexi0 + DREX_INTENS_PPC); 729 + writel(PERF_CNT2, dmc->base_drexi1 + DREX_INTENS_PPC); 730 + 731 + /* Enable counter 2 and CCNT */ 732 + writel(PERF_CNT2 | PERF_CCNT, dmc->base_drexi0 + DREX_CNTENS_PPC); 733 + writel(PERF_CNT2 | PERF_CCNT, dmc->base_drexi1 + DREX_CNTENS_PPC); 734 + 735 + /* Clear overflow flag for all counters */ 736 + writel(PERF_CNT2 | PERF_CCNT, dmc->base_drexi0 + DREX_FLAG_PPC); 737 + writel(PERF_CNT2 | PERF_CCNT, dmc->base_drexi1 + DREX_FLAG_PPC); 738 + 739 + /* Reset all counters */ 740 + writel(CC_RESET | PPC_COUNTER_RESET, dmc->base_drexi0 + DREX_PMNC_PPC); 741 + writel(CC_RESET | PPC_COUNTER_RESET, dmc->base_drexi1 + DREX_PMNC_PPC); 742 + 743 + /* 744 + * Set start value for the counters, the number of samples that 745 + * will be gathered is calculated as: 0xffffffff - beg_value 746 + */ 747 + writel(beg_value, dmc->base_drexi0 + DREX_PMCNT2_PPC); 748 + writel(beg_value, dmc->base_drexi1 + DREX_PMCNT2_PPC); 749 + 750 + /* Start all counters */ 751 + writel(PPC_ENABLE, dmc->base_drexi0 + DREX_PMNC_PPC); 752 + writel(PPC_ENABLE, dmc->base_drexi1 + DREX_PMNC_PPC); 753 + } 754 + 755 + /** 756 + * exynos5_dmc_perf_events_calc() - Calculate utilization 757 + * @dmc: device for which the counters are going to be checked 758 + * @diff_ts: time between last interrupt and current one 759 + * 760 + * Function which calculates needed utilization for the devfreq governor. 761 + * It prepares values for 'busy_time' and 'total_time' based on elapsed time 762 + * between interrupts, which approximates utilization. 763 + */ 764 + static void exynos5_dmc_perf_events_calc(struct exynos5_dmc *dmc, u64 diff_ts) 765 + { 766 + /* 767 + * This is a simple algorithm for managing traffic on DMC. 
768 + * When there is almost no load the counters overflow every 4s, 769 + * no mater the DMC frequency. 770 + * The high load might be approximated using linear function. 771 + * Knowing that, simple calculation can provide 'busy_time' and 772 + * 'total_time' to the devfreq governor which picks up target 773 + * frequency. 774 + * We want a fast ramp up and slow decay in frequency change function. 775 + */ 776 + if (diff_ts < PERF_EVENT_UP_DOWN_THRESHOLD) { 777 + /* 778 + * Set higher utilization for the simple_ondemand governor. 779 + * The governor should increase the frequency of the DMC. 780 + */ 781 + dmc->load = 70; 782 + dmc->total = 100; 783 + } else { 784 + /* 785 + * Set low utilization for the simple_ondemand governor. 786 + * The governor should decrease the frequency of the DMC. 787 + */ 788 + dmc->load = 35; 789 + dmc->total = 100; 790 + } 791 + 792 + dev_dbg(dmc->dev, "diff_ts=%llu\n", diff_ts); 793 + } 794 + 795 + /** 796 + * exynos5_dmc_perf_events_check() - Checks the status of the counters 797 + * @dmc: device for which the counters are going to be checked 798 + * 799 + * Function which is called from threaded IRQ to check the counters state 800 + * and to call approximation for the needed utilization. 
801 + */ 802 + static void exynos5_dmc_perf_events_check(struct exynos5_dmc *dmc) 803 + { 804 + u32 val; 805 + u64 diff_ts, ts; 806 + 807 + ts = ktime_get_ns(); 808 + 809 + /* Stop all counters */ 810 + writel(0, dmc->base_drexi0 + DREX_PMNC_PPC); 811 + writel(0, dmc->base_drexi1 + DREX_PMNC_PPC); 812 + 813 + /* Check the source in interrupt flag registers (which channel) */ 814 + val = readl(dmc->base_drexi0 + DREX_FLAG_PPC); 815 + if (val) { 816 + diff_ts = ts - dmc->last_overflow_ts[0]; 817 + dmc->last_overflow_ts[0] = ts; 818 + dev_dbg(dmc->dev, "drex0 0xE050 val= 0x%08x\n", val); 819 + } else { 820 + val = readl(dmc->base_drexi1 + DREX_FLAG_PPC); 821 + diff_ts = ts - dmc->last_overflow_ts[1]; 822 + dmc->last_overflow_ts[1] = ts; 823 + dev_dbg(dmc->dev, "drex1 0xE050 val= 0x%08x\n", val); 824 + } 825 + 826 + exynos5_dmc_perf_events_calc(dmc, diff_ts); 827 + 828 + exynos5_dmc_start_perf_events(dmc, PERF_COUNTER_START_VALUE); 829 + } 830 + 831 + /** 832 + * exynos5_dmc_enable_perf_events() - Enable performance events 833 + * @dmc: device for which the counters are going to be checked 834 + * 835 + * Function which is setup needed environment and enables counters. 836 + */ 837 + static void exynos5_dmc_enable_perf_events(struct exynos5_dmc *dmc) 838 + { 839 + u64 ts; 840 + 841 + /* Enable Performance Event Clock */ 842 + writel(PEREV_CLK_EN, dmc->base_drexi0 + DREX_PPCCLKCON); 843 + writel(PEREV_CLK_EN, dmc->base_drexi1 + DREX_PPCCLKCON); 844 + 845 + /* Select read transfers as performance event2 */ 846 + writel(READ_TRANSFER_CH0, dmc->base_drexi0 + DREX_PEREV2CONFIG); 847 + writel(READ_TRANSFER_CH1, dmc->base_drexi1 + DREX_PEREV2CONFIG); 848 + 849 + ts = ktime_get_ns(); 850 + dmc->last_overflow_ts[0] = ts; 851 + dmc->last_overflow_ts[1] = ts; 852 + 853 + /* Devfreq shouldn't be faster than initialization, play safe though. 
*/ 854 + dmc->load = 99; 855 + dmc->total = 100; 856 + } 857 + 858 + /** 859 + * exynos5_dmc_disable_perf_events() - Disable performance events 860 + * @dmc: device for which the counters are going to be checked 861 + * 862 + * Function which stops, disables performance event counters and interrupts. 863 + */ 864 + static void exynos5_dmc_disable_perf_events(struct exynos5_dmc *dmc) 865 + { 866 + /* Stop all counters */ 867 + writel(0, dmc->base_drexi0 + DREX_PMNC_PPC); 868 + writel(0, dmc->base_drexi1 + DREX_PMNC_PPC); 869 + 870 + /* Disable interrupts for counter 2 */ 871 + writel(PERF_CNT2, dmc->base_drexi0 + DREX_INTENC_PPC); 872 + writel(PERF_CNT2, dmc->base_drexi1 + DREX_INTENC_PPC); 873 + 874 + /* Disable counter 2 and CCNT */ 875 + writel(PERF_CNT2 | PERF_CCNT, dmc->base_drexi0 + DREX_CNTENC_PPC); 876 + writel(PERF_CNT2 | PERF_CCNT, dmc->base_drexi1 + DREX_CNTENC_PPC); 877 + 878 + /* Clear overflow flag for all counters */ 879 + writel(PERF_CNT2 | PERF_CCNT, dmc->base_drexi0 + DREX_FLAG_PPC); 880 + writel(PERF_CNT2 | PERF_CCNT, dmc->base_drexi1 + DREX_FLAG_PPC); 881 + } 882 + 883 + /** 884 + * exynos5_dmc_get_status() - Read current DMC performance statistics. 885 + * @dev: device for which the statistics are requested 886 + * @stat: structure which has statistic fields 887 + * 888 + * Function reads the DMC performance counters and calculates 'busy_time' 889 + * and 'total_time'. To protect from overflow, the values are shifted right 890 + * by 10. After read out the counters are setup to count again. 
891 + */ 892 + static int exynos5_dmc_get_status(struct device *dev, 893 + struct devfreq_dev_status *stat) 894 + { 895 + struct exynos5_dmc *dmc = dev_get_drvdata(dev); 896 + unsigned long load, total; 897 + int ret; 898 + 899 + if (dmc->in_irq_mode) { 900 + stat->current_frequency = dmc->curr_rate; 901 + stat->busy_time = dmc->load; 902 + stat->total_time = dmc->total; 903 + } else { 904 + ret = exynos5_counters_get(dmc, &load, &total); 905 + if (ret < 0) 906 + return -EINVAL; 907 + 908 + /* To protect from overflow, divide by 1024 */ 909 + stat->busy_time = load >> 10; 910 + stat->total_time = total >> 10; 911 + 912 + ret = exynos5_counters_set_event(dmc); 913 + if (ret < 0) { 914 + dev_err(dev, "could not set event counter\n"); 915 + return ret; 916 + } 917 + } 918 + 919 + return 0; 920 + } 921 + 922 + /** 923 + * exynos5_dmc_get_cur_freq() - Function returns current DMC frequency 924 + * @dev: device for which the framework checks operating frequency 925 + * @freq: returned frequency value 926 + * 927 + * It returns the currently used frequency of the DMC. The real operating 928 + * frequency might be lower when the clock source value could not be divided 929 + * to the requested value. 930 + */ 931 + static int exynos5_dmc_get_cur_freq(struct device *dev, unsigned long *freq) 932 + { 933 + struct exynos5_dmc *dmc = dev_get_drvdata(dev); 934 + 935 + mutex_lock(&dmc->lock); 936 + *freq = dmc->curr_rate; 937 + mutex_unlock(&dmc->lock); 938 + 939 + return 0; 940 + } 941 + 942 + /** 943 + * exynos5_dmc_df_profile - Devfreq governor's profile structure 944 + * 945 + * It provides to the devfreq framework needed functions and polling period. 
 */
static struct devfreq_dev_profile exynos5_dmc_df_profile = {
	.target = exynos5_dmc_target,
	.get_dev_status = exynos5_dmc_get_status,
	.get_cur_freq = exynos5_dmc_get_cur_freq,
};

/**
 * exynos5_dmc_align_init_freq() - Align the initial frequency value
 * @dmc: device for which the frequency is going to be set
 * @bootloader_init_freq: initial frequency set by the bootloader in KHz
 *
 * The initial frequency set up by the bootloader might differ from the
 * frequency values supported by the driver, e.g. due to different PLL
 * settings or a different PLL being used as the clock source.
 * This function provides the 'initial_freq' for the devfreq framework's
 * statistics engine, which supports only registered values. Thus, some
 * alignment must be made.
 */
static unsigned long
exynos5_dmc_align_init_freq(struct exynos5_dmc *dmc,
			    unsigned long bootloader_init_freq)
{
	unsigned long aligned_freq;
	int idx;

	idx = find_target_freq_idx(dmc, bootloader_init_freq);
	if (idx >= 0)
		aligned_freq = dmc->opp[idx].freq_hz;
	else
		aligned_freq = dmc->opp[dmc->opp_count - 1].freq_hz;

	return aligned_freq;
}

/**
 * create_timings_aligned() - Create register values and align with standard
 * @dmc: device for which the frequency is going to be set
 * @reg_timing_row: output register value with the row timings
 * @reg_timing_data: output register value with the data timings
 * @reg_timing_power: output register value with the power timings
 * @clk_period_ps: the period of the clock, known as tCK
 *
 * The function calculates timings and creates a register value ready for
 * a frequency transition. The register contains a few timings. They are
 * shifted by a known offset. The timing value is calculated based on the
 * memory specification: minimal time required and minimal cycles required.
991 + */ 992 + static int create_timings_aligned(struct exynos5_dmc *dmc, u32 *reg_timing_row, 993 + u32 *reg_timing_data, u32 *reg_timing_power, 994 + u32 clk_period_ps) 995 + { 996 + u32 val; 997 + const struct timing_reg *reg; 998 + 999 + if (clk_period_ps == 0) 1000 + return -EINVAL; 1001 + 1002 + *reg_timing_row = 0; 1003 + *reg_timing_data = 0; 1004 + *reg_timing_power = 0; 1005 + 1006 + val = dmc->timings->tRFC / clk_period_ps; 1007 + val += dmc->timings->tRFC % clk_period_ps ? 1 : 0; 1008 + val = max(val, dmc->min_tck->tRFC); 1009 + reg = &timing_row[0]; 1010 + *reg_timing_row |= TIMING_VAL2REG(reg, val); 1011 + 1012 + val = dmc->timings->tRRD / clk_period_ps; 1013 + val += dmc->timings->tRRD % clk_period_ps ? 1 : 0; 1014 + val = max(val, dmc->min_tck->tRRD); 1015 + reg = &timing_row[1]; 1016 + *reg_timing_row |= TIMING_VAL2REG(reg, val); 1017 + 1018 + val = dmc->timings->tRPab / clk_period_ps; 1019 + val += dmc->timings->tRPab % clk_period_ps ? 1 : 0; 1020 + val = max(val, dmc->min_tck->tRPab); 1021 + reg = &timing_row[2]; 1022 + *reg_timing_row |= TIMING_VAL2REG(reg, val); 1023 + 1024 + val = dmc->timings->tRCD / clk_period_ps; 1025 + val += dmc->timings->tRCD % clk_period_ps ? 1 : 0; 1026 + val = max(val, dmc->min_tck->tRCD); 1027 + reg = &timing_row[3]; 1028 + *reg_timing_row |= TIMING_VAL2REG(reg, val); 1029 + 1030 + val = dmc->timings->tRC / clk_period_ps; 1031 + val += dmc->timings->tRC % clk_period_ps ? 1 : 0; 1032 + val = max(val, dmc->min_tck->tRC); 1033 + reg = &timing_row[4]; 1034 + *reg_timing_row |= TIMING_VAL2REG(reg, val); 1035 + 1036 + val = dmc->timings->tRAS / clk_period_ps; 1037 + val += dmc->timings->tRAS % clk_period_ps ? 1 : 0; 1038 + val = max(val, dmc->min_tck->tRAS); 1039 + reg = &timing_row[5]; 1040 + *reg_timing_row |= TIMING_VAL2REG(reg, val); 1041 + 1042 + /* data related timings */ 1043 + val = dmc->timings->tWTR / clk_period_ps; 1044 + val += dmc->timings->tWTR % clk_period_ps ? 
1 : 0; 1045 + val = max(val, dmc->min_tck->tWTR); 1046 + reg = &timing_data[0]; 1047 + *reg_timing_data |= TIMING_VAL2REG(reg, val); 1048 + 1049 + val = dmc->timings->tWR / clk_period_ps; 1050 + val += dmc->timings->tWR % clk_period_ps ? 1 : 0; 1051 + val = max(val, dmc->min_tck->tWR); 1052 + reg = &timing_data[1]; 1053 + *reg_timing_data |= TIMING_VAL2REG(reg, val); 1054 + 1055 + val = dmc->timings->tRTP / clk_period_ps; 1056 + val += dmc->timings->tRTP % clk_period_ps ? 1 : 0; 1057 + val = max(val, dmc->min_tck->tRTP); 1058 + reg = &timing_data[2]; 1059 + *reg_timing_data |= TIMING_VAL2REG(reg, val); 1060 + 1061 + val = dmc->timings->tW2W_C2C / clk_period_ps; 1062 + val += dmc->timings->tW2W_C2C % clk_period_ps ? 1 : 0; 1063 + val = max(val, dmc->min_tck->tW2W_C2C); 1064 + reg = &timing_data[3]; 1065 + *reg_timing_data |= TIMING_VAL2REG(reg, val); 1066 + 1067 + val = dmc->timings->tR2R_C2C / clk_period_ps; 1068 + val += dmc->timings->tR2R_C2C % clk_period_ps ? 1 : 0; 1069 + val = max(val, dmc->min_tck->tR2R_C2C); 1070 + reg = &timing_data[4]; 1071 + *reg_timing_data |= TIMING_VAL2REG(reg, val); 1072 + 1073 + val = dmc->timings->tWL / clk_period_ps; 1074 + val += dmc->timings->tWL % clk_period_ps ? 1 : 0; 1075 + val = max(val, dmc->min_tck->tWL); 1076 + reg = &timing_data[5]; 1077 + *reg_timing_data |= TIMING_VAL2REG(reg, val); 1078 + 1079 + val = dmc->timings->tDQSCK / clk_period_ps; 1080 + val += dmc->timings->tDQSCK % clk_period_ps ? 1 : 0; 1081 + val = max(val, dmc->min_tck->tDQSCK); 1082 + reg = &timing_data[6]; 1083 + *reg_timing_data |= TIMING_VAL2REG(reg, val); 1084 + 1085 + val = dmc->timings->tRL / clk_period_ps; 1086 + val += dmc->timings->tRL % clk_period_ps ? 1 : 0; 1087 + val = max(val, dmc->min_tck->tRL); 1088 + reg = &timing_data[7]; 1089 + *reg_timing_data |= TIMING_VAL2REG(reg, val); 1090 + 1091 + /* power related timings */ 1092 + val = dmc->timings->tFAW / clk_period_ps; 1093 + val += dmc->timings->tFAW % clk_period_ps ? 
1 : 0;
	val = max(val, dmc->min_tck->tFAW);
	reg = &timing_power[0];
	*reg_timing_power |= TIMING_VAL2REG(reg, val);

	val = dmc->timings->tXSR / clk_period_ps;
	val += dmc->timings->tXSR % clk_period_ps ? 1 : 0;
	val = max(val, dmc->min_tck->tXSR);
	reg = &timing_power[1];
	*reg_timing_power |= TIMING_VAL2REG(reg, val);

	val = dmc->timings->tXP / clk_period_ps;
	val += dmc->timings->tXP % clk_period_ps ? 1 : 0;
	val = max(val, dmc->min_tck->tXP);
	reg = &timing_power[2];
	*reg_timing_power |= TIMING_VAL2REG(reg, val);

	val = dmc->timings->tCKE / clk_period_ps;
	val += dmc->timings->tCKE % clk_period_ps ? 1 : 0;
	val = max(val, dmc->min_tck->tCKE);
	reg = &timing_power[3];
	*reg_timing_power |= TIMING_VAL2REG(reg, val);

	val = dmc->timings->tMRD / clk_period_ps;
	val += dmc->timings->tMRD % clk_period_ps ? 1 : 0;
	val = max(val, dmc->min_tck->tMRD);
	reg = &timing_power[4];
	*reg_timing_power |= TIMING_VAL2REG(reg, val);

	return 0;
}

/**
 * of_get_dram_timings() - helper function for parsing DT settings for DRAM
 * @dmc: device for which the frequency is going to be set
 *
 * The function parses DT entries with DRAM information.
 */
static int of_get_dram_timings(struct exynos5_dmc *dmc)
{
	int ret = 0;
	int idx;
	struct device_node *np_ddr;
	u32 freq_mhz, clk_period_ps;

	np_ddr = of_parse_phandle(dmc->dev->of_node, "device-handle", 0);
	if (!np_ddr) {
		dev_warn(dmc->dev, "could not find 'device-handle' in DT\n");
		return -EINVAL;
	}

	dmc->timing_row = devm_kmalloc_array(dmc->dev, TIMING_COUNT,
					     sizeof(u32), GFP_KERNEL);
	if (!dmc->timing_row) {
		of_node_put(np_ddr);
		return -ENOMEM;
	}

	dmc->timing_data = devm_kmalloc_array(dmc->dev, TIMING_COUNT,
					      sizeof(u32), GFP_KERNEL);
	if (!dmc->timing_data) {
		of_node_put(np_ddr);
		return -ENOMEM;
	}

	dmc->timing_power = devm_kmalloc_array(dmc->dev, TIMING_COUNT,
					       sizeof(u32), GFP_KERNEL);
	if (!dmc->timing_power) {
		of_node_put(np_ddr);
		return -ENOMEM;
	}

	dmc->timings = of_lpddr3_get_ddr_timings(np_ddr, dmc->dev,
						 DDR_TYPE_LPDDR3,
						 &dmc->timings_arr_size);
	if (!dmc->timings) {
		of_node_put(np_ddr);
		dev_warn(dmc->dev, "could not get timings from DT\n");
		return -EINVAL;
	}

	dmc->min_tck = of_lpddr3_get_min_tck(np_ddr, dmc->dev);
	if (!dmc->min_tck) {
		of_node_put(np_ddr);
		dev_warn(dmc->dev, "could not get tck from DT\n");
		return -EINVAL;
	}

	/* Sorted array of OPPs with frequency ascending */
	for (idx = 0; idx < dmc->opp_count; idx++) {
		freq_mhz = dmc->opp[idx].freq_hz / 1000000;
		clk_period_ps = 1000000 / freq_mhz;

		ret = create_timings_aligned(dmc, &dmc->timing_row[idx],
					     &dmc->timing_data[idx],
					     &dmc->timing_power[idx],
					     clk_period_ps);
	}

	of_node_put(np_ddr);

	/* Take the highest frequency's timings as 'bypass' */
	dmc->bypass_timing_row = dmc->timing_row[idx - 1];
	dmc->bypass_timing_data = dmc->timing_data[idx - 1];
dmc->bypass_timing_power = dmc->timing_power[idx - 1]; 1192 + 1193 + return ret; 1194 + } 1195 + 1196 + /** 1197 + * exynos5_dmc_init_clks() - Initialize clocks needed for DMC operation. 1198 + * @dmc: DMC structure containing needed fields 1199 + * 1200 + * Get the needed clocks defined in DT device, enable and set the right parents. 1201 + * Read current frequency and initialize the initial rate for governor. 1202 + */ 1203 + static int exynos5_dmc_init_clks(struct exynos5_dmc *dmc) 1204 + { 1205 + int ret; 1206 + unsigned long target_volt = 0; 1207 + unsigned long target_rate = 0; 1208 + unsigned int tmp; 1209 + 1210 + dmc->fout_spll = devm_clk_get(dmc->dev, "fout_spll"); 1211 + if (IS_ERR(dmc->fout_spll)) 1212 + return PTR_ERR(dmc->fout_spll); 1213 + 1214 + dmc->fout_bpll = devm_clk_get(dmc->dev, "fout_bpll"); 1215 + if (IS_ERR(dmc->fout_bpll)) 1216 + return PTR_ERR(dmc->fout_bpll); 1217 + 1218 + dmc->mout_mclk_cdrex = devm_clk_get(dmc->dev, "mout_mclk_cdrex"); 1219 + if (IS_ERR(dmc->mout_mclk_cdrex)) 1220 + return PTR_ERR(dmc->mout_mclk_cdrex); 1221 + 1222 + dmc->mout_bpll = devm_clk_get(dmc->dev, "mout_bpll"); 1223 + if (IS_ERR(dmc->mout_bpll)) 1224 + return PTR_ERR(dmc->mout_bpll); 1225 + 1226 + dmc->mout_mx_mspll_ccore = devm_clk_get(dmc->dev, 1227 + "mout_mx_mspll_ccore"); 1228 + if (IS_ERR(dmc->mout_mx_mspll_ccore)) 1229 + return PTR_ERR(dmc->mout_mx_mspll_ccore); 1230 + 1231 + dmc->mout_spll = devm_clk_get(dmc->dev, "ff_dout_spll2"); 1232 + if (IS_ERR(dmc->mout_spll)) { 1233 + dmc->mout_spll = devm_clk_get(dmc->dev, "mout_sclk_spll"); 1234 + if (IS_ERR(dmc->mout_spll)) 1235 + return PTR_ERR(dmc->mout_spll); 1236 + } 1237 + 1238 + /* 1239 + * Convert frequency to KHz values and set it for the governor. 
 */
	dmc->curr_rate = clk_get_rate(dmc->mout_mclk_cdrex);
	dmc->curr_rate = exynos5_dmc_align_init_freq(dmc, dmc->curr_rate);
	exynos5_dmc_df_profile.initial_freq = dmc->curr_rate;

	ret = exynos5_dmc_get_volt_freq(dmc, &dmc->curr_rate, &target_rate,
					&target_volt, 0);
	if (ret)
		return ret;

	dmc->curr_volt = target_volt;

	clk_set_parent(dmc->mout_mx_mspll_ccore, dmc->mout_spll);

	dmc->bypass_rate = clk_get_rate(dmc->mout_mx_mspll_ccore);

	clk_prepare_enable(dmc->fout_bpll);
	clk_prepare_enable(dmc->mout_bpll);

	/*
	 * Some bootloaders do not set clock routes correctly.
	 * Stop one path in clocks to PHY.
	 */
	regmap_read(dmc->clk_regmap, CDREX_LPDDR3PHY_CLKM_SRC, &tmp);
	tmp &= ~(BIT(1) | BIT(0));
	regmap_write(dmc->clk_regmap, CDREX_LPDDR3PHY_CLKM_SRC, tmp);

	return 0;
}

/**
 * exynos5_performance_counters_init() - Initialize DMC's performance counters
 * @dmc: DMC for which the setup is done
 *
 * Initialization of the performance counters in the DMC for estimating usage.
 * The counters' values are used to calculate the memory bandwidth, and based
 * on that the governor changes the frequency.
 * The counters are not used when the governor is GOVERNOR_USERSPACE.
 */
static int exynos5_performance_counters_init(struct exynos5_dmc *dmc)
{
	int counters_size;
	int ret, i;

	dmc->num_counters = devfreq_event_get_edev_count(dmc->dev);
	if (dmc->num_counters < 0) {
		dev_err(dmc->dev, "could not get devfreq-event counters\n");
		return dmc->num_counters;
	}

	counters_size = sizeof(struct devfreq_event_dev) * dmc->num_counters;
	dmc->counter = devm_kzalloc(dmc->dev, counters_size, GFP_KERNEL);
	if (!dmc->counter)
		return -ENOMEM;

	for (i = 0; i < dmc->num_counters; i++) {
		dmc->counter[i] =
			devfreq_event_get_edev_by_phandle(dmc->dev, i);
		if (IS_ERR_OR_NULL(dmc->counter[i]))
			return -EPROBE_DEFER;
	}

	ret = exynos5_counters_enable_edev(dmc);
	if (ret < 0) {
		dev_err(dmc->dev, "could not enable event counter\n");
		return ret;
	}

	ret = exynos5_counters_set_event(dmc);
	if (ret < 0) {
		exynos5_counters_disable_edev(dmc);
		dev_err(dmc->dev, "could not set event counter\n");
		return ret;
	}

	return 0;
}

/**
 * exynos5_dmc_set_pause_on_switching() - Enable the pause feature in DMC
 * @dmc: device for which the feature is enabled
 *
 * The DREX DMC needs to be paused when a divider or MUX in the clock tree
 * changes its configuration. In that situation, access to the memory is
 * blocked by the DMC automatically. The feature is used whenever a clock
 * frequency change request touches the clock tree.
 */
static inline int exynos5_dmc_set_pause_on_switching(struct exynos5_dmc *dmc)
{
	unsigned int val;
	int ret;

	ret = regmap_read(dmc->clk_regmap, CDREX_PAUSE, &val);
	if (ret)
		return ret;

	val |= 1UL;
	regmap_write(dmc->clk_regmap, CDREX_PAUSE, val);

	return 0;
}

static irqreturn_t dmc_irq_thread(int irq, void *priv)
{
	int res;
	struct exynos5_dmc *dmc = priv;

	mutex_lock(&dmc->df->lock);

	exynos5_dmc_perf_events_check(dmc);

	res = update_devfreq(dmc->df);
	if (res)
		dev_warn(dmc->dev, "devfreq failed with %d\n", res);

	mutex_unlock(&dmc->df->lock);

	return IRQ_HANDLED;
}

/**
 * exynos5_dmc_probe() - Probe function for the DMC driver
 * @pdev: platform device for which the driver is going to be initialized
 *
 * Initialize basic components: clocks, regulators, performance counters, etc.
 * Read out the product version and, based on that information, set up the
 * internal structures for the controller (frequency and voltage) and for the
 * DRAM memory parameters: timings for each operating frequency.
 * Register a new devfreq device for controlling DVFS of the DMC.
 */
static int exynos5_dmc_probe(struct platform_device *pdev)
{
	int ret = 0;
	struct device *dev = &pdev->dev;
	struct device_node *np = dev->of_node;
	struct exynos5_dmc *dmc;
	struct resource *res;
	int irq[2];

	dmc = devm_kzalloc(dev, sizeof(*dmc), GFP_KERNEL);
	if (!dmc)
		return -ENOMEM;

	mutex_init(&dmc->lock);

	dmc->dev = dev;
	platform_set_drvdata(pdev, dmc);

	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
	dmc->base_drexi0 = devm_ioremap_resource(dev, res);
	if (IS_ERR(dmc->base_drexi0))
		return PTR_ERR(dmc->base_drexi0);

	res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
	dmc->base_drexi1 = devm_ioremap_resource(dev, res);
	if (IS_ERR(dmc->base_drexi1))
		return PTR_ERR(dmc->base_drexi1);

	dmc->clk_regmap = syscon_regmap_lookup_by_phandle(np,
							  "samsung,syscon-clk");
	if (IS_ERR(dmc->clk_regmap))
		return PTR_ERR(dmc->clk_regmap);

	ret = exynos5_init_freq_table(dmc, &exynos5_dmc_df_profile);
	if (ret) {
		dev_warn(dev, "couldn't initialize frequency settings\n");
		return ret;
	}

	dmc->vdd_mif = devm_regulator_get(dev, "vdd");
	if (IS_ERR(dmc->vdd_mif)) {
		ret = PTR_ERR(dmc->vdd_mif);
		return ret;
	}

	ret = exynos5_dmc_init_clks(dmc);
	if (ret)
		return ret;

	ret = of_get_dram_timings(dmc);
	if (ret) {
		dev_warn(dev, "couldn't initialize timings settings\n");
		goto remove_clocks;
	}

	ret = exynos5_dmc_set_pause_on_switching(dmc);
	if (ret) {
		dev_warn(dev, "couldn't get access to PAUSE register\n");
		goto remove_clocks;
	}

	/* There are two modes in which the driver works: polling or IRQ */
	irq[0] = platform_get_irq_byname(pdev, "drex_0");
	irq[1] =
platform_get_irq_byname(pdev, "drex_1"); 1435 + if (irq[0] > 0 && irq[1] > 0) { 1436 + ret = devm_request_threaded_irq(dev, irq[0], NULL, 1437 + dmc_irq_thread, IRQF_ONESHOT, 1438 + dev_name(dev), dmc); 1439 + if (ret) { 1440 + dev_err(dev, "couldn't grab IRQ\n"); 1441 + goto remove_clocks; 1442 + } 1443 + 1444 + ret = devm_request_threaded_irq(dev, irq[1], NULL, 1445 + dmc_irq_thread, IRQF_ONESHOT, 1446 + dev_name(dev), dmc); 1447 + if (ret) { 1448 + dev_err(dev, "couldn't grab IRQ\n"); 1449 + goto remove_clocks; 1450 + } 1451 + 1452 + /* 1453 + * Setup default thresholds for the devfreq governor. 1454 + * The values are chosen based on experiments. 1455 + */ 1456 + dmc->gov_data.upthreshold = 55; 1457 + dmc->gov_data.downdifferential = 5; 1458 + 1459 + exynos5_dmc_enable_perf_events(dmc); 1460 + 1461 + dmc->in_irq_mode = 1; 1462 + } else { 1463 + ret = exynos5_performance_counters_init(dmc); 1464 + if (ret) { 1465 + dev_warn(dev, "couldn't probe performance counters\n"); 1466 + goto remove_clocks; 1467 + } 1468 + 1469 + /* 1470 + * Setup default thresholds for the devfreq governor. 1471 + * The values are chosen based on experiments. 
 */
		dmc->gov_data.upthreshold = 30;
		dmc->gov_data.downdifferential = 5;

		exynos5_dmc_df_profile.polling_ms = 500;
	}

	dmc->df = devm_devfreq_add_device(dev, &exynos5_dmc_df_profile,
					  DEVFREQ_GOV_SIMPLE_ONDEMAND,
					  &dmc->gov_data);
	if (IS_ERR(dmc->df)) {
		ret = PTR_ERR(dmc->df);
		goto err_devfreq_add;
	}

	if (dmc->in_irq_mode)
		exynos5_dmc_start_perf_events(dmc, PERF_COUNTER_START_VALUE);

	dev_info(dev, "DMC initialized\n");

	return 0;

err_devfreq_add:
	if (dmc->in_irq_mode)
		exynos5_dmc_disable_perf_events(dmc);
	else
		exynos5_counters_disable_edev(dmc);
remove_clocks:
	clk_disable_unprepare(dmc->mout_bpll);
	clk_disable_unprepare(dmc->fout_bpll);

	return ret;
}

/**
 * exynos5_dmc_remove() - Remove function for the platform device
 * @pdev: platform device which is going to be removed
 *
 * The function relies on the 'devm' framework, which automatically cleans
 * up the device's resources. It only explicitly disables the performance
 * counters.
1515 + */ 1516 + static int exynos5_dmc_remove(struct platform_device *pdev) 1517 + { 1518 + struct exynos5_dmc *dmc = dev_get_drvdata(&pdev->dev); 1519 + 1520 + if (dmc->in_irq_mode) 1521 + exynos5_dmc_disable_perf_events(dmc); 1522 + else 1523 + exynos5_counters_disable_edev(dmc); 1524 + 1525 + clk_disable_unprepare(dmc->mout_bpll); 1526 + clk_disable_unprepare(dmc->fout_bpll); 1527 + 1528 + dev_pm_opp_remove_table(dmc->dev); 1529 + 1530 + return 0; 1531 + } 1532 + 1533 + static const struct of_device_id exynos5_dmc_of_match[] = { 1534 + { .compatible = "samsung,exynos5422-dmc", }, 1535 + { }, 1536 + }; 1537 + MODULE_DEVICE_TABLE(of, exynos5_dmc_of_match); 1538 + 1539 + static struct platform_driver exynos5_dmc_platdrv = { 1540 + .probe = exynos5_dmc_probe, 1541 + .remove = exynos5_dmc_remove, 1542 + .driver = { 1543 + .name = "exynos5-dmc", 1544 + .of_match_table = exynos5_dmc_of_match, 1545 + }, 1546 + }; 1547 + module_platform_driver(exynos5_dmc_platdrv); 1548 + MODULE_DESCRIPTION("Driver for Exynos5422 Dynamic Memory Controller dynamic frequency and voltage change"); 1549 + MODULE_LICENSE("GPL v2"); 1550 + MODULE_AUTHOR("Lukasz Luba");
+10
drivers/memory/tegra/Kconfig
··· 17 17 This driver is required to change memory timings / clock rate for 18 18 external memory. 19 19 20 + config TEGRA30_EMC 21 + bool "NVIDIA Tegra30 External Memory Controller driver" 22 + default y 23 + depends on TEGRA_MC && ARCH_TEGRA_3x_SOC 24 + help 25 + This driver is for the External Memory Controller (EMC) found on 26 + Tegra30 chips. The EMC controls the external DRAM on the board. 27 + This driver is required to change memory timings / clock rate for 28 + external memory. 29 + 20 30 config TEGRA124_EMC 21 31 bool "NVIDIA Tegra124 External Memory Controller driver" 22 32 default y
+1
drivers/memory/tegra/Makefile
··· 11 11 obj-$(CONFIG_TEGRA_MC) += tegra-mc.o 12 12 13 13 obj-$(CONFIG_TEGRA20_EMC) += tegra20-emc.o 14 + obj-$(CONFIG_TEGRA30_EMC) += tegra30-emc.o 14 15 obj-$(CONFIG_TEGRA124_EMC) += tegra124-emc.o 15 16 obj-$(CONFIG_ARCH_TEGRA_186_SOC) += tegra186.o
+17 -35
drivers/memory/tegra/mc.c
··· 5 5 6 6 #include <linux/clk.h> 7 7 #include <linux/delay.h> 8 + #include <linux/dma-mapping.h> 8 9 #include <linux/interrupt.h> 9 10 #include <linux/kernel.h> 10 11 #include <linux/module.h> ··· 18 17 #include <soc/tegra/fuse.h> 19 18 20 19 #include "mc.h" 21 - 22 - #define MC_INTSTATUS 0x000 23 - 24 - #define MC_INTMASK 0x004 25 - 26 - #define MC_ERR_STATUS 0x08 27 - #define MC_ERR_STATUS_TYPE_SHIFT 28 28 - #define MC_ERR_STATUS_TYPE_INVALID_SMMU_PAGE (6 << MC_ERR_STATUS_TYPE_SHIFT) 29 - #define MC_ERR_STATUS_TYPE_MASK (0x7 << MC_ERR_STATUS_TYPE_SHIFT) 30 - #define MC_ERR_STATUS_READABLE (1 << 27) 31 - #define MC_ERR_STATUS_WRITABLE (1 << 26) 32 - #define MC_ERR_STATUS_NONSECURE (1 << 25) 33 - #define MC_ERR_STATUS_ADR_HI_SHIFT 20 34 - #define MC_ERR_STATUS_ADR_HI_MASK 0x3 35 - #define MC_ERR_STATUS_SECURITY (1 << 17) 36 - #define MC_ERR_STATUS_RW (1 << 16) 37 - 38 - #define MC_ERR_ADR 0x0c 39 - 40 - #define MC_GART_ERROR_REQ 0x30 41 - #define MC_DECERR_EMEM_OTHERS_STATUS 0x58 42 - #define MC_SECURITY_VIOLATION_STATUS 0x74 43 - 44 - #define MC_EMEM_ARB_CFG 0x90 45 - #define MC_EMEM_ARB_CFG_CYCLES_PER_UPDATE(x) (((x) & 0x1ff) << 0) 46 - #define MC_EMEM_ARB_CFG_CYCLES_PER_UPDATE_MASK 0x1ff 47 - #define MC_EMEM_ARB_MISC0 0xd8 48 - 49 - #define MC_EMEM_ADR_CFG 0x54 50 - #define MC_EMEM_ADR_CFG_EMEM_NUMDEV BIT(0) 51 - 52 - #define MC_TIMING_CONTROL 0xfc 53 - #define MC_TIMING_UPDATE BIT(0) 54 20 55 21 static const struct of_device_id tegra_mc_of_match[] = { 56 22 #ifdef CONFIG_ARCH_TEGRA_2x_SOC ··· 275 307 return 0; 276 308 } 277 309 278 - void tegra_mc_write_emem_configuration(struct tegra_mc *mc, unsigned long rate) 310 + int tegra_mc_write_emem_configuration(struct tegra_mc *mc, unsigned long rate) 279 311 { 280 312 unsigned int i; 281 313 struct tegra_mc_timing *timing = NULL; ··· 290 322 if (!timing) { 291 323 dev_err(mc->dev, "no memory timing registered for rate %lu\n", 292 324 rate); 293 - return; 325 + return -EINVAL; 294 326 } 295 327 296 328 for (i = 0; 
i < mc->soc->num_emem_regs; ++i) 297 329 mc_writel(mc, timing->emem_data[i], mc->soc->emem_regs[i]); 330 + 331 + return 0; 298 332 } 299 333 300 334 unsigned int tegra_mc_get_emem_device_count(struct tegra_mc *mc) ··· 596 626 struct resource *res; 597 627 struct tegra_mc *mc; 598 628 void *isr; 629 + u64 mask; 599 630 int err; 600 631 601 632 mc = devm_kzalloc(&pdev->dev, sizeof(*mc), GFP_KERNEL); ··· 607 636 spin_lock_init(&mc->lock); 608 637 mc->soc = of_device_get_match_data(&pdev->dev); 609 638 mc->dev = &pdev->dev; 639 + 640 + mask = DMA_BIT_MASK(mc->soc->num_address_bits); 641 + 642 + err = dma_coerce_mask_and_coherent(&pdev->dev, mask); 643 + if (err < 0) { 644 + dev_err(&pdev->dev, "failed to set DMA mask: %d\n", err); 645 + return err; 646 + } 610 647 611 648 /* length of MC tick in nanoseconds */ 612 649 mc->tick = 30; ··· 637 658 } else 638 659 #endif 639 660 { 661 + /* ensure that debug features are disabled */ 662 + mc_writel(mc, 0x00000000, MC_TIMING_CONTROL_DBG); 663 + 640 664 err = tegra_mc_setup_latency_allowance(mc); 641 665 if (err < 0) { 642 666 dev_err(&pdev->dev,
+65 -9
drivers/memory/tegra/mc.h
··· 6 6 #ifndef MEMORY_TEGRA_MC_H 7 7 #define MEMORY_TEGRA_MC_H 8 8 9 + #include <linux/bits.h> 9 10 #include <linux/io.h> 10 11 #include <linux/types.h> 11 12 12 13 #include <soc/tegra/mc.h> 13 14 14 - #define MC_INT_DECERR_MTS (1 << 16) 15 - #define MC_INT_SECERR_SEC (1 << 13) 16 - #define MC_INT_DECERR_VPR (1 << 12) 17 - #define MC_INT_INVALID_APB_ASID_UPDATE (1 << 11) 18 - #define MC_INT_INVALID_SMMU_PAGE (1 << 10) 19 - #define MC_INT_ARBITRATION_EMEM (1 << 9) 20 - #define MC_INT_SECURITY_VIOLATION (1 << 8) 21 - #define MC_INT_INVALID_GART_PAGE (1 << 7) 22 - #define MC_INT_DECERR_EMEM (1 << 6) 15 + #define MC_INTSTATUS 0x00 16 + #define MC_INTMASK 0x04 17 + #define MC_ERR_STATUS 0x08 18 + #define MC_ERR_ADR 0x0c 19 + #define MC_GART_ERROR_REQ 0x30 20 + #define MC_EMEM_ADR_CFG 0x54 21 + #define MC_DECERR_EMEM_OTHERS_STATUS 0x58 22 + #define MC_SECURITY_VIOLATION_STATUS 0x74 23 + #define MC_EMEM_ARB_CFG 0x90 24 + #define MC_EMEM_ARB_OUTSTANDING_REQ 0x94 25 + #define MC_EMEM_ARB_TIMING_RCD 0x98 26 + #define MC_EMEM_ARB_TIMING_RP 0x9c 27 + #define MC_EMEM_ARB_TIMING_RC 0xa0 28 + #define MC_EMEM_ARB_TIMING_RAS 0xa4 29 + #define MC_EMEM_ARB_TIMING_FAW 0xa8 30 + #define MC_EMEM_ARB_TIMING_RRD 0xac 31 + #define MC_EMEM_ARB_TIMING_RAP2PRE 0xb0 32 + #define MC_EMEM_ARB_TIMING_WAP2PRE 0xb4 33 + #define MC_EMEM_ARB_TIMING_R2R 0xb8 34 + #define MC_EMEM_ARB_TIMING_W2W 0xbc 35 + #define MC_EMEM_ARB_TIMING_R2W 0xc0 36 + #define MC_EMEM_ARB_TIMING_W2R 0xc4 37 + #define MC_EMEM_ARB_DA_TURNS 0xd0 38 + #define MC_EMEM_ARB_DA_COVERS 0xd4 39 + #define MC_EMEM_ARB_MISC0 0xd8 40 + #define MC_EMEM_ARB_MISC1 0xdc 41 + #define MC_EMEM_ARB_RING1_THROTTLE 0xe0 42 + #define MC_EMEM_ARB_OVERRIDE 0xe8 43 + #define MC_TIMING_CONTROL_DBG 0xf8 44 + #define MC_TIMING_CONTROL 0xfc 45 + 46 + #define MC_INT_DECERR_MTS BIT(16) 47 + #define MC_INT_SECERR_SEC BIT(13) 48 + #define MC_INT_DECERR_VPR BIT(12) 49 + #define MC_INT_INVALID_APB_ASID_UPDATE BIT(11) 50 + #define MC_INT_INVALID_SMMU_PAGE BIT(10) 
51 + #define MC_INT_ARBITRATION_EMEM BIT(9) 52 + #define MC_INT_SECURITY_VIOLATION BIT(8) 53 + #define MC_INT_INVALID_GART_PAGE BIT(7) 54 + #define MC_INT_DECERR_EMEM BIT(6) 55 + 56 + #define MC_ERR_STATUS_TYPE_SHIFT 28 57 + #define MC_ERR_STATUS_TYPE_INVALID_SMMU_PAGE (0x6 << 28) 58 + #define MC_ERR_STATUS_TYPE_MASK (0x7 << 28) 59 + #define MC_ERR_STATUS_READABLE BIT(27) 60 + #define MC_ERR_STATUS_WRITABLE BIT(26) 61 + #define MC_ERR_STATUS_NONSECURE BIT(25) 62 + #define MC_ERR_STATUS_ADR_HI_SHIFT 20 63 + #define MC_ERR_STATUS_ADR_HI_MASK 0x3 64 + #define MC_ERR_STATUS_SECURITY BIT(17) 65 + #define MC_ERR_STATUS_RW BIT(16) 66 + 67 + #define MC_EMEM_ADR_CFG_EMEM_NUMDEV BIT(0) 68 + 69 + #define MC_EMEM_ARB_CFG_CYCLES_PER_UPDATE(x) ((x) & 0x1ff) 70 + #define MC_EMEM_ARB_CFG_CYCLES_PER_UPDATE_MASK 0x1ff 71 + 72 + #define MC_EMEM_ARB_OUTSTANDING_REQ_MAX_MASK 0x1ff 73 + #define MC_EMEM_ARB_OUTSTANDING_REQ_HOLDOFF_OVERRIDE BIT(30) 74 + #define MC_EMEM_ARB_OUTSTANDING_REQ_LIMIT_ENABLE BIT(31) 75 + 76 + #define MC_EMEM_ARB_OVERRIDE_EACK_MASK 0x3 77 + 78 + #define MC_TIMING_UPDATE BIT(0) 23 79 24 80 static inline u32 mc_readl(struct tegra_mc *mc, unsigned long offset) 25 81 {
+6 -4
drivers/memory/tegra/tegra114.c
··· 909 909 { .name = "tsec", .swgroup = TEGRA_SWGROUP_TSEC, .reg = 0x294 }, 910 910 }; 911 911 912 - static const unsigned int tegra114_group_display[] = { 912 + static const unsigned int tegra114_group_drm[] = { 913 913 TEGRA_SWGROUP_DC, 914 914 TEGRA_SWGROUP_DCB, 915 + TEGRA_SWGROUP_G2, 916 + TEGRA_SWGROUP_NV, 915 917 }; 916 918 917 919 static const struct tegra_smmu_group_soc tegra114_groups[] = { 918 920 { 919 - .name = "display", 920 - .swgroups = tegra114_group_display, 921 - .num_swgroups = ARRAY_SIZE(tegra114_group_display), 921 + .name = "drm", 922 + .swgroups = tegra114_group_drm, 923 + .num_swgroups = ARRAY_SIZE(tegra114_group_drm), 922 924 }, 923 925 }; 924 926
+6 -24
drivers/memory/tegra/tegra124.c
··· 10 10 11 11 #include "mc.h" 12 12 13 - #define MC_EMEM_ARB_CFG 0x90 14 - #define MC_EMEM_ARB_OUTSTANDING_REQ 0x94 15 - #define MC_EMEM_ARB_TIMING_RCD 0x98 16 - #define MC_EMEM_ARB_TIMING_RP 0x9c 17 - #define MC_EMEM_ARB_TIMING_RC 0xa0 18 - #define MC_EMEM_ARB_TIMING_RAS 0xa4 19 - #define MC_EMEM_ARB_TIMING_FAW 0xa8 20 - #define MC_EMEM_ARB_TIMING_RRD 0xac 21 - #define MC_EMEM_ARB_TIMING_RAP2PRE 0xb0 22 - #define MC_EMEM_ARB_TIMING_WAP2PRE 0xb4 23 - #define MC_EMEM_ARB_TIMING_R2R 0xb8 24 - #define MC_EMEM_ARB_TIMING_W2W 0xbc 25 - #define MC_EMEM_ARB_TIMING_R2W 0xc0 26 - #define MC_EMEM_ARB_TIMING_W2R 0xc4 27 - #define MC_EMEM_ARB_DA_TURNS 0xd0 28 - #define MC_EMEM_ARB_DA_COVERS 0xd4 29 - #define MC_EMEM_ARB_MISC0 0xd8 30 - #define MC_EMEM_ARB_MISC1 0xdc 31 - #define MC_EMEM_ARB_RING1_THROTTLE 0xe0 32 - 33 13 static const struct tegra_mc_client tegra124_mc_clients[] = { 34 14 { 35 15 .id = 0x00, ··· 954 974 { .name = "vi", .swgroup = TEGRA_SWGROUP_VI, .reg = 0x280 }, 955 975 }; 956 976 957 - static const unsigned int tegra124_group_display[] = { 977 + static const unsigned int tegra124_group_drm[] = { 958 978 TEGRA_SWGROUP_DC, 959 979 TEGRA_SWGROUP_DCB, 980 + TEGRA_SWGROUP_GPU, 981 + TEGRA_SWGROUP_VIC, 960 982 }; 961 983 962 984 static const struct tegra_smmu_group_soc tegra124_groups[] = { 963 985 { 964 - .name = "display", 965 - .swgroups = tegra124_group_display, 966 - .num_swgroups = ARRAY_SIZE(tegra124_group_display), 986 + .name = "drm", 987 + .swgroups = tegra124_group_drm, 988 + .num_swgroups = ARRAY_SIZE(tegra124_group_drm), 967 989 }, 968 990 }; 969 991
+63 -75
drivers/memory/tegra/tegra20-emc.c
··· 6 6 */ 7 7 8 8 #include <linux/clk.h> 9 + #include <linux/clk/tegra.h> 9 10 #include <linux/completion.h> 10 11 #include <linux/err.h> 11 12 #include <linux/interrupt.h> 12 - #include <linux/iopoll.h> 13 + #include <linux/io.h> 13 14 #include <linux/kernel.h> 14 15 #include <linux/module.h> 15 16 #include <linux/of.h> ··· 22 21 23 22 #define EMC_INTSTATUS 0x000 24 23 #define EMC_INTMASK 0x004 24 + #define EMC_DBG 0x008 25 25 #define EMC_TIMING_CONTROL 0x028 26 26 #define EMC_RC 0x02c 27 27 #define EMC_RFC 0x030 ··· 80 78 81 79 #define EMC_REFRESH_OVERFLOW_INT BIT(3) 82 80 #define EMC_CLKCHANGE_COMPLETE_INT BIT(4) 81 + 82 + #define EMC_DBG_READ_MUX_ASSEMBLY BIT(0) 83 + #define EMC_DBG_WRITE_MUX_ACTIVE BIT(1) 84 + #define EMC_DBG_FORCE_UPDATE BIT(2) 85 + #define EMC_DBG_READ_DQM_CTRL BIT(9) 86 + #define EMC_DBG_CFG_PRIORITY BIT(24) 83 87 84 88 static const u16 emc_timing_registers[] = { 85 89 EMC_RC, ··· 145 137 struct device *dev; 146 138 struct completion clk_handshake_complete; 147 139 struct notifier_block clk_nb; 148 - struct clk *backup_clk; 149 - struct clk *emc_mux; 150 - struct clk *pll_m; 151 140 struct clk *clk; 152 141 void __iomem *regs; 153 142 ··· 224 219 225 220 static int emc_complete_timing_change(struct tegra_emc *emc, bool flush) 226 221 { 227 - long timeout; 222 + unsigned long timeout; 228 223 229 224 dev_dbg(emc->dev, "%s: flush %d\n", __func__, flush); 230 225 ··· 236 231 } 237 232 238 233 timeout = wait_for_completion_timeout(&emc->clk_handshake_complete, 239 - usecs_to_jiffies(100)); 234 + msecs_to_jiffies(100)); 240 235 if (timeout == 0) { 241 236 dev_err(emc->dev, "EMC-CAR handshake failed\n"); 242 237 return -EIO; 243 - } else if (timeout < 0) { 244 - dev_err(emc->dev, "failed to wait for EMC-CAR handshake: %ld\n", 245 - timeout); 246 - return timeout; 247 238 } 248 239 249 240 return 0; ··· 364 363 sort(emc->timings, emc->num_timings, sizeof(*timing), cmp_timings, 365 364 NULL); 366 365 366 + dev_info(emc->dev, 367 + "got %u timings 
for RAM code %u (min %luMHz max %luMHz)\n", 368 + emc->num_timings, 369 + tegra_read_ram_code(), 370 + emc->timings[0].rate / 1000000, 371 + emc->timings[emc->num_timings - 1].rate / 1000000); 372 + 367 373 return 0; 368 374 } 369 375 ··· 406 398 static int emc_setup_hw(struct tegra_emc *emc) 407 399 { 408 400 u32 intmask = EMC_REFRESH_OVERFLOW_INT | EMC_CLKCHANGE_COMPLETE_INT; 409 - u32 emc_cfg; 401 + u32 emc_cfg, emc_dbg; 410 402 411 403 emc_cfg = readl_relaxed(emc->regs + EMC_CFG_2); 412 404 ··· 429 421 writel_relaxed(intmask, emc->regs + EMC_INTMASK); 430 422 writel_relaxed(intmask, emc->regs + EMC_INTSTATUS); 431 423 424 + /* ensure that unwanted debug features are disabled */ 425 + emc_dbg = readl_relaxed(emc->regs + EMC_DBG); 426 + emc_dbg |= EMC_DBG_CFG_PRIORITY; 427 + emc_dbg &= ~EMC_DBG_READ_MUX_ASSEMBLY; 428 + emc_dbg &= ~EMC_DBG_WRITE_MUX_ACTIVE; 429 + emc_dbg &= ~EMC_DBG_FORCE_UPDATE; 430 + writel_relaxed(emc_dbg, emc->regs + EMC_DBG); 431 + 432 432 return 0; 433 433 } 434 434 435 - static int emc_init(struct tegra_emc *emc, unsigned long rate) 435 + static long emc_round_rate(unsigned long rate, 436 + unsigned long min_rate, 437 + unsigned long max_rate, 438 + void *arg) 436 439 { 437 - int err; 440 + struct emc_timing *timing = NULL; 441 + struct tegra_emc *emc = arg; 442 + unsigned int i; 438 443 439 - err = clk_set_parent(emc->emc_mux, emc->backup_clk); 440 - if (err) { 441 - dev_err(emc->dev, 442 - "failed to reparent to backup source: %d\n", err); 443 - return err; 444 + min_rate = min(min_rate, emc->timings[emc->num_timings - 1].rate); 445 + 446 + for (i = 0; i < emc->num_timings; i++) { 447 + if (emc->timings[i].rate < rate && i != emc->num_timings - 1) 448 + continue; 449 + 450 + if (emc->timings[i].rate > max_rate) { 451 + i = max(i, 1u) - 1; 452 + 453 + if (emc->timings[i].rate < min_rate) 454 + break; 455 + } 456 + 457 + if (emc->timings[i].rate < min_rate) 458 + continue; 459 + 460 + timing = &emc->timings[i]; 461 + break; 444 462 } 445 
463 446 - err = clk_set_rate(emc->pll_m, rate); 447 - if (err) { 448 - dev_err(emc->dev, 449 - "failed to change pll_m rate: %d\n", err); 450 - return err; 464 + if (!timing) { 465 + dev_err(emc->dev, "no timing for rate %lu min %lu max %lu\n", 466 + rate, min_rate, max_rate); 467 + return -EINVAL; 451 468 } 452 469 453 - err = clk_set_parent(emc->emc_mux, emc->pll_m); 454 - if (err) { 455 - dev_err(emc->dev, 456 - "failed to reparent to pll_m: %d\n", err); 457 - return err; 458 - } 459 - 460 - err = clk_set_rate(emc->clk, rate); 461 - if (err) { 462 - dev_err(emc->dev, 463 - "failed to change emc rate: %d\n", err); 464 - return err; 465 - } 466 - 467 - return 0; 470 + return timing->rate; 468 471 } 469 472 470 473 static int tegra_emc_probe(struct platform_device *pdev) ··· 534 515 return err; 535 516 } 536 517 518 + tegra20_clk_set_emc_round_callback(emc_round_rate, emc); 519 + 537 520 emc->clk = devm_clk_get(&pdev->dev, "emc"); 538 521 if (IS_ERR(emc->clk)) { 539 522 err = PTR_ERR(emc->clk); 540 523 dev_err(&pdev->dev, "failed to get emc clock: %d\n", err); 541 - return err; 542 - } 543 - 544 - emc->pll_m = clk_get_sys(NULL, "pll_m"); 545 - if (IS_ERR(emc->pll_m)) { 546 - err = PTR_ERR(emc->pll_m); 547 - dev_err(&pdev->dev, "failed to get pll_m clock: %d\n", err); 548 - return err; 549 - } 550 - 551 - emc->backup_clk = clk_get_sys(NULL, "pll_p"); 552 - if (IS_ERR(emc->backup_clk)) { 553 - err = PTR_ERR(emc->backup_clk); 554 - dev_err(&pdev->dev, "failed to get pll_p clock: %d\n", err); 555 - goto put_pll_m; 556 - } 557 - 558 - emc->emc_mux = clk_get_parent(emc->clk); 559 - if (IS_ERR(emc->emc_mux)) { 560 - err = PTR_ERR(emc->emc_mux); 561 - dev_err(&pdev->dev, "failed to get emc_mux clock: %d\n", err); 562 - goto put_backup; 524 + goto unset_cb; 563 525 } 564 526 565 527 err = clk_notifier_register(emc->clk, &emc->clk_nb); 566 528 if (err) { 567 529 dev_err(&pdev->dev, "failed to register clk notifier: %d\n", 568 530 err); 569 - goto put_backup; 570 - } 571 - 
572 - /* set DRAM clock rate to maximum */ 573 - err = emc_init(emc, emc->timings[emc->num_timings - 1].rate); 574 - if (err) { 575 - dev_err(&pdev->dev, "failed to initialize EMC clock rate: %d\n", 576 - err); 577 - goto unreg_notifier; 531 + goto unset_cb; 578 532 } 579 533 580 534 return 0; 581 535 582 - unreg_notifier: 583 - clk_notifier_unregister(emc->clk, &emc->clk_nb); 584 - put_backup: 585 - clk_put(emc->backup_clk); 586 - put_pll_m: 587 - clk_put(emc->pll_m); 536 + unset_cb: 537 + tegra20_clk_set_emc_round_callback(NULL, NULL); 588 538 589 539 return err; 590 540 }
drivers/memory/tegra/tegra30-emc.c (new file, 1232 additions)
··· 1 + // SPDX-License-Identifier: GPL-2.0+ 2 + /* 3 + * Tegra30 External Memory Controller driver 4 + * 5 + * Based on downstream driver from NVIDIA and tegra124-emc.c 6 + * Copyright (C) 2011-2014 NVIDIA Corporation 7 + * 8 + * Author: Dmitry Osipenko <digetx@gmail.com> 9 + * Copyright (C) 2019 GRATE-DRIVER project 10 + */ 11 + 12 + #include <linux/clk.h> 13 + #include <linux/clk/tegra.h> 14 + #include <linux/completion.h> 15 + #include <linux/delay.h> 16 + #include <linux/err.h> 17 + #include <linux/interrupt.h> 18 + #include <linux/io.h> 19 + #include <linux/iopoll.h> 20 + #include <linux/kernel.h> 21 + #include <linux/module.h> 22 + #include <linux/of_platform.h> 23 + #include <linux/platform_device.h> 24 + #include <linux/sort.h> 25 + #include <linux/types.h> 26 + 27 + #include <soc/tegra/fuse.h> 28 + 29 + #include "mc.h" 30 + 31 + #define EMC_INTSTATUS 0x000 32 + #define EMC_INTMASK 0x004 33 + #define EMC_DBG 0x008 34 + #define EMC_CFG 0x00c 35 + #define EMC_REFCTRL 0x020 36 + #define EMC_TIMING_CONTROL 0x028 37 + #define EMC_RC 0x02c 38 + #define EMC_RFC 0x030 39 + #define EMC_RAS 0x034 40 + #define EMC_RP 0x038 41 + #define EMC_R2W 0x03c 42 + #define EMC_W2R 0x040 43 + #define EMC_R2P 0x044 44 + #define EMC_W2P 0x048 45 + #define EMC_RD_RCD 0x04c 46 + #define EMC_WR_RCD 0x050 47 + #define EMC_RRD 0x054 48 + #define EMC_REXT 0x058 49 + #define EMC_WDV 0x05c 50 + #define EMC_QUSE 0x060 51 + #define EMC_QRST 0x064 52 + #define EMC_QSAFE 0x068 53 + #define EMC_RDV 0x06c 54 + #define EMC_REFRESH 0x070 55 + #define EMC_BURST_REFRESH_NUM 0x074 56 + #define EMC_PDEX2WR 0x078 57 + #define EMC_PDEX2RD 0x07c 58 + #define EMC_PCHG2PDEN 0x080 59 + #define EMC_ACT2PDEN 0x084 60 + #define EMC_AR2PDEN 0x088 61 + #define EMC_RW2PDEN 0x08c 62 + #define EMC_TXSR 0x090 63 + #define EMC_TCKE 0x094 64 + #define EMC_TFAW 0x098 65 + #define EMC_TRPAB 0x09c 66 + #define EMC_TCLKSTABLE 0x0a0 67 + #define EMC_TCLKSTOP 0x0a4 68 + #define EMC_TREFBW 0x0a8 69 + #define EMC_QUSE_EXTRA 
0x0ac 70 + #define EMC_ODT_WRITE 0x0b0 71 + #define EMC_ODT_READ 0x0b4 72 + #define EMC_WEXT 0x0b8 73 + #define EMC_CTT 0x0bc 74 + #define EMC_MRS_WAIT_CNT 0x0c8 75 + #define EMC_MRS 0x0cc 76 + #define EMC_EMRS 0x0d0 77 + #define EMC_SELF_REF 0x0e0 78 + #define EMC_MRW 0x0e8 79 + #define EMC_XM2DQSPADCTRL3 0x0f8 80 + #define EMC_FBIO_SPARE 0x100 81 + #define EMC_FBIO_CFG5 0x104 82 + #define EMC_FBIO_CFG6 0x114 83 + #define EMC_CFG_RSV 0x120 84 + #define EMC_AUTO_CAL_CONFIG 0x2a4 85 + #define EMC_AUTO_CAL_INTERVAL 0x2a8 86 + #define EMC_AUTO_CAL_STATUS 0x2ac 87 + #define EMC_STATUS 0x2b4 88 + #define EMC_CFG_2 0x2b8 89 + #define EMC_CFG_DIG_DLL 0x2bc 90 + #define EMC_CFG_DIG_DLL_PERIOD 0x2c0 91 + #define EMC_CTT_DURATION 0x2d8 92 + #define EMC_CTT_TERM_CTRL 0x2dc 93 + #define EMC_ZCAL_INTERVAL 0x2e0 94 + #define EMC_ZCAL_WAIT_CNT 0x2e4 95 + #define EMC_ZQ_CAL 0x2ec 96 + #define EMC_XM2CMDPADCTRL 0x2f0 97 + #define EMC_XM2DQSPADCTRL2 0x2fc 98 + #define EMC_XM2DQPADCTRL2 0x304 99 + #define EMC_XM2CLKPADCTRL 0x308 100 + #define EMC_XM2COMPPADCTRL 0x30c 101 + #define EMC_XM2VTTGENPADCTRL 0x310 102 + #define EMC_XM2VTTGENPADCTRL2 0x314 103 + #define EMC_XM2QUSEPADCTRL 0x318 104 + #define EMC_DLL_XFORM_DQS0 0x328 105 + #define EMC_DLL_XFORM_DQS1 0x32c 106 + #define EMC_DLL_XFORM_DQS2 0x330 107 + #define EMC_DLL_XFORM_DQS3 0x334 108 + #define EMC_DLL_XFORM_DQS4 0x338 109 + #define EMC_DLL_XFORM_DQS5 0x33c 110 + #define EMC_DLL_XFORM_DQS6 0x340 111 + #define EMC_DLL_XFORM_DQS7 0x344 112 + #define EMC_DLL_XFORM_QUSE0 0x348 113 + #define EMC_DLL_XFORM_QUSE1 0x34c 114 + #define EMC_DLL_XFORM_QUSE2 0x350 115 + #define EMC_DLL_XFORM_QUSE3 0x354 116 + #define EMC_DLL_XFORM_QUSE4 0x358 117 + #define EMC_DLL_XFORM_QUSE5 0x35c 118 + #define EMC_DLL_XFORM_QUSE6 0x360 119 + #define EMC_DLL_XFORM_QUSE7 0x364 120 + #define EMC_DLL_XFORM_DQ0 0x368 121 + #define EMC_DLL_XFORM_DQ1 0x36c 122 + #define EMC_DLL_XFORM_DQ2 0x370 123 + #define EMC_DLL_XFORM_DQ3 0x374 124 + #define 
EMC_DLI_TRIM_TXDQS0 0x3a8 125 + #define EMC_DLI_TRIM_TXDQS1 0x3ac 126 + #define EMC_DLI_TRIM_TXDQS2 0x3b0 127 + #define EMC_DLI_TRIM_TXDQS3 0x3b4 128 + #define EMC_DLI_TRIM_TXDQS4 0x3b8 129 + #define EMC_DLI_TRIM_TXDQS5 0x3bc 130 + #define EMC_DLI_TRIM_TXDQS6 0x3c0 131 + #define EMC_DLI_TRIM_TXDQS7 0x3c4 132 + #define EMC_STALL_THEN_EXE_BEFORE_CLKCHANGE 0x3c8 133 + #define EMC_STALL_THEN_EXE_AFTER_CLKCHANGE 0x3cc 134 + #define EMC_UNSTALL_RW_AFTER_CLKCHANGE 0x3d0 135 + #define EMC_SEL_DPD_CTRL 0x3d8 136 + #define EMC_PRE_REFRESH_REQ_CNT 0x3dc 137 + #define EMC_DYN_SELF_REF_CONTROL 0x3e0 138 + #define EMC_TXSRDLL 0x3e4 139 + 140 + #define EMC_STATUS_TIMING_UPDATE_STALLED BIT(23) 141 + 142 + #define EMC_MODE_SET_DLL_RESET BIT(8) 143 + #define EMC_MODE_SET_LONG_CNT BIT(26) 144 + 145 + #define EMC_SELF_REF_CMD_ENABLED BIT(0) 146 + 147 + #define DRAM_DEV_SEL_ALL (0 << 30) 148 + #define DRAM_DEV_SEL_0 (2 << 30) 149 + #define DRAM_DEV_SEL_1 (1 << 30) 150 + #define DRAM_BROADCAST(num) \ 151 + ((num) > 1 ? 
DRAM_DEV_SEL_ALL : DRAM_DEV_SEL_0) 152 + 153 + #define EMC_ZQ_CAL_CMD BIT(0) 154 + #define EMC_ZQ_CAL_LONG BIT(4) 155 + #define EMC_ZQ_CAL_LONG_CMD_DEV0 \ 156 + (DRAM_DEV_SEL_0 | EMC_ZQ_CAL_LONG | EMC_ZQ_CAL_CMD) 157 + #define EMC_ZQ_CAL_LONG_CMD_DEV1 \ 158 + (DRAM_DEV_SEL_1 | EMC_ZQ_CAL_LONG | EMC_ZQ_CAL_CMD) 159 + 160 + #define EMC_DBG_READ_MUX_ASSEMBLY BIT(0) 161 + #define EMC_DBG_WRITE_MUX_ACTIVE BIT(1) 162 + #define EMC_DBG_FORCE_UPDATE BIT(2) 163 + #define EMC_DBG_CFG_PRIORITY BIT(24) 164 + 165 + #define EMC_CFG5_QUSE_MODE_SHIFT 13 166 + #define EMC_CFG5_QUSE_MODE_MASK (7 << EMC_CFG5_QUSE_MODE_SHIFT) 167 + 168 + #define EMC_CFG5_QUSE_MODE_INTERNAL_LPBK 2 169 + #define EMC_CFG5_QUSE_MODE_PULSE_INTERN 3 170 + 171 + #define EMC_SEL_DPD_CTRL_QUSE_DPD_ENABLE BIT(9) 172 + 173 + #define EMC_XM2COMPPADCTRL_VREF_CAL_ENABLE BIT(10) 174 + 175 + #define EMC_XM2QUSEPADCTRL_IVREF_ENABLE BIT(4) 176 + 177 + #define EMC_XM2DQSPADCTRL2_VREF_ENABLE BIT(5) 178 + #define EMC_XM2DQSPADCTRL3_VREF_ENABLE BIT(5) 179 + 180 + #define EMC_AUTO_CAL_STATUS_ACTIVE BIT(31) 181 + 182 + #define EMC_FBIO_CFG5_DRAM_TYPE_MASK 0x3 183 + 184 + #define EMC_MRS_WAIT_CNT_SHORT_WAIT_MASK 0x3ff 185 + #define EMC_MRS_WAIT_CNT_LONG_WAIT_SHIFT 16 186 + #define EMC_MRS_WAIT_CNT_LONG_WAIT_MASK \ 187 + (0x3ff << EMC_MRS_WAIT_CNT_LONG_WAIT_SHIFT) 188 + 189 + #define EMC_REFCTRL_DEV_SEL_MASK 0x3 190 + #define EMC_REFCTRL_ENABLE BIT(31) 191 + #define EMC_REFCTRL_ENABLE_ALL(num) \ 192 + (((num) > 1 ? 0 : 2) | EMC_REFCTRL_ENABLE) 193 + #define EMC_REFCTRL_DISABLE_ALL(num) ((num) > 1 ? 
0 : 2) 194 + 195 + #define EMC_CFG_PERIODIC_QRST BIT(21) 196 + #define EMC_CFG_DYN_SREF_ENABLE BIT(28) 197 + 198 + #define EMC_CLKCHANGE_REQ_ENABLE BIT(0) 199 + #define EMC_CLKCHANGE_PD_ENABLE BIT(1) 200 + #define EMC_CLKCHANGE_SR_ENABLE BIT(2) 201 + 202 + #define EMC_TIMING_UPDATE BIT(0) 203 + 204 + #define EMC_REFRESH_OVERFLOW_INT BIT(3) 205 + #define EMC_CLKCHANGE_COMPLETE_INT BIT(4) 206 + 207 + enum emc_dram_type { 208 + DRAM_TYPE_DDR3, 209 + DRAM_TYPE_DDR1, 210 + DRAM_TYPE_LPDDR2, 211 + DRAM_TYPE_DDR2, 212 + }; 213 + 214 + enum emc_dll_change { 215 + DLL_CHANGE_NONE, 216 + DLL_CHANGE_ON, 217 + DLL_CHANGE_OFF 218 + }; 219 + 220 + static const u16 emc_timing_registers[] = { 221 + [0] = EMC_RC, 222 + [1] = EMC_RFC, 223 + [2] = EMC_RAS, 224 + [3] = EMC_RP, 225 + [4] = EMC_R2W, 226 + [5] = EMC_W2R, 227 + [6] = EMC_R2P, 228 + [7] = EMC_W2P, 229 + [8] = EMC_RD_RCD, 230 + [9] = EMC_WR_RCD, 231 + [10] = EMC_RRD, 232 + [11] = EMC_REXT, 233 + [12] = EMC_WEXT, 234 + [13] = EMC_WDV, 235 + [14] = EMC_QUSE, 236 + [15] = EMC_QRST, 237 + [16] = EMC_QSAFE, 238 + [17] = EMC_RDV, 239 + [18] = EMC_REFRESH, 240 + [19] = EMC_BURST_REFRESH_NUM, 241 + [20] = EMC_PRE_REFRESH_REQ_CNT, 242 + [21] = EMC_PDEX2WR, 243 + [22] = EMC_PDEX2RD, 244 + [23] = EMC_PCHG2PDEN, 245 + [24] = EMC_ACT2PDEN, 246 + [25] = EMC_AR2PDEN, 247 + [26] = EMC_RW2PDEN, 248 + [27] = EMC_TXSR, 249 + [28] = EMC_TXSRDLL, 250 + [29] = EMC_TCKE, 251 + [30] = EMC_TFAW, 252 + [31] = EMC_TRPAB, 253 + [32] = EMC_TCLKSTABLE, 254 + [33] = EMC_TCLKSTOP, 255 + [34] = EMC_TREFBW, 256 + [35] = EMC_QUSE_EXTRA, 257 + [36] = EMC_FBIO_CFG6, 258 + [37] = EMC_ODT_WRITE, 259 + [38] = EMC_ODT_READ, 260 + [39] = EMC_FBIO_CFG5, 261 + [40] = EMC_CFG_DIG_DLL, 262 + [41] = EMC_CFG_DIG_DLL_PERIOD, 263 + [42] = EMC_DLL_XFORM_DQS0, 264 + [43] = EMC_DLL_XFORM_DQS1, 265 + [44] = EMC_DLL_XFORM_DQS2, 266 + [45] = EMC_DLL_XFORM_DQS3, 267 + [46] = EMC_DLL_XFORM_DQS4, 268 + [47] = EMC_DLL_XFORM_DQS5, 269 + [48] = EMC_DLL_XFORM_DQS6, 270 + [49] = 
EMC_DLL_XFORM_DQS7, 271 + [50] = EMC_DLL_XFORM_QUSE0, 272 + [51] = EMC_DLL_XFORM_QUSE1, 273 + [52] = EMC_DLL_XFORM_QUSE2, 274 + [53] = EMC_DLL_XFORM_QUSE3, 275 + [54] = EMC_DLL_XFORM_QUSE4, 276 + [55] = EMC_DLL_XFORM_QUSE5, 277 + [56] = EMC_DLL_XFORM_QUSE6, 278 + [57] = EMC_DLL_XFORM_QUSE7, 279 + [58] = EMC_DLI_TRIM_TXDQS0, 280 + [59] = EMC_DLI_TRIM_TXDQS1, 281 + [60] = EMC_DLI_TRIM_TXDQS2, 282 + [61] = EMC_DLI_TRIM_TXDQS3, 283 + [62] = EMC_DLI_TRIM_TXDQS4, 284 + [63] = EMC_DLI_TRIM_TXDQS5, 285 + [64] = EMC_DLI_TRIM_TXDQS6, 286 + [65] = EMC_DLI_TRIM_TXDQS7, 287 + [66] = EMC_DLL_XFORM_DQ0, 288 + [67] = EMC_DLL_XFORM_DQ1, 289 + [68] = EMC_DLL_XFORM_DQ2, 290 + [69] = EMC_DLL_XFORM_DQ3, 291 + [70] = EMC_XM2CMDPADCTRL, 292 + [71] = EMC_XM2DQSPADCTRL2, 293 + [72] = EMC_XM2DQPADCTRL2, 294 + [73] = EMC_XM2CLKPADCTRL, 295 + [74] = EMC_XM2COMPPADCTRL, 296 + [75] = EMC_XM2VTTGENPADCTRL, 297 + [76] = EMC_XM2VTTGENPADCTRL2, 298 + [77] = EMC_XM2QUSEPADCTRL, 299 + [78] = EMC_XM2DQSPADCTRL3, 300 + [79] = EMC_CTT_TERM_CTRL, 301 + [80] = EMC_ZCAL_INTERVAL, 302 + [81] = EMC_ZCAL_WAIT_CNT, 303 + [82] = EMC_MRS_WAIT_CNT, 304 + [83] = EMC_AUTO_CAL_CONFIG, 305 + [84] = EMC_CTT, 306 + [85] = EMC_CTT_DURATION, 307 + [86] = EMC_DYN_SELF_REF_CONTROL, 308 + [87] = EMC_FBIO_SPARE, 309 + [88] = EMC_CFG_RSV, 310 + }; 311 + 312 + struct emc_timing { 313 + unsigned long rate; 314 + 315 + u32 data[ARRAY_SIZE(emc_timing_registers)]; 316 + 317 + u32 emc_auto_cal_interval; 318 + u32 emc_mode_1; 319 + u32 emc_mode_2; 320 + u32 emc_mode_reset; 321 + u32 emc_zcal_cnt_long; 322 + bool emc_cfg_periodic_qrst; 323 + bool emc_cfg_dyn_self_ref; 324 + }; 325 + 326 + struct tegra_emc { 327 + struct device *dev; 328 + struct tegra_mc *mc; 329 + struct completion clk_handshake_complete; 330 + struct notifier_block clk_nb; 331 + struct clk *clk; 332 + void __iomem *regs; 333 + unsigned int irq; 334 + 335 + struct emc_timing *timings; 336 + unsigned int num_timings; 337 + 338 + u32 mc_override; 339 + u32 emc_cfg; 
340 + 341 + u32 emc_mode_1; 342 + u32 emc_mode_2; 343 + u32 emc_mode_reset; 344 + 345 + bool vref_cal_toggle : 1; 346 + bool zcal_long : 1; 347 + bool dll_on : 1; 348 + bool prepared : 1; 349 + bool bad_state : 1; 350 + }; 351 + 352 + static irqreturn_t tegra_emc_isr(int irq, void *data) 353 + { 354 + struct tegra_emc *emc = data; 355 + u32 intmask = EMC_REFRESH_OVERFLOW_INT | EMC_CLKCHANGE_COMPLETE_INT; 356 + u32 status; 357 + 358 + status = readl_relaxed(emc->regs + EMC_INTSTATUS) & intmask; 359 + if (!status) 360 + return IRQ_NONE; 361 + 362 + /* notify about EMC-CAR handshake completion */ 363 + if (status & EMC_CLKCHANGE_COMPLETE_INT) 364 + complete(&emc->clk_handshake_complete); 365 + 366 + /* notify about HW problem */ 367 + if (status & EMC_REFRESH_OVERFLOW_INT) 368 + dev_err_ratelimited(emc->dev, 369 + "refresh request overflow timeout\n"); 370 + 371 + /* clear interrupts */ 372 + writel_relaxed(status, emc->regs + EMC_INTSTATUS); 373 + 374 + return IRQ_HANDLED; 375 + } 376 + 377 + static struct emc_timing *emc_find_timing(struct tegra_emc *emc, 378 + unsigned long rate) 379 + { 380 + struct emc_timing *timing = NULL; 381 + unsigned int i; 382 + 383 + for (i = 0; i < emc->num_timings; i++) { 384 + if (emc->timings[i].rate >= rate) { 385 + timing = &emc->timings[i]; 386 + break; 387 + } 388 + } 389 + 390 + if (!timing) { 391 + dev_err(emc->dev, "no timing for rate %lu\n", rate); 392 + return NULL; 393 + } 394 + 395 + return timing; 396 + } 397 + 398 + static bool emc_dqs_preset(struct tegra_emc *emc, struct emc_timing *timing, 399 + bool *schmitt_to_vref) 400 + { 401 + bool preset = false; 402 + u32 val; 403 + 404 + if (timing->data[71] & EMC_XM2DQSPADCTRL2_VREF_ENABLE) { 405 + val = readl_relaxed(emc->regs + EMC_XM2DQSPADCTRL2); 406 + 407 + if (!(val & EMC_XM2DQSPADCTRL2_VREF_ENABLE)) { 408 + val |= EMC_XM2DQSPADCTRL2_VREF_ENABLE; 409 + writel_relaxed(val, emc->regs + EMC_XM2DQSPADCTRL2); 410 + 411 + preset = true; 412 + } 413 + } 414 + 415 + if 
(timing->data[78] & EMC_XM2DQSPADCTRL3_VREF_ENABLE) { 416 + val = readl_relaxed(emc->regs + EMC_XM2DQSPADCTRL3); 417 + 418 + if (!(val & EMC_XM2DQSPADCTRL3_VREF_ENABLE)) { 419 + val |= EMC_XM2DQSPADCTRL3_VREF_ENABLE; 420 + writel_relaxed(val, emc->regs + EMC_XM2DQSPADCTRL3); 421 + 422 + preset = true; 423 + } 424 + } 425 + 426 + if (timing->data[77] & EMC_XM2QUSEPADCTRL_IVREF_ENABLE) { 427 + val = readl_relaxed(emc->regs + EMC_XM2QUSEPADCTRL); 428 + 429 + if (!(val & EMC_XM2QUSEPADCTRL_IVREF_ENABLE)) { 430 + val |= EMC_XM2QUSEPADCTRL_IVREF_ENABLE; 431 + writel_relaxed(val, emc->regs + EMC_XM2QUSEPADCTRL); 432 + 433 + *schmitt_to_vref = true; 434 + preset = true; 435 + } 436 + } 437 + 438 + return preset; 439 + } 440 + 441 + static int emc_seq_update_timing(struct tegra_emc *emc) 442 + { 443 + u32 val; 444 + int err; 445 + 446 + writel_relaxed(EMC_TIMING_UPDATE, emc->regs + EMC_TIMING_CONTROL); 447 + 448 + err = readl_relaxed_poll_timeout_atomic(emc->regs + EMC_STATUS, val, 449 + !(val & EMC_STATUS_TIMING_UPDATE_STALLED), 450 + 1, 200); 451 + if (err) { 452 + dev_err(emc->dev, "failed to update timing: %d\n", err); 453 + return err; 454 + } 455 + 456 + return 0; 457 + } 458 + 459 + static int emc_prepare_mc_clk_cfg(struct tegra_emc *emc, unsigned long rate) 460 + { 461 + struct tegra_mc *mc = emc->mc; 462 + unsigned int misc0_index = 16; 463 + unsigned int i; 464 + bool same; 465 + 466 + for (i = 0; i < mc->num_timings; i++) { 467 + if (mc->timings[i].rate != rate) 468 + continue; 469 + 470 + if (mc->timings[i].emem_data[misc0_index] & BIT(27)) 471 + same = true; 472 + else 473 + same = false; 474 + 475 + return tegra20_clk_prepare_emc_mc_same_freq(emc->clk, same); 476 + } 477 + 478 + return -EINVAL; 479 + } 480 + 481 + static int emc_prepare_timing_change(struct tegra_emc *emc, unsigned long rate) 482 + { 483 + struct emc_timing *timing = emc_find_timing(emc, rate); 484 + enum emc_dll_change dll_change; 485 + enum emc_dram_type dram_type; 486 + bool schmitt_to_vref 
= false; 487 + unsigned int pre_wait = 0; 488 + bool qrst_used = false; 489 + unsigned int dram_num; 490 + unsigned int i; 491 + u32 fbio_cfg5; 492 + u32 emc_dbg; 493 + u32 val; 494 + int err; 495 + 496 + if (!timing || emc->bad_state) 497 + return -EINVAL; 498 + 499 + dev_dbg(emc->dev, "%s: using timing rate %lu for requested rate %lu\n", 500 + __func__, timing->rate, rate); 501 + 502 + emc->bad_state = true; 503 + 504 + err = emc_prepare_mc_clk_cfg(emc, rate); 505 + if (err) { 506 + dev_err(emc->dev, "mc clock preparation failed: %d\n", err); 507 + return err; 508 + } 509 + 510 + emc->vref_cal_toggle = false; 511 + emc->mc_override = mc_readl(emc->mc, MC_EMEM_ARB_OVERRIDE); 512 + emc->emc_cfg = readl_relaxed(emc->regs + EMC_CFG); 513 + emc_dbg = readl_relaxed(emc->regs + EMC_DBG); 514 + 515 + if (emc->dll_on == !!(timing->emc_mode_1 & 0x1)) 516 + dll_change = DLL_CHANGE_NONE; 517 + else if (timing->emc_mode_1 & 0x1) 518 + dll_change = DLL_CHANGE_ON; 519 + else 520 + dll_change = DLL_CHANGE_OFF; 521 + 522 + emc->dll_on = !!(timing->emc_mode_1 & 0x1); 523 + 524 + if (timing->data[80] && !readl_relaxed(emc->regs + EMC_ZCAL_INTERVAL)) 525 + emc->zcal_long = true; 526 + else 527 + emc->zcal_long = false; 528 + 529 + fbio_cfg5 = readl_relaxed(emc->regs + EMC_FBIO_CFG5); 530 + dram_type = fbio_cfg5 & EMC_FBIO_CFG5_DRAM_TYPE_MASK; 531 + 532 + dram_num = tegra_mc_get_emem_device_count(emc->mc); 533 + 534 + /* disable dynamic self-refresh */ 535 + if (emc->emc_cfg & EMC_CFG_DYN_SREF_ENABLE) { 536 + emc->emc_cfg &= ~EMC_CFG_DYN_SREF_ENABLE; 537 + writel_relaxed(emc->emc_cfg, emc->regs + EMC_CFG); 538 + 539 + pre_wait = 5; 540 + } 541 + 542 + /* update MC arbiter settings */ 543 + val = mc_readl(emc->mc, MC_EMEM_ARB_OUTSTANDING_REQ); 544 + if (!(val & MC_EMEM_ARB_OUTSTANDING_REQ_HOLDOFF_OVERRIDE) || 545 + ((val & MC_EMEM_ARB_OUTSTANDING_REQ_MAX_MASK) > 0x50)) { 546 + 547 + val = MC_EMEM_ARB_OUTSTANDING_REQ_LIMIT_ENABLE | 548 + MC_EMEM_ARB_OUTSTANDING_REQ_HOLDOFF_OVERRIDE | 
0x50; 549 + mc_writel(emc->mc, val, MC_EMEM_ARB_OUTSTANDING_REQ); 550 + mc_writel(emc->mc, MC_TIMING_UPDATE, MC_TIMING_CONTROL); 551 + } 552 + 553 + if (emc->mc_override & MC_EMEM_ARB_OVERRIDE_EACK_MASK) 554 + mc_writel(emc->mc, 555 + emc->mc_override & ~MC_EMEM_ARB_OVERRIDE_EACK_MASK, 556 + MC_EMEM_ARB_OVERRIDE); 557 + 558 + /* check DQ/DQS VREF delay */ 559 + if (emc_dqs_preset(emc, timing, &schmitt_to_vref)) { 560 + if (pre_wait < 3) 561 + pre_wait = 3; 562 + } 563 + 564 + if (pre_wait) { 565 + err = emc_seq_update_timing(emc); 566 + if (err) 567 + return err; 568 + 569 + udelay(pre_wait); 570 + } 571 + 572 + /* disable auto-calibration if VREF mode is switching */ 573 + if (timing->emc_auto_cal_interval) { 574 + val = readl_relaxed(emc->regs + EMC_XM2COMPPADCTRL); 575 + val ^= timing->data[74]; 576 + 577 + if (val & EMC_XM2COMPPADCTRL_VREF_CAL_ENABLE) { 578 + writel_relaxed(0, emc->regs + EMC_AUTO_CAL_INTERVAL); 579 + 580 + err = readl_relaxed_poll_timeout_atomic( 581 + emc->regs + EMC_AUTO_CAL_STATUS, val, 582 + !(val & EMC_AUTO_CAL_STATUS_ACTIVE), 1, 300); 583 + if (err) { 584 + dev_err(emc->dev, 585 + "failed to disable auto-cal: %d\n", 586 + err); 587 + return err; 588 + } 589 + 590 + emc->vref_cal_toggle = true; 591 + } 592 + } 593 + 594 + /* program shadow registers */ 595 + for (i = 0; i < ARRAY_SIZE(timing->data); i++) { 596 + /* EMC_XM2CLKPADCTRL should be programmed separately */ 597 + if (i != 73) 598 + writel_relaxed(timing->data[i], 599 + emc->regs + emc_timing_registers[i]); 600 + } 601 + 602 + err = tegra_mc_write_emem_configuration(emc->mc, timing->rate); 603 + if (err) 604 + return err; 605 + 606 + /* DDR3: predict MRS long wait count */ 607 + if (dram_type == DRAM_TYPE_DDR3 && dll_change == DLL_CHANGE_ON) { 608 + u32 cnt = 512; 609 + 610 + if (emc->zcal_long) 611 + cnt -= dram_num * 256; 612 + 613 + val = timing->data[82] & EMC_MRS_WAIT_CNT_SHORT_WAIT_MASK; 614 + if (cnt < val) 615 + cnt = val; 616 + 617 + val = timing->data[82] & 
~EMC_MRS_WAIT_CNT_LONG_WAIT_MASK; 618 + val |= (cnt << EMC_MRS_WAIT_CNT_LONG_WAIT_SHIFT) & 619 + EMC_MRS_WAIT_CNT_LONG_WAIT_MASK; 620 + 621 + writel_relaxed(val, emc->regs + EMC_MRS_WAIT_CNT); 622 + } 623 + 624 + /* disable interrupt since read access is prohibited after stalling */ 625 + disable_irq(emc->irq); 626 + 627 + /* this read also completes the writes */ 628 + val = readl_relaxed(emc->regs + EMC_SEL_DPD_CTRL); 629 + 630 + if (!(val & EMC_SEL_DPD_CTRL_QUSE_DPD_ENABLE) && schmitt_to_vref) { 631 + u32 cur_mode, new_mode; 632 + 633 + cur_mode = fbio_cfg5 & EMC_CFG5_QUSE_MODE_MASK; 634 + cur_mode >>= EMC_CFG5_QUSE_MODE_SHIFT; 635 + 636 + new_mode = timing->data[39] & EMC_CFG5_QUSE_MODE_MASK; 637 + new_mode >>= EMC_CFG5_QUSE_MODE_SHIFT; 638 + 639 + if ((cur_mode != EMC_CFG5_QUSE_MODE_PULSE_INTERN && 640 + cur_mode != EMC_CFG5_QUSE_MODE_INTERNAL_LPBK) || 641 + (new_mode != EMC_CFG5_QUSE_MODE_PULSE_INTERN && 642 + new_mode != EMC_CFG5_QUSE_MODE_INTERNAL_LPBK)) 643 + qrst_used = true; 644 + } 645 + 646 + /* flow control marker 1 */ 647 + writel_relaxed(0x1, emc->regs + EMC_STALL_THEN_EXE_BEFORE_CLKCHANGE); 648 + 649 + /* enable periodic reset */ 650 + if (qrst_used) { 651 + writel_relaxed(emc_dbg | EMC_DBG_WRITE_MUX_ACTIVE, 652 + emc->regs + EMC_DBG); 653 + writel_relaxed(emc->emc_cfg | EMC_CFG_PERIODIC_QRST, 654 + emc->regs + EMC_CFG); 655 + writel_relaxed(emc_dbg, emc->regs + EMC_DBG); 656 + } 657 + 658 + /* disable auto-refresh to save time after clock change */ 659 + writel_relaxed(EMC_REFCTRL_DISABLE_ALL(dram_num), 660 + emc->regs + EMC_REFCTRL); 661 + 662 + /* turn off DLL and enter self-refresh on DDR3 */ 663 + if (dram_type == DRAM_TYPE_DDR3) { 664 + if (dll_change == DLL_CHANGE_OFF) 665 + writel_relaxed(timing->emc_mode_1, 666 + emc->regs + EMC_EMRS); 667 + 668 + writel_relaxed(DRAM_BROADCAST(dram_num) | 669 + EMC_SELF_REF_CMD_ENABLED, 670 + emc->regs + EMC_SELF_REF); 671 + } 672 + 673 + /* flow control marker 2 */ 674 + writel_relaxed(0x1, emc->regs + 
EMC_STALL_THEN_EXE_AFTER_CLKCHANGE); 675 + 676 + /* enable write-active MUX, update unshadowed pad control */ 677 + writel_relaxed(emc_dbg | EMC_DBG_WRITE_MUX_ACTIVE, emc->regs + EMC_DBG); 678 + writel_relaxed(timing->data[73], emc->regs + EMC_XM2CLKPADCTRL); 679 + 680 + /* restore periodic QRST and disable write-active MUX */ 681 + val = !!(emc->emc_cfg & EMC_CFG_PERIODIC_QRST); 682 + if (qrst_used || timing->emc_cfg_periodic_qrst != val) { 683 + if (timing->emc_cfg_periodic_qrst) 684 + emc->emc_cfg |= EMC_CFG_PERIODIC_QRST; 685 + else 686 + emc->emc_cfg &= ~EMC_CFG_PERIODIC_QRST; 687 + 688 + writel_relaxed(emc->emc_cfg, emc->regs + EMC_CFG); 689 + } 690 + writel_relaxed(emc_dbg, emc->regs + EMC_DBG); 691 + 692 + /* exit self-refresh on DDR3 */ 693 + if (dram_type == DRAM_TYPE_DDR3) 694 + writel_relaxed(DRAM_BROADCAST(dram_num), 695 + emc->regs + EMC_SELF_REF); 696 + 697 + /* set DRAM-mode registers */ 698 + if (dram_type == DRAM_TYPE_DDR3) { 699 + if (timing->emc_mode_1 != emc->emc_mode_1) 700 + writel_relaxed(timing->emc_mode_1, 701 + emc->regs + EMC_EMRS); 702 + 703 + if (timing->emc_mode_2 != emc->emc_mode_2) 704 + writel_relaxed(timing->emc_mode_2, 705 + emc->regs + EMC_EMRS); 706 + 707 + if (timing->emc_mode_reset != emc->emc_mode_reset || 708 + dll_change == DLL_CHANGE_ON) { 709 + val = timing->emc_mode_reset; 710 + if (dll_change == DLL_CHANGE_ON) { 711 + val |= EMC_MODE_SET_DLL_RESET; 712 + val |= EMC_MODE_SET_LONG_CNT; 713 + } else { 714 + val &= ~EMC_MODE_SET_DLL_RESET; 715 + } 716 + writel_relaxed(val, emc->regs + EMC_MRS); 717 + } 718 + } else { 719 + if (timing->emc_mode_2 != emc->emc_mode_2) 720 + writel_relaxed(timing->emc_mode_2, 721 + emc->regs + EMC_MRW); 722 + 723 + if (timing->emc_mode_1 != emc->emc_mode_1) 724 + writel_relaxed(timing->emc_mode_1, 725 + emc->regs + EMC_MRW); 726 + } 727 + 728 + emc->emc_mode_1 = timing->emc_mode_1; 729 + emc->emc_mode_2 = timing->emc_mode_2; 730 + emc->emc_mode_reset = timing->emc_mode_reset; 731 + 732 + /* 
issue ZCAL command if turning ZCAL on */ 733 + if (emc->zcal_long) { 734 + writel_relaxed(EMC_ZQ_CAL_LONG_CMD_DEV0, 735 + emc->regs + EMC_ZQ_CAL); 736 + 737 + if (dram_num > 1) 738 + writel_relaxed(EMC_ZQ_CAL_LONG_CMD_DEV1, 739 + emc->regs + EMC_ZQ_CAL); 740 + } 741 + 742 + /* re-enable auto-refresh */ 743 + writel_relaxed(EMC_REFCTRL_ENABLE_ALL(dram_num), 744 + emc->regs + EMC_REFCTRL); 745 + 746 + /* flow control marker 3 */ 747 + writel_relaxed(0x1, emc->regs + EMC_UNSTALL_RW_AFTER_CLKCHANGE); 748 + 749 + reinit_completion(&emc->clk_handshake_complete); 750 + 751 + /* interrupt can be re-enabled now */ 752 + enable_irq(emc->irq); 753 + 754 + emc->bad_state = false; 755 + emc->prepared = true; 756 + 757 + return 0; 758 + } 759 + 760 + static int emc_complete_timing_change(struct tegra_emc *emc, 761 + unsigned long rate) 762 + { 763 + struct emc_timing *timing = emc_find_timing(emc, rate); 764 + unsigned long timeout; 765 + int ret; 766 + 767 + timeout = wait_for_completion_timeout(&emc->clk_handshake_complete, 768 + msecs_to_jiffies(100)); 769 + if (timeout == 0) { 770 + dev_err(emc->dev, "emc-car handshake failed\n"); 771 + emc->bad_state = true; 772 + return -EIO; 773 + } 774 + 775 + /* restore auto-calibration */ 776 + if (emc->vref_cal_toggle) 777 + writel_relaxed(timing->emc_auto_cal_interval, 778 + emc->regs + EMC_AUTO_CAL_INTERVAL); 779 + 780 + /* restore dynamic self-refresh */ 781 + if (timing->emc_cfg_dyn_self_ref) { 782 + emc->emc_cfg |= EMC_CFG_DYN_SREF_ENABLE; 783 + writel_relaxed(emc->emc_cfg, emc->regs + EMC_CFG); 784 + } 785 + 786 + /* set number of clocks to wait after each ZQ command */ 787 + if (emc->zcal_long) 788 + writel_relaxed(timing->emc_zcal_cnt_long, 789 + emc->regs + EMC_ZCAL_WAIT_CNT); 790 + 791 + udelay(2); 792 + /* update restored timing */ 793 + ret = emc_seq_update_timing(emc); 794 + if (ret) 795 + emc->bad_state = true; 796 + 797 + /* restore early ACK */ 798 + mc_writel(emc->mc, emc->mc_override, MC_EMEM_ARB_OVERRIDE); 799 + 800 
+ emc->prepared = false; 801 + 802 + return ret; 803 + } 804 + 805 + static int emc_unprepare_timing_change(struct tegra_emc *emc, 806 + unsigned long rate) 807 + { 808 + if (emc->prepared && !emc->bad_state) { 809 + /* shouldn't ever happen in practice */ 810 + dev_err(emc->dev, "timing configuration can't be reverted\n"); 811 + emc->bad_state = true; 812 + } 813 + 814 + return 0; 815 + } 816 + 817 + static int emc_clk_change_notify(struct notifier_block *nb, 818 + unsigned long msg, void *data) 819 + { 820 + struct tegra_emc *emc = container_of(nb, struct tegra_emc, clk_nb); 821 + struct clk_notifier_data *cnd = data; 822 + int err; 823 + 824 + switch (msg) { 825 + case PRE_RATE_CHANGE: 826 + err = emc_prepare_timing_change(emc, cnd->new_rate); 827 + break; 828 + 829 + case ABORT_RATE_CHANGE: 830 + err = emc_unprepare_timing_change(emc, cnd->old_rate); 831 + break; 832 + 833 + case POST_RATE_CHANGE: 834 + err = emc_complete_timing_change(emc, cnd->new_rate); 835 + break; 836 + 837 + default: 838 + return NOTIFY_DONE; 839 + } 840 + 841 + return notifier_from_errno(err); 842 + } 843 + 844 + static int load_one_timing_from_dt(struct tegra_emc *emc, 845 + struct emc_timing *timing, 846 + struct device_node *node) 847 + { 848 + u32 value; 849 + int err; 850 + 851 + err = of_property_read_u32(node, "clock-frequency", &value); 852 + if (err) { 853 + dev_err(emc->dev, "timing %pOF: failed to read rate: %d\n", 854 + node, err); 855 + return err; 856 + } 857 + 858 + timing->rate = value; 859 + 860 + err = of_property_read_u32_array(node, "nvidia,emc-configuration", 861 + timing->data, 862 + ARRAY_SIZE(emc_timing_registers)); 863 + if (err) { 864 + dev_err(emc->dev, 865 + "timing %pOF: failed to read emc timing data: %d\n", 866 + node, err); 867 + return err; 868 + } 869 + 870 + #define EMC_READ_BOOL(prop, dtprop) \ 871 + timing->prop = of_property_read_bool(node, dtprop); 872 + 873 + #define EMC_READ_U32(prop, dtprop) \ 874 + err = of_property_read_u32(node, dtprop, 
&timing->prop); \ 875 + if (err) { \ 876 + dev_err(emc->dev, \ 877 + "timing %pOFn: failed to read " #prop ": %d\n", \ 878 + node, err); \ 879 + return err; \ 880 + } 881 + 882 + EMC_READ_U32(emc_auto_cal_interval, "nvidia,emc-auto-cal-interval") 883 + EMC_READ_U32(emc_mode_1, "nvidia,emc-mode-1") 884 + EMC_READ_U32(emc_mode_2, "nvidia,emc-mode-2") 885 + EMC_READ_U32(emc_mode_reset, "nvidia,emc-mode-reset") 886 + EMC_READ_U32(emc_zcal_cnt_long, "nvidia,emc-zcal-cnt-long") 887 + EMC_READ_BOOL(emc_cfg_dyn_self_ref, "nvidia,emc-cfg-dyn-self-ref") 888 + EMC_READ_BOOL(emc_cfg_periodic_qrst, "nvidia,emc-cfg-periodic-qrst") 889 + 890 + #undef EMC_READ_U32 891 + #undef EMC_READ_BOOL 892 + 893 + dev_dbg(emc->dev, "%s: %pOF: rate %lu\n", __func__, node, timing->rate); 894 + 895 + return 0; 896 + } 897 + 898 + static int cmp_timings(const void *_a, const void *_b) 899 + { 900 + const struct emc_timing *a = _a; 901 + const struct emc_timing *b = _b; 902 + 903 + if (a->rate < b->rate) 904 + return -1; 905 + 906 + if (a->rate > b->rate) 907 + return 1; 908 + 909 + return 0; 910 + } 911 + 912 + static int emc_check_mc_timings(struct tegra_emc *emc) 913 + { 914 + struct tegra_mc *mc = emc->mc; 915 + unsigned int i; 916 + 917 + if (emc->num_timings != mc->num_timings) { 918 + dev_err(emc->dev, "emc/mc timings number mismatch: %u %u\n", 919 + emc->num_timings, mc->num_timings); 920 + return -EINVAL; 921 + } 922 + 923 + for (i = 0; i < mc->num_timings; i++) { 924 + if (emc->timings[i].rate != mc->timings[i].rate) { 925 + dev_err(emc->dev, 926 + "emc/mc timing rate mismatch: %lu %lu\n", 927 + emc->timings[i].rate, mc->timings[i].rate); 928 + return -EINVAL; 929 + } 930 + } 931 + 932 + return 0; 933 + } 934 + 935 + static int emc_load_timings_from_dt(struct tegra_emc *emc, 936 + struct device_node *node) 937 + { 938 + struct device_node *child; 939 + struct emc_timing *timing; 940 + int child_count; 941 + int err; 942 + 943 + child_count = of_get_child_count(node); 944 + if 
(!child_count) { 945 + dev_err(emc->dev, "no memory timings in: %pOF\n", node); 946 + return -EINVAL; 947 + } 948 + 949 + emc->timings = devm_kcalloc(emc->dev, child_count, sizeof(*timing), 950 + GFP_KERNEL); 951 + if (!emc->timings) 952 + return -ENOMEM; 953 + 954 + emc->num_timings = child_count; 955 + timing = emc->timings; 956 + 957 + for_each_child_of_node(node, child) { 958 + err = load_one_timing_from_dt(emc, timing++, child); 959 + if (err) { 960 + of_node_put(child); 961 + return err; 962 + } 963 + } 964 + 965 + sort(emc->timings, emc->num_timings, sizeof(*timing), cmp_timings, 966 + NULL); 967 + 968 + err = emc_check_mc_timings(emc); 969 + if (err) 970 + return err; 971 + 972 + dev_info(emc->dev, 973 + "got %u timings for RAM code %u (min %luMHz max %luMHz)\n", 974 + emc->num_timings, 975 + tegra_read_ram_code(), 976 + emc->timings[0].rate / 1000000, 977 + emc->timings[emc->num_timings - 1].rate / 1000000); 978 + 979 + return 0; 980 + } 981 + 982 + static struct device_node *emc_find_node_by_ram_code(struct device *dev) 983 + { 984 + struct device_node *np; 985 + u32 value, ram_code; 986 + int err; 987 + 988 + ram_code = tegra_read_ram_code(); 989 + 990 + for_each_child_of_node(dev->of_node, np) { 991 + err = of_property_read_u32(np, "nvidia,ram-code", &value); 992 + if (err || value != ram_code) 993 + continue; 994 + 995 + return np; 996 + } 997 + 998 + dev_err(dev, "no memory timings for RAM code %u found in device-tree\n", 999 + ram_code); 1000 + 1001 + return NULL; 1002 + } 1003 + 1004 + static int emc_setup_hw(struct tegra_emc *emc) 1005 + { 1006 + u32 intmask = EMC_REFRESH_OVERFLOW_INT | EMC_CLKCHANGE_COMPLETE_INT; 1007 + u32 fbio_cfg5, emc_cfg, emc_dbg; 1008 + enum emc_dram_type dram_type; 1009 + 1010 + fbio_cfg5 = readl_relaxed(emc->regs + EMC_FBIO_CFG5); 1011 + dram_type = fbio_cfg5 & EMC_FBIO_CFG5_DRAM_TYPE_MASK; 1012 + 1013 + emc_cfg = readl_relaxed(emc->regs + EMC_CFG_2); 1014 + 1015 + /* enable EMC and CAR to handshake on PLL divider/source 
changes */ 1016 + emc_cfg |= EMC_CLKCHANGE_REQ_ENABLE; 1017 + 1018 + /* configure clock change mode accordingly to DRAM type */ 1019 + switch (dram_type) { 1020 + case DRAM_TYPE_LPDDR2: 1021 + emc_cfg |= EMC_CLKCHANGE_PD_ENABLE; 1022 + emc_cfg &= ~EMC_CLKCHANGE_SR_ENABLE; 1023 + break; 1024 + 1025 + default: 1026 + emc_cfg &= ~EMC_CLKCHANGE_SR_ENABLE; 1027 + emc_cfg &= ~EMC_CLKCHANGE_PD_ENABLE; 1028 + break; 1029 + } 1030 + 1031 + writel_relaxed(emc_cfg, emc->regs + EMC_CFG_2); 1032 + 1033 + /* initialize interrupt */ 1034 + writel_relaxed(intmask, emc->regs + EMC_INTMASK); 1035 + writel_relaxed(0xffffffff, emc->regs + EMC_INTSTATUS); 1036 + 1037 + /* ensure that unwanted debug features are disabled */ 1038 + emc_dbg = readl_relaxed(emc->regs + EMC_DBG); 1039 + emc_dbg |= EMC_DBG_CFG_PRIORITY; 1040 + emc_dbg &= ~EMC_DBG_READ_MUX_ASSEMBLY; 1041 + emc_dbg &= ~EMC_DBG_WRITE_MUX_ACTIVE; 1042 + emc_dbg &= ~EMC_DBG_FORCE_UPDATE; 1043 + writel_relaxed(emc_dbg, emc->regs + EMC_DBG); 1044 + 1045 + return 0; 1046 + } 1047 + 1048 + static long emc_round_rate(unsigned long rate, 1049 + unsigned long min_rate, 1050 + unsigned long max_rate, 1051 + void *arg) 1052 + { 1053 + struct emc_timing *timing = NULL; 1054 + struct tegra_emc *emc = arg; 1055 + unsigned int i; 1056 + 1057 + min_rate = min(min_rate, emc->timings[emc->num_timings - 1].rate); 1058 + 1059 + for (i = 0; i < emc->num_timings; i++) { 1060 + if (emc->timings[i].rate < rate && i != emc->num_timings - 1) 1061 + continue; 1062 + 1063 + if (emc->timings[i].rate > max_rate) { 1064 + i = max(i, 1u) - 1; 1065 + 1066 + if (emc->timings[i].rate < min_rate) 1067 + break; 1068 + } 1069 + 1070 + if (emc->timings[i].rate < min_rate) 1071 + continue; 1072 + 1073 + timing = &emc->timings[i]; 1074 + break; 1075 + } 1076 + 1077 + if (!timing) { 1078 + dev_err(emc->dev, "no timing for rate %lu min %lu max %lu\n", 1079 + rate, min_rate, max_rate); 1080 + return -EINVAL; 1081 + } 1082 + 1083 + return timing->rate; 1084 + } 1085 + 
1086 + static int tegra_emc_probe(struct platform_device *pdev) 1087 + { 1088 + struct platform_device *mc; 1089 + struct device_node *np; 1090 + struct tegra_emc *emc; 1091 + int err; 1092 + 1093 + if (of_get_child_count(pdev->dev.of_node) == 0) { 1094 + dev_info(&pdev->dev, 1095 + "device-tree node doesn't have memory timings\n"); 1096 + return 0; 1097 + } 1098 + 1099 + np = of_parse_phandle(pdev->dev.of_node, "nvidia,memory-controller", 0); 1100 + if (!np) { 1101 + dev_err(&pdev->dev, "could not get memory controller node\n"); 1102 + return -ENOENT; 1103 + } 1104 + 1105 + mc = of_find_device_by_node(np); 1106 + of_node_put(np); 1107 + if (!mc) 1108 + return -ENOENT; 1109 + 1110 + np = emc_find_node_by_ram_code(&pdev->dev); 1111 + if (!np) 1112 + return -EINVAL; 1113 + 1114 + emc = devm_kzalloc(&pdev->dev, sizeof(*emc), GFP_KERNEL); 1115 + if (!emc) { 1116 + of_node_put(np); 1117 + return -ENOMEM; 1118 + } 1119 + 1120 + emc->mc = platform_get_drvdata(mc); 1121 + if (!emc->mc) 1122 + return -EPROBE_DEFER; 1123 + 1124 + init_completion(&emc->clk_handshake_complete); 1125 + emc->clk_nb.notifier_call = emc_clk_change_notify; 1126 + emc->dev = &pdev->dev; 1127 + 1128 + err = emc_load_timings_from_dt(emc, np); 1129 + of_node_put(np); 1130 + if (err) 1131 + return err; 1132 + 1133 + emc->regs = devm_platform_ioremap_resource(pdev, 0); 1134 + if (IS_ERR(emc->regs)) 1135 + return PTR_ERR(emc->regs); 1136 + 1137 + err = emc_setup_hw(emc); 1138 + if (err) 1139 + return err; 1140 + 1141 + err = platform_get_irq(pdev, 0); 1142 + if (err < 0) { 1143 + dev_err(&pdev->dev, "interrupt not specified: %d\n", err); 1144 + return err; 1145 + } 1146 + emc->irq = err; 1147 + 1148 + err = devm_request_irq(&pdev->dev, emc->irq, tegra_emc_isr, 0, 1149 + dev_name(&pdev->dev), emc); 1150 + if (err) { 1151 + dev_err(&pdev->dev, "failed to request irq: %d\n", err); 1152 + return err; 1153 + } 1154 + 1155 + tegra20_clk_set_emc_round_callback(emc_round_rate, emc); 1156 + 1157 + emc->clk = 
devm_clk_get(&pdev->dev, "emc"); 1158 + if (IS_ERR(emc->clk)) { 1159 + err = PTR_ERR(emc->clk); 1160 + dev_err(&pdev->dev, "failed to get emc clock: %d\n", err); 1161 + goto unset_cb; 1162 + } 1163 + 1164 + err = clk_notifier_register(emc->clk, &emc->clk_nb); 1165 + if (err) { 1166 + dev_err(&pdev->dev, "failed to register clk notifier: %d\n", 1167 + err); 1168 + goto unset_cb; 1169 + } 1170 + 1171 + platform_set_drvdata(pdev, emc); 1172 + 1173 + return 0; 1174 + 1175 + unset_cb: 1176 + tegra20_clk_set_emc_round_callback(NULL, NULL); 1177 + 1178 + return err; 1179 + } 1180 + 1181 + static int tegra_emc_suspend(struct device *dev) 1182 + { 1183 + struct tegra_emc *emc = dev_get_drvdata(dev); 1184 + 1185 + /* 1186 + * Suspending in a bad state will hang machine. The "prepared" var 1187 + * shall be always false here unless it's a kernel bug that caused 1188 + * suspending in a wrong order. 1189 + */ 1190 + if (WARN_ON(emc->prepared) || emc->bad_state) 1191 + return -EINVAL; 1192 + 1193 + emc->bad_state = true; 1194 + 1195 + return 0; 1196 + } 1197 + 1198 + static int tegra_emc_resume(struct device *dev) 1199 + { 1200 + struct tegra_emc *emc = dev_get_drvdata(dev); 1201 + 1202 + emc_setup_hw(emc); 1203 + emc->bad_state = false; 1204 + 1205 + return 0; 1206 + } 1207 + 1208 + static const struct dev_pm_ops tegra_emc_pm_ops = { 1209 + .suspend = tegra_emc_suspend, 1210 + .resume = tegra_emc_resume, 1211 + }; 1212 + 1213 + static const struct of_device_id tegra_emc_of_match[] = { 1214 + { .compatible = "nvidia,tegra30-emc", }, 1215 + {}, 1216 + }; 1217 + 1218 + static struct platform_driver tegra_emc_driver = { 1219 + .probe = tegra_emc_probe, 1220 + .driver = { 1221 + .name = "tegra30-emc", 1222 + .of_match_table = tegra_emc_of_match, 1223 + .pm = &tegra_emc_pm_ops, 1224 + .suppress_bind_attrs = true, 1225 + }, 1226 + }; 1227 + 1228 + static int __init tegra_emc_init(void) 1229 + { 1230 + return platform_driver_register(&tegra_emc_driver); 1231 + } 1232 + 
subsys_initcall(tegra_emc_init);
+30 -4
drivers/memory/tegra/tegra30.c
··· 10 10 11 11 #include "mc.h" 12 12 13 + static const unsigned long tegra30_mc_emem_regs[] = { 14 + MC_EMEM_ARB_CFG, 15 + MC_EMEM_ARB_OUTSTANDING_REQ, 16 + MC_EMEM_ARB_TIMING_RCD, 17 + MC_EMEM_ARB_TIMING_RP, 18 + MC_EMEM_ARB_TIMING_RC, 19 + MC_EMEM_ARB_TIMING_RAS, 20 + MC_EMEM_ARB_TIMING_FAW, 21 + MC_EMEM_ARB_TIMING_RRD, 22 + MC_EMEM_ARB_TIMING_RAP2PRE, 23 + MC_EMEM_ARB_TIMING_WAP2PRE, 24 + MC_EMEM_ARB_TIMING_R2R, 25 + MC_EMEM_ARB_TIMING_W2W, 26 + MC_EMEM_ARB_TIMING_R2W, 27 + MC_EMEM_ARB_TIMING_W2R, 28 + MC_EMEM_ARB_DA_TURNS, 29 + MC_EMEM_ARB_DA_COVERS, 30 + MC_EMEM_ARB_MISC0, 31 + MC_EMEM_ARB_RING1_THROTTLE, 32 + }; 33 + 13 34 static const struct tegra_mc_client tegra30_mc_clients[] = { 14 35 { 15 36 .id = 0x00, ··· 952 931 { .name = "isp", .swgroup = TEGRA_SWGROUP_ISP, .reg = 0x258 }, 953 932 }; 954 933 955 - static const unsigned int tegra30_group_display[] = { 934 + static const unsigned int tegra30_group_drm[] = { 956 935 TEGRA_SWGROUP_DC, 957 936 TEGRA_SWGROUP_DCB, 937 + TEGRA_SWGROUP_G2, 938 + TEGRA_SWGROUP_NV, 939 + TEGRA_SWGROUP_NV2, 958 940 }; 959 941 960 942 static const struct tegra_smmu_group_soc tegra30_groups[] = { 961 943 { 962 - .name = "display", 963 - .swgroups = tegra30_group_display, 964 - .num_swgroups = ARRAY_SIZE(tegra30_group_display), 944 + .name = "drm", 945 + .swgroups = tegra30_group_drm, 946 + .num_swgroups = ARRAY_SIZE(tegra30_group_drm), 965 947 }, 966 948 }; 967 949 ··· 1018 994 .atom_size = 16, 1019 995 .client_id_mask = 0x7f, 1020 996 .smmu = &tegra30_smmu_soc, 997 + .emem_regs = tegra30_mc_emem_regs, 998 + .num_emem_regs = ARRAY_SIZE(tegra30_mc_emem_regs), 1021 999 .intmask = MC_INT_INVALID_SMMU_PAGE | MC_INT_SECURITY_VIOLATION | 1022 1000 MC_INT_DECERR_EMEM, 1023 1001 .reset_ops = &tegra_mc_reset_ops_common,
+21 -3
drivers/nvmem/meson-efuse.c
··· 17 17 static int meson_efuse_read(void *context, unsigned int offset, 18 18 void *val, size_t bytes) 19 19 { 20 - return meson_sm_call_read((u8 *)val, bytes, SM_EFUSE_READ, offset, 20 + struct meson_sm_firmware *fw = context; 21 + 22 + return meson_sm_call_read(fw, (u8 *)val, bytes, SM_EFUSE_READ, offset, 21 23 bytes, 0, 0, 0); 22 24 } 23 25 24 26 static int meson_efuse_write(void *context, unsigned int offset, 25 27 void *val, size_t bytes) 26 28 { 27 - return meson_sm_call_write((u8 *)val, bytes, SM_EFUSE_WRITE, offset, 29 + struct meson_sm_firmware *fw = context; 30 + 31 + return meson_sm_call_write(fw, (u8 *)val, bytes, SM_EFUSE_WRITE, offset, 28 32 bytes, 0, 0, 0); 29 33 } 30 34 ··· 41 37 static int meson_efuse_probe(struct platform_device *pdev) 42 38 { 43 39 struct device *dev = &pdev->dev; 40 + struct meson_sm_firmware *fw; 41 + struct device_node *sm_np; 44 42 struct nvmem_device *nvmem; 45 43 struct nvmem_config *econfig; 46 44 struct clk *clk; 47 45 unsigned int size; 48 46 int ret; 47 + 48 + sm_np = of_parse_phandle(pdev->dev.of_node, "secure-monitor", 0); 49 + if (!sm_np) { 50 + dev_err(&pdev->dev, "no secure-monitor node\n"); 51 + return -ENODEV; 52 + } 53 + 54 + fw = meson_sm_get(sm_np); 55 + of_node_put(sm_np); 56 + if (!fw) 57 + return -EPROBE_DEFER; 49 58 50 59 clk = devm_clk_get(dev, NULL); 51 60 if (IS_ERR(clk)) { ··· 82 65 return ret; 83 66 } 84 67 85 - if (meson_sm_call(SM_EFUSE_USER_MAX, &size, 0, 0, 0, 0, 0) < 0) { 68 + if (meson_sm_call(fw, SM_EFUSE_USER_MAX, &size, 0, 0, 0, 0, 0) < 0) { 86 69 dev_err(dev, "failed to get max user"); 87 70 return -EINVAL; 88 71 } ··· 98 81 econfig->reg_read = meson_efuse_read; 99 82 econfig->reg_write = meson_efuse_write; 100 83 econfig->size = size; 84 + econfig->priv = fw; 101 85 102 86 nvmem = devm_nvmem_register(&pdev->dev, econfig); 103 87
+11
drivers/phy/marvell/Kconfig
··· 103 103 The PHY driver will be used by Marvell udc/ehci/otg driver. 104 104 105 105 To compile this driver as a module, choose M here. 106 + 107 + config PHY_MMP3_USB 108 + tristate "Marvell MMP3 USB PHY Driver" 109 + depends on MACH_MMP3_DT || COMPILE_TEST 110 + select GENERIC_PHY 111 + help 112 + Enable this to support Marvell MMP3 USB PHY driver for Marvell 113 + SoC. This driver will do the PHY initialization and shutdown. 114 + The PHY driver will be used by Marvell udc/ehci/otg driver. 115 + 116 + To compile this driver as a module, choose M here.
+1
drivers/phy/marvell/Makefile
··· 2 2 obj-$(CONFIG_ARMADA375_USBCLUSTER_PHY) += phy-armada375-usb2.o 3 3 obj-$(CONFIG_PHY_BERLIN_SATA) += phy-berlin-sata.o 4 4 obj-$(CONFIG_PHY_BERLIN_USB) += phy-berlin-usb.o 5 + obj-$(CONFIG_PHY_MMP3_USB) += phy-mmp3-usb.o 5 6 obj-$(CONFIG_PHY_MVEBU_A3700_COMPHY) += phy-mvebu-a3700-comphy.o 6 7 obj-$(CONFIG_PHY_MVEBU_A3700_UTMI) += phy-mvebu-a3700-utmi.o 7 8 obj-$(CONFIG_PHY_MVEBU_A38X_COMPHY) += phy-armada38x-comphy.o
+291
drivers/phy/marvell/phy-mmp3-usb.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Copyright (C) 2011 Marvell International Ltd. All rights reserved. 4 + * Copyright (C) 2018,2019 Lubomir Rintel <lkundrak@v3.sk> 5 + */ 6 + 7 + #include <linux/delay.h> 8 + #include <linux/io.h> 9 + #include <linux/module.h> 10 + #include <linux/phy/phy.h> 11 + #include <linux/platform_device.h> 12 + #include <linux/soc/mmp/cputype.h> 13 + 14 + #define USB2_PLL_REG0 0x4 15 + #define USB2_PLL_REG1 0x8 16 + #define USB2_TX_REG0 0x10 17 + #define USB2_TX_REG1 0x14 18 + #define USB2_TX_REG2 0x18 19 + #define USB2_RX_REG0 0x20 20 + #define USB2_RX_REG1 0x24 21 + #define USB2_RX_REG2 0x28 22 + #define USB2_ANA_REG0 0x30 23 + #define USB2_ANA_REG1 0x34 24 + #define USB2_ANA_REG2 0x38 25 + #define USB2_DIG_REG0 0x3C 26 + #define USB2_DIG_REG1 0x40 27 + #define USB2_DIG_REG2 0x44 28 + #define USB2_DIG_REG3 0x48 29 + #define USB2_TEST_REG0 0x4C 30 + #define USB2_TEST_REG1 0x50 31 + #define USB2_TEST_REG2 0x54 32 + #define USB2_CHARGER_REG0 0x58 33 + #define USB2_OTG_REG0 0x5C 34 + #define USB2_PHY_MON0 0x60 35 + #define USB2_RESETVE_REG0 0x64 36 + #define USB2_ICID_REG0 0x78 37 + #define USB2_ICID_REG1 0x7C 38 + 39 + /* USB2_PLL_REG0 */ 40 + 41 + /* This is for Ax stepping */ 42 + #define USB2_PLL_FBDIV_SHIFT_MMP3 0 43 + #define USB2_PLL_FBDIV_MASK_MMP3 (0xFF << 0) 44 + 45 + #define USB2_PLL_REFDIV_SHIFT_MMP3 8 46 + #define USB2_PLL_REFDIV_MASK_MMP3 (0xF << 8) 47 + 48 + #define USB2_PLL_VDD12_SHIFT_MMP3 12 49 + #define USB2_PLL_VDD18_SHIFT_MMP3 14 50 + 51 + /* This is for B0 stepping */ 52 + #define USB2_PLL_FBDIV_SHIFT_MMP3_B0 0 53 + #define USB2_PLL_REFDIV_SHIFT_MMP3_B0 9 54 + #define USB2_PLL_VDD18_SHIFT_MMP3_B0 14 55 + #define USB2_PLL_FBDIV_MASK_MMP3_B0 0x01FF 56 + #define USB2_PLL_REFDIV_MASK_MMP3_B0 0x3E00 57 + 58 + #define USB2_PLL_CAL12_SHIFT_MMP3 0 59 + #define USB2_PLL_CALI12_MASK_MMP3 (0x3 << 0) 60 + 61 + #define USB2_PLL_VCOCAL_START_SHIFT_MMP3 2 62 + 63 + #define USB2_PLL_KVCO_SHIFT_MMP3 4 64 + #define 
USB2_PLL_KVCO_MASK_MMP3 (0x7<<4) 65 + 66 + #define USB2_PLL_ICP_SHIFT_MMP3 8 67 + #define USB2_PLL_ICP_MASK_MMP3 (0x7<<8) 68 + 69 + #define USB2_PLL_LOCK_BYPASS_SHIFT_MMP3 12 70 + 71 + #define USB2_PLL_PU_PLL_SHIFT_MMP3 13 72 + #define USB2_PLL_PU_PLL_MASK (0x1 << 13) 73 + 74 + #define USB2_PLL_READY_MASK_MMP3 (0x1 << 15) 75 + 76 + /* USB2_TX_REG0 */ 77 + #define USB2_TX_IMPCAL_VTH_SHIFT_MMP3 8 78 + #define USB2_TX_IMPCAL_VTH_MASK_MMP3 (0x7 << 8) 79 + 80 + #define USB2_TX_RCAL_START_SHIFT_MMP3 13 81 + 82 + /* USB2_TX_REG1 */ 83 + #define USB2_TX_CK60_PHSEL_SHIFT_MMP3 0 84 + #define USB2_TX_CK60_PHSEL_MASK_MMP3 (0xf << 0) 85 + 86 + #define USB2_TX_AMP_SHIFT_MMP3 4 87 + #define USB2_TX_AMP_MASK_MMP3 (0x7 << 4) 88 + 89 + #define USB2_TX_VDD12_SHIFT_MMP3 8 90 + #define USB2_TX_VDD12_MASK_MMP3 (0x3 << 8) 91 + 92 + /* USB2_TX_REG2 */ 93 + #define USB2_TX_DRV_SLEWRATE_SHIFT 10 94 + 95 + /* USB2_RX_REG0 */ 96 + #define USB2_RX_SQ_THRESH_SHIFT_MMP3 4 97 + #define USB2_RX_SQ_THRESH_MASK_MMP3 (0xf << 4) 98 + 99 + #define USB2_RX_SQ_LENGTH_SHIFT_MMP3 10 100 + #define USB2_RX_SQ_LENGTH_MASK_MMP3 (0x3 << 10) 101 + 102 + /* USB2_ANA_REG1*/ 103 + #define USB2_ANA_PU_ANA_SHIFT_MMP3 14 104 + 105 + /* USB2_OTG_REG0 */ 106 + #define USB2_OTG_PU_OTG_SHIFT_MMP3 3 107 + 108 + struct mmp3_usb_phy { 109 + struct phy *phy; 110 + void __iomem *base; 111 + }; 112 + 113 + static unsigned int u2o_get(void __iomem *base, unsigned int offset) 114 + { 115 + return readl_relaxed(base + offset); 116 + } 117 + 118 + static void u2o_set(void __iomem *base, unsigned int offset, 119 + unsigned int value) 120 + { 121 + u32 reg; 122 + 123 + reg = readl_relaxed(base + offset); 124 + reg |= value; 125 + writel_relaxed(reg, base + offset); 126 + readl_relaxed(base + offset); 127 + } 128 + 129 + static void u2o_clear(void __iomem *base, unsigned int offset, 130 + unsigned int value) 131 + { 132 + u32 reg; 133 + 134 + reg = readl_relaxed(base + offset); 135 + reg &= ~value; 136 + writel_relaxed(reg, base + 
offset); 137 + readl_relaxed(base + offset); 138 + } 139 + 140 + static int mmp3_usb_phy_init(struct phy *phy) 141 + { 142 + struct mmp3_usb_phy *mmp3_usb_phy = phy_get_drvdata(phy); 143 + void __iomem *base = mmp3_usb_phy->base; 144 + 145 + if (cpu_is_mmp3_a0()) { 146 + u2o_clear(base, USB2_PLL_REG0, (USB2_PLL_FBDIV_MASK_MMP3 147 + | USB2_PLL_REFDIV_MASK_MMP3)); 148 + u2o_set(base, USB2_PLL_REG0, 149 + 0xd << USB2_PLL_REFDIV_SHIFT_MMP3 150 + | 0xf0 << USB2_PLL_FBDIV_SHIFT_MMP3); 151 + } else if (cpu_is_mmp3_b0()) { 152 + u2o_clear(base, USB2_PLL_REG0, USB2_PLL_REFDIV_MASK_MMP3_B0 153 + | USB2_PLL_FBDIV_MASK_MMP3_B0); 154 + u2o_set(base, USB2_PLL_REG0, 155 + 0xd << USB2_PLL_REFDIV_SHIFT_MMP3_B0 156 + | 0xf0 << USB2_PLL_FBDIV_SHIFT_MMP3_B0); 157 + } else { 158 + dev_err(&phy->dev, "unsupported silicon revision\n"); 159 + return -ENODEV; 160 + } 161 + 162 + u2o_clear(base, USB2_PLL_REG1, USB2_PLL_PU_PLL_MASK 163 + | USB2_PLL_ICP_MASK_MMP3 164 + | USB2_PLL_KVCO_MASK_MMP3 165 + | USB2_PLL_CALI12_MASK_MMP3); 166 + u2o_set(base, USB2_PLL_REG1, 1 << USB2_PLL_PU_PLL_SHIFT_MMP3 167 + | 1 << USB2_PLL_LOCK_BYPASS_SHIFT_MMP3 168 + | 3 << USB2_PLL_ICP_SHIFT_MMP3 169 + | 3 << USB2_PLL_KVCO_SHIFT_MMP3 170 + | 3 << USB2_PLL_CAL12_SHIFT_MMP3); 171 + 172 + u2o_clear(base, USB2_TX_REG0, USB2_TX_IMPCAL_VTH_MASK_MMP3); 173 + u2o_set(base, USB2_TX_REG0, 2 << USB2_TX_IMPCAL_VTH_SHIFT_MMP3); 174 + 175 + u2o_clear(base, USB2_TX_REG1, USB2_TX_VDD12_MASK_MMP3 176 + | USB2_TX_AMP_MASK_MMP3 177 + | USB2_TX_CK60_PHSEL_MASK_MMP3); 178 + u2o_set(base, USB2_TX_REG1, 3 << USB2_TX_VDD12_SHIFT_MMP3 179 + | 4 << USB2_TX_AMP_SHIFT_MMP3 180 + | 4 << USB2_TX_CK60_PHSEL_SHIFT_MMP3); 181 + 182 + u2o_clear(base, USB2_TX_REG2, 3 << USB2_TX_DRV_SLEWRATE_SHIFT); 183 + u2o_set(base, USB2_TX_REG2, 2 << USB2_TX_DRV_SLEWRATE_SHIFT); 184 + 185 + u2o_clear(base, USB2_RX_REG0, USB2_RX_SQ_THRESH_MASK_MMP3); 186 + u2o_set(base, USB2_RX_REG0, 0xa << USB2_RX_SQ_THRESH_SHIFT_MMP3); 187 + 188 + u2o_set(base, USB2_ANA_REG1, 
0x1 << USB2_ANA_PU_ANA_SHIFT_MMP3); 189 + 190 + u2o_set(base, USB2_OTG_REG0, 0x1 << USB2_OTG_PU_OTG_SHIFT_MMP3); 191 + 192 + return 0; 193 + } 194 + 195 + static int mmp3_usb_phy_calibrate(struct phy *phy) 196 + { 197 + struct mmp3_usb_phy *mmp3_usb_phy = phy_get_drvdata(phy); 198 + void __iomem *base = mmp3_usb_phy->base; 199 + int loops; 200 + 201 + /* 202 + * PLL VCO and TX Impedance Calibration Timing: 203 + * 204 + * _____________________________________ 205 + * PU __________| 206 + * _____________________________ 207 + * VCOCAL START _________| 208 + * ___ 209 + * REG_RCAL_START ________________| |________|_______ 210 + * | 200us | 400us | 40| 400us | USB PHY READY 211 + */ 212 + 213 + udelay(200); 214 + u2o_set(base, USB2_PLL_REG1, 1 << USB2_PLL_VCOCAL_START_SHIFT_MMP3); 215 + udelay(400); 216 + u2o_set(base, USB2_TX_REG0, 1 << USB2_TX_RCAL_START_SHIFT_MMP3); 217 + udelay(40); 218 + u2o_clear(base, USB2_TX_REG0, 1 << USB2_TX_RCAL_START_SHIFT_MMP3); 219 + udelay(400); 220 + 221 + loops = 0; 222 + while ((u2o_get(base, USB2_PLL_REG1) & USB2_PLL_READY_MASK_MMP3) == 0) { 223 + mdelay(1); 224 + loops++; 225 + if (loops > 100) { 226 + dev_err(&phy->dev, "PLL_READY not set after 100mS.\n"); 227 + return -ETIMEDOUT; 228 + } 229 + } 230 + 231 + return 0; 232 + } 233 + 234 + static const struct phy_ops mmp3_usb_phy_ops = { 235 + .init = mmp3_usb_phy_init, 236 + .calibrate = mmp3_usb_phy_calibrate, 237 + .owner = THIS_MODULE, 238 + }; 239 + 240 + static const struct of_device_id mmp3_usb_phy_of_match[] = { 241 + { .compatible = "marvell,mmp3-usb-phy", }, 242 + { }, 243 + }; 244 + MODULE_DEVICE_TABLE(of, mmp3_usb_phy_of_match); 245 + 246 + static int mmp3_usb_phy_probe(struct platform_device *pdev) 247 + { 248 + struct device *dev = &pdev->dev; 249 + struct resource *resource; 250 + struct mmp3_usb_phy *mmp3_usb_phy; 251 + struct phy_provider *provider; 252 + 253 + mmp3_usb_phy = devm_kzalloc(dev, sizeof(*mmp3_usb_phy), GFP_KERNEL); 254 + if (!mmp3_usb_phy) 255 + return 
-ENOMEM; 256 + 257 + resource = platform_get_resource(pdev, IORESOURCE_MEM, 0); 258 + mmp3_usb_phy->base = devm_ioremap_resource(dev, resource); 259 + if (IS_ERR(mmp3_usb_phy->base)) { 260 + dev_err(dev, "failed to remap PHY regs\n"); 261 + return PTR_ERR(mmp3_usb_phy->base); 262 + } 263 + 264 + mmp3_usb_phy->phy = devm_phy_create(dev, NULL, &mmp3_usb_phy_ops); 265 + if (IS_ERR(mmp3_usb_phy->phy)) { 266 + dev_err(dev, "failed to create PHY\n"); 267 + return PTR_ERR(mmp3_usb_phy->phy); 268 + } 269 + 270 + phy_set_drvdata(mmp3_usb_phy->phy, mmp3_usb_phy); 271 + provider = devm_of_phy_provider_register(dev, of_phy_simple_xlate); 272 + if (IS_ERR(provider)) { 273 + dev_err(dev, "failed to register PHY provider\n"); 274 + return PTR_ERR(provider); 275 + } 276 + 277 + return 0; 278 + } 279 + 280 + static struct platform_driver mmp3_usb_phy_driver = { 281 + .probe = mmp3_usb_phy_probe, 282 + .driver = { 283 + .name = "mmp3-usb-phy", 284 + .of_match_table = mmp3_usb_phy_of_match, 285 + }, 286 + }; 287 + module_platform_driver(mmp3_usb_phy_driver); 288 + 289 + MODULE_AUTHOR("Lubomir Rintel <lkundrak@v3.sk>"); 290 + MODULE_DESCRIPTION("Marvell MMP3 USB PHY Driver"); 291 + MODULE_LICENSE("GPL v2");
+3 -2
drivers/reset/Kconfig
··· 129 129 130 130 config RESET_SIMPLE 131 131 bool "Simple Reset Controller Driver" if COMPILE_TEST 132 - default ARCH_STM32 || ARCH_STRATIX10 || ARCH_SUNXI || ARCH_ZX || ARCH_ASPEED || ARCH_BITMAIN || ARC 132 + default ARCH_AGILEX || ARCH_ASPEED || ARCH_BITMAIN || ARCH_REALTEK || ARCH_STM32 || ARCH_STRATIX10 || ARCH_SUNXI || ARCH_ZX || ARC 133 133 help 134 134 This enables a simple reset controller driver for reset lines that 135 135 that can be asserted and deasserted by toggling bits in a contiguous, ··· 138 138 Currently this driver supports: 139 139 - Altera SoCFPGAs 140 140 - ASPEED BMC SoCs 141 + - Bitmain BM1880 SoC 142 + - Realtek SoCs 141 143 - RCC reset controller in STM32 MCUs 142 144 - Allwinner SoCs 143 145 - ZTE's zx2967 family 144 - - Bitmain BM1880 SoC 145 146 146 147 config RESET_STM32MP157 147 148 bool "STM32MP157 Reset Driver" if COMPILE_TEST
+4 -4
drivers/reset/core.c
··· 77 77 * @rcdev: a pointer to the reset controller device 78 78 * @reset_spec: reset line specifier as found in the device tree 79 79 * 80 - * This simple translation function should be used for reset controllers 81 - * with 1:1 mapping, where reset lines can be indexed by number without gaps. 80 + * This static translation function is used by default if of_xlate in 81 + * :c:type:`reset_controller_dev` is not set. It is useful for all reset 82 + * controllers with 1:1 mapping, where reset lines can be indexed by number 83 + * without gaps. 82 84 */ 83 85 static int of_reset_simple_xlate(struct reset_controller_dev *rcdev, 84 86 const struct of_phandle_args *reset_spec) ··· 335 333 * internal state to be reset, but must be prepared for this to happen. 336 334 * Consumers must not use reset_control_reset on shared reset lines when 337 335 * reset_control_(de)assert has been used. 338 - * return 0. 339 336 * 340 337 * If rstc is NULL it is an optional reset and the function will just 341 338 * return 0. ··· 393 392 * After calling this function, the reset is guaranteed to be deasserted. 394 393 * Consumers must not use reset_control_reset on shared reset lines when 395 394 * reset_control_(de)assert has been used. 396 - * return 0. 397 395 * 398 396 * If rstc is NULL it is an optional reset and the function will just 399 397 * return 0.
+1 -1
drivers/reset/hisilicon/reset-hi3660.c
··· 56 56 return hi3660_reset_deassert(rcdev, idx); 57 57 } 58 58 59 - static struct reset_control_ops hi3660_reset_ops = { 59 + static const struct reset_control_ops hi3660_reset_ops = { 60 60 .reset = hi3660_reset_dev, 61 61 .assert = hi3660_reset_assert, 62 62 .deassert = hi3660_reset_deassert,
+40 -3
drivers/reset/reset-meson-audio-arb.c
··· 19 19 spinlock_t lock; 20 20 }; 21 21 22 + struct meson_audio_arb_match_data { 23 + const unsigned int *reset_bits; 24 + unsigned int reset_num; 25 + }; 26 + 22 27 #define ARB_GENERAL_BIT 31 23 28 24 29 static const unsigned int axg_audio_arb_reset_bits[] = { ··· 33 28 [AXG_ARB_FRDDR_A] = 4, 34 29 [AXG_ARB_FRDDR_B] = 5, 35 30 [AXG_ARB_FRDDR_C] = 6, 31 + }; 32 + 33 + static const struct meson_audio_arb_match_data axg_audio_arb_match = { 34 + .reset_bits = axg_audio_arb_reset_bits, 35 + .reset_num = ARRAY_SIZE(axg_audio_arb_reset_bits), 36 + }; 37 + 38 + static const unsigned int sm1_audio_arb_reset_bits[] = { 39 + [AXG_ARB_TODDR_A] = 0, 40 + [AXG_ARB_TODDR_B] = 1, 41 + [AXG_ARB_TODDR_C] = 2, 42 + [AXG_ARB_FRDDR_A] = 4, 43 + [AXG_ARB_FRDDR_B] = 5, 44 + [AXG_ARB_FRDDR_C] = 6, 45 + [AXG_ARB_TODDR_D] = 3, 46 + [AXG_ARB_FRDDR_D] = 7, 47 + }; 48 + 49 + static const struct meson_audio_arb_match_data sm1_audio_arb_match = { 50 + .reset_bits = sm1_audio_arb_reset_bits, 51 + .reset_num = ARRAY_SIZE(sm1_audio_arb_reset_bits), 36 52 }; 37 53 38 54 static int meson_audio_arb_update(struct reset_controller_dev *rcdev, ··· 108 82 }; 109 83 110 84 static const struct of_device_id meson_audio_arb_of_match[] = { 111 - { .compatible = "amlogic,meson-axg-audio-arb", }, 85 + { 86 + .compatible = "amlogic,meson-axg-audio-arb", 87 + .data = &axg_audio_arb_match, 88 + }, { 89 + .compatible = "amlogic,meson-sm1-audio-arb", 90 + .data = &sm1_audio_arb_match, 91 + }, 112 92 {} 113 93 }; 114 94 MODULE_DEVICE_TABLE(of, meson_audio_arb_of_match); ··· 136 104 static int meson_audio_arb_probe(struct platform_device *pdev) 137 105 { 138 106 struct device *dev = &pdev->dev; 107 + const struct meson_audio_arb_match_data *data; 139 108 struct meson_audio_arb_data *arb; 140 109 struct resource *res; 141 110 int ret; 111 + 112 + data = of_device_get_match_data(dev); 113 + if (!data) 114 + return -EINVAL; 142 115 143 116 arb = devm_kzalloc(dev, sizeof(*arb), GFP_KERNEL); 144 117 if (!arb) ··· 163 126 
return PTR_ERR(arb->regs); 164 127 165 128 spin_lock_init(&arb->lock); 166 - arb->reset_bits = axg_audio_arb_reset_bits; 167 - arb->rstc.nr_resets = ARRAY_SIZE(axg_audio_arb_reset_bits); 129 + arb->reset_bits = data->reset_bits; 130 + arb->rstc.nr_resets = data->reset_num; 168 131 arb->rstc.ops = &meson_audio_arb_rstc_ops; 169 132 arb->rstc.of_node = dev->of_node; 170 133 arb->rstc.owner = THIS_MODULE;
+28 -7
drivers/reset/reset-meson.c
··· 15 15 #include <linux/types.h> 16 16 #include <linux/of_device.h> 17 17 18 - #define REG_COUNT 8 19 18 #define BITS_PER_REG 32 20 - #define LEVEL_OFFSET 0x7c 19 + 20 + struct meson_reset_param { 21 + int reg_count; 22 + int level_offset; 23 + }; 21 24 22 25 struct meson_reset { 23 26 void __iomem *reg_base; 27 + const struct meson_reset_param *param; 24 28 struct reset_controller_dev rcdev; 25 29 spinlock_t lock; 26 30 }; ··· 50 46 container_of(rcdev, struct meson_reset, rcdev); 51 47 unsigned int bank = id / BITS_PER_REG; 52 48 unsigned int offset = id % BITS_PER_REG; 53 - void __iomem *reg_addr = data->reg_base + LEVEL_OFFSET + (bank << 2); 49 + void __iomem *reg_addr; 54 50 unsigned long flags; 55 51 u32 reg; 52 + 53 + reg_addr = data->reg_base + data->param->level_offset + (bank << 2); 56 54 57 55 spin_lock_irqsave(&data->lock, flags); 58 56 ··· 87 81 .deassert = meson_reset_deassert, 88 82 }; 89 83 84 + static const struct meson_reset_param meson8b_param = { 85 + .reg_count = 8, 86 + .level_offset = 0x7c, 87 + }; 88 + 89 + static const struct meson_reset_param meson_a1_param = { 90 + .reg_count = 3, 91 + .level_offset = 0x40, 92 + }; 93 + 90 94 static const struct of_device_id meson_reset_dt_ids[] = { 91 - { .compatible = "amlogic,meson8b-reset" }, 92 - { .compatible = "amlogic,meson-gxbb-reset" }, 93 - { .compatible = "amlogic,meson-axg-reset" }, 95 + { .compatible = "amlogic,meson8b-reset", .data = &meson8b_param}, 96 + { .compatible = "amlogic,meson-gxbb-reset", .data = &meson8b_param}, 97 + { .compatible = "amlogic,meson-axg-reset", .data = &meson8b_param}, 98 + { .compatible = "amlogic,meson-a1-reset", .data = &meson_a1_param}, 94 99 { /* sentinel */ }, 95 100 }; 96 101 ··· 119 102 if (IS_ERR(data->reg_base)) 120 103 return PTR_ERR(data->reg_base); 121 104 105 + data->param = of_device_get_match_data(&pdev->dev); 106 + if (!data->param) 107 + return -ENODEV; 108 + 122 109 platform_set_drvdata(pdev, data); 123 110 124 111 spin_lock_init(&data->lock); 
125 112 126 113 data->rcdev.owner = THIS_MODULE; 127 - data->rcdev.nr_resets = REG_COUNT * BITS_PER_REG; 114 + data->rcdev.nr_resets = data->param->reg_count * BITS_PER_REG; 128 115 data->rcdev.ops = &meson_reset_ops; 129 116 data->rcdev.of_node = pdev->dev.of_node; 130 117
+4
drivers/reset/reset-uniphier-glue.c
··· 141 141 .data = &uniphier_pro4_data, 142 142 }, 143 143 { 144 + .compatible = "socionext,uniphier-pro5-usb3-reset", 145 + .data = &uniphier_pro4_data, 146 + }, 147 + { 144 148 .compatible = "socionext,uniphier-pxs2-usb3-reset", 145 149 .data = &uniphier_pxs2_data, 146 150 },
+1 -1
drivers/reset/reset-zynqmp.c
··· 64 64 PM_RESET_ACTION_PULSE); 65 65 } 66 66 67 - static struct reset_control_ops zynqmp_reset_ops = { 67 + static const struct reset_control_ops zynqmp_reset_ops = { 68 68 .reset = zynqmp_reset_reset, 69 69 .assert = zynqmp_reset_assert, 70 70 .deassert = zynqmp_reset_deassert,
+3
drivers/soc/amlogic/meson-gx-socinfo.c
··· 40 40 { "G12A", 0x28 }, 41 41 { "G12B", 0x29 }, 42 42 { "SM1", 0x2b }, 43 + { "A1", 0x2c }, 43 44 }; 44 45 45 46 static const struct meson_gx_package_id { ··· 69 68 { "S922X", 0x29, 0x40, 0xf0 }, 70 69 { "A311D", 0x29, 0x10, 0xf0 }, 71 70 { "S905X3", 0x2b, 0x5, 0xf }, 71 + { "S905D3", 0x2b, 0xb0, 0xf0 }, 72 + { "A113L", 0x2c, 0x0, 0xf8 }, 72 73 }; 73 74 74 75 static inline unsigned int socinfo_to_major(u32 socinfo)
+11
drivers/soc/atmel/Kconfig
··· 5 5 default ARCH_AT91 6 6 help 7 7 Include support for the SoC bus on the Atmel ARM SoCs. 8 + 9 + config AT91_SOC_SFR 10 + tristate "Special Function Registers support" 11 + depends on ARCH_AT91 || COMPILE_TEST 12 + help 13 + This is a driver for the Special Function Registers available on 14 + Atmel SAMA5Dx SoCs, providing access to specific aspects of the 15 + integrated memory, bridge implementations, processor etc. 16 + 17 + This driver can also be built as a module. If so, the module 18 + will be called sfr.
+1
drivers/soc/atmel/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0-only 2 2 obj-$(CONFIG_AT91_SOC_ID) += soc.o 3 + obj-$(CONFIG_AT91_SOC_SFR) += sfr.o
+99
drivers/soc/atmel/sfr.c
···
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * sfr.c - driver for special function registers
+ *
+ * Copyright (C) 2019 Bootlin.
+ *
+ */
+#include <linux/mfd/syscon.h>
+#include <linux/module.h>
+#include <linux/nvmem-provider.h>
+#include <linux/random.h>
+#include <linux/of.h>
+#include <linux/of_device.h>
+#include <linux/platform_device.h>
+#include <linux/regmap.h>
+
+#define SFR_SN0		0x4c
+#define SFR_SN_SIZE	8
+
+struct atmel_sfr_priv {
+	struct regmap	*regmap;
+};
+
+static int atmel_sfr_read(void *context, unsigned int offset,
+			  void *buf, size_t bytes)
+{
+	struct atmel_sfr_priv *priv = context;
+
+	return regmap_bulk_read(priv->regmap, SFR_SN0 + offset,
+				buf, bytes / 4);
+}
+
+static struct nvmem_config atmel_sfr_nvmem_config = {
+	.name = "atmel-sfr",
+	.read_only = true,
+	.word_size = 4,
+	.stride = 4,
+	.size = SFR_SN_SIZE,
+	.reg_read = atmel_sfr_read,
+};
+
+static int atmel_sfr_probe(struct platform_device *pdev)
+{
+	struct device *dev = &pdev->dev;
+	struct device_node *np = dev->of_node;
+	struct nvmem_device *nvmem;
+	struct atmel_sfr_priv *priv;
+	u8 sn[SFR_SN_SIZE];
+	int ret;
+
+	priv = devm_kmalloc(dev, sizeof(*priv), GFP_KERNEL);
+	if (!priv)
+		return -ENOMEM;
+
+	priv->regmap = syscon_node_to_regmap(np);
+	if (IS_ERR(priv->regmap)) {
+		dev_err(dev, "cannot get parent's regmap\n");
+		return PTR_ERR(priv->regmap);
+	}
+
+	atmel_sfr_nvmem_config.dev = dev;
+	atmel_sfr_nvmem_config.priv = priv;
+
+	nvmem = devm_nvmem_register(dev, &atmel_sfr_nvmem_config);
+	if (IS_ERR(nvmem)) {
+		dev_err(dev, "error registering nvmem config\n");
+		return PTR_ERR(nvmem);
+	}
+
+	ret = atmel_sfr_read(priv, 0, sn, SFR_SN_SIZE);
+	if (ret == 0)
+		add_device_randomness(sn, SFR_SN_SIZE);
+
+	return ret;
+}
+
+static const struct of_device_id atmel_sfr_dt_ids[] = {
+	{
+		.compatible = "atmel,sama5d2-sfr",
+	}, {
+		.compatible = "atmel,sama5d4-sfr",
+	}, {
+		/* sentinel */
+	},
+};
+MODULE_DEVICE_TABLE(of, atmel_sfr_dt_ids);
+
+static struct platform_driver atmel_sfr_driver = {
+	.probe = atmel_sfr_probe,
+	.driver = {
+		.name = "atmel-sfr",
+		.of_match_table = atmel_sfr_dt_ids,
+	},
+};
+module_platform_driver(atmel_sfr_driver);
+
+MODULE_AUTHOR("Kamel Bouhara <kamel.bouhara@bootlin.com>");
+MODULE_DESCRIPTION("Atmel SFR SN driver for SAMA5D2/4 SoC family");
+MODULE_LICENSE("GPL v2");
+10
drivers/soc/fsl/Kconfig
···
 	  /dev/dpaa2_mc_console and /dev/dpaa2_aiop_console,
 	  which can be used to dump the Management Complex and AIOP
 	  firmware logs.
+
+config FSL_RCPM
+	bool "Freescale RCPM support"
+	depends on PM_SLEEP && (ARM || ARM64)
+	help
+	  The NXP QorIQ processors based on ARM cores have an RCPM module
+	  (Run Control and Power Management), which performs all device-level
+	  tasks associated with power management, such as wakeup source
+	  control. Note that this driver does not currently support
+	  PowerPC-based QorIQ processors.
 endmenu
+1
drivers/soc/fsl/Makefile
···
 obj-$(CONFIG_FSL_DPAA)		+= qbman/
 obj-$(CONFIG_QUICC_ENGINE)	+= qe/
 obj-$(CONFIG_CPM)		+= qe/
+obj-$(CONFIG_FSL_RCPM)		+= rcpm.o
 obj-$(CONFIG_FSL_GUTS)		+= guts.o
 obj-$(CONFIG_FSL_MC_DPIO)	+= dpio/
 obj-$(CONFIG_DPAA2_CONSOLE)	+= dpaa2-console.o
+151
drivers/soc/fsl/rcpm.c
···
+// SPDX-License-Identifier: GPL-2.0
+//
+// rcpm.c - Freescale QorIQ RCPM driver
+//
+// Copyright 2019 NXP
+//
+// Author: Ran Wang <ran.wang_1@nxp.com>
+
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/platform_device.h>
+#include <linux/of_address.h>
+#include <linux/slab.h>
+#include <linux/suspend.h>
+#include <linux/kernel.h>
+
+#define RCPM_WAKEUP_CELL_MAX_SIZE	7
+
+struct rcpm {
+	unsigned int	wakeup_cells;
+	void __iomem	*ippdexpcr_base;
+	bool		little_endian;
+};
+
+/**
+ * rcpm_pm_prepare - performs device-level tasks associated with power
+ * management, such as programming related to the wakeup source control.
+ * @dev: Device to handle.
+ *
+ */
+static int rcpm_pm_prepare(struct device *dev)
+{
+	int i, ret, idx;
+	void __iomem *base;
+	struct wakeup_source	*ws;
+	struct rcpm		*rcpm;
+	struct device_node	*np = dev->of_node;
+	u32 value[RCPM_WAKEUP_CELL_MAX_SIZE + 1];
+	u32 setting[RCPM_WAKEUP_CELL_MAX_SIZE] = {0};
+
+	rcpm = dev_get_drvdata(dev);
+	if (!rcpm)
+		return -EINVAL;
+
+	base = rcpm->ippdexpcr_base;
+	idx = wakeup_sources_read_lock();
+
+	/* Begin with first registered wakeup source */
+	for_each_wakeup_source(ws) {
+
+		/* skip object which is not attached to device */
+		if (!ws->dev || !ws->dev->parent)
+			continue;
+
+		ret = device_property_read_u32_array(ws->dev->parent,
+				"fsl,rcpm-wakeup", value,
+				rcpm->wakeup_cells + 1);
+
+		/* Wakeup source should refer to current rcpm device */
+		if (ret || (np->phandle != value[0]))
+			continue;
+
+		/* Property "#fsl,rcpm-wakeup-cells" of rcpm node defines the
+		 * number of IPPDEXPCR register cells, and "fsl,rcpm-wakeup"
+		 * of wakeup source IP contains an integer array: <phandle to
+		 * RCPM node, IPPDEXPCR0 setting, IPPDEXPCR1 setting,
+		 * IPPDEXPCR2 setting, etc>.
+		 *
+		 * So we will go through them to collect setting data.
+		 */
+		for (i = 0; i < rcpm->wakeup_cells; i++)
+			setting[i] |= value[i + 1];
+	}
+
+	wakeup_sources_read_unlock(idx);
+
+	/* Program all IPPDEXPCRn once */
+	for (i = 0; i < rcpm->wakeup_cells; i++) {
+		u32 tmp = setting[i];
+		void __iomem *address = base + i * 4;
+
+		if (!tmp)
+			continue;
+
+		/* We can only OR related bits */
+		if (rcpm->little_endian) {
+			tmp |= ioread32(address);
+			iowrite32(tmp, address);
+		} else {
+			tmp |= ioread32be(address);
+			iowrite32be(tmp, address);
+		}
+	}
+
+	return 0;
+}
+
+static const struct dev_pm_ops rcpm_pm_ops = {
+	.prepare = rcpm_pm_prepare,
+};
+
+static int rcpm_probe(struct platform_device *pdev)
+{
+	struct device	*dev = &pdev->dev;
+	struct resource *r;
+	struct rcpm	*rcpm;
+	int ret;
+
+	rcpm = devm_kzalloc(dev, sizeof(*rcpm), GFP_KERNEL);
+	if (!rcpm)
+		return -ENOMEM;
+
+	r = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	if (!r)
+		return -ENODEV;
+
+	rcpm->ippdexpcr_base = devm_ioremap_resource(&pdev->dev, r);
+	if (IS_ERR(rcpm->ippdexpcr_base)) {
+		ret = PTR_ERR(rcpm->ippdexpcr_base);
+		return ret;
+	}
+
+	rcpm->little_endian = device_property_read_bool(
+			&pdev->dev, "little-endian");
+
+	ret = device_property_read_u32(&pdev->dev,
+			"#fsl,rcpm-wakeup-cells", &rcpm->wakeup_cells);
+	if (ret)
+		return ret;
+
+	dev_set_drvdata(&pdev->dev, rcpm);
+
+	return 0;
+}
+
+static const struct of_device_id rcpm_of_match[] = {
+	{ .compatible = "fsl,qoriq-rcpm-2.1+", },
+	{}
+};
+MODULE_DEVICE_TABLE(of, rcpm_of_match);
+
+static struct platform_driver rcpm_driver = {
+	.driver = {
+		.name = "rcpm",
+		.of_match_table = rcpm_of_match,
+		.pm	= &rcpm_pm_ops,
+	},
+	.probe = rcpm_probe,
+};
+
+module_platform_driver(rcpm_driver);
+20 -16
drivers/soc/imx/soc-imx-scu.c
···
 	u32 uid_high;
 } __packed;
 
-static ssize_t soc_uid_show(struct device *dev,
-			    struct device_attribute *attr, char *buf)
+static int imx_scu_soc_uid(u64 *soc_uid)
 {
 	struct imx_sc_msg_misc_get_soc_uid msg;
 	struct imx_sc_rpc_msg *hdr = &msg.hdr;
-	u64 soc_uid;
 	int ret;
 
 	hdr->ver = IMX_SC_RPC_VERSION;
···
 		return ret;
 	}
 
-	soc_uid = msg.uid_high;
-	soc_uid <<= 32;
-	soc_uid |= msg.uid_low;
+	*soc_uid = msg.uid_high;
+	*soc_uid <<= 32;
+	*soc_uid |= msg.uid_low;
 
-	return sprintf(buf, "%016llX\n", soc_uid);
+	return 0;
 }
-
-static DEVICE_ATTR_RO(soc_uid);
 
 static int imx_scu_soc_id(void)
 {
···
 	struct soc_device_attribute *soc_dev_attr;
 	struct soc_device *soc_dev;
 	int id, ret;
+	u64 uid = 0;
 	u32 val;
 
 	ret = imx_scu_get_handle(&soc_ipc_handle);
···
 	if (id < 0)
 		return -EINVAL;
 
+	ret = imx_scu_soc_uid(&uid);
+	if (ret < 0)
+		return -EINVAL;
+
 	/* format soc_id value passed from SCU firmware */
 	val = id & 0x1f;
 	soc_dev_attr->soc_id = kasprintf(GFP_KERNEL, "0x%x", val);
···
 		goto free_soc_id;
 	}
 
-	soc_dev = soc_device_register(soc_dev_attr);
-	if (IS_ERR(soc_dev)) {
-		ret = PTR_ERR(soc_dev);
+	soc_dev_attr->serial_number = kasprintf(GFP_KERNEL, "%016llX", uid);
+	if (!soc_dev_attr->serial_number) {
+		ret = -ENOMEM;
 		goto free_revision;
 	}
 
-	ret = device_create_file(soc_device_to_device(soc_dev),
-				 &dev_attr_soc_uid);
-	if (ret)
-		goto free_revision;
+	soc_dev = soc_device_register(soc_dev_attr);
+	if (IS_ERR(soc_dev)) {
+		ret = PTR_ERR(soc_dev);
+		goto free_serial_number;
+	}
 
 	return 0;
 
+free_serial_number:
+	kfree(soc_dev_attr->serial_number);
 free_revision:
 	kfree(soc_dev_attr->revision);
 free_soc_id:
+36 -15
drivers/soc/imx/soc-imx8.c
···
 #include <linux/slab.h>
 #include <linux/sys_soc.h>
 #include <linux/platform_device.h>
+#include <linux/arm-smccc.h>
 #include <linux/of.h>
 
 #define REV_B1				0x21
 
 #define IMX8MQ_SW_INFO_B1		0x40
 #define IMX8MQ_SW_MAGIC_B1		0xff0055aa
+
+#define IMX_SIP_GET_SOC_INFO		0xc2000006
 
 #define OCOTP_UID_LOW			0x410
 #define OCOTP_UID_HIGH			0x420
···
 
 static u64 soc_uid;
 
-static ssize_t soc_uid_show(struct device *dev,
-			    struct device_attribute *attr, char *buf)
+#ifdef CONFIG_HAVE_ARM_SMCCC
+static u32 imx8mq_soc_revision_from_atf(void)
 {
-	return sprintf(buf, "%016llX\n", soc_uid);
-}
+	struct arm_smccc_res res;
 
-static DEVICE_ATTR_RO(soc_uid);
+	arm_smccc_smc(IMX_SIP_GET_SOC_INFO, 0, 0, 0, 0, 0, 0, 0, &res);
+
+	if (res.a0 == SMCCC_RET_NOT_SUPPORTED)
+		return 0;
+	else
+		return res.a0 & 0xff;
+}
+#else
+static inline u32 imx8mq_soc_revision_from_atf(void) { return 0; };
+#endif
 
 static u32 __init imx8mq_soc_revision(void)
 {
···
 	ocotp_base = of_iomap(np, 0);
 	WARN_ON(!ocotp_base);
 
-	magic = readl_relaxed(ocotp_base + IMX8MQ_SW_INFO_B1);
-	if (magic == IMX8MQ_SW_MAGIC_B1)
-		rev = REV_B1;
+	/*
+	 * SOC revision on older imx8mq is not available in fuses so query
+	 * the value from ATF instead.
+	 */
+	rev = imx8mq_soc_revision_from_atf();
+	if (!rev) {
+		magic = readl_relaxed(ocotp_base + IMX8MQ_SW_INFO_B1);
+		if (magic == IMX8MQ_SW_MAGIC_B1)
+			rev = REV_B1;
+	}
 
 	soc_uid = readl_relaxed(ocotp_base + OCOTP_UID_HIGH);
 	soc_uid <<= 32;
···
 		goto free_soc;
 	}
 
-	soc_dev = soc_device_register(soc_dev_attr);
-	if (IS_ERR(soc_dev)) {
-		ret = PTR_ERR(soc_dev);
+	soc_dev_attr->serial_number = kasprintf(GFP_KERNEL, "%016llX", soc_uid);
+	if (!soc_dev_attr->serial_number) {
+		ret = -ENOMEM;
 		goto free_rev;
 	}
 
-	ret = device_create_file(soc_device_to_device(soc_dev),
-				 &dev_attr_soc_uid);
-	if (ret)
-		goto free_rev;
+	soc_dev = soc_device_register(soc_dev_attr);
+	if (IS_ERR(soc_dev)) {
+		ret = PTR_ERR(soc_dev);
+		goto free_serial_number;
+	}
 
 	if (IS_ENABLED(CONFIG_ARM_IMX_CPUFREQ_DT))
 		platform_device_register_simple("imx-cpufreq-dt", -1, NULL, 0);
 
 	return 0;
 
+free_serial_number:
+	kfree(soc_dev_attr->serial_number);
 free_rev:
 	if (strcmp(soc_dev_attr->revision, "unknown"))
 		kfree(soc_dev_attr->revision);
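Both i.MX patches above assemble the 64-bit UID from two 32-bit fuse words and format it with `"%016llX"` for the soc_device `serial_number` attribute. A small userspace sketch of just that shift/OR step (the input values are made up; only the arithmetic and format string mirror the drivers):

```c
#include <stdint.h>

/* Combine the high and low 32-bit UID words, as the drivers do with
 * soc_uid = high; soc_uid <<= 32; soc_uid |= low; */
static uint64_t make_uid(uint32_t high, uint32_t low)
{
	uint64_t uid = high;

	uid <<= 32;
	uid |= low;
	return uid;
}
```

Formatted with `snprintf(buf, sizeof(buf), "%016llX", ...)`, `make_uid(0xDEAD, 0xBEEF)` yields the zero-padded 16-digit string "0000DEAD0000BEEF".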
+147 -69
drivers/soc/mediatek/mtk-scpsys.c
···
 #include <dt-bindings/power/mt8173-power.h>
 
 #define MTK_POLL_DELAY_US		10
-#define MTK_POLL_TIMEOUT		(jiffies_to_usecs(HZ))
+#define MTK_POLL_TIMEOUT		USEC_PER_SEC
 
 #define MTK_SCPD_ACTIVE_WAKEUP		BIT(0)
 #define MTK_SCPD_FWAIT_SRAM		BIT(1)
···
 
 #define MAX_CLKS	3
 
+/**
+ * struct scp_domain_data - scp domain data for power on/off flow
+ * @name: The domain name.
+ * @sta_mask: The mask for power on/off status bit.
+ * @ctl_offs: The offset for main power control register.
+ * @sram_pdn_bits: The mask for sram power control bits.
+ * @sram_pdn_ack_bits: The mask for sram power control acked bits.
+ * @bus_prot_mask: The mask for single step bus protection.
+ * @clk_id: The basic clocks required by this power domain.
+ * @caps: The flag for active wake-up action.
+ */
 struct scp_domain_data {
 	const char *name;
 	u32 sta_mask;
···
 	return -EINVAL;
 }
 
+static int scpsys_regulator_enable(struct scp_domain *scpd)
+{
+	if (!scpd->supply)
+		return 0;
+
+	return regulator_enable(scpd->supply);
+}
+
+static int scpsys_regulator_disable(struct scp_domain *scpd)
+{
+	if (!scpd->supply)
+		return 0;
+
+	return regulator_disable(scpd->supply);
+}
+
+static void scpsys_clk_disable(struct clk *clk[], int max_num)
+{
+	int i;
+
+	for (i = max_num - 1; i >= 0; i--)
+		clk_disable_unprepare(clk[i]);
+}
+
+static int scpsys_clk_enable(struct clk *clk[], int max_num)
+{
+	int i, ret = 0;
+
+	for (i = 0; i < max_num && clk[i]; i++) {
+		ret = clk_prepare_enable(clk[i]);
+		if (ret) {
+			scpsys_clk_disable(clk, i);
+			break;
+		}
+	}
+
+	return ret;
+}
+
+static int scpsys_sram_enable(struct scp_domain *scpd, void __iomem *ctl_addr)
+{
+	u32 val;
+	u32 pdn_ack = scpd->data->sram_pdn_ack_bits;
+	int tmp;
+
+	val = readl(ctl_addr);
+	val &= ~scpd->data->sram_pdn_bits;
+	writel(val, ctl_addr);
+
+	/* Either wait until SRAM_PDN_ACK all 0 or have a force wait */
+	if (MTK_SCPD_CAPS(scpd, MTK_SCPD_FWAIT_SRAM)) {
+		/*
+		 * Currently, MTK_SCPD_FWAIT_SRAM is necessary only for
+		 * MT7622_POWER_DOMAIN_WB and thus just a trivial setup
+		 * is applied here.
+		 */
+		usleep_range(12000, 12100);
+	} else {
+		/* Either wait until SRAM_PDN_ACK all 1 or 0 */
+		int ret = readl_poll_timeout(ctl_addr, tmp,
+					     (tmp & pdn_ack) == 0,
+					     MTK_POLL_DELAY_US, MTK_POLL_TIMEOUT);
+		if (ret < 0)
+			return ret;
+	}
+
+	return 0;
+}
+
+static int scpsys_sram_disable(struct scp_domain *scpd, void __iomem *ctl_addr)
+{
+	u32 val;
+	u32 pdn_ack = scpd->data->sram_pdn_ack_bits;
+	int tmp;
+
+	val = readl(ctl_addr);
+	val |= scpd->data->sram_pdn_bits;
+	writel(val, ctl_addr);
+
+	/* Either wait until SRAM_PDN_ACK all 1 or 0 */
+	return readl_poll_timeout(ctl_addr, tmp,
+				  (tmp & pdn_ack) == pdn_ack,
+				  MTK_POLL_DELAY_US, MTK_POLL_TIMEOUT);
+}
+
+static int scpsys_bus_protect_enable(struct scp_domain *scpd)
+{
+	struct scp *scp = scpd->scp;
+
+	if (!scpd->data->bus_prot_mask)
+		return 0;
+
+	return mtk_infracfg_set_bus_protection(scp->infracfg,
+					       scpd->data->bus_prot_mask,
+					       scp->bus_prot_reg_update);
+}
+
+static int scpsys_bus_protect_disable(struct scp_domain *scpd)
+{
+	struct scp *scp = scpd->scp;
+
+	if (!scpd->data->bus_prot_mask)
+		return 0;
+
+	return mtk_infracfg_clear_bus_protection(scp->infracfg,
+						 scpd->data->bus_prot_mask,
+						 scp->bus_prot_reg_update);
+}
+
 static int scpsys_power_on(struct generic_pm_domain *genpd)
 {
 	struct scp_domain *scpd = container_of(genpd, struct scp_domain, genpd);
 	struct scp *scp = scpd->scp;
 	void __iomem *ctl_addr = scp->base + scpd->data->ctl_offs;
-	u32 pdn_ack = scpd->data->sram_pdn_ack_bits;
 	u32 val;
 	int ret, tmp;
-	int i;
 
-	if (scpd->supply) {
-		ret = regulator_enable(scpd->supply);
-		if (ret)
-			return ret;
-	}
+	ret = scpsys_regulator_enable(scpd);
+	if (ret < 0)
+		return ret;
 
-	for (i = 0; i < MAX_CLKS && scpd->clk[i]; i++) {
-		ret = clk_prepare_enable(scpd->clk[i]);
-		if (ret) {
-			for (--i; i >= 0; i--)
-				clk_disable_unprepare(scpd->clk[i]);
-
-			goto err_clk;
-		}
-	}
-
+	ret = scpsys_clk_enable(scpd->clk, MAX_CLKS);
+	if (ret)
+		goto err_clk;
+
+	/* subsys power on */
 	val = readl(ctl_addr);
 	val |= PWR_ON_BIT;
 	writel(val, ctl_addr);
···
 	val |= PWR_RST_B_BIT;
 	writel(val, ctl_addr);
 
-	val &= ~scpd->data->sram_pdn_bits;
-	writel(val, ctl_addr);
+	ret = scpsys_sram_enable(scpd, ctl_addr);
+	if (ret < 0)
+		goto err_pwr_ack;
 
-	/* Either wait until SRAM_PDN_ACK all 0 or have a force wait */
-	if (MTK_SCPD_CAPS(scpd, MTK_SCPD_FWAIT_SRAM)) {
-		/*
-		 * Currently, MTK_SCPD_FWAIT_SRAM is necessary only for
-		 * MT7622_POWER_DOMAIN_WB and thus just a trivial setup is
-		 * applied here.
-		 */
-		usleep_range(12000, 12100);
-
-	} else {
-		ret = readl_poll_timeout(ctl_addr, tmp, (tmp & pdn_ack) == 0,
-					 MTK_POLL_DELAY_US, MTK_POLL_TIMEOUT);
-		if (ret < 0)
-			goto err_pwr_ack;
-	}
-
-	if (scpd->data->bus_prot_mask) {
-		ret = mtk_infracfg_clear_bus_protection(scp->infracfg,
-				scpd->data->bus_prot_mask,
-				scp->bus_prot_reg_update);
-		if (ret)
-			goto err_pwr_ack;
-	}
+	ret = scpsys_bus_protect_disable(scpd);
+	if (ret < 0)
+		goto err_pwr_ack;
 
 	return 0;
 
 err_pwr_ack:
-	for (i = MAX_CLKS - 1; i >= 0; i--) {
-		if (scpd->clk[i])
-			clk_disable_unprepare(scpd->clk[i]);
-	}
+	scpsys_clk_disable(scpd->clk, MAX_CLKS);
 err_clk:
-	if (scpd->supply)
-		regulator_disable(scpd->supply);
+	scpsys_regulator_disable(scpd);
 
 	dev_err(scp->dev, "Failed to power on domain %s\n", genpd->name);
 
···
 	struct scp_domain *scpd = container_of(genpd, struct scp_domain, genpd);
 	struct scp *scp = scpd->scp;
 	void __iomem *ctl_addr = scp->base + scpd->data->ctl_offs;
-	u32 pdn_ack = scpd->data->sram_pdn_ack_bits;
 	u32 val;
 	int ret, tmp;
-	int i;
 
-	if (scpd->data->bus_prot_mask) {
-		ret = mtk_infracfg_set_bus_protection(scp->infracfg,
-				scpd->data->bus_prot_mask,
-				scp->bus_prot_reg_update);
-		if (ret)
-			goto out;
-	}
-
-	val = readl(ctl_addr);
-	val |= scpd->data->sram_pdn_bits;
-	writel(val, ctl_addr);
-
-	/* wait until SRAM_PDN_ACK all 1 */
-	ret = readl_poll_timeout(ctl_addr, tmp, (tmp & pdn_ack) == pdn_ack,
-				 MTK_POLL_DELAY_US, MTK_POLL_TIMEOUT);
+	ret = scpsys_bus_protect_enable(scpd);
 	if (ret < 0)
 		goto out;
 
+	ret = scpsys_sram_disable(scpd, ctl_addr);
+	if (ret < 0)
+		goto out;
+
+	/* subsys power off */
+	val = readl(ctl_addr);
 	val |= PWR_ISO_BIT;
 	writel(val, ctl_addr);
···
 	if (ret < 0)
 		goto out;
 
-	for (i = 0; i < MAX_CLKS && scpd->clk[i]; i++)
-		clk_disable_unprepare(scpd->clk[i]);
+	scpsys_clk_disable(scpd->clk, MAX_CLKS);
 
-	if (scpd->supply)
-		regulator_disable(scpd->supply);
+	ret = scpsys_regulator_disable(scpd);
+	if (ret < 0)
+		goto out;
 
 	return 0;
 
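The `scpsys_sram_enable()`/`scpsys_sram_disable()` helpers factored out above both follow the `readl_poll_timeout()` pattern: re-read a status register until the ack bits reach the expected value, giving up after a timeout. A minimal userspace analogue of that pattern (a bounded retry counter stands in for the 10 us delay / 1 s timeout of `MTK_POLL_DELAY_US`/`MTK_POLL_TIMEOUT`; illustration only):

```c
#include <errno.h>
#include <stdint.h>

/* Poll 'reg' until (*reg & ack_mask) == want, or give up after
 * max_polls attempts, mirroring readl_poll_timeout() semantics:
 * 0 on success, -ETIMEDOUT on timeout. */
static int poll_ack(const volatile uint32_t *reg, uint32_t ack_mask,
		    uint32_t want, unsigned int max_polls)
{
	unsigned int i;

	for (i = 0; i < max_polls; i++) {
		if ((*reg & ack_mask) == want)
			return 0;	/* acked */
	}
	return -ETIMEDOUT;
}
```

Power-on waits for the ack bits to be all 0 (`want == 0`), power-off for all 1 (`want == ack_mask`), which is why the driver needs both helpers.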
+3 -11
drivers/soc/qcom/Kconfig
···
 	depends on ARCH_QCOM || COMPILE_TEST
 	help
 	  Qualcomm Technologies, Inc. platform specific
-	  Last Level Cache Controller(LLCC) driver. This provides interfaces
-	  to clients that use the LLCC. Say yes here to enable LLCC slice
-	  driver.
-
-config QCOM_SDM845_LLCC
-	tristate "Qualcomm Technologies, Inc. SDM845 LLCC driver"
-	depends on QCOM_LLCC
-	help
-	  Say yes here to enable the LLCC driver for SDM845. This provides
-	  data required to configure LLCC so that clients can start using the
-	  LLCC slices.
+	  Last Level Cache Controller (LLCC) driver for platforms such as
+	  SDM845. This provides interfaces to clients that use the LLCC.
+	  Say yes here to enable the LLCC slice driver.
 
 config QCOM_MDT_LOADER
 	tristate
+1 -2
drivers/soc/qcom/Makefile
···
 obj-$(CONFIG_QCOM_SOCINFO)	+= socinfo.o
 obj-$(CONFIG_QCOM_WCNSS_CTRL)	+= wcnss_ctrl.o
 obj-$(CONFIG_QCOM_APR)		+= apr.o
-obj-$(CONFIG_QCOM_LLCC)		+= llcc-slice.o
-obj-$(CONFIG_QCOM_SDM845_LLCC)	+= llcc-sdm845.o
+obj-$(CONFIG_QCOM_LLCC)		+= llcc-qcom.o
 obj-$(CONFIG_QCOM_RPMHPD)	+= rpmhpd.o
 obj-$(CONFIG_QCOM_RPMPD)	+= rpmpd.o
-100
drivers/soc/qcom/llcc-sdm845.c
···
-// SPDX-License-Identifier: GPL-2.0
-/*
- * Copyright (c) 2017-2018, The Linux Foundation. All rights reserved.
- *
- */
-
-#include <linux/kernel.h>
-#include <linux/module.h>
-#include <linux/of.h>
-#include <linux/of_device.h>
-#include <linux/soc/qcom/llcc-qcom.h>
-
-/*
- * SCT(System Cache Table) entry contains of the following members:
- * usecase_id: Unique id for the client's use case
- * slice_id: llcc slice id for each client
- * max_cap: The maximum capacity of the cache slice provided in KB
- * priority: Priority of the client used to select victim line for replacement
- * fixed_size: Boolean indicating if the slice has a fixed capacity
- * bonus_ways: Bonus ways are additional ways to be used for any slice,
- *		if client ends up using more than reserved cache ways. Bonus
- *		ways are allocated only if they are not reserved for some
- *		other client.
- * res_ways: Reserved ways for the cache slice, the reserved ways cannot
- *		be used by any other client than the one its assigned to.
- * cache_mode: Each slice operates as a cache, this controls the mode of the
- *		slice: normal or TCM(Tightly Coupled Memory)
- * probe_target_ways: Determines what ways to probe for access hit. When
- *		configured to 1 only bonus and reserved ways are probed.
- *		When configured to 0 all ways in llcc are probed.
- * dis_cap_alloc: Disable capacity based allocation for a client
- * retain_on_pc: If this bit is set and client has maintained active vote
- *		then the ways assigned to this client are not flushed on power
- *		collapse.
- * activate_on_init: Activate the slice immediately after the SCT is programmed
- */
-#define SCT_ENTRY(uid, sid, mc, p, fs, bway, rway, cmod, ptw, dca, rp, a) \
-	{					\
-		.usecase_id = uid,		\
-		.slice_id = sid,		\
-		.max_cap = mc,			\
-		.priority = p,			\
-		.fixed_size = fs,		\
-		.bonus_ways = bway,		\
-		.res_ways = rway,		\
-		.cache_mode = cmod,		\
-		.probe_target_ways = ptw,	\
-		.dis_cap_alloc = dca,		\
-		.retain_on_pc = rp,		\
-		.activate_on_init = a,		\
-	}
-
-static struct llcc_slice_config sdm845_data[] =  {
-	SCT_ENTRY(LLCC_CPUSS,    1, 2816, 1, 0, 0xffc, 0x2,   0, 0, 1, 1, 1),
-	SCT_ENTRY(LLCC_VIDSC0,   2, 512,  2, 1, 0x0,   0x0f0, 0, 0, 1, 1, 0),
-	SCT_ENTRY(LLCC_VIDSC1,   3, 512,  2, 1, 0x0,   0x0f0, 0, 0, 1, 1, 0),
-	SCT_ENTRY(LLCC_ROTATOR,  4, 563,  2, 1, 0x0,   0x00e, 2, 0, 1, 1, 0),
-	SCT_ENTRY(LLCC_VOICE,    5, 2816, 1, 0, 0xffc, 0x2,   0, 0, 1, 1, 0),
-	SCT_ENTRY(LLCC_AUDIO,    6, 2816, 1, 0, 0xffc, 0x2,   0, 0, 1, 1, 0),
-	SCT_ENTRY(LLCC_MDMHPGRW, 7, 1024, 2, 0, 0xfc,  0xf00, 0, 0, 1, 1, 0),
-	SCT_ENTRY(LLCC_MDM,      8, 2816, 1, 0, 0xffc, 0x2,   0, 0, 1, 1, 0),
-	SCT_ENTRY(LLCC_CMPT,     10, 2816, 1, 0, 0xffc, 0x2,  0, 0, 1, 1, 0),
-	SCT_ENTRY(LLCC_GPUHTW,   11, 512, 1, 1, 0xc,   0x0,   0, 0, 1, 1, 0),
-	SCT_ENTRY(LLCC_GPU,      12, 2304, 1, 0, 0xff0, 0x2,  0, 0, 1, 1, 0),
-	SCT_ENTRY(LLCC_MMUHWT,   13, 256, 2, 0, 0x0,   0x1,   0, 0, 1, 0, 1),
-	SCT_ENTRY(LLCC_CMPTDMA,  15, 2816, 1, 0, 0xffc, 0x2,  0, 0, 1, 1, 0),
-	SCT_ENTRY(LLCC_DISP,     16, 2816, 1, 0, 0xffc, 0x2,  0, 0, 1, 1, 0),
-	SCT_ENTRY(LLCC_VIDFW,    17, 2816, 1, 0, 0xffc, 0x2,  0, 0, 1, 1, 0),
-	SCT_ENTRY(LLCC_MDMHPFX,  20, 1024, 2, 1, 0x0,   0xf00, 0, 0, 1, 1, 0),
-	SCT_ENTRY(LLCC_MDMPNG,   21, 1024, 0, 1, 0x1e,  0x0,  0, 0, 1, 1, 0),
-	SCT_ENTRY(LLCC_AUDHW,    22, 1024, 1, 1, 0xffc, 0x2,  0, 0, 1, 1, 0),
-};
-
-static int sdm845_qcom_llcc_remove(struct platform_device *pdev)
-{
-	return qcom_llcc_remove(pdev);
-}
-
-static int sdm845_qcom_llcc_probe(struct platform_device *pdev)
-{
-	return qcom_llcc_probe(pdev, sdm845_data, ARRAY_SIZE(sdm845_data));
-}
-
-static const struct of_device_id sdm845_qcom_llcc_of_match[] = {
-	{ .compatible = "qcom,sdm845-llcc", },
-	{ }
-};
-
-static struct platform_driver sdm845_qcom_llcc_driver = {
-	.driver = {
-		.name = "sdm845-llcc",
-		.of_match_table = sdm845_qcom_llcc_of_match,
-	},
-	.probe = sdm845_qcom_llcc_probe,
-	.remove = sdm845_qcom_llcc_remove,
-};
-module_platform_driver(sdm845_qcom_llcc_driver);
-
-MODULE_DESCRIPTION("QCOM sdm845 LLCC driver");
-MODULE_LICENSE("GPL v2");
+118 -14
drivers/soc/qcom/llcc-slice.c -> drivers/soc/qcom/llcc-qcom.c
···
 // SPDX-License-Identifier: GPL-2.0
 /*
- * Copyright (c) 2017-2018, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2017-2019, The Linux Foundation. All rights reserved.
  *
  */
···
 #include <linux/kernel.h>
 #include <linux/module.h>
 #include <linux/mutex.h>
+#include <linux/of.h>
 #include <linux/of_device.h>
 #include <linux/regmap.h>
 #include <linux/sizes.h>
···
 
 #define BANK_OFFSET_STRIDE	0x80000
 
-static struct llcc_drv_data *drv_data = (void *) -EPROBE_DEFER;
-
-static const struct regmap_config llcc_regmap_config = {
-	.reg_bits = 32,
-	.reg_stride = 4,
-	.val_bits = 32,
-	.fast_io = true,
+/**
+ * llcc_slice_config - Data associated with the llcc slice
+ * @usecase_id: Unique id for the client's use case
+ * @slice_id: llcc slice id for each client
+ * @max_cap: The maximum capacity of the cache slice provided in KB
+ * @priority: Priority of the client used to select victim line for replacement
+ * @fixed_size: Boolean indicating if the slice has a fixed capacity
+ * @bonus_ways: Bonus ways are additional ways to be used for any slice,
+ *		if client ends up using more than reserved cache ways. Bonus
+ *		ways are allocated only if they are not reserved for some
+ *		other client.
+ * @res_ways: Reserved ways for the cache slice, the reserved ways cannot
+ *		be used by any other client than the one it's assigned to.
+ * @cache_mode: Each slice operates as a cache, this controls the mode of the
+ *		slice: normal or TCM(Tightly Coupled Memory)
+ * @probe_target_ways: Determines what ways to probe for access hit. When
+ *		configured to 1 only bonus and reserved ways are probed.
+ *		When configured to 0 all ways in llcc are probed.
+ * @dis_cap_alloc: Disable capacity based allocation for a client
+ * @retain_on_pc: If this bit is set and client has maintained active vote
+ *		then the ways assigned to this client are not flushed on power
+ *		collapse.
+ * @activate_on_init: Activate the slice immediately after it is programmed
+ */
+struct llcc_slice_config {
+	u32 usecase_id;
+	u32 slice_id;
+	u32 max_cap;
+	u32 priority;
+	bool fixed_size;
+	u32 bonus_ways;
+	u32 res_ways;
+	u32 cache_mode;
+	u32 probe_target_ways;
+	bool dis_cap_alloc;
+	bool retain_on_pc;
+	bool activate_on_init;
 };
+
+struct qcom_llcc_config {
+	const struct llcc_slice_config *sct_data;
+	int size;
+};
+
+static const struct llcc_slice_config sc7180_data[] =  {
+	{ LLCC_CPUSS,    1,  256, 1, 0, 0xf, 0x0, 0, 0, 0, 1, 1 },
+	{ LLCC_MDM,      8,  128, 1, 0, 0xf, 0x0, 0, 0, 0, 1, 0 },
+	{ LLCC_GPUHTW,  11,  128, 1, 0, 0xf, 0x0, 0, 0, 0, 1, 0 },
+	{ LLCC_GPU,     12,  128, 1, 0, 0xf, 0x0, 0, 0, 0, 1, 0 },
+};
+
+static const struct llcc_slice_config sdm845_data[] =  {
+	{ LLCC_CPUSS,    1, 2816, 1, 0, 0xffc, 0x2,   0, 0, 1, 1, 1 },
+	{ LLCC_VIDSC0,   2,  512, 2, 1, 0x0,   0x0f0, 0, 0, 1, 1, 0 },
+	{ LLCC_VIDSC1,   3,  512, 2, 1, 0x0,   0x0f0, 0, 0, 1, 1, 0 },
+	{ LLCC_ROTATOR,  4,  563, 2, 1, 0x0,   0x00e, 2, 0, 1, 1, 0 },
+	{ LLCC_VOICE,    5, 2816, 1, 0, 0xffc, 0x2,   0, 0, 1, 1, 0 },
+	{ LLCC_AUDIO,    6, 2816, 1, 0, 0xffc, 0x2,   0, 0, 1, 1, 0 },
+	{ LLCC_MDMHPGRW, 7, 1024, 2, 0, 0xfc,  0xf00, 0, 0, 1, 1, 0 },
+	{ LLCC_MDM,      8, 2816, 1, 0, 0xffc, 0x2,   0, 0, 1, 1, 0 },
+	{ LLCC_CMPT,    10, 2816, 1, 0, 0xffc, 0x2,   0, 0, 1, 1, 0 },
+	{ LLCC_GPUHTW,  11,  512, 1, 1, 0xc,   0x0,   0, 0, 1, 1, 0 },
+	{ LLCC_GPU,     12, 2304, 1, 0, 0xff0, 0x2,   0, 0, 1, 1, 0 },
+	{ LLCC_MMUHWT,  13,  256, 2, 0, 0x0,   0x1,   0, 0, 1, 0, 1 },
+	{ LLCC_CMPTDMA, 15, 2816, 1, 0, 0xffc, 0x2,   0, 0, 1, 1, 0 },
+	{ LLCC_DISP,    16, 2816, 1, 0, 0xffc, 0x2,   0, 0, 1, 1, 0 },
+	{ LLCC_VIDFW,   17, 2816, 1, 0, 0xffc, 0x2,   0, 0, 1, 1, 0 },
+	{ LLCC_MDMHPFX, 20, 1024, 2, 1, 0x0,   0xf00, 0, 0, 1, 1, 0 },
+	{ LLCC_MDMPNG,  21, 1024, 0, 1, 0x1e,  0x0,   0, 0, 1, 1, 0 },
+	{ LLCC_AUDHW,   22, 1024, 1, 1, 0xffc, 0x2,   0, 0, 1, 1, 0 },
+};
+
+static const struct qcom_llcc_config sc7180_cfg = {
+	.sct_data	= sc7180_data,
+	.size		= ARRAY_SIZE(sc7180_data),
+};
+
+static const struct qcom_llcc_config sdm845_cfg = {
+	.sct_data	= sdm845_data,
+	.size		= ARRAY_SIZE(sdm845_data),
+};
+
+static struct llcc_drv_data *drv_data = (void *) -EPROBE_DEFER;
 
 /**
  * llcc_slice_getd - get llcc slice descriptor
···
 	return ret;
 }
 
-int qcom_llcc_remove(struct platform_device *pdev)
+static int qcom_llcc_remove(struct platform_device *pdev)
 {
 	/* Set the global pointer to a error code to avoid referencing it */
 	drv_data = ERR_PTR(-ENODEV);
 	return 0;
 }
-EXPORT_SYMBOL_GPL(qcom_llcc_remove);
 
 static struct regmap *qcom_llcc_init_mmio(struct platform_device *pdev,
 		const char *name)
 {
 	struct resource *res;
 	void __iomem *base;
+	struct regmap_config llcc_regmap_config = {
+		.reg_bits = 32,
+		.reg_stride = 4,
+		.val_bits = 32,
+		.fast_io = true,
+	};
 
 	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, name);
 	if (!res)
···
 	if (IS_ERR(base))
 		return ERR_CAST(base);
 
+	llcc_regmap_config.name = name;
 	return devm_regmap_init_mmio(&pdev->dev, base, &llcc_regmap_config);
 }
 
-int qcom_llcc_probe(struct platform_device *pdev,
-		const struct llcc_slice_config *llcc_cfg, u32 sz)
+static int qcom_llcc_probe(struct platform_device *pdev)
 {
 	u32 num_banks;
 	struct device *dev = &pdev->dev;
 	int ret, i;
 	struct platform_device *llcc_edac;
+	const struct qcom_llcc_config *cfg;
+	const struct llcc_slice_config *llcc_cfg;
+	u32 sz;
 
 	drv_data = devm_kzalloc(dev, sizeof(*drv_data), GFP_KERNEL);
 	if (!drv_data) {
···
 	num_banks &= LLCC_LB_CNT_MASK;
 	num_banks >>= LLCC_LB_CNT_SHIFT;
 	drv_data->num_banks = num_banks;
+
+	cfg = of_device_get_match_data(&pdev->dev);
+	llcc_cfg = cfg->sct_data;
+	sz = cfg->size;
 
 	for (i = 0; i < sz; i++)
 		if (llcc_cfg[i].slice_id > drv_data->max_slices)
···
 	drv_data = ERR_PTR(-ENODEV);
 	return ret;
 }
-EXPORT_SYMBOL_GPL(qcom_llcc_probe);
-MODULE_LICENSE("GPL v2");
+
+static const struct of_device_id qcom_llcc_of_match[] = {
+	{ .compatible = "qcom,sc7180-llcc", .data = &sc7180_cfg },
+	{ .compatible = "qcom,sdm845-llcc", .data = &sdm845_cfg },
+	{ }
+};
+
+static struct platform_driver qcom_llcc_driver = {
+	.driver = {
+		.name = "qcom-llcc",
+		.of_match_table = qcom_llcc_of_match,
+	},
+	.probe = qcom_llcc_probe,
+	.remove = qcom_llcc_remove,
+};
+module_platform_driver(qcom_llcc_driver);
+
 MODULE_DESCRIPTION("Qualcomm Last Level Cache Controller");
+MODULE_LICENSE("GPL v2");
+23
drivers/soc/qcom/rpmpd.c
···
 
 static DEFINE_MUTEX(rpmpd_lock);
 
+/* msm8976 RPM Power Domains */
+DEFINE_RPMPD_PAIR(msm8976, vddcx, vddcx_ao, SMPA, LEVEL, 2);
+DEFINE_RPMPD_PAIR(msm8976, vddmx, vddmx_ao, SMPA, LEVEL, 6);
+
+DEFINE_RPMPD_VFL(msm8976, vddcx_vfl, RWSC, 2);
+DEFINE_RPMPD_VFL(msm8976, vddmx_vfl, RWSM, 6);
+
+static struct rpmpd *msm8976_rpmpds[] = {
+	[MSM8976_VDDCX] =	&msm8976_vddcx,
+	[MSM8976_VDDCX_AO] =	&msm8976_vddcx_ao,
+	[MSM8976_VDDCX_VFL] =	&msm8976_vddcx_vfl,
+	[MSM8976_VDDMX] =	&msm8976_vddmx,
+	[MSM8976_VDDMX_AO] =	&msm8976_vddmx_ao,
+	[MSM8976_VDDMX_VFL] =	&msm8976_vddmx_vfl,
+};
+
+static const struct rpmpd_desc msm8976_desc = {
+	.rpmpds = msm8976_rpmpds,
+	.num_pds = ARRAY_SIZE(msm8976_rpmpds),
+	.max_state = RPM_SMD_LEVEL_TURBO_HIGH,
+};
+
 /* msm8996 RPM Power domains */
 DEFINE_RPMPD_PAIR(msm8996, vddcx, vddcx_ao, SMPA, CORNER, 1);
 DEFINE_RPMPD_PAIR(msm8996, vddmx, vddmx_ao, SMPA, CORNER, 2);
···
 };
 
 static const struct of_device_id rpmpd_match_table[] = {
+	{ .compatible = "qcom,msm8976-rpmpd", .data = &msm8976_desc },
 	{ .compatible = "qcom,msm8996-rpmpd", .data = &msm8996_desc },
 	{ .compatible = "qcom,msm8998-rpmpd", .data = &msm8998_desc },
 	{ .compatible = "qcom,qcs404-rpmpd", .data = &qcs404_desc },
+17 -1
drivers/soc/qcom/smd-rpm.c
···
 /**
  * struct qcom_smd_rpm - state of the rpm device driver
  * @rpm_channel: reference to the smd channel
+ * @icc: interconnect proxy device
  * @ack: completion for acks
  * @lock: mutual exclusion around the send/complete pair
  * @ack_status: result of the rpm request
  */
 struct qcom_smd_rpm {
 	struct rpmsg_endpoint *rpm_channel;
+	struct platform_device *icc;
 	struct device *dev;
 
 	struct completion ack;
···
 static int qcom_smd_rpm_probe(struct rpmsg_device *rpdev)
 {
 	struct qcom_smd_rpm *rpm;
+	int ret;
 
 	rpm = devm_kzalloc(&rpdev->dev, sizeof(*rpm), GFP_KERNEL);
 	if (!rpm)
···
 	rpm->rpm_channel = rpdev->ept;
 	dev_set_drvdata(&rpdev->dev, rpm);
 
-	return of_platform_populate(rpdev->dev.of_node, NULL, NULL, &rpdev->dev);
+	rpm->icc = platform_device_register_data(&rpdev->dev, "icc_smd_rpm", -1,
+						 NULL, 0);
+	if (IS_ERR(rpm->icc))
+		return PTR_ERR(rpm->icc);
+
+	ret = of_platform_populate(rpdev->dev.of_node, NULL, NULL, &rpdev->dev);
+	if (ret)
+		platform_device_unregister(rpm->icc);
+
+	return ret;
 }
 
 static void qcom_smd_rpm_remove(struct rpmsg_device *rpdev)
 {
+	struct qcom_smd_rpm *rpm = dev_get_drvdata(&rpdev->dev);
+
+	platform_device_unregister(rpm->icc);
 	of_platform_depopulate(&rpdev->dev);
 }
···
 	{ .compatible = "qcom,rpm-apq8084" },
 	{ .compatible = "qcom,rpm-msm8916" },
 	{ .compatible = "qcom,rpm-msm8974" },
+	{ .compatible = "qcom,rpm-msm8976" },
 	{ .compatible = "qcom,rpm-msm8996" },
 	{ .compatible = "qcom,rpm-msm8998" },
 	{ .compatible = "qcom,rpm-sdm660" },
+2
drivers/soc/qcom/socinfo.c
···
 	{ 310, "MSM8996AU" },
 	{ 311, "APQ8096AU" },
 	{ 312, "APQ8096SG" },
+	{ 321, "SDM845" },
+	{ 341, "SDA845" },
 };
 
 static const char *socinfo_machine(struct device *dev, unsigned int id)
+29 -3
drivers/soc/renesas/Kconfig
···
 	help
 	  This enables support for the Renesas RZ/G2M SoC.
 
+config ARCH_R8A774B1
+	bool "Renesas RZ/G2N SoC Platform"
+	select ARCH_RCAR_GEN3
+	select SYSC_R8A774B1
+	help
+	  This enables support for the Renesas RZ/G2N SoC.
+
 config ARCH_R8A774C0
 	bool "Renesas RZ/G2E SoC Platform"
 	select ARCH_RCAR_GEN3
···
 	help
 	  This enables support for the Renesas R-Car H3 SoC.
 
+config ARCH_R8A77960
+	bool
+	select ARCH_RCAR_GEN3
+	select SYSC_R8A77960
+
 config ARCH_R8A7796
 	bool "Renesas R-Car M3-W SoC Platform"
-	select ARCH_RCAR_GEN3
-	select SYSC_R8A7796
+	select ARCH_R8A77960
 	help
 	  This enables support for the Renesas R-Car M3-W SoC.
+
+config ARCH_R8A77961
+	bool "Renesas R-Car M3-W+ SoC Platform"
+	select ARCH_RCAR_GEN3
+	select SYSC_R8A77961
+	help
+	  This enables support for the Renesas R-Car M3-W+ SoC.
 
 config ARCH_R8A77965
 	bool "Renesas R-Car M3-N SoC Platform"
···
 	bool "RZ/G2M System Controller support" if COMPILE_TEST
 	select SYSC_RCAR
 
+config SYSC_R8A774B1
+	bool "RZ/G2N System Controller support" if COMPILE_TEST
+	select SYSC_RCAR
+
 config SYSC_R8A774C0
 	bool "RZ/G2E System Controller support" if COMPILE_TEST
 	select SYSC_RCAR
···
 	bool "R-Car H3 System Controller support" if COMPILE_TEST
 	select SYSC_RCAR
 
-config SYSC_R8A7796
+config SYSC_R8A77960
 	bool "R-Car M3-W System Controller support" if COMPILE_TEST
+	select SYSC_RCAR
+
+config SYSC_R8A77961
+	bool "R-Car M3-W+ System Controller support" if COMPILE_TEST
 	select SYSC_RCAR
 
 config SYSC_R8A77965
+3 -1
drivers/soc/renesas/Makefile
···
 obj-$(CONFIG_SYSC_R8A7745)	+= r8a7745-sysc.o
 obj-$(CONFIG_SYSC_R8A77470)	+= r8a77470-sysc.o
 obj-$(CONFIG_SYSC_R8A774A1)	+= r8a774a1-sysc.o
+obj-$(CONFIG_SYSC_R8A774B1)	+= r8a774b1-sysc.o
 obj-$(CONFIG_SYSC_R8A774C0)	+= r8a774c0-sysc.o
 obj-$(CONFIG_SYSC_R8A7779)	+= r8a7779-sysc.o
 obj-$(CONFIG_SYSC_R8A7790)	+= r8a7790-sysc.o
···
 obj-$(CONFIG_SYSC_R8A7792)	+= r8a7792-sysc.o
 obj-$(CONFIG_SYSC_R8A7794)	+= r8a7794-sysc.o
 obj-$(CONFIG_SYSC_R8A7795)	+= r8a7795-sysc.o
-obj-$(CONFIG_SYSC_R8A7796)	+= r8a7796-sysc.o
+obj-$(CONFIG_SYSC_R8A77960)	+= r8a7796-sysc.o
+obj-$(CONFIG_SYSC_R8A77961)	+= r8a7796-sysc.o
 obj-$(CONFIG_SYSC_R8A77965)	+= r8a77965-sysc.o
 obj-$(CONFIG_SYSC_R8A77970)	+= r8a77970-sysc.o
 obj-$(CONFIG_SYSC_R8A77980)	+= r8a77980-sysc.o
-1
drivers/soc/renesas/r8a7743-sysc.c
···
  * Copyright (C) 2016 Cogent Embedded Inc.
  */
 
-#include <linux/bug.h>
 #include <linux/kernel.h>
 
 #include <dt-bindings/power/r8a7743-sysc.h>
-1
drivers/soc/renesas/r8a7745-sysc.c
···
  * Copyright (C) 2016 Cogent Embedded Inc.
  */
 
-#include <linux/bug.h>
 #include <linux/kernel.h>
 
 #include <dt-bindings/power/r8a7745-sysc.h>
-1
drivers/soc/renesas/r8a77470-sysc.c
···
  * Copyright (C) 2018 Renesas Electronics Corp.
  */
 
-#include <linux/bug.h>
 #include <linux/kernel.h>
 
 #include <dt-bindings/power/r8a77470-sysc.h>
-1
drivers/soc/renesas/r8a774a1-sysc.c
···
  * Copyright (C) 2016 Glider bvba
  */
 
-#include <linux/bug.h>
 #include <linux/kernel.h>
 
 #include <dt-bindings/power/r8a774a1-sysc.h>
+37
drivers/soc/renesas/r8a774b1-sysc.c
···
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Renesas RZ/G2N System Controller
+ * Copyright (C) 2019 Renesas Electronics Corp.
+ *
+ * Based on Renesas R-Car M3-W System Controller
+ * Copyright (C) 2016 Glider bvba
+ */
+
+#include <linux/bits.h>
+#include <linux/kernel.h>
+
+#include <dt-bindings/power/r8a774b1-sysc.h>
+
+#include "rcar-sysc.h"
+
+static const struct rcar_sysc_area r8a774b1_areas[] __initconst = {
+	{ "always-on", 0, 0, R8A774B1_PD_ALWAYS_ON, -1, PD_ALWAYS_ON },
+	{ "ca57-scu", 0x1c0, 0, R8A774B1_PD_CA57_SCU, R8A774B1_PD_ALWAYS_ON,
+	  PD_SCU },
+	{ "ca57-cpu0", 0x80, 0, R8A774B1_PD_CA57_CPU0, R8A774B1_PD_CA57_SCU,
+	  PD_CPU_NOCR },
+	{ "ca57-cpu1", 0x80, 1, R8A774B1_PD_CA57_CPU1, R8A774B1_PD_CA57_SCU,
+	  PD_CPU_NOCR },
+	{ "a3vc", 0x380, 0, R8A774B1_PD_A3VC, R8A774B1_PD_ALWAYS_ON },
+	{ "a3vp", 0x340, 0, R8A774B1_PD_A3VP, R8A774B1_PD_ALWAYS_ON },
+	{ "a2vc1", 0x3c0, 1, R8A774B1_PD_A2VC1, R8A774B1_PD_A3VC },
+	{ "3dg-a", 0x100, 0, R8A774B1_PD_3DG_A, R8A774B1_PD_ALWAYS_ON },
+	{ "3dg-b", 0x100, 1, R8A774B1_PD_3DG_B, R8A774B1_PD_3DG_A },
+};
+
+const struct rcar_sysc_info r8a774b1_sysc_info __initconst = {
+	.areas = r8a774b1_areas,
+	.num_areas = ARRAY_SIZE(r8a774b1_areas),
+	.extmask_offs = 0x2f8,
+	.extmask_val = BIT(0),
+};
+3 -1
drivers/soc/renesas/r8a774c0-sysc.c
···
  * Based on Renesas R-Car E3 System Controller
  */
 
-#include <linux/bug.h>
+#include <linux/bits.h>
 #include <linux/kernel.h>
 #include <linux/sys_soc.h>
···
 	.init = r8a774c0_sysc_init,
 	.areas = r8a774c0_areas,
 	.num_areas = ARRAY_SIZE(r8a774c0_areas),
+	.extmask_offs = 0x2f8,
+	.extmask_val = BIT(0),
 };
-1
drivers/soc/renesas/r8a7779-sysc.c
···
  * Copyright (C) 2016 Glider bvba
  */
 
-#include <linux/bug.h>
 #include <linux/kernel.h>
 
 #include <dt-bindings/power/r8a7779-sysc.h>
-1
drivers/soc/renesas/r8a7790-sysc.c
···
  * Copyright (C) 2016 Glider bvba
  */
 
-#include <linux/bug.h>
 #include <linux/kernel.h>
 
 #include <dt-bindings/power/r8a7790-sysc.h>
-1
drivers/soc/renesas/r8a7791-sysc.c
···
  * Copyright (C) 2016 Glider bvba
  */
 
-#include <linux/bug.h>
 #include <linux/kernel.h>
 
 #include <dt-bindings/power/r8a7791-sysc.h>
-1
drivers/soc/renesas/r8a7792-sysc.c
···
  * Copyright (C) 2016 Cogent Embedded Inc.
  */
 
-#include <linux/bug.h>
 #include <linux/init.h>
 #include <linux/kernel.h>
-1
drivers/soc/renesas/r8a7794-sysc.c
···
  * Copyright (C) 2016 Glider bvba
  */
 
-#include <linux/bug.h>
 #include <linux/kernel.h>
 
 #include <dt-bindings/power/r8a7794-sysc.h>
+27 -6
drivers/soc/renesas/r8a7795-sysc.c
···
  * Copyright (C) 2016-2017 Glider bvba
  */
 
-#include <linux/bug.h>
+#include <linux/bits.h>
 #include <linux/kernel.h>
 #include <linux/sys_soc.h>
···
 
 /*
- * Fixups for R-Car H3 revisions after ES1.x
+ * Fixups for R-Car H3 revisions
  */
 
-static const struct soc_device_attribute r8a7795es1[] __initconst = {
-	{ .soc_id = "r8a7795", .revision = "ES1.*" },
+#define HAS_A2VC0	BIT(0)	/* Power domain A2VC0 is present */
+#define NO_EXTMASK	BIT(1)	/* Missing SYSCEXTMASK register */
+
+static const struct soc_device_attribute r8a7795_quirks_match[] __initconst = {
+	{
+		.soc_id = "r8a7795", .revision = "ES1.*",
+		.data = (void *)(HAS_A2VC0 | NO_EXTMASK),
+	}, {
+		.soc_id = "r8a7795", .revision = "ES2.*",
+		.data = (void *)(NO_EXTMASK),
+	},
 	{ /* sentinel */ }
 };
 
 static int __init r8a7795_sysc_init(void)
 {
-	if (!soc_device_match(r8a7795es1))
+	const struct soc_device_attribute *attr;
+	u32 quirks = 0;
+
+	attr = soc_device_match(r8a7795_quirks_match);
+	if (attr)
+		quirks = (uintptr_t)attr->data;
+
+	if (!(quirks & HAS_A2VC0))
 		rcar_sysc_nullify(r8a7795_areas, ARRAY_SIZE(r8a7795_areas),
 				  R8A7795_PD_A2VC0);
+
+	if (quirks & NO_EXTMASK)
+		r8a7795_sysc_info.extmask_val = 0;
 
 	return 0;
 }
 
-const struct rcar_sysc_info r8a7795_sysc_info __initconst = {
+struct rcar_sysc_info r8a7795_sysc_info __initdata = {
 	.init = r8a7795_sysc_init,
 	.areas = r8a7795_areas,
 	.num_areas = ARRAY_SIZE(r8a7795_areas),
+	.extmask_offs = 0x2f8,
+	.extmask_val = BIT(0),
 };
+26 -4
drivers/soc/renesas/r8a7796-sysc.c
···
 // SPDX-License-Identifier: GPL-2.0
 /*
- * Renesas R-Car M3-W System Controller
+ * Renesas R-Car M3-W/W+ System Controller
  *
  * Copyright (C) 2016 Glider bvba
+ * Copyright (C) 2018-2019 Renesas Electronics Corporation
  */
 
-#include <linux/bug.h>
+#include <linux/bits.h>
 #include <linux/kernel.h>
 
 #include <dt-bindings/power/r8a7796-sysc.h>
 
 #include "rcar-sysc.h"
 
-static const struct rcar_sysc_area r8a7796_areas[] __initconst = {
+static struct rcar_sysc_area r8a7796_areas[] __initdata = {
 	{ "always-on", 0, 0, R8A7796_PD_ALWAYS_ON, -1, PD_ALWAYS_ON },
 	{ "ca57-scu", 0x1c0, 0, R8A7796_PD_CA57_SCU, R8A7796_PD_ALWAYS_ON,
 	  PD_SCU },
···
 	{ "a3ir", 0x180, 0, R8A7796_PD_A3IR, R8A7796_PD_ALWAYS_ON },
 };
 
-const struct rcar_sysc_info r8a7796_sysc_info __initconst = {
+
+#ifdef CONFIG_SYSC_R8A77960
+const struct rcar_sysc_info r8a77960_sysc_info __initconst = {
 	.areas = r8a7796_areas,
 	.num_areas = ARRAY_SIZE(r8a7796_areas),
 };
+#endif /* CONFIG_SYSC_R8A77960 */
+
+#ifdef CONFIG_SYSC_R8A77961
+static int __init r8a77961_sysc_init(void)
+{
+	rcar_sysc_nullify(r8a7796_areas, ARRAY_SIZE(r8a7796_areas),
+			  R8A7796_PD_A2VC0);
+
+	return 0;
+}
+
+const struct rcar_sysc_info r8a77961_sysc_info __initconst = {
+	.init = r8a77961_sysc_init,
+	.areas = r8a7796_areas,
+	.num_areas = ARRAY_SIZE(r8a7796_areas),
+	.extmask_offs = 0x2f8,
+	.extmask_val = BIT(0),
+};
+#endif /* CONFIG_SYSC_R8A77961 */
+3 -1
drivers/soc/renesas/r8a77965-sysc.c
···
  * Copyright (C) 2016 Glider bvba
  */
 
-#include <linux/bug.h>
+#include <linux/bits.h>
 #include <linux/kernel.h>
 
 #include <dt-bindings/power/r8a77965-sysc.h>
···
 const struct rcar_sysc_info r8a77965_sysc_info __initconst = {
 	.areas = r8a77965_areas,
 	.num_areas = ARRAY_SIZE(r8a77965_areas),
+	.extmask_offs = 0x2f8,
+	.extmask_val = BIT(0),
 };
+3 -1
drivers/soc/renesas/r8a77970-sysc.c
···
  * Copyright (C) 2017 Cogent Embedded Inc.
  */
 
-#include <linux/bug.h>
+#include <linux/bits.h>
 #include <linux/kernel.h>
 
 #include <dt-bindings/power/r8a77970-sysc.h>
···
 const struct rcar_sysc_info r8a77970_sysc_info __initconst = {
 	.areas = r8a77970_areas,
 	.num_areas = ARRAY_SIZE(r8a77970_areas),
+	.extmask_offs = 0x1b0,
+	.extmask_val = BIT(0),
 };
+3 -1
drivers/soc/renesas/r8a77980-sysc.c
···
  * Copyright (C) 2018 Cogent Embedded, Inc.
  */
 
-#include <linux/bug.h>
+#include <linux/bits.h>
 #include <linux/kernel.h>
 
 #include <dt-bindings/power/r8a77980-sysc.h>
···
 const struct rcar_sysc_info r8a77980_sysc_info __initconst = {
 	.areas = r8a77980_areas,
 	.num_areas = ARRAY_SIZE(r8a77980_areas),
+	.extmask_offs = 0x138,
+	.extmask_val = BIT(0),
 };
+3 -1
drivers/soc/renesas/r8a77990-sysc.c
···
  * Copyright (C) 2018 Renesas Electronics Corp.
  */
 
-#include <linux/bug.h>
+#include <linux/bits.h>
 #include <linux/kernel.h>
 #include <linux/sys_soc.h>
···
 	.init = r8a77990_sysc_init,
 	.areas = r8a77990_areas,
 	.num_areas = ARRAY_SIZE(r8a77990_areas),
+	.extmask_offs = 0x2f8,
+	.extmask_val = BIT(0),
 };
-1
drivers/soc/renesas/r8a77995-sysc.c
···
  * Copyright (C) 2017 Glider bvba
  */
 
-#include <linux/bug.h>
 #include <linux/kernel.h>
 
 #include <dt-bindings/power/r8a77995-sysc.h>
+2
drivers/soc/renesas/rcar-rst.c
···
 	{ .compatible = "renesas,r8a77470-rst", .data = &rcar_rst_gen2 },
 	/* RZ/G2 is handled like R-Car Gen3 */
 	{ .compatible = "renesas,r8a774a1-rst", .data = &rcar_rst_gen3 },
+	{ .compatible = "renesas,r8a774b1-rst", .data = &rcar_rst_gen3 },
 	{ .compatible = "renesas,r8a774c0-rst", .data = &rcar_rst_gen3 },
 	/* R-Car Gen1 */
 	{ .compatible = "renesas,r8a7778-reset-wdt", .data = &rcar_rst_gen1 },
···
 	/* R-Car Gen3 */
 	{ .compatible = "renesas,r8a7795-rst", .data = &rcar_rst_gen3 },
 	{ .compatible = "renesas,r8a7796-rst", .data = &rcar_rst_gen3 },
+	{ .compatible = "renesas,r8a77961-rst", .data = &rcar_rst_gen3 },
 	{ .compatible = "renesas,r8a77965-rst", .data = &rcar_rst_gen3 },
 	{ .compatible = "renesas,r8a77970-rst", .data = &rcar_rst_gen3 },
 	{ .compatible = "renesas,r8a77980-rst", .data = &rcar_rst_gen3 },
+24 -2
drivers/soc/renesas/rcar-sysc.c
···
 
 static void __iomem *rcar_sysc_base;
 static DEFINE_SPINLOCK(rcar_sysc_lock); /* SMP CPUs + I/O devices */
+static u32 rcar_sysc_extmask_offs, rcar_sysc_extmask_val;
 
 static int rcar_sysc_pwr_on_off(const struct rcar_sysc_ch *sysc_ch, bool on)
 {
···
 	int k;
 
 	spin_lock_irqsave(&rcar_sysc_lock, flags);
+
+	/*
+	 * Mask external power requests for CPU or 3DG domains
+	 */
+	if (rcar_sysc_extmask_val) {
+		iowrite32(rcar_sysc_extmask_val,
+			  rcar_sysc_base + rcar_sysc_extmask_offs);
+	}
 
 	/*
 	 * The interrupt source needs to be enabled, but masked, to prevent the
···
 	iowrite32(isr_mask, rcar_sysc_base + SYSCISCR);
 
 out:
+	if (rcar_sysc_extmask_val)
+		iowrite32(0, rcar_sysc_base + rcar_sysc_extmask_offs);
+
 	spin_unlock_irqrestore(&rcar_sysc_lock, flags);
 
 	pr_debug("sysc power %s domain %d: %08x -> %d\n", on ? "on" : "off",
···
 #ifdef CONFIG_SYSC_R8A774A1
 	{ .compatible = "renesas,r8a774a1-sysc", .data = &r8a774a1_sysc_info },
 #endif
+#ifdef CONFIG_SYSC_R8A774B1
+	{ .compatible = "renesas,r8a774b1-sysc", .data = &r8a774b1_sysc_info },
+#endif
 #ifdef CONFIG_SYSC_R8A774C0
 	{ .compatible = "renesas,r8a774c0-sysc", .data = &r8a774c0_sysc_info },
 #endif
···
 #ifdef CONFIG_SYSC_R8A7795
 	{ .compatible = "renesas,r8a7795-sysc", .data = &r8a7795_sysc_info },
 #endif
-#ifdef CONFIG_SYSC_R8A7796
-	{ .compatible = "renesas,r8a7796-sysc", .data = &r8a7796_sysc_info },
+#ifdef CONFIG_SYSC_R8A77960
+	{ .compatible = "renesas,r8a7796-sysc", .data = &r8a77960_sysc_info },
+#endif
+#ifdef CONFIG_SYSC_R8A77961
+	{ .compatible = "renesas,r8a77961-sysc", .data = &r8a77961_sysc_info },
 #endif
 #ifdef CONFIG_SYSC_R8A77965
 	{ .compatible = "renesas,r8a77965-sysc", .data = &r8a77965_sysc_info },
···
 	}
 
 	rcar_sysc_base = base;
+
+	/* Optional External Request Mask Register */
+	rcar_sysc_extmask_offs = info->extmask_offs;
+	rcar_sysc_extmask_val = info->extmask_val;
 
 	domains = kzalloc(sizeof(*domains), GFP_KERNEL);
 	if (!domains) {
+7 -2
drivers/soc/renesas/rcar-sysc.h
···
 	int (*init)(void);	/* Optional */
 	const struct rcar_sysc_area *areas;
 	unsigned int num_areas;
+	/* Optional External Request Mask Register */
+	u32 extmask_offs;	/* SYSCEXTMASK register offset */
+	u32 extmask_val;	/* SYSCEXTMASK register mask value */
 };
 
 extern const struct rcar_sysc_info r8a7743_sysc_info;
 extern const struct rcar_sysc_info r8a7745_sysc_info;
 extern const struct rcar_sysc_info r8a77470_sysc_info;
 extern const struct rcar_sysc_info r8a774a1_sysc_info;
+extern const struct rcar_sysc_info r8a774b1_sysc_info;
 extern const struct rcar_sysc_info r8a774c0_sysc_info;
 extern const struct rcar_sysc_info r8a7779_sysc_info;
 extern const struct rcar_sysc_info r8a7790_sysc_info;
 extern const struct rcar_sysc_info r8a7791_sysc_info;
 extern const struct rcar_sysc_info r8a7792_sysc_info;
 extern const struct rcar_sysc_info r8a7794_sysc_info;
-extern const struct rcar_sysc_info r8a7795_sysc_info;
-extern const struct rcar_sysc_info r8a7796_sysc_info;
+extern struct rcar_sysc_info r8a7795_sysc_info;
+extern const struct rcar_sysc_info r8a77960_sysc_info;
+extern const struct rcar_sysc_info r8a77961_sysc_info;
 extern const struct rcar_sysc_info r8a77965_sysc_info;
 extern const struct rcar_sysc_info r8a77970_sysc_info;
 extern const struct rcar_sysc_info r8a77980_sysc_info;
+13 -2
drivers/soc/renesas/renesas-soc.c
···
 	.id	= 0x52,
 };
 
+static const struct renesas_soc soc_rz_g2n __initconst __maybe_unused = {
+	.family	= &fam_rzg2,
+	.id	= 0x55,
+};
+
 static const struct renesas_soc soc_rz_g2e __initconst __maybe_unused = {
 	.family	= &fam_rzg2,
 	.id	= 0x57,
···
 #ifdef CONFIG_ARCH_R8A774A1
 	{ .compatible = "renesas,r8a774a1", .data = &soc_rz_g2m },
 #endif
+#ifdef CONFIG_ARCH_R8A774B1
+	{ .compatible = "renesas,r8a774b1", .data = &soc_rz_g2n },
+#endif
 #ifdef CONFIG_ARCH_R8A774C0
 	{ .compatible = "renesas,r8a774c0", .data = &soc_rz_g2e },
 #endif
···
 #ifdef CONFIG_ARCH_R8A7795
 	{ .compatible = "renesas,r8a7795", .data = &soc_rcar_h3 },
 #endif
-#ifdef CONFIG_ARCH_R8A7796
+#ifdef CONFIG_ARCH_R8A77960
 	{ .compatible = "renesas,r8a7796", .data = &soc_rcar_m3_w },
+#endif
+#ifdef CONFIG_ARCH_R8A77961
+	{ .compatible = "renesas,r8a77961", .data = &soc_rcar_m3_w },
 #endif
 #ifdef CONFIG_ARCH_R8A77965
 	{ .compatible = "renesas,r8a77965", .data = &soc_rcar_m3_n },
···
 	if (np) {
 		chipid = of_iomap(np, 0);
 		of_node_put(np);
-	} else if (soc->id) {
+	} else if (soc->id && family->reg) {
 		chipid = ioremap(family->reg, 4);
 	}
 	if (chipid) {
+10
drivers/soc/samsung/Kconfig
···
 
 if SOC_SAMSUNG
 
+config EXYNOS_ASV
+	bool "Exynos Adaptive Supply Voltage support" if COMPILE_TEST
+	depends on (ARCH_EXYNOS && EXYNOS_CHIPID) || COMPILE_TEST
+	select EXYNOS_ASV_ARM if ARM && ARCH_EXYNOS
+
+# There is no need to enable these drivers for ARMv8
+config EXYNOS_ASV_ARM
+	bool "Exynos ASV ARMv7-specific driver extensions" if COMPILE_TEST
+	depends on EXYNOS_ASV
+
 config EXYNOS_CHIPID
 	bool "Exynos Chipid controller driver" if COMPILE_TEST
 	depends on ARCH_EXYNOS || COMPILE_TEST
+3
drivers/soc/samsung/Makefile
···
 # SPDX-License-Identifier: GPL-2.0
 
+obj-$(CONFIG_EXYNOS_ASV)	+= exynos-asv.o
+obj-$(CONFIG_EXYNOS_ASV_ARM)	+= exynos5422-asv.o
+
 obj-$(CONFIG_EXYNOS_CHIPID)	+= exynos-chipid.o
 obj-$(CONFIG_EXYNOS_PMU)	+= exynos-pmu.o
+177
drivers/soc/samsung/exynos-asv.c
···
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2019 Samsung Electronics Co., Ltd.
+ *	      http://www.samsung.com/
+ * Author: Sylwester Nawrocki <s.nawrocki@samsung.com>
+ *
+ * Samsung Exynos SoC Adaptive Supply Voltage support
+ */
+
+#include <linux/cpu.h>
+#include <linux/device.h>
+#include <linux/errno.h>
+#include <linux/init.h>
+#include <linux/mfd/syscon.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/of_device.h>
+#include <linux/platform_device.h>
+#include <linux/pm_opp.h>
+#include <linux/regmap.h>
+#include <linux/soc/samsung/exynos-chipid.h>
+
+#include "exynos-asv.h"
+#include "exynos5422-asv.h"
+
+#define MHZ 1000000U
+
+static int exynos_asv_update_cpu_opps(struct exynos_asv *asv,
+				      struct device *cpu)
+{
+	struct exynos_asv_subsys *subsys = NULL;
+	struct dev_pm_opp *opp;
+	unsigned int opp_freq;
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(asv->subsys); i++) {
+		if (of_device_is_compatible(cpu->of_node,
+					    asv->subsys[i].cpu_dt_compat)) {
+			subsys = &asv->subsys[i];
+			break;
+		}
+	}
+	if (!subsys)
+		return -EINVAL;
+
+	for (i = 0; i < subsys->table.num_rows; i++) {
+		unsigned int new_volt, volt;
+		int ret;
+
+		opp_freq = exynos_asv_opp_get_frequency(subsys, i);
+
+		opp = dev_pm_opp_find_freq_exact(cpu, opp_freq * MHZ, true);
+		if (IS_ERR(opp)) {
+			dev_info(asv->dev, "cpu%d opp%d, freq: %u missing\n",
+				 cpu->id, i, opp_freq);
+
+			continue;
+		}
+
+		volt = dev_pm_opp_get_voltage(opp);
+		new_volt = asv->opp_get_voltage(subsys, i, volt);
+		dev_pm_opp_put(opp);
+
+		if (new_volt == volt)
+			continue;
+
+		ret = dev_pm_opp_adjust_voltage(cpu, opp_freq * MHZ,
+						new_volt, new_volt, new_volt);
+		if (ret < 0)
+			dev_err(asv->dev,
+				"Failed to adjust OPP %u Hz/%u uV for cpu%d\n",
+				opp_freq, new_volt, cpu->id);
+		else
+			dev_dbg(asv->dev,
+				"Adjusted OPP %u Hz/%u -> %u uV, cpu%d\n",
+				opp_freq, volt, new_volt, cpu->id);
+	}
+
+	return 0;
+}
+
+static int exynos_asv_update_opps(struct exynos_asv *asv)
+{
+	struct opp_table *last_opp_table = NULL;
+	struct device *cpu;
+	int ret, cpuid;
+
+	for_each_possible_cpu(cpuid) {
+		struct opp_table *opp_table;
+
+		cpu = get_cpu_device(cpuid);
+		if (!cpu)
+			continue;
+
+		opp_table = dev_pm_opp_get_opp_table(cpu);
+		if (IS_ERR_OR_NULL(opp_table))
+			continue;
+
+		if (!last_opp_table || opp_table != last_opp_table) {
+			last_opp_table = opp_table;
+
+			ret = exynos_asv_update_cpu_opps(asv, cpu);
+			if (ret < 0)
+				dev_err(asv->dev,
+					"Couldn't update OPPs for cpu%d\n",
+					cpuid);
+		}
+
+		dev_pm_opp_put_opp_table(opp_table);
+	}
+
+	return 0;
+}
+
+static int exynos_asv_probe(struct platform_device *pdev)
+{
+	int (*probe_func)(struct exynos_asv *asv);
+	struct exynos_asv *asv;
+	struct device *cpu_dev;
+	u32 product_id = 0;
+	int ret, i;
+
+	cpu_dev = get_cpu_device(0);
+	ret = dev_pm_opp_get_opp_count(cpu_dev);
+	if (ret < 0)
+		return -EPROBE_DEFER;
+
+	asv = devm_kzalloc(&pdev->dev, sizeof(*asv), GFP_KERNEL);
+	if (!asv)
+		return -ENOMEM;
+
+	asv->chipid_regmap = device_node_to_regmap(pdev->dev.of_node);
+	if (IS_ERR(asv->chipid_regmap)) {
+		dev_err(&pdev->dev, "Could not find syscon regmap\n");
+		return PTR_ERR(asv->chipid_regmap);
+	}
+
+	regmap_read(asv->chipid_regmap, EXYNOS_CHIPID_REG_PRO_ID, &product_id);
+
+	switch (product_id & EXYNOS_MASK) {
+	case 0xE5422000:
+		probe_func = exynos5422_asv_init;
+		break;
+	default:
+		return -ENODEV;
+	}
+
+	ret = of_property_read_u32(pdev->dev.of_node, "samsung,asv-bin",
+				   &asv->of_bin);
+	if (ret < 0)
+		asv->of_bin = -EINVAL;
+
+	asv->dev = &pdev->dev;
+	dev_set_drvdata(&pdev->dev, asv);
+
+	for (i = 0; i < ARRAY_SIZE(asv->subsys); i++)
+		asv->subsys[i].asv = asv;
+
+	ret = probe_func(asv);
+	if (ret < 0)
+		return ret;
+
+	return exynos_asv_update_opps(asv);
+}
+
+static const struct of_device_id exynos_asv_of_device_ids[] = {
+	{ .compatible = "samsung,exynos4210-chipid" },
+	{}
+};
+
+static struct platform_driver exynos_asv_driver = {
+	.driver = {
+		.name = "exynos-asv",
+		.of_match_table = exynos_asv_of_device_ids,
+	},
+	.probe = exynos_asv_probe,
+};
+module_platform_driver(exynos_asv_driver);
+71
drivers/soc/samsung/exynos-asv.h
···
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2019 Samsung Electronics Co., Ltd.
+ *	      http://www.samsung.com/
+ * Author: Sylwester Nawrocki <s.nawrocki@samsung.com>
+ *
+ * Samsung Exynos SoC Adaptive Supply Voltage support
+ */
+#ifndef __LINUX_SOC_EXYNOS_ASV_H
+#define __LINUX_SOC_EXYNOS_ASV_H
+
+struct regmap;
+
+/* HPM, IDS values to select target group */
+struct asv_limit_entry {
+	unsigned int hpm;
+	unsigned int ids;
+};
+
+struct exynos_asv_table {
+	unsigned int num_rows;
+	unsigned int num_cols;
+	u32 *buf;
+};
+
+struct exynos_asv_subsys {
+	struct exynos_asv *asv;
+	const char *cpu_dt_compat;
+	int id;
+	struct exynos_asv_table table;
+
+	unsigned int base_volt;
+	unsigned int offset_volt_h;
+	unsigned int offset_volt_l;
+};
+
+struct exynos_asv {
+	struct device *dev;
+	struct regmap *chipid_regmap;
+	struct exynos_asv_subsys subsys[2];
+
+	int (*opp_get_voltage)(const struct exynos_asv_subsys *subs,
+			       int level, unsigned int voltage);
+	unsigned int group;
+	unsigned int table;
+
+	/* True if SG fields from PKG_ID register should be used */
+	bool use_sg;
+	/* ASV bin read from DT */
+	int of_bin;
+};
+
+static inline u32 __asv_get_table_entry(const struct exynos_asv_table *table,
+					unsigned int row, unsigned int col)
+{
+	return table->buf[row * (table->num_cols) + col];
+}
+
+static inline u32 exynos_asv_opp_get_voltage(const struct exynos_asv_subsys *subsys,
+					     unsigned int level, unsigned int group)
+{
+	return __asv_get_table_entry(&subsys->table, level, group + 1);
+}
+
+static inline u32 exynos_asv_opp_get_frequency(const struct exynos_asv_subsys *subsys,
+					       unsigned int level)
+{
+	return __asv_get_table_entry(&subsys->table, level, 0);
+}
+
+#endif /* __LINUX_SOC_EXYNOS_ASV_H */
+10 -2
drivers/soc/samsung/exynos-chipid.c
···
 	return NULL;
 }
 
-int __init exynos_chipid_early_init(void)
+static int __init exynos_chipid_early_init(void)
 {
 	struct soc_device_attribute *soc_dev_attr;
 	struct soc_device *soc_dev;
 	struct device_node *root;
+	struct device_node *syscon;
 	struct regmap *regmap;
 	u32 product_id;
 	u32 revision;
 	int ret;
 
-	regmap = syscon_regmap_lookup_by_compatible("samsung,exynos4210-chipid");
+	syscon = of_find_compatible_node(NULL, NULL,
+					 "samsung,exynos4210-chipid");
+	if (!syscon)
+		return -ENODEV;
+
+	regmap = device_node_to_regmap(syscon);
+	of_node_put(syscon);
+
 	if (IS_ERR(regmap))
 		return PTR_ERR(regmap);
+505
drivers/soc/samsung/exynos5422-asv.c
···
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2019 Samsung Electronics Co., Ltd.
+ *	      http://www.samsung.com/
+ *
+ * Samsung Exynos 5422 SoC Adaptive Supply Voltage support
+ */
+
+#include <linux/bitrev.h>
+#include <linux/errno.h>
+#include <linux/regmap.h>
+#include <linux/soc/samsung/exynos-chipid.h>
+#include <linux/slab.h>
+
+#include "exynos-asv.h"
+#include "exynos5422-asv.h"
+
+#define ASV_GROUPS_NUM		14
+#define ASV_ARM_DVFS_NUM	20
+#define ASV_ARM_BIN2_DVFS_NUM	17
+#define ASV_KFC_DVFS_NUM	14
+#define ASV_KFC_BIN2_DVFS_NUM	12
+
+/*
+ * This array is a set of 4 ASV data tables, first column of each ASV table
+ * contains frequency value in MHz and subsequent columns contain the CPU
+ * cluster's supply voltage values in uV.
+ * In order to create a set of OPPs for specific SoC revision one of the
+ * voltage columns (1...14) from one of the tables (0...3) is selected during
+ * initialization. There are separate ASV tables for the big (ARM) and little
+ * (KFC) CPU cluster. Only OPPs which are already defined in devicetree
+ * will be updated.
+ */
+
+static const u32 asv_arm_table[][ASV_ARM_DVFS_NUM][ASV_GROUPS_NUM + 1] = {
+{
+	/* ARM 0, 1 */
+	{ 2100, 1362500, 1362500, 1350000, 1337500, 1325000, 1312500, 1300000,
+	  1275000, 1262500, 1250000, 1237500, 1225000, 1212500, 1200000 },
+	{ 2000, 1312500, 1312500, 1300000, 1287500, 1275000, 1262500, 1250000,
+	  1237500, 1225000, 1237500, 1225000, 1212500, 1200000, 1187500 },
+	{ 1900, 1250000, 1237500, 1225000, 1212500, 1200000, 1187500, 1175000,
+	  1162500, 1150000, 1162500, 1150000, 1137500, 1125000, 1112500 },
+	{ 1800, 1200000, 1187500, 1175000, 1162500, 1150000, 1137500, 1125000,
+	  1112500, 1100000, 1112500, 1100000, 1087500, 1075000, 1062500 },
+	{ 1700, 1162500, 1150000, 1137500, 1125000, 1112500, 1100000, 1087500,
+	  1075000, 1062500, 1075000, 1062500, 1050000, 1037500, 1025000 },
+	{ 1600, 1125000, 1112500, 1100000, 1087500, 1075000, 1062500, 1050000,
+	  1037500, 1025000, 1037500, 1025000, 1012500, 1000000, 987500 },
+	{ 1500, 1087500, 1075000, 1062500, 1050000, 1037500, 1025000, 1012500,
+	  1000000, 987500, 1000000, 987500, 975000, 962500, 950000 },
+	{ 1400, 1062500, 1050000, 1037500, 1025000, 1012500, 1000000, 987500,
+	  975000, 962500, 975000, 962500, 950000, 937500, 925000 },
+	{ 1300, 1050000, 1037500, 1025000, 1012500, 1000000, 987500, 975000,
+	  962500, 950000, 962500, 950000, 937500, 925000, 912500 },
+	{ 1200, 1025000, 1012500, 1000000, 987500, 975000, 962500, 950000,
+	  937500, 925000, 937500, 925000, 912500, 900000, 900000 },
+	{ 1100, 1000000, 987500, 975000, 962500, 950000, 937500, 925000,
+	  912500, 900000, 900000, 900000, 900000, 900000, 900000 },
+	{ 1000, 975000, 962500, 950000, 937500, 925000, 912500, 900000,
+	  900000, 900000, 900000, 900000, 900000, 900000, 900000 },
+	{ 900, 950000, 937500, 925000, 912500, 900000, 900000, 900000,
+	  900000, 900000, 900000, 900000, 900000, 900000, 900000 },
+	{ 800, 925000, 912500, 900000, 900000, 900000, 900000, 900000,
+	  900000, 900000, 900000, 900000, 900000, 900000, 900000 },
+	{ 700, 900000, 900000, 900000, 900000, 900000, 900000, 900000,
+	  900000, 900000, 900000, 900000, 900000, 900000, 900000 },
+	{ 600, 900000, 900000, 900000, 900000, 900000, 900000, 900000,
+	  900000, 900000, 900000, 900000, 900000, 900000, 900000 },
+	{ 500, 900000, 900000, 900000, 900000, 900000, 900000, 900000,
+	  900000, 900000, 900000, 900000, 900000, 900000, 900000 },
+	{ 400, 900000, 900000, 900000, 900000, 900000, 900000, 900000,
+	  900000, 900000, 900000, 900000, 900000, 900000, 900000 },
+	{ 300, 900000, 900000, 900000, 900000, 900000, 900000, 900000,
+	  900000, 900000, 900000, 900000, 900000, 900000, 900000 },
+	{ 200, 900000, 900000, 900000, 900000, 900000, 900000, 900000,
+	  900000, 900000, 900000, 900000, 900000, 900000, 900000 },
+}, {
+	/* ARM 2 */
+	{ 2100, 1362500, 1362500, 1350000, 1337500, 1325000, 1312500, 1300000,
+	  1275000, 1262500, 1250000, 1237500, 1225000, 1212500, 1200000 },
+	{ 2000, 1312500, 1312500, 1312500, 1300000, 1275000, 1262500, 1250000,
+	  1237500, 1225000, 1237500, 1225000, 1212500, 1200000, 1187500 },
+	{ 1900, 1262500, 1250000, 1250000, 1237500, 1212500, 1200000, 1187500,
+	  1175000, 1162500, 1175000, 1162500, 1150000, 1137500, 1125000 },
+	{ 1800, 1212500, 1200000, 1187500, 1175000, 1162500, 1150000, 1137500,
+	  1125000, 1112500, 1125000, 1112500, 1100000, 1087500, 1075000 },
+	{ 1700, 1175000, 1162500, 1150000, 1137500, 1125000, 1112500, 1100000,
+	  1087500, 1075000, 1087500, 1075000, 1062500, 1050000, 1037500 },
+	{ 1600, 1137500, 1125000, 1112500, 1100000, 1087500, 1075000, 1062500,
+	  1050000, 1037500, 1050000, 1037500, 1025000, 1012500, 1000000 },
+	{ 1500, 1100000, 1087500, 1075000, 1062500, 1050000, 1037500, 1025000,
+	  1012500, 1000000, 1012500, 1000000, 987500, 975000, 962500 },
+	{ 1400, 1075000, 1062500, 1050000, 1037500, 1025000, 1012500, 1000000,
+	  987500, 975000, 987500, 975000, 962500, 950000, 937500 },
+	{ 1300, 1050000, 1037500, 1025000, 1012500, 1000000, 987500, 975000,
+	  962500, 950000, 962500, 950000, 937500, 925000, 912500 },
+	{ 1200, 1025000, 1012500, 1000000, 987500, 975000, 962500, 950000,
+	  937500, 925000, 937500, 925000, 912500, 900000, 900000 },
+	{ 1100, 1000000, 987500, 975000, 962500, 950000, 937500, 925000,
+	  912500, 900000, 900000, 900000, 900000, 900000, 900000 },
+	{ 1000, 975000, 962500, 950000, 937500, 925000, 912500, 900000,
+	  900000, 900000, 900000, 900000, 900000, 900000, 900000 },
+	{ 900, 950000, 937500, 925000, 912500, 900000, 900000, 900000,
+	  900000, 900000, 900000, 900000, 900000, 900000, 900000 },
+	{ 800, 925000, 912500, 900000, 900000, 900000, 900000, 900000,
+	  900000, 900000, 900000, 900000, 900000, 900000, 900000 },
+	{ 700, 900000, 900000, 900000, 900000, 900000, 900000, 900000,
+	  900000, 900000, 900000, 900000, 900000, 900000, 900000 },
+	{ 600, 900000, 900000, 900000, 900000, 900000, 900000, 900000,
+	  900000, 900000, 900000, 900000, 900000, 900000, 900000 },
+	{ 500, 900000, 900000, 900000, 900000, 900000, 900000, 900000,
+	  900000, 900000, 900000, 900000, 900000, 900000, 900000 },
+	{ 400, 900000, 900000, 900000, 900000, 900000, 900000, 900000,
+	  900000, 900000, 900000, 900000, 900000, 900000, 900000 },
+	{ 300, 900000, 900000, 900000, 900000, 900000, 900000, 900000,
+	  900000, 900000, 900000, 900000, 900000, 900000, 900000 },
+	{ 200, 900000, 900000, 900000, 900000, 900000, 900000, 900000,
+	  900000, 900000, 900000, 900000, 900000, 900000, 900000 },
+}, {
+	/* ARM 3 */
+	{ 2100, 1362500, 1362500, 1350000, 1337500, 1325000, 1312500, 1300000,
+	  1275000, 1262500, 1250000, 1237500, 1225000, 1212500, 1200000 },
+	{ 2000, 1312500, 1312500, 1300000, 1287500, 1275000, 1262500, 1250000,
+	  1237500, 1225000, 1237500, 1225000, 1212500, 1200000, 1187500 },
+	{ 1900, 1262500, 1250000, 1237500, 1225000, 1212500, 1200000, 1187500,
+	  1175000, 1162500, 1175000, 1162500, 1150000, 1137500, 1125000 },
+	{ 1800, 1212500, 1200000, 1187500, 1175000, 1162500, 1150000, 1137500,
+	  1125000, 1112500, 1125000, 1112500, 1100000, 1087500, 1075000 },
+	{ 1700, 1175000, 1162500, 1150000, 1137500, 1125000, 1112500, 1100000,
+	  1087500, 1075000, 1087500, 1075000, 1062500, 1050000, 1037500 },
+	{ 1600, 1137500, 1125000, 1112500, 1100000, 1087500, 1075000, 1062500,
+	  1050000, 1037500, 1050000, 1037500, 1025000, 1012500, 1000000 },
+	{ 1500, 1100000, 1087500, 1075000, 1062500, 1050000, 1037500, 1025000,
+	  1012500, 1000000, 1012500, 1000000, 987500, 975000, 962500 },
+	{ 1400, 1075000, 1062500, 1050000, 1037500, 1025000, 1012500, 1000000,
+	  987500, 975000, 987500, 975000, 962500, 950000, 937500 },
+	{ 1300, 1050000, 1037500, 1025000, 1012500, 1000000, 987500, 975000,
+	  962500, 950000, 962500, 950000, 937500, 925000, 912500 },
+	{ 1200, 1025000, 1012500, 1000000, 987500, 975000, 962500, 950000,
+	  937500, 925000, 937500, 925000, 912500, 900000, 900000 },
+	{ 1100, 1000000, 987500, 975000, 962500, 950000, 937500, 925000,
+	  912500, 900000, 900000, 900000, 900000, 900000, 900000 },
+	{ 1000, 975000, 962500, 950000, 937500, 925000, 912500, 900000,
+	  900000, 900000, 900000, 900000, 900000, 900000, 900000 },
+	{ 900, 950000, 937500, 925000, 912500, 900000, 900000, 900000,
+	  900000, 900000, 900000, 900000, 900000, 900000, 900000 },
+	{ 800, 925000, 912500, 900000, 900000, 900000, 900000, 900000,
+	  900000, 900000, 900000, 900000, 900000, 900000, 900000 },
+	{ 700, 900000, 900000, 900000, 900000, 900000, 900000, 900000,
+	  900000, 900000, 900000, 900000, 900000, 900000, 900000 },
+	{ 600, 900000, 900000, 900000, 900000, 900000, 900000, 900000,
+	  900000, 900000, 900000, 900000, 900000, 900000, 900000 },
+	{ 500, 900000, 900000, 900000, 900000, 900000,
900000, 900000, 155 + 900000, 900000, 900000, 900000, 900000, 900000, 900000 }, 156 + { 400, 900000, 900000, 900000, 900000, 900000, 900000, 900000, 157 + 900000, 900000, 900000, 900000, 900000, 900000, 900000 }, 158 + { 300, 900000, 900000, 900000, 900000, 900000, 900000, 900000, 159 + 900000, 900000, 900000, 900000, 900000, 900000, 900000 }, 160 + { 200, 900000, 900000, 900000, 900000, 900000, 900000, 900000, 161 + 900000, 900000, 900000, 900000, 900000, 900000, 900000 }, 162 + }, { 163 + /* ARM bin 2 */ 164 + { 1800, 1237500, 1225000, 1212500, 1200000, 1187500, 1175000, 1162500, 165 + 1150000, 1137500, 1150000, 1137500, 1125000, 1112500, 1100000 }, 166 + { 1700, 1200000, 1187500, 1175000, 1162500, 1150000, 1137500, 1125000, 167 + 1112500, 1100000, 1112500, 1100000, 1087500, 1075000, 1062500 }, 168 + { 1600, 1162500, 1150000, 1137500, 1125000, 1112500, 1100000, 1087500, 169 + 1075000, 1062500, 1075000, 1062500, 1050000, 1037500, 1025000 }, 170 + { 1500, 1125000, 1112500, 1100000, 1087500, 1075000, 1062500, 1050000, 171 + 1037500, 1025000, 1037500, 1025000, 1012500, 1000000, 987500 }, 172 + { 1400, 1100000, 1087500, 1075000, 1062500, 1050000, 1037500, 1025000, 173 + 1012500, 1000000, 1012500, 1000000, 987500, 975000, 962500 }, 174 + { 1300, 1087500, 1075000, 1062500, 1050000, 1037500, 1025000, 1012500, 175 + 1000000, 987500, 1000000, 987500, 975000, 962500, 950000 }, 176 + { 1200, 1062500, 1050000, 1037500, 1025000, 1012500, 1000000, 987500, 177 + 975000, 962500, 975000, 962500, 950000, 937500, 925000 }, 178 + { 1100, 1037500, 1025000, 1012500, 1000000, 987500, 975000, 962500, 179 + 950000, 937500, 950000, 937500, 925000, 912500, 900000 }, 180 + { 1000, 1012500, 1000000, 987500, 975000, 962500, 950000, 937500, 181 + 925000, 912500, 925000, 912500, 900000, 900000, 900000 }, 182 + { 900, 987500, 975000, 962500, 950000, 937500, 925000, 912500, 183 + 900000, 900000, 900000, 900000, 900000, 900000, 900000 }, 184 + { 800, 962500, 950000, 937500, 925000, 912500, 900000, 
900000, 185 + 900000, 900000, 900000, 900000, 900000, 900000, 900000 }, 186 + { 700, 937500, 925000, 912500, 900000, 900000, 900000, 900000, 187 + 900000, 900000, 900000, 900000, 900000, 900000, 900000 }, 188 + { 600, 900000, 900000, 900000, 900000, 900000, 900000, 900000, 189 + 900000, 900000, 900000, 900000, 900000, 900000, 900000 }, 190 + { 500, 900000, 900000, 900000, 900000, 900000, 900000, 900000, 191 + 900000, 900000, 900000, 900000, 900000, 900000, 900000 }, 192 + { 400, 900000, 900000, 900000, 900000, 900000, 900000, 900000, 193 + 900000, 900000, 900000, 900000, 900000, 900000, 900000 }, 194 + { 300, 900000, 900000, 900000, 900000, 900000, 900000, 900000, 195 + 900000, 900000, 900000, 900000, 900000, 900000, 900000 }, 196 + { 200, 900000, 900000, 900000, 900000, 900000, 900000, 900000, 197 + 900000, 900000, 900000, 900000, 900000, 900000, 900000 }, 198 + } 199 + }; 200 + 201 + static const u32 asv_kfc_table[][ASV_KFC_DVFS_NUM][ASV_GROUPS_NUM + 1] = { 202 + { 203 + /* KFC 0, 1 */ 204 + { 1500000, 1300000, 1300000, 1300000, 1287500, 1287500, 1287500, 1275000, 205 + 1262500, 1250000, 1237500, 1225000, 1212500, 1200000, 1187500 }, 206 + { 1400000, 1275000, 1262500, 1250000, 1237500, 1225000, 1212500, 1200000, 207 + 1187500, 1175000, 1162500, 1150000, 1137500, 1125000, 1112500 }, 208 + { 1300000, 1225000, 1212500, 1200000, 1187500, 1175000, 1162500, 1150000, 209 + 1137500, 1125000, 1112500, 1100000, 1087500, 1075000, 1062500 }, 210 + { 1200000, 1175000, 1162500, 1150000, 1137500, 1125000, 1112500, 1100000, 211 + 1087500, 1075000, 1062500, 1050000, 1037500, 1025000, 1012500 }, 212 + { 1100000, 1137500, 1125000, 1112500, 1100000, 1087500, 1075000, 1062500, 213 + 1050000, 1037500, 1025000, 1012500, 1000000, 987500, 975000 }, 214 + { 1000000, 1100000, 1087500, 1075000, 1062500, 1050000, 1037500, 1025000, 215 + 1012500, 1000000, 987500, 975000, 962500, 950000, 937500 }, 216 + { 900000, 1062500, 1050000, 1037500, 1025000, 1012500, 1000000, 987500, 217 + 975000, 
962500, 950000, 937500, 925000, 912500, 900000 }, 218 + { 800000, 1025000, 1012500, 1000000, 987500, 975000, 962500, 950000, 219 + 937500, 925000, 912500, 900000, 900000, 900000, 900000 }, 220 + { 700000, 987500, 975000, 962500, 950000, 937500, 925000, 912500, 221 + 900000, 900000, 900000, 900000, 900000, 900000, 900000 }, 222 + { 600000, 950000, 937500, 925000, 912500, 900000, 900000, 900000, 223 + 900000, 900000, 900000, 900000, 900000, 900000, 900000 }, 224 + { 500000, 912500, 900000, 900000, 900000, 900000, 900000, 900000, 225 + 900000, 900000, 900000, 900000, 900000, 900000, 900000 }, 226 + { 400000, 900000, 900000, 900000, 900000, 900000, 900000, 900000, 227 + 900000, 900000, 900000, 900000, 900000, 900000, 900000 }, 228 + { 300000, 900000, 900000, 900000, 900000, 900000, 900000, 900000, 229 + 900000, 900000, 900000, 900000, 900000, 900000, 900000 }, 230 + { 200000, 900000, 900000, 900000, 900000, 900000, 900000, 900000, 231 + 900000, 900000, 900000, 900000, 900000, 900000, 900000 }, 232 + }, { 233 + /* KFC 2 */ 234 + { 1500, 1300000, 1300000, 1300000, 1287500, 1287500, 1287500, 1275000, 235 + 1262500, 1250000, 1237500, 1225000, 1212500, 1200000, 1187500 }, 236 + { 1400, 1275000, 1262500, 1250000, 1237500, 1225000, 1212500, 1200000, 237 + 1187500, 1175000, 1162500, 1150000, 1137500, 1125000, 1112500 }, 238 + { 1300, 1225000, 1212500, 1200000, 1187500, 1175000, 1162500, 1150000, 239 + 1137500, 1125000, 1112500, 1100000, 1087500, 1075000, 1062500 }, 240 + { 1200, 1175000, 1162500, 1150000, 1137500, 1125000, 1112500, 1100000, 241 + 1087500, 1075000, 1062500, 1050000, 1037500, 1025000, 1012500 }, 242 + { 1100, 1137500, 1125000, 1112500, 1100000, 1087500, 1075000, 1062500, 243 + 1050000, 1037500, 1025000, 1012500, 1000000, 987500, 975000 }, 244 + { 1000, 1100000, 1087500, 1075000, 1062500, 1050000, 1037500, 1025000, 245 + 1012500, 1000000, 987500, 975000, 962500, 950000, 937500 }, 246 + { 900, 1062500, 1050000, 1037500, 1025000, 1012500, 1000000, 987500, 247 + 
975000, 962500, 950000, 937500, 925000, 912500, 900000 }, 248 + { 800, 1025000, 1012500, 1000000, 987500, 975000, 962500, 950000, 249 + 937500, 925000, 912500, 900000, 900000, 900000, 900000 }, 250 + { 700, 987500, 975000, 962500, 950000, 937500, 925000, 912500, 251 + 900000, 900000, 900000, 900000, 900000, 900000, 900000 }, 252 + { 600, 950000, 937500, 925000, 912500, 900000, 900000, 900000, 253 + 900000, 900000, 900000, 900000, 900000, 900000, 900000 }, 254 + { 500, 912500, 900000, 900000, 900000, 900000, 900000, 900000, 255 + 900000, 900000, 900000, 900000, 900000, 900000, 900000 }, 256 + { 400, 900000, 900000, 900000, 900000, 900000, 900000, 900000, 257 + 900000, 900000, 900000, 900000, 900000, 900000, 900000 }, 258 + { 300, 900000, 900000, 900000, 900000, 900000, 900000, 900000, 259 + 900000, 900000, 900000, 900000, 900000, 900000, 900000 }, 260 + { 200, 900000, 900000, 900000, 900000, 900000, 900000, 900000, 261 + 900000, 900000, 900000, 900000, 900000, 900000, 900000 }, 262 + }, { 263 + /* KFC 3 */ 264 + { 1500, 1300000, 1300000, 1300000, 1287500, 1287500, 1287500, 1275000, 265 + 1262500, 1250000, 1237500, 1225000, 1212500, 1200000, 1187500 }, 266 + { 1400, 1275000, 1262500, 1250000, 1237500, 1225000, 1212500, 1200000, 267 + 1187500, 1175000, 1162500, 1150000, 1137500, 1125000, 1112500 }, 268 + { 1300, 1225000, 1212500, 1200000, 1187500, 1175000, 1162500, 1150000, 269 + 1137500, 1125000, 1112500, 1100000, 1087500, 1075000, 1062500 }, 270 + { 1200, 1175000, 1162500, 1150000, 1137500, 1125000, 1112500, 1100000, 271 + 1087500, 1075000, 1062500, 1050000, 1037500, 1025000, 1012500 }, 272 + { 1100, 1137500, 1125000, 1112500, 1100000, 1087500, 1075000, 1062500, 273 + 1050000, 1037500, 1025000, 1012500, 1000000, 987500, 975000 }, 274 + { 1000, 1100000, 1087500, 1075000, 1062500, 1050000, 1037500, 1025000, 275 + 1012500, 1000000, 987500, 975000, 962500, 950000, 937500 }, 276 + { 900, 1062500, 1050000, 1037500, 1025000, 1012500, 1000000, 987500, 277 + 975000, 962500, 
950000, 937500, 925000, 912500, 900000 }, 278 + { 800, 1025000, 1012500, 1000000, 987500, 975000, 962500, 950000, 279 + 937500, 925000, 912500, 900000, 900000, 900000, 900000 }, 280 + { 700, 987500, 975000, 962500, 950000, 937500, 925000, 912500, 281 + 900000, 900000, 900000, 900000, 900000, 900000, 900000 }, 282 + { 600, 950000, 937500, 925000, 912500, 900000, 900000, 900000, 283 + 900000, 900000, 900000, 900000, 900000, 900000, 900000 }, 284 + { 500, 912500, 900000, 900000, 900000, 900000, 900000, 900000, 285 + 900000, 900000, 900000, 900000, 900000, 900000, 900000 }, 286 + { 400, 900000, 900000, 900000, 900000, 900000, 900000, 900000, 287 + 900000, 900000, 900000, 900000, 900000, 900000, 900000 }, 288 + { 300, 900000, 900000, 900000, 900000, 900000, 900000, 900000, 289 + 900000, 900000, 900000, 900000, 900000, 900000, 900000 }, 290 + { 200, 900000, 900000, 900000, 900000, 900000, 900000, 900000, 291 + 900000, 900000, 900000, 900000, 900000, 900000, 900000 }, 292 + }, { 293 + /* KFC bin 2 */ 294 + { 1300, 1250000, 1237500, 1225000, 1212500, 1200000, 1187500, 1175000, 295 + 1162500, 1150000, 1137500, 1125000, 1112500, 1100000, 1087500 }, 296 + { 1200, 1200000, 1187500, 1175000, 1162500, 1150000, 1137500, 1125000, 297 + 1112500, 1100000, 1087500, 1075000, 1062500, 1050000, 1037500 }, 298 + { 1100, 1162500, 1150000, 1137500, 1125000, 1112500, 1100000, 1087500, 299 + 1075000, 1062500, 1050000, 1037500, 1025000, 1012500, 1000000 }, 300 + { 1000, 1125000, 1112500, 1100000, 1087500, 1075000, 1062500, 1050000, 301 + 1037500, 1025000, 1012500, 1000000, 987500, 975000, 962500 }, 302 + { 900, 1087500, 1075000, 1062500, 1050000, 1037500, 1025000, 1012500, 303 + 1000000, 987500, 975000, 962500, 950000, 937500, 925000 }, 304 + { 800, 1050000, 1037500, 1025000, 1012500, 1000000, 987500, 975000, 305 + 962500, 950000, 937500, 925000, 912500, 900000, 900000 }, 306 + { 700, 1012500, 1000000, 987500, 975000, 962500, 950000, 937500, 307 + 925000, 912500, 900000, 900000, 900000, 
900000, 900000 }, 308 + { 600, 975000, 962500, 950000, 937500, 925000, 912500, 900000, 309 + 900000, 900000, 900000, 900000, 900000, 900000, 900000 }, 310 + { 500, 937500, 925000, 912500, 900000, 900000, 900000, 900000, 311 + 900000, 900000, 900000, 900000, 900000, 900000, 900000 }, 312 + { 400, 925000, 912500, 900000, 900000, 900000, 900000, 900000, 313 + 900000, 900000, 900000, 900000, 900000, 900000, 900000 }, 314 + { 300, 900000, 900000, 900000, 900000, 900000, 900000, 900000, 315 + 900000, 900000, 900000, 900000, 900000, 900000, 900000 }, 316 + { 200, 900000, 900000, 900000, 900000, 900000, 900000, 900000, 317 + 900000, 900000, 900000, 900000, 900000, 900000, 900000 }, 318 + } 319 + }; 320 + 321 + static const struct asv_limit_entry __asv_limits[ASV_GROUPS_NUM] = { 322 + { 13, 55 }, 323 + { 21, 65 }, 324 + { 25, 69 }, 325 + { 30, 72 }, 326 + { 36, 74 }, 327 + { 43, 76 }, 328 + { 51, 78 }, 329 + { 65, 80 }, 330 + { 81, 82 }, 331 + { 98, 84 }, 332 + { 119, 87 }, 333 + { 135, 89 }, 334 + { 150, 92 }, 335 + { 999, 999 }, 336 + }; 337 + 338 + static int exynos5422_asv_get_group(struct exynos_asv *asv) 339 + { 340 + unsigned int pkgid_reg, auxi_reg; 341 + int hpm, ids, i; 342 + 343 + regmap_read(asv->chipid_regmap, EXYNOS_CHIPID_REG_PKG_ID, &pkgid_reg); 344 + regmap_read(asv->chipid_regmap, EXYNOS_CHIPID_REG_AUX_INFO, &auxi_reg); 345 + 346 + if (asv->use_sg) { 347 + u32 sga = (pkgid_reg >> EXYNOS5422_SG_A_OFFSET) & 348 + EXYNOS5422_SG_A_MASK; 349 + 350 + u32 sgb = (pkgid_reg >> EXYNOS5422_SG_B_OFFSET) & 351 + EXYNOS5422_SG_B_MASK; 352 + 353 + if ((pkgid_reg >> EXYNOS5422_SG_BSIGN_OFFSET) & 354 + EXYNOS5422_SG_BSIGN_MASK) 355 + return sga + sgb; 356 + else 357 + return sga - sgb; 358 + } 359 + 360 + hpm = (auxi_reg >> EXYNOS5422_TMCB_OFFSET) & EXYNOS5422_TMCB_MASK; 361 + ids = (pkgid_reg >> EXYNOS5422_IDS_OFFSET) & EXYNOS5422_IDS_MASK; 362 + 363 + for (i = 0; i < ASV_GROUPS_NUM; i++) { 364 + if (ids <= __asv_limits[i].ids) 365 + break; 366 + if (hpm <= 
__asv_limits[i].hpm) 367 + break; 368 + } 369 + if (i < ASV_GROUPS_NUM) 370 + return i; 371 + 372 + return 0; 373 + } 374 + 375 + static int __asv_offset_voltage(unsigned int index) 376 + { 377 + switch (index) { 378 + case 1: 379 + return 12500; 380 + case 2: 381 + return 50000; 382 + case 3: 383 + return 25000; 384 + default: 385 + return 0; 386 + }; 387 + } 388 + 389 + static void exynos5422_asv_offset_voltage_setup(struct exynos_asv *asv) 390 + { 391 + struct exynos_asv_subsys *subsys; 392 + unsigned int reg, value; 393 + 394 + regmap_read(asv->chipid_regmap, EXYNOS_CHIPID_REG_AUX_INFO, &reg); 395 + 396 + /* ARM offset voltage setup */ 397 + subsys = &asv->subsys[EXYNOS_ASV_SUBSYS_ID_ARM]; 398 + 399 + subsys->base_volt = 1000000; 400 + 401 + value = (reg >> EXYNOS5422_ARM_UP_OFFSET) & EXYNOS5422_ARM_UP_MASK; 402 + subsys->offset_volt_h = __asv_offset_voltage(value); 403 + 404 + value = (reg >> EXYNOS5422_ARM_DN_OFFSET) & EXYNOS5422_ARM_DN_MASK; 405 + subsys->offset_volt_l = __asv_offset_voltage(value); 406 + 407 + /* KFC offset voltage setup */ 408 + subsys = &asv->subsys[EXYNOS_ASV_SUBSYS_ID_KFC]; 409 + 410 + subsys->base_volt = 1000000; 411 + 412 + value = (reg >> EXYNOS5422_KFC_UP_OFFSET) & EXYNOS5422_KFC_UP_MASK; 413 + subsys->offset_volt_h = __asv_offset_voltage(value); 414 + 415 + value = (reg >> EXYNOS5422_KFC_DN_OFFSET) & EXYNOS5422_KFC_DN_MASK; 416 + subsys->offset_volt_l = __asv_offset_voltage(value); 417 + } 418 + 419 + static int exynos5422_asv_opp_get_voltage(const struct exynos_asv_subsys *subsys, 420 + int level, unsigned int volt) 421 + { 422 + unsigned int asv_volt; 423 + 424 + if (level >= subsys->table.num_rows) 425 + return volt; 426 + 427 + asv_volt = exynos_asv_opp_get_voltage(subsys, level, 428 + subsys->asv->group); 429 + 430 + if (volt > subsys->base_volt) 431 + asv_volt += subsys->offset_volt_h; 432 + else 433 + asv_volt += subsys->offset_volt_l; 434 + 435 + return asv_volt; 436 + } 437 + 438 + static unsigned int 
exynos5422_asv_parse_table(unsigned int pkg_id) 439 + { 440 + return (pkg_id >> EXYNOS5422_TABLE_OFFSET) & EXYNOS5422_TABLE_MASK; 441 + } 442 + 443 + static bool exynos5422_asv_parse_bin2(unsigned int pkg_id) 444 + { 445 + return (pkg_id >> EXYNOS5422_BIN2_OFFSET) & EXYNOS5422_BIN2_MASK; 446 + } 447 + 448 + static bool exynos5422_asv_parse_sg(unsigned int pkg_id) 449 + { 450 + return (pkg_id >> EXYNOS5422_USESG_OFFSET) & EXYNOS5422_USESG_MASK; 451 + } 452 + 453 + int exynos5422_asv_init(struct exynos_asv *asv) 454 + { 455 + struct exynos_asv_subsys *subsys; 456 + unsigned int table_index; 457 + unsigned int pkg_id; 458 + bool bin2; 459 + 460 + regmap_read(asv->chipid_regmap, EXYNOS_CHIPID_REG_PKG_ID, &pkg_id); 461 + 462 + if (asv->of_bin == 2) { 463 + bin2 = true; 464 + asv->use_sg = false; 465 + } else { 466 + asv->use_sg = exynos5422_asv_parse_sg(pkg_id); 467 + bin2 = exynos5422_asv_parse_bin2(pkg_id); 468 + } 469 + 470 + asv->group = exynos5422_asv_get_group(asv); 471 + asv->table = exynos5422_asv_parse_table(pkg_id); 472 + 473 + exynos5422_asv_offset_voltage_setup(asv); 474 + 475 + if (bin2) { 476 + table_index = 3; 477 + } else { 478 + if (asv->table == 2 || asv->table == 3) 479 + table_index = asv->table - 1; 480 + else 481 + table_index = 0; 482 + } 483 + 484 + subsys = &asv->subsys[EXYNOS_ASV_SUBSYS_ID_ARM]; 485 + subsys->cpu_dt_compat = "arm,cortex-a15"; 486 + if (bin2) 487 + subsys->table.num_rows = ASV_ARM_BIN2_DVFS_NUM; 488 + else 489 + subsys->table.num_rows = ASV_ARM_DVFS_NUM; 490 + subsys->table.num_cols = ASV_GROUPS_NUM + 1; 491 + subsys->table.buf = (u32 *)asv_arm_table[table_index]; 492 + 493 + subsys = &asv->subsys[EXYNOS_ASV_SUBSYS_ID_KFC]; 494 + subsys->cpu_dt_compat = "arm,cortex-a7"; 495 + if (bin2) 496 + subsys->table.num_rows = ASV_KFC_BIN2_DVFS_NUM; 497 + else 498 + subsys->table.num_rows = ASV_KFC_DVFS_NUM; 499 + subsys->table.num_cols = ASV_GROUPS_NUM + 1; 500 + subsys->table.buf = (u32 *)asv_kfc_table[table_index]; 501 + 502 + 
asv->opp_get_voltage = exynos5422_asv_opp_get_voltage; 503 + 504 + return 0; 505 + }
+31
drivers/soc/samsung/exynos5422-asv.h
···
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2019 Samsung Electronics Co., Ltd.
+ *	      http://www.samsung.com/
+ *
+ * Samsung Exynos 5422 SoC Adaptive Supply Voltage support
+ */
+
+#ifndef __LINUX_SOC_EXYNOS5422_ASV_H
+#define __LINUX_SOC_EXYNOS5422_ASV_H
+
+#include <linux/errno.h>
+
+enum {
+	EXYNOS_ASV_SUBSYS_ID_ARM,
+	EXYNOS_ASV_SUBSYS_ID_KFC,
+	EXYNOS_ASV_SUBSYS_ID_MAX
+};
+
+struct exynos_asv;
+
+#ifdef CONFIG_EXYNOS_ASV_ARM
+int exynos5422_asv_init(struct exynos_asv *asv);
+#else
+static inline int exynos5422_asv_init(struct exynos_asv *asv)
+{
+	return -ENOTSUPP;
+}
+#endif
+
+#endif /* __LINUX_SOC_EXYNOS5422_ASV_H */
+10
drivers/soc/tegra/Kconfig
···
 	select PL310_ERRATA_769419 if CACHE_L2X0
 	select SOC_TEGRA_FLOWCTRL
 	select SOC_TEGRA_PMC
+	select SOC_TEGRA20_VOLTAGE_COUPLER
 	select TEGRA_TIMER
 	help
 	  Support for NVIDIA Tegra AP20 and T20 processors, based on the
···
 	select PL310_ERRATA_769419 if CACHE_L2X0
 	select SOC_TEGRA_FLOWCTRL
 	select SOC_TEGRA_PMC
+	select SOC_TEGRA30_VOLTAGE_COUPLER
 	select TEGRA_TIMER
 	help
 	  Support for NVIDIA Tegra T30 processor family, based on the
···
 	def_bool y
 	depends on PM_GENERIC_DOMAINS
 	depends on TEGRA_BPMP
+
+config SOC_TEGRA20_VOLTAGE_COUPLER
+	bool "Voltage scaling support for Tegra20 SoCs"
+	depends on ARCH_TEGRA_2x_SOC || COMPILE_TEST
+
+config SOC_TEGRA30_VOLTAGE_COUPLER
+	bool "Voltage scaling support for Tegra30 SoCs"
+	depends on ARCH_TEGRA_3x_SOC || COMPILE_TEST
+2
drivers/soc/tegra/Makefile
···
 obj-$(CONFIG_SOC_TEGRA_FLOWCTRL) += flowctrl.o
 obj-$(CONFIG_SOC_TEGRA_PMC) += pmc.o
 obj-$(CONFIG_SOC_TEGRA_POWERGATE_BPMP) += powergate-bpmp.o
+obj-$(CONFIG_SOC_TEGRA20_VOLTAGE_COUPLER) += regulators-tegra20.o
+obj-$(CONFIG_SOC_TEGRA30_VOLTAGE_COUPLER) += regulators-tegra30.o
+146 -52
drivers/soc/tegra/fuse/fuse-tegra.c
···
 #include <linux/kobject.h>
 #include <linux/init.h>
 #include <linux/io.h>
+#include <linux/nvmem-consumer.h>
+#include <linux/nvmem-provider.h>
 #include <linux/of.h>
 #include <linux/of_address.h>
 #include <linux/platform_device.h>
···
 	[TEGRA_REVISION_A03p] = "A03 prime",
 	[TEGRA_REVISION_A04] = "A04",
 };
-
-static u8 fuse_readb(struct tegra_fuse *fuse, unsigned int offset)
-{
-	u32 val;
-
-	val = fuse->read(fuse, round_down(offset, 4));
-	val >>= (offset % 4) * 8;
-	val &= 0xff;
-
-	return val;
-}
-
-static ssize_t fuse_read(struct file *fd, struct kobject *kobj,
-			 struct bin_attribute *attr, char *buf,
-			 loff_t pos, size_t size)
-{
-	struct device *dev = kobj_to_dev(kobj);
-	struct tegra_fuse *fuse = dev_get_drvdata(dev);
-	int i;
-
-	if (pos < 0 || pos >= attr->size)
-		return 0;
-
-	if (size > attr->size - pos)
-		size = attr->size - pos;
-
-	for (i = 0; i < size; i++)
-		buf[i] = fuse_readb(fuse, pos + i);
-
-	return i;
-}
-
-static struct bin_attribute fuse_bin_attr = {
-	.attr = { .name = "fuse", .mode = S_IRUGO, },
-	.read = fuse_read,
-};
-
-static int tegra_fuse_create_sysfs(struct device *dev, unsigned int size,
-				   const struct tegra_fuse_info *info)
-{
-	fuse_bin_attr.size = size;
-
-	return device_create_bin_file(dev, &fuse_bin_attr);
-}
 
 static const struct of_device_id car_match[] __initconst = {
 	{ .compatible = "nvidia,tegra20-car", },
···
 	{ /* sentinel */ }
 };
 
+static int tegra_fuse_read(void *priv, unsigned int offset, void *value,
+			   size_t bytes)
+{
+	unsigned int count = bytes / 4, i;
+	struct tegra_fuse *fuse = priv;
+	u32 *buffer = value;
+
+	for (i = 0; i < count; i++)
+		buffer[i] = fuse->read(fuse, offset + i * 4);
+
+	return 0;
+}
+
+static const struct nvmem_cell_info tegra_fuse_cells[] = {
+	{
+		.name = "tsensor-cpu1",
+		.offset = 0x084,
+		.bytes = 4,
+		.bit_offset = 0,
+		.nbits = 32,
+	}, {
+		.name = "tsensor-cpu2",
+		.offset = 0x088,
+		.bytes = 4,
+		.bit_offset = 0,
+		.nbits = 32,
+	}, {
+		.name = "tsensor-cpu0",
+		.offset = 0x098,
+		.bytes = 4,
+		.bit_offset = 0,
+		.nbits = 32,
+	}, {
+		.name = "xusb-pad-calibration",
+		.offset = 0x0f0,
+		.bytes = 4,
+		.bit_offset = 0,
+		.nbits = 32,
+	}, {
+		.name = "tsensor-cpu3",
+		.offset = 0x12c,
+		.bytes = 4,
+		.bit_offset = 0,
+		.nbits = 32,
+	}, {
+		.name = "sata-calibration",
+		.offset = 0x124,
+		.bytes = 1,
+		.bit_offset = 0,
+		.nbits = 2,
+	}, {
+		.name = "tsensor-gpu",
+		.offset = 0x154,
+		.bytes = 4,
+		.bit_offset = 0,
+		.nbits = 32,
+	}, {
+		.name = "tsensor-mem0",
+		.offset = 0x158,
+		.bytes = 4,
+		.bit_offset = 0,
+		.nbits = 32,
+	}, {
+		.name = "tsensor-mem1",
+		.offset = 0x15c,
+		.bytes = 4,
+		.bit_offset = 0,
+		.nbits = 32,
+	}, {
+		.name = "tsensor-pllx",
+		.offset = 0x160,
+		.bytes = 4,
+		.bit_offset = 0,
+		.nbits = 32,
+	}, {
+		.name = "tsensor-common",
+		.offset = 0x180,
+		.bytes = 4,
+		.bit_offset = 0,
+		.nbits = 32,
+	}, {
+		.name = "tsensor-realignment",
+		.offset = 0x1fc,
+		.bytes = 4,
+		.bit_offset = 0,
+		.nbits = 32,
+	}, {
+		.name = "gpu-calibration",
+		.offset = 0x204,
+		.bytes = 4,
+		.bit_offset = 0,
+		.nbits = 32,
+	}, {
+		.name = "xusb-pad-calibration-ext",
+		.offset = 0x250,
+		.bytes = 4,
+		.bit_offset = 0,
+		.nbits = 32,
+	},
+};
+
 static int tegra_fuse_probe(struct platform_device *pdev)
 {
 	void __iomem *base = fuse->base;
+	struct nvmem_config nvmem;
 	struct resource *res;
 	int err;
···
 	if (fuse->soc->probe) {
 		err = fuse->soc->probe(fuse);
-		if (err < 0) {
-			fuse->base = base;
-			return err;
-		}
+		if (err < 0)
+			goto restore;
 	}
 
-	if (tegra_fuse_create_sysfs(&pdev->dev, fuse->soc->info->size,
-				    fuse->soc->info))
-		return -ENODEV;
+	memset(&nvmem, 0, sizeof(nvmem));
+	nvmem.dev = &pdev->dev;
+	nvmem.name = "fuse";
+	nvmem.id = -1;
+	nvmem.owner = THIS_MODULE;
+	nvmem.cells = tegra_fuse_cells;
+	nvmem.ncells = ARRAY_SIZE(tegra_fuse_cells);
+	nvmem.type = NVMEM_TYPE_OTP;
+	nvmem.read_only = true;
+	nvmem.root_only = true;
+	nvmem.reg_read = tegra_fuse_read;
+	nvmem.size = fuse->soc->info->size;
+	nvmem.word_size = 4;
+	nvmem.stride = 4;
+	nvmem.priv = fuse;
+
+	fuse->nvmem = devm_nvmem_register(&pdev->dev, &nvmem);
+	if (IS_ERR(fuse->nvmem)) {
+		err = PTR_ERR(fuse->nvmem);
+		dev_err(&pdev->dev, "failed to register NVMEM device: %d\n",
+			err);
+		goto restore;
+	}
 
 	/* release the early I/O memory mapping */
 	iounmap(base);
 
 	return 0;
+
+restore:
+	fuse->base = base;
+	return err;
 }
 
 static struct platform_driver tegra_fuse_driver = {
···
 int tegra_fuse_readl(unsigned long offset, u32 *value)
 {
-	if (!fuse->read)
+	if (!fuse->read || !fuse->clk)
 		return -EPROBE_DEFER;
+
+	if (IS_ERR(fuse->clk))
+		return PTR_ERR(fuse->clk);
 
 	*value = fuse->read(fuse, offset);
···
 	pr_debug("Tegra CPU Speedo ID %d, SoC Speedo ID %d\n",
 		 tegra_sku_info.cpu_speedo_id, tegra_sku_info.soc_speedo_id);
 
+	if (fuse->soc->lookups) {
+		size_t size = sizeof(*fuse->lookups) * fuse->soc->num_lookups;
+
+		fuse->lookups = kmemdup(fuse->soc->lookups, size, GFP_KERNEL);
+		if (!fuse->lookups)
+			return -ENOMEM;
+
+		nvmem_add_cell_lookups(fuse->lookups, fuse->soc->num_lookups);
+	}
 
 	return 0;
 }
+154
drivers/soc/tegra/fuse/fuse-tegra30.c
··· 8 8 #include <linux/err.h> 9 9 #include <linux/io.h> 10 10 #include <linux/kernel.h> 11 + #include <linux/nvmem-consumer.h> 11 12 #include <linux/of_device.h> 12 13 #include <linux/of_address.h> 13 14 #include <linux/platform_device.h> ··· 128 127 #endif 129 128 130 129 #if defined(CONFIG_ARCH_TEGRA_124_SOC) || defined(CONFIG_ARCH_TEGRA_132_SOC) 130 + static const struct nvmem_cell_lookup tegra124_fuse_lookups[] = { 131 + { 132 + .nvmem_name = "fuse", 133 + .cell_name = "xusb-pad-calibration", 134 + .dev_id = "7009f000.padctl", 135 + .con_id = "calibration", 136 + }, { 137 + .nvmem_name = "fuse", 138 + .cell_name = "sata-calibration", 139 + .dev_id = "70020000.sata", 140 + .con_id = "calibration", 141 + }, { 142 + .nvmem_name = "fuse", 143 + .cell_name = "tsensor-common", 144 + .dev_id = "700e2000.thermal-sensor", 145 + .con_id = "common", 146 + }, { 147 + .nvmem_name = "fuse", 148 + .cell_name = "tsensor-realignment", 149 + .dev_id = "700e2000.thermal-sensor", 150 + .con_id = "realignment", 151 + }, { 152 + .nvmem_name = "fuse", 153 + .cell_name = "tsensor-cpu0", 154 + .dev_id = "700e2000.thermal-sensor", 155 + .con_id = "cpu0", 156 + }, { 157 + .nvmem_name = "fuse", 158 + .cell_name = "tsensor-cpu1", 159 + .dev_id = "700e2000.thermal-sensor", 160 + .con_id = "cpu1", 161 + }, { 162 + .nvmem_name = "fuse", 163 + .cell_name = "tsensor-cpu2", 164 + .dev_id = "700e2000.thermal-sensor", 165 + .con_id = "cpu2", 166 + }, { 167 + .nvmem_name = "fuse", 168 + .cell_name = "tsensor-cpu3", 169 + .dev_id = "700e2000.thermal-sensor", 170 + .con_id = "cpu3", 171 + }, { 172 + .nvmem_name = "fuse", 173 + .cell_name = "tsensor-mem0", 174 + .dev_id = "700e2000.thermal-sensor", 175 + .con_id = "mem0", 176 + }, { 177 + .nvmem_name = "fuse", 178 + .cell_name = "tsensor-mem1", 179 + .dev_id = "700e2000.thermal-sensor", 180 + .con_id = "mem1", 181 + }, { 182 + .nvmem_name = "fuse", 183 + .cell_name = "tsensor-gpu", 184 + .dev_id = "700e2000.thermal-sensor", 185 + .con_id = "gpu", 186 
+ }, { 187 + .nvmem_name = "fuse", 188 + .cell_name = "tsensor-pllx", 189 + .dev_id = "700e2000.thermal-sensor", 190 + .con_id = "pllx", 191 + }, 192 + }; 193 + 131 194 static const struct tegra_fuse_info tegra124_fuse_info = { 132 195 .read = tegra30_fuse_read, 133 196 .size = 0x300, ··· 202 137 .init = tegra30_fuse_init, 203 138 .speedo_init = tegra124_init_speedo_data, 204 139 .info = &tegra124_fuse_info, 140 + .lookups = tegra124_fuse_lookups, 141 + .num_lookups = ARRAY_SIZE(tegra124_fuse_lookups), 205 142 }; 206 143 #endif 207 144 208 145 #if defined(CONFIG_ARCH_TEGRA_210_SOC) 146 + static const struct nvmem_cell_lookup tegra210_fuse_lookups[] = { 147 + { 148 + .nvmem_name = "fuse", 149 + .cell_name = "tsensor-cpu1", 150 + .dev_id = "700e2000.thermal-sensor", 151 + .con_id = "cpu1", 152 + }, { 153 + .nvmem_name = "fuse", 154 + .cell_name = "tsensor-cpu2", 155 + .dev_id = "700e2000.thermal-sensor", 156 + .con_id = "cpu2", 157 + }, { 158 + .nvmem_name = "fuse", 159 + .cell_name = "tsensor-cpu0", 160 + .dev_id = "700e2000.thermal-sensor", 161 + .con_id = "cpu0", 162 + }, { 163 + .nvmem_name = "fuse", 164 + .cell_name = "xusb-pad-calibration", 165 + .dev_id = "7009f000.padctl", 166 + .con_id = "calibration", 167 + }, { 168 + .nvmem_name = "fuse", 169 + .cell_name = "tsensor-cpu3", 170 + .dev_id = "700e2000.thermal-sensor", 171 + .con_id = "cpu3", 172 + }, { 173 + .nvmem_name = "fuse", 174 + .cell_name = "sata-calibration", 175 + .dev_id = "70020000.sata", 176 + .con_id = "calibration", 177 + }, { 178 + .nvmem_name = "fuse", 179 + .cell_name = "tsensor-gpu", 180 + .dev_id = "700e2000.thermal-sensor", 181 + .con_id = "gpu", 182 + }, { 183 + .nvmem_name = "fuse", 184 + .cell_name = "tsensor-mem0", 185 + .dev_id = "700e2000.thermal-sensor", 186 + .con_id = "mem0", 187 + }, { 188 + .nvmem_name = "fuse", 189 + .cell_name = "tsensor-mem1", 190 + .dev_id = "700e2000.thermal-sensor", 191 + .con_id = "mem1", 192 + }, { 193 + .nvmem_name = "fuse", 194 + .cell_name = 
"tsensor-pllx", 195 + .dev_id = "700e2000.thermal-sensor", 196 + .con_id = "pllx", 197 + }, { 198 + .nvmem_name = "fuse", 199 + .cell_name = "tsensor-common", 200 + .dev_id = "700e2000.thermal-sensor", 201 + .con_id = "common", 202 + }, { 203 + .nvmem_name = "fuse", 204 + .cell_name = "gpu-calibration", 205 + .dev_id = "57000000.gpu", 206 + .con_id = "calibration", 207 + }, { 208 + .nvmem_name = "fuse", 209 + .cell_name = "xusb-pad-calibration-ext", 210 + .dev_id = "7009f000.padctl", 211 + .con_id = "calibration-ext", 212 + }, 213 + }; 214 + 209 215 static const struct tegra_fuse_info tegra210_fuse_info = { 210 216 .read = tegra30_fuse_read, 211 217 .size = 0x300, ··· 287 151 .init = tegra30_fuse_init, 288 152 .speedo_init = tegra210_init_speedo_data, 289 153 .info = &tegra210_fuse_info, 154 + .lookups = tegra210_fuse_lookups, 155 + .num_lookups = ARRAY_SIZE(tegra210_fuse_lookups), 290 156 }; 291 157 #endif 292 158 293 159 #if defined(CONFIG_ARCH_TEGRA_186_SOC) 160 + static const struct nvmem_cell_lookup tegra186_fuse_lookups[] = { 161 + { 162 + .nvmem_name = "fuse", 163 + .cell_name = "xusb-pad-calibration", 164 + .dev_id = "3520000.padctl", 165 + .con_id = "calibration", 166 + }, { 167 + .nvmem_name = "fuse", 168 + .cell_name = "xusb-pad-calibration-ext", 169 + .dev_id = "3520000.padctl", 170 + .con_id = "calibration-ext", 171 + }, 172 + }; 173 + 294 174 static const struct tegra_fuse_info tegra186_fuse_info = { 295 175 .read = tegra30_fuse_read, 296 176 .size = 0x300, ··· 316 164 const struct tegra_fuse_soc tegra186_fuse_soc = { 317 165 .init = tegra30_fuse_init, 318 166 .info = &tegra186_fuse_info, 167 + .lookups = tegra186_fuse_lookups, 168 + .num_lookups = ARRAY_SIZE(tegra186_fuse_lookups), 319 169 }; 320 170 #endif
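The `tegra*_fuse_lookups` tables above register (nvmem name, cell name, consumer device, connection id) tuples with the nvmem core, so that e.g. the XUSB pad controller can request its `"calibration"` cell without knowing the fuse layout. A minimal userspace sketch of that resolution pattern (not kernel code; `resolve_cell` and the trimmed table are illustrative only, the real matching is done by the kernel's nvmem core):

```c
#include <stddef.h>
#include <string.h>

/* Same shape as struct nvmem_cell_lookup in the diff above. */
struct cell_lookup {
	const char *nvmem_name;
	const char *cell_name;
	const char *dev_id;
	const char *con_id;
};

/* Two entries borrowed from tegra124_fuse_lookups; note that both use
 * the con_id "calibration", so the dev_id is what disambiguates them. */
static const struct cell_lookup lookups[] = {
	{ "fuse", "xusb-pad-calibration", "7009f000.padctl", "calibration" },
	{ "fuse", "sata-calibration", "70020000.sata", "calibration" },
};

/* Return the fuse cell name registered for a (dev_id, con_id) pair,
 * or NULL when no entry matches. */
static const char *resolve_cell(const char *dev_id, const char *con_id)
{
	size_t i;

	for (i = 0; i < sizeof(lookups) / sizeof(lookups[0]); i++) {
		if (!strcmp(lookups[i].dev_id, dev_id) &&
		    !strcmp(lookups[i].con_id, con_id))
			return lookups[i].cell_name;
	}

	return NULL;
}
```

This is why the fuse driver only needs to hand the table to the nvmem core (via the new `lookups`/`num_lookups` fields in `tegra_fuse_soc`): consumers stay decoupled from fuse offsets.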
+8
drivers/soc/tegra/fuse/fuse.h
··· 13 13 #include <linux/dmaengine.h> 14 14 #include <linux/types.h> 15 15 16 + struct nvmem_cell_lookup; 17 + struct nvmem_device; 16 18 struct tegra_fuse; 17 19 18 20 struct tegra_fuse_info { ··· 29 27 int (*probe)(struct tegra_fuse *fuse); 30 28 31 29 const struct tegra_fuse_info *info; 30 + 31 + const struct nvmem_cell_lookup *lookups; 32 + unsigned int num_lookups; 32 33 }; 33 34 34 35 struct tegra_fuse { ··· 53 48 dma_addr_t phys; 54 49 u32 *virt; 55 50 } apbdma; 51 + 52 + struct nvmem_device *nvmem; 53 + struct nvmem_cell_lookup *lookups; 56 54 }; 57 55 58 56 void tegra_init_revision(void);
+210 -22
drivers/soc/tegra/pmc.c
··· 56 56 #define PMC_CNTRL_SIDE_EFFECT_LP0 BIT(14) /* LP0 when CPU pwr gated */ 57 57 #define PMC_CNTRL_SYSCLK_OE BIT(11) /* system clock enable */ 58 58 #define PMC_CNTRL_SYSCLK_POLARITY BIT(10) /* sys clk polarity */ 59 + #define PMC_CNTRL_PWRREQ_POLARITY BIT(8) 59 60 #define PMC_CNTRL_MAIN_RST BIT(4) 61 + 62 + #define PMC_WAKE_MASK 0x0c 63 + #define PMC_WAKE_LEVEL 0x10 64 + #define PMC_WAKE_STATUS 0x14 65 + #define PMC_SW_WAKE_STATUS 0x18 60 66 61 67 #define DPD_SAMPLE 0x020 62 68 #define DPD_SAMPLE_ENABLE BIT(0) ··· 88 82 89 83 #define PMC_CPUPWRGOOD_TIMER 0xc8 90 84 #define PMC_CPUPWROFF_TIMER 0xcc 85 + #define PMC_COREPWRGOOD_TIMER 0x3c 86 + #define PMC_COREPWROFF_TIMER 0xe0 91 87 92 88 #define PMC_PWR_DET_VALUE 0xe4 93 89 94 90 #define PMC_SCRATCH41 0x140 91 + 92 + #define PMC_WAKE2_MASK 0x160 93 + #define PMC_WAKE2_LEVEL 0x164 94 + #define PMC_WAKE2_STATUS 0x168 95 + #define PMC_SW_WAKE2_STATUS 0x16c 95 96 96 97 #define PMC_SENSOR_CTRL 0x1b0 97 98 #define PMC_SENSOR_CTRL_SCRATCH_WRITE BIT(2) ··· 239 226 void (*setup_irq_polarity)(struct tegra_pmc *pmc, 240 227 struct device_node *np, 241 228 bool invert); 229 + int (*irq_set_wake)(struct irq_data *data, unsigned int on); 230 + int (*irq_set_type)(struct irq_data *data, unsigned int type); 242 231 243 232 const char * const *reset_sources; 244 233 unsigned int num_reset_sources; ··· 324 309 * @pctl_dev: pin controller exposed by the PMC 325 310 * @domain: IRQ domain provided by the PMC 326 311 * @irq: chip implementation for the IRQ domain 312 + * @clk_nb: pclk clock changes handler 327 313 */ 328 314 struct tegra_pmc { 329 315 struct device *dev; ··· 360 344 361 345 struct irq_domain *domain; 362 346 struct irq_chip irq; 347 + 348 + struct notifier_block clk_nb; 363 349 }; 364 350 365 351 static struct tegra_pmc *pmc = &(struct tegra_pmc) { ··· 1210 1192 return err; 1211 1193 1212 1194 if (pmc->clk) { 1213 - rate = clk_get_rate(pmc->clk); 1195 + rate = pmc->rate; 1214 1196 if (!rate) { 1215 1197 
dev_err(pmc->dev, "failed to get clock rate\n"); 1216 1198 return -ENODEV; ··· 1451 1433 void tegra_pmc_enter_suspend_mode(enum tegra_suspend_mode mode) 1452 1434 { 1453 1435 unsigned long long rate = 0; 1436 + u64 ticks; 1454 1437 u32 value; 1455 1438 1456 1439 switch (mode) { ··· 1460 1441 break; 1461 1442 1462 1443 case TEGRA_SUSPEND_LP2: 1463 - rate = clk_get_rate(pmc->clk); 1444 + rate = pmc->rate; 1464 1445 break; 1465 1446 1466 1447 default: ··· 1470 1451 if (WARN_ON_ONCE(rate == 0)) 1471 1452 rate = 100000000; 1472 1453 1473 - if (rate != pmc->rate) { 1474 - u64 ticks; 1454 + ticks = pmc->cpu_good_time * rate + USEC_PER_SEC - 1; 1455 + do_div(ticks, USEC_PER_SEC); 1456 + tegra_pmc_writel(pmc, ticks, PMC_CPUPWRGOOD_TIMER); 1475 1457 1476 - ticks = pmc->cpu_good_time * rate + USEC_PER_SEC - 1; 1477 - do_div(ticks, USEC_PER_SEC); 1478 - tegra_pmc_writel(pmc, ticks, PMC_CPUPWRGOOD_TIMER); 1479 - 1480 - ticks = pmc->cpu_off_time * rate + USEC_PER_SEC - 1; 1481 - do_div(ticks, USEC_PER_SEC); 1482 - tegra_pmc_writel(pmc, ticks, PMC_CPUPWROFF_TIMER); 1483 - 1484 - wmb(); 1485 - 1486 - pmc->rate = rate; 1487 - } 1458 + ticks = pmc->cpu_off_time * rate + USEC_PER_SEC - 1; 1459 + do_div(ticks, USEC_PER_SEC); 1460 + tegra_pmc_writel(pmc, ticks, PMC_CPUPWROFF_TIMER); 1488 1461 1489 1462 value = tegra_pmc_readl(pmc, PMC_CNTRL); 1490 1463 value &= ~PMC_CNTRL_SIDE_EFFECT_LP0; ··· 1910 1899 event->id, 1911 1900 &pmc->irq, pmc); 1912 1901 1902 + /* 1903 + * GPIOs don't have an equivalent interrupt in the 1904 + * parent controller (GIC). However some code, such 1905 + * as the one in irq_get_irqchip_state(), require a 1906 + * valid IRQ chip to be set. Make sure that's the 1907 + * case by passing NULL here, which will install a 1908 + * dummy IRQ chip for the interrupt in the parent 1909 + * domain. 
1910 + */ 1911 + if (domain->parent) 1912 + irq_domain_set_hwirq_and_chip(domain->parent, 1913 + virq, 0, NULL, 1914 + NULL); 1915 + 1913 1916 break; 1914 1917 } 1915 1918 } ··· 1933 1908 * dummy hardware IRQ number. This is used in the ->irq_set_type() 1934 1909 * and ->irq_set_wake() callbacks to return early for these IRQs. 1935 1910 */ 1936 - if (i == soc->num_wake_events) 1911 + if (i == soc->num_wake_events) { 1937 1912 err = irq_domain_set_hwirq_and_chip(domain, virq, ULONG_MAX, 1938 1913 &pmc->irq, pmc); 1914 + 1915 + /* 1916 + * Interrupts without a wake event don't have a corresponding 1917 + * interrupt in the parent controller (GIC). Pass NULL for the 1918 + * chip here, which causes a dummy IRQ chip to be installed 1919 + * for the interrupt in the parent domain, to make this 1920 + * explicit. 1921 + */ 1922 + if (domain->parent) 1923 + irq_domain_set_hwirq_and_chip(domain->parent, virq, 0, 1924 + NULL, NULL); 1925 + } 1939 1926 1940 1927 return err; 1941 1928 } ··· 1957 1920 .alloc = tegra_pmc_irq_alloc, 1958 1921 }; 1959 1922 1960 - static int tegra_pmc_irq_set_wake(struct irq_data *data, unsigned int on) 1923 + static int tegra210_pmc_irq_set_wake(struct irq_data *data, unsigned int on) 1924 + { 1925 + struct tegra_pmc *pmc = irq_data_get_irq_chip_data(data); 1926 + unsigned int offset, bit; 1927 + u32 value; 1928 + 1929 + if (data->hwirq == ULONG_MAX) 1930 + return 0; 1931 + 1932 + offset = data->hwirq / 32; 1933 + bit = data->hwirq % 32; 1934 + 1935 + /* clear wake status */ 1936 + tegra_pmc_writel(pmc, 0, PMC_SW_WAKE_STATUS); 1937 + tegra_pmc_writel(pmc, 0, PMC_SW_WAKE2_STATUS); 1938 + 1939 + tegra_pmc_writel(pmc, 0, PMC_WAKE_STATUS); 1940 + tegra_pmc_writel(pmc, 0, PMC_WAKE2_STATUS); 1941 + 1942 + /* enable PMC wake */ 1943 + if (data->hwirq >= 32) 1944 + offset = PMC_WAKE2_MASK; 1945 + else 1946 + offset = PMC_WAKE_MASK; 1947 + 1948 + value = tegra_pmc_readl(pmc, offset); 1949 + 1950 + if (on) 1951 + value |= BIT(bit); 1952 + else 1953 + value 
&= ~BIT(bit); 1954 + 1955 + tegra_pmc_writel(pmc, value, offset); 1956 + 1957 + return 0; 1958 + } 1959 + 1960 + static int tegra210_pmc_irq_set_type(struct irq_data *data, unsigned int type) 1961 + { 1962 + struct tegra_pmc *pmc = irq_data_get_irq_chip_data(data); 1963 + unsigned int offset, bit; 1964 + u32 value; 1965 + 1966 + if (data->hwirq == ULONG_MAX) 1967 + return 0; 1968 + 1969 + offset = data->hwirq / 32; 1970 + bit = data->hwirq % 32; 1971 + 1972 + if (data->hwirq >= 32) 1973 + offset = PMC_WAKE2_LEVEL; 1974 + else 1975 + offset = PMC_WAKE_LEVEL; 1976 + 1977 + value = tegra_pmc_readl(pmc, offset); 1978 + 1979 + switch (type) { 1980 + case IRQ_TYPE_EDGE_RISING: 1981 + case IRQ_TYPE_LEVEL_HIGH: 1982 + value |= BIT(bit); 1983 + break; 1984 + 1985 + case IRQ_TYPE_EDGE_FALLING: 1986 + case IRQ_TYPE_LEVEL_LOW: 1987 + value &= ~BIT(bit); 1988 + break; 1989 + 1990 + case IRQ_TYPE_EDGE_RISING | IRQ_TYPE_EDGE_FALLING: 1991 + value ^= BIT(bit); 1992 + break; 1993 + 1994 + default: 1995 + return -EINVAL; 1996 + } 1997 + 1998 + tegra_pmc_writel(pmc, value, offset); 1999 + 2000 + return 0; 2001 + } 2002 + 2003 + static int tegra186_pmc_irq_set_wake(struct irq_data *data, unsigned int on) 1961 2004 { 1962 2005 struct tegra_pmc *pmc = irq_data_get_irq_chip_data(data); 1963 2006 unsigned int offset, bit; ··· 2069 1952 return 0; 2070 1953 } 2071 1954 2072 - static int tegra_pmc_irq_set_type(struct irq_data *data, unsigned int type) 1955 + static int tegra186_pmc_irq_set_type(struct irq_data *data, unsigned int type) 2073 1956 { 2074 1957 struct tegra_pmc *pmc = irq_data_get_irq_chip_data(data); 2075 1958 u32 value; ··· 2123 2006 pmc->irq.irq_unmask = irq_chip_unmask_parent; 2124 2007 pmc->irq.irq_eoi = irq_chip_eoi_parent; 2125 2008 pmc->irq.irq_set_affinity = irq_chip_set_affinity_parent; 2126 - pmc->irq.irq_set_type = tegra_pmc_irq_set_type; 2127 - pmc->irq.irq_set_wake = tegra_pmc_irq_set_wake; 2009 + pmc->irq.irq_set_type = pmc->soc->irq_set_type; 2010 + 
pmc->irq.irq_set_wake = pmc->soc->irq_set_wake; 2128 2011 2129 2012 pmc->domain = irq_domain_add_hierarchy(parent, 0, 96, pmc->dev->of_node, 2130 2013 &tegra_pmc_irq_domain_ops, pmc); ··· 2134 2017 } 2135 2018 2136 2019 return 0; 2020 + } 2021 + 2022 + static int tegra_pmc_clk_notify_cb(struct notifier_block *nb, 2023 + unsigned long action, void *ptr) 2024 + { 2025 + struct tegra_pmc *pmc = container_of(nb, struct tegra_pmc, clk_nb); 2026 + struct clk_notifier_data *data = ptr; 2027 + 2028 + switch (action) { 2029 + case PRE_RATE_CHANGE: 2030 + mutex_lock(&pmc->powergates_lock); 2031 + break; 2032 + 2033 + case POST_RATE_CHANGE: 2034 + pmc->rate = data->new_rate; 2035 + /* fall through */ 2036 + 2037 + case ABORT_RATE_CHANGE: 2038 + mutex_unlock(&pmc->powergates_lock); 2039 + break; 2040 + 2041 + default: 2042 + WARN_ON_ONCE(1); 2043 + return notifier_from_errno(-EINVAL); 2044 + } 2045 + 2046 + return NOTIFY_OK; 2137 2047 } 2138 2048 2139 2049 static int tegra_pmc_probe(struct platform_device *pdev) ··· 2226 2082 pmc->clk = NULL; 2227 2083 } 2228 2084 2085 + /* 2086 + * PCLK clock rate can't be retrieved using CLK API because it 2087 + * causes lockup if CPU enters LP2 idle state from some other 2088 + * CLK notifier, hence we're caching the rate's value locally. 
2089 + */ 2090 + if (pmc->clk) { 2091 + pmc->clk_nb.notifier_call = tegra_pmc_clk_notify_cb; 2092 + err = clk_notifier_register(pmc->clk, &pmc->clk_nb); 2093 + if (err) { 2094 + dev_err(&pdev->dev, 2095 + "failed to register clk notifier\n"); 2096 + return err; 2097 + } 2098 + 2099 + pmc->rate = clk_get_rate(pmc->clk); 2100 + } 2101 + 2229 2102 pmc->dev = &pdev->dev; 2230 2103 2231 2104 tegra_pmc_init(pmc); ··· 2294 2133 cleanup_sysfs: 2295 2134 device_remove_file(&pdev->dev, &dev_attr_reset_reason); 2296 2135 device_remove_file(&pdev->dev, &dev_attr_reset_level); 2136 + clk_notifier_unregister(pmc->clk, &pmc->clk_nb); 2137 + 2297 2138 return err; 2298 2139 } 2299 2140 ··· 2347 2184 2348 2185 static void tegra20_pmc_init(struct tegra_pmc *pmc) 2349 2186 { 2350 - u32 value; 2187 + u32 value, osc, pmu, off; 2351 2188 2352 2189 /* Always enable CPU power request */ 2353 2190 value = tegra_pmc_readl(pmc, PMC_CNTRL); ··· 2361 2198 else 2362 2199 value |= PMC_CNTRL_SYSCLK_POLARITY; 2363 2200 2201 + if (pmc->corereq_high) 2202 + value &= ~PMC_CNTRL_PWRREQ_POLARITY; 2203 + else 2204 + value |= PMC_CNTRL_PWRREQ_POLARITY; 2205 + 2364 2206 /* configure the output polarity while the request is tristated */ 2365 2207 tegra_pmc_writel(pmc, value, PMC_CNTRL); 2366 2208 ··· 2373 2205 value = tegra_pmc_readl(pmc, PMC_CNTRL); 2374 2206 value |= PMC_CNTRL_SYSCLK_OE; 2375 2207 tegra_pmc_writel(pmc, value, PMC_CNTRL); 2208 + 2209 + /* program core timings which are applicable only for suspend state */ 2210 + if (pmc->suspend_mode != TEGRA_SUSPEND_NONE) { 2211 + osc = DIV_ROUND_UP(pmc->core_osc_time * 8192, 1000000); 2212 + pmu = DIV_ROUND_UP(pmc->core_pmu_time * 32768, 1000000); 2213 + off = DIV_ROUND_UP(pmc->core_off_time * 32768, 1000000); 2214 + tegra_pmc_writel(pmc, ((osc << 8) & 0xff00) | (pmu & 0xff), 2215 + PMC_COREPWRGOOD_TIMER); 2216 + tegra_pmc_writel(pmc, off, PMC_COREPWROFF_TIMER); 2217 + } 2376 2218 } 2377 2219 2378 2220 static void tegra20_pmc_setup_irq_polarity(struct 
tegra_pmc *pmc, ··· 2716 2538 TEGRA210_IO_PAD_TABLE(TEGRA_IO_PIN_DESC) 2717 2539 }; 2718 2540 2541 + static const struct tegra_wake_event tegra210_wake_events[] = { 2542 + TEGRA_WAKE_IRQ("rtc", 16, 2), 2543 + }; 2544 + 2719 2545 static const struct tegra_pmc_soc tegra210_pmc_soc = { 2720 2546 .num_powergates = ARRAY_SIZE(tegra210_powergates), 2721 2547 .powergates = tegra210_powergates, ··· 2737 2555 .regs = &tegra20_pmc_regs, 2738 2556 .init = tegra20_pmc_init, 2739 2557 .setup_irq_polarity = tegra20_pmc_setup_irq_polarity, 2558 + .irq_set_wake = tegra210_pmc_irq_set_wake, 2559 + .irq_set_type = tegra210_pmc_irq_set_type, 2740 2560 .reset_sources = tegra210_reset_sources, 2741 2561 .num_reset_sources = ARRAY_SIZE(tegra210_reset_sources), 2742 2562 .reset_levels = NULL, 2743 2563 .num_reset_levels = 0, 2564 + .num_wake_events = ARRAY_SIZE(tegra210_wake_events), 2565 + .wake_events = tegra210_wake_events, 2744 2566 }; 2745 2567 2746 2568 #define TEGRA186_IO_PAD_TABLE(_pad) \ ··· 2866 2680 .regs = &tegra186_pmc_regs, 2867 2681 .init = NULL, 2868 2682 .setup_irq_polarity = tegra186_pmc_setup_irq_polarity, 2683 + .irq_set_wake = tegra186_pmc_irq_set_wake, 2684 + .irq_set_type = tegra186_pmc_irq_set_type, 2869 2685 .reset_sources = tegra186_reset_sources, 2870 2686 .num_reset_sources = ARRAY_SIZE(tegra186_reset_sources), 2871 2687 .reset_levels = tegra186_reset_levels,
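Two small pieces of arithmetic recur in the pmc.c hunks above: converting a time in microseconds to clock ticks with a round-up division (the `PMC_CPUPWRGOOD_TIMER`/`PMC_CPUPWROFF_TIMER` path, which now uses the cached `pmc->rate`), and packing the core OSC/PMU tick counts into one `PMC_COREPWRGOOD_TIMER` register in `tegra20_pmc_init()`. A standalone sketch of both (helper names are hypothetical, the formulas mirror the diff):

```c
#include <stdint.h>

#define USEC_PER_SEC 1000000ULL

/* Microseconds -> ticks at `rate` Hz, rounding up so the hardware
 * never waits shorter than the requested time. Equivalent to the
 * `ticks = usec * rate + USEC_PER_SEC - 1; do_div(ticks, USEC_PER_SEC)`
 * sequence in tegra_pmc_enter_suspend_mode(). */
static uint64_t usec_to_ticks(uint64_t usec, uint64_t rate)
{
	return (usec * rate + USEC_PER_SEC - 1) / USEC_PER_SEC;
}

/* Pack the oscillator and PMU stabilization tick counts into the
 * layout tegra20_pmc_init() writes to PMC_COREPWRGOOD_TIMER:
 * OSC count in bits 15:8, PMU count in bits 7:0. */
static uint32_t pack_corepwrgood(uint32_t osc_ticks, uint32_t pmu_ticks)
{
	return ((osc_ticks << 8) & 0xff00) | (pmu_ticks & 0xff);
}
```

The round-up matters because these are power-good stabilization delays: truncating would make the PMC release power requests slightly early.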
+365
drivers/soc/tegra/regulators-tegra20.c
··· 1 + // SPDX-License-Identifier: GPL-2.0+ 2 + /* 3 + * Voltage regulators coupler for NVIDIA Tegra20 4 + * Copyright (C) 2019 GRATE-DRIVER project 5 + * 6 + * Voltage constraints borrowed from downstream kernel sources 7 + * Copyright (C) 2010-2011 NVIDIA Corporation 8 + */ 9 + 10 + #define pr_fmt(fmt) "tegra voltage-coupler: " fmt 11 + 12 + #include <linux/init.h> 13 + #include <linux/kernel.h> 14 + #include <linux/of.h> 15 + #include <linux/regulator/coupler.h> 16 + #include <linux/regulator/driver.h> 17 + #include <linux/regulator/machine.h> 18 + 19 + struct tegra_regulator_coupler { 20 + struct regulator_coupler coupler; 21 + struct regulator_dev *core_rdev; 22 + struct regulator_dev *cpu_rdev; 23 + struct regulator_dev *rtc_rdev; 24 + int core_min_uV; 25 + }; 26 + 27 + static inline struct tegra_regulator_coupler * 28 + to_tegra_coupler(struct regulator_coupler *coupler) 29 + { 30 + return container_of(coupler, struct tegra_regulator_coupler, coupler); 31 + } 32 + 33 + static int tegra20_core_limit(struct tegra_regulator_coupler *tegra, 34 + struct regulator_dev *core_rdev) 35 + { 36 + int core_min_uV = 0; 37 + int core_max_uV; 38 + int core_cur_uV; 39 + int err; 40 + 41 + if (tegra->core_min_uV > 0) 42 + return tegra->core_min_uV; 43 + 44 + core_cur_uV = regulator_get_voltage_rdev(core_rdev); 45 + if (core_cur_uV < 0) 46 + return core_cur_uV; 47 + 48 + core_max_uV = max(core_cur_uV, 1200000); 49 + 50 + err = regulator_check_voltage(core_rdev, &core_min_uV, &core_max_uV); 51 + if (err) 52 + return err; 53 + 54 + /* 55 + * Limit minimum CORE voltage to a value left from bootloader or, 56 + * if it's unreasonably low value, to the most common 1.2v or to 57 + * whatever maximum value defined via board's device-tree. 
58 + */ 59 + tegra->core_min_uV = core_max_uV; 60 + 61 + pr_info("core minimum voltage limited to %duV\n", tegra->core_min_uV); 62 + 63 + return tegra->core_min_uV; 64 + } 65 + 66 + static int tegra20_core_rtc_max_spread(struct regulator_dev *core_rdev, 67 + struct regulator_dev *rtc_rdev) 68 + { 69 + struct coupling_desc *c_desc = &core_rdev->coupling_desc; 70 + struct regulator_dev *rdev; 71 + int max_spread; 72 + unsigned int i; 73 + 74 + for (i = 1; i < c_desc->n_coupled; i++) { 75 + max_spread = core_rdev->constraints->max_spread[i - 1]; 76 + rdev = c_desc->coupled_rdevs[i]; 77 + 78 + if (rdev == rtc_rdev && max_spread) 79 + return max_spread; 80 + } 81 + 82 + pr_err_once("rtc-core max-spread is undefined in device-tree\n"); 83 + 84 + return 150000; 85 + } 86 + 87 + static int tegra20_core_rtc_update(struct tegra_regulator_coupler *tegra, 88 + struct regulator_dev *core_rdev, 89 + struct regulator_dev *rtc_rdev, 90 + int cpu_uV, int cpu_min_uV) 91 + { 92 + int core_min_uV, core_max_uV = INT_MAX; 93 + int rtc_min_uV, rtc_max_uV = INT_MAX; 94 + int core_target_uV; 95 + int rtc_target_uV; 96 + int max_spread; 97 + int core_uV; 98 + int rtc_uV; 99 + int err; 100 + 101 + /* 102 + * RTC and CORE voltages should be no more than 170mV from each other, 103 + * CPU should be below RTC and CORE by at least 120mV. This applies 104 + * to all Tegra20 SoC's. 105 + */ 106 + max_spread = tegra20_core_rtc_max_spread(core_rdev, rtc_rdev); 107 + 108 + /* 109 + * The core voltage scaling is currently not hooked up in drivers, 110 + * hence we will limit the minimum core voltage to a reasonable value. 111 + * This should be good enough for the time being. 
112 + */ 113 + core_min_uV = tegra20_core_limit(tegra, core_rdev); 114 + if (core_min_uV < 0) 115 + return core_min_uV; 116 + 117 + err = regulator_check_voltage(core_rdev, &core_min_uV, &core_max_uV); 118 + if (err) 119 + return err; 120 + 121 + err = regulator_check_consumers(core_rdev, &core_min_uV, &core_max_uV, 122 + PM_SUSPEND_ON); 123 + if (err) 124 + return err; 125 + 126 + core_uV = regulator_get_voltage_rdev(core_rdev); 127 + if (core_uV < 0) 128 + return core_uV; 129 + 130 + core_min_uV = max(cpu_min_uV + 125000, core_min_uV); 131 + if (core_min_uV > core_max_uV) 132 + return -EINVAL; 133 + 134 + if (cpu_uV + 120000 > core_uV) 135 + pr_err("core-cpu voltage constraint violated: %d %d\n", 136 + core_uV, cpu_uV + 120000); 137 + 138 + rtc_uV = regulator_get_voltage_rdev(rtc_rdev); 139 + if (rtc_uV < 0) 140 + return rtc_uV; 141 + 142 + if (cpu_uV + 120000 > rtc_uV) 143 + pr_err("rtc-cpu voltage constraint violated: %d %d\n", 144 + rtc_uV, cpu_uV + 120000); 145 + 146 + if (abs(core_uV - rtc_uV) > 170000) 147 + pr_err("core-rtc voltage constraint violated: %d %d\n", 148 + core_uV, rtc_uV); 149 + 150 + rtc_min_uV = max(cpu_min_uV + 125000, core_min_uV - max_spread); 151 + 152 + err = regulator_check_voltage(rtc_rdev, &rtc_min_uV, &rtc_max_uV); 153 + if (err) 154 + return err; 155 + 156 + while (core_uV != core_min_uV || rtc_uV != rtc_min_uV) { 157 + if (core_uV < core_min_uV) { 158 + core_target_uV = min(core_uV + max_spread, core_min_uV); 159 + core_target_uV = min(rtc_uV + max_spread, core_target_uV); 160 + } else { 161 + core_target_uV = max(core_uV - max_spread, core_min_uV); 162 + core_target_uV = max(rtc_uV - max_spread, core_target_uV); 163 + } 164 + 165 + err = regulator_set_voltage_rdev(core_rdev, 166 + core_target_uV, 167 + core_max_uV, 168 + PM_SUSPEND_ON); 169 + if (err) 170 + return err; 171 + 172 + core_uV = core_target_uV; 173 + 174 + if (rtc_uV < rtc_min_uV) { 175 + rtc_target_uV = min(rtc_uV + max_spread, rtc_min_uV); 176 + rtc_target_uV = 
min(core_uV + max_spread, rtc_target_uV); 177 + } else { 178 + rtc_target_uV = max(rtc_uV - max_spread, rtc_min_uV); 179 + rtc_target_uV = max(core_uV - max_spread, rtc_target_uV); 180 + } 181 + 182 + err = regulator_set_voltage_rdev(rtc_rdev, 183 + rtc_target_uV, 184 + rtc_max_uV, 185 + PM_SUSPEND_ON); 186 + if (err) 187 + return err; 188 + 189 + rtc_uV = rtc_target_uV; 190 + } 191 + 192 + return 0; 193 + } 194 + 195 + static int tegra20_core_voltage_update(struct tegra_regulator_coupler *tegra, 196 + struct regulator_dev *cpu_rdev, 197 + struct regulator_dev *core_rdev, 198 + struct regulator_dev *rtc_rdev) 199 + { 200 + int cpu_uV; 201 + 202 + cpu_uV = regulator_get_voltage_rdev(cpu_rdev); 203 + if (cpu_uV < 0) 204 + return cpu_uV; 205 + 206 + return tegra20_core_rtc_update(tegra, core_rdev, rtc_rdev, 207 + cpu_uV, cpu_uV); 208 + } 209 + 210 + static int tegra20_cpu_voltage_update(struct tegra_regulator_coupler *tegra, 211 + struct regulator_dev *cpu_rdev, 212 + struct regulator_dev *core_rdev, 213 + struct regulator_dev *rtc_rdev) 214 + { 215 + int cpu_min_uV_consumers = 0; 216 + int cpu_max_uV = INT_MAX; 217 + int cpu_min_uV = 0; 218 + int cpu_uV; 219 + int err; 220 + 221 + err = regulator_check_voltage(cpu_rdev, &cpu_min_uV, &cpu_max_uV); 222 + if (err) 223 + return err; 224 + 225 + err = regulator_check_consumers(cpu_rdev, &cpu_min_uV, &cpu_max_uV, 226 + PM_SUSPEND_ON); 227 + if (err) 228 + return err; 229 + 230 + err = regulator_check_consumers(cpu_rdev, &cpu_min_uV_consumers, 231 + &cpu_max_uV, PM_SUSPEND_ON); 232 + if (err) 233 + return err; 234 + 235 + cpu_uV = regulator_get_voltage_rdev(cpu_rdev); 236 + if (cpu_uV < 0) 237 + return cpu_uV; 238 + 239 + /* 240 + * CPU's regulator may not have any consumers, hence the voltage 241 + * must not be changed in that case because CPU simply won't 242 + * survive the voltage drop if it's running on a higher frequency. 
243 + */ 244 + if (!cpu_min_uV_consumers) 245 + cpu_min_uV = cpu_uV; 246 + 247 + if (cpu_min_uV > cpu_uV) { 248 + err = tegra20_core_rtc_update(tegra, core_rdev, rtc_rdev, 249 + cpu_uV, cpu_min_uV); 250 + if (err) 251 + return err; 252 + 253 + err = regulator_set_voltage_rdev(cpu_rdev, cpu_min_uV, 254 + cpu_max_uV, PM_SUSPEND_ON); 255 + if (err) 256 + return err; 257 + } else if (cpu_min_uV < cpu_uV) { 258 + err = regulator_set_voltage_rdev(cpu_rdev, cpu_min_uV, 259 + cpu_max_uV, PM_SUSPEND_ON); 260 + if (err) 261 + return err; 262 + 263 + err = tegra20_core_rtc_update(tegra, core_rdev, rtc_rdev, 264 + cpu_uV, cpu_min_uV); 265 + if (err) 266 + return err; 267 + } 268 + 269 + return 0; 270 + } 271 + 272 + static int tegra20_regulator_balance_voltage(struct regulator_coupler *coupler, 273 + struct regulator_dev *rdev, 274 + suspend_state_t state) 275 + { 276 + struct tegra_regulator_coupler *tegra = to_tegra_coupler(coupler); 277 + struct regulator_dev *core_rdev = tegra->core_rdev; 278 + struct regulator_dev *cpu_rdev = tegra->cpu_rdev; 279 + struct regulator_dev *rtc_rdev = tegra->rtc_rdev; 280 + 281 + if ((core_rdev != rdev && cpu_rdev != rdev && rtc_rdev != rdev) || 282 + state != PM_SUSPEND_ON) { 283 + pr_err("regulators are not coupled properly\n"); 284 + return -EINVAL; 285 + } 286 + 287 + if (rdev == cpu_rdev) 288 + return tegra20_cpu_voltage_update(tegra, cpu_rdev, 289 + core_rdev, rtc_rdev); 290 + 291 + if (rdev == core_rdev) 292 + return tegra20_core_voltage_update(tegra, cpu_rdev, 293 + core_rdev, rtc_rdev); 294 + 295 + pr_err("changing %s voltage not permitted\n", rdev_get_name(rtc_rdev)); 296 + 297 + return -EPERM; 298 + } 299 + 300 + static int tegra20_regulator_attach(struct regulator_coupler *coupler, 301 + struct regulator_dev *rdev) 302 + { 303 + struct tegra_regulator_coupler *tegra = to_tegra_coupler(coupler); 304 + struct device_node *np = rdev->dev.of_node; 305 + 306 + if (of_property_read_bool(np, "nvidia,tegra-core-regulator") && 307 + 
!tegra->core_rdev) { 308 + tegra->core_rdev = rdev; 309 + return 0; 310 + } 311 + 312 + if (of_property_read_bool(np, "nvidia,tegra-rtc-regulator") && 313 + !tegra->rtc_rdev) { 314 + tegra->rtc_rdev = rdev; 315 + return 0; 316 + } 317 + 318 + if (of_property_read_bool(np, "nvidia,tegra-cpu-regulator") && 319 + !tegra->cpu_rdev) { 320 + tegra->cpu_rdev = rdev; 321 + return 0; 322 + } 323 + 324 + return -EINVAL; 325 + } 326 + 327 + static int tegra20_regulator_detach(struct regulator_coupler *coupler, 328 + struct regulator_dev *rdev) 329 + { 330 + struct tegra_regulator_coupler *tegra = to_tegra_coupler(coupler); 331 + 332 + if (tegra->core_rdev == rdev) { 333 + tegra->core_rdev = NULL; 334 + return 0; 335 + } 336 + 337 + if (tegra->rtc_rdev == rdev) { 338 + tegra->rtc_rdev = NULL; 339 + return 0; 340 + } 341 + 342 + if (tegra->cpu_rdev == rdev) { 343 + tegra->cpu_rdev = NULL; 344 + return 0; 345 + } 346 + 347 + return -EINVAL; 348 + } 349 + 350 + static struct tegra_regulator_coupler tegra20_coupler = { 351 + .coupler = { 352 + .attach_regulator = tegra20_regulator_attach, 353 + .detach_regulator = tegra20_regulator_detach, 354 + .balance_voltage = tegra20_regulator_balance_voltage, 355 + }, 356 + }; 357 + 358 + static int __init tegra_regulator_coupler_init(void) 359 + { 360 + if (!of_machine_is_compatible("nvidia,tegra20")) 361 + return 0; 362 + 363 + return regulator_coupler_register(&tegra20_coupler.coupler); 364 + } 365 + arch_initcall(tegra_regulator_coupler_init);
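The heart of the Tegra20 coupler is the `while` loop in `tegra20_core_rtc_update()`: it walks the CORE and RTC rails toward their targets in increments no larger than `max_spread`, so the `|core - rtc| <= max_spread` coupling constraint holds at every intermediate step. A userspace simulation of that loop (assumptions: `regulator_set_voltage_rdev()` is replaced by plain assignment, and, as the driver guarantees via its `rtc_min_uV` clamp, the starting voltages and the two targets are each within `max_spread` of one another):

```c
#include <assert.h>
#include <stdlib.h>

static int min_int(int a, int b) { return a < b ? a : b; }
static int max_int(int a, int b) { return a > b ? a : b; }

/* Step both rails (in microvolts) toward their minimum targets,
 * mirroring the update order and clamping of the kernel loop. */
static void step_core_rtc(int *core_uV, int *rtc_uV,
			  int core_min_uV, int rtc_min_uV, int max_spread)
{
	while (*core_uV != core_min_uV || *rtc_uV != rtc_min_uV) {
		int core_target, rtc_target;

		if (*core_uV < core_min_uV) {
			core_target = min_int(*core_uV + max_spread, core_min_uV);
			core_target = min_int(*rtc_uV + max_spread, core_target);
		} else {
			core_target = max_int(*core_uV - max_spread, core_min_uV);
			core_target = max_int(*rtc_uV - max_spread, core_target);
		}
		*core_uV = core_target;

		if (*rtc_uV < rtc_min_uV) {
			rtc_target = min_int(*rtc_uV + max_spread, rtc_min_uV);
			rtc_target = min_int(*core_uV + max_spread, rtc_target);
		} else {
			rtc_target = max_int(*rtc_uV - max_spread, rtc_min_uV);
			rtc_target = max_int(*core_uV - max_spread, rtc_target);
		}
		*rtc_uV = rtc_target;

		/* Coupling invariant, valid under the stated precondition. */
		assert(abs(*core_uV - *rtc_uV) <= max_spread);
	}
}
```

Clamping each target against the *other* rail's current voltage (the second `min_int`/`max_int` in each branch) is what keeps the spread bounded even when one rail has much further to travel than the other.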
+317
drivers/soc/tegra/regulators-tegra30.c
··· 1 + // SPDX-License-Identifier: GPL-2.0+ 2 + /* 3 + * Voltage regulators coupler for NVIDIA Tegra30 4 + * Copyright (C) 2019 GRATE-DRIVER project 5 + * 6 + * Voltage constraints borrowed from downstream kernel sources 7 + * Copyright (C) 2010-2011 NVIDIA Corporation 8 + */ 9 + 10 + #define pr_fmt(fmt) "tegra voltage-coupler: " fmt 11 + 12 + #include <linux/init.h> 13 + #include <linux/kernel.h> 14 + #include <linux/of.h> 15 + #include <linux/regulator/coupler.h> 16 + #include <linux/regulator/driver.h> 17 + #include <linux/regulator/machine.h> 18 + 19 + #include <soc/tegra/fuse.h> 20 + 21 + struct tegra_regulator_coupler { 22 + struct regulator_coupler coupler; 23 + struct regulator_dev *core_rdev; 24 + struct regulator_dev *cpu_rdev; 25 + int core_min_uV; 26 + }; 27 + 28 + static inline struct tegra_regulator_coupler * 29 + to_tegra_coupler(struct regulator_coupler *coupler) 30 + { 31 + return container_of(coupler, struct tegra_regulator_coupler, coupler); 32 + } 33 + 34 + static int tegra30_core_limit(struct tegra_regulator_coupler *tegra, 35 + struct regulator_dev *core_rdev) 36 + { 37 + int core_min_uV = 0; 38 + int core_max_uV; 39 + int core_cur_uV; 40 + int err; 41 + 42 + if (tegra->core_min_uV > 0) 43 + return tegra->core_min_uV; 44 + 45 + core_cur_uV = regulator_get_voltage_rdev(core_rdev); 46 + if (core_cur_uV < 0) 47 + return core_cur_uV; 48 + 49 + core_max_uV = max(core_cur_uV, 1200000); 50 + 51 + err = regulator_check_voltage(core_rdev, &core_min_uV, &core_max_uV); 52 + if (err) 53 + return err; 54 + 55 + /* 56 + * Limit minimum CORE voltage to a value left from bootloader or, 57 + * if it's unreasonably low value, to the most common 1.2v or to 58 + * whatever maximum value defined via board's device-tree. 
59 + */ 60 + tegra->core_min_uV = core_max_uV; 61 + 62 + pr_info("core minimum voltage limited to %duV\n", tegra->core_min_uV); 63 + 64 + return tegra->core_min_uV; 65 + } 66 + 67 + static int tegra30_core_cpu_limit(int cpu_uV) 68 + { 69 + if (cpu_uV < 800000) 70 + return 950000; 71 + 72 + if (cpu_uV < 900000) 73 + return 1000000; 74 + 75 + if (cpu_uV < 1000000) 76 + return 1100000; 77 + 78 + if (cpu_uV < 1100000) 79 + return 1200000; 80 + 81 + if (cpu_uV < 1250000) { 82 + switch (tegra_sku_info.cpu_speedo_id) { 83 + case 0 ... 1: 84 + case 4: 85 + case 7 ... 8: 86 + return 1200000; 87 + 88 + default: 89 + return 1300000; 90 + } 91 + } 92 + 93 + return -EINVAL; 94 + } 95 + 96 + static int tegra30_voltage_update(struct tegra_regulator_coupler *tegra, 97 + struct regulator_dev *cpu_rdev, 98 + struct regulator_dev *core_rdev) 99 + { 100 + int core_min_uV, core_max_uV = INT_MAX; 101 + int cpu_min_uV, cpu_max_uV = INT_MAX; 102 + int cpu_min_uV_consumers = 0; 103 + int core_min_limited_uV; 104 + int core_target_uV; 105 + int cpu_target_uV; 106 + int core_max_step; 107 + int cpu_max_step; 108 + int max_spread; 109 + int core_uV; 110 + int cpu_uV; 111 + int err; 112 + 113 + /* 114 + * CPU voltage should not got lower than 300mV from the CORE. 115 + * CPU voltage should stay below the CORE by 100mV+, depending 116 + * by the CORE voltage. This applies to all Tegra30 SoC's. 
+	 */
+	max_spread = cpu_rdev->constraints->max_spread[0];
+	cpu_max_step = cpu_rdev->constraints->max_uV_step;
+	core_max_step = core_rdev->constraints->max_uV_step;
+
+	if (!max_spread) {
+		pr_err_once("cpu-core max-spread is undefined in device-tree\n");
+		max_spread = 300000;
+	}
+
+	if (!cpu_max_step) {
+		pr_err_once("cpu max-step is undefined in device-tree\n");
+		cpu_max_step = 150000;
+	}
+
+	if (!core_max_step) {
+		pr_err_once("core max-step is undefined in device-tree\n");
+		core_max_step = 150000;
+	}
+
+	/*
+	 * The CORE voltage scaling is currently not hooked up in drivers,
+	 * hence we will limit the minimum CORE voltage to a reasonable value.
+	 * This should be good enough for the time being.
+	 */
+	core_min_uV = tegra30_core_limit(tegra, core_rdev);
+	if (core_min_uV < 0)
+		return core_min_uV;
+
+	err = regulator_check_consumers(core_rdev, &core_min_uV, &core_max_uV,
+					PM_SUSPEND_ON);
+	if (err)
+		return err;
+
+	core_uV = regulator_get_voltage_rdev(core_rdev);
+	if (core_uV < 0)
+		return core_uV;
+
+	cpu_min_uV = core_min_uV - max_spread;
+
+	err = regulator_check_consumers(cpu_rdev, &cpu_min_uV, &cpu_max_uV,
+					PM_SUSPEND_ON);
+	if (err)
+		return err;
+
+	err = regulator_check_consumers(cpu_rdev, &cpu_min_uV_consumers,
+					&cpu_max_uV, PM_SUSPEND_ON);
+	if (err)
+		return err;
+
+	err = regulator_check_voltage(cpu_rdev, &cpu_min_uV, &cpu_max_uV);
+	if (err)
+		return err;
+
+	cpu_uV = regulator_get_voltage_rdev(cpu_rdev);
+	if (cpu_uV < 0)
+		return cpu_uV;
+
+	/*
+	 * CPU's regulator may not have any consumers, hence the voltage
+	 * must not be changed in that case because CPU simply won't
+	 * survive the voltage drop if it's running on a higher frequency.
+	 */
+	if (!cpu_min_uV_consumers)
+		cpu_min_uV = cpu_uV;
+
+	/*
+	 * Bootloader shall set up voltages correctly, but if it
+	 * happens that there is a violation, then try to fix it
+	 * at first.
+	 */
+	core_min_limited_uV = tegra30_core_cpu_limit(cpu_uV);
+	if (core_min_limited_uV < 0)
+		return core_min_limited_uV;
+
+	core_min_uV = max(core_min_uV, tegra30_core_cpu_limit(cpu_min_uV));
+
+	err = regulator_check_voltage(core_rdev, &core_min_uV, &core_max_uV);
+	if (err)
+		return err;
+
+	if (core_min_limited_uV > core_uV) {
+		pr_err("core voltage constraint violated: %d %d %d\n",
+		       core_uV, core_min_limited_uV, cpu_uV);
+		goto update_core;
+	}
+
+	while (cpu_uV != cpu_min_uV || core_uV != core_min_uV) {
+		if (cpu_uV < cpu_min_uV) {
+			cpu_target_uV = min(cpu_uV + cpu_max_step, cpu_min_uV);
+		} else {
+			cpu_target_uV = max(cpu_uV - cpu_max_step, cpu_min_uV);
+			cpu_target_uV = max(core_uV - max_spread, cpu_target_uV);
+		}
+
+		err = regulator_set_voltage_rdev(cpu_rdev,
+						 cpu_target_uV,
+						 cpu_max_uV,
+						 PM_SUSPEND_ON);
+		if (err)
+			return err;
+
+		cpu_uV = cpu_target_uV;
+update_core:
+		core_min_limited_uV = tegra30_core_cpu_limit(cpu_uV);
+		if (core_min_limited_uV < 0)
+			return core_min_limited_uV;
+
+		core_target_uV = max(core_min_limited_uV, core_min_uV);
+
+		if (core_uV < core_target_uV) {
+			core_target_uV = min(core_target_uV, core_uV + core_max_step);
+			core_target_uV = min(core_target_uV, cpu_uV + max_spread);
+		} else {
+			core_target_uV = max(core_target_uV, core_uV - core_max_step);
+		}
+
+		err = regulator_set_voltage_rdev(core_rdev,
+						 core_target_uV,
+						 core_max_uV,
+						 PM_SUSPEND_ON);
+		if (err)
+			return err;
+
+		core_uV = core_target_uV;
+	}
+
+	return 0;
+}
+
+static int
+tegra30_regulator_balance_voltage(struct regulator_coupler *coupler,
+				  struct regulator_dev *rdev,
+				  suspend_state_t state)
+{
+	struct tegra_regulator_coupler *tegra = to_tegra_coupler(coupler);
+	struct regulator_dev *core_rdev = tegra->core_rdev;
+	struct regulator_dev *cpu_rdev = tegra->cpu_rdev;
+
+	if ((core_rdev != rdev && cpu_rdev != rdev) || state != PM_SUSPEND_ON) {
+		pr_err("regulators are not coupled properly\n");
+		return -EINVAL;
+	}
+
+	return tegra30_voltage_update(tegra, cpu_rdev, core_rdev);
+}
+
+static int tegra30_regulator_attach(struct regulator_coupler *coupler,
+				    struct regulator_dev *rdev)
+{
+	struct tegra_regulator_coupler *tegra = to_tegra_coupler(coupler);
+	struct device_node *np = rdev->dev.of_node;
+
+	if (of_property_read_bool(np, "nvidia,tegra-core-regulator") &&
+	    !tegra->core_rdev) {
+		tegra->core_rdev = rdev;
+		return 0;
+	}
+
+	if (of_property_read_bool(np, "nvidia,tegra-cpu-regulator") &&
+	    !tegra->cpu_rdev) {
+		tegra->cpu_rdev = rdev;
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static int tegra30_regulator_detach(struct regulator_coupler *coupler,
+				    struct regulator_dev *rdev)
+{
+	struct tegra_regulator_coupler *tegra = to_tegra_coupler(coupler);
+
+	if (tegra->core_rdev == rdev) {
+		tegra->core_rdev = NULL;
+		return 0;
+	}
+
+	if (tegra->cpu_rdev == rdev) {
+		tegra->cpu_rdev = NULL;
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static struct tegra_regulator_coupler tegra30_coupler = {
+	.coupler = {
+		.attach_regulator = tegra30_regulator_attach,
+		.detach_regulator = tegra30_regulator_detach,
+		.balance_voltage = tegra30_regulator_balance_voltage,
+	},
+};
+
+static int __init tegra_regulator_coupler_init(void)
+{
+	if (!of_machine_is_compatible("nvidia,tegra30"))
+		return 0;
+
+	return regulator_coupler_register(&tegra30_coupler.coupler);
+}
+arch_initcall(tegra_regulator_coupler_init);
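The core of the coupler above is the stepping loop: both rails move toward their targets in increments bounded by max_uV_step, and after every step the CORE rail stays no more than max_spread above the CPU rail. The following is a hypothetical, self-contained model of that convergence loop (it omits the CORE-follows-CPU limit table and all regulator-framework calls, and the function names are illustrative, not the kernel's):

```c
#include <assert.h>

static int min_i(int a, int b) { return a < b ? a : b; }
static int max_i(int a, int b) { return a > b ? a : b; }

/*
 * Step both rails toward (cpu_min_uV, core_min_uV). On every iteration the
 * CPU rail moves first by at most max_step, never dropping more than
 * max_spread below the current CORE voltage; the CORE rail then follows by
 * at most max_step, never climbing more than max_spread above the CPU rail.
 * Returns the number of iterations needed to converge.
 */
static int balance(int cpu_uV, int core_uV, int cpu_min_uV, int core_min_uV,
		   int max_step, int max_spread, int *out_cpu, int *out_core)
{
	int iters = 0;

	while (cpu_uV != cpu_min_uV || core_uV != core_min_uV) {
		int cpu_target, core_target;

		if (cpu_uV < cpu_min_uV) {
			cpu_target = min_i(cpu_uV + max_step, cpu_min_uV);
		} else {
			cpu_target = max_i(cpu_uV - max_step, cpu_min_uV);
			cpu_target = max_i(core_uV - max_spread, cpu_target);
		}
		cpu_uV = cpu_target;

		core_target = core_min_uV;
		if (core_uV < core_target) {
			core_target = min_i(core_target, core_uV + max_step);
			core_target = min_i(core_target, cpu_uV + max_spread);
		} else {
			core_target = max_i(core_target, core_uV - max_step);
		}
		core_uV = core_target;

		iters++;
	}

	*out_cpu = cpu_uV;
	*out_core = core_uV;
	return iters;
}
```

With a 150000 µV step, raising CPU from 1.0 V to 1.2 V converges in two iterations while the spread constraint holds throughout; the same bound applies when scaling down.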
+1
drivers/soc/ti/Makefile
···
 knav_qmss-y := knav_qmss_queue.o knav_qmss_acc.o
 obj-$(CONFIG_KEYSTONE_NAVIGATOR_DMA) += knav_dma.o
 obj-$(CONFIG_AMX3_PM) += pm33xx.o
+obj-$(CONFIG_ARCH_OMAP2PLUS) += omap_prm.o
 obj-$(CONFIG_WKUP_M3_IPC) += wkup_m3_ipc.o
 obj-$(CONFIG_TI_SCI_PM_DOMAINS) += ti_sci_pm_domains.o
 obj-$(CONFIG_TI_SCI_INTA_MSI_DOMAIN) += ti_sci_inta_msi.o
+391
drivers/soc/ti/omap_prm.c
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * OMAP2+ PRM driver
+ *
+ * Copyright (C) 2019 Texas Instruments Incorporated - http://www.ti.com/
+ *	Tero Kristo <t-kristo@ti.com>
+ */
+
+#include <linux/kernel.h>
+#include <linux/device.h>
+#include <linux/io.h>
+#include <linux/iopoll.h>
+#include <linux/of.h>
+#include <linux/of_device.h>
+#include <linux/platform_device.h>
+#include <linux/reset-controller.h>
+#include <linux/delay.h>
+
+#include <linux/platform_data/ti-prm.h>
+
+struct omap_rst_map {
+	s8 rst;
+	s8 st;
+};
+
+struct omap_prm_data {
+	u32 base;
+	const char *name;
+	const char *clkdm_name;
+	u16 rstctrl;
+	u16 rstst;
+	const struct omap_rst_map *rstmap;
+	u8 flags;
+};
+
+struct omap_prm {
+	const struct omap_prm_data *data;
+	void __iomem *base;
+};
+
+struct omap_reset_data {
+	struct reset_controller_dev rcdev;
+	struct omap_prm *prm;
+	u32 mask;
+	spinlock_t lock;
+	struct clockdomain *clkdm;
+	struct device *dev;
+};
+
+#define to_omap_reset_data(p) container_of((p), struct omap_reset_data, rcdev)
+
+#define OMAP_MAX_RESETS 8
+#define OMAP_RESET_MAX_WAIT 10000
+
+#define OMAP_PRM_HAS_RSTCTRL BIT(0)
+#define OMAP_PRM_HAS_RSTST BIT(1)
+#define OMAP_PRM_HAS_NO_CLKDM BIT(2)
+
+#define OMAP_PRM_HAS_RESETS (OMAP_PRM_HAS_RSTCTRL | OMAP_PRM_HAS_RSTST)
+
+static const struct omap_rst_map rst_map_0[] = {
+	{ .rst = 0, .st = 0 },
+	{ .rst = -1 },
+};
+
+static const struct omap_rst_map rst_map_01[] = {
+	{ .rst = 0, .st = 0 },
+	{ .rst = 1, .st = 1 },
+	{ .rst = -1 },
+};
+
+static const struct omap_rst_map rst_map_012[] = {
+	{ .rst = 0, .st = 0 },
+	{ .rst = 1, .st = 1 },
+	{ .rst = 2, .st = 2 },
+	{ .rst = -1 },
+};
+
+static const struct omap_prm_data omap4_prm_data[] = {
+	{ .name = "tesla", .base = 0x4a306400, .rstctrl = 0x10, .rstst = 0x14, .rstmap = rst_map_01 },
+	{ .name = "core", .base = 0x4a306700, .rstctrl = 0x210, .rstst = 0x214, .clkdm_name = "ducati", .rstmap = rst_map_012 },
+	{ .name = "ivahd", .base = 0x4a306f00, .rstctrl = 0x10, .rstst = 0x14, .rstmap = rst_map_012 },
+	{ .name = "device", .base = 0x4a307b00, .rstctrl = 0x0, .rstst = 0x4, .rstmap = rst_map_01, .flags = OMAP_PRM_HAS_RSTCTRL | OMAP_PRM_HAS_NO_CLKDM },
+	{ },
+};
+
+static const struct omap_prm_data omap5_prm_data[] = {
+	{ .name = "dsp", .base = 0x4ae06400, .rstctrl = 0x10, .rstst = 0x14, .rstmap = rst_map_01 },
+	{ .name = "core", .base = 0x4ae06700, .rstctrl = 0x210, .rstst = 0x214, .clkdm_name = "ipu", .rstmap = rst_map_012 },
+	{ .name = "iva", .base = 0x4ae07200, .rstctrl = 0x10, .rstst = 0x14, .rstmap = rst_map_012 },
+	{ .name = "device", .base = 0x4ae07c00, .rstctrl = 0x0, .rstst = 0x4, .rstmap = rst_map_01, .flags = OMAP_PRM_HAS_RSTCTRL | OMAP_PRM_HAS_NO_CLKDM },
+	{ },
+};
+
+static const struct omap_prm_data dra7_prm_data[] = {
+	{ .name = "dsp1", .base = 0x4ae06400, .rstctrl = 0x10, .rstst = 0x14, .rstmap = rst_map_01 },
+	{ .name = "ipu", .base = 0x4ae06500, .rstctrl = 0x10, .rstst = 0x14, .clkdm_name = "ipu1", .rstmap = rst_map_012 },
+	{ .name = "core", .base = 0x4ae06700, .rstctrl = 0x210, .rstst = 0x214, .clkdm_name = "ipu2", .rstmap = rst_map_012 },
+	{ .name = "iva", .base = 0x4ae06f00, .rstctrl = 0x10, .rstst = 0x14, .rstmap = rst_map_012 },
+	{ .name = "dsp2", .base = 0x4ae07b00, .rstctrl = 0x10, .rstst = 0x14, .rstmap = rst_map_01 },
+	{ .name = "eve1", .base = 0x4ae07b40, .rstctrl = 0x10, .rstst = 0x14, .rstmap = rst_map_01 },
+	{ .name = "eve2", .base = 0x4ae07b80, .rstctrl = 0x10, .rstst = 0x14, .rstmap = rst_map_01 },
+	{ .name = "eve3", .base = 0x4ae07bc0, .rstctrl = 0x10, .rstst = 0x14, .rstmap = rst_map_01 },
+	{ .name = "eve4", .base = 0x4ae07c00, .rstctrl = 0x10, .rstst = 0x14, .rstmap = rst_map_01 },
+	{ },
+};
+
+static const struct omap_rst_map am3_per_rst_map[] = {
+	{ .rst = 1 },
+	{ .rst = -1 },
+};
+
+static const struct omap_rst_map am3_wkup_rst_map[] = {
+	{ .rst = 3, .st = 5 },
+	{ .rst = -1 },
+};
+
+static const struct omap_prm_data am3_prm_data[] = {
+	{ .name = "per", .base = 0x44e00c00, .rstctrl = 0x0, .rstmap = am3_per_rst_map, .flags = OMAP_PRM_HAS_RSTCTRL, .clkdm_name = "pruss_ocp" },
+	{ .name = "wkup", .base = 0x44e00d00, .rstctrl = 0x0, .rstst = 0xc, .rstmap = am3_wkup_rst_map, .flags = OMAP_PRM_HAS_RSTCTRL | OMAP_PRM_HAS_NO_CLKDM },
+	{ .name = "device", .base = 0x44e00f00, .rstctrl = 0x0, .rstst = 0x8, .rstmap = rst_map_01, .flags = OMAP_PRM_HAS_RSTCTRL | OMAP_PRM_HAS_NO_CLKDM },
+	{ .name = "gfx", .base = 0x44e01100, .rstctrl = 0x4, .rstst = 0x14, .rstmap = rst_map_0, .clkdm_name = "gfx_l3" },
+	{ },
+};
+
+static const struct omap_rst_map am4_per_rst_map[] = {
+	{ .rst = 1, .st = 0 },
+	{ .rst = -1 },
+};
+
+static const struct omap_rst_map am4_device_rst_map[] = {
+	{ .rst = 0, .st = 1 },
+	{ .rst = 1, .st = 0 },
+	{ .rst = -1 },
+};
+
+static const struct omap_prm_data am4_prm_data[] = {
+	{ .name = "gfx", .base = 0x44df0400, .rstctrl = 0x10, .rstst = 0x14, .rstmap = rst_map_0, .clkdm_name = "gfx_l3" },
+	{ .name = "per", .base = 0x44df0800, .rstctrl = 0x10, .rstst = 0x14, .rstmap = am4_per_rst_map, .clkdm_name = "pruss_ocp" },
+	{ .name = "wkup", .base = 0x44df2000, .rstctrl = 0x10, .rstst = 0x14, .rstmap = am3_wkup_rst_map, .flags = OMAP_PRM_HAS_NO_CLKDM },
+	{ .name = "device", .base = 0x44df4000, .rstctrl = 0x0, .rstst = 0x4, .rstmap = am4_device_rst_map, .flags = OMAP_PRM_HAS_RSTCTRL | OMAP_PRM_HAS_NO_CLKDM },
+	{ },
+};
+
+static const struct of_device_id omap_prm_id_table[] = {
+	{ .compatible = "ti,omap4-prm-inst", .data = omap4_prm_data },
+	{ .compatible = "ti,omap5-prm-inst", .data = omap5_prm_data },
+	{ .compatible = "ti,dra7-prm-inst", .data = dra7_prm_data },
+	{ .compatible = "ti,am3-prm-inst", .data = am3_prm_data },
+	{ .compatible = "ti,am4-prm-inst", .data = am4_prm_data },
+	{ },
+};
+
+static bool _is_valid_reset(struct omap_reset_data *reset, unsigned long id)
+{
+	if (reset->mask & BIT(id))
+		return true;
+
+	return false;
+}
+
+static int omap_reset_get_st_bit(struct omap_reset_data *reset,
+				 unsigned long id)
+{
+	const struct omap_rst_map *map = reset->prm->data->rstmap;
+
+	while (map->rst >= 0) {
+		if (map->rst == id)
+			return map->st;
+
+		map++;
+	}
+
+	return id;
+}
+
+static int omap_reset_status(struct reset_controller_dev *rcdev,
+			     unsigned long id)
+{
+	struct omap_reset_data *reset = to_omap_reset_data(rcdev);
+	u32 v;
+	int st_bit = omap_reset_get_st_bit(reset, id);
+	bool has_rstst = reset->prm->data->rstst ||
+		(reset->prm->data->flags & OMAP_PRM_HAS_RSTST);
+
+	/* Check if we have rstst */
+	if (!has_rstst)
+		return -ENOTSUPP;
+
+	/* Check if hw reset line is asserted */
+	v = readl_relaxed(reset->prm->base + reset->prm->data->rstctrl);
+	if (v & BIT(id))
+		return 1;
+
+	/*
+	 * Check reset status, high value means reset sequence has been
+	 * completed successfully so we can return 0 here (reset deasserted)
+	 */
+	v = readl_relaxed(reset->prm->base + reset->prm->data->rstst);
+	v >>= st_bit;
+	v &= 1;
+
+	return !v;
+}
+
+static int omap_reset_assert(struct reset_controller_dev *rcdev,
+			     unsigned long id)
+{
+	struct omap_reset_data *reset = to_omap_reset_data(rcdev);
+	u32 v;
+	unsigned long flags;
+
+	/* assert the reset control line */
+	spin_lock_irqsave(&reset->lock, flags);
+	v = readl_relaxed(reset->prm->base + reset->prm->data->rstctrl);
+	v |= 1 << id;
+	writel_relaxed(v, reset->prm->base + reset->prm->data->rstctrl);
+	spin_unlock_irqrestore(&reset->lock, flags);
+
+	return 0;
+}
+
+static int omap_reset_deassert(struct reset_controller_dev *rcdev,
+			       unsigned long id)
+{
+	struct omap_reset_data *reset = to_omap_reset_data(rcdev);
+	u32 v;
+	int st_bit;
+	bool has_rstst;
+	unsigned long flags;
+	struct ti_prm_platform_data *pdata = dev_get_platdata(reset->dev);
+	int ret = 0;
+
+	has_rstst = reset->prm->data->rstst ||
+		(reset->prm->data->flags & OMAP_PRM_HAS_RSTST);
+
+	if (has_rstst) {
+		st_bit = omap_reset_get_st_bit(reset, id);
+
+		/* Clear the reset status by writing 1 to the status bit */
+		v = 1 << st_bit;
+		writel_relaxed(v, reset->prm->base + reset->prm->data->rstst);
+	}
+
+	if (reset->clkdm)
+		pdata->clkdm_deny_idle(reset->clkdm);
+
+	/* de-assert the reset control line */
+	spin_lock_irqsave(&reset->lock, flags);
+	v = readl_relaxed(reset->prm->base + reset->prm->data->rstctrl);
+	v &= ~(1 << id);
+	writel_relaxed(v, reset->prm->base + reset->prm->data->rstctrl);
+	spin_unlock_irqrestore(&reset->lock, flags);
+
+	if (!has_rstst)
+		goto exit;
+
+	/* wait for the status to be set */
+	ret = readl_relaxed_poll_timeout(reset->prm->base +
+					 reset->prm->data->rstst,
+					 v, v & BIT(st_bit), 1,
+					 OMAP_RESET_MAX_WAIT);
+	if (ret)
+		pr_err("%s: timedout waiting for %s:%lu\n", __func__,
+		       reset->prm->data->name, id);
+
+exit:
+	if (reset->clkdm)
+		pdata->clkdm_allow_idle(reset->clkdm);
+
+	return ret;
+}
+
+static const struct reset_control_ops omap_reset_ops = {
+	.assert		= omap_reset_assert,
+	.deassert	= omap_reset_deassert,
+	.status		= omap_reset_status,
+};
+
+static int omap_prm_reset_xlate(struct reset_controller_dev *rcdev,
+				const struct of_phandle_args *reset_spec)
+{
+	struct omap_reset_data *reset = to_omap_reset_data(rcdev);
+
+	if (!_is_valid_reset(reset, reset_spec->args[0]))
+		return -EINVAL;
+
+	return reset_spec->args[0];
+}
+
+static int omap_prm_reset_init(struct platform_device *pdev,
+			       struct omap_prm *prm)
+{
+	struct omap_reset_data *reset;
+	const struct omap_rst_map *map;
+	struct ti_prm_platform_data *pdata = dev_get_platdata(&pdev->dev);
+	char buf[32];
+
+	/*
+	 * Check if we have controllable resets. If either rstctrl is non-zero
+	 * or OMAP_PRM_HAS_RSTCTRL flag is set, we have reset control register
+	 * for the domain.
+	 */
+	if (!prm->data->rstctrl && !(prm->data->flags & OMAP_PRM_HAS_RSTCTRL))
+		return 0;
+
+	/* Check if we have the pdata callbacks in place */
+	if (!pdata || !pdata->clkdm_lookup || !pdata->clkdm_deny_idle ||
+	    !pdata->clkdm_allow_idle)
+		return -EINVAL;
+
+	map = prm->data->rstmap;
+	if (!map)
+		return -EINVAL;
+
+	reset = devm_kzalloc(&pdev->dev, sizeof(*reset), GFP_KERNEL);
+	if (!reset)
+		return -ENOMEM;
+
+	reset->rcdev.owner = THIS_MODULE;
+	reset->rcdev.ops = &omap_reset_ops;
+	reset->rcdev.of_node = pdev->dev.of_node;
+	reset->rcdev.nr_resets = OMAP_MAX_RESETS;
+	reset->rcdev.of_xlate = omap_prm_reset_xlate;
+	reset->rcdev.of_reset_n_cells = 1;
+	reset->dev = &pdev->dev;
+	spin_lock_init(&reset->lock);
+
+	reset->prm = prm;
+
+	sprintf(buf, "%s_clkdm", prm->data->clkdm_name ? prm->data->clkdm_name :
+		prm->data->name);
+
+	if (!(prm->data->flags & OMAP_PRM_HAS_NO_CLKDM)) {
+		reset->clkdm = pdata->clkdm_lookup(buf);
+		if (!reset->clkdm)
+			return -EINVAL;
+	}
+
+	while (map->rst >= 0) {
+		reset->mask |= BIT(map->rst);
+		map++;
+	}
+
+	return devm_reset_controller_register(&pdev->dev, &reset->rcdev);
+}
+
+static int omap_prm_probe(struct platform_device *pdev)
+{
+	struct resource *res;
+	const struct omap_prm_data *data;
+	struct omap_prm *prm;
+	const struct of_device_id *match;
+
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	if (!res)
+		return -ENODEV;
+
+	match = of_match_device(omap_prm_id_table, &pdev->dev);
+	if (!match)
+		return -ENOTSUPP;
+
+	prm = devm_kzalloc(&pdev->dev, sizeof(*prm), GFP_KERNEL);
+	if (!prm)
+		return -ENOMEM;
+
+	data = match->data;
+
+	while (data->base != res->start) {
+		if (!data->base)
+			return -EINVAL;
+		data++;
+	}
+
+	prm->data = data;
+
+	prm->base = devm_ioremap_resource(&pdev->dev, res);
+	if (IS_ERR(prm->base))
+		return PTR_ERR(prm->base);
+
+	return omap_prm_reset_init(pdev, prm);
+}
+
+static struct platform_driver omap_prm_driver = {
+	.probe = omap_prm_probe,
+	.driver = {
+		.name		= KBUILD_MODNAME,
+		.of_match_table	= omap_prm_id_table,
+	},
+};
+builtin_platform_driver(omap_prm_driver);
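The driver's `omap_rst_map` tables are terminated by an entry with `rst = -1`; `omap_reset_get_st_bit()` scans such a table for a reset line's status bit and falls back to the line number itself, and `omap_prm_reset_init()` walks the same table to build the valid-reset mask. A minimal standalone sketch of those two walks (the struct is copied from the driver; the helper names are illustrative):

```c
#include <assert.h>

struct omap_rst_map {
	signed char rst;	/* reset control bit, -1 terminates the map */
	signed char st;		/* corresponding reset status bit */
};

/* AM33xx WKUP domain: reset bit 3 reports its status on bit 5 */
static const struct omap_rst_map am3_wkup_rst_map[] = {
	{ .rst = 3, .st = 5 },
	{ .rst = -1 },
};

/* Return the status bit for a reset line; default to the line number. */
static int st_bit_for(const struct omap_rst_map *map, int id)
{
	for (; map->rst >= 0; map++)
		if (map->rst == id)
			return map->st;
	return id;
}

/* Build the bitmask of reset lines the map declares valid. */
static unsigned int build_mask(const struct omap_rst_map *map)
{
	unsigned int mask = 0;

	for (; map->rst >= 0; map++)
		mask |= 1u << map->rst;
	return mask;
}
```

The sentinel-terminated table keeps per-SoC data compact: only lines whose status bit differs from the control bit need an explicit entry.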
+8 -2
drivers/soc/xilinx/zynqmp_pm_domains.c
···
 /*
  * ZynqMP Generic PM domain support
  *
- * Copyright (C) 2015-2018 Xilinx, Inc.
+ * Copyright (C) 2015-2019 Xilinx, Inc.
  *
  *  Davorin Mista <davorin.mista@aggios.com>
  *  Jolly Shah <jollys@xilinx.com>
···
 #define ZYNQMP_PM_DOMAIN_REQUESTED	BIT(0)
 
 static const struct zynqmp_eemi_ops *eemi_ops;
+
+static int min_capability;
 
 /**
  * struct zynqmp_pm_domain - Wrapper around struct generic_pm_domain
···
 	int ret;
 	struct pm_domain_data *pdd, *tmp;
 	struct zynqmp_pm_domain *pd;
-	u32 capabilities = 0;
+	u32 capabilities = min_capability;
 	bool may_wakeup;
 
 	if (!eemi_ops->set_requirement)
···
 			       GFP_KERNEL);
 	if (!domains)
 		return -ENOMEM;
+
+	if (!of_device_is_compatible(dev->parent->of_node,
+				     "xlnx,zynqmp-firmware"))
+		min_capability = ZYNQMP_PM_CAPABILITY_UNUSABLE;
 
 	for (i = 0; i < ZYNQMP_NUM_DOMAINS; i++, pd++) {
 		pd->node_id = 0;
+9
include/dt-bindings/power/qcom-rpmpd.h
···
 #define RPMH_REGULATOR_LEVEL_TURBO 384
 #define RPMH_REGULATOR_LEVEL_TURBO_L1 416
 
+/* MSM8976 Power Domain Indexes */
+#define MSM8976_VDDCX 0
+#define MSM8976_VDDCX_AO 1
+#define MSM8976_VDDCX_VFL 2
+#define MSM8976_VDDMX 3
+#define MSM8976_VDDMX_AO 4
+#define MSM8976_VDDMX_VFL 5
+
 /* MSM8996 Power Domain Indexes */
 #define MSM8996_VDDCX 0
 #define MSM8996_VDDCX_AO 1
···
 #define RPM_SMD_LEVEL_NOM_PLUS 320
 #define RPM_SMD_LEVEL_TURBO 384
 #define RPM_SMD_LEVEL_TURBO_NO_CPR 416
+#define RPM_SMD_LEVEL_TURBO_HIGH 448
 #define RPM_SMD_LEVEL_BINNING 512
 
 #endif
+74
include/dt-bindings/reset/amlogic,meson-a1-reset.h
+/* SPDX-License-Identifier: (GPL-2.0+ OR MIT)
+ *
+ * Copyright (c) 2019 Amlogic, Inc. All rights reserved.
+ * Author: Xingyu Chen <xingyu.chen@amlogic.com>
+ *
+ */
+
+#ifndef _DT_BINDINGS_AMLOGIC_MESON_A1_RESET_H
+#define _DT_BINDINGS_AMLOGIC_MESON_A1_RESET_H
+
+/* RESET0 */
+/* 0 */
+#define RESET_AM2AXI_VAD 1
+/* 2-3 */
+#define RESET_PSRAM 4
+#define RESET_PAD_CTRL 5
+/* 6 */
+#define RESET_TEMP_SENSOR 7
+#define RESET_AM2AXI_DEV 8
+/* 9 */
+#define RESET_SPICC_A 10
+#define RESET_MSR_CLK 11
+#define RESET_AUDIO 12
+#define RESET_ANALOG_CTRL 13
+#define RESET_SAR_ADC 14
+#define RESET_AUDIO_VAD 15
+#define RESET_CEC 16
+#define RESET_PWM_EF 17
+#define RESET_PWM_CD 18
+#define RESET_PWM_AB 19
+/* 20 */
+#define RESET_IR_CTRL 21
+#define RESET_I2C_S_A 22
+/* 23 */
+#define RESET_I2C_M_D 24
+#define RESET_I2C_M_C 25
+#define RESET_I2C_M_B 26
+#define RESET_I2C_M_A 27
+#define RESET_I2C_PROD_AHB 28
+#define RESET_I2C_PROD 29
+/* 30-31 */
+
+/* RESET1 */
+#define RESET_ACODEC 32
+#define RESET_DMA 33
+#define RESET_SD_EMMC_A 34
+/* 35 */
+#define RESET_USBCTRL 36
+/* 37 */
+#define RESET_USBPHY 38
+/* 39-41 */
+#define RESET_RSA 42
+#define RESET_DMC 43
+/* 44 */
+#define RESET_IRQ_CTRL 45
+/* 46 */
+#define RESET_NIC_VAD 47
+#define RESET_NIC_AXI 48
+#define RESET_RAMA 49
+#define RESET_RAMB 50
+/* 51-52 */
+#define RESET_ROM 53
+#define RESET_SPIFC 54
+#define RESET_GIC 55
+#define RESET_UART_C 56
+#define RESET_UART_B 57
+#define RESET_UART_A 58
+#define RESET_OSC_RING 59
+/* 60-63 */
+
+/* RESET2 */
+/* 64-95 */
+
+#endif
+2
include/dt-bindings/reset/amlogic,meson-axg-audio-arb.h
···
 #define AXG_ARB_FRDDR_A 3
 #define AXG_ARB_FRDDR_B 4
 #define AXG_ARB_FRDDR_C 5
+#define AXG_ARB_TODDR_D 6
+#define AXG_ARB_FRDDR_D 7
 
 #endif /* _DT_BINDINGS_AMLOGIC_MESON_AXG_AUDIO_ARB_H */
+9 -6
include/linux/firmware/meson/meson_sm.h
···
 
 struct meson_sm_firmware;
 
-int meson_sm_call(unsigned int cmd_index, u32 *ret, u32 arg0, u32 arg1,
-		  u32 arg2, u32 arg3, u32 arg4);
-int meson_sm_call_write(void *buffer, unsigned int b_size, unsigned int cmd_index,
-			u32 arg0, u32 arg1, u32 arg2, u32 arg3, u32 arg4);
-int meson_sm_call_read(void *buffer, unsigned int bsize, unsigned int cmd_index,
-		       u32 arg0, u32 arg1, u32 arg2, u32 arg3, u32 arg4);
+int meson_sm_call(struct meson_sm_firmware *fw, unsigned int cmd_index,
+		  u32 *ret, u32 arg0, u32 arg1, u32 arg2, u32 arg3, u32 arg4);
+int meson_sm_call_write(struct meson_sm_firmware *fw, void *buffer,
+			unsigned int b_size, unsigned int cmd_index, u32 arg0,
+			u32 arg1, u32 arg2, u32 arg3, u32 arg4);
+int meson_sm_call_read(struct meson_sm_firmware *fw, void *buffer,
+		       unsigned int bsize, unsigned int cmd_index, u32 arg0,
+		       u32 arg1, u32 arg2, u32 arg3, u32 arg4);
+struct meson_sm_firmware *meson_sm_get(struct device_node *firmware_node);
 
 #endif /* _MESON_SM_FW_H_ */
+2 -1
include/linux/firmware/xlnx-zynqmp.h
···
 /*
  * Xilinx Zynq MPSoC Firmware layer
  *
- * Copyright (C) 2014-2018 Xilinx
+ * Copyright (C) 2014-2019 Xilinx
  *
  *  Michal Simek <michal.simek@xilinx.com>
  *  Davorin Mista <davorin.mista@aggios.com>
···
 #define ZYNQMP_PM_CAPABILITY_ACCESS 0x1U
 #define ZYNQMP_PM_CAPABILITY_CONTEXT 0x2U
 #define ZYNQMP_PM_CAPABILITY_WAKEUP 0x4U
+#define ZYNQMP_PM_CAPABILITY_UNUSABLE 0x8U
 
 /*
  * Firmware FPGA Manager flags
+2 -2
include/linux/logic_pio.h
···
  * area by redefining the macro below.
  */
 #define PIO_INDIRECT_SIZE 0x4000
-#define MMIO_UPPER_LIMIT (IO_SPACE_LIMIT - PIO_INDIRECT_SIZE)
 #else
-#define MMIO_UPPER_LIMIT IO_SPACE_LIMIT
+#define PIO_INDIRECT_SIZE 0
 #endif /* CONFIG_INDIRECT_PIO */
+#define MMIO_UPPER_LIMIT (IO_SPACE_LIMIT - PIO_INDIRECT_SIZE)
 
 struct logic_pio_hwaddr *find_io_range_by_fwnode(struct fwnode_handle *fwnode);
 unsigned long logic_pio_trans_hwaddr(struct fwnode_handle *fwnode,
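The hunk above reserves the top PIO_INDIRECT_SIZE ports of the I/O space for indirect PIO and lets a single MMIO_UPPER_LIMIT formula cover both configurations by defining the reserved size as 0 when indirect PIO is disabled. A hypothetical standalone illustration of that partition arithmetic (the limit value here is a made-up stand-in, not any architecture's real IO_SPACE_LIMIT):

```c
#include <assert.h>

#define FAKE_IO_SPACE_LIMIT 0xffffUL	/* illustrative only */

/*
 * Compute the boundary between MMIO-backed logical PIO and indirect PIO:
 * when indirect PIO is compiled in, the top indirect_size ports are carved
 * out of the space; otherwise the whole space belongs to MMIO.
 */
static unsigned long mmio_upper_limit(int indirect_pio_enabled,
				      unsigned long indirect_size)
{
	unsigned long pio_indirect_size = indirect_pio_enabled ? indirect_size : 0;

	return FAKE_IO_SPACE_LIMIT - pio_indirect_size;
}
```

Addresses below the returned limit are translated as MMIO-backed logical PIO; addresses between the limit and the space limit are dispatched to the registered indirect-PIO ops, which matches the range checks in lib/logic_pio.c further down.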
-1
include/linux/mfd/syscon/atmel-matrix.h
···
 #define AT91_MATRIX_DDR_IOSR BIT(18)
 #define AT91_MATRIX_NFD0_SELECT BIT(24)
 #define AT91_MATRIX_DDR_MP_EN BIT(25)
-#define AT91_MATRIX_EBI_NUM_CS 8
 
 #define AT91_MATRIX_USBPUCR_PUON BIT(30)
 
+21
include/linux/platform_data/ti-prm.h
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * TI PRM (Power & Reset Manager) platform data
+ *
+ * Copyright (C) 2019 Texas Instruments, Inc.
+ *
+ * Tero Kristo <t-kristo@ti.com>
+ */
+
+#ifndef _LINUX_PLATFORM_DATA_TI_PRM_H
+#define _LINUX_PLATFORM_DATA_TI_PRM_H
+
+struct clockdomain;
+
+struct ti_prm_platform_data {
+	void (*clkdm_deny_idle)(struct clockdomain *clkdm);
+	void (*clkdm_allow_idle)(struct clockdomain *clkdm);
+	struct clockdomain * (*clkdm_lookup)(const char *name);
+};
+
+#endif /* _LINUX_PLATFORM_DATA_TI_PRM_H */
+9
include/linux/pm_wakeup.h
···
 	bool autosleep_enabled:1;
 };
 
+#define for_each_wakeup_source(ws) \
+	for ((ws) = wakeup_sources_walk_start(); \
+	     (ws); \
+	     (ws) = wakeup_sources_walk_next((ws)))
+
 #ifdef CONFIG_PM_SLEEP
 
 /*
···
 extern struct wakeup_source *wakeup_source_register(struct device *dev,
 						    const char *name);
 extern void wakeup_source_unregister(struct wakeup_source *ws);
+extern int wakeup_sources_read_lock(void);
+extern void wakeup_sources_read_unlock(int idx);
+extern struct wakeup_source *wakeup_sources_walk_start(void);
+extern struct wakeup_source *wakeup_sources_walk_next(struct wakeup_source *ws);
 extern int device_wakeup_enable(struct device *dev);
 extern int device_wakeup_disable(struct device *dev);
 extern void device_set_wakeup_capable(struct device *dev, bool capable);
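The new for_each_wakeup_source() macro above follows the common kernel iterator pattern: a start helper returns the first element, a next helper returns the following one, and NULL terminates the loop, so the macro expands to a plain for statement. A standalone sketch of the pattern over a toy linked list (names and struct are illustrative, not the kernel's):

```c
#include <assert.h>
#include <stddef.h>

struct node {
	int val;
	struct node *next;
};

/* Walk helpers: return the first/next element, NULL at the end. */
static struct node *node_walk_start(struct node *head) { return head; }
static struct node *node_walk_next(struct node *n) { return n->next; }

/* Same shape as for_each_wakeup_source(): a for loop over the helpers. */
#define for_each_node(n, head) \
	for ((n) = node_walk_start(head); \
	     (n); \
	     (n) = node_walk_next(n))

/* Example use: sum all values in a list. */
static int sum_nodes(struct node *head)
{
	struct node *n;
	int sum = 0;

	for_each_node(n, head)
		sum += n->val;
	return sum;
}
```

In the kernel the walk additionally has to sit between wakeup_sources_read_lock()/unlock() (an SRCU read section) so that entries are not freed mid-walk; the toy list needs no such protection.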
+2 -1
include/linux/reset-controller.h
···
  * @of_node: corresponding device tree node as phandle target
  * @of_reset_n_cells: number of cells in reset line specifiers
  * @of_xlate: translation function to translate from specifier as found in the
- *            device tree to id as given to the reset control ops
+ *            device tree to id as given to the reset control ops, defaults
+ *            to :c:func:`of_reset_simple_xlate`.
  * @nr_resets: number of reset controls in this reset controller device
  */
 struct reset_controller_dev {
+46
include/linux/reset.h
···
 	return __reset_control_get(dev, id, 0, true, false, false);
 }
 
+/**
+ * reset_control_get_optional_exclusive - optional reset_control_get_exclusive()
+ * @dev: device to be reset by the controller
+ * @id: reset line name
+ *
+ * Optional variant of reset_control_get_exclusive(). If the requested reset
+ * is not specified in the device tree, this function returns NULL instead of
+ * an error.
+ *
+ * See reset_control_get_exclusive() for more information.
+ */
 static inline struct reset_control *reset_control_get_optional_exclusive(
 					struct device *dev, const char *id)
 {
 	return __reset_control_get(dev, id, 0, false, true, true);
 }
 
+/**
+ * reset_control_get_optional_shared - optional reset_control_get_shared()
+ * @dev: device to be reset by the controller
+ * @id: reset line name
+ *
+ * Optional variant of reset_control_get_shared(). If the requested reset
+ * is not specified in the device tree, this function returns NULL instead of
+ * an error.
+ *
+ * See reset_control_get_shared() for more information.
+ */
 static inline struct reset_control *reset_control_get_optional_shared(
 					struct device *dev, const char *id)
 {
···
 	return __devm_reset_control_get(dev, id, 0, true, false, false);
 }
 
+/**
+ * devm_reset_control_get_optional_exclusive - resource managed
+ *                                             reset_control_get_optional_exclusive()
+ * @dev: device to be reset by the controller
+ * @id: reset line name
+ *
+ * Managed reset_control_get_optional_exclusive(). For reset controllers
+ * returned from this function, reset_control_put() is called automatically on
+ * driver detach.
+ *
+ * See reset_control_get_optional_exclusive() for more information.
+ */
 static inline struct reset_control *devm_reset_control_get_optional_exclusive(
 					struct device *dev, const char *id)
 {
 	return __devm_reset_control_get(dev, id, 0, false, true, true);
 }
 
+/**
+ * devm_reset_control_get_optional_shared - resource managed
+ *                                          reset_control_get_optional_shared()
+ * @dev: device to be reset by the controller
+ * @id: reset line name
+ *
+ * Managed reset_control_get_optional_shared(). For reset controllers returned
+ * from this function, reset_control_put() is called automatically on driver
+ * detach.
+ *
+ * See reset_control_get_optional_shared() for more information.
+ */
 static inline struct reset_control *devm_reset_control_get_optional_shared(
 					struct device *dev, const char *id)
 {
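The kernel-doc added above documents the "optional" getter contract: a reset that is simply absent yields NULL (treated as success, since all reset_control_* operations are no-ops on NULL), while a real failure yields an encoded error pointer. A toy standalone model of that three-way contract, with ERR_PTR/IS_ERR reimplemented locally so the sketch compiles on its own (the function and its parameters are hypothetical, not the kernel API):

```c
#include <assert.h>
#include <stddef.h>

#define MAX_ERRNO 4095

/* Minimal local reimplementation of the kernel's error-pointer encoding. */
static void *err_ptr(long err) { return (void *)err; }
static int is_err(const void *p)
{
	return (unsigned long)p >= (unsigned long)-MAX_ERRNO;
}

static int dummy_resource;

/*
 * Model of an optional getter:
 *  - resource present            -> valid pointer
 *  - resource absent             -> NULL (success; caller treats ops as no-ops)
 *  - lookup genuinely failed     -> encoded error pointer
 */
static void *get_optional(int present, int fail)
{
	if (fail)
		return err_ptr(-22);	/* -EINVAL */
	if (!present)
		return NULL;		/* not specified: not an error */
	return &dummy_resource;
}
```

Callers of the optional variants therefore only need to check IS_ERR(); a NULL result flows through the rest of the driver unchanged, which is exactly why the non-optional and optional getters can share one __reset_control_get() implementation.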
+2
include/linux/soc/mmp/cputype.h
···
 #ifndef __ASM_MACH_CPUTYPE_H
 #define __ASM_MACH_CPUTYPE_H
 
+#if defined(CONFIG_ARM) || defined(CONFIG_ARM64)
 #include <asm/cputype.h>
+#endif
 
 /*
  * CPU Stepping CPU_ID CHIP_ID
+20 -74
include/linux/soc/qcom/llcc-qcom.h
···
 };
 
 /**
- * llcc_slice_config - Data associated with the llcc slice
- * @usecase_id: usecase id for which the llcc slice is used
- * @slice_id: llcc slice id assigned to each slice
- * @max_cap: maximum capacity of the llcc slice
- * @priority: priority of the llcc slice
- * @fixed_size: whether the llcc slice can grow beyond its size
- * @bonus_ways: bonus ways associated with llcc slice
- * @res_ways: reserved ways associated with llcc slice
- * @cache_mode: mode of the llcc slice
- * @probe_target_ways: Probe only reserved and bonus ways on a cache miss
- * @dis_cap_alloc: Disable capacity based allocation
- * @retain_on_pc: Retain through power collapse
- * @activate_on_init: activate the slice on init
+ * llcc_edac_reg_data - llcc edac registers data for each error type
+ * @name: Name of the error
+ * @synd_reg: Syndrome register address
+ * @count_status_reg: Status register address to read the error count
+ * @ways_status_reg: Status register address to read the error ways
+ * @reg_cnt: Number of registers
+ * @count_mask: Mask value to get the error count
+ * @ways_mask: Mask value to get the error ways
+ * @count_shift: Shift value to get the error count
+ * @ways_shift: Shift value to get the error ways
  */
-struct llcc_slice_config {
-	u32 usecase_id;
-	u32 slice_id;
-	u32 max_cap;
-	u32 priority;
-	bool fixed_size;
-	u32 bonus_ways;
-	u32 res_ways;
-	u32 cache_mode;
-	u32 probe_target_ways;
-	bool dis_cap_alloc;
-	bool retain_on_pc;
-	bool activate_on_init;
+struct llcc_edac_reg_data {
+	char *name;
+	u64 synd_reg;
+	u64 count_status_reg;
+	u64 ways_status_reg;
+	u32 reg_cnt;
+	u32 count_mask;
+	u32 ways_mask;
+	u8 count_shift;
+	u8 ways_shift;
 };
 
 /**
···
 	unsigned long *bitmap;
 	u32 *offsets;
 	int ecc_irq;
-};
-
-/**
- * llcc_edac_reg_data - llcc edac registers data for each error type
- * @name: Name of the error
- * @synd_reg: Syndrome register address
- * @count_status_reg: Status register address to read the error count
- * @ways_status_reg: Status register address to read the error ways
- * @reg_cnt: Number of registers
- * @count_mask: Mask value to get the error count
- * @ways_mask: Mask value to get the error ways
- * @count_shift: Shift value to get the error count
- * @ways_shift: Shift value to get the error ways
- */
-struct llcc_edac_reg_data {
-	char *name;
-	u64 synd_reg;
-	u64 count_status_reg;
-	u64 ways_status_reg;
-	u32 reg_cnt;
-	u32 count_mask;
-	u32 ways_mask;
-	u8 count_shift;
-	u8 ways_shift;
 };
 
 #if IS_ENABLED(CONFIG_QCOM_LLCC)
···
  */
 int llcc_slice_deactivate(struct llcc_slice_desc *desc);
 
-/**
- * qcom_llcc_probe - program the sct table
- * @pdev: platform device pointer
- * @table: soc sct table
- * @sz: Size of the config table
- */
-int qcom_llcc_probe(struct platform_device *pdev,
-		    const struct llcc_slice_config *table, u32 sz);
-
-/**
- * qcom_llcc_remove - remove the sct table
- * @pdev: Platform device pointer
- */
-int qcom_llcc_remove(struct platform_device *pdev);
 #else
 static inline struct llcc_slice_desc *llcc_slice_getd(u32 uid)
 {
···
 static inline int llcc_slice_deactivate(struct llcc_slice_desc *desc)
 {
 	return -EINVAL;
-}
-static inline int qcom_llcc_probe(struct platform_device *pdev,
-				  const struct llcc_slice_config *table, u32 sz)
-{
-	return -ENODEV;
-}
-
-static inline int qcom_llcc_remove(struct platform_device *pdev)
-{
-	return -ENODEV;
 }
 #endif
 
+1 -1
include/soc/tegra/mc.h
···
 	spinlock_t lock;
 };
 
-void tegra_mc_write_emem_configuration(struct tegra_mc *mc, unsigned long rate);
+int tegra_mc_write_emem_configuration(struct tegra_mc *mc, unsigned long rate);
 unsigned int tegra_mc_get_emem_device_count(struct tegra_mc *mc);
 
 #endif /* __SOC_TEGRA_MC_H__ */
+1 -1
lib/Makefile
···
 obj-$(CONFIG_CHECK_SIGNATURE) += check_signature.o
 obj-$(CONFIG_DEBUG_LOCKING_API_SELFTESTS) += locking-selftest.o
 
-obj-y += logic_pio.o
+lib-y += logic_pio.o
 
 obj-$(CONFIG_GENERIC_HWEIGHT) += hweight.o
 
+8 -6
lib/logic_pio.c
···
   3    3    * Copyright (C) 2017 HiSilicon Limited, All Rights Reserved.
   4    4    * Author: Gabriele Paoloni <gabriele.paoloni@huawei.com>
   5    5    * Author: Zhichang Yuan <yuanzhichang@hisilicon.com>
        6   + * Author: John Garry <john.garry@huawei.com>
   6    7    */
   7    8
   8    9   #define pr_fmt(fmt) "LOGIC PIO: " fmt
···
  40   39   	resource_size_t iio_sz = MMIO_UPPER_LIMIT;
  41   40   	int ret = 0;
  42   41
  43      - 	if (!new_range || !new_range->fwnode || !new_range->size)
       42  + 	if (!new_range || !new_range->fwnode || !new_range->size ||
       43  + 	    (new_range->flags == LOGIC_PIO_INDIRECT && !new_range->ops))
  44   44   		return -EINVAL;
  45   45
  46   46   	start = new_range->hw_start;
···
 239  237   	} else if (addr >= MMIO_UPPER_LIMIT && addr < IO_SPACE_LIMIT) {	\
 240  238   		struct logic_pio_hwaddr *entry = find_io_range(addr);	\
 241  239   							\
 242      - 		if (entry && entry->ops)		\
      240  + 		if (entry)				\
 243  241   			ret = entry->ops->in(entry->hostdata,	\
 244  242   					addr, sizeof(type));	\
 245  243   		else					\
···
 255  253   	} else if (addr >= MMIO_UPPER_LIMIT && addr < IO_SPACE_LIMIT) {	\
 256  254   		struct logic_pio_hwaddr *entry = find_io_range(addr);	\
 257  255   							\
 258      - 		if (entry && entry->ops)		\
      256  + 		if (entry)				\
 259  257   			entry->ops->out(entry->hostdata,	\
 260  258   					addr, value, sizeof(type));	\
 261  259   		else					\
···
 263  261   	}						\
 264  262   }							\
 265  263   							\
 266      - void logic_ins##bw(unsigned long addr, void *buffer,	\
      264  + void logic_ins##bw(unsigned long addr, void *buffer,		\
 267  265   		unsigned int count)			\
 268  266   {							\
 269  267   	if (addr < MMIO_UPPER_LIMIT) {			\
···
 271  269   	} else if (addr >= MMIO_UPPER_LIMIT && addr < IO_SPACE_LIMIT) {	\
 272  270   		struct logic_pio_hwaddr *entry = find_io_range(addr);	\
 273  271   							\
 274      - 		if (entry && entry->ops)		\
      272  + 		if (entry)				\
 275  273   			entry->ops->ins(entry->hostdata,	\
 276  274   					addr, buffer, sizeof(type), count);	\
 277  275   		else					\
···
 288  286   	} else if (addr >= MMIO_UPPER_LIMIT && addr < IO_SPACE_LIMIT) {	\
 289  287   		struct logic_pio_hwaddr *entry = find_io_range(addr);	\
 290  288   							\
 291      - 		if (entry && entry->ops)		\
      289  + 		if (entry)				\
 292  290   			entry->ops->outs(entry->hostdata,	\
 293  291   					addr, buffer, sizeof(type), count);	\
 294  292   		else					\