Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'armsoc-drivers' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc

Pull ARM SoC driver updates from Arnd Bergmann:
"As usual, the drivers/tee and drivers/reset subsystems get merged
here, with the expected set of smaller updates and some new hardware
support. The tee subsystem now supports device drivers to be attached
to a tee, the first example here is a random number driver with its
implementation in the secure world.

Three new power domain drivers get added for specific chip families:
- Broadcom BCM283x chips (used in Raspberry Pi)
- Qualcomm Snapdragon phone chips
- Xilinx ZynqMP FPGA SoCs

One new driver is added to talk to the BPMP firmware on NVIDIA
Tegra210

Existing drivers are extended for new SoC variants from NXP, NVIDIA,
Amlogic and Qualcomm"

* tag 'armsoc-drivers' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc: (113 commits)
tee: optee: update optee_msg.h and optee_smc.h to dual license
tee: add cancellation support to client interface
dpaa2-eth: configure the cache stashing amount on a queue
soc: fsl: dpio: configure cache stashing destination
soc: fsl: dpio: enable frame data cache stashing per software portal
soc: fsl: guts: make fsl_guts_get_svr() static
hwrng: make symbol 'optee_rng_id_table' static
tee: optee: Fix unsigned comparison with less than zero
hwrng: Fix unsigned comparison with less than zero
tee: fix possible error pointer ctx dereferencing
hwrng: optee: Initialize some structs using memset instead of braces
tee: optee: Initialize some structs using memset instead of braces
soc: fsl: dpio: fix memory leak of a struct qbman on error exit path
clk: tegra: dfll: Make symbol 'tegra210_cpu_cvb_tables' static
soc: qcom: llcc-slice: Fix typos
qcom: soc: llcc-slice: Consolidate some code
qcom: soc: llcc-slice: Clear the global drv_data pointer on error
drivers: soc: xilinx: Add ZynqMP power domain driver
firmware: xilinx: Add APIs to control node status/power
dt-bindings: power: Add ZynqMP power domain bindings
...

+7125 -747
+6 -2
Documentation/devicetree/bindings/arm/freescale/fsl,scu.txt
···
  domain binding[2].

  Required properties:
- - compatible: Should be "fsl,imx8qxp-scu-pd".
+ - compatible: Should be one of:
+     "fsl,imx8qm-scu-pd",
+     "fsl,imx8qxp-scu-pd"
+   followed by "fsl,scu-pd"
+
  - #power-domain-cells: Must be 1. Contains the Resource ID used by
    SCU commands.
    See detailed Resource ID list from:
···
  };

  pd: imx8qx-pd {
- 	compatible = "fsl,imx8qxp-scu-pd";
+ 	compatible = "fsl,imx8qxp-scu-pd", "fsl,scu-pd";
  	#power-domain-cells = <1>;
  };
+29 -3
Documentation/devicetree/bindings/bus/imx-weim.txt
···
  Timing property for child nodes. It is mandatory, not optional.

  - fsl,weim-cs-timing:	The timing array, contains timing values for the
- 			child node. We can get the CS index from the child
- 			node's "reg" property. The number of registers depends
- 			on the selected chip.
+ 			child node. We get the CS indexes from the address
+ 			ranges in the child node's "reg" property.
+ 			The number of registers depends on the selected chip:
  			For i.MX1, i.MX21 ("fsl,imx1-weim") there are two
  			registers: CSxU, CSxL.
  			For i.MX25, i.MX27, i.MX31 and i.MX35 ("fsl,imx27-weim")
···
  		bank-width = <2>;
  		fsl,weim-cs-timing = <0x00620081 0x00000001 0x1c022000
  				0x0000c000 0x1404a38e 0x00000000>;
+ 	};
+ };
+
+ Example for an imx6q-based board, a multi-chipselect device connected to WEIM:
+
+ In this case, both chip select 0 and 1 will be configured with the same timing
+ array values.
+
+ weim: weim@21b8000 {
+ 	compatible = "fsl,imx6q-weim";
+ 	reg = <0x021b8000 0x4000>;
+ 	clocks = <&clks 196>;
+ 	#address-cells = <2>;
+ 	#size-cells = <1>;
+ 	ranges = <0 0 0x08000000 0x02000000
+ 		  1 0 0x0a000000 0x02000000
+ 		  2 0 0x0c000000 0x02000000
+ 		  3 0 0x0e000000 0x02000000>;
+ 	fsl,weim-cs-gpr = <&gpr>;
+
+ 	acme@0 {
+ 		compatible = "acme,whatever";
+ 		reg = <0 0 0x100>, <0 0x400000 0x800>,
+ 			<1 0x400000 0x800>;
+ 		fsl,weim-cs-timing = <0x024400b1 0x00001010 0x20081100
+ 				0x00000000 0xa0000240 0x00000000>;
  	};
  };
+46
Documentation/devicetree/bindings/nvmem/xlnx,zynqmp-nvmem.txt
--------------------------------------------------------------------------
=  Zynq UltraScale+ MPSoC nvmem firmware driver binding =
--------------------------------------------------------------------------
The nvmem_firmware node provides access to the hardware related data
like soc revision, IDCODE... etc, By using the firmware interface.

Required properties:
- compatible: should be "xlnx,zynqmp-nvmem-fw"

= Data cells =
Are child nodes of silicon id, bindings of which as described in
bindings/nvmem/nvmem.txt

-------
Example
-------
firmware {
	zynqmp_firmware: zynqmp-firmware {
		compatible = "xlnx,zynqmp-firmware";
		method = "smc";

		nvmem_firmware {
			compatible = "xlnx,zynqmp-nvmem-fw";
			#address-cells = <1>;
			#size-cells = <1>;

			/* Data cells */
			soc_revision: soc_revision {
				reg = <0x0 0x4>;
			};
		};
	};
};

= Data consumers =
Are device nodes which consume nvmem data cells.

For example:
	pcap {
		...

		nvmem-cells = <&soc_revision>;
		nvmem-cell-names = "soc_revision";

		...
	};
+3
Documentation/devicetree/bindings/opp/opp.txt
···
  - opp-microamp-<name>: Named opp-microamp property. Similar to
    opp-microvolt-<name> property, but for microamp instead.

+ - opp-level: A value representing the performance level of the device,
+   expressed as a 32-bit integer.
+
  - clock-latency-ns: Specifies the maximum possible transition latency (in
    nanoseconds) for switching to this OPP from any other OPP.
+3
Documentation/devicetree/bindings/power/fsl,imx-gpcv2.txt
···
  Optional properties:

  - power-supply: Power supply used to power the domain
+ - clocks: a number of phandles to clocks that need to be enabled during
+   domain power-up sequencing to ensure reset propagation into devices
+   located inside this power domain

  Example:
+145
Documentation/devicetree/bindings/power/qcom,rpmpd.txt
Qualcomm RPM/RPMh Power domains

For RPM/RPMh Power domains, we communicate a performance state to RPM/RPMh
which then translates it into a corresponding voltage on a rail

Required Properties:
 - compatible: Should be one of the following
	* qcom,msm8996-rpmpd: RPM Power domain for the msm8996 family of SoC
	* qcom,sdm845-rpmhpd: RPMh Power domain for the sdm845 family of SoC
 - #power-domain-cells: number of cells in Power domain specifier
	must be 1.
 - operating-points-v2: Phandle to the OPP table for the Power domain.
	Refer to Documentation/devicetree/bindings/power/power_domain.txt
	and Documentation/devicetree/bindings/opp/opp.txt for more details

Refer to <dt-bindings/power/qcom-rpmpd.h> for the level values for
various OPPs for different platforms as well as Power domain indexes

Example: rpmh power domain controller and OPP table

#include <dt-bindings/power/qcom-rpmhpd.h>

opp-level values specified in the OPP tables for RPMh power domains
should use the RPMH_REGULATOR_LEVEL_* constants from
<dt-bindings/power/qcom-rpmhpd.h>

	rpmhpd: power-controller {
		compatible = "qcom,sdm845-rpmhpd";
		#power-domain-cells = <1>;
		operating-points-v2 = <&rpmhpd_opp_table>;

		rpmhpd_opp_table: opp-table {
			compatible = "operating-points-v2";

			rpmhpd_opp_ret: opp1 {
				opp-level = <RPMH_REGULATOR_LEVEL_RETENTION>;
			};

			rpmhpd_opp_min_svs: opp2 {
				opp-level = <RPMH_REGULATOR_LEVEL_MIN_SVS>;
			};

			rpmhpd_opp_low_svs: opp3 {
				opp-level = <RPMH_REGULATOR_LEVEL_LOW_SVS>;
			};

			rpmhpd_opp_svs: opp4 {
				opp-level = <RPMH_REGULATOR_LEVEL_SVS>;
			};

			rpmhpd_opp_svs_l1: opp5 {
				opp-level = <RPMH_REGULATOR_LEVEL_SVS_L1>;
			};

			rpmhpd_opp_nom: opp6 {
				opp-level = <RPMH_REGULATOR_LEVEL_NOM>;
			};

			rpmhpd_opp_nom_l1: opp7 {
				opp-level = <RPMH_REGULATOR_LEVEL_NOM_L1>;
			};

			rpmhpd_opp_nom_l2: opp8 {
				opp-level = <RPMH_REGULATOR_LEVEL_NOM_L2>;
			};

			rpmhpd_opp_turbo: opp9 {
				opp-level = <RPMH_REGULATOR_LEVEL_TURBO>;
			};

			rpmhpd_opp_turbo_l1: opp10 {
				opp-level = <RPMH_REGULATOR_LEVEL_TURBO_L1>;
			};
		};
	};

Example: rpm power domain controller and OPP table

	rpmpd: power-controller {
		compatible = "qcom,msm8996-rpmpd";
		#power-domain-cells = <1>;
		operating-points-v2 = <&rpmpd_opp_table>;

		rpmpd_opp_table: opp-table {
			compatible = "operating-points-v2";

			rpmpd_opp_low: opp1 {
				opp-level = <1>;
			};

			rpmpd_opp_ret: opp2 {
				opp-level = <2>;
			};

			rpmpd_opp_svs: opp3 {
				opp-level = <3>;
			};

			rpmpd_opp_normal: opp4 {
				opp-level = <4>;
			};

			rpmpd_opp_high: opp5 {
				opp-level = <5>;
			};

			rpmpd_opp_turbo: opp6 {
				opp-level = <6>;
			};
		};
	};

Example: Client/Consumer device using OPP table

	leaky-device0@12350000 {
		compatible = "foo,i-leak-current";
		reg = <0x12350000 0x1000>;
		power-domains = <&rpmhpd SDM845_MX>;
		operating-points-v2 = <&leaky_opp_table>;
	};

	leaky_opp_table: opp-table {
		compatible = "operating-points-v2";

		opp1 {
			opp-hz = /bits/ 64 <144000>;
			required-opps = <&rpmhpd_opp_low>;
		};

		opp2 {
			opp-hz = /bits/ 64 <400000>;
			required-opps = <&rpmhpd_opp_ret>;
		};

		opp3 {
			opp-hz = /bits/ 64 <20000000>;
			required-opps = <&rpmpd_opp_svs>;
		};

		opp4 {
			opp-hz = /bits/ 64 <25000000>;
			required-opps = <&rpmpd_opp_normal>;
		};
	};
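The required-opps linkage in the binding above can be illustrated with a small userspace sketch (hypothetical helper and table names, not the kernel OPP API): each consumer OPP ties a clock rate to the performance level its power domain must reach, so selecting a rate resolves to a level.

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* One consumer OPP: a clock rate tied to a required domain level,
 * mirroring the opp-hz / required-opps pairing in the binding above.
 * Table and values are illustrative, taken from the leaky_opp_table
 * example. */
struct consumer_opp {
	uint64_t hz;
	uint32_t level;	/* performance level of the power domain */
};

const struct consumer_opp leaky_opps[] = {
	{   144000, 1 },
	{   400000, 2 },
	{ 20000000, 3 },
	{ 25000000, 4 },
};

/* Return the domain level of the slowest OPP whose rate satisfies
 * the target, or 0 if no OPP is fast enough. Table is sorted by hz. */
uint32_t level_for_rate(const struct consumer_opp *t, size_t n,
			uint64_t target_hz)
{
	for (size_t i = 0; i < n; i++)
		if (t[i].hz >= target_hz)
			return t[i].level;
	return 0;
}
```

In the kernel the same resolution happens through the OPP core when a consumer sets a rate; this sketch only shows the table-walk shape of it.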
+25
Documentation/devicetree/bindings/power/reset/xlnx,zynqmp-power.txt
--------------------------------------------------------------------
Device Tree Bindings for the Xilinx Zynq MPSoC Power Management
--------------------------------------------------------------------
The zynqmp-power node describes the power management configurations.
It will control remote suspend/shutdown interfaces.

Required properties:
 - compatible:	Must contain: "xlnx,zynqmp-power"
 - interrupts:	Interrupt specifier

-------
Example
-------

firmware {
	zynqmp_firmware: zynqmp-firmware {
		compatible = "xlnx,zynqmp-firmware";
		method = "smc";

		zynqmp_power: zynqmp-power {
			compatible = "xlnx,zynqmp-power";
			interrupts = <0 35 4>;
		};
	};
};
+34
Documentation/devicetree/bindings/power/xlnx,zynqmp-genpd.txt
-----------------------------------------------------------
Device Tree Bindings for the Xilinx Zynq MPSoC PM domains
-----------------------------------------------------------
The binding for zynqmp-power-controller follow the common
generic PM domain binding[1].

[1] Documentation/devicetree/bindings/power/power_domain.txt

== Zynq MPSoC Generic PM Domain Node ==

Required property:
 - Below property should be in zynqmp-firmware node.
 - #power-domain-cells:	Number of cells in a PM domain specifier. Must be 1.

Power domain ID indexes are mentioned in
include/dt-bindings/power/xlnx-zynqmp-power.h.

-------
Example
-------

firmware {
	zynqmp_firmware: zynqmp-firmware {
		...
		#power-domain-cells = <1>;
		...
	};
};

sata {
	...
	power-domains = <&zynqmp_firmware 28>;
	...
};
+27
Documentation/devicetree/bindings/reset/brcm,brcmstb-reset.txt
Broadcom STB SW_INIT-style reset controller
===========================================

Broadcom STB SoCs have a SW_INIT-style reset controller with separate
SET/CLEAR/STATUS registers and possibly multiple banks, each of 32 bit
reset lines.

Please also refer to reset.txt in this directory for common reset
controller binding usage.

Required properties:
- compatible: should be brcm,brcmstb-reset
- reg: register base and length
- #reset-cells: must be set to 1

Example:

	reset: reset-controller@8404318 {
		compatible = "brcm,brcmstb-reset";
		reg = <0x8404318 0x30>;
		#reset-cells = <1>;
	};

	&ethernet_switch {
		resets = <&reset>;
		reset-names = "switch";
	};
+5 -2
Documentation/devicetree/bindings/reset/fsl,imx7-src.txt
···
  controller binding usage.

  Required properties:
- - compatible: Should be "fsl,imx7d-src", "syscon"
+ - compatible:
+ 	- For i.MX7 SoCs should be "fsl,imx7d-src", "syscon"
+ 	- For i.MX8MQ SoCs should be "fsl,imx8mq-src", "syscon"
  - reg: should be register base and length as documented in the
    datasheet
  - interrupts: Should contain SRC interrupt
···


  For list of all valid reset indicies see
- <dt-bindings/reset/imx7-reset.h>
+ <dt-bindings/reset/imx7-reset.h> for i.MX7 and
+ <dt-bindings/reset/imx8mq-reset.h> for i.MX8MQ
+52
Documentation/devicetree/bindings/reset/xlnx,zynqmp-reset.txt
--------------------------------------------------------------------------
=  Zynq UltraScale+ MPSoC reset driver binding =
--------------------------------------------------------------------------
The Zynq UltraScale+ MPSoC has several different resets.

See Chapter 36 of the Zynq UltraScale+ MPSoC TRM (UG) for more information
about zynqmp resets.

Please also refer to reset.txt in this directory for common reset
controller binding usage.

Required Properties:
- compatible:	"xlnx,zynqmp-reset"
- #reset-cells:	Specifies the number of cells needed to encode reset
		line, should be 1

-------
Example
-------

firmware {
	zynqmp_firmware: zynqmp-firmware {
		compatible = "xlnx,zynqmp-firmware";
		method = "smc";

		zynqmp_reset: reset-controller {
			compatible = "xlnx,zynqmp-reset";
			#reset-cells = <1>;
		};
	};
};

Specifying reset lines connected to IP modules
==============================================

Device nodes that need access to reset lines should
specify them as a reset phandle in their corresponding node as
specified in reset.txt.

For list of all valid reset indicies see
<dt-bindings/reset/xlnx-zynqmp-resets.h>

Example:

serdes: zynqmp_phy@fd400000 {
	...

	resets = <&zynqmp_reset ZYNQMP_RESET_SATA>;
	reset-names = "sata_rst";

	...
};
+2
Documentation/devicetree/bindings/soc/amlogic/clk-measure.txt
···
  		"amlogic,meson-gx-clk-measure" for GX SoCs
  		"amlogic,meson8-clk-measure" for Meson8 SoCs
  		"amlogic,meson8b-clk-measure" for Meson8b SoCs
+ 		"amlogic,meson-axg-clk-measure" for AXG SoCs
+ 		"amlogic,meson-g12a-clk-measure" for G12a SoCs
  - reg: base address and size of the Clock Measurer register space.

  Example:
+46
Documentation/devicetree/bindings/soc/bcm/brcm,bcm2835-pm.txt
BCM2835 PM (Power domains, watchdog)

The PM block controls power domains and some reset lines, and includes
a watchdog timer. This binding supersedes the brcm,bcm2835-pm-wdt
binding which covered some of PM's register range and functionality.

Required properties:

- compatible:		Should be "brcm,bcm2835-pm"
- reg:			Specifies base physical address and size of the two
			register ranges ("PM" and "ASYNC_BRIDGE" in that
			order)
- clocks:		a) v3d: The V3D clock from CPRMAN
			b) peri_image: The PERI_IMAGE clock from CPRMAN
			c) h264: The H264 clock from CPRMAN
			d) isp: The ISP clock from CPRMAN
- #reset-cells:		Should be 1. This property follows the reset controller
			bindings[1].
- #power-domain-cells:	Should be 1. This property follows the power domain
			bindings[2].

Optional properties:

- timeout-sec:		Contains the watchdog timeout in seconds
- system-power-controller: Whether the watchdog is controlling the
			system power. This node follows the power controller
			bindings[3].

[1] Documentation/devicetree/bindings/reset/reset.txt
[2] Documentation/devicetree/bindings/power/power_domain.txt
[3] Documentation/devicetree/bindings/power/power-controller.txt

Example:

pm {
	compatible = "brcm,bcm2835-pm", "brcm,bcm2835-pm-wdt";
	#power-domain-cells = <1>;
	#reset-cells = <1>;
	reg = <0x7e100000 0x114>,
	      <0x7e00a000 0x24>;
	clocks = <&clocks BCM2835_CLOCK_V3D>,
		 <&clocks BCM2835_CLOCK_PERI_IMAGE>,
		 <&clocks BCM2835_CLOCK_H264>,
		 <&clocks BCM2835_CLOCK_ISP>;
	clock-names = "v3d", "peri_image", "h264", "isp";
	system-power-controller;
};
+1
Documentation/devicetree/bindings/soc/qcom/qcom,smd-rpm.txt
···
  		"qcom,rpm-msm8916"
  		"qcom,rpm-msm8974"
  		"qcom,rpm-msm8998"
+ 		"qcom,rpm-sdm660"
  		"qcom,rpm-qcs404"

  - qcom,smd-channels:
+32 -8
MAINTAINERS
···
  L:	linux-arm-msm@vger.kernel.org
  S:	Maintained
  F:	Documentation/devicetree/bindings/soc/qcom/
+ F:	Documentation/devicetree/bindings/*/qcom*
  F:	arch/arm/boot/dts/qcom-*.dts
  F:	arch/arm/boot/dts/qcom-*.dtsi
  F:	arch/arm/mach-qcom/
- F:	arch/arm64/boot/dts/qcom/*
- F:	drivers/i2c/busses/i2c-qup.c
- F:	drivers/clk/qcom/
- F:	drivers/dma/qcom/
- F:	drivers/soc/qcom/
- F:	drivers/spi/spi-qup.c
- F:	drivers/tty/serial/msm_serial.c
+ F:	arch/arm64/boot/dts/qcom/
+ F:	drivers/*/qcom/
+ F:	drivers/*/qcom*
+ F:	drivers/*/*/qcom/
+ F:	drivers/*/*/qcom*
  F:	drivers/*/pm8???-*
+ F:	drivers/bluetooth/btqcomsmd.c
+ F:	drivers/clocksource/timer-qcom.c
+ F:	drivers/extcon/extcon-qcom*
+ F:	drivers/iommu/msm*
+ F:	drivers/i2c/busses/i2c-qup.c
+ F:	drivers/i2c/busses/i2c-qcom-geni.c
  F:	drivers/mfd/ssbi.c
- F:	drivers/firmware/qcom_scm*
+ F:	drivers/mmc/host/mmci_qcom*
+ F:	drivers/mmc/host/sdhci_msm.c
+ F:	drivers/pci/controller/dwc/pcie-qcom.c
+ F:	drivers/phy/qualcomm/
+ F:	drivers/power/*/msm*
+ F:	drivers/reset/reset-qcom-*
+ F:	drivers/scsi/ufs/ufs-qcom.*
+ F:	drivers/spi/spi-qup.c
+ F:	drivers/spi/spi-geni-qcom.c
+ F:	drivers/spi/spi-qcom-qspi.c
+ F:	drivers/tty/serial/msm_serial.c
+ F:	drivers/usb/dwc3/dwc3-qcom.c
+ F:	include/dt-bindings/*/qcom*
+ F:	include/linux/*/qcom*
  T:	git git://git.kernel.org/pub/scm/linux/kernel/git/agross/linux.git

  ARM/RADISYS ENP2611 MACHINE SUPPORT
···
  S:	Maintained
  F:	drivers/tee/optee/

+ OP-TEE RANDOM NUMBER GENERATOR (RNG) DRIVER
+ M:	Sumit Garg <sumit.garg@linaro.org>
+ S:	Maintained
+ F:	drivers/char/hw_random/optee-rng.c
+
  OPA-VNIC DRIVER
  M:	Dennis Dalessandro <dennis.dalessandro@intel.com>
  M:	Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
···
  F:	Documentation/devicetree/bindings/reset/
  F:	include/dt-bindings/reset/
  F:	include/linux/reset.h
+ F:	include/linux/reset/
  F:	include/linux/reset-controller.h

  RESTARTABLE SEQUENCES SUPPORT
-4
arch/arm/boot/dts/bcm2835-rpi.dtsi
···
  	power-domains = <&power RPI_POWER_DOMAIN_USB>;
  };

- &v3d {
- 	power-domains = <&power RPI_POWER_DOMAIN_V3D>;
- };
-
  &hdmi {
  	power-domains = <&power RPI_POWER_DOMAIN_HDMI>;
  	status = "okay";
+14 -3
arch/arm/boot/dts/bcm283x.dtsi
···
  #include <dt-bindings/clock/bcm2835-aux.h>
  #include <dt-bindings/gpio/gpio.h>
  #include <dt-bindings/interrupt-controller/irq.h>
+ #include <dt-bindings/soc/bcm2835-pm.h>

  /* firmware-provided startup stubs live here, where the secondary CPUs are
   * spinning.
···
  			#interrupt-cells = <2>;
  		};

- 		watchdog@7e100000 {
- 			compatible = "brcm,bcm2835-pm-wdt";
- 			reg = <0x7e100000 0x28>;
+ 		pm: watchdog@7e100000 {
+ 			compatible = "brcm,bcm2835-pm", "brcm,bcm2835-pm-wdt";
+ 			#power-domain-cells = <1>;
+ 			#reset-cells = <1>;
+ 			reg = <0x7e100000 0x114>,
+ 			      <0x7e00a000 0x24>;
+ 			clocks = <&clocks BCM2835_CLOCK_V3D>,
+ 				 <&clocks BCM2835_CLOCK_PERI_IMAGE>,
+ 				 <&clocks BCM2835_CLOCK_H264>,
+ 				 <&clocks BCM2835_CLOCK_ISP>;
+ 			clock-names = "v3d", "peri_image", "h264", "isp";
+ 			system-power-controller;
  		};

  		clocks: cprman@7e101000 {
···
  			compatible = "brcm,bcm2835-v3d";
  			reg = <0x7ec00000 0x1000>;
  			interrupts = <1 10>;
+ 			power-domains = <&pm BCM2835_POWER_DOMAIN_GRAFX_V3D>;
  		};

  		vc4: gpu {
+1
arch/arm/mach-bcm/Kconfig
···
  	select BCM2835_TIMER
  	select PINCTRL
  	select PINCTRL_BCM2835
+ 	select MFD_CORE
  	help
  	  This enables support for the Broadcom BCM2835 and BCM2836 SoCs.
  	  This SoC is used in the Raspberry Pi and Roku 2 devices.
+1 -2
arch/arm/mach-socfpga/socfpga.c
···
  #include <linux/of_irq.h>
  #include <linux/of_platform.h>
  #include <linux/reboot.h>
+ #include <linux/reset/socfpga.h>

  #include <asm/hardware/cache-l2x0.h>
  #include <asm/mach/arch.h>
···
  void __iomem *rst_manager_base_addr;
  void __iomem *sdr_ctl_base_addr;
  unsigned long socfpga_cpu1start_addr;
-
- extern void __init socfpga_reset_init(void);

  static void __init socfpga_sysmgr_init(void)
  {
+1 -1
arch/arm/mach-sunxi/sunxi.c
···
  #include <linux/clocksource.h>
  #include <linux/init.h>
  #include <linux/platform_device.h>
+ #include <linux/reset/sunxi.h>

  #include <asm/mach/arch.h>
  #include <asm/secure_cntvoff.h>
···
  	NULL,
  };

- extern void __init sun6i_reset_init(void);
  static void __init sun6i_timer_init(void)
  {
  	of_clk_init(NULL);
+2 -3
drivers/bus/hisi_lpc.c
···

  		if (!found) {
  			dev_warn(hostdev,
- 				 "could not find cell for child device (%s)\n",
+ 				 "could not find cell for child device (%s), discarding\n",
  				 hid);
- 			ret = -ENODEV;
- 			goto fail;
+ 			continue;
  		}

  		pdev = platform_device_alloc(cell->name, PLATFORM_DEVID_AUTO);
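The hisi_lpc change above turns a fail-fast scan into a skip-and-continue one: a child with no matching cell is logged and dropped instead of aborting the whole enumeration. A minimal userspace sketch of that pattern (hypothetical types and HID strings, not the hisi_lpc code):

```c
#include <assert.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical stand-in for an MFD cell keyed by ACPI HID. */
struct cell {
	const char *hid;
};

/* Illustrative data: one child ("UNKNOWN1") has no matching cell. */
const char *const sample_hids[] = { "HISI0191", "UNKNOWN1", "HISI1031" };
const struct cell sample_cells[] = { { "HISI0191" }, { "HISI1031" } };

/* Register every child whose HID matches a known cell. Children with
 * no matching cell are logged and skipped rather than failing the
 * scan, so one unknown device no longer sinks the whole bus probe. */
int register_children(const char *const *hids, size_t n_hids,
		      const struct cell *cells, size_t n_cells)
{
	int registered = 0;

	for (size_t i = 0; i < n_hids; i++) {
		const struct cell *found = NULL;

		for (size_t j = 0; j < n_cells; j++) {
			if (strcmp(hids[i], cells[j].hid) == 0) {
				found = &cells[j];
				break;
			}
		}

		if (!found) {
			fprintf(stderr,
				"could not find cell for child device (%s), discarding\n",
				hids[i]);
			continue;	/* skip, do not fail */
		}

		registered++;	/* platform_device_alloc() would go here */
	}

	return registered;
}
```

The design choice mirrors the patch: partial enumeration beats total failure when only some children are unsupported.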
+57 -13
drivers/bus/imx-weim.c
···
  };

  #define MAX_CS_REGS_COUNT	6
+ #define MAX_CS_COUNT	6
+ #define OF_REG_SIZE	3
+
+ struct cs_timing {
+ 	bool is_applied;
+ 	u32 regs[MAX_CS_REGS_COUNT];
+ };
+
+ struct cs_timing_state {
+ 	struct cs_timing cs[MAX_CS_COUNT];
+ };

  static const struct of_device_id weim_id_table[] = {
  	/* i.MX1/21 */
···
  }

  /* Parse and set the timing for this device. */
- static int __init weim_timing_setup(struct device_node *np, void __iomem *base,
- 				    const struct imx_weim_devtype *devtype)
+ static int __init weim_timing_setup(struct device *dev,
+ 				    struct device_node *np, void __iomem *base,
+ 				    const struct imx_weim_devtype *devtype,
+ 				    struct cs_timing_state *ts)
  {
  	u32 cs_idx, value[MAX_CS_REGS_COUNT];
  	int i, ret;
+ 	int reg_idx, num_regs;
+ 	struct cs_timing *cst;

  	if (WARN_ON(devtype->cs_regs_count > MAX_CS_REGS_COUNT))
  		return -EINVAL;
-
- 	/* get the CS index from this child node's "reg" property. */
- 	ret = of_property_read_u32(np, "reg", &cs_idx);
- 	if (ret)
- 		return ret;
-
- 	if (cs_idx >= devtype->cs_count)
+ 	if (WARN_ON(devtype->cs_count > MAX_CS_COUNT))
  		return -EINVAL;

  	ret = of_property_read_u32_array(np, "fsl,weim-cs-timing",
···
  	if (ret)
  		return ret;

- 	/* set the timing for WEIM */
- 	for (i = 0; i < devtype->cs_regs_count; i++)
- 		writel(value[i], base + cs_idx * devtype->cs_stride + i * 4);
+ 	/*
+ 	 * the child node's "reg" property may contain multiple address ranges,
+ 	 * extract the chip select for each.
+ 	 */
+ 	num_regs = of_property_count_elems_of_size(np, "reg", OF_REG_SIZE);
+ 	if (num_regs < 0)
+ 		return num_regs;
+ 	if (!num_regs)
+ 		return -EINVAL;
+ 	for (reg_idx = 0; reg_idx < num_regs; reg_idx++) {
+ 		/* get the CS index from this child node's "reg" property. */
+ 		ret = of_property_read_u32_index(np, "reg",
+ 						 reg_idx * OF_REG_SIZE, &cs_idx);
+ 		if (ret)
+ 			break;
+
+ 		if (cs_idx >= devtype->cs_count)
+ 			return -EINVAL;
+
+ 		/* prevent re-configuring a CS that's already been configured */
+ 		cst = &ts->cs[cs_idx];
+ 		if (cst->is_applied && memcmp(value, cst->regs,
+ 					devtype->cs_regs_count * sizeof(u32))) {
+ 			dev_err(dev, "fsl,weim-cs-timing conflict on %pOF", np);
+ 			return -EINVAL;
+ 		}
+
+ 		/* set the timing for WEIM */
+ 		for (i = 0; i < devtype->cs_regs_count; i++)
+ 			writel(value[i],
+ 			       base + cs_idx * devtype->cs_stride + i * 4);
+ 		if (!cst->is_applied) {
+ 			cst->is_applied = true;
+ 			memcpy(cst->regs, value,
+ 			       devtype->cs_regs_count * sizeof(u32));
+ 		}
+ 	}

  	return 0;
  }
···
  	const struct imx_weim_devtype *devtype = of_id->data;
  	struct device_node *child;
  	int ret, have_child = 0;
+ 	struct cs_timing_state ts = {};

  	if (devtype == &imx50_weim_devtype) {
  		ret = imx_weim_gpr_setup(pdev);
···
  	}

  	for_each_available_child_of_node(pdev->dev.of_node, child) {
- 		ret = weim_timing_setup(child, base, devtype);
+ 		ret = weim_timing_setup(&pdev->dev, child, base, devtype, &ts);
  		if (ret)
  			dev_warn(&pdev->dev, "%pOF set timing failed.\n",
  				 child);
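The multi-chipselect parsing above can be sketched in userspace (hypothetical helper, not the OF API): each 3-cell "reg" entry names a chip select in its first cell, and a timing already applied to a CS must match any later application to the same CS.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define OF_REG_SIZE	3	/* cells per "reg" entry: cs, offset, size */
#define MAX_CS_COUNT	6

/* Per-chip-select record; one u32 stands in for the full register set. */
struct cs_state {
	bool applied;
	uint32_t timing;
};

/* Apply one timing word to every chip select named in a flat "reg"
 * array. Returns 0 on success, -1 on an out-of-range CS, a malformed
 * array, or a conflicting re-configuration of an already-programmed
 * CS (the fsl,weim-cs-timing conflict case in the driver). */
int apply_timing(const uint32_t *reg, size_t n_cells, uint32_t timing,
		 struct cs_state cs[MAX_CS_COUNT])
{
	if (n_cells == 0 || n_cells % OF_REG_SIZE)
		return -1;

	for (size_t i = 0; i < n_cells; i += OF_REG_SIZE) {
		uint32_t cs_idx = reg[i];	/* first cell is the CS */

		if (cs_idx >= MAX_CS_COUNT)
			return -1;
		if (cs[cs_idx].applied && cs[cs_idx].timing != timing)
			return -1;	/* conflicting timings for one CS */

		cs[cs_idx].applied = true;
		cs[cs_idx].timing = timing;	/* writel() in the driver */
	}

	return 0;
}
```

With the binding's example entries `<0 0 0x100>, <0 0x400000 0x800>, <1 0x400000 0x800>` this programs chip selects 0 and 1 once each, and would reject a second child that named CS 0 with different timing.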
+15
drivers/char/hw_random/Kconfig
···
  	  will be called exynos-trng.

  	  If unsure, say Y.
+
+ config HW_RANDOM_OPTEE
+ 	tristate "OP-TEE based Random Number Generator support"
+ 	depends on OPTEE
+ 	default HW_RANDOM
+ 	help
+ 	  This driver provides support for OP-TEE based Random Number
+ 	  Generator on ARM SoCs where hardware entropy sources are not
+ 	  accessible to normal world (Linux).
+
+ 	  To compile this driver as a module, choose M here: the module
+ 	  will be called optee-rng.
+
+ 	  If unsure, say Y.
+
  endif # HW_RANDOM

  config UML_RANDOM
+1
drivers/char/hw_random/Makefile
···
  obj-$(CONFIG_HW_RANDOM_MTK) += mtk-rng.o
  obj-$(CONFIG_HW_RANDOM_S390) += s390-trng.o
  obj-$(CONFIG_HW_RANDOM_KEYSTONE) += ks-sa-rng.o
+ obj-$(CONFIG_HW_RANDOM_OPTEE) += optee-rng.o
+306
drivers/char/hw_random/optee-rng.c
// SPDX-License-Identifier: GPL-2.0
/*
 * Copyright (C) 2018-2019 Linaro Ltd.
 */

#include <linux/delay.h>
#include <linux/of.h>
#include <linux/hw_random.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/tee_drv.h>
#include <linux/uuid.h>

#define DRIVER_NAME "optee-rng"

#define TEE_ERROR_HEALTH_TEST_FAIL	0x00000001

/*
 * TA_CMD_GET_ENTROPY - Get Entropy from RNG
 *
 * param[0] (inout memref) - Entropy buffer memory reference
 * param[1] unused
 * param[2] unused
 * param[3] unused
 *
 * Result:
 * TEE_SUCCESS - Invoke command success
 * TEE_ERROR_BAD_PARAMETERS - Incorrect input param
 * TEE_ERROR_NOT_SUPPORTED - Requested entropy size greater than size of pool
 * TEE_ERROR_HEALTH_TEST_FAIL - Continuous health testing failed
 */
#define TA_CMD_GET_ENTROPY		0x0

/*
 * TA_CMD_GET_RNG_INFO - Get RNG information
 *
 * param[0] (out value) - value.a: RNG data-rate in bytes per second
 *                        value.b: Quality/Entropy per 1024 bit of data
 * param[1] unused
 * param[2] unused
 * param[3] unused
 *
 * Result:
 * TEE_SUCCESS - Invoke command success
 * TEE_ERROR_BAD_PARAMETERS - Incorrect input param
 */
#define TA_CMD_GET_RNG_INFO		0x1

#define MAX_ENTROPY_REQ_SZ		(4 * 1024)

/**
 * struct optee_rng_private - OP-TEE Random Number Generator private data
 * @dev:		OP-TEE based RNG device.
 * @ctx:		OP-TEE context handler.
 * @session_id:		RNG TA session identifier.
 * @data_rate:		RNG data rate.
 * @entropy_shm_pool:	Memory pool shared with RNG device.
 * @optee_rng:		OP-TEE RNG driver structure.
 */
struct optee_rng_private {
	struct device *dev;
	struct tee_context *ctx;
	u32 session_id;
	u32 data_rate;
	struct tee_shm *entropy_shm_pool;
	struct hwrng optee_rng;
};

#define to_optee_rng_private(r) \
		container_of(r, struct optee_rng_private, optee_rng)

static size_t get_optee_rng_data(struct optee_rng_private *pvt_data,
				 void *buf, size_t req_size)
{
	int ret = 0;
	u8 *rng_data = NULL;
	size_t rng_size = 0;
	struct tee_ioctl_invoke_arg inv_arg;
	struct tee_param param[4];

	memset(&inv_arg, 0, sizeof(inv_arg));
	memset(&param, 0, sizeof(param));

	/* Invoke TA_CMD_GET_ENTROPY function of Trusted App */
	inv_arg.func = TA_CMD_GET_ENTROPY;
	inv_arg.session = pvt_data->session_id;
	inv_arg.num_params = 4;

	/* Fill invoke cmd params */
	param[0].attr = TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INOUT;
	param[0].u.memref.shm = pvt_data->entropy_shm_pool;
	param[0].u.memref.size = req_size;
	param[0].u.memref.shm_offs = 0;

	ret = tee_client_invoke_func(pvt_data->ctx, &inv_arg, param);
	if ((ret < 0) || (inv_arg.ret != 0)) {
		dev_err(pvt_data->dev, "TA_CMD_GET_ENTROPY invoke err: %x\n",
			inv_arg.ret);
		return 0;
	}

	rng_data = tee_shm_get_va(pvt_data->entropy_shm_pool, 0);
	if (IS_ERR(rng_data)) {
		dev_err(pvt_data->dev, "tee_shm_get_va failed\n");
		return 0;
	}

	rng_size = param[0].u.memref.size;
	memcpy(buf, rng_data, rng_size);

	return rng_size;
}

static int optee_rng_read(struct hwrng *rng, void *buf, size_t max, bool wait)
{
	struct optee_rng_private *pvt_data = to_optee_rng_private(rng);
	size_t read = 0, rng_size = 0;
	int timeout = 1;
	u8 *data = buf;

	if (max > MAX_ENTROPY_REQ_SZ)
		max = MAX_ENTROPY_REQ_SZ;

	while (read == 0) {
		rng_size = get_optee_rng_data(pvt_data, data, (max - read));

		data += rng_size;
		read += rng_size;

		if (wait) {
			if (timeout-- == 0)
				return read;
			msleep((1000 * (max - read)) / pvt_data->data_rate);
		} else {
			return read;
		}
	}

	return read;
}

static int optee_rng_init(struct hwrng *rng)
{
	struct optee_rng_private *pvt_data = to_optee_rng_private(rng);
	struct tee_shm *entropy_shm_pool = NULL;

	entropy_shm_pool = tee_shm_alloc(pvt_data->ctx, MAX_ENTROPY_REQ_SZ,
					 TEE_SHM_MAPPED | TEE_SHM_DMA_BUF);
	if (IS_ERR(entropy_shm_pool)) {
		dev_err(pvt_data->dev, "tee_shm_alloc failed\n");
		return PTR_ERR(entropy_shm_pool);
	}

	pvt_data->entropy_shm_pool = entropy_shm_pool;

	return 0;
}

static void optee_rng_cleanup(struct hwrng *rng)
{
	struct optee_rng_private *pvt_data = to_optee_rng_private(rng);

	tee_shm_free(pvt_data->entropy_shm_pool);
}

static struct optee_rng_private pvt_data = {
	.optee_rng = {
		.name		= DRIVER_NAME,
		.init		= optee_rng_init,
		.cleanup	= optee_rng_cleanup,
		.read		= optee_rng_read,
	}
};

static int get_optee_rng_info(struct device *dev)
{
	int ret = 0;
	struct tee_ioctl_invoke_arg inv_arg;
	struct tee_param param[4];

	memset(&inv_arg, 0, sizeof(inv_arg));
	memset(&param, 0, sizeof(param));

	/* Invoke TA_CMD_GET_RNG_INFO function of Trusted App */
	inv_arg.func = TA_CMD_GET_RNG_INFO;
	inv_arg.session = pvt_data.session_id;
	inv_arg.num_params = 4;

	/* Fill invoke cmd params */
	param[0].attr = TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_OUTPUT;

	ret = tee_client_invoke_func(pvt_data.ctx, &inv_arg, param);
	if ((ret < 0) || (inv_arg.ret != 0)) {
		dev_err(dev, "TA_CMD_GET_RNG_INFO invoke err: %x\n",
			inv_arg.ret);
		return -EINVAL;
	}

	pvt_data.data_rate = param[0].u.value.a;
	pvt_data.optee_rng.quality = param[0].u.value.b;

	return 0;
}

static int optee_ctx_match(struct tee_ioctl_version_data *ver, const void *data)
{
	if (ver->impl_id == TEE_IMPL_ID_OPTEE)
		return 1;
	else
		return 0;
}

static int optee_rng_probe(struct device *dev)
{
	struct tee_client_device *rng_device = to_tee_client_device(dev);
	int ret = 0, err = -ENODEV;
	struct tee_ioctl_open_session_arg sess_arg;

	memset(&sess_arg, 0, sizeof(sess_arg));

	/* Open context with TEE driver */
	pvt_data.ctx = tee_client_open_context(NULL, optee_ctx_match, NULL,
					       NULL);
	if (IS_ERR(pvt_data.ctx))
		return -ENODEV;

	/* Open session with hwrng Trusted App */
	memcpy(sess_arg.uuid, rng_device->id.uuid.b, TEE_IOCTL_UUID_LEN);
	sess_arg.clnt_login = TEE_IOCTL_LOGIN_PUBLIC;
	sess_arg.num_params = 0;

	ret = tee_client_open_session(pvt_data.ctx, &sess_arg, NULL);
	if ((ret < 0) || (sess_arg.ret != 0)) {
		dev_err(dev, "tee_client_open_session failed, err: %x\n",
			sess_arg.ret);
		err = -EINVAL;
		goto out_ctx;
	}
	pvt_data.session_id = sess_arg.session;

	err = get_optee_rng_info(dev);
	if (err)
		goto out_sess;

	err = hwrng_register(&pvt_data.optee_rng);
	if (err) {
		dev_err(dev, "hwrng registration failed (%d)\n", err);
		goto out_sess;
	}

	pvt_data.dev = dev;

	return 0;

out_sess:
	tee_client_close_session(pvt_data.ctx, pvt_data.session_id);
out_ctx:
	tee_client_close_context(pvt_data.ctx);

	return err;
}

static int optee_rng_remove(struct device *dev)
{
	hwrng_unregister(&pvt_data.optee_rng);
	tee_client_close_session(pvt_data.ctx, pvt_data.session_id);
	tee_client_close_context(pvt_data.ctx);

	return 0;
}

static const struct tee_client_device_id optee_rng_id_table[] = {
	{UUID_INIT(0xab7a617c, 0xb8e7, 0x4d8f,
		   0x83, 0x01, 0xd0, 0x9b, 0x61, 0x03, 0x6b, 0x64)},
	{}
};

MODULE_DEVICE_TABLE(tee, optee_rng_id_table);

static struct tee_client_driver optee_rng_driver = {
	.id_table	= optee_rng_id_table,
	.driver		= {
		.name		= DRIVER_NAME,
		.bus		= &tee_bus_type,
		.probe		= optee_rng_probe,
		.remove		= optee_rng_remove,
	},
};

static int __init optee_rng_mod_init(void)
{
	return driver_register(&optee_rng_driver.driver);
}

static void __exit optee_rng_mod_exit(void)
{
	driver_unregister(&optee_rng_driver.driver);
}

module_init(optee_rng_mod_init);
module_exit(optee_rng_mod_exit);

MODULE_LICENSE("GPL v2");
MODULE_AUTHOR("Sumit Garg <sumit.garg@linaro.org>");
MODULE_DESCRIPTION("OP-TEE based random number generator driver");
+5
drivers/clk/tegra/Kconfig
···
5 5 config CLK_TEGRA_BPMP
6 6 def_bool y
7 7 depends on TEGRA_BPMP
8 +
9 + config TEGRA_CLK_DFLL
10 + depends on ARCH_TEGRA_124_SOC || ARCH_TEGRA_210_SOC
11 + select PM_OPP
12 + def_bool y
+1 -1
drivers/clk/tegra/Makefile
···
20 20 obj-$(CONFIG_ARCH_TEGRA_3x_SOC) += clk-tegra30.o
21 21 obj-$(CONFIG_ARCH_TEGRA_114_SOC) += clk-tegra114.o
22 22 obj-$(CONFIG_ARCH_TEGRA_124_SOC) += clk-tegra124.o
23 - obj-$(CONFIG_ARCH_TEGRA_124_SOC) += clk-tegra124-dfll-fcpu.o
23 + obj-$(CONFIG_TEGRA_CLK_DFLL) += clk-tegra124-dfll-fcpu.o
24 24 obj-$(CONFIG_ARCH_TEGRA_132_SOC) += clk-tegra124.o
25 25 obj-y += cvb.o
26 26 obj-$(CONFIG_ARCH_TEGRA_210_SOC) += clk-tegra210.o
+387 -72
drivers/clk/tegra/clk-dfll.c
··· 1 1 /* 2 2 * clk-dfll.c - Tegra DFLL clock source common code 3 3 * 4 - * Copyright (C) 2012-2014 NVIDIA Corporation. All rights reserved. 4 + * Copyright (C) 2012-2019 NVIDIA Corporation. All rights reserved. 5 5 * 6 6 * Aleksandr Frid <afrid@nvidia.com> 7 7 * Paul Walmsley <pwalmsley@nvidia.com> ··· 47 47 #include <linux/kernel.h> 48 48 #include <linux/module.h> 49 49 #include <linux/of.h> 50 + #include <linux/pinctrl/consumer.h> 50 51 #include <linux/pm_opp.h> 51 52 #include <linux/pm_runtime.h> 52 53 #include <linux/regmap.h> ··· 244 243 DFLL_TUNE_LOW = 1, 245 244 }; 246 245 246 + 247 + enum tegra_dfll_pmu_if { 248 + TEGRA_DFLL_PMU_I2C = 0, 249 + TEGRA_DFLL_PMU_PWM = 1, 250 + }; 251 + 247 252 /** 248 253 * struct dfll_rate_req - target DFLL rate request data 249 254 * @rate: target frequency, after the postscaling ··· 307 300 u32 i2c_reg; 308 301 u32 i2c_slave_addr; 309 302 310 - /* i2c_lut array entries are regulator framework selectors */ 311 - unsigned i2c_lut[MAX_DFLL_VOLTAGES]; 312 - int i2c_lut_size; 313 - u8 lut_min, lut_max, lut_safe; 303 + /* lut array entries are regulator framework selectors or PWM values*/ 304 + unsigned lut[MAX_DFLL_VOLTAGES]; 305 + unsigned long lut_uv[MAX_DFLL_VOLTAGES]; 306 + int lut_size; 307 + u8 lut_bottom, lut_min, lut_max, lut_safe; 308 + 309 + /* PWM interface */ 310 + enum tegra_dfll_pmu_if pmu_if; 311 + unsigned long pwm_rate; 312 + struct pinctrl *pwm_pin; 313 + struct pinctrl_state *pwm_enable_state; 314 + struct pinctrl_state *pwm_disable_state; 315 + u32 reg_init_uV; 314 316 }; 315 317 316 318 #define clk_hw_to_dfll(_hw) container_of(_hw, struct tegra_dfll, dfll_clk_hw) ··· 506 490 } 507 491 508 492 /* 493 + * DVCO rate control 494 + */ 495 + 496 + static unsigned long get_dvco_rate_below(struct tegra_dfll *td, u8 out_min) 497 + { 498 + struct dev_pm_opp *opp; 499 + unsigned long rate, prev_rate; 500 + unsigned long uv, min_uv; 501 + 502 + min_uv = td->lut_uv[out_min]; 503 + for (rate = 0, prev_rate = 0; ; 
rate++) { 504 + opp = dev_pm_opp_find_freq_ceil(td->soc->dev, &rate); 505 + if (IS_ERR(opp)) 506 + break; 507 + 508 + uv = dev_pm_opp_get_voltage(opp); 509 + dev_pm_opp_put(opp); 510 + 511 + if (uv && uv > min_uv) 512 + return prev_rate; 513 + 514 + prev_rate = rate; 515 + } 516 + 517 + return prev_rate; 518 + } 519 + 520 + /* 509 521 * DFLL-to-I2C controller interface 510 522 */ 511 523 ··· 562 518 return 0; 563 519 } 564 520 521 + 522 + /* 523 + * DFLL-to-PWM controller interface 524 + */ 525 + 526 + /** 527 + * dfll_pwm_set_output_enabled - enable/disable PWM voltage requests 528 + * @td: DFLL instance 529 + * @enable: whether to enable or disable the PWM voltage requests 530 + * 531 + * Set the master enable control for PWM control value updates. If disabled, 532 + * then the PWM signal is not driven. Also configure the PWM output pad 533 + * to the appropriate state. 534 + */ 535 + static int dfll_pwm_set_output_enabled(struct tegra_dfll *td, bool enable) 536 + { 537 + int ret; 538 + u32 val, div; 539 + 540 + if (enable) { 541 + ret = pinctrl_select_state(td->pwm_pin, td->pwm_enable_state); 542 + if (ret < 0) { 543 + dev_err(td->dev, "setting enable state failed\n"); 544 + return -EINVAL; 545 + } 546 + val = dfll_readl(td, DFLL_OUTPUT_CFG); 547 + val &= ~DFLL_OUTPUT_CFG_PWM_DIV_MASK; 548 + div = DIV_ROUND_UP(td->ref_rate, td->pwm_rate); 549 + val |= (div << DFLL_OUTPUT_CFG_PWM_DIV_SHIFT) 550 + & DFLL_OUTPUT_CFG_PWM_DIV_MASK; 551 + dfll_writel(td, val, DFLL_OUTPUT_CFG); 552 + dfll_wmb(td); 553 + 554 + val |= DFLL_OUTPUT_CFG_PWM_ENABLE; 555 + dfll_writel(td, val, DFLL_OUTPUT_CFG); 556 + dfll_wmb(td); 557 + } else { 558 + ret = pinctrl_select_state(td->pwm_pin, td->pwm_disable_state); 559 + if (ret < 0) 560 + dev_warn(td->dev, "setting disable state failed\n"); 561 + 562 + val = dfll_readl(td, DFLL_OUTPUT_CFG); 563 + val &= ~DFLL_OUTPUT_CFG_PWM_ENABLE; 564 + dfll_writel(td, val, DFLL_OUTPUT_CFG); 565 + dfll_wmb(td); 566 + } 567 + 568 + return 0; 569 + } 570 + 571 
+ /**
572 + * dfll_set_force_output_value - set fixed value for force output
573 + * @td: DFLL instance
574 + * @out_val: value to force output
575 + *
576 + * Set the fixed value for force output, DFLL will output this value when
577 + * force output is enabled.
578 + */
579 + static u32 dfll_set_force_output_value(struct tegra_dfll *td, u8 out_val)
580 + {
581 + u32 val = dfll_readl(td, DFLL_OUTPUT_FORCE);
582 +
583 + val = (val & DFLL_OUTPUT_FORCE_ENABLE) | (out_val & OUT_MASK);
584 + dfll_writel(td, val, DFLL_OUTPUT_FORCE);
585 + dfll_wmb(td);
586 +
587 + return dfll_readl(td, DFLL_OUTPUT_FORCE);
588 + }
589 +
590 + /**
591 + * dfll_set_force_output_enabled - enable/disable force output
592 + * @td: DFLL instance
593 + * @enable: whether to enable or disable the force output
594 + *
595 + * Set the enable control for force output with fixed value.
596 + */
597 + static void dfll_set_force_output_enabled(struct tegra_dfll *td, bool enable)
598 + {
599 + u32 val = dfll_readl(td, DFLL_OUTPUT_FORCE);
600 +
601 + if (enable)
602 + val |= DFLL_OUTPUT_FORCE_ENABLE;
603 + else
604 + val &= ~DFLL_OUTPUT_FORCE_ENABLE;
605 +
606 + dfll_writel(td, val, DFLL_OUTPUT_FORCE);
607 + dfll_wmb(td);
608 + }
609 +
610 + /**
611 + * dfll_force_output - force output a fixed value
612 + * @td: DFLL instance
613 + * @out_sel: value to force output
614 + *
615 + * Set the fixed value for force output, DFLL will output this value. 
616 + */ 617 + static int dfll_force_output(struct tegra_dfll *td, unsigned int out_sel) 618 + { 619 + u32 val; 620 + 621 + if (out_sel > OUT_MASK) 622 + return -EINVAL; 623 + 624 + val = dfll_set_force_output_value(td, out_sel); 625 + if ((td->mode < DFLL_CLOSED_LOOP) && 626 + !(val & DFLL_OUTPUT_FORCE_ENABLE)) { 627 + dfll_set_force_output_enabled(td, true); 628 + } 629 + 630 + return 0; 631 + } 632 + 565 633 /** 566 634 * dfll_load_lut - load the voltage lookup table 567 635 * @td: struct tegra_dfll * ··· 695 539 lut_index = i; 696 540 697 541 val = regulator_list_hardware_vsel(td->vdd_reg, 698 - td->i2c_lut[lut_index]); 542 + td->lut[lut_index]); 699 543 __raw_writel(val, td->lut_base + i * 4); 700 544 } 701 545 ··· 750 594 { 751 595 u32 val; 752 596 753 - td->lut_min = 0; 754 - td->lut_max = td->i2c_lut_size - 1; 755 - td->lut_safe = td->lut_min + 1; 597 + td->lut_min = td->lut_bottom; 598 + td->lut_max = td->lut_size - 1; 599 + td->lut_safe = td->lut_min + (td->lut_min < td->lut_max ? 
1 : 0); 756 600 757 - dfll_i2c_writel(td, 0, DFLL_OUTPUT_CFG); 601 + /* clear DFLL_OUTPUT_CFG before setting new value */ 602 + dfll_writel(td, 0, DFLL_OUTPUT_CFG); 603 + dfll_wmb(td); 604 + 758 605 val = (td->lut_safe << DFLL_OUTPUT_CFG_SAFE_SHIFT) | 759 - (td->lut_max << DFLL_OUTPUT_CFG_MAX_SHIFT) | 760 - (td->lut_min << DFLL_OUTPUT_CFG_MIN_SHIFT); 761 - dfll_i2c_writel(td, val, DFLL_OUTPUT_CFG); 762 - dfll_i2c_wmb(td); 606 + (td->lut_max << DFLL_OUTPUT_CFG_MAX_SHIFT) | 607 + (td->lut_min << DFLL_OUTPUT_CFG_MIN_SHIFT); 608 + dfll_writel(td, val, DFLL_OUTPUT_CFG); 609 + dfll_wmb(td); 763 610 764 611 dfll_writel(td, 0, DFLL_OUTPUT_FORCE); 765 612 dfll_i2c_writel(td, 0, DFLL_INTR_EN); 766 613 dfll_i2c_writel(td, DFLL_INTR_MAX_MASK | DFLL_INTR_MIN_MASK, 767 614 DFLL_INTR_STS); 768 615 769 - dfll_load_i2c_lut(td); 770 - dfll_init_i2c_if(td); 616 + if (td->pmu_if == TEGRA_DFLL_PMU_PWM) { 617 + u32 vinit = td->reg_init_uV; 618 + int vstep = td->soc->alignment.step_uv; 619 + unsigned long vmin = td->lut_uv[0]; 620 + 621 + /* set initial voltage */ 622 + if ((vinit >= vmin) && vstep) { 623 + unsigned int vsel; 624 + 625 + vsel = DIV_ROUND_UP((vinit - vmin), vstep); 626 + dfll_force_output(td, vsel); 627 + } 628 + } else { 629 + dfll_load_i2c_lut(td); 630 + dfll_init_i2c_if(td); 631 + } 771 632 } 772 633 773 634 /* ··· 804 631 static int find_lut_index_for_rate(struct tegra_dfll *td, unsigned long rate) 805 632 { 806 633 struct dev_pm_opp *opp; 807 - int i, uv; 634 + int i, align_step; 808 635 809 636 opp = dev_pm_opp_find_freq_ceil(td->soc->dev, &rate); 810 637 if (IS_ERR(opp)) 811 638 return PTR_ERR(opp); 812 639 813 - uv = dev_pm_opp_get_voltage(opp); 640 + align_step = dev_pm_opp_get_voltage(opp) / td->soc->alignment.step_uv; 814 641 dev_pm_opp_put(opp); 815 642 816 - for (i = 0; i < td->i2c_lut_size; i++) { 817 - if (regulator_list_voltage(td->vdd_reg, td->i2c_lut[i]) == uv) 643 + for (i = td->lut_bottom; i < td->lut_size; i++) { 644 + if ((td->lut_uv[i] / 
td->soc->alignment.step_uv) >= align_step) 818 645 return i; 819 646 } 820 647 ··· 1036 863 return -EINVAL; 1037 864 } 1038 865 1039 - dfll_i2c_set_output_enabled(td, true); 866 + if (td->pmu_if == TEGRA_DFLL_PMU_PWM) 867 + dfll_pwm_set_output_enabled(td, true); 868 + else 869 + dfll_i2c_set_output_enabled(td, true); 870 + 1040 871 dfll_set_mode(td, DFLL_CLOSED_LOOP); 1041 872 dfll_set_frequency_request(td, req); 873 + dfll_set_force_output_enabled(td, false); 1042 874 return 0; 1043 875 1044 876 default: ··· 1067 889 case DFLL_CLOSED_LOOP: 1068 890 dfll_set_open_loop_config(td); 1069 891 dfll_set_mode(td, DFLL_OPEN_LOOP); 1070 - dfll_i2c_set_output_enabled(td, false); 892 + if (td->pmu_if == TEGRA_DFLL_PMU_PWM) 893 + dfll_pwm_set_output_enabled(td, false); 894 + else 895 + dfll_i2c_set_output_enabled(td, false); 1071 896 return 0; 1072 897 1073 898 case DFLL_OPEN_LOOP: ··· 1352 1171 seq_printf(s, "[0x%02x] = 0x%08x\n", offs, 1353 1172 dfll_i2c_readl(td, offs)); 1354 1173 1355 - seq_puts(s, "\nINTEGRATED I2C CONTROLLER REGISTERS:\n"); 1356 - offs = DFLL_I2C_CLK_DIVISOR; 1357 - seq_printf(s, "[0x%02x] = 0x%08x\n", offs, 1358 - __raw_readl(td->i2c_controller_base + offs)); 1359 - 1360 - seq_puts(s, "\nLUT:\n"); 1361 - for (offs = 0; offs < 4 * MAX_DFLL_VOLTAGES; offs += 4) 1174 + if (td->pmu_if == TEGRA_DFLL_PMU_I2C) { 1175 + seq_puts(s, "\nINTEGRATED I2C CONTROLLER REGISTERS:\n"); 1176 + offs = DFLL_I2C_CLK_DIVISOR; 1362 1177 seq_printf(s, "[0x%02x] = 0x%08x\n", offs, 1363 - __raw_readl(td->lut_base + offs)); 1178 + __raw_readl(td->i2c_controller_base + offs)); 1179 + 1180 + seq_puts(s, "\nLUT:\n"); 1181 + for (offs = 0; offs < 4 * MAX_DFLL_VOLTAGES; offs += 4) 1182 + seq_printf(s, "[0x%02x] = 0x%08x\n", offs, 1183 + __raw_readl(td->lut_base + offs)); 1184 + } 1364 1185 1365 1186 return 0; 1366 1187 } ··· 1532 1349 */ 1533 1350 static int find_vdd_map_entry_exact(struct tegra_dfll *td, int uV) 1534 1351 { 1535 - int i, n_voltages, reg_uV; 1352 + int i, n_voltages, 
reg_uV,reg_volt_id, align_step; 1536 1353 1354 + if (WARN_ON(td->pmu_if == TEGRA_DFLL_PMU_PWM)) 1355 + return -EINVAL; 1356 + 1357 + align_step = uV / td->soc->alignment.step_uv; 1537 1358 n_voltages = regulator_count_voltages(td->vdd_reg); 1538 1359 for (i = 0; i < n_voltages; i++) { 1539 1360 reg_uV = regulator_list_voltage(td->vdd_reg, i); 1540 1361 if (reg_uV < 0) 1541 1362 break; 1542 1363 1543 - if (uV == reg_uV) 1364 + reg_volt_id = reg_uV / td->soc->alignment.step_uv; 1365 + 1366 + if (align_step == reg_volt_id) 1544 1367 return i; 1545 1368 } 1546 1369 ··· 1560 1371 * */ 1561 1372 static int find_vdd_map_entry_min(struct tegra_dfll *td, int uV) 1562 1373 { 1563 - int i, n_voltages, reg_uV; 1374 + int i, n_voltages, reg_uV, reg_volt_id, align_step; 1564 1375 1376 + if (WARN_ON(td->pmu_if == TEGRA_DFLL_PMU_PWM)) 1377 + return -EINVAL; 1378 + 1379 + align_step = uV / td->soc->alignment.step_uv; 1565 1380 n_voltages = regulator_count_voltages(td->vdd_reg); 1566 1381 for (i = 0; i < n_voltages; i++) { 1567 1382 reg_uV = regulator_list_voltage(td->vdd_reg, i); 1568 1383 if (reg_uV < 0) 1569 1384 break; 1570 1385 1571 - if (uV <= reg_uV) 1386 + reg_volt_id = reg_uV / td->soc->alignment.step_uv; 1387 + 1388 + if (align_step <= reg_volt_id) 1572 1389 return i; 1573 1390 } 1574 1391 ··· 1582 1387 return -EINVAL; 1583 1388 } 1584 1389 1390 + /* 1391 + * dfll_build_pwm_lut - build the PWM regulator lookup table 1392 + * @td: DFLL instance 1393 + * @v_max: Vmax from OPP table 1394 + * 1395 + * Look-up table in h/w is ignored when PWM is used as DFLL interface to PMIC. 1396 + * In this case closed loop output is controlling duty cycle directly. The s/w 1397 + * look-up that maps PWM duty cycle to voltage is still built by this function. 
1398 + */ 1399 + static int dfll_build_pwm_lut(struct tegra_dfll *td, unsigned long v_max) 1400 + { 1401 + int i; 1402 + unsigned long rate, reg_volt; 1403 + u8 lut_bottom = MAX_DFLL_VOLTAGES; 1404 + int v_min = td->soc->cvb->min_millivolts * 1000; 1405 + 1406 + for (i = 0; i < MAX_DFLL_VOLTAGES; i++) { 1407 + reg_volt = td->lut_uv[i]; 1408 + 1409 + /* since opp voltage is exact mv */ 1410 + reg_volt = (reg_volt / 1000) * 1000; 1411 + if (reg_volt > v_max) 1412 + break; 1413 + 1414 + td->lut[i] = i; 1415 + if ((lut_bottom == MAX_DFLL_VOLTAGES) && (reg_volt >= v_min)) 1416 + lut_bottom = i; 1417 + } 1418 + 1419 + /* determine voltage boundaries */ 1420 + td->lut_size = i; 1421 + if ((lut_bottom == MAX_DFLL_VOLTAGES) || 1422 + (lut_bottom + 1 >= td->lut_size)) { 1423 + dev_err(td->dev, "no voltage above DFLL minimum %d mV\n", 1424 + td->soc->cvb->min_millivolts); 1425 + return -EINVAL; 1426 + } 1427 + td->lut_bottom = lut_bottom; 1428 + 1429 + /* determine rate boundaries */ 1430 + rate = get_dvco_rate_below(td, td->lut_bottom); 1431 + if (!rate) { 1432 + dev_err(td->dev, "no opp below DFLL minimum voltage %d mV\n", 1433 + td->soc->cvb->min_millivolts); 1434 + return -EINVAL; 1435 + } 1436 + td->dvco_rate_min = rate; 1437 + 1438 + return 0; 1439 + } 1440 + 1585 1441 /** 1586 1442 * dfll_build_i2c_lut - build the I2C voltage register lookup table 1587 1443 * @td: DFLL instance 1444 + * @v_max: Vmax from OPP table 1588 1445 * 1589 1446 * The DFLL hardware has 33 bytes of look-up table RAM that must be filled with 1590 1447 * PMIC voltage register values that span the entire DFLL operating range. ··· 1644 1397 * the soc-specific platform driver (td->soc->opp_dev) and the PMIC 1645 1398 * register-to-voltage mapping queried from the regulator framework. 1646 1399 * 1647 - * On success, fills in td->i2c_lut and returns 0, or -err on failure. 1400 + * On success, fills in td->lut and returns 0, or -err on failure. 
1648 1401 */ 1649 - static int dfll_build_i2c_lut(struct tegra_dfll *td) 1402 + static int dfll_build_i2c_lut(struct tegra_dfll *td, unsigned long v_max) 1650 1403 { 1404 + unsigned long rate, v, v_opp; 1651 1405 int ret = -EINVAL; 1652 - int j, v, v_max, v_opp; 1653 - int selector; 1654 - unsigned long rate; 1655 - struct dev_pm_opp *opp; 1656 - int lut; 1657 - 1658 - rate = ULONG_MAX; 1659 - opp = dev_pm_opp_find_freq_floor(td->soc->dev, &rate); 1660 - if (IS_ERR(opp)) { 1661 - dev_err(td->dev, "couldn't get vmax opp, empty opp table?\n"); 1662 - goto out; 1663 - } 1664 - v_max = dev_pm_opp_get_voltage(opp); 1665 - dev_pm_opp_put(opp); 1406 + int j, selector, lut; 1666 1407 1667 1408 v = td->soc->cvb->min_millivolts * 1000; 1668 1409 lut = find_vdd_map_entry_exact(td, v); 1669 1410 if (lut < 0) 1670 1411 goto out; 1671 - td->i2c_lut[0] = lut; 1412 + td->lut[0] = lut; 1413 + td->lut_bottom = 0; 1672 1414 1673 1415 for (j = 1, rate = 0; ; rate++) { 1416 + struct dev_pm_opp *opp; 1417 + 1674 1418 opp = dev_pm_opp_find_freq_ceil(td->soc->dev, &rate); 1675 1419 if (IS_ERR(opp)) 1676 1420 break; ··· 1673 1435 dev_pm_opp_put(opp); 1674 1436 1675 1437 for (;;) { 1676 - v += max(1, (v_max - v) / (MAX_DFLL_VOLTAGES - j)); 1438 + v += max(1UL, (v_max - v) / (MAX_DFLL_VOLTAGES - j)); 1677 1439 if (v >= v_opp) 1678 1440 break; 1679 1441 1680 1442 selector = find_vdd_map_entry_min(td, v); 1681 1443 if (selector < 0) 1682 1444 goto out; 1683 - if (selector != td->i2c_lut[j - 1]) 1684 - td->i2c_lut[j++] = selector; 1445 + if (selector != td->lut[j - 1]) 1446 + td->lut[j++] = selector; 1685 1447 } 1686 1448 1687 1449 v = (j == MAX_DFLL_VOLTAGES - 1) ? 
v_max : v_opp; 1688 1450 selector = find_vdd_map_entry_exact(td, v); 1689 1451 if (selector < 0) 1690 1452 goto out; 1691 - if (selector != td->i2c_lut[j - 1]) 1692 - td->i2c_lut[j++] = selector; 1453 + if (selector != td->lut[j - 1]) 1454 + td->lut[j++] = selector; 1693 1455 1694 1456 if (v >= v_max) 1695 1457 break; 1696 1458 } 1697 - td->i2c_lut_size = j; 1459 + td->lut_size = j; 1698 1460 1699 1461 if (!td->dvco_rate_min) 1700 1462 dev_err(td->dev, "no opp above DFLL minimum voltage %d mV\n", 1701 1463 td->soc->cvb->min_millivolts); 1702 - else 1464 + else { 1703 1465 ret = 0; 1466 + for (j = 0; j < td->lut_size; j++) 1467 + td->lut_uv[j] = 1468 + regulator_list_voltage(td->vdd_reg, 1469 + td->lut[j]); 1470 + } 1704 1471 1705 1472 out: 1706 1473 return ret; 1474 + } 1475 + 1476 + static int dfll_build_lut(struct tegra_dfll *td) 1477 + { 1478 + unsigned long rate, v_max; 1479 + struct dev_pm_opp *opp; 1480 + 1481 + rate = ULONG_MAX; 1482 + opp = dev_pm_opp_find_freq_floor(td->soc->dev, &rate); 1483 + if (IS_ERR(opp)) { 1484 + dev_err(td->dev, "couldn't get vmax opp, empty opp table?\n"); 1485 + return -EINVAL; 1486 + } 1487 + v_max = dev_pm_opp_get_voltage(opp); 1488 + dev_pm_opp_put(opp); 1489 + 1490 + if (td->pmu_if == TEGRA_DFLL_PMU_PWM) 1491 + return dfll_build_pwm_lut(td, v_max); 1492 + else 1493 + return dfll_build_i2c_lut(td, v_max); 1707 1494 } 1708 1495 1709 1496 /** ··· 1789 1526 } 1790 1527 td->i2c_reg = vsel_reg; 1791 1528 1792 - ret = dfll_build_i2c_lut(td); 1793 - if (ret) { 1794 - dev_err(td->dev, "couldn't build I2C LUT\n"); 1529 + return 0; 1530 + } 1531 + 1532 + static int dfll_fetch_pwm_params(struct tegra_dfll *td) 1533 + { 1534 + int ret, i; 1535 + u32 pwm_period; 1536 + 1537 + if (!td->soc->alignment.step_uv || !td->soc->alignment.offset_uv) { 1538 + dev_err(td->dev, 1539 + "Missing step or alignment info for PWM regulator"); 1540 + return -EINVAL; 1541 + } 1542 + for (i = 0; i < MAX_DFLL_VOLTAGES; i++) 1543 + td->lut_uv[i] = 
td->soc->alignment.offset_uv + 1544 + i * td->soc->alignment.step_uv; 1545 + 1546 + ret = read_dt_param(td, "nvidia,pwm-tristate-microvolts", 1547 + &td->reg_init_uV); 1548 + if (!ret) { 1549 + dev_err(td->dev, "couldn't get initialized voltage\n"); 1795 1550 return ret; 1551 + } 1552 + 1553 + ret = read_dt_param(td, "nvidia,pwm-period-nanoseconds", &pwm_period); 1554 + if (!ret) { 1555 + dev_err(td->dev, "couldn't get PWM period\n"); 1556 + return ret; 1557 + } 1558 + td->pwm_rate = (NSEC_PER_SEC / pwm_period) * (MAX_DFLL_VOLTAGES - 1); 1559 + 1560 + td->pwm_pin = devm_pinctrl_get(td->dev); 1561 + if (IS_ERR(td->pwm_pin)) { 1562 + dev_err(td->dev, "DT: missing pinctrl device\n"); 1563 + return PTR_ERR(td->pwm_pin); 1564 + } 1565 + 1566 + td->pwm_enable_state = pinctrl_lookup_state(td->pwm_pin, 1567 + "dvfs_pwm_enable"); 1568 + if (IS_ERR(td->pwm_enable_state)) { 1569 + dev_err(td->dev, "DT: missing pwm enabled state\n"); 1570 + return PTR_ERR(td->pwm_enable_state); 1571 + } 1572 + 1573 + td->pwm_disable_state = pinctrl_lookup_state(td->pwm_pin, 1574 + "dvfs_pwm_disable"); 1575 + if (IS_ERR(td->pwm_disable_state)) { 1576 + dev_err(td->dev, "DT: missing pwm disabled state\n"); 1577 + return PTR_ERR(td->pwm_disable_state); 1796 1578 } 1797 1579 1798 1580 return 0; ··· 1905 1597 1906 1598 td->soc = soc; 1907 1599 1908 - td->vdd_reg = devm_regulator_get(td->dev, "vdd-cpu"); 1909 - if (IS_ERR(td->vdd_reg)) { 1910 - ret = PTR_ERR(td->vdd_reg); 1911 - if (ret != -EPROBE_DEFER) 1912 - dev_err(td->dev, "couldn't get vdd_cpu regulator: %d\n", 1913 - ret); 1914 - 1915 - return ret; 1916 - } 1917 - 1918 1600 td->dvco_rst = devm_reset_control_get(td->dev, "dvco"); 1919 1601 if (IS_ERR(td->dvco_rst)) { 1920 1602 dev_err(td->dev, "couldn't get dvco reset\n"); ··· 1917 1619 return ret; 1918 1620 } 1919 1621 1920 - ret = dfll_fetch_i2c_params(td); 1622 + if (of_property_read_bool(td->dev->of_node, "nvidia,pwm-to-pmic")) { 1623 + td->pmu_if = TEGRA_DFLL_PMU_PWM; 1624 + ret = 
dfll_fetch_pwm_params(td); 1625 + } else { 1626 + td->vdd_reg = devm_regulator_get(td->dev, "vdd-cpu"); 1627 + if (IS_ERR(td->vdd_reg)) { 1628 + dev_err(td->dev, "couldn't get vdd_cpu regulator\n"); 1629 + return PTR_ERR(td->vdd_reg); 1630 + } 1631 + td->pmu_if = TEGRA_DFLL_PMU_I2C; 1632 + ret = dfll_fetch_i2c_params(td); 1633 + } 1921 1634 if (ret) 1922 1635 return ret; 1636 + 1637 + ret = dfll_build_lut(td); 1638 + if (ret) { 1639 + dev_err(td->dev, "couldn't build LUT\n"); 1640 + return ret; 1641 + } 1923 1642 1924 1643 mem = platform_get_resource(pdev, IORESOURCE_MEM, 0); 1925 1644 if (!mem) {
+5 -1
drivers/clk/tegra/clk-dfll.h
··· 1 1 /* 2 2 * clk-dfll.h - prototypes and macros for the Tegra DFLL clocksource driver 3 - * Copyright (C) 2013 NVIDIA Corporation. All rights reserved. 3 + * Copyright (C) 2013-2019 NVIDIA Corporation. All rights reserved. 4 4 * 5 5 * Aleksandr Frid <afrid@nvidia.com> 6 6 * Paul Walmsley <pwalmsley@nvidia.com> ··· 22 22 #include <linux/reset.h> 23 23 #include <linux/types.h> 24 24 25 + #include "cvb.h" 26 + 25 27 /** 26 28 * struct tegra_dfll_soc_data - SoC-specific hooks/integration for the DFLL driver 27 29 * @dev: struct device * that holds the OPP table for the DFLL 28 30 * @max_freq: maximum frequency supported on this SoC 29 31 * @cvb: CPU frequency table for this SoC 32 + * @alignment: parameters of the regulator step and offset 30 33 * @init_clock_trimmers: callback to initialize clock trimmers 31 34 * @set_clock_trimmers_high: callback to tune clock trimmers for high voltage 32 35 * @set_clock_trimmers_low: callback to tune clock trimmers for low voltage ··· 38 35 struct device *dev; 39 36 unsigned long max_freq; 40 37 const struct cvb_table *cvb; 38 + struct rail_alignment alignment; 41 39 42 40 void (*init_clock_trimmers)(void); 43 41 void (*set_clock_trimmers_high)(void);
+504 -16
drivers/clk/tegra/clk-tegra124-dfll-fcpu.c
··· 1 1 /* 2 2 * Tegra124 DFLL FCPU clock source driver 3 3 * 4 - * Copyright (C) 2012-2014 NVIDIA Corporation. All rights reserved. 4 + * Copyright (C) 2012-2019 NVIDIA Corporation. All rights reserved. 5 5 * 6 6 * Aleksandr Frid <afrid@nvidia.com> 7 7 * Paul Walmsley <pwalmsley@nvidia.com> ··· 21 21 #include <linux/err.h> 22 22 #include <linux/kernel.h> 23 23 #include <linux/init.h> 24 + #include <linux/of_device.h> 24 25 #include <linux/platform_device.h> 26 + #include <linux/regulator/consumer.h> 25 27 #include <soc/tegra/fuse.h> 26 28 27 29 #include "clk.h" 28 30 #include "clk-dfll.h" 29 31 #include "cvb.h" 30 32 33 + struct dfll_fcpu_data { 34 + const unsigned long *cpu_max_freq_table; 35 + unsigned int cpu_max_freq_table_size; 36 + const struct cvb_table *cpu_cvb_tables; 37 + unsigned int cpu_cvb_tables_size; 38 + }; 39 + 31 40 /* Maximum CPU frequency, indexed by CPU speedo id */ 32 - static const unsigned long cpu_max_freq_table[] = { 41 + static const unsigned long tegra124_cpu_max_freq_table[] = { 33 42 [0] = 2014500000UL, 34 43 [1] = 2320500000UL, 35 44 [2] = 2116500000UL, ··· 51 42 .process_id = -1, 52 43 .min_millivolts = 900, 53 44 .max_millivolts = 1260, 54 - .alignment = { 55 - .step_uv = 10000, /* 10mV */ 56 - }, 57 45 .speedo_scale = 100, 58 46 .voltage_scale = 1000, 59 47 .entries = { ··· 88 82 }, 89 83 }; 90 84 85 + static const unsigned long tegra210_cpu_max_freq_table[] = { 86 + [0] = 1912500000UL, 87 + [1] = 1912500000UL, 88 + [2] = 2218500000UL, 89 + [3] = 1785000000UL, 90 + [4] = 1632000000UL, 91 + [5] = 1912500000UL, 92 + [6] = 2014500000UL, 93 + [7] = 1734000000UL, 94 + [8] = 1683000000UL, 95 + [9] = 1555500000UL, 96 + [10] = 1504500000UL, 97 + }; 98 + 99 + #define CPU_CVB_TABLE \ 100 + .speedo_scale = 100, \ 101 + .voltage_scale = 1000, \ 102 + .entries = { \ 103 + { 204000000UL, { 1007452, -23865, 370 } }, \ 104 + { 306000000UL, { 1052709, -24875, 370 } }, \ 105 + { 408000000UL, { 1099069, -25895, 370 } }, \ 106 + { 510000000UL, { 
1146534, -26905, 370 } }, \ 107 + { 612000000UL, { 1195102, -27915, 370 } }, \ 108 + { 714000000UL, { 1244773, -28925, 370 } }, \ 109 + { 816000000UL, { 1295549, -29935, 370 } }, \ 110 + { 918000000UL, { 1347428, -30955, 370 } }, \ 111 + { 1020000000UL, { 1400411, -31965, 370 } }, \ 112 + { 1122000000UL, { 1454497, -32975, 370 } }, \ 113 + { 1224000000UL, { 1509687, -33985, 370 } }, \ 114 + { 1326000000UL, { 1565981, -35005, 370 } }, \ 115 + { 1428000000UL, { 1623379, -36015, 370 } }, \ 116 + { 1530000000UL, { 1681880, -37025, 370 } }, \ 117 + { 1632000000UL, { 1741485, -38035, 370 } }, \ 118 + { 1734000000UL, { 1802194, -39055, 370 } }, \ 119 + { 1836000000UL, { 1864006, -40065, 370 } }, \ 120 + { 1912500000UL, { 1910780, -40815, 370 } }, \ 121 + { 2014500000UL, { 1227000, 0, 0 } }, \ 122 + { 2218500000UL, { 1227000, 0, 0 } }, \ 123 + { 0UL, { 0, 0, 0 } }, \ 124 + } 125 + 126 + #define CPU_CVB_TABLE_XA \ 127 + .speedo_scale = 100, \ 128 + .voltage_scale = 1000, \ 129 + .entries = { \ 130 + { 204000000UL, { 1250024, -39785, 565 } }, \ 131 + { 306000000UL, { 1297556, -41145, 565 } }, \ 132 + { 408000000UL, { 1346718, -42505, 565 } }, \ 133 + { 510000000UL, { 1397511, -43855, 565 } }, \ 134 + { 612000000UL, { 1449933, -45215, 565 } }, \ 135 + { 714000000UL, { 1503986, -46575, 565 } }, \ 136 + { 816000000UL, { 1559669, -47935, 565 } }, \ 137 + { 918000000UL, { 1616982, -49295, 565 } }, \ 138 + { 1020000000UL, { 1675926, -50645, 565 } }, \ 139 + { 1122000000UL, { 1736500, -52005, 565 } }, \ 140 + { 1224000000UL, { 1798704, -53365, 565 } }, \ 141 + { 1326000000UL, { 1862538, -54725, 565 } }, \ 142 + { 1428000000UL, { 1928003, -56085, 565 } }, \ 143 + { 1530000000UL, { 1995097, -57435, 565 } }, \ 144 + { 1606500000UL, { 2046149, -58445, 565 } }, \ 145 + { 1632000000UL, { 2063822, -58795, 565 } }, \ 146 + { 0UL, { 0, 0, 0 } }, \ 147 + } 148 + 149 + #define CPU_CVB_TABLE_EUCM1 \ 150 + .speedo_scale = 100, \ 151 + .voltage_scale = 1000, \ 152 + .entries = { \ 153 + { 
204000000UL, { 734429, 0, 0 } }, \ 154 + { 306000000UL, { 768191, 0, 0 } }, \ 155 + { 408000000UL, { 801953, 0, 0 } }, \ 156 + { 510000000UL, { 835715, 0, 0 } }, \ 157 + { 612000000UL, { 869477, 0, 0 } }, \ 158 + { 714000000UL, { 903239, 0, 0 } }, \ 159 + { 816000000UL, { 937001, 0, 0 } }, \ 160 + { 918000000UL, { 970763, 0, 0 } }, \ 161 + { 1020000000UL, { 1004525, 0, 0 } }, \ 162 + { 1122000000UL, { 1038287, 0, 0 } }, \ 163 + { 1224000000UL, { 1072049, 0, 0 } }, \ 164 + { 1326000000UL, { 1105811, 0, 0 } }, \ 165 + { 1428000000UL, { 1130000, 0, 0 } }, \ 166 + { 1555500000UL, { 1130000, 0, 0 } }, \ 167 + { 1632000000UL, { 1170000, 0, 0 } }, \ 168 + { 1734000000UL, { 1227500, 0, 0 } }, \ 169 + { 0UL, { 0, 0, 0 } }, \ 170 + } 171 + 172 + #define CPU_CVB_TABLE_EUCM2 \ 173 + .speedo_scale = 100, \ 174 + .voltage_scale = 1000, \ 175 + .entries = { \ 176 + { 204000000UL, { 742283, 0, 0 } }, \ 177 + { 306000000UL, { 776249, 0, 0 } }, \ 178 + { 408000000UL, { 810215, 0, 0 } }, \ 179 + { 510000000UL, { 844181, 0, 0 } }, \ 180 + { 612000000UL, { 878147, 0, 0 } }, \ 181 + { 714000000UL, { 912113, 0, 0 } }, \ 182 + { 816000000UL, { 946079, 0, 0 } }, \ 183 + { 918000000UL, { 980045, 0, 0 } }, \ 184 + { 1020000000UL, { 1014011, 0, 0 } }, \ 185 + { 1122000000UL, { 1047977, 0, 0 } }, \ 186 + { 1224000000UL, { 1081943, 0, 0 } }, \ 187 + { 1326000000UL, { 1090000, 0, 0 } }, \ 188 + { 1479000000UL, { 1090000, 0, 0 } }, \ 189 + { 1555500000UL, { 1162000, 0, 0 } }, \ 190 + { 1683000000UL, { 1195000, 0, 0 } }, \ 191 + { 0UL, { 0, 0, 0 } }, \ 192 + } 193 + 194 + #define CPU_CVB_TABLE_EUCM2_JOINT_RAIL \ 195 + .speedo_scale = 100, \ 196 + .voltage_scale = 1000, \ 197 + .entries = { \ 198 + { 204000000UL, { 742283, 0, 0 } }, \ 199 + { 306000000UL, { 776249, 0, 0 } }, \ 200 + { 408000000UL, { 810215, 0, 0 } }, \ 201 + { 510000000UL, { 844181, 0, 0 } }, \ 202 + { 612000000UL, { 878147, 0, 0 } }, \ 203 + { 714000000UL, { 912113, 0, 0 } }, \ 204 + { 816000000UL, { 946079, 0, 0 } }, \ 205 + { 
+ 		918000000UL, { 980045, 0, 0 } }, \
+ 	{ 1020000000UL, { 1014011, 0, 0 } }, \
+ 	{ 1122000000UL, { 1047977, 0, 0 } }, \
+ 	{ 1224000000UL, { 1081943, 0, 0 } }, \
+ 	{ 1326000000UL, { 1090000, 0, 0 } }, \
+ 	{ 1479000000UL, { 1090000, 0, 0 } }, \
+ 	{ 1504500000UL, { 1120000, 0, 0 } }, \
+ 	{ 0UL, { 0, 0, 0 } }, \
+ 	}
+
+ #define CPU_CVB_TABLE_ODN \
+ 	.speedo_scale = 100, \
+ 	.voltage_scale = 1000, \
+ 	.entries = { \
+ 	{ 204000000UL, { 721094, 0, 0 } }, \
+ 	{ 306000000UL, { 754040, 0, 0 } }, \
+ 	{ 408000000UL, { 786986, 0, 0 } }, \
+ 	{ 510000000UL, { 819932, 0, 0 } }, \
+ 	{ 612000000UL, { 852878, 0, 0 } }, \
+ 	{ 714000000UL, { 885824, 0, 0 } }, \
+ 	{ 816000000UL, { 918770, 0, 0 } }, \
+ 	{ 918000000UL, { 915716, 0, 0 } }, \
+ 	{ 1020000000UL, { 984662, 0, 0 } }, \
+ 	{ 1122000000UL, { 1017608, 0, 0 } }, \
+ 	{ 1224000000UL, { 1050554, 0, 0 } }, \
+ 	{ 1326000000UL, { 1083500, 0, 0 } }, \
+ 	{ 1428000000UL, { 1116446, 0, 0 } }, \
+ 	{ 1581000000UL, { 1130000, 0, 0 } }, \
+ 	{ 1683000000UL, { 1168000, 0, 0 } }, \
+ 	{ 1785000000UL, { 1227500, 0, 0 } }, \
+ 	{ 0UL, { 0, 0, 0 } }, \
+ 	}
+
+ static struct cvb_table tegra210_cpu_cvb_tables[] = {
+ 	{
+ 		.speedo_id = 10,
+ 		.process_id = 0,
+ 		.min_millivolts = 840,
+ 		.max_millivolts = 1120,
+ 		CPU_CVB_TABLE_EUCM2_JOINT_RAIL,
+ 		.cpu_dfll_data = {
+ 			.tune0_low = 0xffead0ff,
+ 			.tune0_high = 0xffead0ff,
+ 			.tune1 = 0x20091d9,
+ 			.tune_high_min_millivolts = 864,
+ 		}
+ 	},
+ 	{
+ 		.speedo_id = 10,
+ 		.process_id = 1,
+ 		.min_millivolts = 840,
+ 		.max_millivolts = 1120,
+ 		CPU_CVB_TABLE_EUCM2_JOINT_RAIL,
+ 		.cpu_dfll_data = {
+ 			.tune0_low = 0xffead0ff,
+ 			.tune0_high = 0xffead0ff,
+ 			.tune1 = 0x20091d9,
+ 			.tune_high_min_millivolts = 864,
+ 		}
+ 	},
+ 	{
+ 		.speedo_id = 9,
+ 		.process_id = 0,
+ 		.min_millivolts = 900,
+ 		.max_millivolts = 1162,
+ 		CPU_CVB_TABLE_EUCM2,
+ 		.cpu_dfll_data = {
+ 			.tune0_low = 0xffead0ff,
+ 			.tune0_high = 0xffead0ff,
+ 			.tune1 = 0x20091d9,
+ 		}
+ 	},
+ 	{
+ 		.speedo_id = 9,
+ 		.process_id = 1,
+ 		.min_millivolts = 900,
+ 		.max_millivolts = 1162,
+ 		CPU_CVB_TABLE_EUCM2,
+ 		.cpu_dfll_data = {
+ 			.tune0_low = 0xffead0ff,
+ 			.tune0_high = 0xffead0ff,
+ 			.tune1 = 0x20091d9,
+ 		}
+ 	},
+ 	{
+ 		.speedo_id = 8,
+ 		.process_id = 0,
+ 		.min_millivolts = 900,
+ 		.max_millivolts = 1195,
+ 		CPU_CVB_TABLE_EUCM2,
+ 		.cpu_dfll_data = {
+ 			.tune0_low = 0xffead0ff,
+ 			.tune0_high = 0xffead0ff,
+ 			.tune1 = 0x20091d9,
+ 		}
+ 	},
+ 	{
+ 		.speedo_id = 8,
+ 		.process_id = 1,
+ 		.min_millivolts = 900,
+ 		.max_millivolts = 1195,
+ 		CPU_CVB_TABLE_EUCM2,
+ 		.cpu_dfll_data = {
+ 			.tune0_low = 0xffead0ff,
+ 			.tune0_high = 0xffead0ff,
+ 			.tune1 = 0x20091d9,
+ 		}
+ 	},
+ 	{
+ 		.speedo_id = 7,
+ 		.process_id = 0,
+ 		.min_millivolts = 841,
+ 		.max_millivolts = 1227,
+ 		CPU_CVB_TABLE_EUCM1,
+ 		.cpu_dfll_data = {
+ 			.tune0_low = 0xffead0ff,
+ 			.tune0_high = 0xffead0ff,
+ 			.tune1 = 0x20091d9,
+ 			.tune_high_min_millivolts = 864,
+ 		}
+ 	},
+ 	{
+ 		.speedo_id = 7,
+ 		.process_id = 1,
+ 		.min_millivolts = 841,
+ 		.max_millivolts = 1227,
+ 		CPU_CVB_TABLE_EUCM1,
+ 		.cpu_dfll_data = {
+ 			.tune0_low = 0xffead0ff,
+ 			.tune0_high = 0xffead0ff,
+ 			.tune1 = 0x20091d9,
+ 			.tune_high_min_millivolts = 864,
+ 		}
+ 	},
+ 	{
+ 		.speedo_id = 6,
+ 		.process_id = 0,
+ 		.min_millivolts = 870,
+ 		.max_millivolts = 1150,
+ 		CPU_CVB_TABLE,
+ 		.cpu_dfll_data = {
+ 			.tune0_low = 0xffead0ff,
+ 			.tune1 = 0x20091d9,
+ 		}
+ 	},
+ 	{
+ 		.speedo_id = 6,
+ 		.process_id = 1,
+ 		.min_millivolts = 870,
+ 		.max_millivolts = 1150,
+ 		CPU_CVB_TABLE,
+ 		.cpu_dfll_data = {
+ 			.tune0_low = 0xffead0ff,
+ 			.tune1 = 0x25501d0,
+ 		}
+ 	},
+ 	{
+ 		.speedo_id = 5,
+ 		.process_id = 0,
+ 		.min_millivolts = 818,
+ 		.max_millivolts = 1227,
+ 		CPU_CVB_TABLE,
+ 		.cpu_dfll_data = {
+ 			.tune0_low = 0xffead0ff,
+ 			.tune0_high = 0xffead0ff,
+ 			.tune1 = 0x20091d9,
+ 			.tune_high_min_millivolts = 864,
+ 		}
+ 	},
+ 	{
+ 		.speedo_id = 5,
+ 		.process_id = 1,
+ 		.min_millivolts = 818,
+ 		.max_millivolts = 1227,
+ 		CPU_CVB_TABLE,
+ 		.cpu_dfll_data = {
+ 			.tune0_low = 0xffead0ff,
+ 			.tune0_high = 0xffead0ff,
+ 			.tune1 = 0x25501d0,
+ 			.tune_high_min_millivolts = 864,
+ 		}
+ 	},
+ 	{
+ 		.speedo_id = 4,
+ 		.process_id = -1,
+ 		.min_millivolts = 918,
+ 		.max_millivolts = 1113,
+ 		CPU_CVB_TABLE_XA,
+ 		.cpu_dfll_data = {
+ 			.tune0_low = 0xffead0ff,
+ 			.tune1 = 0x17711BD,
+ 		}
+ 	},
+ 	{
+ 		.speedo_id = 3,
+ 		.process_id = 0,
+ 		.min_millivolts = 825,
+ 		.max_millivolts = 1227,
+ 		CPU_CVB_TABLE_ODN,
+ 		.cpu_dfll_data = {
+ 			.tune0_low = 0xffead0ff,
+ 			.tune0_high = 0xffead0ff,
+ 			.tune1 = 0x20091d9,
+ 			.tune_high_min_millivolts = 864,
+ 		}
+ 	},
+ 	{
+ 		.speedo_id = 3,
+ 		.process_id = 1,
+ 		.min_millivolts = 825,
+ 		.max_millivolts = 1227,
+ 		CPU_CVB_TABLE_ODN,
+ 		.cpu_dfll_data = {
+ 			.tune0_low = 0xffead0ff,
+ 			.tune0_high = 0xffead0ff,
+ 			.tune1 = 0x25501d0,
+ 			.tune_high_min_millivolts = 864,
+ 		}
+ 	},
+ 	{
+ 		.speedo_id = 2,
+ 		.process_id = 0,
+ 		.min_millivolts = 870,
+ 		.max_millivolts = 1227,
+ 		CPU_CVB_TABLE,
+ 		.cpu_dfll_data = {
+ 			.tune0_low = 0xffead0ff,
+ 			.tune1 = 0x20091d9,
+ 		}
+ 	},
+ 	{
+ 		.speedo_id = 2,
+ 		.process_id = 1,
+ 		.min_millivolts = 870,
+ 		.max_millivolts = 1227,
+ 		CPU_CVB_TABLE,
+ 		.cpu_dfll_data = {
+ 			.tune0_low = 0xffead0ff,
+ 			.tune1 = 0x25501d0,
+ 		}
+ 	},
+ 	{
+ 		.speedo_id = 1,
+ 		.process_id = 0,
+ 		.min_millivolts = 837,
+ 		.max_millivolts = 1227,
+ 		CPU_CVB_TABLE,
+ 		.cpu_dfll_data = {
+ 			.tune0_low = 0xffead0ff,
+ 			.tune0_high = 0xffead0ff,
+ 			.tune1 = 0x20091d9,
+ 			.tune_high_min_millivolts = 864,
+ 		}
+ 	},
+ 	{
+ 		.speedo_id = 1,
+ 		.process_id = 1,
+ 		.min_millivolts = 837,
+ 		.max_millivolts = 1227,
+ 		CPU_CVB_TABLE,
+ 		.cpu_dfll_data = {
+ 			.tune0_low = 0xffead0ff,
+ 			.tune0_high = 0xffead0ff,
+ 			.tune1 = 0x25501d0,
+ 			.tune_high_min_millivolts = 864,
+ 		}
+ 	},
+ 	{
+ 		.speedo_id = 0,
+ 		.process_id = 0,
+ 		.min_millivolts = 850,
+ 		.max_millivolts = 1170,
+ 		CPU_CVB_TABLE,
+ 		.cpu_dfll_data = {
+ 			.tune0_low = 0xffead0ff,
+ 			.tune0_high = 0xffead0ff,
+ 			.tune1 = 0x20091d9,
+ 			.tune_high_min_millivolts = 864,
+ 		}
+ 	},
+ 	{
+ 		.speedo_id = 0,
+ 		.process_id = 1,
+ 		.min_millivolts = 850,
+ 		.max_millivolts = 1170,
+ 		CPU_CVB_TABLE,
+ 		.cpu_dfll_data = {
+ 			.tune0_low = 0xffead0ff,
+ 			.tune0_high = 0xffead0ff,
+ 			.tune1 = 0x25501d0,
+ 			.tune_high_min_millivolts = 864,
+ 		}
+ 	},
+ };
+
+ static const struct dfll_fcpu_data tegra124_dfll_fcpu_data = {
+ 	.cpu_max_freq_table = tegra124_cpu_max_freq_table,
+ 	.cpu_max_freq_table_size = ARRAY_SIZE(tegra124_cpu_max_freq_table),
+ 	.cpu_cvb_tables = tegra124_cpu_cvb_tables,
+ 	.cpu_cvb_tables_size = ARRAY_SIZE(tegra124_cpu_cvb_tables)
+ };
+
+ static const struct dfll_fcpu_data tegra210_dfll_fcpu_data = {
+ 	.cpu_max_freq_table = tegra210_cpu_max_freq_table,
+ 	.cpu_max_freq_table_size = ARRAY_SIZE(tegra210_cpu_max_freq_table),
+ 	.cpu_cvb_tables = tegra210_cpu_cvb_tables,
+ 	.cpu_cvb_tables_size = ARRAY_SIZE(tegra210_cpu_cvb_tables),
+ };
+
+ static const struct of_device_id tegra124_dfll_fcpu_of_match[] = {
+ 	{
+ 		.compatible = "nvidia,tegra124-dfll",
+ 		.data = &tegra124_dfll_fcpu_data,
+ 	},
+ 	{
+ 		.compatible = "nvidia,tegra210-dfll",
+ 		.data = &tegra210_dfll_fcpu_data
+ 	},
+ 	{ },
+ };
+
+ static void get_alignment_from_dt(struct device *dev,
+ 				  struct rail_alignment *align)
+ {
+ 	if (of_property_read_u32(dev->of_node,
+ 				 "nvidia,pwm-voltage-step-microvolts",
+ 				 &align->step_uv))
+ 		align->step_uv = 0;
+
+ 	if (of_property_read_u32(dev->of_node,
+ 				 "nvidia,pwm-min-microvolts",
+ 				 &align->offset_uv))
+ 		align->offset_uv = 0;
+ }
+
+ static int get_alignment_from_regulator(struct device *dev,
+ 					struct rail_alignment *align)
+ {
+ 	struct regulator *reg = devm_regulator_get(dev, "vdd-cpu");
+
+ 	if (IS_ERR(reg))
+ 		return PTR_ERR(reg);
+
+ 	align->offset_uv = regulator_list_voltage(reg, 0);
+ 	align->step_uv = regulator_get_linear_step(reg);
+
+ 	devm_regulator_put(reg);
+
+ 	return 0;
+ }
+
  static int tegra124_dfll_fcpu_probe(struct platform_device *pdev)
  {
  	int process_id, speedo_id, speedo_value, err;
  	struct tegra_dfll_soc_data *soc;
+ 	const struct dfll_fcpu_data *fcpu_data;
+ 	struct rail_alignment align;
+
+ 	fcpu_data = of_device_get_match_data(&pdev->dev);
+ 	if (!fcpu_data)
+ 		return -ENODEV;

  	process_id = tegra_sku_info.cpu_process_id;
  	speedo_id = tegra_sku_info.cpu_speedo_id;
  	speedo_value = tegra_sku_info.cpu_speedo_value;

- 	if (speedo_id >= ARRAY_SIZE(cpu_max_freq_table)) {
+ 	if (speedo_id >= fcpu_data->cpu_max_freq_table_size) {
  		dev_err(&pdev->dev, "unknown max CPU freq for speedo_id=%d\n",
  			speedo_id);
  		return -ENODEV;
···
  		return -ENODEV;
  	}

- 	soc->max_freq = cpu_max_freq_table[speedo_id];
+ 	if (of_property_read_bool(pdev->dev.of_node, "nvidia,pwm-to-pmic")) {
+ 		get_alignment_from_dt(&pdev->dev, &align);
+ 	} else {
+ 		err = get_alignment_from_regulator(&pdev->dev, &align);
+ 		if (err)
+ 			return err;
+ 	}

- 	soc->cvb = tegra_cvb_add_opp_table(soc->dev, tegra124_cpu_cvb_tables,
- 					   ARRAY_SIZE(tegra124_cpu_cvb_tables),
- 					   process_id, speedo_id, speedo_value,
- 					   soc->max_freq);
+ 	soc->max_freq = fcpu_data->cpu_max_freq_table[speedo_id];
+
+ 	soc->cvb = tegra_cvb_add_opp_table(soc->dev, fcpu_data->cpu_cvb_tables,
+ 					   fcpu_data->cpu_cvb_tables_size,
+ 					   &align, process_id, speedo_id,
+ 					   speedo_value, soc->max_freq);
+ 	soc->alignment = align;
+
  	if (IS_ERR(soc->cvb)) {
  		dev_err(&pdev->dev, "couldn't add OPP table: %ld\n",
  			PTR_ERR(soc->cvb));
···
  	return 0;
  }
-
- static const struct of_device_id tegra124_dfll_fcpu_of_match[] = {
- 	{ .compatible = "nvidia,tegra124-dfll", },
- 	{ },
- };

  static const struct dev_pm_ops tegra124_dfll_pm_ops = {
  	SET_RUNTIME_PM_OPS(tegra_dfll_runtime_suspend,
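The CVB entries above pair each frequency with a {c0, c1, c2} coefficient tuple, scaled by the table's .speedo_scale and .voltage_scale, and the new rail_alignment describes the voltage grid the PMIC can actually program. Below is a stand-alone sketch of how such an entry could be evaluated and snapped to that grid; the helper names and the round-up policy are assumptions for illustration, not the kernel's cvb.c verbatim:

```c
#include <assert.h>

/* one CVB coefficient tuple, as in the table entries above */
struct cvb_coeffs { long c0, c1, c2; };

/* Evaluate c0 + c1*speedo + c2*speedo^2 using the table's scaling
 * (the tables above use speedo_scale = 100, voltage_scale = 1000),
 * rounding up to whole millivolts. */
static long cvb_millivolts(const struct cvb_coeffs *c, long speedo,
			   long speedo_scale, long voltage_scale)
{
	long mv;

	mv = c->c2 * speedo / speedo_scale;        /* c2 term */
	mv = (mv + c->c1) * speedo / speedo_scale; /* + c1 term */
	mv = mv + c->c0;                           /* + c0 term */

	/* round up to the next whole millivolt */
	return (mv + voltage_scale - 1) / voltage_scale;
}

/* Snap a target voltage (in uV) onto the regulator's programmable
 * grid offset_uv + n * step_uv, rounding up so we never undershoot. */
static long align_voltage_uv(long uv, long offset_uv, long step_uv)
{
	long n = (uv - offset_uv + step_uv - 1) / step_uv;

	return offset_uv + n * step_uv;
}
```

With the tables in this patch c1 and c2 are zero, so the evaluation reduces to rounding c0 up to millivolts; the alignment step is where the PWM-vs-regulator distinction introduced by this series actually matters.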
+7 -5
drivers/clk/tegra/cvb.c
···
  /*
   * Utility functions for parsing Tegra CVB voltage tables
   *
-  * Copyright (C) 2012-2014 NVIDIA Corporation.  All rights reserved.
+  * Copyright (C) 2012-2019 NVIDIA Corporation.  All rights reserved.
   *
   * This program is free software; you can redistribute it and/or modify
   * it under the terms of the GNU General Public License version 2 as
···
  }

  static int build_opp_table(struct device *dev, const struct cvb_table *table,
+ 			   struct rail_alignment *align,
  			   int speedo_value, unsigned long max_freq)
  {
- 	const struct rail_alignment *align = &table->alignment;
  	int i, ret, dfll_mv, min_mv, max_mv;

  	min_mv = round_voltage(table->min_millivolts, align, UP);
···
   */
  const struct cvb_table *
  tegra_cvb_add_opp_table(struct device *dev, const struct cvb_table *tables,
- 			size_t count, int process_id, int speedo_id,
- 			int speedo_value, unsigned long max_freq)
+ 			size_t count, struct rail_alignment *align,
+ 			int process_id, int speedo_id, int speedo_value,
+ 			unsigned long max_freq)
  {
  	size_t i;
  	int ret;
···
  		if (table->process_id != -1 && table->process_id != process_id)
  			continue;

- 		ret = build_opp_table(dev, table, speedo_value, max_freq);
+ 		ret = build_opp_table(dev, table, align, speedo_value,
+ 				      max_freq);
  		return ret ? ERR_PTR(ret) : table;
  	}
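The loop in tegra_cvb_add_opp_table() above picks the first table whose speedo_id and process_id match the chip, with -1 acting as a wildcard. A minimal userspace sketch of that selection logic (struct and function names are hypothetical):

```c
#include <assert.h>
#include <stddef.h>

/* just the two fields the matching loop looks at */
struct sel_entry { int speedo_id; int process_id; };

/* Return the index of the first matching entry, -1 if none.
 * A field value of -1 in the table matches anything. */
static int select_table(const struct sel_entry *tables, size_t count,
			int speedo_id, int process_id)
{
	size_t i;

	for (i = 0; i < count; i++) {
		const struct sel_entry *t = &tables[i];

		if (t->speedo_id != -1 && t->speedo_id != speedo_id)
			continue;
		if (t->process_id != -1 && t->process_id != process_id)
			continue;
		return (int)i; /* first match wins */
	}

	return -1; /* no table for this chip bin */
}

/* demo data mirroring the shape of tegra210_cpu_cvb_tables:
 * one wildcard-process entry, then per-process entries */
static const struct sel_entry demo_tables[] = {
	{ 4, -1 }, { 3, 0 }, { 3, 1 },
};

static int select_demo(int speedo_id, int process_id)
{
	return select_table(demo_tables,
			    sizeof(demo_tables) / sizeof(demo_tables[0]),
			    speedo_id, process_id);
}
```

Because the first match wins, more specific entries must precede wildcard ones when both could apply, which is why the kernel tables are ordered from highest to lowest speedo_id.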
+4 -3
drivers/clk/tegra/cvb.h
···
  	u32 tune0_low;
  	u32 tune0_high;
  	u32 tune1;
+ 	unsigned int tune_high_min_millivolts;
  };

  struct cvb_table {
···
  	int min_millivolts;
  	int max_millivolts;
- 	struct rail_alignment alignment;

  	int speedo_scale;
  	int voltage_scale;
···
  const struct cvb_table *
  tegra_cvb_add_opp_table(struct device *dev, const struct cvb_table *cvb_tables,
- 			size_t count, int process_id, int speedo_id,
- 			int speedo_value, unsigned long max_freq);
+ 			size_t count, struct rail_alignment *align,
+ 			int process_id, int speedo_id, int speedo_value,
+ 			unsigned long max_freq);
  void tegra_cvb_remove_opp_table(struct device *dev,
  				const struct cvb_table *table,
  				unsigned long max_freq);
+2 -2
drivers/cpufreq/Kconfig.arm
···
  	  This adds the CPUFreq driver support for Tegra20 SOCs.

  config ARM_TEGRA124_CPUFREQ
- 	tristate "Tegra124 CPUFreq support"
- 	depends on ARCH_TEGRA && CPUFREQ_DT && REGULATOR
+ 	bool "Tegra124 CPUFreq support"
+ 	depends on ARCH_TEGRA && CPUFREQ_DT
  	default y
  	help
  	  This adds the CPUFreq driver support for Tegra124 SOCs.
+1
drivers/cpufreq/cpufreq-dt-platdev.c
···
  	{ .compatible = "mediatek,mt8176", },

  	{ .compatible = "nvidia,tegra124", },
+ 	{ .compatible = "nvidia,tegra210", },

  	{ .compatible = "qcom,apq8096", },
  	{ .compatible = "qcom,msm8996", },
+4 -40
drivers/cpufreq/tegra124-cpufreq.c
···
  #include <linux/of.h>
  #include <linux/platform_device.h>
  #include <linux/pm_opp.h>
- #include <linux/regulator/consumer.h>
  #include <linux/types.h>

  struct tegra124_cpufreq_priv {
- 	struct regulator *vdd_cpu_reg;
  	struct clk *cpu_clk;
  	struct clk *pllp_clk;
  	struct clk *pllx_clk;
···
  	return ret;
  }

- static void tegra124_cpu_switch_to_pllx(struct tegra124_cpufreq_priv *priv)
- {
- 	clk_set_parent(priv->cpu_clk, priv->pllp_clk);
- 	clk_disable_unprepare(priv->dfll_clk);
- 	regulator_sync_voltage(priv->vdd_cpu_reg);
- 	clk_set_parent(priv->cpu_clk, priv->pllx_clk);
- }
-
  static int tegra124_cpufreq_probe(struct platform_device *pdev)
  {
  	struct tegra124_cpufreq_priv *priv;
···
  	if (!np)
  		return -ENODEV;

- 	priv->vdd_cpu_reg = regulator_get(cpu_dev, "vdd-cpu");
- 	if (IS_ERR(priv->vdd_cpu_reg)) {
- 		ret = PTR_ERR(priv->vdd_cpu_reg);
- 		goto out_put_np;
- 	}
-
  	priv->cpu_clk = of_clk_get_by_name(np, "cpu_g");
  	if (IS_ERR(priv->cpu_clk)) {
  		ret = PTR_ERR(priv->cpu_clk);
- 		goto out_put_vdd_cpu_reg;
+ 		goto out_put_np;
  	}

  	priv->dfll_clk = of_clk_get_by_name(np, "dfll");
···
  		platform_device_register_full(&cpufreq_dt_devinfo);
  	if (IS_ERR(priv->cpufreq_dt_pdev)) {
  		ret = PTR_ERR(priv->cpufreq_dt_pdev);
- 		goto out_switch_to_pllx;
+ 		goto out_put_pllp_clk;
  	}

  	platform_set_drvdata(pdev, priv);

  	return 0;

- out_switch_to_pllx:
- 	tegra124_cpu_switch_to_pllx(priv);
  out_put_pllp_clk:
  	clk_put(priv->pllp_clk);
  out_put_pllx_clk:
···
  	clk_put(priv->dfll_clk);
  out_put_cpu_clk:
  	clk_put(priv->cpu_clk);
- out_put_vdd_cpu_reg:
- 	regulator_put(priv->vdd_cpu_reg);
  out_put_np:
  	of_node_put(np);

  	return ret;
  }

- static int tegra124_cpufreq_remove(struct platform_device *pdev)
- {
- 	struct tegra124_cpufreq_priv *priv = platform_get_drvdata(pdev);
-
- 	platform_device_unregister(priv->cpufreq_dt_pdev);
- 	tegra124_cpu_switch_to_pllx(priv);
-
- 	clk_put(priv->pllp_clk);
- 	clk_put(priv->pllx_clk);
- 	clk_put(priv->dfll_clk);
- 	clk_put(priv->cpu_clk);
- 	regulator_put(priv->vdd_cpu_reg);
-
- 	return 0;
- }
-
  static struct platform_driver tegra124_cpufreq_platdrv = {
  	.driver.name	= "cpufreq-tegra124",
  	.probe		= tegra124_cpufreq_probe,
- 	.remove		= tegra124_cpufreq_remove,
  };

  static int __init tegra_cpufreq_init(void)
···
  	int ret;
  	struct platform_device *pdev;

- 	if (!of_machine_is_compatible("nvidia,tegra124"))
+ 	if (!(of_machine_is_compatible("nvidia,tegra124") ||
+ 		of_machine_is_compatible("nvidia,tegra210")))
  		return -ENODEV;

  	/*
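The probe error path above is the standard kernel goto-unwind ladder: each acquired resource gets a label, and a failure at step N jumps to the label that releases steps N-1 down to 1 in reverse order, falling through subsequent labels. A toy model of that idiom, with hypothetical stage names, that records which cleanups actually run:

```c
#include <string.h>

/* Simulate a probe that acquires cpu_clk, then dfll_clk, then pllx_clk.
 * fail_at selects which acquisition "fails" (0 = all succeed); the
 * returned string lists the cleanups executed, in order. */
static const char *probe_sim(int fail_at)
{
	static char log[32];

	log[0] = '\0';

	if (fail_at == 1)
		goto out;      /* cpu_clk failed: nothing to undo */
	if (fail_at == 2)
		goto put_cpu;  /* dfll_clk failed: undo cpu_clk only */
	if (fail_at == 3)
		goto put_dfll; /* pllx_clk failed: undo dfll, then cpu */

	return "ok";           /* success: no cleanup runs */

put_dfll:
	strcat(log, "dfll ");  /* labels fall through: deepest first */
put_cpu:
	strcat(log, "cpu ");
out:
	return log;
}
```

The fall-through between labels is what makes the ladder work: jumping to the deepest reached label releases everything acquired before the failure, and nothing after it, which is exactly the invariant the removed out_switch_to_pllx label was breaking once the regulator handling moved out of this driver.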
+38
drivers/firmware/imx/misc.c
···
  	u16 resource;
  } __packed;

+ struct imx_sc_msg_req_cpu_start {
+ 	struct imx_sc_rpc_msg hdr;
+ 	u32 address_hi;
+ 	u32 address_lo;
+ 	u16 resource;
+ 	u8 enable;
+ } __packed;
+
  struct imx_sc_msg_req_misc_get_ctrl {
  	struct imx_sc_rpc_msg hdr;
  	u32 ctrl;
···
  	return 0;
  }
  EXPORT_SYMBOL(imx_sc_misc_get_control);
+
+ /*
+  * This function starts/stops a CPU identified by @resource
+  *
+  * @param[in]     ipc         IPC handle
+  * @param[in]     resource    resource the control is associated with
+  * @param[in]     enable      true for start, false for stop
+  * @param[in]     phys_addr   initial instruction address to be executed
+  *
+  * @return Returns 0 for success and < 0 for errors.
+  */
+ int imx_sc_pm_cpu_start(struct imx_sc_ipc *ipc, u32 resource,
+ 			bool enable, u64 phys_addr)
+ {
+ 	struct imx_sc_msg_req_cpu_start msg;
+ 	struct imx_sc_rpc_msg *hdr = &msg.hdr;
+
+ 	hdr->ver = IMX_SC_RPC_VERSION;
+ 	hdr->svc = IMX_SC_RPC_SVC_PM;
+ 	hdr->func = IMX_SC_PM_FUNC_CPU_START;
+ 	hdr->size = 4;
+
+ 	msg.address_hi = phys_addr >> 32;
+ 	msg.address_lo = phys_addr;
+ 	msg.resource = resource;
+ 	msg.enable = enable;
+
+ 	return imx_scu_call_rpc(ipc, &msg, true);
+ }
+ EXPORT_SYMBOL(imx_sc_pm_cpu_start);
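imx_sc_pm_cpu_start() above carries the 64-bit entry point across the 32-bit SCU message interface as two u32 words. A small stand-alone sketch of that split and its inverse; the helper names are hypothetical, but the shift/truncate arithmetic matches the assignments in the message fill above:

```c
#include <stdint.h>

/* two message words holding one 64-bit physical address */
struct addr_pair { uint32_t hi; uint32_t lo; };

/* split: hi gets bits 63..32, lo gets bits 31..0
 * (the plain assignment to lo truncates, as msg.address_lo does) */
static struct addr_pair split_addr(uint64_t phys_addr)
{
	struct addr_pair p;

	p.hi = phys_addr >> 32;
	p.lo = (uint32_t)phys_addr;

	return p;
}

/* recombine on the receiving side */
static uint64_t join_addr(struct addr_pair p)
{
	return ((uint64_t)p.hi << 32) | p.lo;
}
```

The cast to uint64_t before the shift in join_addr() matters: shifting a 32-bit value left by 32 is undefined behavior in C, so the widening must happen first.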
+1
drivers/firmware/imx/scu-pd.c
···
  static const struct of_device_id imx_sc_pd_match[] = {
  	{ .compatible = "fsl,imx8qxp-scu-pd", &imx8qxp_scu_pd},
+ 	{ .compatible = "fsl,scu-pd", &imx8qxp_scu_pd},
  	{ /* sentinel */ }
  };
+11
drivers/firmware/raspberrypi.c
···
  	return 0;
  }

+ static void rpi_firmware_shutdown(struct platform_device *pdev)
+ {
+ 	struct rpi_firmware *fw = platform_get_drvdata(pdev);
+
+ 	if (!fw)
+ 		return;
+
+ 	rpi_firmware_property(fw, RPI_FIRMWARE_NOTIFY_REBOOT, NULL, 0);
+ }
+
  static int rpi_firmware_remove(struct platform_device *pdev)
  {
  	struct rpi_firmware *fw = platform_get_drvdata(pdev);
···
  		.of_match_table = rpi_firmware_of_match,
  	},
  	.probe		= rpi_firmware_probe,
+ 	.shutdown	= rpi_firmware_shutdown,
  	.remove		= rpi_firmware_remove,
  };
  module_platform_driver(rpi_firmware_driver);
+3
drivers/firmware/tegra/Makefile
···
  tegra-bpmp-y			= bpmp.o
+ tegra-bpmp-$(CONFIG_ARCH_TEGRA_210_SOC)	+= bpmp-tegra210.o
+ tegra-bpmp-$(CONFIG_ARCH_TEGRA_186_SOC)	+= bpmp-tegra186.o
+ tegra-bpmp-$(CONFIG_ARCH_TEGRA_194_SOC)	+= bpmp-tegra186.o
  tegra-bpmp-$(CONFIG_DEBUG_FS)	+= bpmp-debugfs.o
  obj-$(CONFIG_TEGRA_BPMP)	+= tegra-bpmp.o
  obj-$(CONFIG_TEGRA_IVC)		+= ivc.o
+34
drivers/firmware/tegra/bpmp-private.h
···
+ /* SPDX-License-Identifier: GPL-2.0 */
+ /*
+  * Copyright (c) 2018, NVIDIA CORPORATION.
+  */
+
+ #ifndef __FIRMWARE_TEGRA_BPMP_PRIVATE_H
+ #define __FIRMWARE_TEGRA_BPMP_PRIVATE_H
+
+ #include <soc/tegra/bpmp.h>
+
+ struct tegra_bpmp_ops {
+ 	int (*init)(struct tegra_bpmp *bpmp);
+ 	void (*deinit)(struct tegra_bpmp *bpmp);
+ 	bool (*is_response_ready)(struct tegra_bpmp_channel *channel);
+ 	bool (*is_request_ready)(struct tegra_bpmp_channel *channel);
+ 	int (*ack_response)(struct tegra_bpmp_channel *channel);
+ 	int (*ack_request)(struct tegra_bpmp_channel *channel);
+ 	bool (*is_response_channel_free)(struct tegra_bpmp_channel *channel);
+ 	bool (*is_request_channel_free)(struct tegra_bpmp_channel *channel);
+ 	int (*post_response)(struct tegra_bpmp_channel *channel);
+ 	int (*post_request)(struct tegra_bpmp_channel *channel);
+ 	int (*ring_doorbell)(struct tegra_bpmp *bpmp);
+ 	int (*resume)(struct tegra_bpmp *bpmp);
+ };
+
+ #if IS_ENABLED(CONFIG_ARCH_TEGRA_186_SOC) || \
+     IS_ENABLED(CONFIG_ARCH_TEGRA_194_SOC)
+ extern const struct tegra_bpmp_ops tegra186_bpmp_ops;
+ #endif
+ #if IS_ENABLED(CONFIG_ARCH_TEGRA_210_SOC)
+ extern const struct tegra_bpmp_ops tegra210_bpmp_ops;
+ #endif
+
+ #endif
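The tegra_bpmp_ops table above is the pivot of this series: the generic BPMP core only ever talks to the hardware through these function pointers, and the match table picks the Tegra186 (mailbox + IVC) or Tegra210 (semaphore + LIC) implementation at probe time. A minimal stand-alone sketch of that ops-table dispatch pattern; everything here is illustrative, not kernel API:

```c
/* the "ops" contract: back ends fill in behavior, the core calls it */
struct demo_ops {
	int (*ring_doorbell)(int channel);
};

/* two back ends with different notification costs (toy values) */
static int mbox_doorbell(int channel) { return channel * 2; }
static int irq_doorbell(int channel)  { return channel; }

static const struct demo_ops mbox_backend = {
	.ring_doorbell = mbox_doorbell,
};

static const struct demo_ops irq_backend = {
	.ring_doorbell = irq_doorbell,
};

/* generic "core" code: no knowledge of which back end it drives */
static int core_ring_doorbell(const struct demo_ops *ops, int channel)
{
	return ops->ring_doorbell(channel);
}
```

Keeping the ops structs const lets the compiler place them in read-only data, the same reason the kernel tables above are `const struct tegra_bpmp_ops`.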
+305
drivers/firmware/tegra/bpmp-tegra186.c
···
+ // SPDX-License-Identifier: GPL-2.0
+ /*
+  * Copyright (c) 2018, NVIDIA CORPORATION.
+  */
+
+ #include <linux/genalloc.h>
+ #include <linux/mailbox_client.h>
+ #include <linux/platform_device.h>
+
+ #include <soc/tegra/bpmp.h>
+ #include <soc/tegra/bpmp-abi.h>
+ #include <soc/tegra/ivc.h>
+
+ #include "bpmp-private.h"
+
+ struct tegra186_bpmp {
+ 	struct tegra_bpmp *parent;
+
+ 	struct {
+ 		struct gen_pool *pool;
+ 		dma_addr_t phys;
+ 		void *virt;
+ 	} tx, rx;
+
+ 	struct {
+ 		struct mbox_client client;
+ 		struct mbox_chan *channel;
+ 	} mbox;
+ };
+
+ static inline struct tegra_bpmp *
+ mbox_client_to_bpmp(struct mbox_client *client)
+ {
+ 	struct tegra186_bpmp *priv;
+
+ 	priv = container_of(client, struct tegra186_bpmp, mbox.client);
+
+ 	return priv->parent;
+ }
+
+ static bool tegra186_bpmp_is_message_ready(struct tegra_bpmp_channel *channel)
+ {
+ 	void *frame;
+
+ 	frame = tegra_ivc_read_get_next_frame(channel->ivc);
+ 	if (IS_ERR(frame)) {
+ 		channel->ib = NULL;
+ 		return false;
+ 	}
+
+ 	channel->ib = frame;
+
+ 	return true;
+ }
+
+ static bool tegra186_bpmp_is_channel_free(struct tegra_bpmp_channel *channel)
+ {
+ 	void *frame;
+
+ 	frame = tegra_ivc_write_get_next_frame(channel->ivc);
+ 	if (IS_ERR(frame)) {
+ 		channel->ob = NULL;
+ 		return false;
+ 	}
+
+ 	channel->ob = frame;
+
+ 	return true;
+ }
+
+ static int tegra186_bpmp_ack_message(struct tegra_bpmp_channel *channel)
+ {
+ 	return tegra_ivc_read_advance(channel->ivc);
+ }
+
+ static int tegra186_bpmp_post_message(struct tegra_bpmp_channel *channel)
+ {
+ 	return tegra_ivc_write_advance(channel->ivc);
+ }
+
+ static int tegra186_bpmp_ring_doorbell(struct tegra_bpmp *bpmp)
+ {
+ 	struct tegra186_bpmp *priv = bpmp->priv;
+ 	int err;
+
+ 	err = mbox_send_message(priv->mbox.channel, NULL);
+ 	if (err < 0)
+ 		return err;
+
+ 	mbox_client_txdone(priv->mbox.channel, 0);
+
+ 	return 0;
+ }
+
+ static void tegra186_bpmp_ivc_notify(struct tegra_ivc *ivc, void *data)
+ {
+ 	struct tegra_bpmp *bpmp = data;
+ 	struct tegra186_bpmp *priv = bpmp->priv;
+
+ 	if (WARN_ON(priv->mbox.channel == NULL))
+ 		return;
+
+ 	tegra186_bpmp_ring_doorbell(bpmp);
+ }
+
+ static int tegra186_bpmp_channel_init(struct tegra_bpmp_channel *channel,
+ 				      struct tegra_bpmp *bpmp,
+ 				      unsigned int index)
+ {
+ 	struct tegra186_bpmp *priv = bpmp->priv;
+ 	size_t message_size, queue_size;
+ 	unsigned int offset;
+ 	int err;
+
+ 	channel->ivc = devm_kzalloc(bpmp->dev, sizeof(*channel->ivc),
+ 				    GFP_KERNEL);
+ 	if (!channel->ivc)
+ 		return -ENOMEM;
+
+ 	message_size = tegra_ivc_align(MSG_MIN_SZ);
+ 	queue_size = tegra_ivc_total_queue_size(message_size);
+ 	offset = queue_size * index;
+
+ 	err = tegra_ivc_init(channel->ivc, NULL,
+ 			     priv->rx.virt + offset, priv->rx.phys + offset,
+ 			     priv->tx.virt + offset, priv->tx.phys + offset,
+ 			     1, message_size, tegra186_bpmp_ivc_notify,
+ 			     bpmp);
+ 	if (err < 0) {
+ 		dev_err(bpmp->dev, "failed to setup IVC for channel %u: %d\n",
+ 			index, err);
+ 		return err;
+ 	}
+
+ 	init_completion(&channel->completion);
+ 	channel->bpmp = bpmp;
+
+ 	return 0;
+ }
+
+ static void tegra186_bpmp_channel_reset(struct tegra_bpmp_channel *channel)
+ {
+ 	/* reset the channel state */
+ 	tegra_ivc_reset(channel->ivc);
+
+ 	/* sync the channel state with BPMP */
+ 	while (tegra_ivc_notified(channel->ivc))
+ 		;
+ }
+
+ static void tegra186_bpmp_channel_cleanup(struct tegra_bpmp_channel *channel)
+ {
+ 	tegra_ivc_cleanup(channel->ivc);
+ }
+
+ static void mbox_handle_rx(struct mbox_client *client, void *data)
+ {
+ 	struct tegra_bpmp *bpmp = mbox_client_to_bpmp(client);
+
+ 	tegra_bpmp_handle_rx(bpmp);
+ }
+
+ static int tegra186_bpmp_init(struct tegra_bpmp *bpmp)
+ {
+ 	struct tegra186_bpmp *priv;
+ 	unsigned int i;
+ 	int err;
+
+ 	priv = devm_kzalloc(bpmp->dev, sizeof(*priv), GFP_KERNEL);
+ 	if (!priv)
+ 		return -ENOMEM;
+
+ 	bpmp->priv = priv;
+ 	priv->parent = bpmp;
+
+ 	priv->tx.pool = of_gen_pool_get(bpmp->dev->of_node, "shmem", 0);
+ 	if (!priv->tx.pool) {
+ 		dev_err(bpmp->dev, "TX shmem pool not found\n");
+ 		return -ENOMEM;
+ 	}
+
+ 	priv->tx.virt = gen_pool_dma_alloc(priv->tx.pool, 4096, &priv->tx.phys);
+ 	if (!priv->tx.virt) {
+ 		dev_err(bpmp->dev, "failed to allocate from TX pool\n");
+ 		return -ENOMEM;
+ 	}
+
+ 	priv->rx.pool = of_gen_pool_get(bpmp->dev->of_node, "shmem", 1);
+ 	if (!priv->rx.pool) {
+ 		dev_err(bpmp->dev, "RX shmem pool not found\n");
+ 		err = -ENOMEM;
+ 		goto free_tx;
+ 	}
+
+ 	priv->rx.virt = gen_pool_dma_alloc(priv->rx.pool, 4096, &priv->rx.phys);
+ 	if (!priv->rx.virt) {
+ 		dev_err(bpmp->dev, "failed to allocate from RX pool\n");
+ 		err = -ENOMEM;
+ 		goto free_tx;
+ 	}
+
+ 	err = tegra186_bpmp_channel_init(bpmp->tx_channel, bpmp,
+ 					 bpmp->soc->channels.cpu_tx.offset);
+ 	if (err < 0)
+ 		goto free_rx;
+
+ 	err = tegra186_bpmp_channel_init(bpmp->rx_channel, bpmp,
+ 					 bpmp->soc->channels.cpu_rx.offset);
+ 	if (err < 0)
+ 		goto cleanup_tx_channel;
+
+ 	for (i = 0; i < bpmp->threaded.count; i++) {
+ 		unsigned int index = bpmp->soc->channels.thread.offset + i;
+
+ 		err = tegra186_bpmp_channel_init(&bpmp->threaded_channels[i],
+ 						 bpmp, index);
+ 		if (err < 0)
+ 			goto cleanup_channels;
+ 	}
+
+ 	/* mbox registration */
+ 	priv->mbox.client.dev = bpmp->dev;
+ 	priv->mbox.client.rx_callback = mbox_handle_rx;
+ 	priv->mbox.client.tx_block = false;
+ 	priv->mbox.client.knows_txdone = false;
+
+ 	priv->mbox.channel = mbox_request_channel(&priv->mbox.client, 0);
+ 	if (IS_ERR(priv->mbox.channel)) {
+ 		err = PTR_ERR(priv->mbox.channel);
+ 		dev_err(bpmp->dev, "failed to get HSP mailbox: %d\n", err);
+ 		goto cleanup_channels;
+ 	}
+
+ 	tegra186_bpmp_channel_reset(bpmp->tx_channel);
+ 	tegra186_bpmp_channel_reset(bpmp->rx_channel);
+
+ 	for (i = 0; i < bpmp->threaded.count; i++)
+ 		tegra186_bpmp_channel_reset(&bpmp->threaded_channels[i]);
+
+ 	return 0;
+
+ cleanup_channels:
+ 	for (i = 0; i < bpmp->threaded.count; i++) {
+ 		if (!bpmp->threaded_channels[i].bpmp)
+ 			continue;
+
+ 		tegra186_bpmp_channel_cleanup(&bpmp->threaded_channels[i]);
+ 	}
+
+ 	tegra186_bpmp_channel_cleanup(bpmp->rx_channel);
+ cleanup_tx_channel:
+ 	tegra186_bpmp_channel_cleanup(bpmp->tx_channel);
+ free_rx:
+ 	gen_pool_free(priv->rx.pool, (unsigned long)priv->rx.virt, 4096);
+ free_tx:
+ 	gen_pool_free(priv->tx.pool, (unsigned long)priv->tx.virt, 4096);
+
+ 	return err;
+ }
+
+ static void tegra186_bpmp_deinit(struct tegra_bpmp *bpmp)
+ {
+ 	struct tegra186_bpmp *priv = bpmp->priv;
+ 	unsigned int i;
+
+ 	mbox_free_channel(priv->mbox.channel);
+
+ 	for (i = 0; i < bpmp->threaded.count; i++)
+ 		tegra186_bpmp_channel_cleanup(&bpmp->threaded_channels[i]);
+
+ 	tegra186_bpmp_channel_cleanup(bpmp->rx_channel);
+ 	tegra186_bpmp_channel_cleanup(bpmp->tx_channel);
+
+ 	gen_pool_free(priv->rx.pool, (unsigned long)priv->rx.virt, 4096);
+ 	gen_pool_free(priv->tx.pool, (unsigned long)priv->tx.virt, 4096);
+ }
+
+ static int tegra186_bpmp_resume(struct tegra_bpmp *bpmp)
+ {
+ 	unsigned int i;
+
+ 	/* reset message channels */
+ 	tegra186_bpmp_channel_reset(bpmp->tx_channel);
+ 	tegra186_bpmp_channel_reset(bpmp->rx_channel);
+
+ 	for (i = 0; i < bpmp->threaded.count; i++)
+ 		tegra186_bpmp_channel_reset(&bpmp->threaded_channels[i]);
+
+ 	return 0;
+ }
+
+ const struct tegra_bpmp_ops tegra186_bpmp_ops = {
+ 	.init = tegra186_bpmp_init,
+ 	.deinit = tegra186_bpmp_deinit,
+ 	.is_response_ready = tegra186_bpmp_is_message_ready,
+ 	.is_request_ready = tegra186_bpmp_is_message_ready,
+ 	.ack_response = tegra186_bpmp_ack_message,
+ 	.ack_request = tegra186_bpmp_ack_message,
+ 	.is_response_channel_free = tegra186_bpmp_is_channel_free,
+ 	.is_request_channel_free = tegra186_bpmp_is_channel_free,
+ 	.post_response = tegra186_bpmp_post_message,
+ 	.post_request = tegra186_bpmp_post_message,
+ 	.ring_doorbell = tegra186_bpmp_ring_doorbell,
+ 	.resume = tegra186_bpmp_resume,
+ };
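tegra186_bpmp_channel_init() above places each channel's IVC queue at offset queue_size * index within the shared-memory blocks carved out of the gen_pool. A sketch of that layout arithmetic with assumed alignment and header sizes; the real values come from tegra_ivc_align() and tegra_ivc_total_queue_size(), so treat the constants here as placeholders:

```c
#include <stddef.h>

/* assumed IVC frame alignment for this sketch */
#define IVC_ALIGN 64u

/* round a size up to the IVC alignment (power-of-two trick) */
static size_t ivc_align(size_t size)
{
	return (size + IVC_ALIGN - 1) & ~(size_t)(IVC_ALIGN - 1);
}

/* assumed model: one queue = an aligned header plus one aligned frame
 * (the kernel's tegra_ivc_total_queue_size() encapsulates the real
 * formula, including the frame count) */
static size_t queue_size(size_t frame_size, size_t header_size)
{
	return ivc_align(header_size) + ivc_align(frame_size);
}

/* channel N's queue starts N queue-sizes into the shared block,
 * mirroring "offset = queue_size * index" in the init code above */
static size_t channel_offset(size_t frame_size, size_t header_size,
			     unsigned int index)
{
	return queue_size(frame_size, header_size) * index;
}
```

Because every channel uses the same queue size, the offset computation is a plain multiplication and both sides (CPU and BPMP firmware) can derive the layout independently from the channel index.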
+243
drivers/firmware/tegra/bpmp-tegra210.c
···
+ // SPDX-License-Identifier: GPL-2.0
+ /*
+  * Copyright (c) 2018, NVIDIA CORPORATION.
+  */
+
+ #include <linux/interrupt.h>
+ #include <linux/irq.h>
+ #include <linux/io.h>
+ #include <linux/of.h>
+ #include <linux/platform_device.h>
+
+ #include <soc/tegra/bpmp.h>
+
+ #include "bpmp-private.h"
+
+ #define TRIGGER_OFFSET		0x000
+ #define RESULT_OFFSET(id)	(0xc00 + id * 4)
+ #define TRIGGER_ID_SHIFT	16
+ #define TRIGGER_CMD_GET		4
+
+ #define STA_OFFSET		0
+ #define SET_OFFSET		4
+ #define CLR_OFFSET		8
+
+ #define CH_MASK(ch)	(0x3 << ((ch) * 2))
+ #define SL_SIGL(ch)	(0x0 << ((ch) * 2))
+ #define SL_QUED(ch)	(0x1 << ((ch) * 2))
+ #define MA_FREE(ch)	(0x2 << ((ch) * 2))
+ #define MA_ACKD(ch)	(0x3 << ((ch) * 2))
+
+ struct tegra210_bpmp {
+ 	void __iomem *atomics;
+ 	void __iomem *arb_sema;
+ 	struct irq_data *tx_irq_data;
+ };
+
+ static u32 bpmp_channel_status(struct tegra_bpmp *bpmp, unsigned int index)
+ {
+ 	struct tegra210_bpmp *priv = bpmp->priv;
+
+ 	return __raw_readl(priv->arb_sema + STA_OFFSET) & CH_MASK(index);
+ }
+
+ static bool tegra210_bpmp_is_response_ready(struct tegra_bpmp_channel *channel)
+ {
+ 	unsigned int index = channel->index;
+
+ 	return bpmp_channel_status(channel->bpmp, index) == MA_ACKD(index);
+ }
+
+ static bool tegra210_bpmp_is_request_ready(struct tegra_bpmp_channel *channel)
+ {
+ 	unsigned int index = channel->index;
+
+ 	return bpmp_channel_status(channel->bpmp, index) == SL_SIGL(index);
+ }
+
+ static bool
+ tegra210_bpmp_is_request_channel_free(struct tegra_bpmp_channel *channel)
+ {
+ 	unsigned int index = channel->index;
+
+ 	return bpmp_channel_status(channel->bpmp, index) == MA_FREE(index);
+ }
+
+ static bool
+ tegra210_bpmp_is_response_channel_free(struct tegra_bpmp_channel *channel)
+ {
+ 	unsigned int index = channel->index;
+
+ 	return bpmp_channel_status(channel->bpmp, index) == SL_QUED(index);
+ }
+
+ static int tegra210_bpmp_post_request(struct tegra_bpmp_channel *channel)
+ {
+ 	struct tegra210_bpmp *priv = channel->bpmp->priv;
+
+ 	__raw_writel(CH_MASK(channel->index), priv->arb_sema + CLR_OFFSET);
+
+ 	return 0;
+ }
+
+ static int tegra210_bpmp_post_response(struct tegra_bpmp_channel *channel)
+ {
+ 	struct tegra210_bpmp *priv = channel->bpmp->priv;
+
+ 	__raw_writel(MA_ACKD(channel->index), priv->arb_sema + SET_OFFSET);
+
+ 	return 0;
+ }
+
+ static int tegra210_bpmp_ack_response(struct tegra_bpmp_channel *channel)
+ {
+ 	struct tegra210_bpmp *priv = channel->bpmp->priv;
+
+ 	__raw_writel(MA_ACKD(channel->index) ^ MA_FREE(channel->index),
+ 		     priv->arb_sema + CLR_OFFSET);
+
+ 	return 0;
+ }
+
+ static int tegra210_bpmp_ack_request(struct tegra_bpmp_channel *channel)
+ {
+ 	struct tegra210_bpmp *priv = channel->bpmp->priv;
+
+ 	__raw_writel(SL_QUED(channel->index), priv->arb_sema + SET_OFFSET);
+
+ 	return 0;
+ }
+
+ static int tegra210_bpmp_ring_doorbell(struct tegra_bpmp *bpmp)
+ {
+ 	struct tegra210_bpmp *priv = bpmp->priv;
+ 	struct irq_data *irq_data = priv->tx_irq_data;
+
+ 	/*
+ 	 * Tegra Legacy Interrupt Controller (LIC) is used to notify BPMP of
+ 	 * available messages
+ 	 */
+ 	if (irq_data->chip->irq_retrigger)
+ 		return irq_data->chip->irq_retrigger(irq_data);
+
+ 	return -EINVAL;
+ }
+
+ static irqreturn_t rx_irq(int irq, void *data)
+ {
+ 	struct tegra_bpmp *bpmp = data;
+
+ 	tegra_bpmp_handle_rx(bpmp);
+
+ 	return IRQ_HANDLED;
+ }
+
+ static int tegra210_bpmp_channel_init(struct tegra_bpmp_channel *channel,
+ 				      struct tegra_bpmp *bpmp,
+ 				      unsigned int index)
+ {
+ 	struct tegra210_bpmp *priv = bpmp->priv;
+ 	u32 address;
+ 	void *p;
+
+ 	/* Retrieve channel base address from BPMP */
+ 	writel(index << TRIGGER_ID_SHIFT | TRIGGER_CMD_GET,
+ 	       priv->atomics + TRIGGER_OFFSET);
+ 	address = readl(priv->atomics + RESULT_OFFSET(index));
+
+ 	p = devm_ioremap(bpmp->dev, address, 0x80);
+ 	if (!p)
+ 		return -ENOMEM;
+
+ 	channel->ib = p;
+ 	channel->ob = p;
+ 	channel->index = index;
+ 	init_completion(&channel->completion);
+ 	channel->bpmp = bpmp;
+
+ 	return 0;
+ }
+
+ static int tegra210_bpmp_init(struct tegra_bpmp *bpmp)
+ {
+ 	struct platform_device *pdev = to_platform_device(bpmp->dev);
+ 	struct tegra210_bpmp *priv;
+ 	struct resource *res;
+ 	unsigned int i;
+ 	int err;
+
+ 	priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL);
+ 	if (!priv)
+ 		return -ENOMEM;
+
+ 	bpmp->priv = priv;
+
+ 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ 	priv->atomics = devm_ioremap_resource(&pdev->dev, res);
+ 	if (IS_ERR(priv->atomics))
+ 		return PTR_ERR(priv->atomics);
+
+ 	res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
+ 	priv->arb_sema = devm_ioremap_resource(&pdev->dev, res);
+ 	if (IS_ERR(priv->arb_sema))
+ 		return PTR_ERR(priv->arb_sema);
+
+ 	err = tegra210_bpmp_channel_init(bpmp->tx_channel, bpmp,
+ 					 bpmp->soc->channels.cpu_tx.offset);
+ 	if (err < 0)
+ 		return err;
+
+ 	err = tegra210_bpmp_channel_init(bpmp->rx_channel, bpmp,
+ 					 bpmp->soc->channels.cpu_rx.offset);
+ 	if (err < 0)
+ 		return err;
+
+ 	for (i = 0; i < bpmp->threaded.count; i++) {
+ 		unsigned int index = bpmp->soc->channels.thread.offset + i;
+
+ 		err = tegra210_bpmp_channel_init(&bpmp->threaded_channels[i],
+ 						 bpmp, index);
+ 		if (err < 0)
+ 			return err;
+ 	}
+
+ 	err = platform_get_irq_byname(pdev, "tx");
+ 	if (err < 0) {
+ 		dev_err(&pdev->dev, "failed to get TX IRQ: %d\n", err);
+ 		return err;
+ 	}
+
+ 	priv->tx_irq_data = irq_get_irq_data(err);
+ 	if (!priv->tx_irq_data) {
+ 		dev_err(&pdev->dev, "failed to get IRQ data for TX IRQ\n");
+ 		return err;
+ 	}
+
+ 	err = platform_get_irq_byname(pdev, "rx");
+ 	if (err < 0) {
+ 		dev_err(&pdev->dev, "failed to get rx IRQ: %d\n", err);
+ 		return err;
+ 	}
+
+ 	err = devm_request_irq(&pdev->dev, err, rx_irq,
+ 			       IRQF_NO_SUSPEND, dev_name(&pdev->dev), bpmp);
+ 	if (err < 0) {
+ 		dev_err(&pdev->dev, "failed to request IRQ: %d\n", err);
+ 		return err;
+ 	}
+
+ 	return 0;
+ }
+
+ const struct tegra_bpmp_ops tegra210_bpmp_ops = {
+ 	.init = tegra210_bpmp_init,
+ 	.is_response_ready = tegra210_bpmp_is_response_ready,
+ 	.is_request_ready = tegra210_bpmp_is_request_ready,
+ 	.ack_response = tegra210_bpmp_ack_response,
+ 	.ack_request = tegra210_bpmp_ack_request,
+ 	.is_response_channel_free = tegra210_bpmp_is_response_channel_free,
+ 	.is_request_channel_free = tegra210_bpmp_is_request_channel_free,
+ 	.post_response = tegra210_bpmp_post_response,
+ 	.post_request = tegra210_bpmp_post_request,
+ 	.ring_doorbell = tegra210_bpmp_ring_doorbell,
+ };
+154 -230
drivers/firmware/tegra/bpmp.c
··· 26 26 #include <soc/tegra/bpmp-abi.h> 27 27 #include <soc/tegra/ivc.h> 28 28 29 + #include "bpmp-private.h" 30 + 29 31 #define MSG_ACK BIT(0) 30 32 #define MSG_RING BIT(1) 31 33 #define TAG_SZ 32 ··· 36 34 mbox_client_to_bpmp(struct mbox_client *client) 37 35 { 38 36 return container_of(client, struct tegra_bpmp, mbox.client); 37 + } 38 + 39 + static inline const struct tegra_bpmp_ops * 40 + channel_to_ops(struct tegra_bpmp_channel *channel) 41 + { 42 + struct tegra_bpmp *bpmp = channel->bpmp; 43 + 44 + return bpmp->soc->ops; 39 45 } 40 46 41 47 struct tegra_bpmp *tegra_bpmp_get(struct device *dev) ··· 106 96 (msg->rx.size == 0 || msg->rx.data); 107 97 } 108 98 109 - static bool tegra_bpmp_master_acked(struct tegra_bpmp_channel *channel) 99 + static bool tegra_bpmp_is_response_ready(struct tegra_bpmp_channel *channel) 110 100 { 111 - void *frame; 101 + const struct tegra_bpmp_ops *ops = channel_to_ops(channel); 112 102 113 - frame = tegra_ivc_read_get_next_frame(channel->ivc); 114 - if (IS_ERR(frame)) { 115 - channel->ib = NULL; 116 - return false; 117 - } 118 - 119 - channel->ib = frame; 120 - 121 - return true; 103 + return ops->is_response_ready(channel); 122 104 } 123 105 124 - static int tegra_bpmp_wait_ack(struct tegra_bpmp_channel *channel) 106 + static bool tegra_bpmp_is_request_ready(struct tegra_bpmp_channel *channel) 107 + { 108 + const struct tegra_bpmp_ops *ops = channel_to_ops(channel); 109 + 110 + return ops->is_request_ready(channel); 111 + } 112 + 113 + static int tegra_bpmp_wait_response(struct tegra_bpmp_channel *channel) 125 114 { 126 115 unsigned long timeout = channel->bpmp->soc->channels.cpu_tx.timeout; 127 116 ktime_t end; ··· 128 119 end = ktime_add_us(ktime_get(), timeout); 129 120 130 121 do { 131 - if (tegra_bpmp_master_acked(channel)) 122 + if (tegra_bpmp_is_response_ready(channel)) 132 123 return 0; 133 124 } while (ktime_before(ktime_get(), end)); 134 125 135 126 return -ETIMEDOUT; 136 127 } 137 128 138 - static bool 
tegra_bpmp_master_free(struct tegra_bpmp_channel *channel) 129 + static int tegra_bpmp_ack_response(struct tegra_bpmp_channel *channel) 139 130 { 140 - void *frame; 131 + const struct tegra_bpmp_ops *ops = channel_to_ops(channel); 141 132 142 - frame = tegra_ivc_write_get_next_frame(channel->ivc); 143 - if (IS_ERR(frame)) { 144 - channel->ob = NULL; 145 - return false; 146 - } 147 - 148 - channel->ob = frame; 149 - 150 - return true; 133 + return ops->ack_response(channel); 151 134 } 152 135 153 - static int tegra_bpmp_wait_master_free(struct tegra_bpmp_channel *channel) 136 + static int tegra_bpmp_ack_request(struct tegra_bpmp_channel *channel) 137 + { 138 + const struct tegra_bpmp_ops *ops = channel_to_ops(channel); 139 + 140 + return ops->ack_request(channel); 141 + } 142 + 143 + static bool 144 + tegra_bpmp_is_request_channel_free(struct tegra_bpmp_channel *channel) 145 + { 146 + const struct tegra_bpmp_ops *ops = channel_to_ops(channel); 147 + 148 + return ops->is_request_channel_free(channel); 149 + } 150 + 151 + static bool 152 + tegra_bpmp_is_response_channel_free(struct tegra_bpmp_channel *channel) 153 + { 154 + const struct tegra_bpmp_ops *ops = channel_to_ops(channel); 155 + 156 + return ops->is_response_channel_free(channel); 157 + } 158 + 159 + static int 160 + tegra_bpmp_wait_request_channel_free(struct tegra_bpmp_channel *channel) 154 161 { 155 162 unsigned long timeout = channel->bpmp->soc->channels.cpu_tx.timeout; 156 163 ktime_t start, now; ··· 174 149 start = ns_to_ktime(local_clock()); 175 150 176 151 do { 177 - if (tegra_bpmp_master_free(channel)) 152 + if (tegra_bpmp_is_request_channel_free(channel)) 178 153 return 0; 179 154 180 155 now = ns_to_ktime(local_clock()); 181 156 } while (ktime_us_delta(now, start) < timeout); 182 157 183 158 return -ETIMEDOUT; 159 + } 160 + 161 + static int tegra_bpmp_post_request(struct tegra_bpmp_channel *channel) 162 + { 163 + const struct tegra_bpmp_ops *ops = channel_to_ops(channel); 164 + 165 + return 
ops->post_request(channel); 166 + } 167 + 168 + static int tegra_bpmp_post_response(struct tegra_bpmp_channel *channel) 169 + { 170 + const struct tegra_bpmp_ops *ops = channel_to_ops(channel); 171 + 172 + return ops->post_response(channel); 173 + } 174 + 175 + static int tegra_bpmp_ring_doorbell(struct tegra_bpmp *bpmp) 176 + { 177 + return bpmp->soc->ops->ring_doorbell(bpmp); 184 178 } 185 179 186 180 static ssize_t __tegra_bpmp_channel_read(struct tegra_bpmp_channel *channel, ··· 210 166 if (data && size > 0) 211 167 memcpy(data, channel->ib->data, size); 212 168 213 - err = tegra_ivc_read_advance(channel->ivc); 169 + err = tegra_bpmp_ack_response(channel); 214 170 if (err < 0) 215 171 return err; 216 172 ··· 254 210 if (data && size > 0) 255 211 memcpy(channel->ob->data, data, size); 256 212 257 - return tegra_ivc_write_advance(channel->ivc); 213 + return tegra_bpmp_post_request(channel); 258 214 } 259 215 260 216 static struct tegra_bpmp_channel * ··· 282 238 283 239 channel = &bpmp->threaded_channels[index]; 284 240 285 - if (!tegra_bpmp_master_free(channel)) { 241 + if (!tegra_bpmp_is_request_channel_free(channel)) { 286 242 err = -EBUSY; 287 243 goto unlock; 288 244 } ··· 314 270 { 315 271 int err; 316 272 317 - err = tegra_bpmp_wait_master_free(channel); 273 + err = tegra_bpmp_wait_request_channel_free(channel); 318 274 if (err < 0) 319 275 return err; 320 276 ··· 346 302 347 303 spin_unlock(&bpmp->atomic_tx_lock); 348 304 349 - err = mbox_send_message(bpmp->mbox.channel, NULL); 305 + err = tegra_bpmp_ring_doorbell(bpmp); 350 306 if (err < 0) 351 307 return err; 352 308 353 - mbox_client_txdone(bpmp->mbox.channel, 0); 354 - 355 - err = tegra_bpmp_wait_ack(channel); 309 + err = tegra_bpmp_wait_response(channel); 356 310 if (err < 0) 357 311 return err; 358 312 ··· 377 335 if (IS_ERR(channel)) 378 336 return PTR_ERR(channel); 379 337 380 - err = mbox_send_message(bpmp->mbox.channel, NULL); 338 + err = tegra_bpmp_ring_doorbell(bpmp); 381 339 if (err < 0) 382 
340 return err; 383 - 384 - mbox_client_txdone(bpmp->mbox.channel, 0); 385 341 386 342 timeout = usecs_to_jiffies(bpmp->soc->channels.thread.timeout); 387 343 ··· 409 369 { 410 370 unsigned long flags = channel->ib->flags; 411 371 struct tegra_bpmp *bpmp = channel->bpmp; 412 - struct tegra_bpmp_mb_data *frame; 413 372 int err; 414 373 415 374 if (WARN_ON(size > MSG_DATA_MIN_SZ)) 416 375 return; 417 376 418 - err = tegra_ivc_read_advance(channel->ivc); 377 + err = tegra_bpmp_ack_request(channel); 419 378 if (WARN_ON(err < 0)) 420 379 return; 421 380 422 381 if ((flags & MSG_ACK) == 0) 423 382 return; 424 383 425 - frame = tegra_ivc_write_get_next_frame(channel->ivc); 426 - if (WARN_ON(IS_ERR(frame))) 384 + if (WARN_ON(!tegra_bpmp_is_response_channel_free(channel))) 427 385 return; 428 386 429 - frame->code = code; 387 + channel->ob->code = code; 430 388 431 389 if (data && size > 0) 432 - memcpy(frame->data, data, size); 390 + memcpy(channel->ob->data, data, size); 433 391 434 - err = tegra_ivc_write_advance(channel->ivc); 392 + err = tegra_bpmp_post_response(channel); 435 393 if (WARN_ON(err < 0)) 436 394 return; 437 395 438 396 if (flags & MSG_RING) { 439 - err = mbox_send_message(bpmp->mbox.channel, NULL); 397 + err = tegra_bpmp_ring_doorbell(bpmp); 440 398 if (WARN_ON(err < 0)) 441 399 return; 442 - 443 - mbox_client_txdone(bpmp->mbox.channel, 0); 444 400 } 445 401 } 446 402 EXPORT_SYMBOL_GPL(tegra_bpmp_mrq_return); ··· 663 627 complete(&channel->completion); 664 628 } 665 629 666 - static void tegra_bpmp_handle_rx(struct mbox_client *client, void *data) 630 + void tegra_bpmp_handle_rx(struct tegra_bpmp *bpmp) 667 631 { 668 - struct tegra_bpmp *bpmp = mbox_client_to_bpmp(client); 669 632 struct tegra_bpmp_channel *channel; 670 633 unsigned int i, count; 671 634 unsigned long *busy; ··· 673 638 count = bpmp->soc->channels.thread.count; 674 639 busy = bpmp->threaded.busy; 675 640 676 - if (tegra_bpmp_master_acked(channel)) 641 + if 
(tegra_bpmp_is_request_ready(channel)) 677 642 tegra_bpmp_handle_mrq(bpmp, channel->ib->code, channel); 678 643 679 644 spin_lock(&bpmp->lock); ··· 683 648 684 649 channel = &bpmp->threaded_channels[i]; 685 650 686 - if (tegra_bpmp_master_acked(channel)) { 651 + if (tegra_bpmp_is_response_ready(channel)) { 687 652 tegra_bpmp_channel_signal(channel); 688 653 clear_bit(i, busy); 689 654 } ··· 692 657 spin_unlock(&bpmp->lock); 693 658 } 694 659 695 - static void tegra_bpmp_ivc_notify(struct tegra_ivc *ivc, void *data) 696 - { 697 - struct tegra_bpmp *bpmp = data; 698 - int err; 699 - 700 - if (WARN_ON(bpmp->mbox.channel == NULL)) 701 - return; 702 - 703 - err = mbox_send_message(bpmp->mbox.channel, NULL); 704 - if (err < 0) 705 - return; 706 - 707 - mbox_client_txdone(bpmp->mbox.channel, 0); 708 - } 709 - 710 - static int tegra_bpmp_channel_init(struct tegra_bpmp_channel *channel, 711 - struct tegra_bpmp *bpmp, 712 - unsigned int index) 713 - { 714 - size_t message_size, queue_size; 715 - unsigned int offset; 716 - int err; 717 - 718 - channel->ivc = devm_kzalloc(bpmp->dev, sizeof(*channel->ivc), 719 - GFP_KERNEL); 720 - if (!channel->ivc) 721 - return -ENOMEM; 722 - 723 - message_size = tegra_ivc_align(MSG_MIN_SZ); 724 - queue_size = tegra_ivc_total_queue_size(message_size); 725 - offset = queue_size * index; 726 - 727 - err = tegra_ivc_init(channel->ivc, NULL, 728 - bpmp->rx.virt + offset, bpmp->rx.phys + offset, 729 - bpmp->tx.virt + offset, bpmp->tx.phys + offset, 730 - 1, message_size, tegra_bpmp_ivc_notify, 731 - bpmp); 732 - if (err < 0) { 733 - dev_err(bpmp->dev, "failed to setup IVC for channel %u: %d\n", 734 - index, err); 735 - return err; 736 - } 737 - 738 - init_completion(&channel->completion); 739 - channel->bpmp = bpmp; 740 - 741 - return 0; 742 - } 743 - 744 - static void tegra_bpmp_channel_reset(struct tegra_bpmp_channel *channel) 745 - { 746 - /* reset the channel state */ 747 - tegra_ivc_reset(channel->ivc); 748 - 749 - /* sync the channel state 
with BPMP */ 750 - while (tegra_ivc_notified(channel->ivc)) 751 - ; 752 - } 753 - 754 - static void tegra_bpmp_channel_cleanup(struct tegra_bpmp_channel *channel) 755 - { 756 - tegra_ivc_cleanup(channel->ivc); 757 - } 758 - 759 660 static int tegra_bpmp_probe(struct platform_device *pdev) 760 661 { 761 662 struct tegra_bpmp *bpmp; 762 - unsigned int i; 763 663 char tag[TAG_SZ]; 764 664 size_t size; 765 665 int err; ··· 706 736 bpmp->soc = of_device_get_match_data(&pdev->dev); 707 737 bpmp->dev = &pdev->dev; 708 738 709 - bpmp->tx.pool = of_gen_pool_get(pdev->dev.of_node, "shmem", 0); 710 - if (!bpmp->tx.pool) { 711 - dev_err(&pdev->dev, "TX shmem pool not found\n"); 712 - return -ENOMEM; 713 - } 714 - 715 - bpmp->tx.virt = gen_pool_dma_alloc(bpmp->tx.pool, 4096, &bpmp->tx.phys); 716 - if (!bpmp->tx.virt) { 717 - dev_err(&pdev->dev, "failed to allocate from TX pool\n"); 718 - return -ENOMEM; 719 - } 720 - 721 - bpmp->rx.pool = of_gen_pool_get(pdev->dev.of_node, "shmem", 1); 722 - if (!bpmp->rx.pool) { 723 - dev_err(&pdev->dev, "RX shmem pool not found\n"); 724 - err = -ENOMEM; 725 - goto free_tx; 726 - } 727 - 728 - bpmp->rx.virt = gen_pool_dma_alloc(bpmp->rx.pool, 4096, &bpmp->rx.phys); 729 - if (!bpmp->rx.virt) { 730 - dev_err(&pdev->dev, "failed to allocate from RX pool\n"); 731 - err = -ENOMEM; 732 - goto free_tx; 733 - } 734 - 735 739 INIT_LIST_HEAD(&bpmp->mrqs); 736 740 spin_lock_init(&bpmp->lock); 737 741 ··· 715 771 size = BITS_TO_LONGS(bpmp->threaded.count) * sizeof(long); 716 772 717 773 bpmp->threaded.allocated = devm_kzalloc(&pdev->dev, size, GFP_KERNEL); 718 - if (!bpmp->threaded.allocated) { 719 - err = -ENOMEM; 720 - goto free_rx; 721 - } 774 + if (!bpmp->threaded.allocated) 775 + return -ENOMEM; 722 776 723 777 bpmp->threaded.busy = devm_kzalloc(&pdev->dev, size, GFP_KERNEL); 724 - if (!bpmp->threaded.busy) { 725 - err = -ENOMEM; 726 - goto free_rx; 727 - } 778 + if (!bpmp->threaded.busy) 779 + return -ENOMEM; 728 780 729 781 
spin_lock_init(&bpmp->atomic_tx_lock); 730 782 bpmp->tx_channel = devm_kzalloc(&pdev->dev, sizeof(*bpmp->tx_channel), 731 783 GFP_KERNEL); 732 - if (!bpmp->tx_channel) { 733 - err = -ENOMEM; 734 - goto free_rx; 735 - } 784 + if (!bpmp->tx_channel) 785 + return -ENOMEM; 736 786 737 787 bpmp->rx_channel = devm_kzalloc(&pdev->dev, sizeof(*bpmp->rx_channel), 738 788 GFP_KERNEL); 739 - if (!bpmp->rx_channel) { 740 - err = -ENOMEM; 741 - goto free_rx; 742 - } 789 + if (!bpmp->rx_channel) 790 + return -ENOMEM; 743 791 744 792 bpmp->threaded_channels = devm_kcalloc(&pdev->dev, bpmp->threaded.count, 745 793 sizeof(*bpmp->threaded_channels), 746 794 GFP_KERNEL); 747 - if (!bpmp->threaded_channels) { 748 - err = -ENOMEM; 749 - goto free_rx; 750 - } 795 + if (!bpmp->threaded_channels) 796 + return -ENOMEM; 751 797 752 - err = tegra_bpmp_channel_init(bpmp->tx_channel, bpmp, 753 - bpmp->soc->channels.cpu_tx.offset); 798 + err = bpmp->soc->ops->init(bpmp); 754 799 if (err < 0) 755 - goto free_rx; 756 - 757 - err = tegra_bpmp_channel_init(bpmp->rx_channel, bpmp, 758 - bpmp->soc->channels.cpu_rx.offset); 759 - if (err < 0) 760 - goto cleanup_tx_channel; 761 - 762 - for (i = 0; i < bpmp->threaded.count; i++) { 763 - err = tegra_bpmp_channel_init( 764 - &bpmp->threaded_channels[i], bpmp, 765 - bpmp->soc->channels.thread.offset + i); 766 - if (err < 0) 767 - goto cleanup_threaded_channels; 768 - } 769 - 770 - /* mbox registration */ 771 - bpmp->mbox.client.dev = &pdev->dev; 772 - bpmp->mbox.client.rx_callback = tegra_bpmp_handle_rx; 773 - bpmp->mbox.client.tx_block = false; 774 - bpmp->mbox.client.knows_txdone = false; 775 - 776 - bpmp->mbox.channel = mbox_request_channel(&bpmp->mbox.client, 0); 777 - if (IS_ERR(bpmp->mbox.channel)) { 778 - err = PTR_ERR(bpmp->mbox.channel); 779 - dev_err(&pdev->dev, "failed to get HSP mailbox: %d\n", err); 780 - goto cleanup_threaded_channels; 781 - } 782 - 783 - /* reset message channels */ 784 - tegra_bpmp_channel_reset(bpmp->tx_channel); 785 - 
tegra_bpmp_channel_reset(bpmp->rx_channel); 786 - for (i = 0; i < bpmp->threaded.count; i++) 787 - tegra_bpmp_channel_reset(&bpmp->threaded_channels[i]); 800 + return err; 788 801 789 802 err = tegra_bpmp_request_mrq(bpmp, MRQ_PING, 790 803 tegra_bpmp_mrq_handle_ping, bpmp); 791 804 if (err < 0) 792 - goto free_mbox; 805 + goto deinit; 793 806 794 807 err = tegra_bpmp_ping(bpmp); 795 808 if (err < 0) { ··· 768 867 if (err < 0) 769 868 goto free_mrq; 770 869 771 - err = tegra_bpmp_init_clocks(bpmp); 772 - if (err < 0) 773 - goto free_mrq; 870 + if (of_find_property(pdev->dev.of_node, "#clock-cells", NULL)) { 871 + err = tegra_bpmp_init_clocks(bpmp); 872 + if (err < 0) 873 + goto free_mrq; 874 + } 774 875 775 - err = tegra_bpmp_init_resets(bpmp); 776 - if (err < 0) 777 - goto free_mrq; 876 + if (of_find_property(pdev->dev.of_node, "#reset-cells", NULL)) { 877 + err = tegra_bpmp_init_resets(bpmp); 878 + if (err < 0) 879 + goto free_mrq; 880 + } 778 881 779 - err = tegra_bpmp_init_powergates(bpmp); 780 - if (err < 0) 781 - goto free_mrq; 882 + if (of_find_property(pdev->dev.of_node, "#power-domain-cells", NULL)) { 883 + err = tegra_bpmp_init_powergates(bpmp); 884 + if (err < 0) 885 + goto free_mrq; 886 + } 782 887 783 888 err = tegra_bpmp_init_debugfs(bpmp); 784 889 if (err < 0) ··· 794 887 795 888 free_mrq: 796 889 tegra_bpmp_free_mrq(bpmp, MRQ_PING, bpmp); 797 - free_mbox: 798 - mbox_free_channel(bpmp->mbox.channel); 799 - cleanup_threaded_channels: 800 - for (i = 0; i < bpmp->threaded.count; i++) { 801 - if (bpmp->threaded_channels[i].bpmp) 802 - tegra_bpmp_channel_cleanup(&bpmp->threaded_channels[i]); 803 - } 890 + deinit: 891 + if (bpmp->soc->ops->deinit) 892 + bpmp->soc->ops->deinit(bpmp); 804 893 805 - tegra_bpmp_channel_cleanup(bpmp->rx_channel); 806 - cleanup_tx_channel: 807 - tegra_bpmp_channel_cleanup(bpmp->tx_channel); 808 - free_rx: 809 - gen_pool_free(bpmp->rx.pool, (unsigned long)bpmp->rx.virt, 4096); 810 - free_tx: 811 - gen_pool_free(bpmp->tx.pool, 
(unsigned long)bpmp->tx.virt, 4096); 812 894 return err; 813 895 } 814 896 815 897 static int __maybe_unused tegra_bpmp_resume(struct device *dev) 816 898 { 817 899 struct tegra_bpmp *bpmp = dev_get_drvdata(dev); 818 - unsigned int i; 819 900 820 - /* reset message channels */ 821 - tegra_bpmp_channel_reset(bpmp->tx_channel); 822 - tegra_bpmp_channel_reset(bpmp->rx_channel); 823 - 824 - for (i = 0; i < bpmp->threaded.count; i++) 825 - tegra_bpmp_channel_reset(&bpmp->threaded_channels[i]); 826 - 827 - return 0; 901 + if (bpmp->soc->ops->resume) 902 + return bpmp->soc->ops->resume(bpmp); 903 + else 904 + return 0; 828 905 } 829 906 830 907 static SIMPLE_DEV_PM_OPS(tegra_bpmp_pm_ops, NULL, tegra_bpmp_resume); 831 908 909 + #if IS_ENABLED(CONFIG_ARCH_TEGRA_186_SOC) || \ 910 + IS_ENABLED(CONFIG_ARCH_TEGRA_194_SOC) 832 911 static const struct tegra_bpmp_soc tegra186_soc = { 833 912 .channels = { 834 913 .cpu_tx = { ··· 831 938 .timeout = 0, 832 939 }, 833 940 }, 941 + .ops = &tegra186_bpmp_ops, 834 942 .num_resets = 193, 835 943 }; 944 + #endif 945 + 946 + #if IS_ENABLED(CONFIG_ARCH_TEGRA_210_SOC) 947 + static const struct tegra_bpmp_soc tegra210_soc = { 948 + .channels = { 949 + .cpu_tx = { 950 + .offset = 0, 951 + .count = 1, 952 + .timeout = 60 * USEC_PER_SEC, 953 + }, 954 + .thread = { 955 + .offset = 4, 956 + .count = 1, 957 + .timeout = 600 * USEC_PER_SEC, 958 + }, 959 + .cpu_rx = { 960 + .offset = 8, 961 + .count = 1, 962 + .timeout = 0, 963 + }, 964 + }, 965 + .ops = &tegra210_bpmp_ops, 966 + }; 967 + #endif 836 968 837 969 static const struct of_device_id tegra_bpmp_match[] = { 970 + #if IS_ENABLED(CONFIG_ARCH_TEGRA_186_SOC) || \ 971 + IS_ENABLED(CONFIG_ARCH_TEGRA_194_SOC) 838 972 { .compatible = "nvidia,tegra186-bpmp", .data = &tegra186_soc }, 973 + #endif 974 + #if IS_ENABLED(CONFIG_ARCH_TEGRA_210_SOC) 975 + { .compatible = "nvidia,tegra210-bpmp", .data = &tegra210_soc }, 976 + #endif 839 977 { } 840 978 }; 841 979
+2 -19
drivers/firmware/ti_sci.c
··· 146 146 return 0; 147 147 } 148 148 149 - /** 150 - * ti_sci_debug_open() - debug file open 151 - * @inode: inode pointer 152 - * @file: file pointer 153 - * 154 - * Return: result of single_open 155 - */ 156 - static int ti_sci_debug_open(struct inode *inode, struct file *file) 157 - { 158 - return single_open(file, ti_sci_debug_show, inode->i_private); 159 - } 160 - 161 - /* log file operations */ 162 - static const struct file_operations ti_sci_debug_fops = { 163 - .open = ti_sci_debug_open, 164 - .read = seq_read, 165 - .llseek = seq_lseek, 166 - .release = single_release, 167 - }; 149 + /* Provide the log file operations interface */ 150 + DEFINE_SHOW_ATTRIBUTE(ti_sci_debug); 168 151 169 152 /** 170 153 * ti_sci_debugfs_create() - Create log debug file
+1
drivers/firmware/xilinx/Kconfig
··· 6 6 7 7 config ZYNQMP_FIRMWARE 8 8 bool "Enable Xilinx Zynq MPSoC firmware interface" 9 + select MFD_CORE 9 10 help 10 11 Firmware interface driver is used by different 11 12 drivers to communicate with the firmware for
+166
drivers/firmware/xilinx/zynqmp.c
··· 14 14 #include <linux/compiler.h> 15 15 #include <linux/device.h> 16 16 #include <linux/init.h> 17 + #include <linux/mfd/core.h> 17 18 #include <linux/module.h> 18 19 #include <linux/of.h> 19 20 #include <linux/of_platform.h> ··· 23 22 24 23 #include <linux/firmware/xlnx-zynqmp.h> 25 24 #include "zynqmp-debug.h" 25 + 26 + static const struct mfd_cell firmware_devs[] = { 27 + { 28 + .name = "zynqmp_power_controller", 29 + }, 30 + }; 26 31 27 32 /** 28 33 * zynqmp_pm_ret_code() - Convert PMU-FW error codes to Linux error codes ··· 189 182 } 190 183 ret = zynqmp_pm_invoke_fn(PM_GET_API_VERSION, 0, 0, 0, 0, ret_payload); 191 184 *version = ret_payload[1]; 185 + 186 + return ret; 187 + } 188 + 189 + /** 190 + * zynqmp_pm_get_chipid - Get silicon ID registers 191 + * @idcode: IDCODE register 192 + * @version: version register 193 + * 194 + * Return: Returns the status of the operation and the idcode and version 195 + * registers in @idcode and @version. 196 + */ 197 + static int zynqmp_pm_get_chipid(u32 *idcode, u32 *version) 198 + { 199 + u32 ret_payload[PAYLOAD_ARG_CNT]; 200 + int ret; 201 + 202 + if (!idcode || !version) 203 + return -EINVAL; 204 + 205 + ret = zynqmp_pm_invoke_fn(PM_GET_CHIPID, 0, 0, 0, 0, ret_payload); 206 + *idcode = ret_payload[1]; 207 + *version = ret_payload[2]; 192 208 193 209 return ret; 194 210 } ··· 499 469 arg1, arg2, out); 500 470 } 501 471 472 + /** 473 + * zynqmp_pm_reset_assert - Request setting of reset (1 - assert, 0 - release) 474 + * @reset: Reset to be configured 475 + * @assert_flag: Flag stating should reset be asserted (1) or 476 + * released (0) 477 + * 478 + * Return: Returns status, either success or error+reason 479 + */ 480 + static int zynqmp_pm_reset_assert(const enum zynqmp_pm_reset reset, 481 + const enum zynqmp_pm_reset_action assert_flag) 482 + { 483 + return zynqmp_pm_invoke_fn(PM_RESET_ASSERT, reset, assert_flag, 484 + 0, 0, NULL); 485 + } 486 + 487 + /** 488 + * zynqmp_pm_reset_get_status - Get status of the 
reset 489 + * @reset: Reset whose status should be returned 490 + * @status: Returned status 491 + * 492 + * Return: Returns status, either success or error+reason 493 + */ 494 + static int zynqmp_pm_reset_get_status(const enum zynqmp_pm_reset reset, 495 + u32 *status) 496 + { 497 + u32 ret_payload[PAYLOAD_ARG_CNT]; 498 + int ret; 499 + 500 + if (!status) 501 + return -EINVAL; 502 + 503 + ret = zynqmp_pm_invoke_fn(PM_RESET_GET_STATUS, reset, 0, 504 + 0, 0, ret_payload); 505 + *status = ret_payload[1]; 506 + 507 + return ret; 508 + } 509 + 510 + /** 511 + * zynqmp_pm_init_finalize() - PM call to inform firmware that the caller 512 + * master has initialized its own power management 513 + * 514 + * This API function is to be used for notify the power management controller 515 + * about the completed power management initialization. 516 + * 517 + * Return: Returns status, either success or error+reason 518 + */ 519 + static int zynqmp_pm_init_finalize(void) 520 + { 521 + return zynqmp_pm_invoke_fn(PM_PM_INIT_FINALIZE, 0, 0, 0, 0, NULL); 522 + } 523 + 524 + /** 525 + * zynqmp_pm_set_suspend_mode() - Set system suspend mode 526 + * @mode: Mode to set for system suspend 527 + * 528 + * This API function is used to set mode of system suspend. 529 + * 530 + * Return: Returns status, either success or error+reason 531 + */ 532 + static int zynqmp_pm_set_suspend_mode(u32 mode) 533 + { 534 + return zynqmp_pm_invoke_fn(PM_SET_SUSPEND_MODE, mode, 0, 0, 0, NULL); 535 + } 536 + 537 + /** 538 + * zynqmp_pm_request_node() - Request a node with specific capabilities 539 + * @node: Node ID of the slave 540 + * @capabilities: Requested capabilities of the slave 541 + * @qos: Quality of service (not supported) 542 + * @ack: Flag to specify whether acknowledge is requested 543 + * 544 + * This function is used by master to request particular node from firmware. 545 + * Every master must request node before using it. 
546 + * 547 + * Return: Returns status, either success or error+reason 548 + */ 549 + static int zynqmp_pm_request_node(const u32 node, const u32 capabilities, 550 + const u32 qos, 551 + const enum zynqmp_pm_request_ack ack) 552 + { 553 + return zynqmp_pm_invoke_fn(PM_REQUEST_NODE, node, capabilities, 554 + qos, ack, NULL); 555 + } 556 + 557 + /** 558 + * zynqmp_pm_release_node() - Release a node 559 + * @node: Node ID of the slave 560 + * 561 + * This function is used by master to inform firmware that master 562 + * has released node. Once released, master must not use that node 563 + * without re-request. 564 + * 565 + * Return: Returns status, either success or error+reason 566 + */ 567 + static int zynqmp_pm_release_node(const u32 node) 568 + { 569 + return zynqmp_pm_invoke_fn(PM_RELEASE_NODE, node, 0, 0, 0, NULL); 570 + } 571 + 572 + /** 573 + * zynqmp_pm_set_requirement() - PM call to set requirement for PM slaves 574 + * @node: Node ID of the slave 575 + * @capabilities: Requested capabilities of the slave 576 + * @qos: Quality of service (not supported) 577 + * @ack: Flag to specify whether acknowledge is requested 578 + * 579 + * This API function is to be used for slaves a PU already has requested 580 + * to change its capabilities. 
581 + * 582 + * Return: Returns status, either success or error+reason 583 + */ 584 + static int zynqmp_pm_set_requirement(const u32 node, const u32 capabilities, 585 + const u32 qos, 586 + const enum zynqmp_pm_request_ack ack) 587 + { 588 + return zynqmp_pm_invoke_fn(PM_SET_REQUIREMENT, node, capabilities, 589 + qos, ack, NULL); 590 + } 591 + 502 592 static const struct zynqmp_eemi_ops eemi_ops = { 503 593 .get_api_version = zynqmp_pm_get_api_version, 594 + .get_chipid = zynqmp_pm_get_chipid, 504 595 .query_data = zynqmp_pm_query_data, 505 596 .clock_enable = zynqmp_pm_clock_enable, 506 597 .clock_disable = zynqmp_pm_clock_disable, ··· 633 482 .clock_setparent = zynqmp_pm_clock_setparent, 634 483 .clock_getparent = zynqmp_pm_clock_getparent, 635 484 .ioctl = zynqmp_pm_ioctl, 485 + .reset_assert = zynqmp_pm_reset_assert, 486 + .reset_get_status = zynqmp_pm_reset_get_status, 487 + .init_finalize = zynqmp_pm_init_finalize, 488 + .set_suspend_mode = zynqmp_pm_set_suspend_mode, 489 + .request_node = zynqmp_pm_request_node, 490 + .release_node = zynqmp_pm_release_node, 491 + .set_requirement = zynqmp_pm_set_requirement, 636 492 }; 637 493 638 494 /** ··· 696 538 697 539 zynqmp_pm_api_debugfs_init(); 698 540 541 + ret = mfd_add_devices(&pdev->dev, PLATFORM_DEVID_NONE, firmware_devs, 542 + ARRAY_SIZE(firmware_devs), NULL, 0, NULL); 543 + if (ret) { 544 + dev_err(&pdev->dev, "failed to add MFD devices %d\n", ret); 545 + return ret; 546 + } 547 + 699 548 return of_platform_populate(dev->of_node, NULL, NULL, dev); 700 549 } 701 550 702 551 static int zynqmp_firmware_remove(struct platform_device *pdev) 703 552 { 553 + mfd_remove_devices(&pdev->dev); 704 554 zynqmp_pm_api_debugfs_exit(); 705 555 706 556 return 0;
+1
drivers/mfd/Makefile
··· 10 10 obj-$(CONFIG_MFD_ACT8945A) += act8945a.o 11 11 obj-$(CONFIG_MFD_SM501) += sm501.o 12 12 obj-$(CONFIG_MFD_ASIC3) += asic3.o tmio_core.o 13 + obj-$(CONFIG_ARCH_BCM2835) += bcm2835-pm.o 13 14 obj-$(CONFIG_MFD_BCM590XX) += bcm590xx.o 14 15 obj-$(CONFIG_MFD_BD9571MWV) += bd9571mwv.o 15 16 cros_ec_core-objs := cros_ec.o
+92
drivers/mfd/bcm2835-pm.c
··· 1 + // SPDX-License-Identifier: GPL-2.0+ 2 + /* 3 + * PM MFD driver for Broadcom BCM2835 4 + * 5 + * This driver binds to the PM block and creates the MFD device for 6 + * the WDT and power drivers. 7 + */ 8 + 9 + #include <linux/delay.h> 10 + #include <linux/io.h> 11 + #include <linux/mfd/bcm2835-pm.h> 12 + #include <linux/mfd/core.h> 13 + #include <linux/module.h> 14 + #include <linux/of_address.h> 15 + #include <linux/of_platform.h> 16 + #include <linux/platform_device.h> 17 + #include <linux/types.h> 18 + #include <linux/watchdog.h> 19 + 20 + static const struct mfd_cell bcm2835_pm_devs[] = { 21 + { .name = "bcm2835-wdt" }, 22 + }; 23 + 24 + static const struct mfd_cell bcm2835_power_devs[] = { 25 + { .name = "bcm2835-power" }, 26 + }; 27 + 28 + static int bcm2835_pm_probe(struct platform_device *pdev) 29 + { 30 + struct resource *res; 31 + struct device *dev = &pdev->dev; 32 + struct bcm2835_pm *pm; 33 + int ret; 34 + 35 + pm = devm_kzalloc(dev, sizeof(*pm), GFP_KERNEL); 36 + if (!pm) 37 + return -ENOMEM; 38 + platform_set_drvdata(pdev, pm); 39 + 40 + pm->dev = dev; 41 + 42 + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 43 + pm->base = devm_ioremap_resource(dev, res); 44 + if (IS_ERR(pm->base)) 45 + return PTR_ERR(pm->base); 46 + 47 + ret = devm_mfd_add_devices(dev, -1, 48 + bcm2835_pm_devs, ARRAY_SIZE(bcm2835_pm_devs), 49 + NULL, 0, NULL); 50 + if (ret) 51 + return ret; 52 + 53 + /* We'll use the presence of the AXI ASB regs in the 54 + * bcm2835-pm binding as the key for whether we can reference 55 + * the full PM register range and support power domains. 
56 + */ 57 + res = platform_get_resource(pdev, IORESOURCE_MEM, 1); 58 + if (res) { 59 + pm->asb = devm_ioremap_resource(dev, res); 60 + if (IS_ERR(pm->asb)) 61 + return PTR_ERR(pm->asb); 62 + 63 + ret = devm_mfd_add_devices(dev, -1, 64 + bcm2835_power_devs, 65 + ARRAY_SIZE(bcm2835_power_devs), 66 + NULL, 0, NULL); 67 + if (ret) 68 + return ret; 69 + } 70 + 71 + return 0; 72 + } 73 + 74 + static const struct of_device_id bcm2835_pm_of_match[] = { 75 + { .compatible = "brcm,bcm2835-pm-wdt", }, 76 + { .compatible = "brcm,bcm2835-pm", }, 77 + {}, 78 + }; 79 + MODULE_DEVICE_TABLE(of, bcm2835_pm_of_match); 80 + 81 + static struct platform_driver bcm2835_pm_driver = { 82 + .probe = bcm2835_pm_probe, 83 + .driver = { 84 + .name = "bcm2835-pm", 85 + .of_match_table = bcm2835_pm_of_match, 86 + }, 87 + }; 88 + module_platform_driver(bcm2835_pm_driver); 89 + 90 + MODULE_AUTHOR("Eric Anholt <eric@anholt.net>"); 91 + MODULE_DESCRIPTION("Driver for Broadcom BCM2835 PM MFD"); 92 + MODULE_LICENSE("GPL");
+6 -1
drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
··· 2468 2468 queue.destination.type = DPNI_DEST_DPCON; 2469 2469 queue.destination.priority = 1; 2470 2470 queue.user_context = (u64)(uintptr_t)fq; 2471 + queue.flc.stash_control = 1; 2472 + queue.flc.value &= 0xFFFFFFFFFFFFFFC0; 2473 + /* 01 01 00 - data, annotation, flow context */ 2474 + queue.flc.value |= 0x14; 2471 2475 err = dpni_set_queue(priv->mc_io, 0, priv->mc_token, 2472 2476 DPNI_QUEUE_RX, 0, fq->flowid, 2473 - DPNI_QUEUE_OPT_USER_CTX | DPNI_QUEUE_OPT_DEST, 2477 + DPNI_QUEUE_OPT_USER_CTX | DPNI_QUEUE_OPT_DEST | 2478 + DPNI_QUEUE_OPT_FLC, 2474 2479 &queue); 2475 2480 if (err) { 2476 2481 dev_err(dev, "dpni_set_queue(RX) failed\n");
+10
drivers/nvmem/Kconfig
··· 192 192 This driver can also be built as a module. If so, the module 193 193 will be called nvmem-sc27xx-efuse. 194 194 195 + config NVMEM_ZYNQMP 196 + bool "Xilinx ZYNQMP SoC nvmem firmware support" 197 + depends on ARCH_ZYNQMP 198 + help 199 + This is a driver to access hardware-related data like 200 + SoC revision, IDCODE, etc. by using the firmware 201 + interface. 202 + 203 + If sure, say yes. If unsure, say no. 204 + 195 205 endif
+2
drivers/nvmem/Makefile
··· 41 41 nvmem-rave-sp-eeprom-y := rave-sp-eeprom.o 42 42 obj-$(CONFIG_SC27XX_EFUSE) += nvmem-sc27xx-efuse.o 43 43 nvmem-sc27xx-efuse-y := sc27xx-efuse.o 44 + obj-$(CONFIG_NVMEM_ZYNQMP) += nvmem_zynqmp_nvmem.o 45 + nvmem_zynqmp_nvmem-y := zynqmp_nvmem.o
+86
drivers/nvmem/zynqmp_nvmem.c
··· 1 + // SPDX-License-Identifier: GPL-2.0+ 2 + /* 3 + * Copyright (C) 2019 Xilinx, Inc. 4 + */ 5 + 6 + #include <linux/module.h> 7 + #include <linux/nvmem-provider.h> 8 + #include <linux/of.h> 9 + #include <linux/platform_device.h> 10 + #include <linux/firmware/xlnx-zynqmp.h> 11 + 12 + #define SILICON_REVISION_MASK 0xF 13 + 14 + struct zynqmp_nvmem_data { 15 + struct device *dev; 16 + struct nvmem_device *nvmem; 17 + }; 18 + 19 + static int zynqmp_nvmem_read(void *context, unsigned int offset, 20 + void *val, size_t bytes) 21 + { 22 + int ret; 23 + int idcode, version; 24 + struct zynqmp_nvmem_data *priv = context; 25 + 26 + const struct zynqmp_eemi_ops *eemi_ops = zynqmp_pm_get_eemi_ops(); 27 + 28 + if (!eemi_ops || !eemi_ops->get_chipid) 29 + return -ENXIO; 30 + 31 + ret = eemi_ops->get_chipid(&idcode, &version); 32 + if (ret < 0) 33 + return ret; 34 + 35 + dev_dbg(priv->dev, "Read chipid val %x %x\n", idcode, version); 36 + *(int *)val = version & SILICON_REVISION_MASK; 37 + 38 + return 0; 39 + } 40 + 41 + static struct nvmem_config econfig = { 42 + .name = "zynqmp-nvmem", 43 + .owner = THIS_MODULE, 44 + .word_size = 1, 45 + .size = 1, 46 + .read_only = true, 47 + }; 48 + 49 + static const struct of_device_id zynqmp_nvmem_match[] = { 50 + { .compatible = "xlnx,zynqmp-nvmem-fw", }, 51 + { /* sentinel */ }, 52 + }; 53 + MODULE_DEVICE_TABLE(of, zynqmp_nvmem_match); 54 + 55 + static int zynqmp_nvmem_probe(struct platform_device *pdev) 56 + { 57 + struct device *dev = &pdev->dev; 58 + struct zynqmp_nvmem_data *priv; 59 + 60 + priv = devm_kzalloc(dev, sizeof(struct zynqmp_nvmem_data), GFP_KERNEL); 61 + if (!priv) 62 + return -ENOMEM; 63 + 64 + priv->dev = dev; 65 + econfig.dev = dev; 66 + econfig.reg_read = zynqmp_nvmem_read; 67 + econfig.priv = priv; 68 + 69 + priv->nvmem = devm_nvmem_register(dev, &econfig); 70 + 71 + return PTR_ERR_OR_ZERO(priv->nvmem); 72 + } 73 + 74 + static struct platform_driver zynqmp_nvmem_driver = { 75 + .probe = zynqmp_nvmem_probe, 76 + 
.driver = { 77 + .name = "zynqmp-nvmem", 78 + .of_match_table = zynqmp_nvmem_match, 79 + }, 80 + }; 81 + 82 + module_platform_driver(zynqmp_nvmem_driver); 83 + 84 + MODULE_AUTHOR("Michal Simek <michal.simek@xilinx.com>, Nava kishore Manne <navam@xilinx.com>"); 85 + MODULE_DESCRIPTION("ZynqMP NVMEM driver"); 86 + MODULE_LICENSE("GPL");
+18
drivers/opp/core.c
··· 131 131 EXPORT_SYMBOL_GPL(dev_pm_opp_get_freq); 132 132 133 133 /** 134 + * dev_pm_opp_get_level() - Gets the level corresponding to an available opp 135 + * @opp: opp for which the level value has to be returned 136 + * 137 + * Return: level read from device tree corresponding to the opp, else 138 + * return 0. 139 + */ 140 + unsigned int dev_pm_opp_get_level(struct dev_pm_opp *opp) 141 + { 142 + if (IS_ERR_OR_NULL(opp) || !opp->available) { 143 + pr_err("%s: Invalid parameters\n", __func__); 144 + return 0; 145 + } 146 + 147 + return opp->level; 148 + } 149 + EXPORT_SYMBOL_GPL(dev_pm_opp_get_level); 150 + 151 + /** 134 152 * dev_pm_opp_is_turbo() - Returns if opp is turbo OPP or not 135 153 * @opp: opp for which turbo mode is being verified 136 154 *
+2
drivers/opp/of.c
··· 594 594 new_opp->rate = (unsigned long)rate; 595 595 } 596 596 597 + of_property_read_u32(np, "opp-level", &new_opp->level); 598 + 597 599 /* Check if the OPP supports hardware's hierarchy of versions or not */ 598 600 if (!_opp_is_supported(dev, opp_table, np)) { 599 601 dev_dbg(dev, "OPP not supported by hardware: %llu\n", rate);
+2
drivers/opp/opp.h
··· 60 60 * @suspend: true if suspend OPP 61 61 * @pstate: Device's power domain's performance state. 62 62 * @rate: Frequency in hertz 63 + * @level: Performance level 63 64 * @supplies: Power supplies voltage/current values 64 65 * @clock_latency_ns: Latency (in nanoseconds) of switching to this OPP's 65 66 * frequency from any other OPP's frequency. ··· 81 80 bool suspend; 82 81 unsigned int pstate; 83 82 unsigned long rate; 83 + unsigned int level; 84 84 85 85 struct dev_pm_opp_supply *supplies; 86 86
+10 -2
drivers/reset/Kconfig
··· 40 40 help 41 41 This enables the reset controller driver for Marvell Berlin SoCs. 42 42 43 + config RESET_BRCMSTB 44 + tristate "Broadcom STB reset controller" 45 + depends on ARCH_BRCMSTB || COMPILE_TEST 46 + default ARCH_BRCMSTB 47 + help 48 + This enables the reset controller driver for Broadcom STB SoCs using 49 + a SUN_TOP_CTRL_SW_INIT style controller. 50 + 43 51 config RESET_HSDK 44 52 bool "Synopsys HSDK Reset Driver" 45 53 depends on HAS_IOMEM ··· 56 48 This enables the reset controller driver for HSDK board. 57 49 58 50 config RESET_IMX7 59 - bool "i.MX7 Reset Driver" if COMPILE_TEST 51 + bool "i.MX7/8 Reset Driver" if COMPILE_TEST 60 52 depends on HAS_IOMEM 61 - default SOC_IMX7D 53 + default SOC_IMX7D || (ARM64 && ARCH_MXC) 62 54 select MFD_SYSCON 63 55 help 64 56 This enables the reset controller driver for i.MX7 SoCs.
+2
drivers/reset/Makefile
··· 7 7 obj-$(CONFIG_RESET_ATH79) += reset-ath79.o 8 8 obj-$(CONFIG_RESET_AXS10X) += reset-axs10x.o 9 9 obj-$(CONFIG_RESET_BERLIN) += reset-berlin.o 10 + obj-$(CONFIG_RESET_BRCMSTB) += reset-brcmstb.o 10 11 obj-$(CONFIG_RESET_HSDK) += reset-hsdk.o 11 12 obj-$(CONFIG_RESET_IMX7) += reset-imx7.o 12 13 obj-$(CONFIG_RESET_LANTIQ) += reset-lantiq.o ··· 27 26 obj-$(CONFIG_RESET_UNIPHIER) += reset-uniphier.o 28 27 obj-$(CONFIG_RESET_UNIPHIER_GLUE) += reset-uniphier-glue.o 29 28 obj-$(CONFIG_RESET_ZYNQ) += reset-zynq.o 29 + obj-$(CONFIG_ARCH_ZYNQMP) += reset-zynqmp.o 30 30
+132
drivers/reset/reset-brcmstb.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Broadcom STB generic reset controller for SW_INIT style reset controller 4 + * 5 + * Author: Florian Fainelli <f.fainelli@gmail.com> 6 + * Copyright (C) 2018 Broadcom 7 + */ 8 + #include <linux/delay.h> 9 + #include <linux/device.h> 10 + #include <linux/io.h> 11 + #include <linux/module.h> 12 + #include <linux/of.h> 13 + #include <linux/platform_device.h> 14 + #include <linux/reset-controller.h> 15 + #include <linux/types.h> 16 + 17 + struct brcmstb_reset { 18 + void __iomem *base; 19 + struct reset_controller_dev rcdev; 20 + }; 21 + 22 + #define SW_INIT_SET 0x00 23 + #define SW_INIT_CLEAR 0x04 24 + #define SW_INIT_STATUS 0x08 25 + 26 + #define SW_INIT_BIT(id) BIT((id) & 0x1f) 27 + #define SW_INIT_BANK(id) ((id) >> 5) 28 + 29 + /* A full bank contains extra registers that we are not utilizing but still 30 + * qualify as a single bank. 31 + */ 32 + #define SW_INIT_BANK_SIZE 0x18 33 + 34 + static inline 35 + struct brcmstb_reset *to_brcmstb(struct reset_controller_dev *rcdev) 36 + { 37 + return container_of(rcdev, struct brcmstb_reset, rcdev); 38 + } 39 + 40 + static int brcmstb_reset_assert(struct reset_controller_dev *rcdev, 41 + unsigned long id) 42 + { 43 + unsigned int off = SW_INIT_BANK(id) * SW_INIT_BANK_SIZE; 44 + struct brcmstb_reset *priv = to_brcmstb(rcdev); 45 + 46 + writel_relaxed(SW_INIT_BIT(id), priv->base + off + SW_INIT_SET); 47 + 48 + return 0; 49 + } 50 + 51 + static int brcmstb_reset_deassert(struct reset_controller_dev *rcdev, 52 + unsigned long id) 53 + { 54 + unsigned int off = SW_INIT_BANK(id) * SW_INIT_BANK_SIZE; 55 + struct brcmstb_reset *priv = to_brcmstb(rcdev); 56 + 57 + writel_relaxed(SW_INIT_BIT(id), priv->base + off + SW_INIT_CLEAR); 58 + /* Maximum reset delay after de-asserting a line and seeing block 59 + * operation is typically 14us for the worst case, build some slack 60 + * here. 
61 + */ 62 + usleep_range(100, 200); 63 + 64 + return 0; 65 + } 66 + 67 + static int brcmstb_reset_status(struct reset_controller_dev *rcdev, 68 + unsigned long id) 69 + { 70 + unsigned int off = SW_INIT_BANK(id) * SW_INIT_BANK_SIZE; 71 + struct brcmstb_reset *priv = to_brcmstb(rcdev); 72 + 73 + return readl_relaxed(priv->base + off + SW_INIT_STATUS) & 74 + SW_INIT_BIT(id); 75 + } 76 + 77 + static const struct reset_control_ops brcmstb_reset_ops = { 78 + .assert = brcmstb_reset_assert, 79 + .deassert = brcmstb_reset_deassert, 80 + .status = brcmstb_reset_status, 81 + }; 82 + 83 + static int brcmstb_reset_probe(struct platform_device *pdev) 84 + { 85 + struct device *kdev = &pdev->dev; 86 + struct brcmstb_reset *priv; 87 + struct resource *res; 88 + 89 + priv = devm_kzalloc(kdev, sizeof(*priv), GFP_KERNEL); 90 + if (!priv) 91 + return -ENOMEM; 92 + 93 + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 94 + if (!IS_ALIGNED(res->start, SW_INIT_BANK_SIZE) || 95 + !IS_ALIGNED(resource_size(res), SW_INIT_BANK_SIZE)) { 96 + dev_err(kdev, "incorrect register range\n"); 97 + return -EINVAL; 98 + } 99 + 100 + priv->base = devm_ioremap_resource(kdev, res); 101 + if (IS_ERR(priv->base)) 102 + return PTR_ERR(priv->base); 103 + 104 + dev_set_drvdata(kdev, priv); 105 + 106 + priv->rcdev.owner = THIS_MODULE; 107 + priv->rcdev.nr_resets = DIV_ROUND_DOWN_ULL(resource_size(res), 108 + SW_INIT_BANK_SIZE) * 32; 109 + priv->rcdev.ops = &brcmstb_reset_ops; 110 + priv->rcdev.of_node = kdev->of_node; 111 + /* Use defaults: 1 cell and simple xlate function */ 112 + 113 + return devm_reset_controller_register(kdev, &priv->rcdev); 114 + } 115 + 116 + static const struct of_device_id brcmstb_reset_of_match[] = { 117 + { .compatible = "brcm,brcmstb-reset" }, 118 + { /* sentinel */ } 119 + }; 120 + 121 + static struct platform_driver brcmstb_reset_driver = { 122 + .probe = brcmstb_reset_probe, 123 + .driver = { 124 + .name = "brcmstb-reset", 125 + .of_match_table = brcmstb_reset_of_match, 
126 + }, 127 + }; 128 + module_platform_driver(brcmstb_reset_driver); 129 + 130 + MODULE_AUTHOR("Broadcom"); 131 + MODULE_DESCRIPTION("Broadcom STB reset controller"); 132 + MODULE_LICENSE("GPL");
+158 -14
drivers/reset/reset-imx7.c
··· 17 17 18 18 #include <linux/mfd/syscon.h> 19 19 #include <linux/mod_devicetable.h> 20 + #include <linux/of_device.h> 20 21 #include <linux/platform_device.h> 21 22 #include <linux/reset-controller.h> 22 23 #include <linux/regmap.h> 23 24 #include <dt-bindings/reset/imx7-reset.h> 25 + #include <dt-bindings/reset/imx8mq-reset.h> 26 + 27 + struct imx7_src_signal { 28 + unsigned int offset, bit; 29 + }; 30 + 31 + struct imx7_src_variant { 32 + const struct imx7_src_signal *signals; 33 + unsigned int signals_num; 34 + struct reset_control_ops ops; 35 + }; 24 36 25 37 struct imx7_src { 26 38 struct reset_controller_dev rcdev; 27 39 struct regmap *regmap; 40 + const struct imx7_src_signal *signals; 28 41 }; 29 42 30 43 enum imx7_src_registers { ··· 52 39 SRC_DDRC_RCR = 0x1000, 53 40 }; 54 41 55 - struct imx7_src_signal { 56 - unsigned int offset, bit; 57 - }; 42 + static int imx7_reset_update(struct imx7_src *imx7src, 43 + unsigned long id, unsigned int value) 44 + { 45 + const struct imx7_src_signal *signal = &imx7src->signals[id]; 46 + 47 + return regmap_update_bits(imx7src->regmap, 48 + signal->offset, signal->bit, value); 49 + } 58 50 59 51 static const struct imx7_src_signal imx7_src_signals[IMX7_RESET_NUM] = { 60 52 [IMX7_RESET_A7_CORE_POR_RESET0] = { SRC_A7RCR0, BIT(0) }, ··· 99 81 unsigned long id, bool assert) 100 82 { 101 83 struct imx7_src *imx7src = to_imx7_src(rcdev); 102 - const struct imx7_src_signal *signal = &imx7_src_signals[id]; 103 - unsigned int value = assert ? signal->bit : 0; 84 + const unsigned int bit = imx7src->signals[id].bit; 85 + unsigned int value = assert ? bit : 0; 104 86 105 87 switch (id) { 106 88 case IMX7_RESET_PCIEPHY: ··· 113 95 break; 114 96 115 97 case IMX7_RESET_PCIE_CTRL_APPS_EN: 116 - value = (assert) ? 0 : signal->bit; 98 + value = assert ? 
0 : bit; 117 99 break; 118 100 } 119 101 120 - return regmap_update_bits(imx7src->regmap, 121 - signal->offset, signal->bit, value); 102 + return imx7_reset_update(imx7src, id, value); 122 103 } 123 104 124 105 static int imx7_reset_assert(struct reset_controller_dev *rcdev, ··· 132 115 return imx7_reset_set(rcdev, id, false); 133 116 } 134 117 135 - static const struct reset_control_ops imx7_reset_ops = { 136 - .assert = imx7_reset_assert, 137 - .deassert = imx7_reset_deassert, 118 + static const struct imx7_src_variant variant_imx7 = { 119 + .signals = imx7_src_signals, 120 + .signals_num = ARRAY_SIZE(imx7_src_signals), 121 + .ops = { 122 + .assert = imx7_reset_assert, 123 + .deassert = imx7_reset_deassert, 124 + }, 125 + }; 126 + 127 + enum imx8mq_src_registers { 128 + SRC_A53RCR0 = 0x0004, 129 + SRC_HDMI_RCR = 0x0030, 130 + SRC_DISP_RCR = 0x0034, 131 + SRC_GPU_RCR = 0x0040, 132 + SRC_VPU_RCR = 0x0044, 133 + SRC_PCIE2_RCR = 0x0048, 134 + SRC_MIPIPHY1_RCR = 0x004c, 135 + SRC_MIPIPHY2_RCR = 0x0050, 136 + SRC_DDRC2_RCR = 0x1004, 137 + }; 138 + 139 + static const struct imx7_src_signal imx8mq_src_signals[IMX8MQ_RESET_NUM] = { 140 + [IMX8MQ_RESET_A53_CORE_POR_RESET0] = { SRC_A53RCR0, BIT(0) }, 141 + [IMX8MQ_RESET_A53_CORE_POR_RESET1] = { SRC_A53RCR0, BIT(1) }, 142 + [IMX8MQ_RESET_A53_CORE_POR_RESET2] = { SRC_A53RCR0, BIT(2) }, 143 + [IMX8MQ_RESET_A53_CORE_POR_RESET3] = { SRC_A53RCR0, BIT(3) }, 144 + [IMX8MQ_RESET_A53_CORE_RESET0] = { SRC_A53RCR0, BIT(4) }, 145 + [IMX8MQ_RESET_A53_CORE_RESET1] = { SRC_A53RCR0, BIT(5) }, 146 + [IMX8MQ_RESET_A53_CORE_RESET2] = { SRC_A53RCR0, BIT(6) }, 147 + [IMX8MQ_RESET_A53_CORE_RESET3] = { SRC_A53RCR0, BIT(7) }, 148 + [IMX8MQ_RESET_A53_DBG_RESET0] = { SRC_A53RCR0, BIT(8) }, 149 + [IMX8MQ_RESET_A53_DBG_RESET1] = { SRC_A53RCR0, BIT(9) }, 150 + [IMX8MQ_RESET_A53_DBG_RESET2] = { SRC_A53RCR0, BIT(10) }, 151 + [IMX8MQ_RESET_A53_DBG_RESET3] = { SRC_A53RCR0, BIT(11) }, 152 + [IMX8MQ_RESET_A53_ETM_RESET0] = { SRC_A53RCR0, BIT(12) }, 153 + 
[IMX8MQ_RESET_A53_ETM_RESET1] = { SRC_A53RCR0, BIT(13) }, 154 + [IMX8MQ_RESET_A53_ETM_RESET2] = { SRC_A53RCR0, BIT(14) }, 155 + [IMX8MQ_RESET_A53_ETM_RESET3] = { SRC_A53RCR0, BIT(15) }, 156 + [IMX8MQ_RESET_A53_SOC_DBG_RESET] = { SRC_A53RCR0, BIT(20) }, 157 + [IMX8MQ_RESET_A53_L2RESET] = { SRC_A53RCR0, BIT(21) }, 158 + [IMX8MQ_RESET_SW_NON_SCLR_M4C_RST] = { SRC_M4RCR, BIT(0) }, 159 + [IMX8MQ_RESET_OTG1_PHY_RESET] = { SRC_USBOPHY1_RCR, BIT(0) }, 160 + [IMX8MQ_RESET_OTG2_PHY_RESET] = { SRC_USBOPHY2_RCR, BIT(0) }, 161 + [IMX8MQ_RESET_MIPI_DSI_RESET_BYTE_N] = { SRC_MIPIPHY_RCR, BIT(1) }, 162 + [IMX8MQ_RESET_MIPI_DSI_RESET_N] = { SRC_MIPIPHY_RCR, BIT(2) }, 163 + [IMX8MQ_RESET_MIPI_DIS_DPI_RESET_N] = { SRC_MIPIPHY_RCR, BIT(3) }, 164 + [IMX8MQ_RESET_MIPI_DIS_ESC_RESET_N] = { SRC_MIPIPHY_RCR, BIT(4) }, 165 + [IMX8MQ_RESET_MIPI_DIS_PCLK_RESET_N] = { SRC_MIPIPHY_RCR, BIT(5) }, 166 + [IMX8MQ_RESET_PCIEPHY] = { SRC_PCIEPHY_RCR, 167 + BIT(2) | BIT(1) }, 168 + [IMX8MQ_RESET_PCIEPHY_PERST] = { SRC_PCIEPHY_RCR, BIT(3) }, 169 + [IMX8MQ_RESET_PCIE_CTRL_APPS_EN] = { SRC_PCIEPHY_RCR, BIT(6) }, 170 + [IMX8MQ_RESET_PCIE_CTRL_APPS_TURNOFF] = { SRC_PCIEPHY_RCR, BIT(11) }, 171 + [IMX8MQ_RESET_HDMI_PHY_APB_RESET] = { SRC_HDMI_RCR, BIT(0) }, 172 + [IMX8MQ_RESET_DISP_RESET] = { SRC_DISP_RCR, BIT(0) }, 173 + [IMX8MQ_RESET_GPU_RESET] = { SRC_GPU_RCR, BIT(0) }, 174 + [IMX8MQ_RESET_VPU_RESET] = { SRC_VPU_RCR, BIT(0) }, 175 + [IMX8MQ_RESET_PCIEPHY2] = { SRC_PCIE2_RCR, 176 + BIT(2) | BIT(1) }, 177 + [IMX8MQ_RESET_PCIEPHY2_PERST] = { SRC_PCIE2_RCR, BIT(3) }, 178 + [IMX8MQ_RESET_PCIE2_CTRL_APPS_EN] = { SRC_PCIE2_RCR, BIT(6) }, 179 + [IMX8MQ_RESET_PCIE2_CTRL_APPS_TURNOFF] = { SRC_PCIE2_RCR, BIT(11) }, 180 + [IMX8MQ_RESET_MIPI_CSI1_CORE_RESET] = { SRC_MIPIPHY1_RCR, BIT(0) }, 181 + [IMX8MQ_RESET_MIPI_CSI1_PHY_REF_RESET] = { SRC_MIPIPHY1_RCR, BIT(1) }, 182 + [IMX8MQ_RESET_MIPI_CSI1_ESC_RESET] = { SRC_MIPIPHY1_RCR, BIT(2) }, 183 + [IMX8MQ_RESET_MIPI_CSI2_CORE_RESET] = { SRC_MIPIPHY2_RCR, BIT(0) }, 184 + 
[IMX8MQ_RESET_MIPI_CSI2_PHY_REF_RESET] = { SRC_MIPIPHY2_RCR, BIT(1) }, 185 + [IMX8MQ_RESET_MIPI_CSI2_ESC_RESET] = { SRC_MIPIPHY2_RCR, BIT(2) }, 186 + [IMX8MQ_RESET_DDRC1_PRST] = { SRC_DDRC_RCR, BIT(0) }, 187 + [IMX8MQ_RESET_DDRC1_CORE_RESET] = { SRC_DDRC_RCR, BIT(1) }, 188 + [IMX8MQ_RESET_DDRC1_PHY_RESET] = { SRC_DDRC_RCR, BIT(2) }, 189 + [IMX8MQ_RESET_DDRC2_PHY_RESET] = { SRC_DDRC2_RCR, BIT(0) }, 190 + [IMX8MQ_RESET_DDRC2_CORE_RESET] = { SRC_DDRC2_RCR, BIT(1) }, 191 + [IMX8MQ_RESET_DDRC2_PRST] = { SRC_DDRC2_RCR, BIT(2) }, 192 + }; 193 + 194 + static int imx8mq_reset_set(struct reset_controller_dev *rcdev, 195 + unsigned long id, bool assert) 196 + { 197 + struct imx7_src *imx7src = to_imx7_src(rcdev); 198 + const unsigned int bit = imx7src->signals[id].bit; 199 + unsigned int value = assert ? bit : 0; 200 + 201 + switch (id) { 202 + case IMX8MQ_RESET_PCIEPHY: 203 + case IMX8MQ_RESET_PCIEPHY2: /* fallthrough */ 204 + /* 205 + * wait for more than 10us to release phy g_rst and 206 + * btnrst 207 + */ 208 + if (!assert) 209 + udelay(10); 210 + break; 211 + 212 + case IMX8MQ_RESET_PCIE_CTRL_APPS_EN: 213 + case IMX8MQ_RESET_PCIE2_CTRL_APPS_EN: /* fallthrough */ 214 + case IMX8MQ_RESET_MIPI_DIS_PCLK_RESET_N: /* fallthrough */ 215 + case IMX8MQ_RESET_MIPI_DIS_ESC_RESET_N: /* fallthrough */ 216 + case IMX8MQ_RESET_MIPI_DIS_DPI_RESET_N: /* fallthrough */ 217 + case IMX8MQ_RESET_MIPI_DSI_RESET_N: /* fallthrough */ 218 + case IMX8MQ_RESET_MIPI_DSI_RESET_BYTE_N: /* fallthrough */ 219 + value = assert ? 
0 : bit; 220 + break; 221 + } 222 + 223 + return imx7_reset_update(imx7src, id, value); 224 + } 225 + 226 + static int imx8mq_reset_assert(struct reset_controller_dev *rcdev, 227 + unsigned long id) 228 + { 229 + return imx8mq_reset_set(rcdev, id, true); 230 + } 231 + 232 + static int imx8mq_reset_deassert(struct reset_controller_dev *rcdev, 233 + unsigned long id) 234 + { 235 + return imx8mq_reset_set(rcdev, id, false); 236 + } 237 + 238 + static const struct imx7_src_variant variant_imx8mq = { 239 + .signals = imx8mq_src_signals, 240 + .signals_num = ARRAY_SIZE(imx8mq_src_signals), 241 + .ops = { 242 + .assert = imx8mq_reset_assert, 243 + .deassert = imx8mq_reset_deassert, 244 + }, 138 245 }; 139 246 140 247 static int imx7_reset_probe(struct platform_device *pdev) ··· 266 125 struct imx7_src *imx7src; 267 126 struct device *dev = &pdev->dev; 268 127 struct regmap_config config = { .name = "src" }; 128 + const struct imx7_src_variant *variant = of_device_get_match_data(dev); 269 129 270 130 imx7src = devm_kzalloc(dev, sizeof(*imx7src), GFP_KERNEL); 271 131 if (!imx7src) 272 132 return -ENOMEM; 273 133 134 + imx7src->signals = variant->signals; 274 135 imx7src->regmap = syscon_node_to_regmap(dev->of_node); 275 136 if (IS_ERR(imx7src->regmap)) { 276 137 dev_err(dev, "Unable to get imx7-src regmap"); ··· 281 138 regmap_attach_dev(dev, imx7src->regmap, &config); 282 139 283 140 imx7src->rcdev.owner = THIS_MODULE; 284 - imx7src->rcdev.nr_resets = IMX7_RESET_NUM; 285 - imx7src->rcdev.ops = &imx7_reset_ops; 141 + imx7src->rcdev.nr_resets = variant->signals_num; 142 + imx7src->rcdev.ops = &variant->ops; 286 143 imx7src->rcdev.of_node = dev->of_node; 287 144 288 145 return devm_reset_controller_register(dev, &imx7src->rcdev); 289 146 } 290 147 291 148 static const struct of_device_id imx7_reset_dt_ids[] = { 292 - { .compatible = "fsl,imx7d-src", }, 149 + { .compatible = "fsl,imx7d-src", .data = &variant_imx7 }, 150 + { .compatible = "fsl,imx8mq-src", .data = 
&variant_imx8mq }, 293 151 { /* sentinel */ }, 294 152 }; 295 153
+1 -1
drivers/reset/reset-socfpga.c
··· 11 11 #include <linux/of_address.h> 12 12 #include <linux/platform_device.h> 13 13 #include <linux/reset-controller.h> 14 + #include <linux/reset/socfpga.h> 14 15 #include <linux/slab.h> 15 16 #include <linux/spinlock.h> 16 17 #include <linux/types.h> ··· 19 18 #include "reset-simple.h" 20 19 21 20 #define SOCFPGA_NR_BANKS 8 22 - void __init socfpga_reset_init(void); 23 21 24 22 static int a10_reset_init(struct device_node *np) 25 23 {
+1
drivers/reset/reset-sunxi.c
··· 18 18 #include <linux/of_address.h> 19 19 #include <linux/platform_device.h> 20 20 #include <linux/reset-controller.h> 21 + #include <linux/reset/sunxi.h> 21 22 #include <linux/slab.h> 22 23 #include <linux/spinlock.h> 23 24 #include <linux/types.h>
+114
drivers/reset/reset-zynqmp.c
··· 1 + // SPDX-License-Identifier: GPL-2.0+ 2 + /* 3 + * Copyright (C) 2018 Xilinx, Inc. 4 + * 5 + */ 6 + 7 + #include <linux/err.h> 8 + #include <linux/of.h> 9 + #include <linux/platform_device.h> 10 + #include <linux/reset-controller.h> 11 + #include <linux/firmware/xlnx-zynqmp.h> 12 + 13 + #define ZYNQMP_NR_RESETS (ZYNQMP_PM_RESET_END - ZYNQMP_PM_RESET_START) 14 + #define ZYNQMP_RESET_ID ZYNQMP_PM_RESET_START 15 + 16 + struct zynqmp_reset_data { 17 + struct reset_controller_dev rcdev; 18 + const struct zynqmp_eemi_ops *eemi_ops; 19 + }; 20 + 21 + static inline struct zynqmp_reset_data * 22 + to_zynqmp_reset_data(struct reset_controller_dev *rcdev) 23 + { 24 + return container_of(rcdev, struct zynqmp_reset_data, rcdev); 25 + } 26 + 27 + static int zynqmp_reset_assert(struct reset_controller_dev *rcdev, 28 + unsigned long id) 29 + { 30 + struct zynqmp_reset_data *priv = to_zynqmp_reset_data(rcdev); 31 + 32 + return priv->eemi_ops->reset_assert(ZYNQMP_RESET_ID + id, 33 + PM_RESET_ACTION_ASSERT); 34 + } 35 + 36 + static int zynqmp_reset_deassert(struct reset_controller_dev *rcdev, 37 + unsigned long id) 38 + { 39 + struct zynqmp_reset_data *priv = to_zynqmp_reset_data(rcdev); 40 + 41 + return priv->eemi_ops->reset_assert(ZYNQMP_RESET_ID + id, 42 + PM_RESET_ACTION_RELEASE); 43 + } 44 + 45 + static int zynqmp_reset_status(struct reset_controller_dev *rcdev, 46 + unsigned long id) 47 + { 48 + struct zynqmp_reset_data *priv = to_zynqmp_reset_data(rcdev); 49 + int val, err; 50 + 51 + err = priv->eemi_ops->reset_get_status(ZYNQMP_RESET_ID + id, &val); 52 + if (err) 53 + return err; 54 + 55 + return val; 56 + } 57 + 58 + static int zynqmp_reset_reset(struct reset_controller_dev *rcdev, 59 + unsigned long id) 60 + { 61 + struct zynqmp_reset_data *priv = to_zynqmp_reset_data(rcdev); 62 + 63 + return priv->eemi_ops->reset_assert(ZYNQMP_RESET_ID + id, 64 + PM_RESET_ACTION_PULSE); 65 + } 66 + 67 + static struct reset_control_ops zynqmp_reset_ops = { 68 + .reset = 
zynqmp_reset_reset, 69 + .assert = zynqmp_reset_assert, 70 + .deassert = zynqmp_reset_deassert, 71 + .status = zynqmp_reset_status, 72 + }; 73 + 74 + static int zynqmp_reset_probe(struct platform_device *pdev) 75 + { 76 + struct zynqmp_reset_data *priv; 77 + 78 + priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL); 79 + if (!priv) 80 + return -ENOMEM; 81 + 82 + platform_set_drvdata(pdev, priv); 83 + 84 + priv->eemi_ops = zynqmp_pm_get_eemi_ops(); 85 + if (!priv->eemi_ops) 86 + return -ENXIO; 87 + 88 + priv->rcdev.ops = &zynqmp_reset_ops; 89 + priv->rcdev.owner = THIS_MODULE; 90 + priv->rcdev.of_node = pdev->dev.of_node; 91 + priv->rcdev.nr_resets = ZYNQMP_NR_RESETS; 92 + 93 + return devm_reset_controller_register(&pdev->dev, &priv->rcdev); 94 + } 95 + 96 + static const struct of_device_id zynqmp_reset_dt_ids[] = { 97 + { .compatible = "xlnx,zynqmp-reset", }, 98 + { /* sentinel */ }, 99 + }; 100 + 101 + static struct platform_driver zynqmp_reset_driver = { 102 + .probe = zynqmp_reset_probe, 103 + .driver = { 104 + .name = KBUILD_MODNAME, 105 + .of_match_table = zynqmp_reset_dt_ids, 106 + }, 107 + }; 108 + 109 + static int __init zynqmp_reset_init(void) 110 + { 111 + return platform_driver_register(&zynqmp_reset_driver); 112 + } 113 + 114 + arch_initcall(zynqmp_reset_init);
+16 -2
drivers/soc/amlogic/meson-canvas.c
··· 51 51 { 52 52 struct device_node *canvas_node; 53 53 struct platform_device *canvas_pdev; 54 + struct meson_canvas *canvas; 54 55 55 56 canvas_node = of_parse_phandle(dev->of_node, "amlogic,canvas", 0); 56 57 if (!canvas_node) 57 58 return ERR_PTR(-ENODEV); 58 59 59 60 canvas_pdev = of_find_device_by_node(canvas_node); 60 - if (!canvas_pdev) 61 + if (!canvas_pdev) { 62 + of_node_put(canvas_node); 61 63 return ERR_PTR(-EPROBE_DEFER); 64 + } 62 - return dev_get_drvdata(&canvas_pdev->dev); 66 + of_node_put(canvas_node); 67 + 68 + /* 69 + * If priv is NULL, it's probably because the canvas hasn't 70 + * been properly initialized. Bail out with -EINVAL because, in 71 + * the current state, this driver probe cannot return -EPROBE_DEFER 72 + */ 73 + canvas = dev_get_drvdata(&canvas_pdev->dev); 74 + if (!canvas) 75 + return ERR_PTR(-EINVAL); 76 + 77 + return canvas; 64 78 } 65 79 EXPORT_SYMBOL_GPL(meson_canvas_get); 66 80
+196
drivers/soc/amlogic/meson-clk-measure.c
··· 165 165 CLK_MSR_ID(82, "ge2d"), 166 166 }; 167 167 168 + static struct meson_msr_id clk_msr_axg[CLK_MSR_MAX] = { 169 + CLK_MSR_ID(0, "ring_osc_out_ee_0"), 170 + CLK_MSR_ID(1, "ring_osc_out_ee_1"), 171 + CLK_MSR_ID(2, "ring_osc_out_ee_2"), 172 + CLK_MSR_ID(3, "a53_ring_osc"), 173 + CLK_MSR_ID(4, "gp0_pll"), 174 + CLK_MSR_ID(5, "gp1_pll"), 175 + CLK_MSR_ID(7, "clk81"), 176 + CLK_MSR_ID(9, "encl"), 177 + CLK_MSR_ID(17, "sys_pll_div16"), 178 + CLK_MSR_ID(18, "sys_cpu_div16"), 179 + CLK_MSR_ID(20, "rtc_osc_out"), 180 + CLK_MSR_ID(23, "mmc_clk"), 181 + CLK_MSR_ID(28, "sar_adc"), 182 + CLK_MSR_ID(31, "mpll_test_out"), 183 + CLK_MSR_ID(40, "mod_eth_tx_clk"), 184 + CLK_MSR_ID(41, "mod_eth_rx_clk_rmii"), 185 + CLK_MSR_ID(42, "mp0_out"), 186 + CLK_MSR_ID(43, "fclk_div5"), 187 + CLK_MSR_ID(44, "pwm_b"), 188 + CLK_MSR_ID(45, "pwm_a"), 189 + CLK_MSR_ID(46, "vpu"), 190 + CLK_MSR_ID(47, "ddr_dpll_pt"), 191 + CLK_MSR_ID(48, "mp1_out"), 192 + CLK_MSR_ID(49, "mp2_out"), 193 + CLK_MSR_ID(50, "mp3_out"), 194 + CLK_MSR_ID(51, "sd_emmm_c"), 195 + CLK_MSR_ID(52, "sd_emmc_b"), 196 + CLK_MSR_ID(61, "gpio_msr"), 197 + CLK_MSR_ID(66, "audio_slv_lrclk_c"), 198 + CLK_MSR_ID(67, "audio_slv_lrclk_b"), 199 + CLK_MSR_ID(68, "audio_slv_lrclk_a"), 200 + CLK_MSR_ID(69, "audio_slv_sclk_c"), 201 + CLK_MSR_ID(70, "audio_slv_sclk_b"), 202 + CLK_MSR_ID(71, "audio_slv_sclk_a"), 203 + CLK_MSR_ID(72, "pwm_d"), 204 + CLK_MSR_ID(73, "pwm_c"), 205 + CLK_MSR_ID(74, "wifi_beacon"), 206 + CLK_MSR_ID(75, "tdmin_lb_lrcl"), 207 + CLK_MSR_ID(76, "tdmin_lb_sclk"), 208 + CLK_MSR_ID(77, "rng_ring_osc_0"), 209 + CLK_MSR_ID(78, "rng_ring_osc_1"), 210 + CLK_MSR_ID(79, "rng_ring_osc_2"), 211 + CLK_MSR_ID(80, "rng_ring_osc_3"), 212 + CLK_MSR_ID(81, "vapb"), 213 + CLK_MSR_ID(82, "ge2d"), 214 + CLK_MSR_ID(84, "audio_resample"), 215 + CLK_MSR_ID(85, "audio_pdm_sys"), 216 + CLK_MSR_ID(86, "audio_spdifout"), 217 + CLK_MSR_ID(87, "audio_spdifin"), 218 + CLK_MSR_ID(88, "audio_lrclk_f"), 219 + CLK_MSR_ID(89, "audio_lrclk_e"), 220 
+ CLK_MSR_ID(90, "audio_lrclk_d"), 221 + CLK_MSR_ID(91, "audio_lrclk_c"), 222 + CLK_MSR_ID(92, "audio_lrclk_b"), 223 + CLK_MSR_ID(93, "audio_lrclk_a"), 224 + CLK_MSR_ID(94, "audio_sclk_f"), 225 + CLK_MSR_ID(95, "audio_sclk_e"), 226 + CLK_MSR_ID(96, "audio_sclk_d"), 227 + CLK_MSR_ID(97, "audio_sclk_c"), 228 + CLK_MSR_ID(98, "audio_sclk_b"), 229 + CLK_MSR_ID(99, "audio_sclk_a"), 230 + CLK_MSR_ID(100, "audio_mclk_f"), 231 + CLK_MSR_ID(101, "audio_mclk_e"), 232 + CLK_MSR_ID(102, "audio_mclk_d"), 233 + CLK_MSR_ID(103, "audio_mclk_c"), 234 + CLK_MSR_ID(104, "audio_mclk_b"), 235 + CLK_MSR_ID(105, "audio_mclk_a"), 236 + CLK_MSR_ID(106, "pcie_refclk_n"), 237 + CLK_MSR_ID(107, "pcie_refclk_p"), 238 + CLK_MSR_ID(108, "audio_locker_out"), 239 + CLK_MSR_ID(109, "audio_locker_in"), 240 + }; 241 + 242 + static struct meson_msr_id clk_msr_g12a[CLK_MSR_MAX] = { 243 + CLK_MSR_ID(0, "ring_osc_out_ee_0"), 244 + CLK_MSR_ID(1, "ring_osc_out_ee_1"), 245 + CLK_MSR_ID(2, "ring_osc_out_ee_2"), 246 + CLK_MSR_ID(3, "sys_cpu_ring_osc"), 247 + CLK_MSR_ID(4, "gp0_pll"), 248 + CLK_MSR_ID(6, "enci"), 249 + CLK_MSR_ID(7, "clk81"), 250 + CLK_MSR_ID(8, "encp"), 251 + CLK_MSR_ID(9, "encl"), 252 + CLK_MSR_ID(10, "vdac"), 253 + CLK_MSR_ID(11, "eth_tx"), 254 + CLK_MSR_ID(12, "hifi_pll"), 255 + CLK_MSR_ID(13, "mod_tcon"), 256 + CLK_MSR_ID(14, "fec_0"), 257 + CLK_MSR_ID(15, "fec_1"), 258 + CLK_MSR_ID(16, "fec_2"), 259 + CLK_MSR_ID(17, "sys_pll_div16"), 260 + CLK_MSR_ID(18, "sys_cpu_div16"), 261 + CLK_MSR_ID(19, "lcd_an_ph2"), 262 + CLK_MSR_ID(20, "rtc_osc_out"), 263 + CLK_MSR_ID(21, "lcd_an_ph3"), 264 + CLK_MSR_ID(22, "eth_phy_ref"), 265 + CLK_MSR_ID(23, "mpll_50m"), 266 + CLK_MSR_ID(24, "eth_125m"), 267 + CLK_MSR_ID(25, "eth_rmii"), 268 + CLK_MSR_ID(26, "sc_int"), 269 + CLK_MSR_ID(27, "in_mac"), 270 + CLK_MSR_ID(28, "sar_adc"), 271 + CLK_MSR_ID(29, "pcie_inp"), 272 + CLK_MSR_ID(30, "pcie_inn"), 273 + CLK_MSR_ID(31, "mpll_test_out"), 274 + CLK_MSR_ID(32, "vdec"), 275 + CLK_MSR_ID(33, "sys_cpu_ring_osc_1"), 
276 + CLK_MSR_ID(34, "eth_mpll_50m"), 277 + CLK_MSR_ID(35, "mali"), 278 + CLK_MSR_ID(36, "hdmi_tx_pixel"), 279 + CLK_MSR_ID(37, "cdac"), 280 + CLK_MSR_ID(38, "vdin_meas"), 281 + CLK_MSR_ID(39, "bt656"), 282 + CLK_MSR_ID(41, "eth_rx_or_rmii"), 283 + CLK_MSR_ID(42, "mp0_out"), 284 + CLK_MSR_ID(43, "fclk_div5"), 285 + CLK_MSR_ID(44, "pwm_b"), 286 + CLK_MSR_ID(45, "pwm_a"), 287 + CLK_MSR_ID(46, "vpu"), 288 + CLK_MSR_ID(47, "ddr_dpll_pt"), 289 + CLK_MSR_ID(48, "mp1_out"), 290 + CLK_MSR_ID(49, "mp2_out"), 291 + CLK_MSR_ID(50, "mp3_out"), 292 + CLK_MSR_ID(51, "sd_emmc_c"), 293 + CLK_MSR_ID(52, "sd_emmc_b"), 294 + CLK_MSR_ID(53, "sd_emmc_a"), 295 + CLK_MSR_ID(54, "vpu_clkc"), 296 + CLK_MSR_ID(55, "vid_pll_div_out"), 297 + CLK_MSR_ID(56, "wave420l_a"), 298 + CLK_MSR_ID(57, "wave420l_c"), 299 + CLK_MSR_ID(58, "wave420l_b"), 300 + CLK_MSR_ID(59, "hcodec"), 301 + CLK_MSR_ID(61, "gpio_msr"), 302 + CLK_MSR_ID(62, "hevcb"), 303 + CLK_MSR_ID(63, "dsi_meas"), 304 + CLK_MSR_ID(64, "spicc_1"), 305 + CLK_MSR_ID(65, "spicc_0"), 306 + CLK_MSR_ID(66, "vid_lock"), 307 + CLK_MSR_ID(67, "dsi_phy"), 308 + CLK_MSR_ID(68, "hdcp22_esm"), 309 + CLK_MSR_ID(69, "hdcp22_skp"), 310 + CLK_MSR_ID(70, "pwm_f"), 311 + CLK_MSR_ID(71, "pwm_e"), 312 + CLK_MSR_ID(72, "pwm_d"), 313 + CLK_MSR_ID(73, "pwm_c"), 314 + CLK_MSR_ID(75, "hevcf"), 315 + CLK_MSR_ID(77, "rng_ring_osc_0"), 316 + CLK_MSR_ID(78, "rng_ring_osc_1"), 317 + CLK_MSR_ID(79, "rng_ring_osc_2"), 318 + CLK_MSR_ID(80, "rng_ring_osc_3"), 319 + CLK_MSR_ID(81, "vapb"), 320 + CLK_MSR_ID(82, "ge2d"), 321 + CLK_MSR_ID(83, "co_rx"), 322 + CLK_MSR_ID(84, "co_tx"), 323 + CLK_MSR_ID(89, "hdmi_todig"), 324 + CLK_MSR_ID(90, "hdmitx_sys"), 325 + CLK_MSR_ID(94, "eth_phy_rx"), 326 + CLK_MSR_ID(95, "eth_phy_pll"), 327 + CLK_MSR_ID(96, "vpu_b"), 328 + CLK_MSR_ID(97, "cpu_b_tmp"), 329 + CLK_MSR_ID(98, "ts"), 330 + CLK_MSR_ID(99, "ring_osc_out_ee_3"), 331 + CLK_MSR_ID(100, "ring_osc_out_ee_4"), 332 + CLK_MSR_ID(101, "ring_osc_out_ee_5"), 333 + CLK_MSR_ID(102, 
"ring_osc_out_ee_6"), 334 + CLK_MSR_ID(103, "ring_osc_out_ee_7"), 335 + CLK_MSR_ID(104, "ring_osc_out_ee_8"), 336 + CLK_MSR_ID(105, "ring_osc_out_ee_9"), 337 + CLK_MSR_ID(106, "ephy_test"), 338 + CLK_MSR_ID(107, "au_dac_g128x"), 339 + CLK_MSR_ID(108, "audio_locker_out"), 340 + CLK_MSR_ID(109, "audio_locker_in"), 341 + CLK_MSR_ID(110, "audio_tdmout_c_sclk"), 342 + CLK_MSR_ID(111, "audio_tdmout_b_sclk"), 343 + CLK_MSR_ID(112, "audio_tdmout_a_sclk"), 344 + CLK_MSR_ID(113, "audio_tdmin_lb_sclk"), 345 + CLK_MSR_ID(114, "audio_tdmin_c_sclk"), 346 + CLK_MSR_ID(115, "audio_tdmin_b_sclk"), 347 + CLK_MSR_ID(116, "audio_tdmin_a_sclk"), 348 + CLK_MSR_ID(117, "audio_resample"), 349 + CLK_MSR_ID(118, "audio_pdm_sys"), 350 + CLK_MSR_ID(119, "audio_spdifout_b"), 351 + CLK_MSR_ID(120, "audio_spdifout"), 352 + CLK_MSR_ID(121, "audio_spdifin"), 353 + CLK_MSR_ID(122, "audio_pdm_dclk"), 354 + }; 355 + 168 356 static int meson_measure_id(struct meson_msr_id *clk_msr_id, 169 357 unsigned int duration) 170 358 { ··· 524 336 { 525 337 .compatible = "amlogic,meson8b-clk-measure", 526 338 .data = (void *)clk_msr_m8, 339 + }, 340 + { 341 + .compatible = "amlogic,meson-axg-clk-measure", 342 + .data = (void *)clk_msr_axg, 343 + }, 344 + { 345 + .compatible = "amlogic,meson-g12a-clk-measure", 346 + .data = (void *)clk_msr_g12a, 527 347 }, 528 348 { /* sentinel */ } 529 349 };
+12
drivers/soc/bcm/Kconfig
··· 1 1 menu "Broadcom SoC drivers" 2 2 3 + config BCM2835_POWER 4 + bool "BCM2835 power domain driver" 5 + depends on ARCH_BCM2835 || (COMPILE_TEST && OF) 6 + default y if ARCH_BCM2835 7 + select PM_GENERIC_DOMAINS if PM 8 + select RESET_CONTROLLER 9 + help 10 + This enables support for the BCM2835 power domains and reset 11 + controller. Any usage of power domains by the Raspberry Pi 12 + firmware means that Linux usage of the same power domain 13 + must be accessed using the RASPBERRYPI_POWER driver 14 + 3 15 config RASPBERRYPI_POWER 4 16 bool "Raspberry Pi power domain driver" 5 17 depends on ARCH_BCM2835 || (COMPILE_TEST && OF)
+1
drivers/soc/bcm/Makefile
··· 1 + obj-$(CONFIG_BCM2835_POWER) += bcm2835-power.o 1 2 obj-$(CONFIG_RASPBERRYPI_POWER) += raspberrypi-power.o 2 3 obj-$(CONFIG_SOC_BRCMSTB) += brcmstb/
+661
drivers/soc/bcm/bcm2835-power.c
··· 1 + // SPDX-License-Identifier: GPL-2.0+ 2 + /* 3 + * Power domain driver for Broadcom BCM2835 4 + * 5 + * Copyright (C) 2018 Broadcom 6 + */ 7 + 8 + #include <dt-bindings/soc/bcm2835-pm.h> 9 + #include <linux/clk.h> 10 + #include <linux/delay.h> 11 + #include <linux/io.h> 12 + #include <linux/mfd/bcm2835-pm.h> 13 + #include <linux/module.h> 14 + #include <linux/platform_device.h> 15 + #include <linux/pm_domain.h> 16 + #include <linux/reset-controller.h> 17 + #include <linux/types.h> 18 + 19 + #define PM_GNRIC 0x00 20 + #define PM_AUDIO 0x04 21 + #define PM_STATUS 0x18 22 + #define PM_RSTC 0x1c 23 + #define PM_RSTS 0x20 24 + #define PM_WDOG 0x24 25 + #define PM_PADS0 0x28 26 + #define PM_PADS2 0x2c 27 + #define PM_PADS3 0x30 28 + #define PM_PADS4 0x34 29 + #define PM_PADS5 0x38 30 + #define PM_PADS6 0x3c 31 + #define PM_CAM0 0x44 32 + #define PM_CAM0_LDOHPEN BIT(2) 33 + #define PM_CAM0_LDOLPEN BIT(1) 34 + #define PM_CAM0_CTRLEN BIT(0) 35 + 36 + #define PM_CAM1 0x48 37 + #define PM_CAM1_LDOHPEN BIT(2) 38 + #define PM_CAM1_LDOLPEN BIT(1) 39 + #define PM_CAM1_CTRLEN BIT(0) 40 + 41 + #define PM_CCP2TX 0x4c 42 + #define PM_CCP2TX_LDOEN BIT(1) 43 + #define PM_CCP2TX_CTRLEN BIT(0) 44 + 45 + #define PM_DSI0 0x50 46 + #define PM_DSI0_LDOHPEN BIT(2) 47 + #define PM_DSI0_LDOLPEN BIT(1) 48 + #define PM_DSI0_CTRLEN BIT(0) 49 + 50 + #define PM_DSI1 0x54 51 + #define PM_DSI1_LDOHPEN BIT(2) 52 + #define PM_DSI1_LDOLPEN BIT(1) 53 + #define PM_DSI1_CTRLEN BIT(0) 54 + 55 + #define PM_HDMI 0x58 56 + #define PM_HDMI_RSTDR BIT(19) 57 + #define PM_HDMI_LDOPD BIT(1) 58 + #define PM_HDMI_CTRLEN BIT(0) 59 + 60 + #define PM_USB 0x5c 61 + /* The power gates must be enabled with this bit before enabling the LDO in the 62 + * USB block. 
63 + */ 64 + #define PM_USB_CTRLEN BIT(0) 65 + 66 + #define PM_PXLDO 0x60 67 + #define PM_PXBG 0x64 68 + #define PM_DFT 0x68 69 + #define PM_SMPS 0x6c 70 + #define PM_XOSC 0x70 71 + #define PM_SPAREW 0x74 72 + #define PM_SPARER 0x78 73 + #define PM_AVS_RSTDR 0x7c 74 + #define PM_AVS_STAT 0x80 75 + #define PM_AVS_EVENT 0x84 76 + #define PM_AVS_INTEN 0x88 77 + #define PM_DUMMY 0xfc 78 + 79 + #define PM_IMAGE 0x108 80 + #define PM_GRAFX 0x10c 81 + #define PM_PROC 0x110 82 + #define PM_ENAB BIT(12) 83 + #define PM_ISPRSTN BIT(8) 84 + #define PM_H264RSTN BIT(7) 85 + #define PM_PERIRSTN BIT(6) 86 + #define PM_V3DRSTN BIT(6) 87 + #define PM_ISFUNC BIT(5) 88 + #define PM_MRDONE BIT(4) 89 + #define PM_MEMREP BIT(3) 90 + #define PM_ISPOW BIT(2) 91 + #define PM_POWOK BIT(1) 92 + #define PM_POWUP BIT(0) 93 + #define PM_INRUSH_SHIFT 13 94 + #define PM_INRUSH_3_5_MA 0 95 + #define PM_INRUSH_5_MA 1 96 + #define PM_INRUSH_10_MA 2 97 + #define PM_INRUSH_20_MA 3 98 + #define PM_INRUSH_MASK (3 << PM_INRUSH_SHIFT) 99 + 100 + #define PM_PASSWORD 0x5a000000 101 + 102 + #define PM_WDOG_TIME_SET 0x000fffff 103 + #define PM_RSTC_WRCFG_CLR 0xffffffcf 104 + #define PM_RSTS_HADWRH_SET 0x00000040 105 + #define PM_RSTC_WRCFG_SET 0x00000030 106 + #define PM_RSTC_WRCFG_FULL_RESET 0x00000020 107 + #define PM_RSTC_RESET 0x00000102 108 + 109 + #define PM_READ(reg) readl(power->base + (reg)) 110 + #define PM_WRITE(reg, val) writel(PM_PASSWORD | (val), power->base + (reg)) 111 + 112 + #define ASB_BRDG_VERSION 0x00 113 + #define ASB_CPR_CTRL 0x04 114 + 115 + #define ASB_V3D_S_CTRL 0x08 116 + #define ASB_V3D_M_CTRL 0x0c 117 + #define ASB_ISP_S_CTRL 0x10 118 + #define ASB_ISP_M_CTRL 0x14 119 + #define ASB_H264_S_CTRL 0x18 120 + #define ASB_H264_M_CTRL 0x1c 121 + 122 + #define ASB_REQ_STOP BIT(0) 123 + #define ASB_ACK BIT(1) 124 + #define ASB_EMPTY BIT(2) 125 + #define ASB_FULL BIT(3) 126 + 127 + #define ASB_AXI_BRDG_ID 0x20 128 + 129 + #define ASB_READ(reg) readl(power->asb + (reg)) 130 + #define 
ASB_WRITE(reg, val) writel(PM_PASSWORD | (val), power->asb + (reg)) 131 + 132 + struct bcm2835_power_domain { 133 + struct generic_pm_domain base; 134 + struct bcm2835_power *power; 135 + u32 domain; 136 + struct clk *clk; 137 + }; 138 + 139 + struct bcm2835_power { 140 + struct device *dev; 141 + /* PM registers. */ 142 + void __iomem *base; 143 + /* AXI Async bridge registers. */ 144 + void __iomem *asb; 145 + 146 + struct genpd_onecell_data pd_xlate; 147 + struct bcm2835_power_domain domains[BCM2835_POWER_DOMAIN_COUNT]; 148 + struct reset_controller_dev reset; 149 + }; 150 + 151 + static int bcm2835_asb_enable(struct bcm2835_power *power, u32 reg) 152 + { 153 + u64 start = ktime_get_ns(); 154 + 155 + /* Enable the module's async AXI bridges. */ 156 + ASB_WRITE(reg, ASB_READ(reg) & ~ASB_REQ_STOP); 157 + while (ASB_READ(reg) & ASB_ACK) { 158 + cpu_relax(); 159 + if (ktime_get_ns() - start >= 1000) 160 + return -ETIMEDOUT; 161 + } 162 + 163 + return 0; 164 + } 165 + 166 + static int bcm2835_asb_disable(struct bcm2835_power *power, u32 reg) 167 + { 168 + u64 start = ktime_get_ns(); 169 + 170 + /* Disable the module's async AXI bridges. */ 171 + ASB_WRITE(reg, ASB_READ(reg) | ASB_REQ_STOP); 172 + while (!(ASB_READ(reg) & ASB_ACK)) { 173 + cpu_relax(); 174 + if (ktime_get_ns() - start >= 1000) 175 + return -ETIMEDOUT; 176 + } 177 + 178 + return 0; 179 + } 180 + 181 + static int bcm2835_power_power_off(struct bcm2835_power_domain *pd, u32 pm_reg) 182 + { 183 + struct bcm2835_power *power = pd->power; 184 + 185 + /* Enable functional isolation */ 186 + PM_WRITE(pm_reg, PM_READ(pm_reg) & ~PM_ISFUNC); 187 + 188 + /* Enable electrical isolation */ 189 + PM_WRITE(pm_reg, PM_READ(pm_reg) & ~PM_ISPOW); 190 + 191 + /* Open the power switches.
*/ 192 + PM_WRITE(pm_reg, PM_READ(pm_reg) & ~PM_POWUP); 193 + 194 + return 0; 195 + } 196 + 197 + static int bcm2835_power_power_on(struct bcm2835_power_domain *pd, u32 pm_reg) 198 + { 199 + struct bcm2835_power *power = pd->power; 200 + struct device *dev = power->dev; 201 + u64 start; 202 + int ret; 203 + int inrush; 204 + bool powok; 205 + 206 + /* If it was already powered on by the fw, leave it that way. */ 207 + if (PM_READ(pm_reg) & PM_POWUP) 208 + return 0; 209 + 210 + /* Enable power. Allowing too much current at once may result 211 + * in POWOK never getting set, so start low and ramp it up as 212 + * necessary to succeed. 213 + */ 214 + powok = false; 215 + for (inrush = PM_INRUSH_3_5_MA; inrush <= PM_INRUSH_20_MA; inrush++) { 216 + PM_WRITE(pm_reg, 217 + (PM_READ(pm_reg) & ~PM_INRUSH_MASK) | 218 + (inrush << PM_INRUSH_SHIFT) | 219 + PM_POWUP); 220 + 221 + start = ktime_get_ns(); 222 + while (!(powok = !!(PM_READ(pm_reg) & PM_POWOK))) { 223 + cpu_relax(); 224 + if (ktime_get_ns() - start >= 3000) 225 + break; 226 + } 227 + } 228 + if (!powok) { 229 + dev_err(dev, "Timeout waiting for %s power OK\n", 230 + pd->base.name); 231 + ret = -ETIMEDOUT; 232 + goto err_disable_powup; 233 + } 234 + 235 + /* Disable electrical isolation */ 236 + PM_WRITE(pm_reg, PM_READ(pm_reg) | PM_ISPOW); 237 + 238 + /* Repair memory */ 239 + PM_WRITE(pm_reg, PM_READ(pm_reg) | PM_MEMREP); 240 + start = ktime_get_ns(); 241 + while (!(PM_READ(pm_reg) & PM_MRDONE)) { 242 + cpu_relax(); 243 + if (ktime_get_ns() - start >= 1000) { 244 + dev_err(dev, "Timeout waiting for %s memory repair\n", 245 + pd->base.name); 246 + ret = -ETIMEDOUT; 247 + goto err_disable_ispow; 248 + } 249 + } 250 + 251 + /* Disable functional isolation */ 252 + PM_WRITE(pm_reg, PM_READ(pm_reg) | PM_ISFUNC); 253 + 254 + return 0; 255 + 256 + err_disable_ispow: 257 + PM_WRITE(pm_reg, PM_READ(pm_reg) & ~PM_ISPOW); 258 + err_disable_powup: 259 + PM_WRITE(pm_reg, PM_READ(pm_reg) & ~(PM_POWUP | PM_INRUSH_MASK)); 260 + 
return ret; 261 + } 262 + 263 + static int bcm2835_asb_power_on(struct bcm2835_power_domain *pd, 264 + u32 pm_reg, 265 + u32 asb_m_reg, 266 + u32 asb_s_reg, 267 + u32 reset_flags) 268 + { 269 + struct bcm2835_power *power = pd->power; 270 + int ret; 271 + 272 + ret = clk_prepare_enable(pd->clk); 273 + if (ret) { 274 + dev_err(power->dev, "Failed to enable clock for %s\n", 275 + pd->base.name); 276 + return ret; 277 + } 278 + 279 + /* Wait 32 clocks for reset to propagate, 1 us will be enough */ 280 + udelay(1); 281 + 282 + clk_disable_unprepare(pd->clk); 283 + 284 + /* Deassert the resets. */ 285 + PM_WRITE(pm_reg, PM_READ(pm_reg) | reset_flags); 286 + 287 + ret = clk_prepare_enable(pd->clk); 288 + if (ret) { 289 + dev_err(power->dev, "Failed to enable clock for %s\n", 290 + pd->base.name); 291 + goto err_enable_resets; 292 + } 293 + 294 + ret = bcm2835_asb_enable(power, asb_m_reg); 295 + if (ret) { 296 + dev_err(power->dev, "Failed to enable ASB master for %s\n", 297 + pd->base.name); 298 + goto err_disable_clk; 299 + } 300 + ret = bcm2835_asb_enable(power, asb_s_reg); 301 + if (ret) { 302 + dev_err(power->dev, "Failed to enable ASB slave for %s\n", 303 + pd->base.name); 304 + goto err_disable_asb_master; 305 + } 306 + 307 + return 0; 308 + 309 + err_disable_asb_master: 310 + bcm2835_asb_disable(power, asb_m_reg); 311 + err_disable_clk: 312 + clk_disable_unprepare(pd->clk); 313 + err_enable_resets: 314 + PM_WRITE(pm_reg, PM_READ(pm_reg) & ~reset_flags); 315 + return ret; 316 + } 317 + 318 + static int bcm2835_asb_power_off(struct bcm2835_power_domain *pd, 319 + u32 pm_reg, 320 + u32 asb_m_reg, 321 + u32 asb_s_reg, 322 + u32 reset_flags) 323 + { 324 + struct bcm2835_power *power = pd->power; 325 + int ret; 326 + 327 + ret = bcm2835_asb_disable(power, asb_s_reg); 328 + if (ret) { 329 + dev_warn(power->dev, "Failed to disable ASB slave for %s\n", 330 + pd->base.name); 331 + return ret; 332 + } 333 + ret = bcm2835_asb_disable(power, asb_m_reg); 334 + if (ret) { 335 + 
dev_warn(power->dev, "Failed to disable ASB master for %s\n", 336 + pd->base.name); 337 + bcm2835_asb_enable(power, asb_s_reg); 338 + return ret; 339 + } 340 + 341 + clk_disable_unprepare(pd->clk); 342 + 343 + /* Assert the resets. */ 344 + PM_WRITE(pm_reg, PM_READ(pm_reg) & ~reset_flags); 345 + 346 + return 0; 347 + } 348 + 349 + static int bcm2835_power_pd_power_on(struct generic_pm_domain *domain) 350 + { 351 + struct bcm2835_power_domain *pd = 352 + container_of(domain, struct bcm2835_power_domain, base); 353 + struct bcm2835_power *power = pd->power; 354 + 355 + switch (pd->domain) { 356 + case BCM2835_POWER_DOMAIN_GRAFX: 357 + return bcm2835_power_power_on(pd, PM_GRAFX); 358 + 359 + case BCM2835_POWER_DOMAIN_GRAFX_V3D: 360 + return bcm2835_asb_power_on(pd, PM_GRAFX, 361 + ASB_V3D_M_CTRL, ASB_V3D_S_CTRL, 362 + PM_V3DRSTN); 363 + 364 + case BCM2835_POWER_DOMAIN_IMAGE: 365 + return bcm2835_power_power_on(pd, PM_IMAGE); 366 + 367 + case BCM2835_POWER_DOMAIN_IMAGE_PERI: 368 + return bcm2835_asb_power_on(pd, PM_IMAGE, 369 + 0, 0, 370 + PM_PERIRSTN); 371 + 372 + case BCM2835_POWER_DOMAIN_IMAGE_ISP: 373 + return bcm2835_asb_power_on(pd, PM_IMAGE, 374 + ASB_ISP_M_CTRL, ASB_ISP_S_CTRL, 375 + PM_ISPRSTN); 376 + 377 + case BCM2835_POWER_DOMAIN_IMAGE_H264: 378 + return bcm2835_asb_power_on(pd, PM_IMAGE, 379 + ASB_H264_M_CTRL, ASB_H264_S_CTRL, 380 + PM_H264RSTN); 381 + 382 + case BCM2835_POWER_DOMAIN_USB: 383 + PM_WRITE(PM_USB, PM_USB_CTRLEN); 384 + return 0; 385 + 386 + case BCM2835_POWER_DOMAIN_DSI0: 387 + PM_WRITE(PM_DSI0, PM_DSI0_CTRLEN); 388 + PM_WRITE(PM_DSI0, PM_DSI0_CTRLEN | PM_DSI0_LDOHPEN); 389 + return 0; 390 + 391 + case BCM2835_POWER_DOMAIN_DSI1: 392 + PM_WRITE(PM_DSI1, PM_DSI1_CTRLEN); 393 + PM_WRITE(PM_DSI1, PM_DSI1_CTRLEN | PM_DSI1_LDOHPEN); 394 + return 0; 395 + 396 + case BCM2835_POWER_DOMAIN_CCP2TX: 397 + PM_WRITE(PM_CCP2TX, PM_CCP2TX_CTRLEN); 398 + PM_WRITE(PM_CCP2TX, PM_CCP2TX_CTRLEN | PM_CCP2TX_LDOEN); 399 + return 0; 400 + 401 + case 
BCM2835_POWER_DOMAIN_HDMI: 402 + PM_WRITE(PM_HDMI, PM_READ(PM_HDMI) | PM_HDMI_RSTDR); 403 + PM_WRITE(PM_HDMI, PM_READ(PM_HDMI) | PM_HDMI_CTRLEN); 404 + PM_WRITE(PM_HDMI, PM_READ(PM_HDMI) & ~PM_HDMI_LDOPD); 405 + usleep_range(100, 200); 406 + PM_WRITE(PM_HDMI, PM_READ(PM_HDMI) & ~PM_HDMI_RSTDR); 407 + return 0; 408 + 409 + default: 410 + dev_err(power->dev, "Invalid domain %d\n", pd->domain); 411 + return -EINVAL; 412 + } 413 + } 414 + 415 + static int bcm2835_power_pd_power_off(struct generic_pm_domain *domain) 416 + { 417 + struct bcm2835_power_domain *pd = 418 + container_of(domain, struct bcm2835_power_domain, base); 419 + struct bcm2835_power *power = pd->power; 420 + 421 + switch (pd->domain) { 422 + case BCM2835_POWER_DOMAIN_GRAFX: 423 + return bcm2835_power_power_off(pd, PM_GRAFX); 424 + 425 + case BCM2835_POWER_DOMAIN_GRAFX_V3D: 426 + return bcm2835_asb_power_off(pd, PM_GRAFX, 427 + ASB_V3D_M_CTRL, ASB_V3D_S_CTRL, 428 + PM_V3DRSTN); 429 + 430 + case BCM2835_POWER_DOMAIN_IMAGE: 431 + return bcm2835_power_power_off(pd, PM_IMAGE); 432 + 433 + case BCM2835_POWER_DOMAIN_IMAGE_PERI: 434 + return bcm2835_asb_power_off(pd, PM_IMAGE, 435 + 0, 0, 436 + PM_PERIRSTN); 437 + 438 + case BCM2835_POWER_DOMAIN_IMAGE_ISP: 439 + return bcm2835_asb_power_off(pd, PM_IMAGE, 440 + ASB_ISP_M_CTRL, ASB_ISP_S_CTRL, 441 + PM_ISPRSTN); 442 + 443 + case BCM2835_POWER_DOMAIN_IMAGE_H264: 444 + return bcm2835_asb_power_off(pd, PM_IMAGE, 445 + ASB_H264_M_CTRL, ASB_H264_S_CTRL, 446 + PM_H264RSTN); 447 + 448 + case BCM2835_POWER_DOMAIN_USB: 449 + PM_WRITE(PM_USB, 0); 450 + return 0; 451 + 452 + case BCM2835_POWER_DOMAIN_DSI0: 453 + PM_WRITE(PM_DSI0, PM_DSI0_CTRLEN); 454 + PM_WRITE(PM_DSI0, 0); 455 + return 0; 456 + 457 + case BCM2835_POWER_DOMAIN_DSI1: 458 + PM_WRITE(PM_DSI1, PM_DSI1_CTRLEN); 459 + PM_WRITE(PM_DSI1, 0); 460 + return 0; 461 + 462 + case BCM2835_POWER_DOMAIN_CCP2TX: 463 + PM_WRITE(PM_CCP2TX, PM_CCP2TX_CTRLEN); 464 + PM_WRITE(PM_CCP2TX, 0); 465 + return 0; 466 + 467 + case 
BCM2835_POWER_DOMAIN_HDMI: 468 + PM_WRITE(PM_HDMI, PM_READ(PM_HDMI) | PM_HDMI_LDOPD); 469 + PM_WRITE(PM_HDMI, PM_READ(PM_HDMI) & ~PM_HDMI_CTRLEN); 470 + return 0; 471 + 472 + default: 473 + dev_err(power->dev, "Invalid domain %d\n", pd->domain); 474 + return -EINVAL; 475 + } 476 + } 477 + 478 + static void 479 + bcm2835_init_power_domain(struct bcm2835_power *power, 480 + int pd_xlate_index, const char *name) 481 + { 482 + struct device *dev = power->dev; 483 + struct bcm2835_power_domain *dom = &power->domains[pd_xlate_index]; 484 + 485 + dom->clk = devm_clk_get(dev->parent, name); 486 + 487 + dom->base.name = name; 488 + dom->base.power_on = bcm2835_power_pd_power_on; 489 + dom->base.power_off = bcm2835_power_pd_power_off; 490 + 491 + dom->domain = pd_xlate_index; 492 + dom->power = power; 493 + 494 + /* XXX: on/off at boot? */ 495 + pm_genpd_init(&dom->base, NULL, true); 496 + 497 + power->pd_xlate.domains[pd_xlate_index] = &dom->base; 498 + } 499 + 500 + /** bcm2835_reset_reset - Resets a block that has a reset line in the 501 + * PM block. 502 + * 503 + * The consumer of the reset controller must have the power domain up 504 + * -- there's no reset ability with the power domain down. To reset 505 + * the sub-block, we just disable its access to memory through the 506 + * ASB, reset, and re-enable. 
507 + */ 508 + static int bcm2835_reset_reset(struct reset_controller_dev *rcdev, 509 + unsigned long id) 510 + { 511 + struct bcm2835_power *power = container_of(rcdev, struct bcm2835_power, 512 + reset); 513 + struct bcm2835_power_domain *pd; 514 + int ret; 515 + 516 + switch (id) { 517 + case BCM2835_RESET_V3D: 518 + pd = &power->domains[BCM2835_POWER_DOMAIN_GRAFX_V3D]; 519 + break; 520 + case BCM2835_RESET_H264: 521 + pd = &power->domains[BCM2835_POWER_DOMAIN_IMAGE_H264]; 522 + break; 523 + case BCM2835_RESET_ISP: 524 + pd = &power->domains[BCM2835_POWER_DOMAIN_IMAGE_ISP]; 525 + break; 526 + default: 527 + dev_err(power->dev, "Bad reset id %ld\n", id); 528 + return -EINVAL; 529 + } 530 + 531 + ret = bcm2835_power_pd_power_off(&pd->base); 532 + if (ret) 533 + return ret; 534 + 535 + return bcm2835_power_pd_power_on(&pd->base); 536 + } 537 + 538 + static int bcm2835_reset_status(struct reset_controller_dev *rcdev, 539 + unsigned long id) 540 + { 541 + struct bcm2835_power *power = container_of(rcdev, struct bcm2835_power, 542 + reset); 543 + 544 + switch (id) { 545 + case BCM2835_RESET_V3D: 546 + return !PM_READ(PM_GRAFX & PM_V3DRSTN); 547 + case BCM2835_RESET_H264: 548 + return !PM_READ(PM_IMAGE & PM_H264RSTN); 549 + case BCM2835_RESET_ISP: 550 + return !PM_READ(PM_IMAGE & PM_ISPRSTN); 551 + default: 552 + return -EINVAL; 553 + } 554 + } 555 + 556 + static const struct reset_control_ops bcm2835_reset_ops = { 557 + .reset = bcm2835_reset_reset, 558 + .status = bcm2835_reset_status, 559 + }; 560 + 561 + static const char *const power_domain_names[] = { 562 + [BCM2835_POWER_DOMAIN_GRAFX] = "grafx", 563 + [BCM2835_POWER_DOMAIN_GRAFX_V3D] = "v3d", 564 + 565 + [BCM2835_POWER_DOMAIN_IMAGE] = "image", 566 + [BCM2835_POWER_DOMAIN_IMAGE_PERI] = "peri_image", 567 + [BCM2835_POWER_DOMAIN_IMAGE_H264] = "h264", 568 + [BCM2835_POWER_DOMAIN_IMAGE_ISP] = "isp", 569 + 570 + [BCM2835_POWER_DOMAIN_USB] = "usb", 571 + [BCM2835_POWER_DOMAIN_DSI0] = "dsi0", 572 + 
[BCM2835_POWER_DOMAIN_DSI1] = "dsi1", 573 + [BCM2835_POWER_DOMAIN_CAM0] = "cam0", 574 + [BCM2835_POWER_DOMAIN_CAM1] = "cam1", 575 + [BCM2835_POWER_DOMAIN_CCP2TX] = "ccp2tx", 576 + [BCM2835_POWER_DOMAIN_HDMI] = "hdmi", 577 + }; 578 + 579 + static int bcm2835_power_probe(struct platform_device *pdev) 580 + { 581 + struct bcm2835_pm *pm = dev_get_drvdata(pdev->dev.parent); 582 + struct device *dev = &pdev->dev; 583 + struct bcm2835_power *power; 584 + static const struct { 585 + int parent, child; 586 + } domain_deps[] = { 587 + { BCM2835_POWER_DOMAIN_GRAFX, BCM2835_POWER_DOMAIN_GRAFX_V3D }, 588 + { BCM2835_POWER_DOMAIN_IMAGE, BCM2835_POWER_DOMAIN_IMAGE_PERI }, 589 + { BCM2835_POWER_DOMAIN_IMAGE, BCM2835_POWER_DOMAIN_IMAGE_H264 }, 590 + { BCM2835_POWER_DOMAIN_IMAGE, BCM2835_POWER_DOMAIN_IMAGE_ISP }, 591 + { BCM2835_POWER_DOMAIN_IMAGE_PERI, BCM2835_POWER_DOMAIN_USB }, 592 + { BCM2835_POWER_DOMAIN_IMAGE_PERI, BCM2835_POWER_DOMAIN_CAM0 }, 593 + { BCM2835_POWER_DOMAIN_IMAGE_PERI, BCM2835_POWER_DOMAIN_CAM1 }, 594 + }; 595 + int ret, i; 596 + u32 id; 597 + 598 + power = devm_kzalloc(dev, sizeof(*power), GFP_KERNEL); 599 + if (!power) 600 + return -ENOMEM; 601 + platform_set_drvdata(pdev, power); 602 + 603 + power->dev = dev; 604 + power->base = pm->base; 605 + power->asb = pm->asb; 606 + 607 + id = ASB_READ(ASB_AXI_BRDG_ID); 608 + if (id != 0x62726467 /* "BRDG" */) { 609 + dev_err(dev, "ASB register ID returned 0x%08x\n", id); 610 + return -ENODEV; 611 + } 612 + 613 + power->pd_xlate.domains = devm_kcalloc(dev, 614 + ARRAY_SIZE(power_domain_names), 615 + sizeof(*power->pd_xlate.domains), 616 + GFP_KERNEL); 617 + if (!power->pd_xlate.domains) 618 + return -ENOMEM; 619 + 620 + power->pd_xlate.num_domains = ARRAY_SIZE(power_domain_names); 621 + 622 + for (i = 0; i < ARRAY_SIZE(power_domain_names); i++) 623 + bcm2835_init_power_domain(power, i, power_domain_names[i]); 624 + 625 + for (i = 0; i < ARRAY_SIZE(domain_deps); i++) { 626 + 
pm_genpd_add_subdomain(&power->domains[domain_deps[i].parent].base, 627 + &power->domains[domain_deps[i].child].base); 628 + } 629 + 630 + power->reset.owner = THIS_MODULE; 631 + power->reset.nr_resets = BCM2835_RESET_COUNT; 632 + power->reset.ops = &bcm2835_reset_ops; 633 + power->reset.of_node = dev->parent->of_node; 634 + 635 + ret = devm_reset_controller_register(dev, &power->reset); 636 + if (ret) 637 + return ret; 638 + 639 + of_genpd_add_provider_onecell(dev->parent->of_node, &power->pd_xlate); 640 + 641 + dev_info(dev, "Broadcom BCM2835 power domains driver"); 642 + return 0; 643 + } 644 + 645 + static int bcm2835_power_remove(struct platform_device *pdev) 646 + { 647 + return 0; 648 + } 649 + 650 + static struct platform_driver bcm2835_power_driver = { 651 + .probe = bcm2835_power_probe, 652 + .remove = bcm2835_power_remove, 653 + .driver = { 654 + .name = "bcm2835-power", 655 + }, 656 + }; 657 + module_platform_driver(bcm2835_power_driver); 658 + 659 + MODULE_AUTHOR("Eric Anholt <eric@anholt.net>"); 660 + MODULE_DESCRIPTION("Driver for Broadcom BCM2835 PM power domains and reset"); 661 + MODULE_LICENSE("GPL");
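bcm2835_asb_enable(), bcm2835_asb_disable() and the power-on path in the driver above all follow the same shape: write a request bit, then busy-poll an acknowledge bit with a deadline. A userspace sketch of that pattern against a fake register — `fake_reg`, the poll budget (standing in for the driver's ktime-based 1 us deadline), and the helper name are ours, not kernel API:

```c
#include <assert.h>
#include <stdint.h>

#define REQ_STOP (1u << 0)
#define ACK      (1u << 1)

/* Fake register with a programmable number of polls before ACK clears. */
struct fake_reg {
	uint32_t val;
	int polls_until_ack_clears;	/* < 0 means ACK never clears */
};

static uint32_t reg_read(struct fake_reg *r)
{
	if (r->polls_until_ack_clears == 0)
		r->val &= ~ACK;
	else if (r->polls_until_ack_clears > 0)
		r->polls_until_ack_clears--;
	return r->val;
}

/*
 * Mirror of the driver's shape: clear the stop request, then poll until
 * the bridge drops ACK or the budget runs out.
 */
static int asb_enable(struct fake_reg *r, int budget)
{
	r->val &= ~REQ_STOP;
	while (reg_read(r) & ACK) {
		if (budget-- <= 0)
			return -1;	/* -ETIMEDOUT in the kernel */
	}
	return 0;
}
```

The disable path is the mirror image: set REQ_STOP, then wait for ACK to become set instead of cleared.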
+1
drivers/soc/fsl/Kconfig
··· 22 22 config FSL_MC_DPIO 23 23 tristate "QorIQ DPAA2 DPIO driver" 24 24 depends on FSL_MC_BUS 25 + select SOC_BUS 25 26 help 26 27 Driver for the DPAA2 DPIO object. A DPIO provides queue and 27 28 buffer management facilities for software to interact with
+5
drivers/soc/fsl/dpio/dpio-cmd.h
··· 26 26 #define DPIO_CMDID_DISABLE DPIO_CMD(0x003) 27 27 #define DPIO_CMDID_GET_ATTR DPIO_CMD(0x004) 28 28 #define DPIO_CMDID_RESET DPIO_CMD(0x005) 29 + #define DPIO_CMDID_SET_STASHING_DEST DPIO_CMD(0x120) 29 30 30 31 struct dpio_cmd_open { 31 32 __le32 dpio_id; ··· 46 45 __le64 qbman_portal_ci_addr; 47 46 /* cmd word 3 */ 48 47 __le32 qbman_version; 48 + }; 49 + 50 + struct dpio_stashing_dest { 51 + u8 sdest; 49 52 }; 50 53 51 54 #endif /* _FSL_DPIO_CMD_H */
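The new `struct dpio_stashing_dest` carries a single byte: dpio_set_stashing_destination() (added later in this series, in dpio.c) casts the command's parameter area to this struct and stores `sdest`. A userspace sketch of that layout trick with a simplified command struct — `mc_command` here is our stand-in, not the real `struct fsl_mc_command`:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

struct dpio_stashing_dest {
	uint8_t sdest;
};

/* Simplified stand-in for struct fsl_mc_command: header + param words. */
struct mc_command {
	uint64_t header;
	uint64_t params[7];
};

/* Overlay the payload struct on the parameter area, as dpio.c does. */
static void set_stashing_dest(struct mc_command *cmd, uint8_t sdest)
{
	struct dpio_stashing_dest *d =
		(struct dpio_stashing_dest *)cmd->params;

	d->sdest = sdest;
}
```

Because the payload is a single byte written through a struct overlay, it lands in the first byte of the first parameter word regardless of host endianness.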
+53 -1
drivers/soc/fsl/dpio/dpio-driver.c
··· 14 14 #include <linux/dma-mapping.h> 15 15 #include <linux/delay.h> 16 16 #include <linux/io.h> 17 + #include <linux/sys_soc.h> 17 18 18 19 #include <linux/fsl/mc.h> 19 20 #include <soc/fsl/dpaa2-io.h> ··· 32 31 }; 33 32 34 33 static cpumask_var_t cpus_unused_mask; 34 + 35 + static const struct soc_device_attribute ls1088a_soc[] = { 36 + {.family = "QorIQ LS1088A"}, 37 + { /* sentinel */ } 38 + }; 39 + 40 + static const struct soc_device_attribute ls2080a_soc[] = { 41 + {.family = "QorIQ LS2080A"}, 42 + { /* sentinel */ } 43 + }; 44 + 45 + static const struct soc_device_attribute ls2088a_soc[] = { 46 + {.family = "QorIQ LS2088A"}, 47 + { /* sentinel */ } 48 + }; 49 + 50 + static const struct soc_device_attribute lx2160a_soc[] = { 51 + {.family = "QorIQ LX2160A"}, 52 + { /* sentinel */ } 53 + }; 54 + 55 + static int dpaa2_dpio_get_cluster_sdest(struct fsl_mc_device *dpio_dev, int cpu) 56 + { 57 + int cluster_base, cluster_size; 58 + 59 + if (soc_device_match(ls1088a_soc)) { 60 + cluster_base = 2; 61 + cluster_size = 4; 62 + } else if (soc_device_match(ls2080a_soc) || 63 + soc_device_match(ls2088a_soc) || 64 + soc_device_match(lx2160a_soc)) { 65 + cluster_base = 0; 66 + cluster_size = 2; 67 + } else { 68 + dev_err(&dpio_dev->dev, "unknown SoC version\n"); 69 + return -1; 70 + } 71 + 72 + return cluster_base + cpu / cluster_size; 73 + } 35 74 36 75 static irqreturn_t dpio_irq_handler(int irq_num, void *arg) 37 76 { ··· 130 89 int err = -ENOMEM; 131 90 struct device *dev = &dpio_dev->dev; 132 91 int possible_next_cpu; 92 + int sdest; 133 93 134 94 priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL); 135 95 if (!priv) ··· 186 144 } 187 145 desc.cpu = possible_next_cpu; 188 146 cpumask_clear_cpu(possible_next_cpu, cpus_unused_mask); 147 + 148 + sdest = dpaa2_dpio_get_cluster_sdest(dpio_dev, desc.cpu); 149 + if (sdest >= 0) { 150 + err = dpio_set_stashing_destination(dpio_dev->mc_io, 0, 151 + dpio_dev->mc_handle, 152 + sdest); 153 + if (err) 154 + dev_err(dev, 
"dpio_set_stashing_destination failed for cpu%d\n", 155 + desc.cpu); 156 + } 189 157 190 158 /* 191 159 * Set the CENA regs to be the cache inhibited area of the portal to ··· 272 220 273 221 dev = &dpio_dev->dev; 274 222 priv = dev_get_drvdata(dev); 223 + cpu = dpaa2_io_get_cpu(priv->io); 275 224 276 225 dpaa2_io_down(priv->io); 277 226 278 227 dpio_teardown_irqs(dpio_dev); 279 228 280 - cpu = dpaa2_io_get_cpu(priv->io); 281 229 cpumask_set_cpu(cpu, cpus_unused_mask); 282 230 283 231 err = dpio_open(dpio_dev->mc_io, 0, dpio_dev->obj_desc.id,
+3 -2
drivers/soc/fsl/dpio/dpio-service.c
··· 473 473 * Return 0 for success, and negative error code for failure. 474 474 */ 475 475 int dpaa2_io_service_release(struct dpaa2_io *d, 476 - u32 bpid, 476 + u16 bpid, 477 477 const u64 *buffers, 478 478 unsigned int num_buffers) 479 479 { ··· 502 502 * Eg. if the buffer pool is empty, this will return zero. 503 503 */ 504 504 int dpaa2_io_service_acquire(struct dpaa2_io *d, 505 - u32 bpid, 505 + u16 bpid, 506 506 u64 *buffers, 507 507 unsigned int num_buffers) 508 508 { ··· 630 630 if (!(dpaa2_dq_flags(ret) & DPAA2_DQ_STAT_VALIDFRAME)) 631 631 ret = NULL; 632 632 } else { 633 + prefetch(&s->vaddr[s->idx]); 633 634 *is_last = 0; 634 635 } 635 636
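The stashing destination chosen in dpio-driver.c above is purely a function of the CPU number and the SoC's cluster layout: `sdest = cluster_base + cpu / cluster_size`. A standalone sketch of that mapping — the table values are copied from the hunk, the helper name and string-keyed lookup are ours (the driver matches via soc_device_match()):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Per-SoC cluster layout, as in dpaa2_dpio_get_cluster_sdest(). */
struct soc_cluster {
	const char *family;
	int cluster_base;
	int cluster_size;
};

static const struct soc_cluster socs[] = {
	{ "QorIQ LS1088A", 2, 4 },
	{ "QorIQ LS2080A", 0, 2 },
	{ "QorIQ LS2088A", 0, 2 },
	{ "QorIQ LX2160A", 0, 2 },
};

/* sdest = cluster_base + cpu / cluster_size; -1 for unknown SoCs. */
static int cluster_sdest(const char *family, int cpu)
{
	for (size_t i = 0; i < sizeof(socs) / sizeof(socs[0]); i++)
		if (strcmp(socs[i].family, family) == 0)
			return socs[i].cluster_base +
			       cpu / socs[i].cluster_size;
	return -1;
}
```

So on LS1088A (clusters of four cores starting at destination 2), CPUs 0-3 stash to 2 and CPUs 4-7 to 3, which is what lets frame data land in the cache of the cluster servicing the portal.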
+16
drivers/soc/fsl/dpio/dpio.c
··· 166 166 return 0; 167 167 } 168 168 169 + int dpio_set_stashing_destination(struct fsl_mc_io *mc_io, 170 + u32 cmd_flags, 171 + u16 token, 172 + u8 sdest) 173 + { 174 + struct fsl_mc_command cmd = { 0 }; 175 + struct dpio_stashing_dest *dpio_cmd; 176 + 177 + cmd.header = mc_encode_cmd_header(DPIO_CMDID_SET_STASHING_DEST, 178 + cmd_flags, token); 179 + dpio_cmd = (struct dpio_stashing_dest *)cmd.params; 180 + dpio_cmd->sdest = sdest; 181 + 182 + return mc_send_command(mc_io, &cmd); 183 + } 184 + 169 185 /** 170 186 * dpio_get_api_version - Get Data Path I/O API version 171 187 * @mc_io: Pointer to MC portal's DPIO object
+5
drivers/soc/fsl/dpio/dpio.h
··· 75 75 u16 token, 76 76 struct dpio_attr *attr); 77 77 78 + int dpio_set_stashing_destination(struct fsl_mc_io *mc_io, 79 + u32 cmd_flags, 80 + u16 token, 81 + u8 dest); 82 + 78 83 int dpio_get_api_version(struct fsl_mc_io *mc_io, 79 84 u32 cmd_flags, 80 85 u16 *major_ver,
+3 -2
drivers/soc/fsl/dpio/qbman-portal.c
··· 169 169 3, /* RPM: Valid bit mode, RCR in array mode */ 170 170 2, /* DCM: Discrete consumption ack mode */ 171 171 3, /* EPM: Valid bit mode, EQCR in array mode */ 172 - 0, /* mem stashing drop enable == FALSE */ 172 + 1, /* mem stashing drop enable == TRUE */ 173 173 1, /* mem stashing priority == TRUE */ 174 - 0, /* mem stashing enable == FALSE */ 174 + 1, /* mem stashing enable == TRUE */ 175 175 1, /* dequeue stashing priority == TRUE */ 176 176 0, /* dequeue stashing enable == FALSE */ 177 177 0); /* EQCR_CI stashing priority == FALSE */ ··· 180 180 reg = qbman_read_register(p, QBMAN_CINH_SWP_CFG); 181 181 if (!reg) { 182 182 pr_err("qbman: the portal is not enabled!\n"); 183 + kfree(p); 183 184 return NULL; 184 185 } 185 186
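Besides flipping the stashing bits on, the qbman hunk above plugs a leak: when the read-back of QBMAN_CINH_SWP_CFG comes back zero (portal not enabled), the just-allocated portal struct must be freed before returning NULL. The shape of that error path in a standalone sketch — names are ours, not the qbman API:

```c
#include <assert.h>
#include <stdlib.h>

struct portal {
	unsigned int cfg;
};

/*
 * Init helper that checks a config read-back; zero means the portal is
 * not enabled. Freeing `p` on that path is the leak fix: before it, the
 * allocation escaped on the early return.
 */
static struct portal *portal_init(unsigned int cfg_readback)
{
	struct portal *p = malloc(sizeof(*p));

	if (!p)
		return NULL;
	p->cfg = cfg_readback;
	if (!p->cfg) {
		free(p);	/* was leaked before the fix */
		return NULL;
	}
	return p;
}
```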
+5 -5
drivers/soc/fsl/guts.c
··· 32 32 static struct guts *guts; 33 33 static struct soc_device_attribute soc_dev_attr; 34 34 static struct soc_device *soc_dev; 35 + static struct device_node *root; 35 36 36 37 37 38 /* SoC die attribute definition for QorIQ platform */ ··· 115 114 return NULL; 116 115 } 117 116 118 - u32 fsl_guts_get_svr(void) 117 + static u32 fsl_guts_get_svr(void) 119 118 { 120 119 u32 svr = 0; 121 120 ··· 129 128 130 129 return svr; 131 130 } 132 - EXPORT_SYMBOL(fsl_guts_get_svr); 133 131 134 132 static int fsl_guts_probe(struct platform_device *pdev) 135 133 { 136 - struct device_node *root, *np = pdev->dev.of_node; 134 + struct device_node *np = pdev->dev.of_node; 137 135 struct device *dev = &pdev->dev; 138 136 struct resource *res; 139 137 const struct fsl_soc_die_attr *soc_die; ··· 155 155 root = of_find_node_by_path("/"); 156 156 if (of_property_read_string(root, "model", &machine)) 157 157 of_property_read_string_index(root, "compatible", 0, &machine); 158 - of_node_put(root); 159 158 if (machine) 160 - soc_dev_attr.machine = devm_kstrdup(dev, machine, GFP_KERNEL); 159 + soc_dev_attr.machine = machine; 161 160 162 161 svr = fsl_guts_get_svr(); 163 162 soc_die = fsl_soc_die_match(svr, fsl_soc_die); ··· 191 192 static int fsl_guts_remove(struct platform_device *dev) 192 193 { 193 194 soc_device_unregister(soc_dev); 195 + of_node_put(root); 194 196 return 0; 195 197 } 196 198
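The guts.c changes above revolve around fsl_guts_get_svr() and the die table it is matched against: each entry pairs an SVR value with a mask, and a die matches when `(svr & mask) == entry.svr`. A sketch of that masked-compare lookup — the table entries here are illustrative, not guts.c's real list:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

struct soc_die_attr {
	const char *die;
	uint32_t svr;
	uint32_t mask;
};

/* Illustrative entries: the mask keeps die bits, drops revision bits. */
static const struct soc_die_attr dies[] = {
	{ "T4240",   0x82400000, 0xfff00000 },
	{ "LS1021A", 0x87000000, 0xfff70000 },
};

/* Return the first entry whose masked SVR matches, or NULL. */
static const struct soc_die_attr *soc_die_match(uint32_t svr)
{
	for (size_t i = 0; i < sizeof(dies) / sizeof(dies[0]); i++)
		if ((svr & dies[i].mask) == dies[i].svr)
			return &dies[i];
	return NULL;
}
```

Masking before comparing is what lets one table entry cover every silicon revision of a die.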
+1 -1
drivers/soc/imx/Kconfig
··· 2 2 3 3 config IMX_GPCV2_PM_DOMAINS 4 4 bool "i.MX GPCv2 PM domains" 5 - depends on SOC_IMX7D || SOC_IMX8MQ || (COMPILE_TEST && OF) 5 + depends on ARCH_MXC || (COMPILE_TEST && OF) 6 6 depends on PM 7 7 select PM_GENERIC_DOMAINS 8 8 default y if SOC_IMX7D
+74 -2
drivers/soc/imx/gpcv2.c
··· 8 8 * Copyright 2015-2017 Pengutronix, Lucas Stach <kernel@pengutronix.de> 9 9 */ 10 10 11 + #include <linux/clk.h> 11 12 #include <linux/of_device.h> 12 13 #include <linux/platform_device.h> 13 14 #include <linux/pm_domain.h> ··· 66 65 67 66 #define GPC_M4_PU_PDN_FLG 0x1bc 68 67 68 + #define GPC_PU_PWRHSK 0x1fc 69 + 70 + #define IMX8M_GPU_HSK_PWRDNREQN BIT(6) 71 + #define IMX8M_VPU_HSK_PWRDNREQN BIT(5) 72 + #define IMX8M_DISP_HSK_PWRDNREQN BIT(4) 73 + 69 74 /* 70 75 * The PGC offset values in Reference Manual 71 76 * (Rev. 1, 01/2018 and the older ones) GPC chapter's ··· 99 92 100 93 #define GPC_PGC_CTRL_PCR BIT(0) 101 94 95 + #define GPC_CLK_MAX 6 96 + 102 97 struct imx_pgc_domain { 103 98 struct generic_pm_domain genpd; 104 99 struct regmap *regmap; 105 100 struct regulator *regulator; 101 + struct clk *clk[GPC_CLK_MAX]; 102 + int num_clks; 106 103 107 104 unsigned int pgc; 108 105 109 106 const struct { 110 107 u32 pxx; 111 108 u32 map; 109 + u32 hsk; 112 110 } bits; 113 111 114 112 const int voltage; ··· 137 125 const bool enable_power_control = !on; 138 126 const bool has_regulator = !IS_ERR(domain->regulator); 139 127 unsigned long deadline; 140 - int ret = 0; 128 + int i, ret = 0; 141 129 142 130 regmap_update_bits(domain->regmap, GPC_PGC_CPU_MAPPING, 143 131 domain->bits.map, domain->bits.map); ··· 150 138 } 151 139 } 152 140 141 + /* Enable reset clocks for all devices in the domain */ 142 + for (i = 0; i < domain->num_clks; i++) 143 + clk_prepare_enable(domain->clk[i]); 144 + 153 145 if (enable_power_control) 154 146 regmap_update_bits(domain->regmap, GPC_PGC_CTRL(domain->pgc), 155 147 GPC_PGC_CTRL_PCR, GPC_PGC_CTRL_PCR); 148 + 149 + if (domain->bits.hsk) 150 + regmap_update_bits(domain->regmap, GPC_PU_PWRHSK, 151 + domain->bits.hsk, on ? 
···
 			   domain->bits.hsk : 0);

 	regmap_update_bits(domain->regmap, offset,
 			   domain->bits.pxx, domain->bits.pxx);
···
 	if (enable_power_control)
 		regmap_update_bits(domain->regmap, GPC_PGC_CTRL(domain->pgc),
 				   GPC_PGC_CTRL_PCR, 0);
+
+	/* Disable reset clocks for all devices in the domain */
+	for (i = 0; i < domain->num_clks; i++)
+		clk_disable_unprepare(domain->clk[i]);

 	if (has_regulator && !on) {
 		int err;
···
 		.bits = {
 			.pxx = IMX8M_GPU_SW_Pxx_REQ,
 			.map = IMX8M_GPU_A53_DOMAIN,
+			.hsk = IMX8M_GPU_HSK_PWRDNREQN,
 		},
 		.pgc = IMX8M_PGC_GPU,
 	},
···
 		.bits = {
 			.pxx = IMX8M_VPU_SW_Pxx_REQ,
 			.map = IMX8M_VPU_A53_DOMAIN,
+			.hsk = IMX8M_VPU_HSK_PWRDNREQN,
 		},
 		.pgc = IMX8M_PGC_VPU,
 	},
···
 		.bits = {
 			.pxx = IMX8M_DISP_SW_Pxx_REQ,
 			.map = IMX8M_DISP_A53_DOMAIN,
+			.hsk = IMX8M_DISP_HSK_PWRDNREQN,
 		},
 		.pgc = IMX8M_PGC_DISP,
 	},
···

 static const struct regmap_range imx8m_yes_ranges[] = {
 	regmap_reg_range(GPC_LPCR_A_CORE_BSC,
-			 GPC_M4_PU_PDN_FLG),
+			 GPC_PU_PWRHSK),
 	regmap_reg_range(GPC_PGC_CTRL(IMX8M_PGC_MIPI),
 			 GPC_PGC_SR(IMX8M_PGC_MIPI)),
 	regmap_reg_range(GPC_PGC_CTRL(IMX8M_PGC_PCIE1),
···
 	.reg_access_table = &imx8m_access_table,
 };

+static int imx_pgc_get_clocks(struct imx_pgc_domain *domain)
+{
+	int i, ret;
+
+	for (i = 0; ; i++) {
+		struct clk *clk = of_clk_get(domain->dev->of_node, i);
+		if (IS_ERR(clk))
+			break;
+		if (i >= GPC_CLK_MAX) {
+			dev_err(domain->dev, "more than %d clocks\n",
+				GPC_CLK_MAX);
+			ret = -EINVAL;
+			goto clk_err;
+		}
+		domain->clk[i] = clk;
+	}
+	domain->num_clks = i;
+
+	return 0;
+
+clk_err:
+	while (i--)
+		clk_put(domain->clk[i]);
+
+	return ret;
+}
+
+static void imx_pgc_put_clocks(struct imx_pgc_domain *domain)
+{
+	int i;
+
+	for (i = domain->num_clks - 1; i >= 0; i--)
+		clk_put(domain->clk[i]);
+}
+
 static int imx_pgc_domain_probe(struct platform_device *pdev)
 {
 	struct imx_pgc_domain *domain = pdev->dev.platform_data;
···
 			domain->voltage, domain->voltage);
 	}

+	ret = imx_pgc_get_clocks(domain);
+	if (ret) {
+		if (ret != -EPROBE_DEFER)
+			dev_err(domain->dev, "Failed to get domain's clocks\n");
+		return ret;
+	}
+
 	ret = pm_genpd_init(&domain->genpd, NULL, true);
 	if (ret) {
 		dev_err(domain->dev, "Failed to init power domain\n");
+		imx_pgc_put_clocks(domain);
 		return ret;
 	}
···
 	if (ret) {
 		dev_err(domain->dev, "Failed to add genpd provider\n");
 		pm_genpd_remove(&domain->genpd);
+		imx_pgc_put_clocks(domain);
 	}

 	return ret;
···

 	of_genpd_del_provider(domain->dev->of_node);
 	pm_genpd_remove(&domain->genpd);
+	imx_pgc_put_clocks(domain);

 	return 0;
 }
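The imx_pgc_get_clocks()/imx_pgc_put_clocks() pair above follows a common kernel pattern: collect resources into a fixed-size array until the provider runs out, and unwind in reverse on overflow. A minimal userspace sketch of that pattern, with `get_clk()`, `put_clk()`, `NUM_AVAILABLE` and the integer "clock" representation all invented stand-ins for `of_clk_get()`/`clk_put()`:

```c
#include <assert.h>
#include <stddef.h>

#define GPC_CLK_MAX   5   /* capacity of the clks[] array */
#define NUM_AVAILABLE 3   /* how many "clocks" the fake provider hands out */

static int clks[GPC_CLK_MAX];
static size_t num_clks;
static int puts_called;

/* Stand-ins for of_clk_get()/clk_put(): a clock is just its index + 1. */
static int get_clk(size_t i) { return i < NUM_AVAILABLE ? (int)i + 1 : -1; }
static void put_clk(int clk) { (void)clk; puts_called++; }

/* Returns 0 on success; on overflow every clock taken so far is released
 * in reverse order, mirroring the clk_err unwind path in the patch. */
static int collect_clks(void)
{
    size_t i;

    for (i = 0; ; i++) {
        int clk = get_clk(i);

        if (clk < 0)
            break;              /* provider exhausted: normal exit */
        if (i >= GPC_CLK_MAX) { /* more clocks than we can store */
            put_clk(clk);
            while (i--)
                put_clk(clks[i]);
            return -1;
        }
        clks[i] = clk;
    }
    num_clks = i;
    return 0;
}
```

With three available clocks and room for five, collection succeeds and nothing is released.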
drivers/soc/qcom/Kconfig (+18)
···
 	  of hardware components aggregate requests for these resources and
 	  help apply the aggregated state on the resource.

+config QCOM_RPMHPD
+	bool "Qualcomm RPMh Power domain driver"
+	depends on QCOM_RPMH && QCOM_COMMAND_DB
+	help
+	  QCOM RPMh Power domain driver to support power-domains with
+	  performance states. The driver communicates a performance state
+	  value to RPMh which then translates it into corresponding voltage
+	  for the voltage rail.
+
+config QCOM_RPMPD
+	bool "Qualcomm RPM Power domain driver"
+	depends on QCOM_SMD_RPM=y
+	help
+	  QCOM RPM Power domain driver to support power-domains with
+	  performance states. The driver communicates a performance state
+	  value to RPM which then translates it into corresponding voltage
+	  for the voltage rail.
+
 config QCOM_SMEM
 	tristate "Qualcomm Shared Memory Manager (SMEM)"
 	depends on ARCH_QCOM || COMPILE_TEST
drivers/soc/qcom/Makefile (+2)
···
 obj-$(CONFIG_QCOM_APR) += apr.o
 obj-$(CONFIG_QCOM_LLCC) += llcc-slice.o
 obj-$(CONFIG_QCOM_SDM845_LLCC) += llcc-sdm845.o
+obj-$(CONFIG_QCOM_RPMHPD) += rpmhpd.o
+obj-$(CONFIG_QCOM_RPMPD) += rpmpd.o
drivers/soc/qcom/llcc-sdm845.c (+6)
···
 	SCT_ENTRY(LLCC_AUDHW, 22, 1024, 1, 1, 0xffc, 0x2, 0, 0, 1, 1, 0),
 };

+static int sdm845_qcom_llcc_remove(struct platform_device *pdev)
+{
+	return qcom_llcc_remove(pdev);
+}
+
 static int sdm845_qcom_llcc_probe(struct platform_device *pdev)
 {
 	return qcom_llcc_probe(pdev, sdm845_data, ARRAY_SIZE(sdm845_data));
···
 		.of_match_table = sdm845_qcom_llcc_of_match,
 	},
 	.probe = sdm845_qcom_llcc_probe,
+	.remove = sdm845_qcom_llcc_remove,
 };
 module_platform_driver(sdm845_qcom_llcc_driver);
drivers/soc/qcom/llcc-slice.c (+66 -31)
···

 #define BANK_OFFSET_STRIDE	0x80000

-static struct llcc_drv_data *drv_data;
+static struct llcc_drv_data *drv_data = (void *) -EPROBE_DEFER;

 static const struct regmap_config llcc_regmap_config = {
 	.reg_bits = 32,
···
 	const struct llcc_slice_config *cfg;
 	struct llcc_slice_desc *desc;
 	u32 sz, count;
+
+	if (IS_ERR(drv_data))
+		return ERR_CAST(drv_data);

 	cfg = drv_data->cfg;
 	sz = drv_data->cfg_size;
···
 	u32 slice_status;
 	int ret;

+	if (IS_ERR(drv_data))
+		return PTR_ERR(drv_data);
+
 	act_ctrl_reg = LLCC_TRP_ACT_CTRLn(sid);
 	status_reg = LLCC_TRP_STATUSn(sid);
···
 {
 	int ret;
 	u32 act_ctrl_val;
+
+	if (IS_ERR(drv_data))
+		return PTR_ERR(drv_data);

 	if (IS_ERR_OR_NULL(desc))
 		return -EINVAL;
···
 {
 	u32 act_ctrl_val;
 	int ret;
+
+	if (IS_ERR(drv_data))
+		return PTR_ERR(drv_data);

 	if (IS_ERR_OR_NULL(desc))
 		return -EINVAL;
···
 	return ret;
 }

+int qcom_llcc_remove(struct platform_device *pdev)
+{
+	/* Set the global pointer to a error code to avoid referencing it */
+	drv_data = ERR_PTR(-ENODEV);
+	return 0;
+}
+EXPORT_SYMBOL_GPL(qcom_llcc_remove);
+
+static struct regmap *qcom_llcc_init_mmio(struct platform_device *pdev,
+					  const char *name)
+{
+	struct resource *res;
+	void __iomem *base;
+
+	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, name);
+	if (!res)
+		return ERR_PTR(-ENODEV);
+
+	base = devm_ioremap_resource(&pdev->dev, res);
+	if (IS_ERR(base))
+		return ERR_CAST(base);
+
+	return devm_regmap_init_mmio(&pdev->dev, base, &llcc_regmap_config);
+}
+
 int qcom_llcc_probe(struct platform_device *pdev,
 		    const struct llcc_slice_config *llcc_cfg, u32 sz)
 {
 	u32 num_banks;
 	struct device *dev = &pdev->dev;
-	struct resource *llcc_banks_res, *llcc_bcast_res;
-	void __iomem *llcc_banks_base, *llcc_bcast_base;
 	int ret, i;
 	struct platform_device *llcc_edac;

 	drv_data = devm_kzalloc(dev, sizeof(*drv_data), GFP_KERNEL);
-	if (!drv_data)
-		return -ENOMEM;
+	if (!drv_data) {
+		ret = -ENOMEM;
+		goto err;
+	}

-	llcc_banks_res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
-						      "llcc_base");
-	llcc_banks_base = devm_ioremap_resource(&pdev->dev, llcc_banks_res);
-	if (IS_ERR(llcc_banks_base))
-		return PTR_ERR(llcc_banks_base);
+	drv_data->regmap = qcom_llcc_init_mmio(pdev, "llcc_base");
+	if (IS_ERR(drv_data->regmap)) {
+		ret = PTR_ERR(drv_data->regmap);
+		goto err;
+	}

-	drv_data->regmap = devm_regmap_init_mmio(dev, llcc_banks_base,
-						 &llcc_regmap_config);
-	if (IS_ERR(drv_data->regmap))
-		return PTR_ERR(drv_data->regmap);
-
-	llcc_bcast_res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
-						      "llcc_broadcast_base");
-	llcc_bcast_base = devm_ioremap_resource(&pdev->dev, llcc_bcast_res);
-	if (IS_ERR(llcc_bcast_base))
-		return PTR_ERR(llcc_bcast_base);
-
-	drv_data->bcast_regmap = devm_regmap_init_mmio(dev, llcc_bcast_base,
-						       &llcc_regmap_config);
-	if (IS_ERR(drv_data->bcast_regmap))
-		return PTR_ERR(drv_data->bcast_regmap);
+	drv_data->bcast_regmap =
+		qcom_llcc_init_mmio(pdev, "llcc_broadcast_base");
+	if (IS_ERR(drv_data->bcast_regmap)) {
+		ret = PTR_ERR(drv_data->bcast_regmap);
+		goto err;
+	}

 	ret = regmap_read(drv_data->regmap, LLCC_COMMON_STATUS0,
 			  &num_banks);
 	if (ret)
-		return ret;
+		goto err;

 	num_banks &= LLCC_LB_CNT_MASK;
 	num_banks >>= LLCC_LB_CNT_SHIFT;
···

 	drv_data->offsets = devm_kcalloc(dev, num_banks, sizeof(u32),
 					 GFP_KERNEL);
-	if (!drv_data->offsets)
-		return -ENOMEM;
+	if (!drv_data->offsets) {
+		ret = -ENOMEM;
+		goto err;
+	}

 	for (i = 0; i < num_banks; i++)
 		drv_data->offsets[i] = i * BANK_OFFSET_STRIDE;
···
 	drv_data->bitmap = devm_kcalloc(dev,
 	   BITS_TO_LONGS(drv_data->max_slices), sizeof(unsigned long),
 	   GFP_KERNEL);
-	if (!drv_data->bitmap)
-		return -ENOMEM;
+	if (!drv_data->bitmap) {
+		ret = -ENOMEM;
+		goto err;
+	}

 	drv_data->cfg = llcc_cfg;
 	drv_data->cfg_size = sz;
···

 	ret = qcom_llcc_cfg_program(pdev);
 	if (ret)
-		return ret;
+		goto err;

 	drv_data->ecc_irq = platform_get_irq(pdev, 0);
 	if (drv_data->ecc_irq >= 0) {
···
 			dev_err(dev, "Failed to register llcc edac driver\n");
 	}

+	return 0;
+err:
+	drv_data = ERR_PTR(-ENODEV);
 	return ret;
 }
 EXPORT_SYMBOL_GPL(qcom_llcc_probe);
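The llcc-slice change above leans on the kernel's ERR_PTR()/IS_ERR()/PTR_ERR() convention: a single global pointer can hold either valid data or a small negative errno, so `drv_data` starts out as `-EPROBE_DEFER` and is poisoned with `-ENODEV` on probe failure or removal. A userspace re-implementation of that trick (the `sim_*` helpers and `EPROBE_DEFER` value are stand-ins for illustration; `EPROBE_DEFER` is a kernel-internal errno):

```c
#include <assert.h>
#include <errno.h>

#define MAX_ERRNO 4095          /* errnos live in the top page of addresses */
#define EPROBE_DEFER 517        /* kernel-internal errno, not in <errno.h> */

static inline void *err_ptr(long error) { return (void *)error; }
static inline long ptr_err(const void *ptr) { return (long)ptr; }
static inline int is_err(const void *ptr)
{
    /* -4095..-1 cast to a pointer lands in the last, never-mapped page */
    return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

/* Mirrors the llcc-slice lifecycle: deferred -> real data -> poisoned. */
static int drv_data_storage;
static void *drv_data_sim;

static void sim_init(void)   { drv_data_sim = err_ptr(-EPROBE_DEFER); }
static void sim_probe(void)  { drv_data_storage = 42;
                               drv_data_sim = &drv_data_storage; }
static void sim_remove(void) { drv_data_sim = err_ptr(-ENODEV); }
```

Every accessor then only needs a single `is_err()` check instead of a separate "is the driver bound?" flag.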
drivers/soc/qcom/qcom_gsbi.c (+5 -2)
···
 	struct resource *res;
 	void __iomem *base;
 	struct gsbi_info *gsbi;
-	int i;
+	int i, ret;
 	u32 mask, gsbi_num;
 	const struct crci_config *config = NULL;
···

 	platform_set_drvdata(pdev, gsbi);

-	return of_platform_populate(node, NULL, NULL, &pdev->dev);
+	ret = of_platform_populate(node, NULL, NULL, &pdev->dev);
+	if (ret)
+		clk_disable_unprepare(gsbi->hclk);
+	return ret;
 }

 static int gsbi_remove(struct platform_device *pdev)
drivers/soc/qcom/rmtfs_mem.c (+24 -8)
···
 				      struct device_attribute *attr,
 				      char *buf);

-static DEVICE_ATTR(phys_addr, 0400, qcom_rmtfs_mem_show, NULL);
-static DEVICE_ATTR(size, 0400, qcom_rmtfs_mem_show, NULL);
-static DEVICE_ATTR(client_id, 0400, qcom_rmtfs_mem_show, NULL);
+static DEVICE_ATTR(phys_addr, 0444, qcom_rmtfs_mem_show, NULL);
+static DEVICE_ATTR(size, 0444, qcom_rmtfs_mem_show, NULL);
+static DEVICE_ATTR(client_id, 0444, qcom_rmtfs_mem_show, NULL);

 static ssize_t qcom_rmtfs_mem_show(struct device *dev,
 			      struct device_attribute *attr,
···
 	return 0;
 }

+static struct class rmtfs_class = {
+	.owner = THIS_MODULE,
+	.name = "rmtfs",
+};
+
 static const struct file_operations qcom_rmtfs_mem_fops = {
 	.owner = THIS_MODULE,
 	.open = qcom_rmtfs_mem_open,
···

 	dev_set_name(&rmtfs_mem->dev, "qcom_rmtfs_mem%d", client_id);
 	rmtfs_mem->dev.id = client_id;
+	rmtfs_mem->dev.class = &rmtfs_class;
 	rmtfs_mem->dev.devt = MKDEV(MAJOR(qcom_rmtfs_mem_major), client_id);

 	ret = cdev_device_add(&rmtfs_mem->cdev, &rmtfs_mem->dev);
···
 	},
 };

-static int qcom_rmtfs_mem_init(void)
+static int __init qcom_rmtfs_mem_init(void)
 {
 	int ret;

+	ret = class_register(&rmtfs_class);
+	if (ret)
+		return ret;
+
 	ret = alloc_chrdev_region(&qcom_rmtfs_mem_major, 0,
 				  QCOM_RMTFS_MEM_DEV_MAX, "qcom_rmtfs_mem");
 	if (ret < 0) {
 		pr_err("qcom_rmtfs_mem: failed to allocate char dev region\n");
-		return ret;
+		goto unregister_class;
 	}

 	ret = platform_driver_register(&qcom_rmtfs_mem_driver);
 	if (ret < 0) {
 		pr_err("qcom_rmtfs_mem: failed to register rmtfs_mem driver\n");
-		unregister_chrdev_region(qcom_rmtfs_mem_major,
-					 QCOM_RMTFS_MEM_DEV_MAX);
+		goto unregister_chrdev;
 	}

+	return 0;
+
+unregister_chrdev:
+	unregister_chrdev_region(qcom_rmtfs_mem_major, QCOM_RMTFS_MEM_DEV_MAX);
+unregister_class:
+	class_unregister(&rmtfs_class);
 	return ret;
 }
 module_init(qcom_rmtfs_mem_init);

-static void qcom_rmtfs_mem_exit(void)
+static void __exit qcom_rmtfs_mem_exit(void)
 {
 	platform_driver_unregister(&qcom_rmtfs_mem_driver);
 	unregister_chrdev_region(qcom_rmtfs_mem_major, QCOM_RMTFS_MEM_DEV_MAX);
+	class_unregister(&rmtfs_class);
 }
 module_exit(qcom_rmtfs_mem_exit);
drivers/soc/qcom/rpmh.c (+22 -15)
···
 	struct rpmh_request *rpm_msg = container_of(msg, struct rpmh_request,
 						    msg);
 	struct completion *compl = rpm_msg->completion;
+	bool free = rpm_msg->needs_free;

 	rpm_msg->err = r;
···
 		complete(compl);

 exit:
-	if (rpm_msg->needs_free)
+	if (free)
 		kfree(rpm_msg);
 }
···
 		WARN_ON(irqs_disabled());
 		ret = rpmh_rsc_send_data(ctrlr_to_drv(ctrlr), &rpm_msg->msg);
 	} else {
-		ret = rpmh_rsc_write_ctrl_data(ctrlr_to_drv(ctrlr),
-					       &rpm_msg->msg);
 		/* Clean up our call by spoofing tx_done */
+		ret = 0;
 		rpmh_tx_done(&rpm_msg->msg, ret);
 	}
···
 {
 	struct batch_cache_req *req;
 	struct rpmh_request *rpm_msgs;
-	DECLARE_COMPLETION_ONSTACK(compl);
+	struct completion *compls;
 	struct rpmh_ctrlr *ctrlr = get_rpmh_ctrlr(dev);
 	unsigned long time_left;
 	int count = 0;
-	int ret, i, j;
+	int ret, i;
+	void *ptr;

 	if (!cmd || !n)
 		return -EINVAL;
···
 	if (!count)
 		return -EINVAL;

-	req = kzalloc(sizeof(*req) + count * sizeof(req->rpm_msgs[0]),
+	ptr = kzalloc(sizeof(*req) +
+		      count * (sizeof(req->rpm_msgs[0]) + sizeof(*compls)),
 		      GFP_ATOMIC);
-	if (!req)
+	if (!ptr)
 		return -ENOMEM;
+
+	req = ptr;
+	compls = ptr + sizeof(*req) + count * sizeof(*rpm_msgs);
+
 	req->count = count;
 	rpm_msgs = req->rpm_msgs;
···
 	}

 	for (i = 0; i < count; i++) {
-		rpm_msgs[i].completion = &compl;
+		struct completion *compl = &compls[i];
+
+		init_completion(compl);
+		rpm_msgs[i].completion = compl;
 		ret = rpmh_rsc_send_data(ctrlr_to_drv(ctrlr), &rpm_msgs[i].msg);
 		if (ret) {
 			pr_err("Error(%d) sending RPMH message addr=%#x\n",
 			       ret, rpm_msgs[i].msg.cmds[0].addr);
-			for (j = i; j < count; j++)
-				rpmh_tx_done(&rpm_msgs[j].msg, ret);
 			break;
 		}
 	}

 	time_left = RPMH_TIMEOUT_MS;
-	for (i = 0; i < count; i++) {
-		time_left = wait_for_completion_timeout(&compl, time_left);
+	while (i--) {
+		time_left = wait_for_completion_timeout(&compls[i], time_left);
 		if (!time_left) {
 			/*
 			 * Better hope they never finish because they'll signal
-			 * the completion on our stack and that's bad once
-			 * we've returned from the function.
+			 * the completion that we're going to free once
+			 * we've returned from this function.
 			 */
 			WARN_ON(1);
 			ret = -ETIMEDOUT;
···
 	}

 exit:
-	kfree(req);
+	kfree(ptr);

 	return ret;
 }
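The rpmh_write_batch() fix above replaces one on-stack completion with per-message completions carved out of a single kzalloc(): one allocation carries the request header, `count` message slots, and `count` completion slots, so a single `kfree(ptr)` releases everything. A userspace sketch of that layout (the `batch_req`/`msg`/`compl_sim` types are simplified stand-ins, not the kernel's structures):

```c
#include <assert.h>
#include <stdlib.h>

struct compl_sim { int done; };
struct msg { int addr; struct compl_sim *completion; };
struct batch_req {
    int count;
    struct msg rpm_msgs[];      /* flexible array, length == count */
};

/* Lays out [batch_req][count * msg][count * compl_sim] in one block and
 * hands back a pointer to the completion region via *compls. */
static struct batch_req *batch_alloc(int count, struct compl_sim **compls)
{
    void *ptr = calloc(1, sizeof(struct batch_req) +
                          count * (sizeof(struct msg) +
                                   sizeof(struct compl_sim)));
    struct batch_req *req = ptr;

    if (!ptr)
        return NULL;
    req->count = count;
    *compls = (struct compl_sim *)((char *)ptr + sizeof(*req) +
                                   count * sizeof(struct msg));
    return req;
}
```

The completion region begins exactly where the flexible message array ends, which is why the kernel patch can free the whole batch with one `kfree(ptr)` instead of tracking two lifetimes.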
drivers/soc/qcom/rpmhpd.c (new file, +406)
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2018, The Linux Foundation. All rights reserved.*/
+
+#include <linux/err.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/mutex.h>
+#include <linux/pm_domain.h>
+#include <linux/slab.h>
+#include <linux/of.h>
+#include <linux/of_device.h>
+#include <linux/platform_device.h>
+#include <linux/pm_opp.h>
+#include <soc/qcom/cmd-db.h>
+#include <soc/qcom/rpmh.h>
+#include <dt-bindings/power/qcom-rpmpd.h>
+
+#define domain_to_rpmhpd(domain) container_of(domain, struct rpmhpd, pd)
+
+#define RPMH_ARC_MAX_LEVELS	16
+
+/**
+ * struct rpmhpd - top level RPMh power domain resource data structure
+ * @dev:		rpmh power domain controller device
+ * @pd:			generic_pm_domain corrresponding to the power domain
+ * @peer:		A peer power domain in case Active only Voting is
+ *			supported
+ * @active_only:	True if it represents an Active only peer
+ * @level:		An array of level (vlvl) to corner (hlvl) mappings
+ *			derived from cmd-db
+ * @level_count:	Number of levels supported by the power domain. max
+ *			being 16 (0 - 15)
+ * @enabled:		true if the power domain is enabled
+ * @res_name:		Resource name used for cmd-db lookup
+ * @addr:		Resource address as looped up using resource name from
+ *			cmd-db
+ */
+struct rpmhpd {
+	struct device	*dev;
+	struct generic_pm_domain pd;
+	struct generic_pm_domain *parent;
+	struct rpmhpd	*peer;
+	const bool	active_only;
+	unsigned int	corner;
+	unsigned int	active_corner;
+	u32		level[RPMH_ARC_MAX_LEVELS];
+	size_t		level_count;
+	bool		enabled;
+	const char	*res_name;
+	u32		addr;
+};
+
+struct rpmhpd_desc {
+	struct rpmhpd **rpmhpds;
+	size_t num_pds;
+};
+
+static DEFINE_MUTEX(rpmhpd_lock);
+
+/* SDM845 RPMH powerdomains */
+
+static struct rpmhpd sdm845_ebi = {
+	.pd = { .name = "ebi", },
+	.res_name = "ebi.lvl",
+};
+
+static struct rpmhpd sdm845_lmx = {
+	.pd = { .name = "lmx", },
+	.res_name = "lmx.lvl",
+};
+
+static struct rpmhpd sdm845_lcx = {
+	.pd = { .name = "lcx", },
+	.res_name = "lcx.lvl",
+};
+
+static struct rpmhpd sdm845_gfx = {
+	.pd = { .name = "gfx", },
+	.res_name = "gfx.lvl",
+};
+
+static struct rpmhpd sdm845_mss = {
+	.pd = { .name = "mss", },
+	.res_name = "mss.lvl",
+};
+
+static struct rpmhpd sdm845_mx_ao;
+static struct rpmhpd sdm845_mx = {
+	.pd = { .name = "mx", },
+	.peer = &sdm845_mx_ao,
+	.res_name = "mx.lvl",
+};
+
+static struct rpmhpd sdm845_mx_ao = {
+	.pd = { .name = "mx_ao", },
+	.peer = &sdm845_mx,
+	.res_name = "mx.lvl",
+};
+
+static struct rpmhpd sdm845_cx_ao;
+static struct rpmhpd sdm845_cx = {
+	.pd = { .name = "cx", },
+	.peer = &sdm845_cx_ao,
+	.parent = &sdm845_mx.pd,
+	.res_name = "cx.lvl",
+};
+
+static struct rpmhpd sdm845_cx_ao = {
+	.pd = { .name = "cx_ao", },
+	.peer = &sdm845_cx,
+	.parent = &sdm845_mx_ao.pd,
+	.res_name = "cx.lvl",
+};
+
+static struct rpmhpd *sdm845_rpmhpds[] = {
+	[SDM845_EBI] = &sdm845_ebi,
+	[SDM845_MX] = &sdm845_mx,
+	[SDM845_MX_AO] = &sdm845_mx_ao,
+	[SDM845_CX] = &sdm845_cx,
+	[SDM845_CX_AO] = &sdm845_cx_ao,
+	[SDM845_LMX] = &sdm845_lmx,
+	[SDM845_LCX] = &sdm845_lcx,
+	[SDM845_GFX] = &sdm845_gfx,
+	[SDM845_MSS] = &sdm845_mss,
+};
+
+static const struct rpmhpd_desc sdm845_desc = {
+	.rpmhpds = sdm845_rpmhpds,
+	.num_pds = ARRAY_SIZE(sdm845_rpmhpds),
+};
+
+static const struct of_device_id rpmhpd_match_table[] = {
+	{ .compatible = "qcom,sdm845-rpmhpd", .data = &sdm845_desc },
+	{ }
+};
+
+static int rpmhpd_send_corner(struct rpmhpd *pd, int state,
+			      unsigned int corner, bool sync)
+{
+	struct tcs_cmd cmd = {
+		.addr = pd->addr,
+		.data = corner,
+	};
+
+	/*
+	 * Wait for an ack only when we are increasing the
+	 * perf state of the power domain
+	 */
+	if (sync)
+		return rpmh_write(pd->dev, state, &cmd, 1);
+	else
+		return rpmh_write_async(pd->dev, state, &cmd, 1);
+}
+
+static void to_active_sleep(struct rpmhpd *pd, unsigned int corner,
+			    unsigned int *active, unsigned int *sleep)
+{
+	*active = corner;
+
+	if (pd->active_only)
+		*sleep = 0;
+	else
+		*sleep = *active;
+}
+
+/*
+ * This function is used to aggregate the votes across the active only
+ * resources and its peers. The aggregated votes are sent to RPMh as
+ * ACTIVE_ONLY votes (which take effect immediately), as WAKE_ONLY votes
+ * (applied by RPMh on system wakeup) and as SLEEP votes (applied by RPMh
+ * on system sleep).
+ * We send ACTIVE_ONLY votes for resources without any peers. For others,
+ * which have an active only peer, all 3 votes are sent.
+ */
+static int rpmhpd_aggregate_corner(struct rpmhpd *pd, unsigned int corner)
+{
+	int ret;
+	struct rpmhpd *peer = pd->peer;
+	unsigned int active_corner, sleep_corner;
+	unsigned int this_active_corner = 0, this_sleep_corner = 0;
+	unsigned int peer_active_corner = 0, peer_sleep_corner = 0;
+
+	to_active_sleep(pd, corner, &this_active_corner, &this_sleep_corner);
+
+	if (peer && peer->enabled)
+		to_active_sleep(peer, peer->corner, &peer_active_corner,
+				&peer_sleep_corner);
+
+	active_corner = max(this_active_corner, peer_active_corner);
+
+	ret = rpmhpd_send_corner(pd, RPMH_ACTIVE_ONLY_STATE, active_corner,
+				 active_corner > pd->active_corner);
+	if (ret)
+		return ret;
+
+	pd->active_corner = active_corner;
+
+	if (peer) {
+		peer->active_corner = active_corner;
+
+		ret = rpmhpd_send_corner(pd, RPMH_WAKE_ONLY_STATE,
+					 active_corner, false);
+		if (ret)
+			return ret;
+
+		sleep_corner = max(this_sleep_corner, peer_sleep_corner);
+
+		return rpmhpd_send_corner(pd, RPMH_SLEEP_STATE, sleep_corner,
+					  false);
+	}
+
+	return ret;
+}
+
+static int rpmhpd_power_on(struct generic_pm_domain *domain)
+{
+	struct rpmhpd *pd = domain_to_rpmhpd(domain);
+	int ret = 0;
+
+	mutex_lock(&rpmhpd_lock);
+
+	if (pd->corner)
+		ret = rpmhpd_aggregate_corner(pd, pd->corner);
+
+	if (!ret)
+		pd->enabled = true;
+
+	mutex_unlock(&rpmhpd_lock);
+
+	return ret;
+}
+
+static int rpmhpd_power_off(struct generic_pm_domain *domain)
+{
+	struct rpmhpd *pd = domain_to_rpmhpd(domain);
+	int ret = 0;
+
+	mutex_lock(&rpmhpd_lock);
+
+	ret = rpmhpd_aggregate_corner(pd, pd->level[0]);
+
+	if (!ret)
+		pd->enabled = false;
+
+	mutex_unlock(&rpmhpd_lock);
+
+	return ret;
+}
+
+static int rpmhpd_set_performance_state(struct generic_pm_domain *domain,
+					unsigned int level)
+{
+	struct rpmhpd *pd = domain_to_rpmhpd(domain);
+	int ret = 0, i;
+
+	mutex_lock(&rpmhpd_lock);
+
+	for (i = 0; i < pd->level_count; i++)
+		if (level <= pd->level[i])
+			break;
+
+	/*
+	 * If the level requested is more than that supported by the
+	 * max corner, just set it to max anyway.
+	 */
+	if (i == pd->level_count)
+		i--;
+
+	if (pd->enabled) {
+		ret = rpmhpd_aggregate_corner(pd, i);
+		if (ret)
+			goto out;
+	}
+
+	pd->corner = i;
+out:
+	mutex_unlock(&rpmhpd_lock);
+
+	return ret;
+}
+
+static unsigned int rpmhpd_get_performance_state(struct generic_pm_domain *genpd,
+						 struct dev_pm_opp *opp)
+{
+	return dev_pm_opp_get_level(opp);
+}
+
+static int rpmhpd_update_level_mapping(struct rpmhpd *rpmhpd)
+{
+	int i;
+	const u16 *buf;
+
+	buf = cmd_db_read_aux_data(rpmhpd->res_name, &rpmhpd->level_count);
+	if (IS_ERR(buf))
+		return PTR_ERR(buf);
+
+	/* 2 bytes used for each command DB aux data entry */
+	rpmhpd->level_count >>= 1;
+
+	if (rpmhpd->level_count > RPMH_ARC_MAX_LEVELS)
+		return -EINVAL;
+
+	for (i = 0; i < rpmhpd->level_count; i++) {
+		rpmhpd->level[i] = buf[i];
+
+		/*
+		 * The AUX data may be zero padded. These 0 valued entries at
+		 * the end of the map must be ignored.
+		 */
+		if (i > 0 && rpmhpd->level[i] == 0) {
+			rpmhpd->level_count = i;
+			break;
+		}
+		pr_debug("%s: ARC hlvl=%2d --> vlvl=%4u\n", rpmhpd->res_name, i,
+			 rpmhpd->level[i]);
+	}
+
+	return 0;
+}
+
+static int rpmhpd_probe(struct platform_device *pdev)
+{
+	int i, ret;
+	size_t num_pds;
+	struct device *dev = &pdev->dev;
+	struct genpd_onecell_data *data;
+	struct rpmhpd **rpmhpds;
+	const struct rpmhpd_desc *desc;
+
+	desc = of_device_get_match_data(dev);
+	if (!desc)
+		return -EINVAL;
+
+	rpmhpds = desc->rpmhpds;
+	num_pds = desc->num_pds;
+
+	data = devm_kzalloc(dev, sizeof(*data), GFP_KERNEL);
+	if (!data)
+		return -ENOMEM;
+
+	data->domains = devm_kcalloc(dev, num_pds, sizeof(*data->domains),
+				     GFP_KERNEL);
+	if (!data->domains)
+		return -ENOMEM;
+
+	data->num_domains = num_pds;
+
+	for (i = 0; i < num_pds; i++) {
+		if (!rpmhpds[i]) {
+			dev_warn(dev, "rpmhpds[%d] is empty\n", i);
+			continue;
+		}
+
+		rpmhpds[i]->dev = dev;
+		rpmhpds[i]->addr = cmd_db_read_addr(rpmhpds[i]->res_name);
+		if (!rpmhpds[i]->addr) {
+			dev_err(dev, "Could not find RPMh address for resource %s\n",
+				rpmhpds[i]->res_name);
+			return -ENODEV;
+		}
+
+		ret = cmd_db_read_slave_id(rpmhpds[i]->res_name);
+		if (ret != CMD_DB_HW_ARC) {
+			dev_err(dev, "RPMh slave ID mismatch\n");
+			return -EINVAL;
+		}
+
+		ret = rpmhpd_update_level_mapping(rpmhpds[i]);
+		if (ret)
+			return ret;
+
+		rpmhpds[i]->pd.power_off = rpmhpd_power_off;
+		rpmhpds[i]->pd.power_on = rpmhpd_power_on;
+		rpmhpds[i]->pd.set_performance_state = rpmhpd_set_performance_state;
+		rpmhpds[i]->pd.opp_to_performance_state = rpmhpd_get_performance_state;
+		pm_genpd_init(&rpmhpds[i]->pd, NULL, true);
+
+		data->domains[i] = &rpmhpds[i]->pd;
+	}
+
+	/* Add subdomains */
+	for (i = 0; i < num_pds; i++) {
+		if (!rpmhpds[i])
+			continue;
+		if (rpmhpds[i]->parent)
+			pm_genpd_add_subdomain(rpmhpds[i]->parent,
+					       &rpmhpds[i]->pd);
+	}
+
+	return of_genpd_add_provider_onecell(pdev->dev.of_node, data);
+}
+
+static struct platform_driver rpmhpd_driver = {
+	.driver = {
+		.name = "qcom-rpmhpd",
+		.of_match_table = rpmhpd_match_table,
+		.suppress_bind_attrs = true,
+	},
+	.probe = rpmhpd_probe,
+};
+
+static int __init rpmhpd_init(void)
+{
+	return platform_driver_register(&rpmhpd_driver);
+}
+core_initcall(rpmhpd_init);
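The core of the new driver is the vote aggregation in rpmhpd_aggregate_corner(): an active-only domain contributes 0 to the sleep set, and the value actually sent to RPMh for each set is the max of a domain's own request and its peer's. A self-contained sketch of just that arithmetic (plain ints and the `pd_sim`/`aggregate` names stand in for the kernel structures and RPMh messages):

```c
#include <assert.h>

struct pd_sim {
    int active_only;        /* contributes 0 to the sleep set if set */
    int enabled;
    unsigned int corner;    /* this domain's current vote */
};

/* Mirrors to_active_sleep(): split one corner vote into the two sets. */
static void to_active_sleep_sim(const struct pd_sim *pd, unsigned int corner,
                                unsigned int *active, unsigned int *sleep)
{
    *active = corner;
    *sleep = pd->active_only ? 0 : corner;
}

/* Returns the aggregated (active, sleep) pair across pd and its peer. */
static void aggregate(const struct pd_sim *pd, unsigned int corner,
                      const struct pd_sim *peer,
                      unsigned int *active, unsigned int *sleep)
{
    unsigned int ta = 0, ts = 0, pa = 0, ps = 0;

    to_active_sleep_sim(pd, corner, &ta, &ts);
    if (peer && peer->enabled)
        to_active_sleep_sim(peer, peer->corner, &pa, &ps);

    *active = ta > pa ? ta : pa;
    *sleep = ts > ps ? ts : ps;
}
```

So a `cx`/`cx_ao` pair where the active-only peer asks for a higher corner raises the active-set vote without inflating what RPMh applies during system sleep.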
drivers/soc/qcom/rpmpd.c (new file, +315)
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2017-2018, The Linux Foundation. All rights reserved. */
+
+#include <linux/err.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/mutex.h>
+#include <linux/pm_domain.h>
+#include <linux/of.h>
+#include <linux/of_device.h>
+#include <linux/platform_device.h>
+#include <linux/pm_opp.h>
+#include <linux/soc/qcom/smd-rpm.h>
+
+#include <dt-bindings/power/qcom-rpmpd.h>
+
+#define domain_to_rpmpd(domain) container_of(domain, struct rpmpd, pd)
+
+/* Resource types */
+#define RPMPD_SMPA 0x61706d73
+#define RPMPD_LDOA 0x616f646c
+
+/* Operation Keys */
+#define KEY_CORNER		0x6e726f63 /* corn */
+#define KEY_ENABLE		0x6e657773 /* swen */
+#define KEY_FLOOR_CORNER	0x636676   /* vfc */
+
+#define MAX_RPMPD_STATE		6
+
+#define DEFINE_RPMPD_CORNER_SMPA(_platform, _name, _active, r_id)	\
+	static struct rpmpd _platform##_##_active;			\
+	static struct rpmpd _platform##_##_name = {			\
+		.pd = {	.name = #_name,	},				\
+		.peer = &_platform##_##_active,				\
+		.res_type = RPMPD_SMPA,					\
+		.res_id = r_id,						\
+		.key = KEY_CORNER,					\
+	};								\
+	static struct rpmpd _platform##_##_active = {			\
+		.pd = { .name = #_active, },				\
+		.peer = &_platform##_##_name,				\
+		.active_only = true,					\
+		.res_type = RPMPD_SMPA,					\
+		.res_id = r_id,						\
+		.key = KEY_CORNER,					\
+	}
+
+#define DEFINE_RPMPD_CORNER_LDOA(_platform, _name, r_id)		\
+	static struct rpmpd _platform##_##_name = {			\
+		.pd = { .name = #_name, },				\
+		.res_type = RPMPD_LDOA,					\
+		.res_id = r_id,						\
+		.key = KEY_CORNER,					\
+	}
+
+#define DEFINE_RPMPD_VFC(_platform, _name, r_id, r_type)		\
+	static struct rpmpd _platform##_##_name = {			\
+		.pd = { .name = #_name, },				\
+		.res_type = r_type,					\
+		.res_id = r_id,						\
+		.key = KEY_FLOOR_CORNER,				\
+	}
+
+#define DEFINE_RPMPD_VFC_SMPA(_platform, _name, r_id)			\
+	DEFINE_RPMPD_VFC(_platform, _name, r_id, RPMPD_SMPA)
+
+#define DEFINE_RPMPD_VFC_LDOA(_platform, _name, r_id)			\
+	DEFINE_RPMPD_VFC(_platform, _name, r_id, RPMPD_LDOA)
+
+struct rpmpd_req {
+	__le32 key;
+	__le32 nbytes;
+	__le32 value;
+};
+
+struct rpmpd {
+	struct generic_pm_domain pd;
+	struct rpmpd *peer;
+	const bool active_only;
+	unsigned int corner;
+	bool enabled;
+	const char *res_name;
+	const int res_type;
+	const int res_id;
+	struct qcom_smd_rpm *rpm;
+	__le32 key;
+};
+
+struct rpmpd_desc {
+	struct rpmpd **rpmpds;
+	size_t num_pds;
+};
+
+static DEFINE_MUTEX(rpmpd_lock);
+
+/* msm8996 RPM Power domains */
+DEFINE_RPMPD_CORNER_SMPA(msm8996, vddcx, vddcx_ao, 1);
+DEFINE_RPMPD_CORNER_SMPA(msm8996, vddmx, vddmx_ao, 2);
+DEFINE_RPMPD_CORNER_LDOA(msm8996, vddsscx, 26);
+
+DEFINE_RPMPD_VFC_SMPA(msm8996, vddcx_vfc, 1);
+DEFINE_RPMPD_VFC_LDOA(msm8996, vddsscx_vfc, 26);
+
+static struct rpmpd *msm8996_rpmpds[] = {
+	[MSM8996_VDDCX] = &msm8996_vddcx,
+	[MSM8996_VDDCX_AO] = &msm8996_vddcx_ao,
+	[MSM8996_VDDCX_VFC] = &msm8996_vddcx_vfc,
+	[MSM8996_VDDMX] = &msm8996_vddmx,
+	[MSM8996_VDDMX_AO] = &msm8996_vddmx_ao,
+	[MSM8996_VDDSSCX] = &msm8996_vddsscx,
+	[MSM8996_VDDSSCX_VFC] = &msm8996_vddsscx_vfc,
+};
+
+static const struct rpmpd_desc msm8996_desc = {
+	.rpmpds = msm8996_rpmpds,
+	.num_pds = ARRAY_SIZE(msm8996_rpmpds),
+};
+
+static const struct of_device_id rpmpd_match_table[] = {
+	{ .compatible = "qcom,msm8996-rpmpd", .data = &msm8996_desc },
+	{ }
+};
+
+static int rpmpd_send_enable(struct rpmpd *pd, bool enable)
+{
+	struct rpmpd_req req = {
+		.key = KEY_ENABLE,
+		.nbytes = cpu_to_le32(sizeof(u32)),
+		.value = cpu_to_le32(enable),
+	};
+
+	return qcom_rpm_smd_write(pd->rpm, QCOM_SMD_RPM_ACTIVE_STATE,
+				  pd->res_type, pd->res_id, &req, sizeof(req));
+}
+
+static int rpmpd_send_corner(struct rpmpd *pd, int state, unsigned int corner)
+{
+	struct rpmpd_req req = {
+		.key = pd->key,
+		.nbytes = cpu_to_le32(sizeof(u32)),
+		.value = cpu_to_le32(corner),
+	};
+
+	return qcom_rpm_smd_write(pd->rpm, state, pd->res_type, pd->res_id,
+				  &req, sizeof(req));
+};
+
+static void to_active_sleep(struct rpmpd *pd, unsigned int corner,
+			    unsigned int *active, unsigned int *sleep)
+{
+	*active = corner;
+
+	if (pd->active_only)
+		*sleep = 0;
+	else
+		*sleep = *active;
+}
+
+static int rpmpd_aggregate_corner(struct rpmpd *pd)
+{
+	int ret;
+	struct rpmpd *peer = pd->peer;
+	unsigned int active_corner, sleep_corner;
+	unsigned int this_active_corner = 0, this_sleep_corner = 0;
+	unsigned int peer_active_corner = 0, peer_sleep_corner = 0;
+
+	to_active_sleep(pd, pd->corner, &this_active_corner, &this_sleep_corner);
+
+	if (peer && peer->enabled)
+		to_active_sleep(peer, peer->corner, &peer_active_corner,
+				&peer_sleep_corner);
+
+	active_corner = max(this_active_corner, peer_active_corner);
+
+	ret = rpmpd_send_corner(pd, QCOM_SMD_RPM_ACTIVE_STATE, active_corner);
+	if (ret)
+		return ret;
+
+	sleep_corner = max(this_sleep_corner, peer_sleep_corner);
+
+	return rpmpd_send_corner(pd, QCOM_SMD_RPM_SLEEP_STATE, sleep_corner);
+}
+
+static int rpmpd_power_on(struct generic_pm_domain *domain)
+{
+	int ret;
+	struct rpmpd *pd = domain_to_rpmpd(domain);
+
+	mutex_lock(&rpmpd_lock);
+
+	ret = rpmpd_send_enable(pd, true);
+	if (ret)
+		goto out;
+
+	pd->enabled = true;
+
+	if (pd->corner)
+		ret = rpmpd_aggregate_corner(pd);
+
+out:
+	mutex_unlock(&rpmpd_lock);
+
+	return ret;
+}
+
+static int rpmpd_power_off(struct generic_pm_domain *domain)
+{
+	int ret;
+	struct rpmpd *pd = domain_to_rpmpd(domain);
+
+	mutex_lock(&rpmpd_lock);
+
+	ret = rpmpd_send_enable(pd, false);
+	if (!ret)
+		pd->enabled = false;
+
+	mutex_unlock(&rpmpd_lock);
+
+	return ret;
+}
+
+static int rpmpd_set_performance(struct generic_pm_domain *domain,
+				 unsigned int state)
+{
+	int ret = 0;
+	struct rpmpd *pd = domain_to_rpmpd(domain);
+
+	if (state > MAX_RPMPD_STATE)
+		goto out;
+
+	mutex_lock(&rpmpd_lock);
+
+	pd->corner = state;
+
+	if (!pd->enabled && pd->key != KEY_FLOOR_CORNER)
+		goto out;
+
+	ret = rpmpd_aggregate_corner(pd);
+
+out:
+	mutex_unlock(&rpmpd_lock);
+
+	return ret;
+}
+
+static unsigned int rpmpd_get_performance(struct generic_pm_domain *genpd,
+					  struct dev_pm_opp *opp)
+{
+	return dev_pm_opp_get_level(opp);
+}
+
+static int rpmpd_probe(struct platform_device *pdev)
+{
+	int i;
+	size_t num;
+	struct genpd_onecell_data *data;
+	struct qcom_smd_rpm *rpm;
+	struct rpmpd **rpmpds;
+	const struct rpmpd_desc *desc;
+
+	rpm = dev_get_drvdata(pdev->dev.parent);
+	if (!rpm) {
+		dev_err(&pdev->dev, "Unable to retrieve handle to RPM\n");
+		return -ENODEV;
+	}
+
+	desc = of_device_get_match_data(&pdev->dev);
+	if (!desc)
+		return -EINVAL;
+
+	rpmpds = desc->rpmpds;
+	num = desc->num_pds;
+
+	data = devm_kzalloc(&pdev->dev, sizeof(*data), GFP_KERNEL);
+	if (!data)
+		return -ENOMEM;
+
+	data->domains = devm_kcalloc(&pdev->dev, num, sizeof(*data->domains),
+				     GFP_KERNEL);
+	data->num_domains = num;
+
+	for (i = 0; i < num; i++) {
+		if (!rpmpds[i]) {
+			dev_warn(&pdev->dev, "rpmpds[] with empty entry at index=%d\n",
+				 i);
+			continue;
+		}
+
+		rpmpds[i]->rpm = rpm;
+		rpmpds[i]->pd.power_off = rpmpd_power_off;
+		rpmpds[i]->pd.power_on = rpmpd_power_on;
+		rpmpds[i]->pd.set_performance_state = rpmpd_set_performance;
+		rpmpds[i]->pd.opp_to_performance_state = rpmpd_get_performance;
+		pm_genpd_init(&rpmpds[i]->pd, NULL, true);
+
+		data->domains[i] = &rpmpds[i]->pd;
+	}
+
+	return of_genpd_add_provider_onecell(pdev->dev.of_node, data);
+}
+
+static struct platform_driver rpmpd_driver = {
+	.driver = {
+		.name = "qcom-rpmpd",
+		.of_match_table = rpmpd_match_table,
+		.suppress_bind_attrs = true,
+	},
+	.probe = rpmpd_probe,
+};
+
+static int __init rpmpd_init(void)
+{
+	return platform_driver_register(&rpmpd_driver);
+}
+core_initcall(rpmpd_init);
drivers/soc/qcom/smd-rpm.c (+1)
```diff
 	{ .compatible = "qcom,rpm-msm8974" },
 	{ .compatible = "qcom,rpm-msm8996" },
 	{ .compatible = "qcom,rpm-msm8998" },
+	{ .compatible = "qcom,rpm-sdm660" },
 	{ .compatible = "qcom,rpm-qcs404" },
 	{}
 };
```
drivers/soc/tegra/fuse/fuse-tegra.c (+9 -3)
```diff
 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
 	fuse->phys = res->start;
 	fuse->base = devm_ioremap_resource(&pdev->dev, res);
-	if (IS_ERR(fuse->base))
-		return PTR_ERR(fuse->base);
+	if (IS_ERR(fuse->base)) {
+		err = PTR_ERR(fuse->base);
+		fuse->base = base;
+		return err;
+	}
 
 	fuse->clk = devm_clk_get(&pdev->dev, "fuse");
 	if (IS_ERR(fuse->clk)) {
 		dev_err(&pdev->dev, "failed to get FUSE clock: %ld",
 			PTR_ERR(fuse->clk));
+		fuse->base = base;
 		return PTR_ERR(fuse->clk);
 	}
···
 
 	if (fuse->soc->probe) {
 		err = fuse->soc->probe(fuse);
-		if (err < 0)
+		if (err < 0) {
+			fuse->base = base;
 			return err;
+		}
 	}
 
 	if (tegra_fuse_create_sysfs(&pdev->dev, fuse->soc->info->size,
```
drivers/soc/tegra/fuse/speedo-tegra210.c (+1 -1)
```diff
 
 	soc_speedo[0] = tegra_fuse_read_early(FUSE_SOC_SPEEDO_0);
 	soc_speedo[1] = tegra_fuse_read_early(FUSE_SOC_SPEEDO_1);
-	soc_speedo[2] = tegra_fuse_read_early(FUSE_CPU_SPEEDO_2);
+	soc_speedo[2] = tegra_fuse_read_early(FUSE_SOC_SPEEDO_2);
 
 	cpu_iddq = tegra_fuse_read_early(FUSE_CPU_IDDQ) * 4;
 	soc_iddq = tegra_fuse_read_early(FUSE_SOC_IDDQ) * 4;
```
drivers/soc/tegra/pmc.c (+288 -136)
```diff
 
 #define pr_fmt(fmt) "tegra-pmc: " fmt
 
-#include <linux/kernel.h>
+#include <linux/arm-smccc.h>
 #include <linux/clk.h>
 #include <linux/clk/tegra.h>
 #include <linux/debugfs.h>
···
 #include <linux/init.h>
 #include <linux/io.h>
 #include <linux/iopoll.h>
-#include <linux/irq.h>
 #include <linux/irqdomain.h>
-#include <linux/of.h>
+#include <linux/irq.h>
+#include <linux/kernel.h>
 #include <linux/of_address.h>
 #include <linux/of_clk.h>
+#include <linux/of.h>
 #include <linux/of_irq.h>
 #include <linux/of_platform.h>
-#include <linux/pinctrl/pinctrl.h>
-#include <linux/pinctrl/pinconf.h>
 #include <linux/pinctrl/pinconf-generic.h>
+#include <linux/pinctrl/pinconf.h>
+#include <linux/pinctrl/pinctrl.h>
 #include <linux/platform_device.h>
 #include <linux/pm_domain.h>
 #include <linux/reboot.h>
···
 #define WAKE_AOWAKE_CTRL 0x4f4
 #define WAKE_AOWAKE_CTRL_INTR_POLARITY BIT(0)
 
+/* for secure PMC */
+#define TEGRA_SMC_PMC		0xc2fffe00
+#define TEGRA_SMC_PMC_READ	0xaa
+#define TEGRA_SMC_PMC_WRITE	0xbb
+
 struct tegra_powergate {
 	struct generic_pm_domain genpd;
 	struct tegra_pmc *pmc;
···
 	bool has_gpu_clamps;
 	bool needs_mbist_war;
 	bool has_impl_33v_pwr;
+	bool maybe_tz_only;
 
 	const struct tegra_io_pad_soc *io_pads;
 	unsigned int num_io_pads;
···
  * struct tegra_pmc - NVIDIA Tegra PMC
  * @dev: pointer to PMC device structure
  * @base: pointer to I/O remapped register region
+ * @wake: pointer to I/O remapped region for WAKE registers
+ * @aotag: pointer to I/O remapped region for AOTAG registers
+ * @scratch: pointer to I/O remapped region for scratch registers
  * @clk: pointer to pclk clock
  * @soc: pointer to SoC data structure
+ * @tz_only: flag specifying if the PMC can only be accessed via TrustZone
  * @debugfs: pointer to debugfs entry
  * @rate: currently configured rate of pclk
  * @suspend_mode: lowest suspend mode available
···
  * @lp0_vec_size: size of the LP0 warm boot code
  * @powergates_available: Bitmap of available power gates
  * @powergates_lock: mutex for power gate register access
+ * @pctl_dev: pin controller exposed by the PMC
+ * @domain: IRQ domain provided by the PMC
+ * @irq: chip implementation for the IRQ domain
  */
 struct tegra_pmc {
 	struct device *dev;
···
 	struct dentry *debugfs;
 
 	const struct tegra_pmc_soc *soc;
+	bool tz_only;
 
 	unsigned long rate;
···
 	return container_of(domain, struct tegra_powergate, genpd);
 }
 
-static u32 tegra_pmc_readl(unsigned long offset)
+static u32 tegra_pmc_readl(struct tegra_pmc *pmc, unsigned long offset)
 {
+	struct arm_smccc_res res;
+
+	if (pmc->tz_only) {
+		arm_smccc_smc(TEGRA_SMC_PMC, TEGRA_SMC_PMC_READ, offset, 0, 0,
+			      0, 0, 0, &res);
+		if (res.a0) {
+			if (pmc->dev)
+				dev_warn(pmc->dev, "%s(): SMC failed: %lu\n",
+					 __func__, res.a0);
+			else
+				pr_warn("%s(): SMC failed: %lu\n", __func__,
+					res.a0);
+		}
+
+		return res.a1;
+	}
+
 	return readl(pmc->base + offset);
 }
 
-static void tegra_pmc_writel(u32 value, unsigned long offset)
+static void tegra_pmc_writel(struct tegra_pmc *pmc, u32 value,
+			     unsigned long offset)
 {
-	writel(value, pmc->base + offset);
+	struct arm_smccc_res res;
+
+	if (pmc->tz_only) {
+		arm_smccc_smc(TEGRA_SMC_PMC, TEGRA_SMC_PMC_WRITE, offset,
+			      value, 0, 0, 0, 0, &res);
+		if (res.a0) {
+			if (pmc->dev)
+				dev_warn(pmc->dev, "%s(): SMC failed: %lu\n",
+					 __func__, res.a0);
+			else
+				pr_warn("%s(): SMC failed: %lu\n", __func__,
+					res.a0);
+		}
+	} else {
+		writel(value, pmc->base + offset);
+	}
 }
 
+static u32 tegra_pmc_scratch_readl(struct tegra_pmc *pmc, unsigned long offset)
+{
+	if (pmc->tz_only)
+		return tegra_pmc_readl(pmc, offset);
+
+	return readl(pmc->scratch + offset);
+}
+
+static void tegra_pmc_scratch_writel(struct tegra_pmc *pmc, u32 value,
+				     unsigned long offset)
+{
+	if (pmc->tz_only)
+		tegra_pmc_writel(pmc, value, offset);
+	else
+		writel(value, pmc->scratch + offset);
+}
+
+/*
+ * TODO Figure out a way to call this with the struct tegra_pmc * passed in.
+ * This currently doesn't work because readx_poll_timeout() can only operate
+ * on functions that take a single argument.
+ */
 static inline bool tegra_powergate_state(int id)
 {
 	if (id == TEGRA_POWERGATE_3D && pmc->soc->has_gpu_clamps)
-		return (tegra_pmc_readl(GPU_RG_CNTRL) & 0x1) == 0;
+		return (tegra_pmc_readl(pmc, GPU_RG_CNTRL) & 0x1) == 0;
 	else
-		return (tegra_pmc_readl(PWRGATE_STATUS) & BIT(id)) != 0;
+		return (tegra_pmc_readl(pmc, PWRGATE_STATUS) & BIT(id)) != 0;
 }
 
-static inline bool tegra_powergate_is_valid(int id)
+static inline bool tegra_powergate_is_valid(struct tegra_pmc *pmc, int id)
 {
 	return (pmc->soc && pmc->soc->powergates[id]);
 }
 
-static inline bool tegra_powergate_is_available(int id)
+static inline bool tegra_powergate_is_available(struct tegra_pmc *pmc, int id)
 {
 	return test_bit(id, pmc->powergates_available);
 }
···
 		return -EINVAL;
 
 	for (i = 0; i < pmc->soc->num_powergates; i++) {
-		if (!tegra_powergate_is_valid(i))
+		if (!tegra_powergate_is_valid(pmc, i))
 			continue;
 
 		if (!strcmp(name, pmc->soc->powergates[i]))
···
 
 /**
  * tegra_powergate_set() - set the state of a partition
+ * @pmc: power management controller
  * @id: partition ID
  * @new_state: new state of the partition
  */
-static int tegra_powergate_set(unsigned int id, bool new_state)
+static int tegra_powergate_set(struct tegra_pmc *pmc, unsigned int id,
+			       bool new_state)
 {
 	bool status;
 	int err;
···
 		return 0;
 	}
 
-	tegra_pmc_writel(PWRGATE_TOGGLE_START | id, PWRGATE_TOGGLE);
+	tegra_pmc_writel(pmc, PWRGATE_TOGGLE_START | id, PWRGATE_TOGGLE);
 
 	err = readx_poll_timeout(tegra_powergate_state, id, status,
 				 status == new_state, 10, 100000);
···
 	return err;
 }
 
-static int __tegra_powergate_remove_clamping(unsigned int id)
+static int __tegra_powergate_remove_clamping(struct tegra_pmc *pmc,
+					     unsigned int id)
 {
 	u32 mask;
···
 	 */
 	if (id == TEGRA_POWERGATE_3D) {
 		if (pmc->soc->has_gpu_clamps) {
-			tegra_pmc_writel(0, GPU_RG_CNTRL);
+			tegra_pmc_writel(pmc, 0, GPU_RG_CNTRL);
 			goto out;
 		}
 	}
···
 	else
 		mask = (1 << id);
 
-	tegra_pmc_writel(mask, REMOVE_CLAMPING);
+	tegra_pmc_writel(pmc, mask, REMOVE_CLAMPING);
 
 out:
 	mutex_unlock(&pmc->powergates_lock);
···
 
 	usleep_range(10, 20);
 
-	err = tegra_powergate_set(pg->id, true);
+	err = tegra_powergate_set(pg->pmc, pg->id, true);
 	if (err < 0)
 		return err;
···
 
 	usleep_range(10, 20);
 
-	err = __tegra_powergate_remove_clamping(pg->id);
+	err = __tegra_powergate_remove_clamping(pg->pmc, pg->id);
 	if (err)
 		goto disable_clks;
···
 	usleep_range(10, 20);
 
 powergate_off:
-	tegra_powergate_set(pg->id, false);
+	tegra_powergate_set(pg->pmc, pg->id, false);
 
 	return err;
 }
···
 
 	usleep_range(10, 20);
 
-	err = tegra_powergate_set(pg->id, false);
+	err = tegra_powergate_set(pg->pmc, pg->id, false);
 	if (err)
 		goto assert_resets;
···
 static int tegra_genpd_power_on(struct generic_pm_domain *domain)
 {
 	struct tegra_powergate *pg = to_powergate(domain);
+	struct device *dev = pg->pmc->dev;
 	int err;
 
 	err = tegra_powergate_power_up(pg, true);
 	if (err)
-		pr_err("failed to turn on PM domain %s: %d\n", pg->genpd.name,
-		       err);
+		dev_err(dev, "failed to turn on PM domain %s: %d\n",
+			pg->genpd.name, err);
 
 	return err;
 }
···
 static int tegra_genpd_power_off(struct generic_pm_domain *domain)
 {
 	struct tegra_powergate *pg = to_powergate(domain);
+	struct device *dev = pg->pmc->dev;
 	int err;
 
 	err = tegra_powergate_power_down(pg);
 	if (err)
-		pr_err("failed to turn off PM domain %s: %d\n",
-		       pg->genpd.name, err);
+		dev_err(dev, "failed to turn off PM domain %s: %d\n",
+			pg->genpd.name, err);
 
 	return err;
 }
···
  */
 int tegra_powergate_power_on(unsigned int id)
 {
-	if (!tegra_powergate_is_available(id))
+	if (!tegra_powergate_is_available(pmc, id))
 		return -EINVAL;
 
-	return tegra_powergate_set(id, true);
+	return tegra_powergate_set(pmc, id, true);
 }
 
 /**
···
  */
 int tegra_powergate_power_off(unsigned int id)
 {
-	if (!tegra_powergate_is_available(id))
+	if (!tegra_powergate_is_available(pmc, id))
 		return -EINVAL;
 
-	return tegra_powergate_set(id, false);
+	return tegra_powergate_set(pmc, id, false);
 }
 EXPORT_SYMBOL(tegra_powergate_power_off);
 
 /**
  * tegra_powergate_is_powered() - check if partition is powered
+ * @pmc: power management controller
  * @id: partition ID
  */
-int tegra_powergate_is_powered(unsigned int id)
+static int tegra_powergate_is_powered(struct tegra_pmc *pmc, unsigned int id)
 {
-	if (!tegra_powergate_is_valid(id))
+	if (!tegra_powergate_is_valid(pmc, id))
 		return -EINVAL;
 
 	return tegra_powergate_state(id);
···
  */
 int tegra_powergate_remove_clamping(unsigned int id)
 {
-	if (!tegra_powergate_is_available(id))
+	if (!tegra_powergate_is_available(pmc, id))
 		return -EINVAL;
 
-	return __tegra_powergate_remove_clamping(id);
+	return __tegra_powergate_remove_clamping(pmc, id);
 }
 EXPORT_SYMBOL(tegra_powergate_remove_clamping);
···
 	struct tegra_powergate *pg;
 	int err;
 
-	if (!tegra_powergate_is_available(id))
+	if (!tegra_powergate_is_available(pmc, id))
 		return -EINVAL;
 
 	pg = kzalloc(sizeof(*pg), GFP_KERNEL);
···
 
 	err = tegra_powergate_power_up(pg, false);
 	if (err)
-		pr_err("failed to turn on partition %d: %d\n", id, err);
+		dev_err(pmc->dev, "failed to turn on partition %d: %d\n", id,
+			err);
 
 	kfree(pg);
···
 
 /**
  * tegra_get_cpu_powergate_id() - convert from CPU ID to partition ID
+ * @pmc: power management controller
  * @cpuid: CPU partition ID
  *
  * Returns the partition ID corresponding to the CPU partition ID or a
  * negative error code on failure.
  */
-static int tegra_get_cpu_powergate_id(unsigned int cpuid)
+static int tegra_get_cpu_powergate_id(struct tegra_pmc *pmc,
+				      unsigned int cpuid)
 {
 	if (pmc->soc && cpuid < pmc->soc->num_cpu_powergates)
 		return pmc->soc->cpu_powergates[cpuid];
···
 {
 	int id;
 
-	id = tegra_get_cpu_powergate_id(cpuid);
+	id = tegra_get_cpu_powergate_id(pmc, cpuid);
 	if (id < 0)
 		return false;
 
-	return tegra_powergate_is_powered(id);
+	return tegra_powergate_is_powered(pmc, id);
 }
 
 /**
···
 {
 	int id;
 
-	id = tegra_get_cpu_powergate_id(cpuid);
+	id = tegra_get_cpu_powergate_id(pmc, cpuid);
 	if (id < 0)
 		return id;
 
-	return tegra_powergate_set(id, true);
+	return tegra_powergate_set(pmc, id, true);
 }
 
 /**
···
 {
 	int id;
 
-	id = tegra_get_cpu_powergate_id(cpuid);
+	id = tegra_get_cpu_powergate_id(pmc, cpuid);
 	if (id < 0)
 		return id;
···
 	const char *cmd = data;
 	u32 value;
 
-	value = readl(pmc->scratch + pmc->soc->regs->scratch0);
+	value = tegra_pmc_scratch_readl(pmc, pmc->soc->regs->scratch0);
 	value &= ~PMC_SCRATCH0_MODE_MASK;
 
 	if (cmd) {
···
 		value |= PMC_SCRATCH0_MODE_RCM;
 	}
 
-	writel(value, pmc->scratch + pmc->soc->regs->scratch0);
+	tegra_pmc_scratch_writel(pmc, value, pmc->soc->regs->scratch0);
 
 	/* reset everything but PMC_SCRATCH0 and PMC_RST_STATUS */
-	value = tegra_pmc_readl(PMC_CNTRL);
+	value = tegra_pmc_readl(pmc, PMC_CNTRL);
 	value |= PMC_CNTRL_MAIN_RST;
-	tegra_pmc_writel(value, PMC_CNTRL);
+	tegra_pmc_writel(pmc, value, PMC_CNTRL);
 
 	return NOTIFY_DONE;
 }
···
 	seq_printf(s, "------------------\n");
 
 	for (i = 0; i < pmc->soc->num_powergates; i++) {
-		status = tegra_powergate_is_powered(i);
+		status = tegra_powergate_is_powered(pmc, i);
 		if (status < 0)
 			continue;
···
 static int tegra_powergate_of_get_resets(struct tegra_powergate *pg,
 					 struct device_node *np, bool off)
 {
+	struct device *dev = pg->pmc->dev;
 	int err;
 
 	pg->reset = of_reset_control_array_get_exclusive(np);
 	if (IS_ERR(pg->reset)) {
 		err = PTR_ERR(pg->reset);
-		pr_err("failed to get device resets: %d\n", err);
+		dev_err(dev, "failed to get device resets: %d\n", err);
 		return err;
 	}
···
 
 static void tegra_powergate_add(struct tegra_pmc *pmc, struct device_node *np)
 {
+	struct device *dev = pmc->dev;
 	struct tegra_powergate *pg;
 	int id, err;
 	bool off;
···
 
 	id = tegra_powergate_lookup(pmc, np->name);
 	if (id < 0) {
-		pr_err("powergate lookup failed for %pOFn: %d\n", np, id);
+		dev_err(dev, "powergate lookup failed for %pOFn: %d\n", np, id);
 		goto free_mem;
 	}
···
 	pg->genpd.power_on = tegra_genpd_power_on;
 	pg->pmc = pmc;
 
-	off = !tegra_powergate_is_powered(pg->id);
+	off = !tegra_powergate_is_powered(pmc, pg->id);
 
 	err = tegra_powergate_of_get_clks(pg, np);
 	if (err < 0) {
-		pr_err("failed to get clocks for %pOFn: %d\n", np, err);
+		dev_err(dev, "failed to get clocks for %pOFn: %d\n", np, err);
 		goto set_available;
 	}
 
 	err = tegra_powergate_of_get_resets(pg, np, off);
 	if (err < 0) {
-		pr_err("failed to get resets for %pOFn: %d\n", np, err);
+		dev_err(dev, "failed to get resets for %pOFn: %d\n", np, err);
 		goto remove_clks;
 	}
···
 
 	err = pm_genpd_init(&pg->genpd, NULL, off);
 	if (err < 0) {
-		pr_err("failed to initialise PM domain %pOFn: %d\n", np,
-		       err);
+		dev_err(dev, "failed to initialise PM domain %pOFn: %d\n", np,
+			err);
 		goto remove_resets;
 	}
 
 	err = of_genpd_add_provider_simple(np, &pg->genpd);
 	if (err < 0) {
-		pr_err("failed to add PM domain provider for %pOFn: %d\n",
-		       np, err);
+		dev_err(dev, "failed to add PM domain provider for %pOFn: %d\n",
+			np, err);
 		goto remove_genpd;
 	}
 
-	pr_debug("added PM domain %s\n", pg->genpd.name);
+	dev_dbg(dev, "added PM domain %s\n", pg->genpd.name);
 
 	return;
···
 	return NULL;
 }
 
-static int tegra_io_pad_get_dpd_register_bit(enum tegra_io_pad id,
+static int tegra_io_pad_get_dpd_register_bit(struct tegra_pmc *pmc,
+					     enum tegra_io_pad id,
 					     unsigned long *request,
 					     unsigned long *status,
 					     u32 *mask)
···
 
 	pad = tegra_io_pad_find(pmc, id);
 	if (!pad) {
-		pr_err("invalid I/O pad ID %u\n", id);
+		dev_err(pmc->dev, "invalid I/O pad ID %u\n", id);
 		return -ENOENT;
 	}
···
 	return 0;
 }
 
-static int tegra_io_pad_prepare(enum tegra_io_pad id, unsigned long *request,
-				unsigned long *status, u32 *mask)
+static int tegra_io_pad_prepare(struct tegra_pmc *pmc, enum tegra_io_pad id,
+				unsigned long *request, unsigned long *status,
+				u32 *mask)
 {
 	unsigned long rate, value;
 	int err;
 
-	err = tegra_io_pad_get_dpd_register_bit(id, request, status, mask);
+	err = tegra_io_pad_get_dpd_register_bit(pmc, id, request, status, mask);
 	if (err)
 		return err;
 
 	if (pmc->clk) {
 		rate = clk_get_rate(pmc->clk);
 		if (!rate) {
-			pr_err("failed to get clock rate\n");
+			dev_err(pmc->dev, "failed to get clock rate\n");
 			return -ENODEV;
 		}
 
-		tegra_pmc_writel(DPD_SAMPLE_ENABLE, DPD_SAMPLE);
+		tegra_pmc_writel(pmc, DPD_SAMPLE_ENABLE, DPD_SAMPLE);
 
 		/* must be at least 200 ns, in APB (PCLK) clock cycles */
 		value = DIV_ROUND_UP(1000000000, rate);
 		value = DIV_ROUND_UP(200, value);
-		tegra_pmc_writel(value, SEL_DPD_TIM);
+		tegra_pmc_writel(pmc, value, SEL_DPD_TIM);
 	}
 
 	return 0;
 }
 
-static int tegra_io_pad_poll(unsigned long offset, u32 mask,
-			     u32 val, unsigned long timeout)
+static int tegra_io_pad_poll(struct tegra_pmc *pmc, unsigned long offset,
+			     u32 mask, u32 val, unsigned long timeout)
 {
 	u32 value;
 
 	timeout = jiffies + msecs_to_jiffies(timeout);
 
 	while (time_after(timeout, jiffies)) {
-		value = tegra_pmc_readl(offset);
+		value = tegra_pmc_readl(pmc, offset);
 		if ((value & mask) == val)
 			return 0;
···
 	return -ETIMEDOUT;
 }
 
-static void tegra_io_pad_unprepare(void)
+static void tegra_io_pad_unprepare(struct tegra_pmc *pmc)
 {
 	if (pmc->clk)
-		tegra_pmc_writel(DPD_SAMPLE_DISABLE, DPD_SAMPLE);
+		tegra_pmc_writel(pmc, DPD_SAMPLE_DISABLE, DPD_SAMPLE);
 }
 
 /**
···
 
 	mutex_lock(&pmc->powergates_lock);
 
-	err = tegra_io_pad_prepare(id, &request, &status, &mask);
+	err = tegra_io_pad_prepare(pmc, id, &request, &status, &mask);
 	if (err < 0) {
-		pr_err("failed to prepare I/O pad: %d\n", err);
+		dev_err(pmc->dev, "failed to prepare I/O pad: %d\n", err);
 		goto unlock;
 	}
 
-	tegra_pmc_writel(IO_DPD_REQ_CODE_OFF | mask, request);
+	tegra_pmc_writel(pmc, IO_DPD_REQ_CODE_OFF | mask, request);
 
-	err = tegra_io_pad_poll(status, mask, 0, 250);
+	err = tegra_io_pad_poll(pmc, status, mask, 0, 250);
 	if (err < 0) {
-		pr_err("failed to enable I/O pad: %d\n", err);
+		dev_err(pmc->dev, "failed to enable I/O pad: %d\n", err);
 		goto unlock;
 	}
 
-	tegra_io_pad_unprepare();
+	tegra_io_pad_unprepare(pmc);
 
 unlock:
 	mutex_unlock(&pmc->powergates_lock);
···
 
 	mutex_lock(&pmc->powergates_lock);
 
-	err = tegra_io_pad_prepare(id, &request, &status, &mask);
+	err = tegra_io_pad_prepare(pmc, id, &request, &status, &mask);
 	if (err < 0) {
-		pr_err("failed to prepare I/O pad: %d\n", err);
+		dev_err(pmc->dev, "failed to prepare I/O pad: %d\n", err);
 		goto unlock;
 	}
 
-	tegra_pmc_writel(IO_DPD_REQ_CODE_ON | mask, request);
+	tegra_pmc_writel(pmc, IO_DPD_REQ_CODE_ON | mask, request);
 
-	err = tegra_io_pad_poll(status, mask, mask, 250);
+	err = tegra_io_pad_poll(pmc, status, mask, mask, 250);
 	if (err < 0) {
-		pr_err("failed to disable I/O pad: %d\n", err);
+		dev_err(pmc->dev, "failed to disable I/O pad: %d\n", err);
 		goto unlock;
 	}
 
-	tegra_io_pad_unprepare();
+	tegra_io_pad_unprepare(pmc);
 
 unlock:
 	mutex_unlock(&pmc->powergates_lock);
···
 }
 EXPORT_SYMBOL(tegra_io_pad_power_disable);
 
-static int tegra_io_pad_is_powered(enum tegra_io_pad id)
+static int tegra_io_pad_is_powered(struct tegra_pmc *pmc, enum tegra_io_pad id)
 {
 	unsigned long request, status;
 	u32 mask, value;
 	int err;
 
-	err = tegra_io_pad_get_dpd_register_bit(id, &request, &status, &mask);
+	err = tegra_io_pad_get_dpd_register_bit(pmc, id, &request, &status,
+						&mask);
 	if (err)
 		return err;
 
-	value = tegra_pmc_readl(status);
+	value = tegra_pmc_readl(pmc, status);
 
 	return !(value & mask);
 }
 
-static int tegra_io_pad_set_voltage(enum tegra_io_pad id, int voltage)
+static int tegra_io_pad_set_voltage(struct tegra_pmc *pmc, enum tegra_io_pad id,
+				    int voltage)
 {
 	const struct tegra_io_pad_soc *pad;
 	u32 value;
···
 	mutex_lock(&pmc->powergates_lock);
 
 	if (pmc->soc->has_impl_33v_pwr) {
-		value = tegra_pmc_readl(PMC_IMPL_E_33V_PWR);
+		value = tegra_pmc_readl(pmc, PMC_IMPL_E_33V_PWR);
 
 		if (voltage == TEGRA_IO_PAD_VOLTAGE_1V8)
 			value &= ~BIT(pad->voltage);
 		else
 			value |= BIT(pad->voltage);
 
-		tegra_pmc_writel(value, PMC_IMPL_E_33V_PWR);
+		tegra_pmc_writel(pmc, value, PMC_IMPL_E_33V_PWR);
 	} else {
 		/* write-enable PMC_PWR_DET_VALUE[pad->voltage] */
-		value = tegra_pmc_readl(PMC_PWR_DET);
+		value = tegra_pmc_readl(pmc, PMC_PWR_DET);
 		value |= BIT(pad->voltage);
-		tegra_pmc_writel(value, PMC_PWR_DET);
+		tegra_pmc_writel(pmc, value, PMC_PWR_DET);
 
 		/* update I/O voltage */
-		value = tegra_pmc_readl(PMC_PWR_DET_VALUE);
+		value = tegra_pmc_readl(pmc, PMC_PWR_DET_VALUE);
 
 		if (voltage == TEGRA_IO_PAD_VOLTAGE_1V8)
 			value &= ~BIT(pad->voltage);
 		else
 			value |= BIT(pad->voltage);
 
-		tegra_pmc_writel(value, PMC_PWR_DET_VALUE);
+		tegra_pmc_writel(pmc, value, PMC_PWR_DET_VALUE);
 	}
 
 	mutex_unlock(&pmc->powergates_lock);
···
 	return 0;
 }
 
-static int tegra_io_pad_get_voltage(enum tegra_io_pad id)
+static int tegra_io_pad_get_voltage(struct tegra_pmc *pmc, enum tegra_io_pad id)
 {
 	const struct tegra_io_pad_soc *pad;
 	u32 value;
···
 		return -ENOTSUPP;
 
 	if (pmc->soc->has_impl_33v_pwr)
-		value = tegra_pmc_readl(PMC_IMPL_E_33V_PWR);
+		value = tegra_pmc_readl(pmc, PMC_IMPL_E_33V_PWR);
 	else
-		value = tegra_pmc_readl(PMC_PWR_DET_VALUE);
+		value = tegra_pmc_readl(pmc, PMC_PWR_DET_VALUE);
 
 	if ((value & BIT(pad->voltage)) == 0)
 		return TEGRA_IO_PAD_VOLTAGE_1V8;
···
 
 	ticks = pmc->cpu_good_time * rate + USEC_PER_SEC - 1;
 	do_div(ticks, USEC_PER_SEC);
-	tegra_pmc_writel(ticks, PMC_CPUPWRGOOD_TIMER);
+	tegra_pmc_writel(pmc, ticks, PMC_CPUPWRGOOD_TIMER);
 
 	ticks = pmc->cpu_off_time * rate + USEC_PER_SEC - 1;
 	do_div(ticks, USEC_PER_SEC);
-	tegra_pmc_writel(ticks, PMC_CPUPWROFF_TIMER);
+	tegra_pmc_writel(pmc, ticks, PMC_CPUPWROFF_TIMER);
 
 	wmb();
 
 		pmc->rate = rate;
 	}
 
-	value = tegra_pmc_readl(PMC_CNTRL);
+	value = tegra_pmc_readl(pmc, PMC_CNTRL);
 	value &= ~PMC_CNTRL_SIDE_EFFECT_LP0;
 	value |= PMC_CNTRL_CPU_PWRREQ_OE;
-	tegra_pmc_writel(value, PMC_CNTRL);
+	tegra_pmc_writel(pmc, value, PMC_CNTRL);
 }
 #endif
···
 	if (of_property_read_u32(np, "nvidia,pinmux-id", &pinmux))
 		pinmux = 0;
 
-	value = tegra_pmc_readl(PMC_SENSOR_CTRL);
+	value = tegra_pmc_readl(pmc, PMC_SENSOR_CTRL);
 	value |= PMC_SENSOR_CTRL_SCRATCH_WRITE;
-	tegra_pmc_writel(value, PMC_SENSOR_CTRL);
+	tegra_pmc_writel(pmc, value, PMC_SENSOR_CTRL);
 
 	value = (reg_data << PMC_SCRATCH54_DATA_SHIFT) |
 		(reg_addr << PMC_SCRATCH54_ADDR_SHIFT);
-	tegra_pmc_writel(value, PMC_SCRATCH54);
+	tegra_pmc_writel(pmc, value, PMC_SCRATCH54);
 
 	value = PMC_SCRATCH55_RESET_TEGRA;
 	value |= ctrl_id << PMC_SCRATCH55_CNTRL_ID_SHIFT;
···
 
 	value |= checksum << PMC_SCRATCH55_CHECKSUM_SHIFT;
 
-	tegra_pmc_writel(value, PMC_SCRATCH55);
+	tegra_pmc_writel(pmc, value, PMC_SCRATCH55);
 
-	value = tegra_pmc_readl(PMC_SENSOR_CTRL);
+	value = tegra_pmc_readl(pmc, PMC_SENSOR_CTRL);
 	value |= PMC_SENSOR_CTRL_ENABLE_RST;
-	tegra_pmc_writel(value, PMC_SENSOR_CTRL);
+	tegra_pmc_writel(pmc, value, PMC_SENSOR_CTRL);
 
 	dev_info(pmc->dev, "emergency thermal reset enabled\n");
···
 
 static int tegra_io_pad_pinctrl_get_groups_count(struct pinctrl_dev *pctl_dev)
 {
+	struct tegra_pmc *pmc = pinctrl_dev_get_drvdata(pctl_dev);
+
 	return pmc->soc->num_io_pads;
 }
 
-static const char *tegra_io_pad_pinctrl_get_group_name(
-	struct pinctrl_dev *pctl, unsigned int group)
+static const char *tegra_io_pad_pinctrl_get_group_name(struct pinctrl_dev *pctl,
+						       unsigned int group)
 {
+	struct tegra_pmc *pmc = pinctrl_dev_get_drvdata(pctl);
+
 	return pmc->soc->io_pads[group].name;
 }
···
 					   const unsigned int **pins,
 					   unsigned int *num_pins)
 {
+	struct tegra_pmc *pmc = pinctrl_dev_get_drvdata(pctl_dev);
+
 	*pins = &pmc->soc->io_pads[group].id;
 	*num_pins = 1;
+
 	return 0;
 }
···
 static int tegra_io_pad_pinconf_get(struct pinctrl_dev *pctl_dev,
 				    unsigned int pin, unsigned long *config)
 {
-	const struct tegra_io_pad_soc *pad = tegra_io_pad_find(pmc, pin);
 	enum pin_config_param param = pinconf_to_config_param(*config);
+	struct tegra_pmc *pmc = pinctrl_dev_get_drvdata(pctl_dev);
+	const struct tegra_io_pad_soc *pad;
 	int ret;
 	u32 arg;
 
+	pad = tegra_io_pad_find(pmc, pin);
 	if (!pad)
 		return -EINVAL;
 
 	switch (param) {
 	case PIN_CONFIG_POWER_SOURCE:
-		ret = tegra_io_pad_get_voltage(pad->id);
+		ret = tegra_io_pad_get_voltage(pmc, pad->id);
 		if (ret < 0)
 			return ret;
+
 		arg = ret;
 		break;
+
 	case PIN_CONFIG_LOW_POWER_MODE:
-		ret = tegra_io_pad_is_powered(pad->id);
+		ret = tegra_io_pad_is_powered(pmc, pad->id);
 		if (ret < 0)
 			return ret;
+
 		arg = !ret;
 		break;
+
 	default:
 		return -EINVAL;
 	}
···
 				    unsigned int pin, unsigned long *configs,
 				    unsigned int num_configs)
 {
-	const struct tegra_io_pad_soc *pad = tegra_io_pad_find(pmc, pin);
+	struct tegra_pmc *pmc = pinctrl_dev_get_drvdata(pctl_dev);
+	const struct tegra_io_pad_soc *pad;
 	enum pin_config_param param;
 	unsigned int i;
 	int err;
 	u32 arg;
 
+	pad = tegra_io_pad_find(pmc, pin);
 	if (!pad)
 		return -EINVAL;
···
 			if (arg != TEGRA_IO_PAD_VOLTAGE_1V8 &&
 			    arg != TEGRA_IO_PAD_VOLTAGE_3V3)
 				return -EINVAL;
-			err = tegra_io_pad_set_voltage(pad->id, arg);
+			err = tegra_io_pad_set_voltage(pmc, pad->id, arg);
 			if (err)
 				return err;
 			break;
···
 
 static int tegra_pmc_pinctrl_init(struct tegra_pmc *pmc)
 {
-	int err = 0;
+	int err;
 
 	if (!pmc->soc->num_pin_descs)
 		return 0;
···
 					      pmc);
 	if (IS_ERR(pmc->pctl_dev)) {
 		err = PTR_ERR(pmc->pctl_dev);
-		dev_err(pmc->dev, "unable to register pinctrl, %d\n", err);
+		dev_err(pmc->dev, "failed to register pin controller: %d\n",
+			err);
+		return err;
 	}
 
-	return err;
+	return 0;
 }
 
 static ssize_t reset_reason_show(struct device *dev,
-				struct device_attribute *attr, char *buf)
+				 struct device_attribute *attr, char *buf)
 {
 	u32 value, rst_src;
 
-	value = tegra_pmc_readl(pmc->soc->regs->rst_status);
+	value = tegra_pmc_readl(pmc, pmc->soc->regs->rst_status);
 	rst_src = (value & pmc->soc->regs->rst_source_mask) >>
 		  pmc->soc->regs->rst_source_shift;
···
 static DEVICE_ATTR_RO(reset_reason);
 
 static ssize_t reset_level_show(struct device *dev,
-			       struct device_attribute *attr, char *buf)
+				struct device_attribute *attr, char *buf)
 {
 	u32 value, rst_lvl;
 
-	value = tegra_pmc_readl(pmc->soc->regs->rst_status);
+	value = tegra_pmc_readl(pmc, pmc->soc->regs->rst_status);
 	rst_lvl = (value & pmc->soc->regs->rst_level_mask) >>
 		  pmc->soc->regs->rst_level_shift;
···
 		err = device_create_file(dev, &dev_attr_reset_reason);
 		if (err < 0)
 			dev_warn(dev,
-				"failed to create attr \"reset_reason\": %d\n",
-				err);
+				 "failed to create attr \"reset_reason\": %d\n",
+				 err);
 	}
 
 	if (pmc->soc->reset_levels) {
 		err = device_create_file(dev, &dev_attr_reset_level);
 		if (err < 0)
 			dev_warn(dev,
-				"failed to create attr \"reset_level\": %d\n",
-				err);
+				 "failed to create attr \"reset_level\": %d\n",
+				 err);
 	}
 }
···
 	pmc->base = base;
 	mutex_unlock(&pmc->powergates_lock);
 
+	platform_set_drvdata(pdev, pmc);
+
 	return 0;
 
 cleanup_restart_handler:
···
 #if defined(CONFIG_PM_SLEEP) && defined(CONFIG_ARM)
 static int tegra_pmc_suspend(struct device *dev)
 {
-	tegra_pmc_writel(virt_to_phys(tegra_resume), PMC_SCRATCH41);
+	struct tegra_pmc *pmc = dev_get_drvdata(dev);
+
+	tegra_pmc_writel(pmc, virt_to_phys(tegra_resume), PMC_SCRATCH41);
 
 	return 0;
 }
 
 static int tegra_pmc_resume(struct device *dev)
 {
-	tegra_pmc_writel(0x0, PMC_SCRATCH41);
+	struct tegra_pmc *pmc = dev_get_drvdata(dev);
+
+	tegra_pmc_writel(pmc, 0x0, PMC_SCRATCH41);
 
 	return 0;
 }
···
 	u32 value;
 
 	/* Always enable CPU power request */
-	value = tegra_pmc_readl(PMC_CNTRL);
+	value = tegra_pmc_readl(pmc, PMC_CNTRL);
 	value |= PMC_CNTRL_CPU_PWRREQ_OE;
-	tegra_pmc_writel(value, PMC_CNTRL);
+	tegra_pmc_writel(pmc, value, PMC_CNTRL);
 
-	value = tegra_pmc_readl(PMC_CNTRL);
+	value = tegra_pmc_readl(pmc, PMC_CNTRL);
 
 	if (pmc->sysclkreq_high)
 		value &= ~PMC_CNTRL_SYSCLK_POLARITY;
···
 		value |= PMC_CNTRL_SYSCLK_POLARITY;
 
 	/* configure the output polarity while the request is tristated */
-	tegra_pmc_writel(value, PMC_CNTRL);
+	tegra_pmc_writel(pmc, value, PMC_CNTRL);
 
 	/* now enable the request */
-	value = tegra_pmc_readl(PMC_CNTRL);
+	value = tegra_pmc_readl(pmc, PMC_CNTRL);
 	value |= PMC_CNTRL_SYSCLK_OE;
-	tegra_pmc_writel(value, PMC_CNTRL);
+	tegra_pmc_writel(pmc, value, PMC_CNTRL);
 }
 
 static void tegra20_pmc_setup_irq_polarity(struct tegra_pmc *pmc,
···
 {
 	u32 value;
 
-	value = tegra_pmc_readl(PMC_CNTRL);
+	value = tegra_pmc_readl(pmc, PMC_CNTRL);
 
 	if (invert)
 		value |= PMC_CNTRL_INTR_POLARITY;
 	else
 		value &= ~PMC_CNTRL_INTR_POLARITY;
 
-	tegra_pmc_writel(value, PMC_CNTRL);
+	tegra_pmc_writel(pmc, value, PMC_CNTRL);
 }
 
 static const struct tegra_pmc_soc tegra20_pmc_soc = {
···
 	.cpu_powergates = NULL,
 	.has_tsense_reset = false,
 	.has_gpu_clamps = false,
+	.needs_mbist_war = false,
+	.has_impl_33v_pwr = false,
+	.maybe_tz_only = false,
 	.num_io_pads = 0,
 	.io_pads = NULL,
 	.num_pin_descs = 0,
···
```
.cpu_powergates = tegra30_cpu_powergates, 2175 2064 .has_tsense_reset = true, 2176 2065 .has_gpu_clamps = false, 2066 + .needs_mbist_war = false, 2177 2067 .has_impl_33v_pwr = false, 2068 + .maybe_tz_only = false, 2178 2069 .num_io_pads = 0, 2179 2070 .io_pads = NULL, 2180 2071 .num_pin_descs = 0, ··· 2225 2112 .cpu_powergates = tegra114_cpu_powergates, 2226 2113 .has_tsense_reset = true, 2227 2114 .has_gpu_clamps = false, 2115 + .needs_mbist_war = false, 2228 2116 .has_impl_33v_pwr = false, 2117 + .maybe_tz_only = false, 2229 2118 .num_io_pads = 0, 2230 2119 .io_pads = NULL, 2231 2120 .num_pin_descs = 0, ··· 2336 2221 .cpu_powergates = tegra124_cpu_powergates, 2337 2222 .has_tsense_reset = true, 2338 2223 .has_gpu_clamps = true, 2224 + .needs_mbist_war = false, 2339 2225 .has_impl_33v_pwr = false, 2226 + .maybe_tz_only = false, 2340 2227 .num_io_pads = ARRAY_SIZE(tegra124_io_pads), 2341 2228 .io_pads = tegra124_io_pads, 2342 2229 .num_pin_descs = ARRAY_SIZE(tegra124_pin_descs), ··· 2442 2325 .cpu_powergates = tegra210_cpu_powergates, 2443 2326 .has_tsense_reset = true, 2444 2327 .has_gpu_clamps = true, 2445 - .has_impl_33v_pwr = false, 2446 2328 .needs_mbist_war = true, 2329 + .has_impl_33v_pwr = false, 2330 + .maybe_tz_only = true, 2447 2331 .num_io_pads = ARRAY_SIZE(tegra210_io_pads), 2448 2332 .io_pads = tegra210_io_pads, 2449 2333 .num_pin_descs = ARRAY_SIZE(tegra210_pin_descs), ··· 2531 2413 2532 2414 index = of_property_match_string(np, "reg-names", "wake"); 2533 2415 if (index < 0) { 2534 - pr_err("failed to find PMC wake registers\n"); 2416 + dev_err(pmc->dev, "failed to find PMC wake registers\n"); 2535 2417 return; 2536 2418 } 2537 2419 ··· 2539 2421 2540 2422 wake = ioremap_nocache(regs.start, resource_size(&regs)); 2541 2423 if (!wake) { 2542 - pr_err("failed to map PMC wake registers\n"); 2424 + dev_err(pmc->dev, "failed to map PMC wake registers\n"); 2543 2425 return; 2544 2426 } 2545 2427 ··· 2556 2438 } 2557 2439 2558 2440 static const struct 
tegra_wake_event tegra186_wake_events[] = { 2559 - TEGRA_WAKE_GPIO("power", 29, 1, TEGRA_AON_GPIO(FF, 0)), 2441 + TEGRA_WAKE_GPIO("power", 29, 1, TEGRA186_AON_GPIO(FF, 0)), 2560 2442 TEGRA_WAKE_IRQ("rtc", 73, 10), 2561 2443 }; 2562 2444 ··· 2567 2449 .cpu_powergates = NULL, 2568 2450 .has_tsense_reset = false, 2569 2451 .has_gpu_clamps = false, 2452 + .needs_mbist_war = false, 2570 2453 .has_impl_33v_pwr = true, 2454 + .maybe_tz_only = false, 2571 2455 .num_io_pads = ARRAY_SIZE(tegra186_io_pads), 2572 2456 .io_pads = tegra186_io_pads, 2573 2457 .num_pin_descs = ARRAY_SIZE(tegra186_pin_descs), ··· 2647 2527 .cpu_powergates = NULL, 2648 2528 .has_tsense_reset = false, 2649 2529 .has_gpu_clamps = false, 2530 + .needs_mbist_war = false, 2531 + .has_impl_33v_pwr = false, 2532 + .maybe_tz_only = false, 2650 2533 .num_io_pads = ARRAY_SIZE(tegra194_io_pads), 2651 2534 .io_pads = tegra194_io_pads, 2652 2535 .regs = &tegra186_pmc_regs, ··· 2683 2560 .probe = tegra_pmc_probe, 2684 2561 }; 2685 2562 builtin_platform_driver(tegra_pmc_driver); 2563 + 2564 + static bool __init tegra_pmc_detect_tz_only(struct tegra_pmc *pmc) 2565 + { 2566 + u32 value, saved; 2567 + 2568 + saved = readl(pmc->base + pmc->soc->regs->scratch0); 2569 + value = saved ^ 0xffffffff; 2570 + 2571 + if (value == 0xffffffff) 2572 + value = 0xdeadbeef; 2573 + 2574 + /* write pattern and read it back */ 2575 + writel(value, pmc->base + pmc->soc->regs->scratch0); 2576 + value = readl(pmc->base + pmc->soc->regs->scratch0); 2577 + 2578 + /* if we read all-zeroes, access is restricted to TZ only */ 2579 + if (value == 0) { 2580 + pr_info("access to PMC is restricted to TZ\n"); 2581 + return true; 2582 + } 2583 + 2584 + /* restore original value */ 2585 + writel(saved, pmc->base + pmc->soc->regs->scratch0); 2586 + 2587 + return false; 2588 + } 2686 2589 2687 2590 /* 2688 2591 * Early initialization to allow access to registers in the very early boot ··· 2771 2622 2772 2623 if (np) { 2773 2624 pmc->soc = match->data; 
2625 + 2626 + if (pmc->soc->maybe_tz_only) 2627 + pmc->tz_only = tegra_pmc_detect_tz_only(pmc); 2774 2628 2775 2629 tegra_powergate_init(pmc, np); 2776 2630
+1 -1
drivers/soc/ti/knav_dma.c
··· 598 598 599 599 INIT_LIST_HEAD(&chan->list); 600 600 chan->dma = dma; 601 - chan->direction = DMA_NONE; 601 + chan->direction = DMA_TRANS_NONE; 602 602 atomic_set(&chan->ref_count, 0); 603 603 spin_lock_init(&chan->lock); 604 604
+20
drivers/soc/xilinx/Kconfig
··· 17 17 To compile this driver as a module, choose M here: the 18 18 module will be called xlnx_vcu. 19 19 20 + config ZYNQMP_POWER 21 + bool "Enable Xilinx Zynq MPSoC Power Management driver" 22 + depends on PM && ARCH_ZYNQMP 23 + default y 24 + help 25 + Say yes to enable power management support for the ZynqMP SoC. 26 + This driver uses the firmware driver as an interface for power 27 + management requests to the firmware. It registers an ISR to handle 28 + power management callbacks from firmware. 29 + If in doubt, say N. 30 + 31 + config ZYNQMP_PM_DOMAINS 32 + bool "Enable Zynq MPSoC generic PM domains" 33 + default y 34 + depends on PM && ARCH_ZYNQMP && ZYNQMP_FIRMWARE 35 + select PM_GENERIC_DOMAINS 36 + help 37 + Say yes to enable device power management through PM domains. 38 + If in doubt, say N. 39 + 20 40 endmenu
+2
drivers/soc/xilinx/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0 2 2 obj-$(CONFIG_XILINX_VCU) += xlnx_vcu.o 3 + obj-$(CONFIG_ZYNQMP_POWER) += zynqmp_power.o 4 + obj-$(CONFIG_ZYNQMP_PM_DOMAINS) += zynqmp_pm_domains.o
+321
drivers/soc/xilinx/zynqmp_pm_domains.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * ZynqMP Generic PM domain support 4 + * 5 + * Copyright (C) 2015-2018 Xilinx, Inc. 6 + * 7 + * Davorin Mista <davorin.mista@aggios.com> 8 + * Jolly Shah <jollys@xilinx.com> 9 + * Rajan Vaja <rajan.vaja@xilinx.com> 10 + */ 11 + 12 + #include <linux/err.h> 13 + #include <linux/list.h> 14 + #include <linux/module.h> 15 + #include <linux/of_platform.h> 16 + #include <linux/platform_device.h> 17 + #include <linux/pm_domain.h> 18 + #include <linux/slab.h> 19 + 20 + #include <linux/firmware/xlnx-zynqmp.h> 21 + 22 + #define ZYNQMP_NUM_DOMAINS (100) 23 + /* Flag stating if PM nodes mapped to the PM domain have been requested */ 24 + #define ZYNQMP_PM_DOMAIN_REQUESTED BIT(0) 25 + 26 + /** 27 + * struct zynqmp_pm_domain - Wrapper around struct generic_pm_domain 28 + * @gpd: Generic power domain 29 + * @node_id: PM node ID corresponding to device inside PM domain 30 + * @flags: ZynqMP PM domain flags 31 + */ 32 + struct zynqmp_pm_domain { 33 + struct generic_pm_domain gpd; 34 + u32 node_id; 35 + u8 flags; 36 + }; 37 + 38 + /** 39 + * zynqmp_gpd_is_active_wakeup_path() - Check if device is in wakeup source 40 + * path 41 + * @dev: Device to check for wakeup source path 42 + * @not_used: Data member (not required) 43 + * 44 + * This function checks the device's child hierarchy to see if any device is 45 + * set as a wakeup source. 46 + * 47 + * Return: 1 if device is in wakeup source path else 0 48 + */ 49 + static int zynqmp_gpd_is_active_wakeup_path(struct device *dev, void *not_used) 50 + { 51 + int may_wakeup; 52 + 53 + may_wakeup = device_may_wakeup(dev); 54 + if (may_wakeup) 55 + return may_wakeup; 56 + 57 + return device_for_each_child(dev, NULL, 58 + zynqmp_gpd_is_active_wakeup_path); 59 + } 60 + 61 + /** 62 + * zynqmp_gpd_power_on() - Power on PM domain 63 + * @domain: Generic PM domain 64 + * 65 + * This function is called before devices inside a PM domain are resumed, to 66 + * power on PM domain. 
67 + * 68 + * Return: 0 on success, error code otherwise 69 + */ 70 + static int zynqmp_gpd_power_on(struct generic_pm_domain *domain) 71 + { 72 + int ret; 73 + struct zynqmp_pm_domain *pd; 74 + const struct zynqmp_eemi_ops *eemi_ops = zynqmp_pm_get_eemi_ops(); 75 + 76 + if (!eemi_ops || !eemi_ops->set_requirement) 77 + return -ENXIO; 78 + 79 + pd = container_of(domain, struct zynqmp_pm_domain, gpd); 80 + ret = eemi_ops->set_requirement(pd->node_id, 81 + ZYNQMP_PM_CAPABILITY_ACCESS, 82 + ZYNQMP_PM_MAX_QOS, 83 + ZYNQMP_PM_REQUEST_ACK_BLOCKING); 84 + if (ret) { 85 + pr_err("%s() %s set requirement for node %d failed: %d\n", 86 + __func__, domain->name, pd->node_id, ret); 87 + return ret; 88 + } 89 + 90 + pr_debug("%s() Powered on %s domain\n", __func__, domain->name); 91 + return 0; 92 + } 93 + 94 + /** 95 + * zynqmp_gpd_power_off() - Power off PM domain 96 + * @domain: Generic PM domain 97 + * 98 + * This function is called after devices inside a PM domain are suspended, to 99 + * power off PM domain. 
100 + * 101 + * Return: 0 on success, error code otherwise 102 + */ 103 + static int zynqmp_gpd_power_off(struct generic_pm_domain *domain) 104 + { 105 + int ret; 106 + struct pm_domain_data *pdd, *tmp; 107 + struct zynqmp_pm_domain *pd; 108 + u32 capabilities = 0; 109 + bool may_wakeup; 110 + const struct zynqmp_eemi_ops *eemi_ops = zynqmp_pm_get_eemi_ops(); 111 + 112 + if (!eemi_ops || !eemi_ops->set_requirement) 113 + return -ENXIO; 114 + 115 + pd = container_of(domain, struct zynqmp_pm_domain, gpd); 116 + 117 + /* If domain is already released there is nothing to be done */ 118 + if (!(pd->flags & ZYNQMP_PM_DOMAIN_REQUESTED)) { 119 + pr_debug("%s() %s domain is already released\n", 120 + __func__, domain->name); 121 + return 0; 122 + } 123 + 124 + list_for_each_entry_safe(pdd, tmp, &domain->dev_list, list_node) { 125 + /* If device is in wakeup path, set capability to WAKEUP */ 126 + may_wakeup = zynqmp_gpd_is_active_wakeup_path(pdd->dev, NULL); 127 + if (may_wakeup) { 128 + dev_dbg(pdd->dev, "device is in wakeup path in %s\n", 129 + domain->name); 130 + capabilities = ZYNQMP_PM_CAPABILITY_WAKEUP; 131 + break; 132 + } 133 + } 134 + 135 + ret = eemi_ops->set_requirement(pd->node_id, capabilities, 0, 136 + ZYNQMP_PM_REQUEST_ACK_NO); 137 + /** 138 + * If powering down of any node inside this domain fails, 139 + * report and return the error 140 + */ 141 + if (ret) { 142 + pr_err("%s() %s set requirement for node %d failed: %d\n", 143 + __func__, domain->name, pd->node_id, ret); 144 + return ret; 145 + } 146 + 147 + pr_debug("%s() Powered off %s domain\n", __func__, domain->name); 148 + return 0; 149 + } 150 + 151 + /** 152 + * zynqmp_gpd_attach_dev() - Attach device to the PM domain 153 + * @domain: Generic PM domain 154 + * @dev: Device to attach 155 + * 156 + * Return: 0 on success, error code otherwise 157 + */ 158 + static int zynqmp_gpd_attach_dev(struct generic_pm_domain *domain, 159 + struct device *dev) 160 + { 161 + int ret; 162 + struct zynqmp_pm_domain 
*pd; 163 + const struct zynqmp_eemi_ops *eemi_ops = zynqmp_pm_get_eemi_ops(); 164 + 165 + if (!eemi_ops || !eemi_ops->request_node) 166 + return -ENXIO; 167 + 168 + pd = container_of(domain, struct zynqmp_pm_domain, gpd); 169 + 170 + /* If this is not the first device to attach there is nothing to do */ 171 + if (domain->device_count) 172 + return 0; 173 + 174 + ret = eemi_ops->request_node(pd->node_id, 0, 0, 175 + ZYNQMP_PM_REQUEST_ACK_BLOCKING); 176 + /* If requesting a node fails print and return the error */ 177 + if (ret) { 178 + pr_err("%s() %s request failed for node %d: %d\n", 179 + __func__, domain->name, pd->node_id, ret); 180 + return ret; 181 + } 182 + 183 + pd->flags |= ZYNQMP_PM_DOMAIN_REQUESTED; 184 + 185 + pr_debug("%s() %s attached to %s domain\n", __func__, 186 + dev_name(dev), domain->name); 187 + return 0; 188 + } 189 + 190 + /** 191 + * zynqmp_gpd_detach_dev() - Detach device from the PM domain 192 + * @domain: Generic PM domain 193 + * @dev: Device to detach 194 + */ 195 + static void zynqmp_gpd_detach_dev(struct generic_pm_domain *domain, 196 + struct device *dev) 197 + { 198 + int ret; 199 + struct zynqmp_pm_domain *pd; 200 + const struct zynqmp_eemi_ops *eemi_ops = zynqmp_pm_get_eemi_ops(); 201 + 202 + if (!eemi_ops || !eemi_ops->release_node) 203 + return; 204 + 205 + pd = container_of(domain, struct zynqmp_pm_domain, gpd); 206 + 207 + /* If this is not the last device to detach there is nothing to do */ 208 + if (domain->device_count) 209 + return; 210 + 211 + ret = eemi_ops->release_node(pd->node_id); 212 + /* If releasing a node fails print the error and return */ 213 + if (ret) { 214 + pr_err("%s() %s release failed for node %d: %d\n", 215 + __func__, domain->name, pd->node_id, ret); 216 + return; 217 + } 218 + 219 + pd->flags &= ~ZYNQMP_PM_DOMAIN_REQUESTED; 220 + 221 + pr_debug("%s() %s detached from %s domain\n", __func__, 222 + dev_name(dev), domain->name); 223 + } 224 + 225 + static struct generic_pm_domain *zynqmp_gpd_xlate 226 + 
(struct of_phandle_args *genpdspec, void *data) 227 + { 228 + struct genpd_onecell_data *genpd_data = data; 229 + unsigned int i, idx = genpdspec->args[0]; 230 + struct zynqmp_pm_domain *pd; 231 + 232 + pd = container_of(genpd_data->domains[0], struct zynqmp_pm_domain, gpd); 233 + 234 + if (genpdspec->args_count != 1) 235 + return ERR_PTR(-EINVAL); 236 + 237 + /* Check for existing pm domains */ 238 + for (i = 0; i < ZYNQMP_NUM_DOMAINS; i++) { 239 + if (pd[i].node_id == idx) 240 + goto done; 241 + } 242 + 243 + /** 244 + * Add index in empty node_id of power domain list as no existing 245 + * power domain found for current index. 246 + */ 247 + for (i = 0; i < ZYNQMP_NUM_DOMAINS; i++) { 248 + if (pd[i].node_id == 0) { 249 + pd[i].node_id = idx; 250 + break; 251 + } 252 + } 253 + 254 + done: 255 + if (!genpd_data->domains[i] || i == ZYNQMP_NUM_DOMAINS) 256 + return ERR_PTR(-ENOENT); 257 + 258 + return genpd_data->domains[i]; 259 + } 260 + 261 + static int zynqmp_gpd_probe(struct platform_device *pdev) 262 + { 263 + int i; 264 + struct genpd_onecell_data *zynqmp_pd_data; 265 + struct generic_pm_domain **domains; 266 + struct zynqmp_pm_domain *pd; 267 + struct device *dev = &pdev->dev; 268 + 269 + pd = devm_kcalloc(dev, ZYNQMP_NUM_DOMAINS, sizeof(*pd), GFP_KERNEL); 270 + if (!pd) 271 + return -ENOMEM; 272 + 273 + zynqmp_pd_data = devm_kzalloc(dev, sizeof(*zynqmp_pd_data), GFP_KERNEL); 274 + if (!zynqmp_pd_data) 275 + return -ENOMEM; 276 + 277 + zynqmp_pd_data->xlate = zynqmp_gpd_xlate; 278 + 279 + domains = devm_kcalloc(dev, ZYNQMP_NUM_DOMAINS, sizeof(*domains), 280 + GFP_KERNEL); 281 + if (!domains) 282 + return -ENOMEM; 283 + 284 + for (i = 0; i < ZYNQMP_NUM_DOMAINS; i++, pd++) { 285 + pd->node_id = 0; 286 + pd->gpd.name = kasprintf(GFP_KERNEL, "domain%d", i); 287 + pd->gpd.power_off = zynqmp_gpd_power_off; 288 + pd->gpd.power_on = zynqmp_gpd_power_on; 289 + pd->gpd.attach_dev = zynqmp_gpd_attach_dev; 290 + pd->gpd.detach_dev = zynqmp_gpd_detach_dev; 291 + 292 + 
domains[i] = &pd->gpd; 293 + 294 + /* Mark all PM domains as initially powered off */ 295 + pm_genpd_init(&pd->gpd, NULL, true); 296 + } 297 + 298 + zynqmp_pd_data->domains = domains; 299 + zynqmp_pd_data->num_domains = ZYNQMP_NUM_DOMAINS; 300 + of_genpd_add_provider_onecell(dev->parent->of_node, zynqmp_pd_data); 301 + 302 + return 0; 303 + } 304 + 305 + static int zynqmp_gpd_remove(struct platform_device *pdev) 306 + { 307 + of_genpd_del_provider(pdev->dev.parent->of_node); 308 + 309 + return 0; 310 + } 311 + 312 + static struct platform_driver zynqmp_power_domain_driver = { 313 + .driver = { 314 + .name = "zynqmp_power_controller", 315 + }, 316 + .probe = zynqmp_gpd_probe, 317 + .remove = zynqmp_gpd_remove, 318 + }; 319 + module_platform_driver(zynqmp_power_domain_driver); 320 + 321 + MODULE_ALIAS("platform:zynqmp_power_controller");
+178
drivers/soc/xilinx/zynqmp_power.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Xilinx Zynq MPSoC Power Management 4 + * 5 + * Copyright (C) 2014-2018 Xilinx, Inc. 6 + * 7 + * Davorin Mista <davorin.mista@aggios.com> 8 + * Jolly Shah <jollys@xilinx.com> 9 + * Rajan Vaja <rajan.vaja@xilinx.com> 10 + */ 11 + 12 + #include <linux/mailbox_client.h> 13 + #include <linux/module.h> 14 + #include <linux/platform_device.h> 15 + #include <linux/reboot.h> 16 + #include <linux/suspend.h> 17 + 18 + #include <linux/firmware/xlnx-zynqmp.h> 19 + 20 + enum pm_suspend_mode { 21 + PM_SUSPEND_MODE_FIRST = 0, 22 + PM_SUSPEND_MODE_STD = PM_SUSPEND_MODE_FIRST, 23 + PM_SUSPEND_MODE_POWER_OFF, 24 + }; 25 + 26 + #define PM_SUSPEND_MODE_FIRST PM_SUSPEND_MODE_STD 27 + 28 + static const char *const suspend_modes[] = { 29 + [PM_SUSPEND_MODE_STD] = "standard", 30 + [PM_SUSPEND_MODE_POWER_OFF] = "power-off", 31 + }; 32 + 33 + static enum pm_suspend_mode suspend_mode = PM_SUSPEND_MODE_STD; 34 + 35 + enum pm_api_cb_id { 36 + PM_INIT_SUSPEND_CB = 30, 37 + PM_ACKNOWLEDGE_CB, 38 + PM_NOTIFY_CB, 39 + }; 40 + 41 + static void zynqmp_pm_get_callback_data(u32 *buf) 42 + { 43 + zynqmp_pm_invoke_fn(GET_CALLBACK_DATA, 0, 0, 0, 0, buf); 44 + } 45 + 46 + static irqreturn_t zynqmp_pm_isr(int irq, void *data) 47 + { 48 + u32 payload[CB_PAYLOAD_SIZE]; 49 + 50 + zynqmp_pm_get_callback_data(payload); 51 + 52 + /* First element is callback API ID, others are callback arguments */ 53 + if (payload[0] == PM_INIT_SUSPEND_CB) { 54 + switch (payload[1]) { 55 + case SUSPEND_SYSTEM_SHUTDOWN: 56 + orderly_poweroff(true); 57 + break; 58 + case SUSPEND_POWER_REQUEST: 59 + pm_suspend(PM_SUSPEND_MEM); 60 + break; 61 + default: 62 + pr_err("%s Unsupported InitSuspendCb reason " 63 + "code %d\n", __func__, payload[1]); 64 + } 65 + } 66 + 67 + return IRQ_HANDLED; 68 + } 69 + 70 + static ssize_t suspend_mode_show(struct device *dev, 71 + struct device_attribute *attr, char *buf) 72 + { 73 + char *s = buf; 74 + int md; 75 + 76 + for (md = 
PM_SUSPEND_MODE_FIRST; md < ARRAY_SIZE(suspend_modes); md++) 77 + if (suspend_modes[md]) { 78 + if (md == suspend_mode) 79 + s += sprintf(s, "[%s] ", suspend_modes[md]); 80 + else 81 + s += sprintf(s, "%s ", suspend_modes[md]); 82 + } 83 + 84 + /* Convert last space to newline */ 85 + if (s != buf) 86 + *(s - 1) = '\n'; 87 + return (s - buf); 88 + } 89 + 90 + static ssize_t suspend_mode_store(struct device *dev, 91 + struct device_attribute *attr, 92 + const char *buf, size_t count) 93 + { 94 + int md, ret = -EINVAL; 95 + const struct zynqmp_eemi_ops *eemi_ops = zynqmp_pm_get_eemi_ops(); 96 + 97 + if (!eemi_ops || !eemi_ops->set_suspend_mode) 98 + return ret; 99 + 100 + for (md = PM_SUSPEND_MODE_FIRST; md < ARRAY_SIZE(suspend_modes); md++) 101 + if (suspend_modes[md] && 102 + sysfs_streq(suspend_modes[md], buf)) { 103 + ret = 0; 104 + break; 105 + } 106 + 107 + if (!ret && md != suspend_mode) { 108 + ret = eemi_ops->set_suspend_mode(md); 109 + if (likely(!ret)) 110 + suspend_mode = md; 111 + } 112 + 113 + return ret ? 
ret : count; 114 + } 115 + 116 + static DEVICE_ATTR_RW(suspend_mode); 117 + 118 + static int zynqmp_pm_probe(struct platform_device *pdev) 119 + { 120 + int ret, irq; 121 + u32 pm_api_version; 122 + 123 + const struct zynqmp_eemi_ops *eemi_ops = zynqmp_pm_get_eemi_ops(); 124 + 125 + if (!eemi_ops || !eemi_ops->get_api_version || !eemi_ops->init_finalize) 126 + return -ENXIO; 127 + 128 + eemi_ops->init_finalize(); 129 + eemi_ops->get_api_version(&pm_api_version); 130 + 131 + /* Check PM API version number */ 132 + if (pm_api_version < ZYNQMP_PM_VERSION) 133 + return -ENODEV; 134 + 135 + irq = platform_get_irq(pdev, 0); 136 + if (irq <= 0) 137 + return -ENXIO; 138 + 139 + ret = devm_request_threaded_irq(&pdev->dev, irq, NULL, zynqmp_pm_isr, 140 + IRQF_NO_SUSPEND | IRQF_ONESHOT, 141 + dev_name(&pdev->dev), &pdev->dev); 142 + if (ret) { 143 + dev_err(&pdev->dev, "devm_request_threaded_irq '%d' failed " 144 + "with %d\n", irq, ret); 145 + return ret; 146 + } 147 + 148 + ret = sysfs_create_file(&pdev->dev.kobj, &dev_attr_suspend_mode.attr); 149 + if (ret) { 150 + dev_err(&pdev->dev, "unable to create sysfs interface\n"); 151 + return ret; 152 + } 153 + 154 + return 0; 155 + } 156 + 157 + static int zynqmp_pm_remove(struct platform_device *pdev) 158 + { 159 + sysfs_remove_file(&pdev->dev.kobj, &dev_attr_suspend_mode.attr); 160 + 161 + return 0; 162 + } 163 + 164 + static const struct of_device_id pm_of_match[] = { 165 + { .compatible = "xlnx,zynqmp-power", }, 166 + { /* end of table */ }, 167 + }; 168 + MODULE_DEVICE_TABLE(of, pm_of_match); 169 + 170 + static struct platform_driver zynqmp_pm_platform_driver = { 171 + .probe = zynqmp_pm_probe, 172 + .remove = zynqmp_pm_remove, 173 + .driver = { 174 + .name = "zynqmp_power", 175 + .of_match_table = pm_of_match, 176 + }, 177 + }; 178 + module_platform_driver(zynqmp_pm_platform_driver);
+1
drivers/tee/optee/Makefile
··· 5 5 optee-objs += rpc.o 6 6 optee-objs += supp.o 7 7 optee-objs += shm_pool.o 8 + optee-objs += device.o
+4
drivers/tee/optee/core.c
··· 634 634 if (optee->sec_caps & OPTEE_SMC_SEC_CAP_DYNAMIC_SHM) 635 635 pr_info("dynamic shared memory is enabled\n"); 636 636 637 + rc = optee_enumerate_devices(); 638 + if (rc) 639 + goto err; 640 + 637 641 pr_info("initialized driver\n"); 638 642 return optee; 639 643 err:
+160
drivers/tee/optee/device.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Copyright (C) 2019 Linaro Ltd. 4 + */ 5 + 6 + #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 7 + 8 + #include <linux/kernel.h> 9 + #include <linux/slab.h> 10 + #include <linux/tee_drv.h> 11 + #include <linux/uuid.h> 12 + #include "optee_private.h" 13 + 14 + /* 15 + * Get device UUIDs 16 + * 17 + * [out] memref[0] Array of device UUIDs 18 + * 19 + * Return codes: 20 + * TEE_SUCCESS - Invoke command success 21 + * TEE_ERROR_BAD_PARAMETERS - Incorrect input param 22 + * TEE_ERROR_SHORT_BUFFER - Output buffer size less than required 23 + */ 24 + #define PTA_CMD_GET_DEVICES 0x0 25 + 26 + static int optee_ctx_match(struct tee_ioctl_version_data *ver, const void *data) 27 + { 28 + if (ver->impl_id == TEE_IMPL_ID_OPTEE) 29 + return 1; 30 + else 31 + return 0; 32 + } 33 + 34 + static int get_devices(struct tee_context *ctx, u32 session, 35 + struct tee_shm *device_shm, u32 *shm_size) 36 + { 37 + int ret = 0; 38 + struct tee_ioctl_invoke_arg inv_arg; 39 + struct tee_param param[4]; 40 + 41 + memset(&inv_arg, 0, sizeof(inv_arg)); 42 + memset(&param, 0, sizeof(param)); 43 + 44 + /* Invoke PTA_CMD_GET_DEVICES function */ 45 + inv_arg.func = PTA_CMD_GET_DEVICES; 46 + inv_arg.session = session; 47 + inv_arg.num_params = 4; 48 + 49 + /* Fill invoke cmd params */ 50 + param[0].attr = TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_OUTPUT; 51 + param[0].u.memref.shm = device_shm; 52 + param[0].u.memref.size = *shm_size; 53 + param[0].u.memref.shm_offs = 0; 54 + 55 + ret = tee_client_invoke_func(ctx, &inv_arg, param); 56 + if ((ret < 0) || ((inv_arg.ret != TEEC_SUCCESS) && 57 + (inv_arg.ret != TEEC_ERROR_SHORT_BUFFER))) { 58 + pr_err("PTA_CMD_GET_DEVICES invoke function err: %x\n", 59 + inv_arg.ret); 60 + return -EINVAL; 61 + } 62 + 63 + *shm_size = param[0].u.memref.size; 64 + 65 + return 0; 66 + } 67 + 68 + static int optee_register_device(const uuid_t *device_uuid, u32 device_id) 69 + { 70 + struct tee_client_device *optee_device = NULL; 71 + 
int rc; 72 + 73 + optee_device = kzalloc(sizeof(*optee_device), GFP_KERNEL); 74 + if (!optee_device) 75 + return -ENOMEM; 76 + 77 + optee_device->dev.bus = &tee_bus_type; 78 + dev_set_name(&optee_device->dev, "optee-clnt%u", device_id); 79 + uuid_copy(&optee_device->id.uuid, device_uuid); 80 + 81 + rc = device_register(&optee_device->dev); 82 + if (rc) { 83 + pr_err("device registration failed, err: %d\n", rc); 84 + kfree(optee_device); 85 + } 86 + 87 + return rc; 88 + } 89 + 90 + int optee_enumerate_devices(void) 91 + { 92 + const uuid_t pta_uuid = 93 + UUID_INIT(0x7011a688, 0xddde, 0x4053, 94 + 0xa5, 0xa9, 0x7b, 0x3c, 0x4d, 0xdf, 0x13, 0xb8); 95 + struct tee_ioctl_open_session_arg sess_arg; 96 + struct tee_shm *device_shm = NULL; 97 + const uuid_t *device_uuid = NULL; 98 + struct tee_context *ctx = NULL; 99 + u32 shm_size = 0, idx, num_devices = 0; 100 + int rc; 101 + 102 + memset(&sess_arg, 0, sizeof(sess_arg)); 103 + 104 + /* Open context with OP-TEE driver */ 105 + ctx = tee_client_open_context(NULL, optee_ctx_match, NULL, NULL); 106 + if (IS_ERR(ctx)) 107 + return -ENODEV; 108 + 109 + /* Open session with device enumeration pseudo TA */ 110 + memcpy(sess_arg.uuid, pta_uuid.b, TEE_IOCTL_UUID_LEN); 111 + sess_arg.clnt_login = TEE_IOCTL_LOGIN_PUBLIC; 112 + sess_arg.num_params = 0; 113 + 114 + rc = tee_client_open_session(ctx, &sess_arg, NULL); 115 + if ((rc < 0) || (sess_arg.ret != TEEC_SUCCESS)) { 116 + /* Device enumeration pseudo TA not found */ 117 + rc = 0; 118 + goto out_ctx; 119 + } 120 + 121 + rc = get_devices(ctx, sess_arg.session, NULL, &shm_size); 122 + if (rc < 0 || !shm_size) 123 + goto out_sess; 124 + 125 + device_shm = tee_shm_alloc(ctx, shm_size, 126 + TEE_SHM_MAPPED | TEE_SHM_DMA_BUF); 127 + if (IS_ERR(device_shm)) { 128 + pr_err("tee_shm_alloc failed\n"); 129 + rc = PTR_ERR(device_shm); 130 + goto out_sess; 131 + } 132 + 133 + rc = get_devices(ctx, sess_arg.session, device_shm, &shm_size); 134 + if (rc < 0) 135 + goto out_shm; 136 + 137 + 
device_uuid = tee_shm_get_va(device_shm, 0); 138 + if (IS_ERR(device_uuid)) { 139 + pr_err("tee_shm_get_va failed\n"); 140 + rc = PTR_ERR(device_uuid); 141 + goto out_shm; 142 + } 143 + 144 + num_devices = shm_size / sizeof(uuid_t); 145 + 146 + for (idx = 0; idx < num_devices; idx++) { 147 + rc = optee_register_device(&device_uuid[idx], idx); 148 + if (rc) 149 + goto out_shm; 150 + } 151 + 152 + out_shm: 153 + tee_shm_free(device_shm); 154 + out_sess: 155 + tee_client_close_session(ctx, sess_arg.session); 156 + out_ctx: 157 + tee_client_close_context(ctx); 158 + 159 + return rc; 160 + }
+2 -24
drivers/tee/optee/optee_msg.h
··· 1 + /* SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause) */ 1 2 /* 2 - * Copyright (c) 2015-2016, Linaro Limited 3 - * All rights reserved. 4 - * 5 - * Redistribution and use in source and binary forms, with or without 6 - * modification, are permitted provided that the following conditions are met: 7 - * 8 - * 1. Redistributions of source code must retain the above copyright notice, 9 - * this list of conditions and the following disclaimer. 10 - * 11 - * 2. Redistributions in binary form must reproduce the above copyright notice, 12 - * this list of conditions and the following disclaimer in the documentation 13 - * and/or other materials provided with the distribution. 14 - * 15 - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" 16 - * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE 17 - * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE 18 - * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE 19 - * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR 20 - * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF 21 - * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS 22 - * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN 23 - * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) 24 - * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE 25 - * POSSIBILITY OF SUCH DAMAGE. 3 + * Copyright (c) 2015-2019, Linaro Limited 26 4 */ 27 5 #ifndef _OPTEE_MSG_H 28 6 #define _OPTEE_MSG_H
+3
drivers/tee/optee/optee_private.h
··· 28 28 #define TEEC_ERROR_BAD_PARAMETERS 0xFFFF0006 29 29 #define TEEC_ERROR_COMMUNICATION 0xFFFF000E 30 30 #define TEEC_ERROR_OUT_OF_MEMORY 0xFFFF000C 31 + #define TEEC_ERROR_SHORT_BUFFER 0xFFFF0010 31 32 32 33 #define TEEC_ORIGIN_COMMS 0x00000002 33 34 ··· 181 180 void optee_free_pages_list(void *array, size_t num_entries); 182 181 void optee_fill_pages_list(u64 *dst, struct page **pages, int num_pages, 183 182 size_t page_offset); 183 + 184 + int optee_enumerate_devices(void); 184 185 185 186 /* 186 187 * Small helpers
+2 -24
drivers/tee/optee/optee_smc.h
··· 1 + /* SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause) */ 1 2 /* 2 - * Copyright (c) 2015-2016, Linaro Limited 3 - * All rights reserved. 4 - * 5 - * Redistribution and use in source and binary forms, with or without 6 - * modification, are permitted provided that the following conditions are met: 7 - * 8 - * 1. Redistributions of source code must retain the above copyright notice, 9 - * this list of conditions and the following disclaimer. 10 - * 11 - * 2. Redistributions in binary form must reproduce the above copyright notice, 12 - * this list of conditions and the following disclaimer in the documentation 13 - * and/or other materials provided with the distribution. 14 - * 15 - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" 16 - * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE 17 - * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE 18 - * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE 19 - * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR 20 - * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF 21 - * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS 22 - * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN 23 - * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) 24 - * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE 25 - * POSSIBILITY OF SUCH DAMAGE. 3 + * Copyright (c) 2015-2019, Linaro Limited 26 4 */ 27 5 #ifndef OPTEE_SMC_H 28 6 #define OPTEE_SMC_H
drivers/tee/optee/supp.c (+9 -1)

```diff
 {
 	struct optee *optee = tee_get_drvdata(ctx->teedev);
 	struct optee_supp *supp = &optee->supp;
-	struct optee_supp_req *req = kzalloc(sizeof(*req), GFP_KERNEL);
+	struct optee_supp_req *req;
 	bool interruptable;
 	u32 ret;
 
+	/*
+	 * Return in case there is no supplicant available and
+	 * non-blocking request.
+	 */
+	if (!supp->ctx && ctx->supp_nowait)
+		return TEEC_ERROR_COMMUNICATION;
+
+	req = kzalloc(sizeof(*req), GFP_KERNEL);
 	if (!req)
 		return TEEC_ERROR_OUT_OF_MEMORY;
```
drivers/tee/tee_core.c (+74 -4)

```diff
 #define pr_fmt(fmt) "%s: " fmt, __func__
 
 #include <linux/cdev.h>
-#include <linux/device.h>
 #include <linux/fs.h>
 #include <linux/idr.h>
 #include <linux/module.h>
···
 	if (IS_ERR(ctx))
 		return PTR_ERR(ctx);
 
+	/*
+	 * Default user-space behaviour is to wait for tee-supplicant
+	 * if not present for any requests in this context.
+	 */
+	ctx->supp_nowait = false;
 	filp->private_data = ctx;
 	return 0;
 }
···
 	} while (IS_ERR(ctx) && PTR_ERR(ctx) != -ENOMEM);
 
 	put_device(put_dev);
+	/*
+	 * Default behaviour for in kernel client is to not wait for
+	 * tee-supplicant if not present for any requests in this context.
+	 * Also this flag could be configured again before call to
+	 * tee_client_open_session() if any in kernel client requires
+	 * different behaviour.
+	 */
+	if (!IS_ERR(ctx))
+		ctx->supp_nowait = true;
+
 	return ctx;
 }
 EXPORT_SYMBOL_GPL(tee_client_open_context);
···
 }
 EXPORT_SYMBOL_GPL(tee_client_invoke_func);
 
+int tee_client_cancel_req(struct tee_context *ctx,
+			  struct tee_ioctl_cancel_arg *arg)
+{
+	if (!ctx->teedev->desc->ops->cancel_req)
+		return -EINVAL;
+	return ctx->teedev->desc->ops->cancel_req(ctx, arg->cancel_id,
+						  arg->session);
+}
+
+static int tee_client_device_match(struct device *dev,
+				   struct device_driver *drv)
+{
+	const struct tee_client_device_id *id_table;
+	struct tee_client_device *tee_device;
+
+	id_table = to_tee_client_driver(drv)->id_table;
+	tee_device = to_tee_client_device(dev);
+
+	while (!uuid_is_null(&id_table->uuid)) {
+		if (uuid_equal(&tee_device->id.uuid, &id_table->uuid))
+			return 1;
+		id_table++;
+	}
+
+	return 0;
+}
+
+static int tee_client_device_uevent(struct device *dev,
+				    struct kobj_uevent_env *env)
+{
+	uuid_t *dev_id = &to_tee_client_device(dev)->id.uuid;
+
+	return add_uevent_var(env, "MODALIAS=tee:%pUb", dev_id);
+}
+
+struct bus_type tee_bus_type = {
+	.name		= "tee",
+	.match		= tee_client_device_match,
+	.uevent		= tee_client_device_uevent,
+};
+EXPORT_SYMBOL_GPL(tee_bus_type);
+
 static int __init tee_init(void)
 {
 	int rc;
···
 	rc = alloc_chrdev_region(&tee_devt, 0, TEE_NUM_DEVICES, "tee");
 	if (rc) {
 		pr_err("failed to allocate char dev region\n");
-		class_destroy(tee_class);
-		tee_class = NULL;
+		goto out_unreg_class;
 	}
+
+	rc = bus_register(&tee_bus_type);
+	if (rc) {
+		pr_err("failed to register tee bus\n");
+		goto out_unreg_chrdev;
+	}
+
+	return 0;
+
+out_unreg_chrdev:
+	unregister_chrdev_region(tee_devt, TEE_NUM_DEVICES);
+out_unreg_class:
+	class_destroy(tee_class);
+	tee_class = NULL;
 
 	return rc;
 }
 
 static void __exit tee_exit(void)
 {
+	bus_unregister(&tee_bus_type);
+	unregister_chrdev_region(tee_devt, TEE_NUM_DEVICES);
 	class_destroy(tee_class);
 	tee_class = NULL;
-	unregister_chrdev_region(tee_devt, TEE_NUM_DEVICES);
 }
 
 subsys_initcall(tee_init);
```
drivers/watchdog/bcm2835_wdt.c (+9 -17)

```diff
 #include <linux/delay.h>
 #include <linux/types.h>
+#include <linux/mfd/bcm2835-pm.h>
 #include <linux/module.h>
 #include <linux/io.h>
 #include <linux/watchdog.h>
···
 	void __iomem *base;
 	spinlock_t lock;
 };
+
+static struct bcm2835_wdt *bcm2835_power_off_wdt;
 
 static unsigned int heartbeat;
 static bool nowayout = WATCHDOG_NOWAYOUT;
···
  */
 static void bcm2835_power_off(void)
 {
-	struct device_node *np =
-		of_find_compatible_node(NULL, NULL, "brcm,bcm2835-pm-wdt");
-	struct platform_device *pdev = of_find_device_by_node(np);
-	struct bcm2835_wdt *wdt = platform_get_drvdata(pdev);
+	struct bcm2835_wdt *wdt = bcm2835_power_off_wdt;
 	u32 val;
 
 	/*
···
 static int bcm2835_wdt_probe(struct platform_device *pdev)
 {
-	struct resource *res;
+	struct bcm2835_pm *pm = dev_get_drvdata(pdev->dev.parent);
 	struct device *dev = &pdev->dev;
 	struct bcm2835_wdt *wdt;
 	int err;
···
 	spin_lock_init(&wdt->lock);
 
-	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-	wdt->base = devm_ioremap_resource(dev, res);
-	if (IS_ERR(wdt->base))
-		return PTR_ERR(wdt->base);
+	wdt->base = pm->base;
 
 	watchdog_set_drvdata(&bcm2835_wdt_wdd, wdt);
 	watchdog_init_timeout(&bcm2835_wdt_wdd, heartbeat, dev);
···
 		return err;
 	}
 
-	if (pm_power_off == NULL)
+	if (pm_power_off == NULL) {
 		pm_power_off = bcm2835_power_off;
+		bcm2835_power_off_wdt = wdt;
+	}
 
 	dev_info(dev, "Broadcom BCM2835 watchdog timer");
 	return 0;
···
 	return 0;
 }
 
-static const struct of_device_id bcm2835_wdt_of_match[] = {
-	{ .compatible = "brcm,bcm2835-pm-wdt", },
-	{},
-};
-MODULE_DEVICE_TABLE(of, bcm2835_wdt_of_match);
-
 static struct platform_driver bcm2835_wdt_driver = {
 	.probe		= bcm2835_wdt_probe,
 	.remove		= bcm2835_wdt_remove,
 	.driver = {
 		.name = "bcm2835-wdt",
-		.of_match_table = bcm2835_wdt_of_match,
 	},
 };
 module_platform_driver(bcm2835_wdt_driver);
```
include/dt-bindings/power/qcom-rpmpd.h (+39, new file)

```c
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright (c) 2018, The Linux Foundation. All rights reserved. */

#ifndef _DT_BINDINGS_POWER_QCOM_RPMPD_H
#define _DT_BINDINGS_POWER_QCOM_RPMPD_H

/* SDM845 Power Domain Indexes */
#define SDM845_EBI	0
#define SDM845_MX	1
#define SDM845_MX_AO	2
#define SDM845_CX	3
#define SDM845_CX_AO	4
#define SDM845_LMX	5
#define SDM845_LCX	6
#define SDM845_GFX	7
#define SDM845_MSS	8

/* SDM845 Power Domain performance levels */
#define RPMH_REGULATOR_LEVEL_RETENTION	16
#define RPMH_REGULATOR_LEVEL_MIN_SVS	48
#define RPMH_REGULATOR_LEVEL_LOW_SVS	64
#define RPMH_REGULATOR_LEVEL_SVS	128
#define RPMH_REGULATOR_LEVEL_SVS_L1	192
#define RPMH_REGULATOR_LEVEL_NOM	256
#define RPMH_REGULATOR_LEVEL_NOM_L1	320
#define RPMH_REGULATOR_LEVEL_NOM_L2	336
#define RPMH_REGULATOR_LEVEL_TURBO	384
#define RPMH_REGULATOR_LEVEL_TURBO_L1	416

/* MSM8996 Power Domain Indexes */
#define MSM8996_VDDCX		0
#define MSM8996_VDDCX_AO	1
#define MSM8996_VDDCX_VFC	2
#define MSM8996_VDDMX		3
#define MSM8996_VDDMX_AO	4
#define MSM8996_VDDSSCX		5
#define MSM8996_VDDSSCX_VFC	6

#endif
```
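Device trees would consume these indexes through a power controller node's `#power-domain-cells`. An illustrative sketch (node and compatible names are assumptions, not part of this merge):

```dts
rpmhpd: power-controller {
	compatible = "qcom,sdm845-rpmhpd";
	#power-domain-cells = <1>;
};

remoteproc-modem {
	/* Keep the MSS domain powered while the modem runs. */
	power-domains = <&rpmhpd SDM845_MSS>;
};
```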
include/dt-bindings/power/xlnx-zynqmp-power.h (+39, new file)

```c
/* SPDX-License-Identifier: GPL-2.0 */
/*
 * Copyright (C) 2018 Xilinx, Inc.
 */

#ifndef _DT_BINDINGS_ZYNQMP_POWER_H
#define _DT_BINDINGS_ZYNQMP_POWER_H

#define PD_USB_0	22
#define PD_USB_1	23
#define PD_TTC_0	24
#define PD_TTC_1	25
#define PD_TTC_2	26
#define PD_TTC_3	27
#define PD_SATA		28
#define PD_ETH_0	29
#define PD_ETH_1	30
#define PD_ETH_2	31
#define PD_ETH_3	32
#define PD_UART_0	33
#define PD_UART_1	34
#define PD_SPI_0	35
#define PD_SPI_1	36
#define PD_I2C_0	37
#define PD_I2C_1	38
#define PD_SD_0		39
#define PD_SD_1		40
#define PD_DP		41
#define PD_GDMA		42
#define PD_ADMA		43
#define PD_NAND		44
#define PD_QSPI		45
#define PD_GPIO		46
#define PD_CAN_0	47
#define PD_CAN_1	48
#define PD_GPU		58
#define PD_PCIE		59

#endif
```
include/dt-bindings/reset/amlogic,meson-g12a-reset.h (+134, new file)

```c
/* SPDX-License-Identifier: GPL-2.0+ OR BSD-3-Clause */
/*
 * Copyright (c) 2019 BayLibre, SAS.
 * Author: Jerome Brunet <jbrunet@baylibre.com>
 *
 */

#ifndef _DT_BINDINGS_AMLOGIC_MESON_G12A_RESET_H
#define _DT_BINDINGS_AMLOGIC_MESON_G12A_RESET_H

/*	RESET0					*/
#define RESET_HIU			0
/*					1	*/
#define RESET_DOS			2
/*					3-4	*/
#define RESET_VIU			5
#define RESET_AFIFO			6
#define RESET_VID_PLL_DIV		7
/*					8-9	*/
#define RESET_VENC			10
#define RESET_ASSIST			11
#define RESET_PCIE_CTRL_A		12
#define RESET_VCBUS			13
#define RESET_PCIE_PHY			14
#define RESET_PCIE_APB			15
#define RESET_GIC			16
#define RESET_CAPB3_DECODE		17
/*					18	*/
#define RESET_HDMITX_CAPB3		19
#define RESET_DVALIN_CAPB3		20
#define RESET_DOS_CAPB3			21
/*					22	*/
#define RESET_CBUS_CAPB3		23
#define RESET_AHB_CNTL			24
#define RESET_AHB_DATA			25
#define RESET_VCBUS_CLK81		26
/*					27-31	*/
/*	RESET1					*/
/*					32	*/
#define RESET_DEMUX			33
#define RESET_USB			34
#define RESET_DDR			35
/*					36	*/
#define RESET_BT656			37
#define RESET_AHB_SRAM			38
/*					39	*/
#define RESET_PARSER			40
/*					41	*/
#define RESET_ISA			42
#define RESET_ETHERNET			43
#define RESET_SD_EMMC_A			44
#define RESET_SD_EMMC_B			45
#define RESET_SD_EMMC_C			46
/*					47-60	*/
#define RESET_AUDIO_CODEC		61
/*					62-63	*/
/*	RESET2					*/
/*					64	*/
#define RESET_AUDIO			65
#define RESET_HDMITX_PHY		66
/*					67	*/
#define RESET_MIPI_DSI_HOST		68
#define RESET_ALOCKER			69
#define RESET_GE2D			70
#define RESET_PARSER_REG		71
#define RESET_PARSER_FETCH		72
#define RESET_CTL			73
#define RESET_PARSER_TOP		74
/*					75-77	*/
#define RESET_DVALIN			78
#define RESET_HDMITX			79
/*					80-95	*/
/*	RESET3					*/
/*					96-104	*/
#define RESET_DEMUX_TOP			105
#define RESET_DEMUX_DES_PL		106
#define RESET_DEMUX_S2P_0		107
#define RESET_DEMUX_S2P_1		108
#define RESET_DEMUX_0			109
#define RESET_DEMUX_1			110
#define RESET_DEMUX_2			111
/*					112-127	*/
/*	RESET4					*/
/*					128-129	*/
#define RESET_MIPI_DSI_PHY		130
/*					131-132	*/
#define RESET_RDMA			133
#define RESET_VENCI			134
#define RESET_VENCP			135
/*					136	*/
#define RESET_VDAC			137
/*					138-139	*/
#define RESET_VDI6			140
#define RESET_VENCL			141
#define RESET_I2C_M1			142
#define RESET_I2C_M2			143
/*					144-159	*/
/*	RESET5					*/
/*					160-191	*/
/*	RESET6					*/
#define RESET_GEN			192
#define RESET_SPICC0			193
#define RESET_SC			194
#define RESET_SANA_3			195
#define RESET_I2C_M0			196
#define RESET_TS_PLL			197
#define RESET_SPICC1			198
#define RESET_STREAM			199
#define RESET_TS_CPU			200
#define RESET_UART0			201
#define RESET_UART1_2			202
#define RESET_ASYNC0			203
#define RESET_ASYNC1			204
#define RESET_SPIFC0			205
#define RESET_I2C_M3			206
/*					207-223	*/
/*	RESET7					*/
#define RESET_USB_DDR_0			224
#define RESET_USB_DDR_1			225
#define RESET_USB_DDR_2			226
#define RESET_USB_DDR_3			227
#define RESET_TS_GPU			228
#define RESET_DEVICE_MMC_ARB		229
#define RESET_DVALIN_DMC_PIPL		230
#define RESET_VID_LOCK			231
#define RESET_NIC_DMC_PIPL		232
#define RESET_DMC_VPU_PIPL		233
#define RESET_GE2D_DMC_PIPL		234
#define RESET_HCODEC_DMC_PIPL		235
#define RESET_WAVE420_DMC_PIPL		236
#define RESET_HEVCF_DMC_PIPL		237
/*					238-255	*/

#endif
```
include/dt-bindings/reset/imx8mq-reset.h (+64, new file)

```c
/* SPDX-License-Identifier: GPL-2.0 */
/*
 * Copyright (C) 2018 Zodiac Inflight Innovations
 *
 * Author: Andrey Smirnov <andrew.smirnov@gmail.com>
 */

#ifndef DT_BINDING_RESET_IMX8MQ_H
#define DT_BINDING_RESET_IMX8MQ_H

#define IMX8MQ_RESET_A53_CORE_POR_RESET0	0
#define IMX8MQ_RESET_A53_CORE_POR_RESET1	1
#define IMX8MQ_RESET_A53_CORE_POR_RESET2	2
#define IMX8MQ_RESET_A53_CORE_POR_RESET3	3
#define IMX8MQ_RESET_A53_CORE_RESET0		4
#define IMX8MQ_RESET_A53_CORE_RESET1		5
#define IMX8MQ_RESET_A53_CORE_RESET2		6
#define IMX8MQ_RESET_A53_CORE_RESET3		7
#define IMX8MQ_RESET_A53_DBG_RESET0		8
#define IMX8MQ_RESET_A53_DBG_RESET1		9
#define IMX8MQ_RESET_A53_DBG_RESET2		10
#define IMX8MQ_RESET_A53_DBG_RESET3		11
#define IMX8MQ_RESET_A53_ETM_RESET0		12
#define IMX8MQ_RESET_A53_ETM_RESET1		13
#define IMX8MQ_RESET_A53_ETM_RESET2		14
#define IMX8MQ_RESET_A53_ETM_RESET3		15
#define IMX8MQ_RESET_A53_SOC_DBG_RESET		16
#define IMX8MQ_RESET_A53_L2RESET		17
#define IMX8MQ_RESET_SW_NON_SCLR_M4C_RST	18
#define IMX8MQ_RESET_OTG1_PHY_RESET		19
#define IMX8MQ_RESET_OTG2_PHY_RESET		20
#define IMX8MQ_RESET_MIPI_DSI_RESET_BYTE_N	21
#define IMX8MQ_RESET_MIPI_DSI_RESET_N		22
#define IMX8MQ_RESET_MIPI_DIS_DPI_RESET_N	23
#define IMX8MQ_RESET_MIPI_DIS_ESC_RESET_N	24
#define IMX8MQ_RESET_MIPI_DIS_PCLK_RESET_N	25
#define IMX8MQ_RESET_PCIEPHY			26
#define IMX8MQ_RESET_PCIEPHY_PERST		27
#define IMX8MQ_RESET_PCIE_CTRL_APPS_EN		28
#define IMX8MQ_RESET_PCIE_CTRL_APPS_TURNOFF	29
#define IMX8MQ_RESET_HDMI_PHY_APB_RESET		30
#define IMX8MQ_RESET_DISP_RESET			31
#define IMX8MQ_RESET_GPU_RESET			32
#define IMX8MQ_RESET_VPU_RESET			33
#define IMX8MQ_RESET_PCIEPHY2			34
#define IMX8MQ_RESET_PCIEPHY2_PERST		35
#define IMX8MQ_RESET_PCIE2_CTRL_APPS_EN		36
#define IMX8MQ_RESET_PCIE2_CTRL_APPS_TURNOFF	37
#define IMX8MQ_RESET_MIPI_CSI1_CORE_RESET	38
#define IMX8MQ_RESET_MIPI_CSI1_PHY_REF_RESET	39
#define IMX8MQ_RESET_MIPI_CSI1_ESC_RESET	40
#define IMX8MQ_RESET_MIPI_CSI2_CORE_RESET	41
#define IMX8MQ_RESET_MIPI_CSI2_PHY_REF_RESET	42
#define IMX8MQ_RESET_MIPI_CSI2_ESC_RESET	43
#define IMX8MQ_RESET_DDRC1_PRST			44
#define IMX8MQ_RESET_DDRC1_CORE_RESET		45
#define IMX8MQ_RESET_DDRC1_PHY_RESET		46
#define IMX8MQ_RESET_DDRC2_PRST			47
#define IMX8MQ_RESET_DDRC2_CORE_RESET		48
#define IMX8MQ_RESET_DDRC2_PHY_RESET		49

#define IMX8MQ_RESET_NUM			50

#endif
```
include/dt-bindings/reset/xlnx-zynqmp-resets.h (+130, new file)

```c
/* SPDX-License-Identifier: GPL-2.0 */
/*
 * Copyright (C) 2018 Xilinx, Inc.
 */

#ifndef _DT_BINDINGS_ZYNQMP_RESETS_H
#define _DT_BINDINGS_ZYNQMP_RESETS_H

#define ZYNQMP_RESET_PCIE_CFG		0
#define ZYNQMP_RESET_PCIE_BRIDGE	1
#define ZYNQMP_RESET_PCIE_CTRL		2
#define ZYNQMP_RESET_DP			3
#define ZYNQMP_RESET_SWDT_CRF		4
#define ZYNQMP_RESET_AFI_FM5		5
#define ZYNQMP_RESET_AFI_FM4		6
#define ZYNQMP_RESET_AFI_FM3		7
#define ZYNQMP_RESET_AFI_FM2		8
#define ZYNQMP_RESET_AFI_FM1		9
#define ZYNQMP_RESET_AFI_FM0		10
#define ZYNQMP_RESET_GDMA		11
#define ZYNQMP_RESET_GPU_PP1		12
#define ZYNQMP_RESET_GPU_PP0		13
#define ZYNQMP_RESET_GPU		14
#define ZYNQMP_RESET_GT			15
#define ZYNQMP_RESET_SATA		16
#define ZYNQMP_RESET_ACPU3_PWRON	17
#define ZYNQMP_RESET_ACPU2_PWRON	18
#define ZYNQMP_RESET_ACPU1_PWRON	19
#define ZYNQMP_RESET_ACPU0_PWRON	20
#define ZYNQMP_RESET_APU_L2		21
#define ZYNQMP_RESET_ACPU3		22
#define ZYNQMP_RESET_ACPU2		23
#define ZYNQMP_RESET_ACPU1		24
#define ZYNQMP_RESET_ACPU0		25
#define ZYNQMP_RESET_DDR		26
#define ZYNQMP_RESET_APM_FPD		27
#define ZYNQMP_RESET_SOFT		28
#define ZYNQMP_RESET_GEM0		29
#define ZYNQMP_RESET_GEM1		30
#define ZYNQMP_RESET_GEM2		31
#define ZYNQMP_RESET_GEM3		32
#define ZYNQMP_RESET_QSPI		33
#define ZYNQMP_RESET_UART0		34
#define ZYNQMP_RESET_UART1		35
#define ZYNQMP_RESET_SPI0		36
#define ZYNQMP_RESET_SPI1		37
#define ZYNQMP_RESET_SDIO0		38
#define ZYNQMP_RESET_SDIO1		39
#define ZYNQMP_RESET_CAN0		40
#define ZYNQMP_RESET_CAN1		41
#define ZYNQMP_RESET_I2C0		42
#define ZYNQMP_RESET_I2C1		43
#define ZYNQMP_RESET_TTC0		44
#define ZYNQMP_RESET_TTC1		45
#define ZYNQMP_RESET_TTC2		46
#define ZYNQMP_RESET_TTC3		47
#define ZYNQMP_RESET_SWDT_CRL		48
#define ZYNQMP_RESET_NAND		49
#define ZYNQMP_RESET_ADMA		50
#define ZYNQMP_RESET_GPIO		51
#define ZYNQMP_RESET_IOU_CC		52
#define ZYNQMP_RESET_TIMESTAMP		53
#define ZYNQMP_RESET_RPU_R50		54
#define ZYNQMP_RESET_RPU_R51		55
#define ZYNQMP_RESET_RPU_AMBA		56
#define ZYNQMP_RESET_OCM		57
#define ZYNQMP_RESET_RPU_PGE		58
#define ZYNQMP_RESET_USB0_CORERESET	59
#define ZYNQMP_RESET_USB1_CORERESET	60
#define ZYNQMP_RESET_USB0_HIBERRESET	61
#define ZYNQMP_RESET_USB1_HIBERRESET	62
#define ZYNQMP_RESET_USB0_APB		63
#define ZYNQMP_RESET_USB1_APB		64
#define ZYNQMP_RESET_IPI		65
#define ZYNQMP_RESET_APM_LPD		66
#define ZYNQMP_RESET_RTC		67
#define ZYNQMP_RESET_SYSMON		68
#define ZYNQMP_RESET_AFI_FM6		69
#define ZYNQMP_RESET_LPD_SWDT		70
#define ZYNQMP_RESET_FPD		71
#define ZYNQMP_RESET_RPU_DBG1		72
#define ZYNQMP_RESET_RPU_DBG0		73
#define ZYNQMP_RESET_DBG_LPD		74
#define ZYNQMP_RESET_DBG_FPD		75
#define ZYNQMP_RESET_APLL		76
#define ZYNQMP_RESET_DPLL		77
#define ZYNQMP_RESET_VPLL		78
#define ZYNQMP_RESET_IOPLL		79
#define ZYNQMP_RESET_RPLL		80
#define ZYNQMP_RESET_GPO3_PL_0		81
#define ZYNQMP_RESET_GPO3_PL_1		82
#define ZYNQMP_RESET_GPO3_PL_2		83
#define ZYNQMP_RESET_GPO3_PL_3		84
#define ZYNQMP_RESET_GPO3_PL_4		85
#define ZYNQMP_RESET_GPO3_PL_5		86
#define ZYNQMP_RESET_GPO3_PL_6		87
#define ZYNQMP_RESET_GPO3_PL_7		88
#define ZYNQMP_RESET_GPO3_PL_8		89
#define ZYNQMP_RESET_GPO3_PL_9		90
#define ZYNQMP_RESET_GPO3_PL_10		91
#define ZYNQMP_RESET_GPO3_PL_11		92
#define ZYNQMP_RESET_GPO3_PL_12		93
#define ZYNQMP_RESET_GPO3_PL_13		94
#define ZYNQMP_RESET_GPO3_PL_14		95
#define ZYNQMP_RESET_GPO3_PL_15		96
#define ZYNQMP_RESET_GPO3_PL_16		97
#define ZYNQMP_RESET_GPO3_PL_17		98
#define ZYNQMP_RESET_GPO3_PL_18		99
#define ZYNQMP_RESET_GPO3_PL_19		100
#define ZYNQMP_RESET_GPO3_PL_20		101
#define ZYNQMP_RESET_GPO3_PL_21		102
#define ZYNQMP_RESET_GPO3_PL_22		103
#define ZYNQMP_RESET_GPO3_PL_23		104
#define ZYNQMP_RESET_GPO3_PL_24		105
#define ZYNQMP_RESET_GPO3_PL_25		106
#define ZYNQMP_RESET_GPO3_PL_26		107
#define ZYNQMP_RESET_GPO3_PL_27		108
#define ZYNQMP_RESET_GPO3_PL_28		109
#define ZYNQMP_RESET_GPO3_PL_29		110
#define ZYNQMP_RESET_GPO3_PL_30		111
#define ZYNQMP_RESET_GPO3_PL_31		112
#define ZYNQMP_RESET_RPU_LS		113
#define ZYNQMP_RESET_PS_ONLY		114
#define ZYNQMP_RESET_PL			115
#define ZYNQMP_RESET_PS_PL0		116
#define ZYNQMP_RESET_PS_PL1		117
#define ZYNQMP_RESET_PS_PL2		118
#define ZYNQMP_RESET_PS_PL3		119

#endif
```
include/dt-bindings/soc/bcm2835-pm.h (+28, new file)

```c
/* SPDX-License-Identifier: (GPL-2.0+ OR MIT) */

#ifndef _DT_BINDINGS_ARM_BCM2835_PM_H
#define _DT_BINDINGS_ARM_BCM2835_PM_H

#define BCM2835_POWER_DOMAIN_GRAFX		0
#define BCM2835_POWER_DOMAIN_GRAFX_V3D		1
#define BCM2835_POWER_DOMAIN_IMAGE		2
#define BCM2835_POWER_DOMAIN_IMAGE_PERI		3
#define BCM2835_POWER_DOMAIN_IMAGE_ISP		4
#define BCM2835_POWER_DOMAIN_IMAGE_H264		5
#define BCM2835_POWER_DOMAIN_USB		6
#define BCM2835_POWER_DOMAIN_DSI0		7
#define BCM2835_POWER_DOMAIN_DSI1		8
#define BCM2835_POWER_DOMAIN_CAM0		9
#define BCM2835_POWER_DOMAIN_CAM1		10
#define BCM2835_POWER_DOMAIN_CCP2TX		11
#define BCM2835_POWER_DOMAIN_HDMI		12

#define BCM2835_POWER_DOMAIN_COUNT		13

#define BCM2835_RESET_V3D			0
#define BCM2835_RESET_ISP			1
#define BCM2835_RESET_H264			2

#define BCM2835_RESET_COUNT			3

#endif /* _DT_BINDINGS_ARM_BCM2835_PM_H */
```
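A consumer node would reference these domain and reset indexes through the PM controller's `#power-domain-cells` and `#reset-cells`. An illustrative device-tree sketch (node names and addresses are assumptions for illustration only):

```dts
pm: power-manager@7e100000 {
	compatible = "brcm,bcm2835-pm", "brcm,bcm2835-pm-wdt";
	#power-domain-cells = <1>;
	#reset-cells = <1>;
};

v3d@7ec00000 {
	/* Power the GRAFX_V3D island and hold the V3D reset line. */
	power-domains = <&pm BCM2835_POWER_DOMAIN_GRAFX_V3D>;
	resets = <&pm BCM2835_RESET_V3D>;
};
```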
include/linux/firmware/imx/svc/misc.h (+3)

```diff
 int imx_sc_misc_get_control(struct imx_sc_ipc *ipc, u32 resource,
 			    u8 ctrl, u32 *val);
 
+int imx_sc_pm_cpu_start(struct imx_sc_ipc *ipc, u32 resource,
+			bool enable, u64 phys_addr);
+
 #endif /* _SC_MISC_API_H */
```
include/linux/firmware/xlnx-zynqmp.h (+184)

```diff
 /* SMC SIP service Call Function Identifier Prefix */
 #define PM_SIP_SVC			0xC2000000
 #define PM_GET_TRUSTZONE_VERSION	0xa03
+#define PM_SET_SUSPEND_MODE		0xa02
+#define GET_CALLBACK_DATA		0xa01
 
 /* Number of 32bits values in payload */
 #define PAYLOAD_ARG_CNT	4U
 
+/* Number of arguments for a callback */
+#define CB_ARG_CNT	4
+
+/* Payload size (consists of callback API ID + arguments) */
+#define CB_PAYLOAD_SIZE	(CB_ARG_CNT + 1)
+
+#define ZYNQMP_PM_MAX_QOS	100U
+
+/* Node capabilities */
+#define ZYNQMP_PM_CAPABILITY_ACCESS	0x1U
+#define ZYNQMP_PM_CAPABILITY_CONTEXT	0x2U
+#define ZYNQMP_PM_CAPABILITY_WAKEUP	0x4U
+#define ZYNQMP_PM_CAPABILITY_POWER	0x8U
+
 enum pm_api_id {
 	PM_GET_API_VERSION = 1,
+	PM_REQUEST_NODE = 13,
+	PM_RELEASE_NODE,
+	PM_SET_REQUIREMENT,
+	PM_RESET_ASSERT = 17,
+	PM_RESET_GET_STATUS,
+	PM_PM_INIT_FINALIZE = 21,
+	PM_GET_CHIPID = 24,
 	PM_IOCTL = 34,
 	PM_QUERY_DATA,
 	PM_CLOCK_ENABLE,
···
 	PM_QID_CLOCK_GET_NUM_CLOCKS = 12,
 };
 
+enum zynqmp_pm_reset_action {
+	PM_RESET_ACTION_RELEASE,
+	PM_RESET_ACTION_ASSERT,
+	PM_RESET_ACTION_PULSE,
+};
+
+enum zynqmp_pm_reset {
+	ZYNQMP_PM_RESET_START = 1000,
+	ZYNQMP_PM_RESET_PCIE_CFG = ZYNQMP_PM_RESET_START,
+	ZYNQMP_PM_RESET_PCIE_BRIDGE,
+	ZYNQMP_PM_RESET_PCIE_CTRL,
+	ZYNQMP_PM_RESET_DP,
+	ZYNQMP_PM_RESET_SWDT_CRF,
+	ZYNQMP_PM_RESET_AFI_FM5,
+	ZYNQMP_PM_RESET_AFI_FM4,
+	ZYNQMP_PM_RESET_AFI_FM3,
+	ZYNQMP_PM_RESET_AFI_FM2,
+	ZYNQMP_PM_RESET_AFI_FM1,
+	ZYNQMP_PM_RESET_AFI_FM0,
+	ZYNQMP_PM_RESET_GDMA,
+	ZYNQMP_PM_RESET_GPU_PP1,
+	ZYNQMP_PM_RESET_GPU_PP0,
+	ZYNQMP_PM_RESET_GPU,
+	ZYNQMP_PM_RESET_GT,
+	ZYNQMP_PM_RESET_SATA,
+	ZYNQMP_PM_RESET_ACPU3_PWRON,
+	ZYNQMP_PM_RESET_ACPU2_PWRON,
+	ZYNQMP_PM_RESET_ACPU1_PWRON,
+	ZYNQMP_PM_RESET_ACPU0_PWRON,
+	ZYNQMP_PM_RESET_APU_L2,
+	ZYNQMP_PM_RESET_ACPU3,
+	ZYNQMP_PM_RESET_ACPU2,
+	ZYNQMP_PM_RESET_ACPU1,
+	ZYNQMP_PM_RESET_ACPU0,
+	ZYNQMP_PM_RESET_DDR,
+	ZYNQMP_PM_RESET_APM_FPD,
+	ZYNQMP_PM_RESET_SOFT,
+	ZYNQMP_PM_RESET_GEM0,
+	ZYNQMP_PM_RESET_GEM1,
+	ZYNQMP_PM_RESET_GEM2,
+	ZYNQMP_PM_RESET_GEM3,
+	ZYNQMP_PM_RESET_QSPI,
+	ZYNQMP_PM_RESET_UART0,
+	ZYNQMP_PM_RESET_UART1,
+	ZYNQMP_PM_RESET_SPI0,
+	ZYNQMP_PM_RESET_SPI1,
+	ZYNQMP_PM_RESET_SDIO0,
+	ZYNQMP_PM_RESET_SDIO1,
+	ZYNQMP_PM_RESET_CAN0,
+	ZYNQMP_PM_RESET_CAN1,
+	ZYNQMP_PM_RESET_I2C0,
+	ZYNQMP_PM_RESET_I2C1,
+	ZYNQMP_PM_RESET_TTC0,
+	ZYNQMP_PM_RESET_TTC1,
+	ZYNQMP_PM_RESET_TTC2,
+	ZYNQMP_PM_RESET_TTC3,
+	ZYNQMP_PM_RESET_SWDT_CRL,
+	ZYNQMP_PM_RESET_NAND,
+	ZYNQMP_PM_RESET_ADMA,
+	ZYNQMP_PM_RESET_GPIO,
+	ZYNQMP_PM_RESET_IOU_CC,
+	ZYNQMP_PM_RESET_TIMESTAMP,
+	ZYNQMP_PM_RESET_RPU_R50,
+	ZYNQMP_PM_RESET_RPU_R51,
+	ZYNQMP_PM_RESET_RPU_AMBA,
+	ZYNQMP_PM_RESET_OCM,
+	ZYNQMP_PM_RESET_RPU_PGE,
+	ZYNQMP_PM_RESET_USB0_CORERESET,
+	ZYNQMP_PM_RESET_USB1_CORERESET,
+	ZYNQMP_PM_RESET_USB0_HIBERRESET,
+	ZYNQMP_PM_RESET_USB1_HIBERRESET,
+	ZYNQMP_PM_RESET_USB0_APB,
+	ZYNQMP_PM_RESET_USB1_APB,
+	ZYNQMP_PM_RESET_IPI,
+	ZYNQMP_PM_RESET_APM_LPD,
+	ZYNQMP_PM_RESET_RTC,
+	ZYNQMP_PM_RESET_SYSMON,
+	ZYNQMP_PM_RESET_AFI_FM6,
+	ZYNQMP_PM_RESET_LPD_SWDT,
+	ZYNQMP_PM_RESET_FPD,
+	ZYNQMP_PM_RESET_RPU_DBG1,
+	ZYNQMP_PM_RESET_RPU_DBG0,
+	ZYNQMP_PM_RESET_DBG_LPD,
+	ZYNQMP_PM_RESET_DBG_FPD,
+	ZYNQMP_PM_RESET_APLL,
+	ZYNQMP_PM_RESET_DPLL,
+	ZYNQMP_PM_RESET_VPLL,
+	ZYNQMP_PM_RESET_IOPLL,
+	ZYNQMP_PM_RESET_RPLL,
+	ZYNQMP_PM_RESET_GPO3_PL_0,
+	ZYNQMP_PM_RESET_GPO3_PL_1,
+	ZYNQMP_PM_RESET_GPO3_PL_2,
+	ZYNQMP_PM_RESET_GPO3_PL_3,
+	ZYNQMP_PM_RESET_GPO3_PL_4,
+	ZYNQMP_PM_RESET_GPO3_PL_5,
+	ZYNQMP_PM_RESET_GPO3_PL_6,
+	ZYNQMP_PM_RESET_GPO3_PL_7,
+	ZYNQMP_PM_RESET_GPO3_PL_8,
+	ZYNQMP_PM_RESET_GPO3_PL_9,
+	ZYNQMP_PM_RESET_GPO3_PL_10,
+	ZYNQMP_PM_RESET_GPO3_PL_11,
+	ZYNQMP_PM_RESET_GPO3_PL_12,
+	ZYNQMP_PM_RESET_GPO3_PL_13,
+	ZYNQMP_PM_RESET_GPO3_PL_14,
+	ZYNQMP_PM_RESET_GPO3_PL_15,
+	ZYNQMP_PM_RESET_GPO3_PL_16,
+	ZYNQMP_PM_RESET_GPO3_PL_17,
+	ZYNQMP_PM_RESET_GPO3_PL_18,
+	ZYNQMP_PM_RESET_GPO3_PL_19,
+	ZYNQMP_PM_RESET_GPO3_PL_20,
+	ZYNQMP_PM_RESET_GPO3_PL_21,
+	ZYNQMP_PM_RESET_GPO3_PL_22,
+	ZYNQMP_PM_RESET_GPO3_PL_23,
+	ZYNQMP_PM_RESET_GPO3_PL_24,
+	ZYNQMP_PM_RESET_GPO3_PL_25,
+	ZYNQMP_PM_RESET_GPO3_PL_26,
+	ZYNQMP_PM_RESET_GPO3_PL_27,
+	ZYNQMP_PM_RESET_GPO3_PL_28,
+	ZYNQMP_PM_RESET_GPO3_PL_29,
+	ZYNQMP_PM_RESET_GPO3_PL_30,
+	ZYNQMP_PM_RESET_GPO3_PL_31,
+	ZYNQMP_PM_RESET_RPU_LS,
+	ZYNQMP_PM_RESET_PS_ONLY,
+	ZYNQMP_PM_RESET_PL,
+	ZYNQMP_PM_RESET_PS_PL0,
+	ZYNQMP_PM_RESET_PS_PL1,
+	ZYNQMP_PM_RESET_PS_PL2,
+	ZYNQMP_PM_RESET_PS_PL3,
+	ZYNQMP_PM_RESET_END = ZYNQMP_PM_RESET_PS_PL3
+};
+
+enum zynqmp_pm_suspend_reason {
+	SUSPEND_POWER_REQUEST = 201,
+	SUSPEND_ALERT,
+	SUSPEND_SYSTEM_SHUTDOWN,
+};
+
+enum zynqmp_pm_request_ack {
+	ZYNQMP_PM_REQUEST_ACK_NO = 1,
+	ZYNQMP_PM_REQUEST_ACK_BLOCKING,
+	ZYNQMP_PM_REQUEST_ACK_NON_BLOCKING,
+};
+
 /**
  * struct zynqmp_pm_query_data - PM query data
  * @qid:	query ID
···
 struct zynqmp_eemi_ops {
 	int (*get_api_version)(u32 *version);
+	int (*get_chipid)(u32 *idcode, u32 *version);
 	int (*query_data)(struct zynqmp_pm_query_data qdata, u32 *out);
 	int (*clock_enable)(u32 clock_id);
 	int (*clock_disable)(u32 clock_id);
···
 	int (*clock_setparent)(u32 clock_id, u32 parent_id);
 	int (*clock_getparent)(u32 clock_id, u32 *parent_id);
 	int (*ioctl)(u32 node_id, u32 ioctl_id, u32 arg1, u32 arg2, u32 *out);
+	int (*reset_assert)(const enum zynqmp_pm_reset reset,
+			    const enum zynqmp_pm_reset_action assert_flag);
+	int (*reset_get_status)(const enum zynqmp_pm_reset reset, u32 *status);
+	int (*init_finalize)(void);
+	int (*set_suspend_mode)(u32 mode);
+	int (*request_node)(const u32 node,
+			    const u32 capabilities,
+			    const u32 qos,
+			    const enum zynqmp_pm_request_ack ack);
+	int (*release_node)(const u32 node);
+	int (*set_requirement)(const u32 node,
+			       const u32 capabilities,
+			       const u32 qos,
+			       const enum zynqmp_pm_request_ack ack);
 };
+
+int zynqmp_pm_invoke_fn(u32 pm_api_id, u32 arg0, u32 arg1,
+			u32 arg2, u32 arg3, u32 *ret_payload);
 
 #if IS_REACHABLE(CONFIG_ARCH_ZYNQMP)
 const struct zynqmp_eemi_ops *zynqmp_pm_get_eemi_ops(void);
```
include/linux/fsl/guts.h (-2)

```diff
 	u32	srds2cr1;	/* 0x.0f44 - SerDes2 Control Register 0 */
 } __attribute__ ((packed));
 
-u32 fsl_guts_get_svr(void);
-
 /* Alternate function signal multiplex control */
 #define MPC85xx_PMUXCR_QE(x)	(0x8000 >> (x))
```
include/linux/mfd/bcm2835-pm.h (+14, new file)

```c
/* SPDX-License-Identifier: GPL-2.0+ */

#ifndef BCM2835_MFD_PM_H
#define BCM2835_MFD_PM_H

#include <linux/regmap.h>

struct bcm2835_pm {
	struct device *dev;
	void __iomem *base;
	void __iomem *asb;
};

#endif /* BCM2835_MFD_PM_H */
```
include/linux/mod_devicetable.h (+9)

```diff
 	kernel_ulong_t driver_data;
 };
 
+/**
+ * struct tee_client_device_id - tee based device identifier
+ * @uuid: For TEE based client devices we use the device uuid as
+ *        the identifier.
+ */
+struct tee_client_device_id {
+	uuid_t uuid;
+};
+
 #endif /* LINUX_MOD_DEVICETABLE_H */
```
include/linux/pm_opp.h (+7)

```diff
 unsigned long dev_pm_opp_get_freq(struct dev_pm_opp *opp);
 
+unsigned int dev_pm_opp_get_level(struct dev_pm_opp *opp);
+
 bool dev_pm_opp_is_turbo(struct dev_pm_opp *opp);
 
 int dev_pm_opp_get_opp_count(struct device *dev);
···
 }
 
 static inline unsigned long dev_pm_opp_get_freq(struct dev_pm_opp *opp)
+{
+	return 0;
+}
+
+static inline unsigned int dev_pm_opp_get_level(struct dev_pm_opp *opp)
 {
 	return 0;
 }
```
include/linux/reset/socfpga.h (+7, new file)

```c
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef __LINUX_RESET_SOCFPGA_H__
#define __LINUX_RESET_SOCFPGA_H__

void __init socfpga_reset_init(void);

#endif /* __LINUX_RESET_SOCFPGA_H__ */
```
include/linux/reset/sunxi.h (+7, new file)

```c
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef __LINUX_RESET_SUNXI_H__
#define __LINUX_RESET_SUNXI_H__

void __init sun6i_reset_init(void);

#endif /* __LINUX_RESET_SUNXI_H__ */
```
include/linux/soc/qcom/llcc-qcom.h (+6)

```diff
  */
 int qcom_llcc_probe(struct platform_device *pdev,
 		    const struct llcc_slice_config *table, u32 sz);
+
+/**
+ * qcom_llcc_remove - remove the sct table
+ * @pdev: Platform device pointer
+ */
+int qcom_llcc_remove(struct platform_device *pdev);
 #else
 static inline struct llcc_slice_desc *llcc_slice_getd(u32 uid)
 {
```
include/linux/tee_drv.h (+49 -1)

```diff
 #ifndef __TEE_DRV_H
 #define __TEE_DRV_H
 
-#include <linux/types.h>
+#include <linux/device.h>
 #include <linux/idr.h>
 #include <linux/kref.h>
 #include <linux/list.h>
+#include <linux/mod_devicetable.h>
 #include <linux/tee.h>
+#include <linux/types.h>
+#include <linux/uuid.h>
 
 /*
  * The file describes the API provided by the generic TEE driver to the
···
  * @releasing:  flag that indicates if context is being released right now.
  *              It is needed to break circular dependency on context during
  *              shared memory release.
+ * @supp_nowait: flag that indicates that requests in this context should not
+ *              wait for tee-supplicant daemon to be started if not present
+ *              and just return with an error code. It is needed for requests
+ *              that arises from TEE based kernel drivers that should be
+ *              non-blocking in nature.
  */
 struct tee_context {
 	struct tee_device *teedev;
···
 	void *data;
 	struct kref refcount;
 	bool releasing;
+	bool supp_nowait;
 };
 
 struct tee_param_memref {
···
 			  struct tee_ioctl_invoke_arg *arg,
 			  struct tee_param *param);
 
+/**
+ * tee_client_cancel_req() - Request cancellation of the previous open-session
+ *                           or invoke-command operations in a Trusted
+ *                           Application
+ * @ctx:	TEE Context
+ * @arg:	Cancellation arguments, see description of
+ *		struct tee_ioctl_cancel_arg
+ *
+ * Returns < 0 on error else 0 if the cancellation was successfully requested.
+ */
+int tee_client_cancel_req(struct tee_context *ctx,
+			  struct tee_ioctl_cancel_arg *arg);
+
 static inline bool tee_param_is_memref(struct tee_param *param)
 {
 	switch (param->attr & TEE_IOCTL_PARAM_ATTR_TYPE_MASK) {
···
 		return false;
 	}
 }
+
+extern struct bus_type tee_bus_type;
+
+/**
+ * struct tee_client_device - tee based device
+ * @id:		device identifier
+ * @dev:	device structure
+ */
+struct tee_client_device {
+	struct tee_client_device_id id;
+	struct device dev;
+};
+
+#define to_tee_client_device(d) container_of(d, struct tee_client_device, dev)
+
+/**
+ * struct tee_client_driver - tee client driver
+ * @id_table:	device id table supported by this driver
+ * @driver:	driver structure
+ */
+struct tee_client_driver {
+	const struct tee_client_device_id *id_table;
+	struct device_driver driver;
+};
+
+#define to_tee_client_driver(d) \
+	container_of(d, struct tee_client_driver, driver)
 
 #endif /*__TEE_DRV_H*/
```
include/soc/bcm2835/raspberrypi-firmware.h (+4)
···
 	RPI_FIRMWARE_GET_CUSTOMER_OTP = 0x00030021,
 	RPI_FIRMWARE_GET_DOMAIN_STATE = 0x00030030,
 	RPI_FIRMWARE_GET_THROTTLED = 0x00030046,
+	RPI_FIRMWARE_GET_CLOCK_MEASURED = 0x00030047,
+	RPI_FIRMWARE_NOTIFY_REBOOT = 0x00030048,
 	RPI_FIRMWARE_SET_CLOCK_STATE = 0x00038001,
 	RPI_FIRMWARE_SET_CLOCK_RATE = 0x00038002,
 	RPI_FIRMWARE_SET_VOLTAGE = 0x00038003,
···
 	RPI_FIRMWARE_SET_GPIO_CONFIG = 0x00038043,
 	RPI_FIRMWARE_GET_PERIPH_REG = 0x00030045,
 	RPI_FIRMWARE_SET_PERIPH_REG = 0x00038045,
+	RPI_FIRMWARE_GET_POE_HAT_VAL = 0x00030049,
+	RPI_FIRMWARE_SET_POE_HAT_VAL = 0x00030050,
 
 
 	/* Dispmanx TAGS */
include/soc/fsl/dpaa2-io.h (+2 -2)
···
 				  const struct dpaa2_fd *fd);
 int dpaa2_io_service_enqueue_qd(struct dpaa2_io *d, u32 qdid, u8 prio,
 				u16 qdbin, const struct dpaa2_fd *fd);
-int dpaa2_io_service_release(struct dpaa2_io *d, u32 bpid,
+int dpaa2_io_service_release(struct dpaa2_io *d, u16 bpid,
 			     const u64 *buffers, unsigned int num_buffers);
-int dpaa2_io_service_acquire(struct dpaa2_io *d, u32 bpid,
+int dpaa2_io_service_acquire(struct dpaa2_io *d, u16 bpid,
 			     u64 *buffers, unsigned int num_buffers);
 
 struct dpaa2_io_store *dpaa2_io_store_create(unsigned int max_frames,
include/soc/tegra/bpmp.h (+7 -6)
···
 #include <soc/tegra/bpmp-abi.h>
 
 struct tegra_bpmp_clk;
+struct tegra_bpmp_ops;
 
 struct tegra_bpmp_soc {
 	struct {
···
 			unsigned int timeout;
 		} cpu_tx, thread, cpu_rx;
 	} channels;
+
+	const struct tegra_bpmp_ops *ops;
 	unsigned int num_resets;
 };
···
 	struct tegra_bpmp_mb_data *ob;
 	struct completion completion;
 	struct tegra_ivc *ivc;
+	unsigned int index;
 };
 
 typedef void (*tegra_bpmp_mrq_handler_t)(unsigned int mrq,
···
 struct tegra_bpmp {
 	const struct tegra_bpmp_soc *soc;
 	struct device *dev;
-
-	struct {
-		struct gen_pool *pool;
-		dma_addr_t phys;
-		void *virt;
-	} tx, rx;
+	void *priv;
 
 	struct {
 		struct mbox_client client;
···
 	return false;
 }
 #endif
+
+void tegra_bpmp_handle_rx(struct tegra_bpmp *bpmp);
 
 #if IS_ENABLED(CONFIG_CLK_TEGRA_BPMP)
 int tegra_bpmp_init_clocks(struct tegra_bpmp *bpmp);
include/soc/tegra/pmc.h (-6)
···
 #define TEGRA_IO_RAIL_LVDS TEGRA_IO_PAD_LVDS
 
 #ifdef CONFIG_SOC_TEGRA_PMC
-int tegra_powergate_is_powered(unsigned int id);
 int tegra_powergate_power_on(unsigned int id);
 int tegra_powergate_power_off(unsigned int id);
 int tegra_powergate_remove_clamping(unsigned int id);
···
 void tegra_pmc_enter_suspend_mode(enum tegra_suspend_mode mode);
 
 #else
-static inline int tegra_powergate_is_powered(unsigned int id)
-{
-	return -ENOSYS;
-}
-
 static inline int tegra_powergate_power_on(unsigned int id)
 {
 	return -ENOSYS;
scripts/mod/devicetable-offsets.c (+3)
···
 	DEVID_FIELD(typec_device_id, svid);
 	DEVID_FIELD(typec_device_id, mode);
 
+	DEVID(tee_client_device_id);
+	DEVID_FIELD(tee_client_device_id, uuid);
+
 	return 0;
 }
scripts/mod/file2alias.c (+19)
···
 typedef struct {
 	__u8 b[16];
 } uuid_le;
+typedef struct {
+	__u8 b[16];
+} uuid_t;
 
 /* Big exception to the "don't include kernel headers into userspace, which
  * even potentially has different endianness and word sizes, since
···
 	return 1;
 }
 
+/* Looks like: tee:uuid */
+static int do_tee_entry(const char *filename, void *symval, char *alias)
+{
+	DEF_FIELD(symval, tee_client_device_id, uuid);
+
+	sprintf(alias, "tee:%02x%02x%02x%02x-%02x%02x-%02x%02x-%02x%02x-%02x%02x%02x%02x%02x%02x",
+		uuid.b[0], uuid.b[1], uuid.b[2], uuid.b[3], uuid.b[4],
+		uuid.b[5], uuid.b[6], uuid.b[7], uuid.b[8], uuid.b[9],
+		uuid.b[10], uuid.b[11], uuid.b[12], uuid.b[13], uuid.b[14],
+		uuid.b[15]);
+
+	add_wildcard(alias);
+	return 1;
+}
+
 /* Does namelen bytes of name exactly match the symbol? */
 static bool sym_is(const char *name, unsigned namelen, const char *symbol)
 {
···
 	{"fslmc", SIZE_fsl_mc_device_id, do_fsl_mc_entry},
 	{"tbsvc", SIZE_tb_service_id, do_tbsvc_entry},
 	{"typec", SIZE_typec_device_id, do_typec_entry},
+	{"tee", SIZE_tee_client_device_id, do_tee_entry},
 };
 
 /* Create MODULE_ALIAS() statements.