Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'armsoc-drivers' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc

Pull ARM SoC driver updates from Olof Johansson:
"As we've enabled multiplatform kernels on ARM, and greatly done away
with the contents under arch/arm/mach-*, there's still need for
SoC-related drivers to go somewhere.

Many of them go in through other driver trees, but we still have
drivers/soc to hold some of the "doesn't fit anywhere" lowlevel code
that might be shared between ARM and ARM64 (or just in general makes
sense to not have under the architecture directory).

This branch contains mostly such code:

- Drivers for Qualcomm SoCs for SMEM, SMD and SMD-RPM, used to
communicate with power management blocks on these SoCs for use by
clock, regulator and bus frequency drivers.

- Allwinner Reduced Serial Bus driver, again used to communicate with
PMICs.

- Drivers for ARM's SCPI (System Control and Power Interface). Not to
be confused with PSCI (Power State Coordination Interface). SCPI is
used to communicate with the embedded cores that assist with power
management, and we have yet to see how many vendors will implement
it for their hardware vs abstracting in other ways (or not at all,
as in the past).

- To make confusion between SCPI and PSCI more likely, this release
also includes an update of PSCI to interface version 1.0.

- Rockchip support for power domains.

- A driver to talk to the firmware on Raspberry Pi"

* tag 'armsoc-drivers' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc: (57 commits)
soc: qcom: smd-rpm: Correct size of outgoing message
bus: sunxi-rsb: Add driver for Allwinner Reduced Serial Bus
bus: sunxi-rsb: Add Allwinner Reduced Serial Bus (RSB) controller bindings
ARM: bcm2835: add mutual inclusion protection
drivers: psci: make PSCI 1.0 functions initialization version dependent
dt-bindings: Correct paths in Rockchip power domains binding document
soc: rockchip: power-domain: don't try to print the clock name in error case
soc: qcom/smem: add HWSPINLOCK dependency
clk: berlin: add cpuclk
ARM: berlin: dts: add CLKID_CPU for BG2Q
ARM: bcm2835: Add the Raspberry Pi firmware driver
soc: qcom: smem: Move RPM message ram out of smem DT node
soc: qcom: smd-rpm: Correct the active vs sleep state flagging
soc: qcom: smd: delete unneeded of_node_put
firmware: qcom-scm: build for correct architecture level
soc: qcom: smd: Correct SMEM items for upper channels
qcom-scm: add missing prototype for qcom_scm_is_available()
qcom-scm: fix endianess issue in __qcom_scm_is_call_available
soc: qcom: smd: Reject send of too big packets
soc: qcom: smd: Handle big endian CPUs
...

+4466 -392
+188
Documentation/devicetree/bindings/arm/arm,scpi.txt
···
+System Control and Power Interface (SCPI) Message Protocol
+----------------------------------------------------------
+
+Firmware implementing the SCPI described in ARM document number ARM DUI 0922B
+("ARM Compute Subsystem SCP: Message Interface Protocols")[0] can be used
+by Linux to initiate various system control and power operations.
+
+Required properties:
+
+- compatible : should be "arm,scpi"
+- mboxes: List of phandle and mailbox channel specifiers
+	  All the channels reserved by remote SCP firmware for use by
+	  SCPI message protocol should be specified in any order
+- shmem : List of phandles pointing to the shared memory (SHM) area between the
+	  processors using these mailboxes for IPC, one for each mailbox
+	  SHM can be any memory reserved for the purpose of this communication
+	  between the processors.
+
+See Documentation/devicetree/bindings/mailbox/mailbox.txt
+for more details about the generic mailbox controller and
+client driver bindings.
+
+Clock bindings for the clocks based on SCPI Message Protocol
+------------------------------------------------------------
+
+This binding uses the common clock binding[1].
+
+Container Node
+==============
+Required properties:
+- compatible : should be "arm,scpi-clocks"
+	       All the clocks provided by SCP firmware via SCPI message
+	       protocol must be listed as sub-nodes under this node.
+
+Sub-nodes
+=========
+Required properties:
+- compatible : shall include one of the following
+	"arm,scpi-dvfs-clocks" - all the clocks that are variable and index based.
+		These clocks don't provide an entire range of values between the
+		limits but only discrete points within the range. The firmware
+		provides the mapping for each such operating frequency and the
+		index associated with it. The firmware also manages the
+		voltage scaling appropriately with the clock scaling.
+	"arm,scpi-variable-clocks" - all the clocks that are variable and provide
+		full range within the specified limits. The firmware provides
+		the supported range of values.
+
+Other required properties for all clocks (all from common clock binding):
+- #clock-cells : Should be 1. Contains the Clock ID value used by SCPI commands.
+- clock-output-names : shall be the corresponding names of the outputs.
+- clock-indices: The identifying number for the clocks (i.e. clock_id) in the
+	node. It can be non-linear and hence provides the mapping of identifiers
+	into the clock-output-names array.
+
+SRAM and Shared Memory for SCPI
+-------------------------------
+
+A small area of SRAM is reserved for SCPI communication between application
+processors and SCP.
+
+Required properties:
+- compatible : should be "arm,juno-sram-ns" for Non-secure SRAM on Juno
+
+The rest of the properties should follow the generic mmio-sram description
+found in ../../misc/sysram.txt
+
+Each sub-node represents the reserved area for SCPI.
+
+Required sub-node properties:
+- reg : The base offset and size of the reserved area within the SRAM
+- compatible : should be "arm,juno-scp-shmem" for Non-secure SRAM based
+	       shared memory on Juno platforms
+
+Sensor bindings for the sensors based on SCPI Message Protocol
+--------------------------------------------------------------
+SCPI provides an API to access the various sensors on the SoC.
+
+Required properties:
+- compatible : should be "arm,scpi-sensors".
+- #thermal-sensor-cells: should be set to 1. This property follows the
+			 thermal device tree bindings[2].
+
+			 Valid cell values are raw identifiers (Sensor ID)
+			 as used by the firmware. Refer to platform
+			 documentation for your implementation for the IDs
+			 to use. For Juno R0 and Juno R1 refer to [3].
+
+[0] http://infocenter.arm.com/help/topic/com.arm.doc.dui0922b/index.html
+[1] Documentation/devicetree/bindings/clock/clock-bindings.txt
+[2] Documentation/devicetree/bindings/thermal/thermal.txt
+[3] http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.dui0922b/apas03s22.html
+
+Example:
+
+sram: sram@50000000 {
+	compatible = "arm,juno-sram-ns", "mmio-sram";
+	reg = <0x0 0x50000000 0x0 0x10000>;
+
+	#address-cells = <1>;
+	#size-cells = <1>;
+	ranges = <0 0x0 0x50000000 0x10000>;
+
+	cpu_scp_lpri: scp-shmem@0 {
+		compatible = "arm,juno-scp-shmem";
+		reg = <0x0 0x200>;
+	};
+
+	cpu_scp_hpri: scp-shmem@200 {
+		compatible = "arm,juno-scp-shmem";
+		reg = <0x200 0x200>;
+	};
+};
+
+mailbox: mailbox0@40000000 {
+	....
+	#mbox-cells = <1>;
+};
+
+scpi_protocol: scpi@2e000000 {
+	compatible = "arm,scpi";
+	mboxes = <&mailbox 0 &mailbox 1>;
+	shmem = <&cpu_scp_lpri &cpu_scp_hpri>;
+
+	clocks {
+		compatible = "arm,scpi-clocks";
+
+		scpi_dvfs: scpi_clocks@0 {
+			compatible = "arm,scpi-dvfs-clocks";
+			#clock-cells = <1>;
+			clock-indices = <0>, <1>, <2>;
+			clock-output-names = "atlclk", "aplclk", "gpuclk";
+		};
+		scpi_clk: scpi_clocks@3 {
+			compatible = "arm,scpi-variable-clocks";
+			#clock-cells = <1>;
+			clock-indices = <3>, <4>;
+			clock-output-names = "pxlclk0", "pxlclk1";
+		};
+	};
+
+	scpi_sensors0: sensors {
+		compatible = "arm,scpi-sensors";
+		#thermal-sensor-cells = <1>;
+	};
+};
+
+cpu@0 {
+	...
+	reg = <0 0>;
+	clocks = <&scpi_dvfs 0>;
+};
+
+hdlcd@7ff60000 {
+	...
+	reg = <0 0x7ff60000 0 0x1000>;
+	clocks = <&scpi_clk 4>;
+};
+
+thermal-zones {
+	soc_thermal {
+		polling-delay-passive = <100>;
+		polling-delay = <1000>;
+
+		/* sensor ID */
+		thermal-sensors = <&scpi_sensors0 3>;
+		...
+	};
+};
+
+In the above example, the #clock-cells is set to 1 as required.
+scpi_dvfs has 3 output clocks, namely atlclk, aplclk, and gpuclk, with 0,
+1 and 2 as clock-indices. scpi_clk has 2 output clocks, namely pxlclk0
+and pxlclk1, with 3 and 4 as clock-indices.
+
+The first consumer in the example is cpu@0 and it has '0' as the clock
+specifier, which points to the first entry in the output clocks of
+scpi_dvfs, i.e. "atlclk".
+
+Similarly, the second consumer is hdlcd@7ff60000 and it has pxlclk1 as
+its input clock. '4' in the clock specifier here points to the second
+entry in the output clocks of scpi_clk, i.e. "pxlclk1".
+
+The thermal-sensors property in the soc_thermal node uses the
+temperature sensor provided by SCP firmware to set up a thermal
+zone. The ID "3" is the sensor identifier for the temperature sensor
+as used by the firmware.
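The clock-indices to clock-output-names mapping explained above can be sketched in plain C. This is an illustrative, non-kernel sketch (the struct and lookup helper are hypothetical names, not the real clk-scpi implementation); the data mirrors the DT example.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical model of one SCPI clock sub-node's two arrays */
struct scpi_clk_node {
	const unsigned int *indices;	/* clock-indices */
	const char *const *names;	/* clock-output-names */
	size_t count;
};

/*
 * Resolve a clock specifier (the Clock ID cell in a consumer's
 * "clocks" property) to an output name. The mapping may be
 * non-linear, so we search clock-indices rather than indexing
 * directly into clock-output-names.
 */
static const char *scpi_clk_name(const struct scpi_clk_node *node,
				 unsigned int specifier)
{
	size_t i;

	for (i = 0; i < node->count; i++)
		if (node->indices[i] == specifier)
			return node->names[i];

	return NULL;	/* specifier not provided by this node */
}
```

With the example's scpi_clk node (indices 3 and 4), specifier 4 resolves to "pxlclk1", matching the hdlcd consumer above.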
+6
Documentation/devicetree/bindings/arm/psci.txt
···
    support, but are permitted to be present for compatibility with
    existing software when "arm,psci" is later in the compatible list.

+ * "arm,psci-1.0" : for implementations complying to PSCI 1.0. PSCI 1.0 is
+   backward compatible with PSCI 0.2 with minor specification updates,
+   as defined in the PSCI specification[2].
+
 - method : The method of calling the PSCI firmware. Permitted
            values are:
···
 [1] Kernel documentation - ARM idle states bindings
     Documentation/devicetree/bindings/arm/idle-states.txt
+[2] Power State Coordination Interface (PSCI) specification
+    http://infocenter.arm.com/help/topic/com.arm.doc.den0022c/DEN0022C_Power_State_Coordination_Interface.pdf
+47
Documentation/devicetree/bindings/bus/sunxi-rsb.txt
···
+Allwinner Reduced Serial Bus (RSB) controller
+
+The RSB controller found on later Allwinner SoCs is an SMBus like 2 wire
+serial bus with 1 master and up to 15 slaves. It is represented by a node
+for the controller itself, and child nodes representing the slave devices.
+
+Required properties :
+
+ - reg : Offset and length of the register set for the controller.
+ - compatible : Shall be "allwinner,sun8i-a23-rsb".
+ - interrupts : The interrupt line associated to the RSB controller.
+ - clocks : The gate clk associated to the RSB controller.
+ - resets : The reset line associated to the RSB controller.
+ - #address-cells : shall be 1
+ - #size-cells : shall be 0
+
+Optional properties :
+
+ - clock-frequency : Desired RSB bus clock frequency in Hz. Maximum is 20MHz.
+		     If not set this defaults to 3MHz.
+
+Child nodes:
+
+An RSB controller node can contain zero or more child nodes representing
+slave devices on the bus. Child 'reg' properties should contain the slave
+device's hardware address. The hardware address is hardwired in the device,
+and can normally be found in the datasheet.
+
+Example:
+
+	rsb@01f03400 {
+		compatible = "allwinner,sun8i-a23-rsb";
+		reg = <0x01f03400 0x400>;
+		interrupts = <0 39 4>;
+		clocks = <&apb0_gates 3>;
+		clock-frequency = <3000000>;
+		resets = <&apb0_rst 3>;
+		#address-cells = <1>;
+		#size-cells = <0>;
+
+		pmic@3e3 {
+			compatible = "...";
+			reg = <0x3e3>;
+
+			/* ... */
+		};
+	};
+5 -3
Documentation/devicetree/bindings/memory-controllers/arm,pl172.txt
···
-* Device tree bindings for ARM PL172 MultiPort Memory Controller
+* Device tree bindings for ARM PL172/PL175/PL176 MultiPort Memory Controller

 Required properties:

-- compatible: "arm,pl172", "arm,primecell"
+- compatible: Must be "arm,primecell" and exactly one from
+  "arm,pl172", "arm,pl175" or "arm,pl176".

 - reg: Must contains offset/length value for controller.
···
 - mpmc,extended-wait: Enable extended wait.

-- mpmc,buffer-enable: Enable write buffer.
+- mpmc,buffer-enable: Enable write buffer, option is not supported by
+  PL175 and PL176 controllers.

 - mpmc,write-protect: Enable write protect.
+46
Documentation/devicetree/bindings/soc/rockchip/power_domain.txt
···
+* Rockchip Power Domains
+
+Rockchip processors include support for multiple power domains which can be
+powered up/down by software based on different application scenarios to save
+power.
+
+Required properties for power domain controller:
+- compatible: Should be one of the following.
+	"rockchip,rk3288-power-controller" - for RK3288 SoCs.
+- #power-domain-cells: Number of cells in a power-domain specifier.
+	Should be 1 for multiple PM domains.
+- #address-cells: Should be 1.
+- #size-cells: Should be 0.
+
+Required properties for power domain sub nodes:
+- reg: index of the power domain, should use macros in:
+	"include/dt-bindings/power/rk3288-power.h" - for RK3288 type power domain.
+- clocks (optional): phandles to clocks which need to be enabled while power
+	domain switches state.
+
+Example:
+
+	power: power-controller {
+		compatible = "rockchip,rk3288-power-controller";
+		#power-domain-cells = <1>;
+		#address-cells = <1>;
+		#size-cells = <0>;
+
+		pd_gpu {
+			reg = <RK3288_PD_GPU>;
+			clocks = <&cru ACLK_GPU>;
+		};
+	};
+
+A node of a device using power domains must have a power-domains property,
+containing a phandle to the power device node and an index specifying which
+power domain to use.
+The index should use macros in:
+	"include/dt-bindings/power/rk3288-power.h" - for RK3288 type power domain.
+
+Example of a node using a power domain:
+
+	node {
+		/* ... */
+		power-domains = <&power RK3288_PD_GPU>;
+		/* ... */
+	};
+33
Documentation/hwmon/scpi-hwmon
···
+Kernel driver scpi-hwmon
+========================
+
+Supported chips:
+ * Chips based on ARM System Control Processor Interface
+   Addresses scanned: -
+   Datasheet: http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.dui0922b/index.html
+
+Author: Punit Agrawal <punit.agrawal@arm.com>
+
+Description
+-----------
+
+This driver supports hardware monitoring for SoCs based on the ARM
+System Control Processor (SCP) implementing the System Control
+Processor Interface (SCPI). The following sensor types are supported
+by the SCP -
+
+ * temperature
+ * voltage
+ * current
+ * power
+
+The SCP interface provides an API to query the available sensors and
+their values, which are then exported to userspace by this driver.
+
+Usage Notes
+-----------
+
+The driver relies on a device tree node to indicate the presence of SCPI
+support in the kernel. See
+Documentation/devicetree/bindings/arm/arm,scpi.txt for details of the
+devicetree node.
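As a rough illustration of the userspace side, a monitoring tool reads the sysfs attributes this driver registers (e.g. tempN_input, which the hwmon sysfs ABI defines in millidegrees Celsius). The sketch below only shows the parsing and unit conversion; the path would normally be something under /sys/class/hwmon/, and the helper name is hypothetical.

```c
#include <stdio.h>

/*
 * Read a hwmon temperature attribute (tempN_input style file) and
 * convert it to degrees Celsius. hwmon reports temperatures in
 * millidegrees, per the hwmon sysfs ABI.
 */
static double read_temp_celsius(const char *path)
{
	FILE *f = fopen(path, "r");
	long millideg;

	if (!f)
		return -1.0;	/* attribute not present */

	if (fscanf(f, "%ld", &millideg) != 1)
		millideg = 0;
	fclose(f);

	return millideg / 1000.0;
}
```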
+10
MAINTAINERS
···
 S:	Supported
 F:	arch/score/

+SYSTEM CONTROL & POWER INTERFACE (SCPI) Message Protocol drivers
+M:	Sudeep Holla <sudeep.holla@arm.com>
+L:	linux-arm-kernel@lists.infradead.org
+S:	Maintained
+F:	Documentation/devicetree/bindings/arm/arm,scpi.txt
+F:	drivers/clk/clk-scpi.c
+F:	drivers/cpufreq/scpi-cpufreq.c
+F:	drivers/firmware/arm_scpi.c
+F:	include/linux/scpi_protocol.h
+
 SCSI CDROM DRIVER
 M:	Jens Axboe <axboe@kernel.dk>
 L:	linux-scsi@vger.kernel.org
+11 -6
arch/arm/boot/dts/qcom-msm8974.dtsi
···
 		clock-frequency = <19200000>;
 	};

+	smem {
+		compatible = "qcom,smem";
+
+		memory-region = <&smem_region>;
+		qcom,rpm-msg-ram = <&rpm_msg_ram>;
+
+		hwlocks = <&tcsr_mutex 3>;
+	};
+
 	soc: soc {
 		#address-cells = <1>;
 		#size-cells = <1>;
···
 			#hwlock-cells = <1>;
 		};

-		smem@fa00000 {
-			compatible = "qcom,smem";
-
-			memory-region = <&smem_region>;
+		rpm_msg_ram: memory@fc428000 {
+			compatible = "qcom,rpm-msg-ram";
 			reg = <0xfc428000 0x4000>;
-
-			hwlocks = <&tcsr_mutex 3>;
 		};

 		blsp1_uart2: serial@f991e000 {
-14
arch/arm64/kernel/psci.c
···
 #include <asm/smp_plat.h>
 #include <asm/suspend.h>

-static bool psci_power_state_loses_context(u32 state)
-{
-	return state & PSCI_0_2_POWER_STATE_TYPE_MASK;
-}
-
-static bool psci_power_state_is_valid(u32 state)
-{
-	const u32 valid_mask = PSCI_0_2_POWER_STATE_ID_MASK |
-			PSCI_0_2_POWER_STATE_TYPE_MASK |
-			PSCI_0_2_POWER_STATE_AFFL_MASK;
-
-	return !(state & ~valid_mask);
-}
-
 static DEFINE_PER_CPU_READ_MOSTLY(u32 *, psci_power_state);

 static int __maybe_unused cpu_psci_cpu_init_idle(unsigned int cpu)
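The helpers removed here validate and classify a PSCI 0.2 power_state value; this series moves them out of the arm64-specific file into common PSCI code. A standalone sketch of the same checks, with mask values taken from include/uapi/linux/psci.h (stated here as an assumption, since the uapi header is not part of this diff):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* PSCI 0.2 power_state field layout (per include/uapi/linux/psci.h) */
#define PSCI_0_2_POWER_STATE_ID_MASK	0xffffU		/* StateID, bits [15:0] */
#define PSCI_0_2_POWER_STATE_TYPE_MASK	(0x1U << 16)	/* StateType, bit 16 */
#define PSCI_0_2_POWER_STATE_AFFL_MASK	(0x3U << 24)	/* AffinityLevel, bits [25:24] */

/* The StateType bit distinguishes powerdown from standby states */
static bool psci_power_state_loses_context(uint32_t state)
{
	return state & PSCI_0_2_POWER_STATE_TYPE_MASK;
}

/* Any bit outside the three defined fields makes the state invalid */
static bool psci_power_state_is_valid(uint32_t state)
{
	const uint32_t valid_mask = PSCI_0_2_POWER_STATE_ID_MASK |
				    PSCI_0_2_POWER_STATE_TYPE_MASK |
				    PSCI_0_2_POWER_STATE_AFFL_MASK;

	return !(state & ~valid_mask);
}
```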
+4 -2
drivers/base/power/clock_ops.c
···
 			return -ENOMEM;
 		}
 	} else {
-		if (IS_ERR(clk) || !__clk_get(clk)) {
+		if (IS_ERR(clk)) {
 			kfree(ce);
 			return -ENOENT;
 		}
···
  * @clk: Clock pointer
  *
  * Add the clock to the list of clocks used for the power management of @dev.
- * It will increment refcount on clock pointer, use clk_put() on it when done.
+ * The power-management code will take control of the clock reference, so
+ * callers should not call clk_put() on @clk after this function successfully
+ * returns.
  */
 int pm_clk_add_clk(struct device *dev, struct clk *clk)
 {
+11
drivers/bus/Kconfig
···
 	  Controller (BSC, sometimes called "LBSC within Bus Bridge", or
 	  "External Bus Interface") as found on several Renesas ARM SoCs.

+config SUNXI_RSB
+	tristate "Allwinner sunXi Reduced Serial Bus Driver"
+	default MACH_SUN8I || MACH_SUN9I
+	depends on ARCH_SUNXI
+	select REGMAP
+	help
+	  Say y here to enable support for Allwinner's Reduced Serial Bus
+	  (RSB). This controller is responsible for communicating with
+	  various RSB-based devices, such as AXP223, AXP8XX PMICs, and
+	  AC100/AC200 ICs.
+
 config VEXPRESS_CONFIG
 	bool "Versatile Express configuration bus"
 	default y if ARCH_VEXPRESS
+1
drivers/bus/Makefile
···
 obj-$(CONFIG_OMAP_INTERCONNECT)	+= omap_l3_smx.o omap_l3_noc.o

 obj-$(CONFIG_OMAP_OCP2SCP)	+= omap-ocp2scp.o
+obj-$(CONFIG_SUNXI_RSB)		+= sunxi-rsb.o
 obj-$(CONFIG_SIMPLE_PM_BUS)	+= simple-pm-bus.o
 obj-$(CONFIG_VEXPRESS_CONFIG)	+= vexpress-config.o
+783
drivers/bus/sunxi-rsb.c
···
+/*
+ * RSB (Reduced Serial Bus) driver.
+ *
+ * Author: Chen-Yu Tsai <wens@csie.org>
+ *
+ * This file is licensed under the terms of the GNU General Public License
+ * version 2. This program is licensed "as is" without any warranty of any
+ * kind, whether express or implied.
+ *
+ * The RSB controller looks like an SMBus controller which only supports
+ * byte and word data transfers. But, it differs from standard SMBus
+ * protocol on several aspects:
+ * - it uses addresses set at runtime to address slaves. Runtime addresses
+ *   are sent to slaves using their 12bit hardware addresses. Up to 15
+ *   runtime addresses are available.
+ * - it adds a parity bit every 8bits of data and address for read and
+ *   write accesses; this replaces the ack bit
+ * - only one read access is required to read a byte (instead of a write
+ *   followed by a read access in standard SMBus protocol)
+ * - there's no Ack bit after each read access
+ *
+ * This means this bus cannot be used to interface with standard SMBus
+ * devices. Devices known to support this interface include the AXP223,
+ * AXP809, and AXP806 PMICs, and the AC100 audio codec, all from X-Powers.
+ *
+ * A description of the operation and wire protocol can be found in the
+ * RSB section of Allwinner's A80 user manual, which can be found at
+ *
+ *     https://github.com/allwinner-zh/documents/tree/master/A80
+ *
+ * This document is officially released by Allwinner.
+ *
+ * This driver is based on i2c-sun6i-p2wi.c, the P2WI bus driver.
+ */
+
+#include <linux/clk.h>
+#include <linux/clk/clk-conf.h>
+#include <linux/device.h>
+#include <linux/interrupt.h>
+#include <linux/io.h>
+#include <linux/iopoll.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/of_irq.h>
+#include <linux/of_platform.h>
+#include <linux/platform_device.h>
+#include <linux/regmap.h>
+#include <linux/reset.h>
+#include <linux/slab.h>
+#include <linux/sunxi-rsb.h>
+#include <linux/types.h>
+
+/* RSB registers */
+#define RSB_CTRL	0x0	/* Global control */
+#define RSB_CCR		0x4	/* Clock control */
+#define RSB_INTE	0x8	/* Interrupt controls */
+#define RSB_INTS	0xc	/* Interrupt status */
+#define RSB_ADDR	0x10	/* Address to send with read/write command */
+#define RSB_DATA	0x1c	/* Data to read/write */
+#define RSB_LCR		0x24	/* Line control */
+#define RSB_DMCR	0x28	/* Device mode (init) control */
+#define RSB_CMD		0x2c	/* RSB Command */
+#define RSB_DAR		0x30	/* Device address / runtime address */
+
+/* CTRL fields */
+#define RSB_CTRL_START_TRANS		BIT(7)
+#define RSB_CTRL_ABORT_TRANS		BIT(6)
+#define RSB_CTRL_GLOBAL_INT_ENB		BIT(1)
+#define RSB_CTRL_SOFT_RST		BIT(0)
+
+/* CLK CTRL fields */
+#define RSB_CCR_SDA_OUT_DELAY(v)	(((v) & 0x7) << 8)
+#define RSB_CCR_MAX_CLK_DIV		0xff
+#define RSB_CCR_CLK_DIV(v)		((v) & RSB_CCR_MAX_CLK_DIV)
+
+/* STATUS fields */
+#define RSB_INTS_TRANS_ERR_ACK		BIT(16)
+#define RSB_INTS_TRANS_ERR_DATA_BIT(v)	(((v) >> 8) & 0xf)
+#define RSB_INTS_TRANS_ERR_DATA		GENMASK(11, 8)
+#define RSB_INTS_LOAD_BSY		BIT(2)
+#define RSB_INTS_TRANS_ERR		BIT(1)
+#define RSB_INTS_TRANS_OVER		BIT(0)
+
+/* LINE CTRL fields */
+#define RSB_LCR_SCL_STATE		BIT(5)
+#define RSB_LCR_SDA_STATE		BIT(4)
+#define RSB_LCR_SCL_CTL			BIT(3)
+#define RSB_LCR_SCL_CTL_EN		BIT(2)
+#define RSB_LCR_SDA_CTL			BIT(1)
+#define RSB_LCR_SDA_CTL_EN		BIT(0)
+
+/* DEVICE MODE CTRL field values */
+#define RSB_DMCR_DEVICE_START		BIT(31)
+#define RSB_DMCR_MODE_DATA		(0x7c << 16)
+#define RSB_DMCR_MODE_REG		(0x3e << 8)
+#define RSB_DMCR_DEV_ADDR		0x00
+
+/* CMD values */
+#define RSB_CMD_RD8	0x8b
+#define RSB_CMD_RD16	0x9c
+#define RSB_CMD_RD32	0xa6
+#define RSB_CMD_WR8	0x4e
+#define RSB_CMD_WR16	0x59
+#define RSB_CMD_WR32	0x63
+#define RSB_CMD_STRA	0xe8
+
+/* DAR fields */
+#define RSB_DAR_RTA(v)	(((v) & 0xff) << 16)
+#define RSB_DAR_DA(v)	((v) & 0xffff)
+
+#define RSB_MAX_FREQ	20000000
+
+#define RSB_CTRL_NAME	"sunxi-rsb"
+
+struct sunxi_rsb_addr_map {
+	u16 hwaddr;
+	u8 rtaddr;
+};
+
+struct sunxi_rsb {
+	struct device *dev;
+	void __iomem *regs;
+	struct clk *clk;
+	struct reset_control *rstc;
+	struct completion complete;
+	struct mutex lock;
+	unsigned int status;
+};
+
+/* bus / slave device related functions */
+static struct bus_type sunxi_rsb_bus;
+
+static int sunxi_rsb_device_match(struct device *dev, struct device_driver *drv)
+{
+	return of_driver_match_device(dev, drv);
+}
+
+static int sunxi_rsb_device_probe(struct device *dev)
+{
+	const struct sunxi_rsb_driver *drv = to_sunxi_rsb_driver(dev->driver);
+	struct sunxi_rsb_device *rdev = to_sunxi_rsb_device(dev);
+	int ret;
+
+	if (!drv->probe)
+		return -ENODEV;
+
+	if (!rdev->irq) {
+		int irq = -ENOENT;
+
+		if (dev->of_node)
+			irq = of_irq_get(dev->of_node, 0);
+
+		if (irq == -EPROBE_DEFER)
+			return irq;
+		if (irq < 0)
+			irq = 0;
+
+		rdev->irq = irq;
+	}
+
+	ret = of_clk_set_defaults(dev->of_node, false);
+	if (ret < 0)
+		return ret;
+
+	return drv->probe(rdev);
+}
+
+static int sunxi_rsb_device_remove(struct device *dev)
+{
+	const struct sunxi_rsb_driver *drv = to_sunxi_rsb_driver(dev->driver);
+
+	return drv->remove(to_sunxi_rsb_device(dev));
+}
+
+static struct bus_type sunxi_rsb_bus = {
+	.name		= RSB_CTRL_NAME,
+	.match		= sunxi_rsb_device_match,
+	.probe		= sunxi_rsb_device_probe,
+	.remove		= sunxi_rsb_device_remove,
+};
+
+static void sunxi_rsb_dev_release(struct device *dev)
+{
+	struct sunxi_rsb_device *rdev = to_sunxi_rsb_device(dev);
+
+	kfree(rdev);
+}
+
+/**
+ * sunxi_rsb_device_create() - allocate and add an RSB device
+ * @rsb:	RSB controller
+ * @node:	RSB slave device node
+ * @hwaddr:	RSB slave hardware address
+ * @rtaddr:	RSB slave runtime address
+ */
+static struct sunxi_rsb_device *sunxi_rsb_device_create(struct sunxi_rsb *rsb,
+		struct device_node *node, u16 hwaddr, u8 rtaddr)
+{
+	int err;
+	struct sunxi_rsb_device *rdev;
+
+	rdev = kzalloc(sizeof(*rdev), GFP_KERNEL);
+	if (!rdev)
+		return ERR_PTR(-ENOMEM);
+
+	rdev->rsb = rsb;
+	rdev->hwaddr = hwaddr;
+	rdev->rtaddr = rtaddr;
+	rdev->dev.bus = &sunxi_rsb_bus;
+	rdev->dev.parent = rsb->dev;
+	rdev->dev.of_node = node;
+	rdev->dev.release = sunxi_rsb_dev_release;
+
+	dev_set_name(&rdev->dev, "%s-%x", RSB_CTRL_NAME, hwaddr);
+
+	err = device_register(&rdev->dev);
+	if (err < 0) {
+		dev_err(&rdev->dev, "Can't add %s, status %d\n",
+			dev_name(&rdev->dev), err);
+		goto err_device_add;
+	}
+
+	dev_dbg(&rdev->dev, "device %s registered\n", dev_name(&rdev->dev));
+
+	return rdev;
+
+err_device_add:
+	put_device(&rdev->dev);
+
+	return ERR_PTR(err);
+}
+
+/**
+ * sunxi_rsb_device_unregister(): unregister an RSB device
+ * @rdev:	rsb_device to be removed
+ */
+static void sunxi_rsb_device_unregister(struct sunxi_rsb_device *rdev)
+{
+	device_unregister(&rdev->dev);
+}
+
+static int sunxi_rsb_remove_devices(struct device *dev, void *data)
+{
+	struct sunxi_rsb_device *rdev = to_sunxi_rsb_device(dev);
+
+	if (dev->bus == &sunxi_rsb_bus)
+		sunxi_rsb_device_unregister(rdev);
+
+	return 0;
+}
+
+/**
+ * sunxi_rsb_driver_register() - Register device driver with RSB core
+ * @rdrv:	device driver to be associated with slave-device.
+ *
+ * This API will register the client driver with the RSB framework.
+ * It is typically called from the driver's module-init function.
+ */
+int sunxi_rsb_driver_register(struct sunxi_rsb_driver *rdrv)
+{
+	rdrv->driver.bus = &sunxi_rsb_bus;
+	return driver_register(&rdrv->driver);
+}
+EXPORT_SYMBOL_GPL(sunxi_rsb_driver_register);
+
+/* common code that starts a transfer */
+static int _sunxi_rsb_run_xfer(struct sunxi_rsb *rsb)
+{
+	if (readl(rsb->regs + RSB_CTRL) & RSB_CTRL_START_TRANS) {
+		dev_dbg(rsb->dev, "RSB transfer still in progress\n");
+		return -EBUSY;
+	}
+
+	reinit_completion(&rsb->complete);
+
+	writel(RSB_INTS_LOAD_BSY | RSB_INTS_TRANS_ERR | RSB_INTS_TRANS_OVER,
+	       rsb->regs + RSB_INTE);
+	writel(RSB_CTRL_START_TRANS | RSB_CTRL_GLOBAL_INT_ENB,
+	       rsb->regs + RSB_CTRL);
+
+	if (!wait_for_completion_io_timeout(&rsb->complete,
+					    msecs_to_jiffies(100))) {
+		dev_dbg(rsb->dev, "RSB timeout\n");
+
+		/* abort the transfer */
+		writel(RSB_CTRL_ABORT_TRANS, rsb->regs + RSB_CTRL);
+
+		/* clear any interrupt flags */
+		writel(readl(rsb->regs + RSB_INTS), rsb->regs + RSB_INTS);
+
+		return -ETIMEDOUT;
+	}
+
+	if (rsb->status & RSB_INTS_LOAD_BSY) {
+		dev_dbg(rsb->dev, "RSB busy\n");
+		return -EBUSY;
+	}
+
+	if (rsb->status & RSB_INTS_TRANS_ERR) {
+		if (rsb->status & RSB_INTS_TRANS_ERR_ACK) {
+			dev_dbg(rsb->dev, "RSB slave nack\n");
+			return -EINVAL;
+		}
+
+		if (rsb->status & RSB_INTS_TRANS_ERR_DATA) {
+			dev_dbg(rsb->dev, "RSB transfer data error\n");
+			return -EIO;
+		}
+	}
+
+	return 0;
+}
+
+static int sunxi_rsb_read(struct sunxi_rsb *rsb, u8 rtaddr, u8 addr,
+			  u32 *buf, size_t len)
+{
+	u32 cmd;
+	int ret;
+
+	if (!buf)
+		return -EINVAL;
+
+	switch (len) {
+	case 1:
+		cmd = RSB_CMD_RD8;
+		break;
+	case 2:
+		cmd = RSB_CMD_RD16;
+		break;
+	case 4:
+		cmd = RSB_CMD_RD32;
+		break;
+	default:
+		dev_err(rsb->dev, "Invalid access width: %zd\n", len);
+		return -EINVAL;
+	}
+
+	mutex_lock(&rsb->lock);
+
+	writel(addr, rsb->regs + RSB_ADDR);
+	writel(RSB_DAR_RTA(rtaddr), rsb->regs + RSB_DAR);
+	writel(cmd, rsb->regs + RSB_CMD);
+
+	ret = _sunxi_rsb_run_xfer(rsb);
+	if (ret)
+		goto unlock;
+
+	*buf = readl(rsb->regs + RSB_DATA);
+
+unlock:
+	/* the error path must release the mutex too */
+	mutex_unlock(&rsb->lock);
+
+	return ret;
+}
+
+static int sunxi_rsb_write(struct sunxi_rsb *rsb, u8 rtaddr, u8 addr,
+			   const u32 *buf, size_t len)
+{
+	u32 cmd;
+	int ret;
+
+	if (!buf)
+		return -EINVAL;
+
+	switch (len) {
+	case 1:
+		cmd = RSB_CMD_WR8;
+		break;
+	case 2:
+		cmd = RSB_CMD_WR16;
+		break;
+	case 4:
+		cmd = RSB_CMD_WR32;
+		break;
+	default:
+		dev_err(rsb->dev, "Invalid access width: %zd\n", len);
+		return -EINVAL;
+	}
+
+	mutex_lock(&rsb->lock);
+
+	writel(addr, rsb->regs + RSB_ADDR);
+	writel(RSB_DAR_RTA(rtaddr), rsb->regs + RSB_DAR);
+	writel(*buf, rsb->regs + RSB_DATA);
+	writel(cmd, rsb->regs + RSB_CMD);
+	ret = _sunxi_rsb_run_xfer(rsb);
+
+	mutex_unlock(&rsb->lock);
+
+	return ret;
+}
+
+/* RSB regmap functions */
+struct sunxi_rsb_ctx {
+	struct sunxi_rsb_device *rdev;
+	int size;
+};
+
+static int regmap_sunxi_rsb_reg_read(void *context, unsigned int reg,
+				     unsigned int *val)
+{
+	struct sunxi_rsb_ctx *ctx = context;
+	struct sunxi_rsb_device *rdev = ctx->rdev;
+
+	if (reg > 0xff)
+		return -EINVAL;
+
+	return sunxi_rsb_read(rdev->rsb, rdev->rtaddr, reg, val, ctx->size);
+}
+
+static int regmap_sunxi_rsb_reg_write(void *context, unsigned int reg,
+				      unsigned int val)
+{
+	struct sunxi_rsb_ctx *ctx = context;
+	struct sunxi_rsb_device *rdev = ctx->rdev;
+
+	return sunxi_rsb_write(rdev->rsb, rdev->rtaddr, reg, &val, ctx->size);
+}
+
+static void regmap_sunxi_rsb_free_ctx(void *context)
+{
+	struct sunxi_rsb_ctx *ctx = context;
+
+	kfree(ctx);
+}
+
+static struct regmap_bus regmap_sunxi_rsb = {
+	.reg_write = regmap_sunxi_rsb_reg_write,
+	.reg_read = regmap_sunxi_rsb_reg_read,
+	.free_context = regmap_sunxi_rsb_free_ctx,
+	.reg_format_endian_default = REGMAP_ENDIAN_NATIVE,
+	.val_format_endian_default = REGMAP_ENDIAN_NATIVE,
+};
+
+static struct sunxi_rsb_ctx *regmap_sunxi_rsb_init_ctx(struct sunxi_rsb_device *rdev,
+		const struct regmap_config *config)
+{
+	struct sunxi_rsb_ctx *ctx;
+
+	switch (config->val_bits) {
+	case 8:
+	case 16:
+	case 32:
+		break;
+	default:
+		return ERR_PTR(-EINVAL);
+	}
+
+	ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
+	if (!ctx)
+		return ERR_PTR(-ENOMEM);
+
+	ctx->rdev = rdev;
+	ctx->size = config->val_bits / 8;
+
+	return ctx;
+}
+
+struct regmap *__devm_regmap_init_sunxi_rsb(struct sunxi_rsb_device *rdev,
+					    const struct regmap_config *config,
+					    struct lock_class_key *lock_key,
+					    const char *lock_name)
+{
+	struct sunxi_rsb_ctx *ctx = regmap_sunxi_rsb_init_ctx(rdev, config);
+
+	if (IS_ERR(ctx))
+		return ERR_CAST(ctx);
+
+	return __devm_regmap_init(&rdev->dev, &regmap_sunxi_rsb, ctx, config,
+				  lock_key, lock_name);
+}
+EXPORT_SYMBOL_GPL(__devm_regmap_init_sunxi_rsb);
+
+/* RSB controller driver functions */
+static irqreturn_t sunxi_rsb_irq(int irq, void *dev_id)
+{
+	struct sunxi_rsb *rsb = dev_id;
+	u32 status;
+
+	status = readl(rsb->regs + RSB_INTS);
+	rsb->status = status;
+
+	/* Clear interrupts */
+	status &= (RSB_INTS_LOAD_BSY | RSB_INTS_TRANS_ERR |
+		   RSB_INTS_TRANS_OVER);
+	writel(status, rsb->regs + RSB_INTS);
+
+	complete(&rsb->complete);
+
+	return IRQ_HANDLED;
+}
+
+static int sunxi_rsb_init_device_mode(struct sunxi_rsb *rsb)
+{
+	int ret = 0;
+	u32 reg;
+
+	/* send init sequence */
+	writel(RSB_DMCR_DEVICE_START | RSB_DMCR_MODE_DATA |
+	       RSB_DMCR_MODE_REG | RSB_DMCR_DEV_ADDR, rsb->regs + RSB_DMCR);
+
+	readl_poll_timeout(rsb->regs + RSB_DMCR, reg,
+			   !(reg & RSB_DMCR_DEVICE_START), 100, 250000);
+	if (reg & RSB_DMCR_DEVICE_START)
+		ret = -ETIMEDOUT;
+
+	/* clear interrupt status bits */
+	writel(readl(rsb->regs + RSB_INTS), rsb->regs + RSB_INTS);
+
+	return ret;
+}
+
+/*
+ * There are 15 valid runtime addresses, though Allwinner typically
+ * skips the first, for unknown reasons, and uses the following three.
+ *
+ * 0x17, 0x2d, 0x3a, 0x4e, 0x59, 0x63, 0x74, 0x8b,
+ * 0x9c, 0xa6, 0xb1, 0xc5, 0xd2, 0xe8, 0xff
+ *
+ * No designs with 2 RSB slave devices sharing identical hardware
+ * addresses on the same bus have been seen in the wild. All designs
+ * use 0x2d for the primary PMIC, 0x3a for the secondary PMIC if
+ * there is one, and 0x45 for peripheral ICs.
+ *
+ * The hardware does not seem to support re-setting runtime addresses.
+ * Attempts to do so result in the slave devices returning a NACK.
+ * Hence we just hardcode the mapping here, like Allwinner does.
+ */
+
+static const struct sunxi_rsb_addr_map sunxi_rsb_addr_maps[] = {
+	{ 0x3e3, 0x2d }, /* Primary PMIC: AXP223, AXP809, AXP81X, ... */
+	{ 0x745, 0x3a }, /* Secondary PMIC: AXP806, ... */
+	{ 0xe89, 0x45 }, /* Peripheral IC: AC100, ... */
+};
+
+static u8 sunxi_rsb_get_rtaddr(u16 hwaddr)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(sunxi_rsb_addr_maps); i++)
+		if (hwaddr == sunxi_rsb_addr_maps[i].hwaddr)
+			return sunxi_rsb_addr_maps[i].rtaddr;
+
+	return 0; /* 0 is an invalid runtime address */
+}
+
+static int of_rsb_register_devices(struct sunxi_rsb *rsb)
+{
+	struct device *dev = rsb->dev;
+	struct device_node *child, *np = dev->of_node;
+	u32 hwaddr;
+	u8 rtaddr;
+	int ret;
+
+	if (!np)
+		return -EINVAL;
+
+	/* Runtime addresses for all slaves should be set first */
+	for_each_available_child_of_node(np, child) {
+		dev_dbg(dev, "setting child %s runtime address\n",
+			child->full_name);
+
+		ret = of_property_read_u32(child, "reg", &hwaddr);
+		if (ret) {
+			dev_err(dev, "%s: invalid 'reg' property: %d\n",
+				child->full_name, ret);
+			continue;
+		}
+
+		rtaddr = sunxi_rsb_get_rtaddr(hwaddr);
+		if (!rtaddr) {
+			dev_err(dev, "%s: unknown hardware device address\n",
+				child->full_name);
+			continue;
+		}
+
+		/*
+		 * Since no devices have been registered yet, we are the
+		 * only ones using the bus, we can skip locking the bus.
579 + */ 580 + 581 + /* setup command parameters */ 582 + writel(RSB_CMD_STRA, rsb->regs + RSB_CMD); 583 + writel(RSB_DAR_RTA(rtaddr) | RSB_DAR_DA(hwaddr), 584 + rsb->regs + RSB_DAR); 585 + 586 + /* send command */ 587 + ret = _sunxi_rsb_run_xfer(rsb); 588 + if (ret) 589 + dev_warn(dev, "%s: set runtime address failed: %d\n", 590 + child->full_name, ret); 591 + } 592 + 593 + /* Then we start adding devices and probing them */ 594 + for_each_available_child_of_node(np, child) { 595 + struct sunxi_rsb_device *rdev; 596 + 597 + dev_dbg(dev, "adding child %s\n", child->full_name); 598 + 599 + ret = of_property_read_u32(child, "reg", &hwaddr); 600 + if (ret) 601 + continue; 602 + 603 + rtaddr = sunxi_rsb_get_rtaddr(hwaddr); 604 + if (!rtaddr) 605 + continue; 606 + 607 + rdev = sunxi_rsb_device_create(rsb, child, hwaddr, rtaddr); 608 + if (IS_ERR(rdev)) 609 + dev_err(dev, "failed to add child device %s: %ld\n", 610 + child->full_name, PTR_ERR(rdev)); 611 + } 612 + 613 + return 0; 614 + } 615 + 616 + static const struct of_device_id sunxi_rsb_of_match_table[] = { 617 + { .compatible = "allwinner,sun8i-a23-rsb" }, 618 + {} 619 + }; 620 + MODULE_DEVICE_TABLE(of, sunxi_rsb_of_match_table); 621 + 622 + static int sunxi_rsb_probe(struct platform_device *pdev) 623 + { 624 + struct device *dev = &pdev->dev; 625 + struct device_node *np = dev->of_node; 626 + struct resource *r; 627 + struct sunxi_rsb *rsb; 628 + unsigned long p_clk_freq; 629 + u32 clk_delay, clk_freq = 3000000; 630 + int clk_div, irq, ret; 631 + u32 reg; 632 + 633 + of_property_read_u32(np, "clock-frequency", &clk_freq); 634 + if (clk_freq > RSB_MAX_FREQ) { 635 + dev_err(dev, 636 + "clock-frequency (%u Hz) is too high (max = 20MHz)\n", 637 + clk_freq); 638 + return -EINVAL; 639 + } 640 + 641 + rsb = devm_kzalloc(dev, sizeof(*rsb), GFP_KERNEL); 642 + if (!rsb) 643 + return -ENOMEM; 644 + 645 + rsb->dev = dev; 646 + platform_set_drvdata(pdev, rsb); 647 + r = platform_get_resource(pdev, IORESOURCE_MEM, 0); 648 + 
rsb->regs = devm_ioremap_resource(dev, r); 649 + if (IS_ERR(rsb->regs)) 650 + return PTR_ERR(rsb->regs); 651 + 652 + irq = platform_get_irq(pdev, 0); 653 + if (irq < 0) { 654 + dev_err(dev, "failed to retrieve irq: %d\n", irq); 655 + return irq; 656 + } 657 + 658 + rsb->clk = devm_clk_get(dev, NULL); 659 + if (IS_ERR(rsb->clk)) { 660 + ret = PTR_ERR(rsb->clk); 661 + dev_err(dev, "failed to retrieve clk: %d\n", ret); 662 + return ret; 663 + } 664 + 665 + ret = clk_prepare_enable(rsb->clk); 666 + if (ret) { 667 + dev_err(dev, "failed to enable clk: %d\n", ret); 668 + return ret; 669 + } 670 + 671 + p_clk_freq = clk_get_rate(rsb->clk); 672 + 673 + rsb->rstc = devm_reset_control_get(dev, NULL); 674 + if (IS_ERR(rsb->rstc)) { 675 + ret = PTR_ERR(rsb->rstc); 676 + dev_err(dev, "failed to retrieve reset controller: %d\n", ret); 677 + goto err_clk_disable; 678 + } 679 + 680 + ret = reset_control_deassert(rsb->rstc); 681 + if (ret) { 682 + dev_err(dev, "failed to deassert reset line: %d\n", ret); 683 + goto err_clk_disable; 684 + } 685 + 686 + init_completion(&rsb->complete); 687 + mutex_init(&rsb->lock); 688 + 689 + /* reset the controller */ 690 + writel(RSB_CTRL_SOFT_RST, rsb->regs + RSB_CTRL); 691 + readl_poll_timeout(rsb->regs + RSB_CTRL, reg, 692 + !(reg & RSB_CTRL_SOFT_RST), 1000, 100000); 693 + 694 + /* 695 + * Clock frequency and delay calculation code is from 696 + * Allwinner U-boot sources. 
697 + * 698 + * From A83 user manual: 699 + * bus clock frequency = parent clock frequency / (2 * (divider + 1)) 700 + */ 701 + clk_div = p_clk_freq / clk_freq / 2; 702 + if (!clk_div) 703 + clk_div = 1; 704 + else if (clk_div > RSB_CCR_MAX_CLK_DIV + 1) 705 + clk_div = RSB_CCR_MAX_CLK_DIV + 1; 706 + 707 + clk_delay = clk_div >> 1; 708 + if (!clk_delay) 709 + clk_delay = 1; 710 + 711 + dev_info(dev, "RSB running at %lu Hz\n", p_clk_freq / clk_div / 2); 712 + writel(RSB_CCR_SDA_OUT_DELAY(clk_delay) | RSB_CCR_CLK_DIV(clk_div - 1), 713 + rsb->regs + RSB_CCR); 714 + 715 + ret = devm_request_irq(dev, irq, sunxi_rsb_irq, 0, RSB_CTRL_NAME, rsb); 716 + if (ret) { 717 + dev_err(dev, "can't register interrupt handler irq %d: %d\n", 718 + irq, ret); 719 + goto err_reset_assert; 720 + } 721 + 722 + /* initialize all devices on the bus into RSB mode */ 723 + ret = sunxi_rsb_init_device_mode(rsb); 724 + if (ret) 725 + dev_warn(dev, "Initialize device mode failed: %d\n", ret); 726 + 727 + of_rsb_register_devices(rsb); 728 + 729 + return 0; 730 + 731 + err_reset_assert: 732 + reset_control_assert(rsb->rstc); 733 + 734 + err_clk_disable: 735 + clk_disable_unprepare(rsb->clk); 736 + 737 + return ret; 738 + } 739 + 740 + static int sunxi_rsb_remove(struct platform_device *pdev) 741 + { 742 + struct sunxi_rsb *rsb = platform_get_drvdata(pdev); 743 + 744 + device_for_each_child(rsb->dev, NULL, sunxi_rsb_remove_devices); 745 + reset_control_assert(rsb->rstc); 746 + clk_disable_unprepare(rsb->clk); 747 + 748 + return 0; 749 + } 750 + 751 + static struct platform_driver sunxi_rsb_driver = { 752 + .probe = sunxi_rsb_probe, 753 + .remove = sunxi_rsb_remove, 754 + .driver = { 755 + .name = RSB_CTRL_NAME, 756 + .of_match_table = sunxi_rsb_of_match_table, 757 + }, 758 + }; 759 + 760 + static int __init sunxi_rsb_init(void) 761 + { 762 + int ret; 763 + 764 + ret = bus_register(&sunxi_rsb_bus); 765 + if (ret) { 766 + pr_err("failed to register sunxi sunxi_rsb bus: %d\n", ret); 767 + return ret; 
768 + } 769 + 770 + return platform_driver_register(&sunxi_rsb_driver); 771 + } 772 + module_init(sunxi_rsb_init); 773 + 774 + static void __exit sunxi_rsb_exit(void) 775 + { 776 + platform_driver_unregister(&sunxi_rsb_driver); 777 + bus_unregister(&sunxi_rsb_bus); 778 + } 779 + module_exit(sunxi_rsb_exit); 780 + 781 + MODULE_AUTHOR("Chen-Yu Tsai <wens@csie.org>"); 782 + MODULE_DESCRIPTION("Allwinner sunXi Reduced Serial Bus controller driver"); 783 + MODULE_LICENSE("GPL v2");
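The sunxi-rsb probe path above programs RSB_CCR from the A83 manual's formula (bus clock = parent clock / (2 * (divider + 1)), with the register field holding divider - 1). Here is a standalone sketch of that clamped divider arithmetic; it is a model for illustration only, reusing the driver's RSB_CCR_MAX_CLK_DIV value of 0xff:

```c
#include <assert.h>

/* Model of the divider math in sunxi_rsb_probe(): pick the divider for a
 * wanted bus rate, clamp it to the hardware range, and report the rate
 * actually achieved. The register itself stores (clk_div - 1). */
#define RSB_CCR_MAX_CLK_DIV	0xff

static unsigned long rsb_bus_rate(unsigned long parent, unsigned long wanted)
{
	unsigned long clk_div = parent / wanted / 2;

	if (!clk_div)
		clk_div = 1;
	else if (clk_div > RSB_CCR_MAX_CLK_DIV + 1)
		clk_div = RSB_CCR_MAX_CLK_DIV + 1;

	return parent / clk_div / 2;	/* the rate dev_info() would report */
}
```

With a 24 MHz parent and the default 3 MHz clock-frequency, clk_div comes out as 4 and the bus runs at exactly 3 MHz.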
+10
drivers/clk/Kconfig
··· 60 60 clocked at 32KHz each. Clkout1 is always on, Clkout2 can off 61 61 by control register. 62 62 63 + config COMMON_CLK_SCPI 64 + tristate "Clock driver controlled via SCPI interface" 65 + depends on ARM_SCPI_PROTOCOL || COMPILE_TEST 66 + ---help--- 67 + This driver provides support for clocks that are controlled 68 + by firmware that implements the SCPI interface. 69 + 70 + This driver uses SCPI Message Protocol to interact with the 71 + firmware providing all the clock controls. 72 + 63 73 config COMMON_CLK_SI5351 64 74 tristate "Clock driver for SiLabs 5351A/B/C" 65 75 depends on I2C
+1
drivers/clk/Makefile
··· 36 36 obj-$(CONFIG_CLK_QORIQ) += clk-qoriq.o 37 37 obj-$(CONFIG_COMMON_CLK_RK808) += clk-rk808.o 38 38 obj-$(CONFIG_COMMON_CLK_S2MPS11) += clk-s2mps11.o 39 + obj-$(CONFIG_COMMON_CLK_SCPI) += clk-scpi.o 39 40 obj-$(CONFIG_COMMON_CLK_SI5351) += clk-si5351.o 40 41 obj-$(CONFIG_COMMON_CLK_SI514) += clk-si514.o 41 42 obj-$(CONFIG_COMMON_CLK_SI570) += clk-si570.o
+7 -7
drivers/clk/berlin/bg2q.c
··· 45 45 #define REG_SDIO0XIN_CLKCTL 0x0158 46 46 #define REG_SDIO1XIN_CLKCTL 0x015c 47 47 48 - #define MAX_CLKS 27 48 + #define MAX_CLKS 28 49 49 static struct clk *clks[MAX_CLKS]; 50 50 static struct clk_onecell_data clk_data; 51 51 static DEFINE_SPINLOCK(lock); ··· 356 356 gd->bit_idx, 0, &lock); 357 357 } 358 358 359 - /* 360 - * twdclk is derived from cpu/3 361 - * TODO: use cpupll until cpuclk is not available 362 - */ 359 + /* cpuclk divider is fixed to 1 */ 360 + clks[CLKID_CPU] = 361 + clk_register_fixed_factor(NULL, "cpu", clk_names[CPUPLL], 362 + 0, 1, 1); 363 + /* twdclk is derived from cpu/3 */ 363 364 clks[CLKID_TWD] = 364 - clk_register_fixed_factor(NULL, "twd", clk_names[CPUPLL], 365 - 0, 1, 3); 365 + clk_register_fixed_factor(NULL, "twd", "cpu", 0, 1, 3); 366 366 367 367 /* check for errors on leaf clocks */ 368 368 for (n = 0; n < MAX_CLKS; n++) {
+325
drivers/clk/clk-scpi.c
··· 1 + /* 2 + * System Control and Power Interface (SCPI) Protocol based clock driver 3 + * 4 + * Copyright (C) 2015 ARM Ltd. 5 + * 6 + * This program is free software; you can redistribute it and/or modify it 7 + * under the terms and conditions of the GNU General Public License, 8 + * version 2, as published by the Free Software Foundation. 9 + * 10 + * This program is distributed in the hope it will be useful, but WITHOUT 11 + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 12 + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 13 + * more details. 14 + * 15 + * You should have received a copy of the GNU General Public License along with 16 + * this program. If not, see <http://www.gnu.org/licenses/>. 17 + */ 18 + 19 + #include <linux/clk-provider.h> 20 + #include <linux/device.h> 21 + #include <linux/err.h> 22 + #include <linux/of.h> 23 + #include <linux/module.h> 24 + #include <linux/of_platform.h> 25 + #include <linux/platform_device.h> 26 + #include <linux/scpi_protocol.h> 27 + 28 + struct scpi_clk { 29 + u32 id; 30 + struct clk_hw hw; 31 + struct scpi_dvfs_info *info; 32 + struct scpi_ops *scpi_ops; 33 + }; 34 + 35 + #define to_scpi_clk(clk) container_of(clk, struct scpi_clk, hw) 36 + 37 + static struct platform_device *cpufreq_dev; 38 + 39 + static unsigned long scpi_clk_recalc_rate(struct clk_hw *hw, 40 + unsigned long parent_rate) 41 + { 42 + struct scpi_clk *clk = to_scpi_clk(hw); 43 + 44 + return clk->scpi_ops->clk_get_val(clk->id); 45 + } 46 + 47 + static long scpi_clk_round_rate(struct clk_hw *hw, unsigned long rate, 48 + unsigned long *parent_rate) 49 + { 50 + /* 51 + * We can't figure out what rate it will be, so just return the 52 + * rate back to the caller. scpi_clk_recalc_rate() will be called 53 + * after the rate is set and we'll know what rate the clock is 54 + * running at then. 
55 + */ 56 + return rate; 57 + } 58 + 59 + static int scpi_clk_set_rate(struct clk_hw *hw, unsigned long rate, 60 + unsigned long parent_rate) 61 + { 62 + struct scpi_clk *clk = to_scpi_clk(hw); 63 + 64 + return clk->scpi_ops->clk_set_val(clk->id, rate); 65 + } 66 + 67 + static const struct clk_ops scpi_clk_ops = { 68 + .recalc_rate = scpi_clk_recalc_rate, 69 + .round_rate = scpi_clk_round_rate, 70 + .set_rate = scpi_clk_set_rate, 71 + }; 72 + 73 + /* find closest match to given frequency in OPP table */ 74 + static int __scpi_dvfs_round_rate(struct scpi_clk *clk, unsigned long rate) 75 + { 76 + int idx; 77 + u32 fmin = 0, fmax = ~0, ftmp; 78 + const struct scpi_opp *opp = clk->info->opps; 79 + 80 + for (idx = 0; idx < clk->info->count; idx++, opp++) { 81 + ftmp = opp->freq; 82 + if (ftmp >= (u32)rate) { 83 + if (ftmp <= fmax) 84 + fmax = ftmp; 85 + break; 86 + } else if (ftmp >= fmin) { 87 + fmin = ftmp; 88 + } 89 + } 90 + return fmax != ~0 ? fmax : fmin; 91 + } 92 + 93 + static unsigned long scpi_dvfs_recalc_rate(struct clk_hw *hw, 94 + unsigned long parent_rate) 95 + { 96 + struct scpi_clk *clk = to_scpi_clk(hw); 97 + int idx = clk->scpi_ops->dvfs_get_idx(clk->id); 98 + const struct scpi_opp *opp; 99 + 100 + if (idx < 0) 101 + return 0; 102 + 103 + opp = clk->info->opps + idx; 104 + return opp->freq; 105 + } 106 + 107 + static long scpi_dvfs_round_rate(struct clk_hw *hw, unsigned long rate, 108 + unsigned long *parent_rate) 109 + { 110 + struct scpi_clk *clk = to_scpi_clk(hw); 111 + 112 + return __scpi_dvfs_round_rate(clk, rate); 113 + } 114 + 115 + static int __scpi_find_dvfs_index(struct scpi_clk *clk, unsigned long rate) 116 + { 117 + int idx, max_opp = clk->info->count; 118 + const struct scpi_opp *opp = clk->info->opps; 119 + 120 + for (idx = 0; idx < max_opp; idx++, opp++) 121 + if (opp->freq == rate) 122 + return idx; 123 + return -EINVAL; 124 + } 125 + 126 + static int scpi_dvfs_set_rate(struct clk_hw *hw, unsigned long rate, 127 + unsigned long 
parent_rate) 128 + { 129 + struct scpi_clk *clk = to_scpi_clk(hw); 130 + int ret = __scpi_find_dvfs_index(clk, rate); 131 + 132 + if (ret < 0) 133 + return ret; 134 + return clk->scpi_ops->dvfs_set_idx(clk->id, (u8)ret); 135 + } 136 + 137 + static const struct clk_ops scpi_dvfs_ops = { 138 + .recalc_rate = scpi_dvfs_recalc_rate, 139 + .round_rate = scpi_dvfs_round_rate, 140 + .set_rate = scpi_dvfs_set_rate, 141 + }; 142 + 143 + static const struct of_device_id scpi_clk_match[] = { 144 + { .compatible = "arm,scpi-dvfs-clocks", .data = &scpi_dvfs_ops, }, 145 + { .compatible = "arm,scpi-variable-clocks", .data = &scpi_clk_ops, }, 146 + {} 147 + }; 148 + 149 + static struct clk * 150 + scpi_clk_ops_init(struct device *dev, const struct of_device_id *match, 151 + struct scpi_clk *sclk, const char *name) 152 + { 153 + struct clk_init_data init; 154 + struct clk *clk; 155 + unsigned long min = 0, max = 0; 156 + 157 + init.name = name; 158 + init.flags = CLK_IS_ROOT; 159 + init.num_parents = 0; 160 + init.ops = match->data; 161 + sclk->hw.init = &init; 162 + sclk->scpi_ops = get_scpi_ops(); 163 + 164 + if (init.ops == &scpi_dvfs_ops) { 165 + sclk->info = sclk->scpi_ops->dvfs_get_info(sclk->id); 166 + if (IS_ERR(sclk->info)) 167 + return NULL; 168 + } else if (init.ops == &scpi_clk_ops) { 169 + if (sclk->scpi_ops->clk_get_range(sclk->id, &min, &max) || !max) 170 + return NULL; 171 + } else { 172 + return NULL; 173 + } 174 + 175 + clk = devm_clk_register(dev, &sclk->hw); 176 + if (!IS_ERR(clk) && max) 177 + clk_hw_set_rate_range(&sclk->hw, min, max); 178 + return clk; 179 + } 180 + 181 + struct scpi_clk_data { 182 + struct scpi_clk **clk; 183 + unsigned int clk_num; 184 + }; 185 + 186 + static struct clk * 187 + scpi_of_clk_src_get(struct of_phandle_args *clkspec, void *data) 188 + { 189 + struct scpi_clk *sclk; 190 + struct scpi_clk_data *clk_data = data; 191 + unsigned int idx = clkspec->args[0], count; 192 + 193 + for (count = 0; count < clk_data->clk_num; count++) { 194 
+ sclk = clk_data->clk[count]; 195 + if (idx == sclk->id) 196 + return sclk->hw.clk; 197 + } 198 + 199 + return ERR_PTR(-EINVAL); 200 + } 201 + 202 + static int scpi_clk_add(struct device *dev, struct device_node *np, 203 + const struct of_device_id *match) 204 + { 205 + struct clk **clks; 206 + int idx, count; 207 + struct scpi_clk_data *clk_data; 208 + 209 + count = of_property_count_strings(np, "clock-output-names"); 210 + if (count < 0) { 211 + dev_err(dev, "%s: invalid clock output count\n", np->name); 212 + return -EINVAL; 213 + } 214 + 215 + clk_data = devm_kmalloc(dev, sizeof(*clk_data), GFP_KERNEL); 216 + if (!clk_data) 217 + return -ENOMEM; 218 + 219 + clk_data->clk_num = count; 220 + clk_data->clk = devm_kcalloc(dev, count, sizeof(*clk_data->clk), 221 + GFP_KERNEL); 222 + if (!clk_data->clk) 223 + return -ENOMEM; 224 + 225 + clks = devm_kcalloc(dev, count, sizeof(*clks), GFP_KERNEL); 226 + if (!clks) 227 + return -ENOMEM; 228 + 229 + for (idx = 0; idx < count; idx++) { 230 + struct scpi_clk *sclk; 231 + const char *name; 232 + u32 val; 233 + 234 + sclk = devm_kzalloc(dev, sizeof(*sclk), GFP_KERNEL); 235 + if (!sclk) 236 + return -ENOMEM; 237 + 238 + if (of_property_read_string_index(np, "clock-output-names", 239 + idx, &name)) { 240 + dev_err(dev, "invalid clock name @ %s\n", np->name); 241 + return -EINVAL; 242 + } 243 + 244 + if (of_property_read_u32_index(np, "clock-indices", 245 + idx, &val)) { 246 + dev_err(dev, "invalid clock index @ %s\n", np->name); 247 + return -EINVAL; 248 + } 249 + 250 + sclk->id = val; 251 + 252 + clks[idx] = scpi_clk_ops_init(dev, match, sclk, name); 253 + if (IS_ERR_OR_NULL(clks[idx])) 254 + dev_err(dev, "failed to register clock '%s'\n", name); 255 + else 256 + dev_dbg(dev, "Registered clock '%s'\n", name); 257 + clk_data->clk[idx] = sclk; 258 + } 259 + 260 + return of_clk_add_provider(np, scpi_of_clk_src_get, clk_data); 261 + } 262 + 263 + static int scpi_clocks_remove(struct platform_device *pdev) 264 + { 265 + struct 
device *dev = &pdev->dev; 266 + struct device_node *child, *np = dev->of_node; 267 + 268 + if (cpufreq_dev) { 269 + platform_device_unregister(cpufreq_dev); 270 + cpufreq_dev = NULL; 271 + } 272 + 273 + for_each_available_child_of_node(np, child) 274 + of_clk_del_provider(np); 275 + return 0; 276 + } 277 + 278 + static int scpi_clocks_probe(struct platform_device *pdev) 279 + { 280 + int ret; 281 + struct device *dev = &pdev->dev; 282 + struct device_node *child, *np = dev->of_node; 283 + const struct of_device_id *match; 284 + 285 + if (!get_scpi_ops()) 286 + return -ENXIO; 287 + 288 + for_each_available_child_of_node(np, child) { 289 + match = of_match_node(scpi_clk_match, child); 290 + if (!match) 291 + continue; 292 + ret = scpi_clk_add(dev, child, match); 293 + if (ret) { 294 + scpi_clocks_remove(pdev); 295 + return ret; 296 + } 297 + } 298 + /* Add the virtual cpufreq device */ 299 + cpufreq_dev = platform_device_register_simple("scpi-cpufreq", 300 + -1, NULL, 0); 301 + if (!cpufreq_dev) 302 + pr_warn("unable to register cpufreq device"); 303 + 304 + return 0; 305 + } 306 + 307 + static const struct of_device_id scpi_clocks_ids[] = { 308 + { .compatible = "arm,scpi-clocks", }, 309 + {} 310 + }; 311 + MODULE_DEVICE_TABLE(of, scpi_clocks_ids); 312 + 313 + static struct platform_driver scpi_clocks_driver = { 314 + .driver = { 315 + .name = "scpi_clocks", 316 + .of_match_table = scpi_clocks_ids, 317 + }, 318 + .probe = scpi_clocks_probe, 319 + .remove = scpi_clocks_remove, 320 + }; 321 + module_platform_driver(scpi_clocks_driver); 322 + 323 + MODULE_AUTHOR("Sudeep Holla <sudeep.holla@arm.com>"); 324 + MODULE_DESCRIPTION("ARM SCPI clock driver"); 325 + MODULE_LICENSE("GPL v2");
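In clk-scpi.c above, __scpi_dvfs_round_rate() walks the OPP table and returns the smallest frequency at or above the request, falling back to the highest one below it. The same selection logic can be exercised on a plain array; this is a sketch, not the driver code, and scpi_round_freq()/demo_round() are hypothetical helpers. The table is assumed sorted ascending, as the driver's fmin/fmax scan expects:

```c
#include <assert.h>
#include <stddef.h>

/* Mirror of the fmin/fmax scan in __scpi_dvfs_round_rate(); freqs[]
 * must be sorted in ascending order. */
static unsigned int scpi_round_freq(const unsigned int *freqs, size_t count,
				    unsigned int rate)
{
	unsigned int fmin = 0, fmax = ~0u, ftmp;
	size_t idx;

	for (idx = 0; idx < count; idx++) {
		ftmp = freqs[idx];
		if (ftmp >= rate) {
			if (ftmp <= fmax)
				fmax = ftmp;
			break;
		} else if (ftmp >= fmin) {
			fmin = ftmp;
		}
	}
	return fmax != ~0u ? fmax : fmin;
}

/* hypothetical four-entry OPP table, frequencies in MHz */
static unsigned int demo_round(unsigned int rate)
{
	static const unsigned int opps[] = { 200, 400, 600, 800 };

	return scpi_round_freq(opps, 4, rate);
}
```

A request between two OPPs rounds up (500 selects 600), while a request above the table clamps to the highest entry.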
+11 -3
drivers/clocksource/tcb_clksrc.c
··· 193 193 struct clk *t2_clk = tc->clk[2]; 194 194 int irq = tc->irq[2]; 195 195 196 - /* try to enable t2 clk to avoid future errors in mode change */ 197 - ret = clk_prepare_enable(t2_clk); 196 + ret = clk_prepare_enable(tc->slow_clk); 198 197 if (ret) 199 198 return ret; 199 + 200 + /* try to enable t2 clk to avoid future errors in mode change */ 201 + ret = clk_prepare_enable(t2_clk); 202 + if (ret) { 203 + clk_disable_unprepare(tc->slow_clk); 204 + return ret; 205 + } 206 + 200 207 clk_disable(t2_clk); 201 208 202 209 clkevt.regs = tc->regs; ··· 215 208 216 209 ret = request_irq(irq, ch2_irq, IRQF_TIMER, "tc_clkevt", &clkevt); 217 210 if (ret) { 218 - clk_disable_unprepare(t2_clk); 211 + clk_unprepare(t2_clk); 212 + clk_disable_unprepare(tc->slow_clk); 219 213 return ret; 220 214 } 221 215
+22 -9
drivers/clocksource/timer-atmel-st.c
··· 22 22 #include <linux/kernel.h> 23 23 #include <linux/interrupt.h> 24 24 #include <linux/irq.h> 25 + #include <linux/clk.h> 25 26 #include <linux/clockchips.h> 26 27 #include <linux/export.h> 27 28 #include <linux/mfd/syscon.h> ··· 34 33 static u32 irqmask; 35 34 static struct clock_event_device clkevt; 36 35 static struct regmap *regmap_st; 37 - 38 - #define AT91_SLOW_CLOCK 32768 39 - #define RM9200_TIMER_LATCH ((AT91_SLOW_CLOCK + HZ/2) / HZ) 36 + static int timer_latch; 40 37 41 38 /* 42 39 * The ST_CRTR is updated asynchronously to the master clock ... but ··· 81 82 if (sr & AT91_ST_PITS) { 82 83 u32 crtr = read_CRTR(); 83 84 84 - while (((crtr - last_crtr) & AT91_ST_CRTV) >= RM9200_TIMER_LATCH) { 85 - last_crtr += RM9200_TIMER_LATCH; 85 + while (((crtr - last_crtr) & AT91_ST_CRTV) >= timer_latch) { 86 + last_crtr += timer_latch; 86 87 clkevt.event_handler(&clkevt); 87 88 } 88 89 return IRQ_HANDLED; ··· 143 144 144 145 /* PIT for periodic irqs; fixed rate of 1/HZ */ 145 146 irqmask = AT91_ST_PITS; 146 - regmap_write(regmap_st, AT91_ST_PIMR, RM9200_TIMER_LATCH); 147 + regmap_write(regmap_st, AT91_ST_PIMR, timer_latch); 147 148 regmap_write(regmap_st, AT91_ST_IER, irqmask); 148 149 return 0; 149 150 } ··· 196 197 */ 197 198 static void __init atmel_st_timer_init(struct device_node *node) 198 199 { 199 - unsigned int val; 200 + struct clk *sclk; 201 + unsigned int sclk_rate, val; 200 202 int irq, ret; 201 203 202 204 regmap_st = syscon_node_to_regmap(node); ··· 221 221 if (ret) 222 222 panic(pr_fmt("Unable to setup IRQ\n")); 223 223 224 + sclk = of_clk_get(node, 0); 225 + if (IS_ERR(sclk)) 226 + panic(pr_fmt("Unable to get slow clock\n")); 227 + 228 + clk_prepare_enable(sclk); 229 + if (ret) 230 + panic(pr_fmt("Could not enable slow clock\n")); 231 + 232 + sclk_rate = clk_get_rate(sclk); 233 + if (!sclk_rate) 234 + panic(pr_fmt("Invalid slow clock rate\n")); 235 + timer_latch = (sclk_rate + HZ / 2) / HZ; 236 + 224 237 /* The 32KiHz "Slow Clock" (tick every 
30517.58 nanoseconds) is used 225 238 * directly for the clocksource and all clockevents, after adjusting 226 239 * its prescaler from the 1 Hz default. ··· 242 229 243 230 /* Setup timer clockevent, with minimum of two ticks (important!!) */ 244 231 clkevt.cpumask = cpumask_of(0); 245 - clockevents_config_and_register(&clkevt, AT91_SLOW_CLOCK, 232 + clockevents_config_and_register(&clkevt, sclk_rate, 246 233 2, AT91_ST_ALMV); 247 234 248 235 /* register clocksource */ 249 - clocksource_register_hz(&clk32k, AT91_SLOW_CLOCK); 236 + clocksource_register_hz(&clk32k, sclk_rate); 250 237 } 251 238 CLOCKSOURCE_OF_DECLARE(atmel_st_timer, "atmel,at91rm9200-st", 252 239 atmel_st_timer_init);
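The timer-atmel-st.c change above derives the PIT latch from the measured slow-clock rate instead of the hard-coded AT91_SLOW_CLOCK constant: (rate + HZ/2) / HZ is integer division rounded to nearest. A one-line model of that computation, for illustration only:

```c
#include <assert.h>

/* Rounded-to-nearest ticks per jiffy, as computed for timer_latch. */
static unsigned int st_timer_latch(unsigned int sclk_rate, unsigned int hz)
{
	return (sclk_rate + hz / 2) / hz;
}
```

At 32768 Hz with HZ=100 the exact value is 327.68 ticks per jiffy; plain truncation would give 327, while the rounded form gives the nearer 328.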
+10
drivers/cpufreq/Kconfig.arm
··· 199 199 config ARM_SA1110_CPUFREQ 200 200 bool 201 201 202 + config ARM_SCPI_CPUFREQ 203 + tristate "SCPI based CPUfreq driver" 204 + depends on ARM_BIG_LITTLE_CPUFREQ && ARM_SCPI_PROTOCOL 205 + help 206 + This adds the CPUfreq driver support for ARM big.LITTLE platforms 207 + using SCPI protocol for CPU power management. 208 + 209 + This driver uses SCPI Message Protocol driver to interact with the 210 + firmware providing the CPU DVFS functionality. 211 + 202 212 config ARM_SPEAR_CPUFREQ 203 213 bool "SPEAr CPUFreq support" 204 214 depends on PLAT_SPEAR
+1
drivers/cpufreq/Makefile
··· 71 71 obj-$(CONFIG_ARM_S5PV210_CPUFREQ) += s5pv210-cpufreq.o 72 72 obj-$(CONFIG_ARM_SA1100_CPUFREQ) += sa1100-cpufreq.o 73 73 obj-$(CONFIG_ARM_SA1110_CPUFREQ) += sa1110-cpufreq.o 74 + obj-$(CONFIG_ARM_SCPI_CPUFREQ) += scpi-cpufreq.o 74 75 obj-$(CONFIG_ARM_SPEAR_CPUFREQ) += spear-cpufreq.o 75 76 obj-$(CONFIG_ARM_TEGRA20_CPUFREQ) += tegra20-cpufreq.o 76 77 obj-$(CONFIG_ARM_TEGRA124_CPUFREQ) += tegra124-cpufreq.o
+124
drivers/cpufreq/scpi-cpufreq.c
··· 1 + /* 2 + * System Control and Power Interface (SCPI) based CPUFreq Interface driver 3 + * 4 + * It provides necessary ops to arm_big_little cpufreq driver. 5 + * 6 + * Copyright (C) 2015 ARM Ltd. 7 + * Sudeep Holla <sudeep.holla@arm.com> 8 + * 9 + * This program is free software; you can redistribute it and/or modify 10 + * it under the terms of the GNU General Public License version 2 as 11 + * published by the Free Software Foundation. 12 + * 13 + * This program is distributed "as is" WITHOUT ANY WARRANTY of any 14 + * kind, whether express or implied; without even the implied warranty 15 + * of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 16 + * GNU General Public License for more details. 17 + */ 18 + 19 + #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 20 + 21 + #include <linux/cpufreq.h> 22 + #include <linux/module.h> 23 + #include <linux/platform_device.h> 24 + #include <linux/pm_opp.h> 25 + #include <linux/scpi_protocol.h> 26 + #include <linux/types.h> 27 + 28 + #include "arm_big_little.h" 29 + 30 + static struct scpi_ops *scpi_ops; 31 + 32 + static struct scpi_dvfs_info *scpi_get_dvfs_info(struct device *cpu_dev) 33 + { 34 + u8 domain = topology_physical_package_id(cpu_dev->id); 35 + 36 + if (domain < 0) 37 + return ERR_PTR(-EINVAL); 38 + return scpi_ops->dvfs_get_info(domain); 39 + } 40 + 41 + static int scpi_opp_table_ops(struct device *cpu_dev, bool remove) 42 + { 43 + int idx, ret = 0; 44 + struct scpi_opp *opp; 45 + struct scpi_dvfs_info *info = scpi_get_dvfs_info(cpu_dev); 46 + 47 + if (IS_ERR(info)) 48 + return PTR_ERR(info); 49 + 50 + if (!info->opps) 51 + return -EIO; 52 + 53 + for (opp = info->opps, idx = 0; idx < info->count; idx++, opp++) { 54 + if (remove) 55 + dev_pm_opp_remove(cpu_dev, opp->freq); 56 + else 57 + ret = dev_pm_opp_add(cpu_dev, opp->freq, 58 + opp->m_volt * 1000); 59 + if (ret) { 60 + dev_warn(cpu_dev, "failed to add opp %uHz %umV\n", 61 + opp->freq, opp->m_volt); 62 + while (idx-- > 0) 63 + 
dev_pm_opp_remove(cpu_dev, (--opp)->freq); 64 + return ret; 65 + } 66 + } 67 + return ret; 68 + } 69 + 70 + static int scpi_get_transition_latency(struct device *cpu_dev) 71 + { 72 + struct scpi_dvfs_info *info = scpi_get_dvfs_info(cpu_dev); 73 + 74 + if (IS_ERR(info)) 75 + return PTR_ERR(info); 76 + return info->latency; 77 + } 78 + 79 + static int scpi_init_opp_table(struct device *cpu_dev) 80 + { 81 + return scpi_opp_table_ops(cpu_dev, false); 82 + } 83 + 84 + static void scpi_free_opp_table(struct device *cpu_dev) 85 + { 86 + scpi_opp_table_ops(cpu_dev, true); 87 + } 88 + 89 + static struct cpufreq_arm_bL_ops scpi_cpufreq_ops = { 90 + .name = "scpi", 91 + .get_transition_latency = scpi_get_transition_latency, 92 + .init_opp_table = scpi_init_opp_table, 93 + .free_opp_table = scpi_free_opp_table, 94 + }; 95 + 96 + static int scpi_cpufreq_probe(struct platform_device *pdev) 97 + { 98 + scpi_ops = get_scpi_ops(); 99 + if (!scpi_ops) 100 + return -EIO; 101 + 102 + return bL_cpufreq_register(&scpi_cpufreq_ops); 103 + } 104 + 105 + static int scpi_cpufreq_remove(struct platform_device *pdev) 106 + { 107 + bL_cpufreq_unregister(&scpi_cpufreq_ops); 108 + scpi_ops = NULL; 109 + return 0; 110 + } 111 + 112 + static struct platform_driver scpi_cpufreq_platdrv = { 113 + .driver = { 114 + .name = "scpi-cpufreq", 115 + .owner = THIS_MODULE, 116 + }, 117 + .probe = scpi_cpufreq_probe, 118 + .remove = scpi_cpufreq_remove, 119 + }; 120 + module_platform_driver(scpi_cpufreq_platdrv); 121 + 122 + MODULE_AUTHOR("Sudeep Holla <sudeep.holla@arm.com>"); 123 + MODULE_DESCRIPTION("ARM SCPI CPUFreq interface driver"); 124 + MODULE_LICENSE("GPL v2");
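scpi_opp_table_ops() above registers OPPs one at a time and, if dev_pm_opp_add() fails partway, unwinds the entries already added with while (idx-- > 0). The rollback pattern can be modelled against a toy fixed-size registry; opp_add(), opp_remove() and the selftest below are stand-ins for illustration, not kernel APIs:

```c
#include <assert.h>
#include <stddef.h>

#define MAX_OPPS 8

static unsigned int registry[MAX_OPPS];
static size_t registered;

static int opp_add(unsigned int freq)
{
	if (registered >= MAX_OPPS)
		return -1;	/* simulated dev_pm_opp_add() failure */
	registry[registered++] = freq;
	return 0;
}

static void opp_remove(unsigned int freq)
{
	/* entries are unwound in reverse order, so the tail always matches */
	if (registered && registry[registered - 1] == freq)
		registered--;
}

/* add opps[0..count); on failure, remove everything added so far */
static int opp_table_add(const unsigned int *opps, size_t count)
{
	size_t idx;

	for (idx = 0; idx < count; idx++) {
		if (opp_add(opps[idx])) {
			while (idx-- > 0)
				opp_remove(opps[idx]);
			return -1;
		}
	}
	return 0;
}

static int opp_rollback_selftest(void)
{
	static const unsigned int opps[10] = { 100, 200, 300, 400, 500,
					       600, 700, 800, 900, 1000 };

	/* ten entries overflow the 8-slot registry: must fail and unwind */
	if (opp_table_add(opps, 10) != -1 || registered != 0)
		return 0;
	/* a table that fits must register fully */
	if (opp_table_add(opps, 4) != 0 || registered != 4)
		return 0;
	return 1;
}
```

Unwinding in reverse keeps the registry consistent whichever entry fails, which is the same property the driver relies on.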
+26
drivers/firmware/Kconfig
··· 8 8 config ARM_PSCI_FW 9 9 bool 10 10 11 + config ARM_SCPI_PROTOCOL 12 + tristate "ARM System Control and Power Interface (SCPI) Message Protocol" 13 + depends on ARM_MHU 14 + help 15 + System Control and Power Interface (SCPI) Message Protocol is 16 + defined for the purpose of communication between the Application 17 + Cores (AP) and the System Control Processor (SCP). The MHU peripheral 18 + provides a mechanism for inter-processor communication between SCP 19 + and AP. 20 + 21 + SCP controls most of the power management on the Application 22 + Processors. It offers control and management of: the core/cluster 23 + power states, various power domain DVFS including the core/cluster, 24 + certain system clocks configuration, thermal sensors and many 25 + others. 26 + 27 + This protocol library provides an interface for all the client drivers 28 + making use of the features offered by the SCP. 29 + 11 30 config EDD 12 31 tristate "BIOS Enhanced Disk Drive calls determine boot disk" 13 32 depends on X86 ··· 153 134 Boot Firmware Table (iBFT) via sysfs to userspace. If you wish to 154 135 detect iSCSI boot parameters dynamically during system boot, say Y. 155 136 Otherwise, say N. 137 + 138 + config RASPBERRYPI_FIRMWARE 139 + tristate "Raspberry Pi Firmware Driver" 140 + depends on BCM2835_MBOX 141 + help 142 + This option enables support for communicating with the firmware on the 143 + Raspberry Pi. 156 144 157 145 config QCOM_SCM 158 146 bool

+3 -1
drivers/firmware/Makefile
··· 2 2 # Makefile for the linux kernel. 3 3 # 4 4 obj-$(CONFIG_ARM_PSCI_FW) += psci.o 5 + obj-$(CONFIG_ARM_SCPI_PROTOCOL) += arm_scpi.o 5 6 obj-$(CONFIG_DMI) += dmi_scan.o 6 7 obj-$(CONFIG_DMI_SYSFS) += dmi-sysfs.o 7 8 obj-$(CONFIG_EDD) += edd.o ··· 13 12 obj-$(CONFIG_ISCSI_IBFT_FIND) += iscsi_ibft_find.o 14 13 obj-$(CONFIG_ISCSI_IBFT) += iscsi_ibft.o 15 14 obj-$(CONFIG_FIRMWARE_MEMMAP) += memmap.o 15 + obj-$(CONFIG_RASPBERRYPI_FIRMWARE) += raspberrypi.o 16 16 obj-$(CONFIG_QCOM_SCM) += qcom_scm.o 17 17 obj-$(CONFIG_QCOM_SCM_64) += qcom_scm-64.o 18 18 obj-$(CONFIG_QCOM_SCM_32) += qcom_scm-32.o 19 - CFLAGS_qcom_scm-32.o :=$(call as-instr,.arch_extension sec,-DREQUIRES_SEC=1) 19 + CFLAGS_qcom_scm-32.o :=$(call as-instr,.arch armv7-a\n.arch_extension sec,-DREQUIRES_SEC=1) -march=armv7-a 20 20 21 21 obj-y += broadcom/ 22 22 obj-$(CONFIG_GOOGLE_FIRMWARE) += google/
+771
drivers/firmware/arm_scpi.c
/*
 * System Control and Power Interface (SCPI) Message Protocol driver
 *
 * The SCPI Message Protocol is used between the System Control Processor (SCP)
 * and the Application Processors (AP). The Message Handling Unit (MHU)
 * provides a mechanism for inter-processor communication between the SCP's
 * Cortex-M3 and the AP.
 *
 * The SCP offers control and management of the core/cluster power states,
 * DVFS for various power domains including the core/cluster, certain system
 * clock configuration, thermal sensors and many others.
 *
 * Copyright (C) 2015 ARM Ltd.
 *
 * This program is free software; you can redistribute it and/or modify it
 * under the terms and conditions of the GNU General Public License,
 * version 2, as published by the Free Software Foundation.
 *
 * This program is distributed in the hope it will be useful, but WITHOUT
 * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
 * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
 * more details.
 *
 * You should have received a copy of the GNU General Public License along
 * with this program. If not, see <http://www.gnu.org/licenses/>.
 */

#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt

#include <linux/bitmap.h>
#include <linux/device.h>
#include <linux/err.h>
#include <linux/export.h>
#include <linux/io.h>
#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/mailbox_client.h>
#include <linux/module.h>
#include <linux/of_address.h>
#include <linux/of_platform.h>
#include <linux/printk.h>
#include <linux/scpi_protocol.h>
#include <linux/slab.h>
#include <linux/sort.h>
#include <linux/spinlock.h>

#define CMD_ID_SHIFT		0
#define CMD_ID_MASK		0x7f
#define CMD_TOKEN_ID_SHIFT	8
#define CMD_TOKEN_ID_MASK	0xff
#define CMD_DATA_SIZE_SHIFT	16
#define CMD_DATA_SIZE_MASK	0x1ff
#define PACK_SCPI_CMD(cmd_id, tx_sz)				\
	((((cmd_id) & CMD_ID_MASK) << CMD_ID_SHIFT) |		\
	(((tx_sz) & CMD_DATA_SIZE_MASK) << CMD_DATA_SIZE_SHIFT))
#define ADD_SCPI_TOKEN(cmd, token)				\
	((cmd) |= (((token) & CMD_TOKEN_ID_MASK) << CMD_TOKEN_ID_SHIFT))

#define CMD_SIZE(cmd)	(((cmd) >> CMD_DATA_SIZE_SHIFT) & CMD_DATA_SIZE_MASK)
#define CMD_UNIQ_MASK	(CMD_TOKEN_ID_MASK << CMD_TOKEN_ID_SHIFT | CMD_ID_MASK)
#define CMD_XTRACT_UNIQ(cmd)	((cmd) & CMD_UNIQ_MASK)

#define SCPI_SLOT		0

#define MAX_DVFS_DOMAINS	8
#define MAX_DVFS_OPPS		8
#define DVFS_LATENCY(hdr)	(le32_to_cpu(hdr) >> 16)
#define DVFS_OPP_COUNT(hdr)	((le32_to_cpu(hdr) >> 8) & 0xff)

#define PROTOCOL_REV_MINOR_BITS	16
#define PROTOCOL_REV_MINOR_MASK	((1U << PROTOCOL_REV_MINOR_BITS) - 1)
#define PROTOCOL_REV_MAJOR(x)	((x) >> PROTOCOL_REV_MINOR_BITS)
#define PROTOCOL_REV_MINOR(x)	((x) & PROTOCOL_REV_MINOR_MASK)

#define FW_REV_MAJOR_BITS	24
#define FW_REV_MINOR_BITS	16
#define FW_REV_PATCH_MASK	((1U << FW_REV_MINOR_BITS) - 1)
#define FW_REV_MINOR_MASK	((1U << FW_REV_MAJOR_BITS) - 1)
#define FW_REV_MAJOR(x)		((x) >> FW_REV_MAJOR_BITS)
#define FW_REV_MINOR(x)		(((x) & FW_REV_MINOR_MASK) >> FW_REV_MINOR_BITS)
#define FW_REV_PATCH(x)		((x) & FW_REV_PATCH_MASK)

#define MAX_RX_TIMEOUT		(msecs_to_jiffies(20))

enum scpi_error_codes {
	SCPI_SUCCESS = 0,	/* Success */
	SCPI_ERR_PARAM = 1,	/* Invalid parameter(s) */
	SCPI_ERR_ALIGN = 2,	/* Invalid alignment */
	SCPI_ERR_SIZE = 3,	/* Invalid size */
	SCPI_ERR_HANDLER = 4,	/* Invalid handler/callback */
	SCPI_ERR_ACCESS = 5,	/* Invalid access/permission denied */
	SCPI_ERR_RANGE = 6,	/* Value out of range */
	SCPI_ERR_TIMEOUT = 7,	/* Timeout has occurred */
	SCPI_ERR_NOMEM = 8,	/* Invalid memory area or pointer */
	SCPI_ERR_PWRSTATE = 9,	/* Invalid power state */
	SCPI_ERR_SUPPORT = 10,	/* Not supported or disabled */
	SCPI_ERR_DEVICE = 11,	/* Device error */
	SCPI_ERR_BUSY = 12,	/* Device busy */
	SCPI_ERR_MAX
};

enum scpi_std_cmd {
	SCPI_CMD_INVALID		= 0x00,
	SCPI_CMD_SCPI_READY		= 0x01,
	SCPI_CMD_SCPI_CAPABILITIES	= 0x02,
	SCPI_CMD_SET_CSS_PWR_STATE	= 0x03,
	SCPI_CMD_GET_CSS_PWR_STATE	= 0x04,
	SCPI_CMD_SET_SYS_PWR_STATE	= 0x05,
	SCPI_CMD_SET_CPU_TIMER		= 0x06,
	SCPI_CMD_CANCEL_CPU_TIMER	= 0x07,
	SCPI_CMD_DVFS_CAPABILITIES	= 0x08,
	SCPI_CMD_GET_DVFS_INFO		= 0x09,
	SCPI_CMD_SET_DVFS		= 0x0a,
	SCPI_CMD_GET_DVFS		= 0x0b,
	SCPI_CMD_GET_DVFS_STAT		= 0x0c,
	SCPI_CMD_CLOCK_CAPABILITIES	= 0x0d,
	SCPI_CMD_GET_CLOCK_INFO		= 0x0e,
	SCPI_CMD_SET_CLOCK_VALUE	= 0x0f,
	SCPI_CMD_GET_CLOCK_VALUE	= 0x10,
	SCPI_CMD_PSU_CAPABILITIES	= 0x11,
	SCPI_CMD_GET_PSU_INFO		= 0x12,
	SCPI_CMD_SET_PSU		= 0x13,
	SCPI_CMD_GET_PSU		= 0x14,
	SCPI_CMD_SENSOR_CAPABILITIES	= 0x15,
	SCPI_CMD_SENSOR_INFO		= 0x16,
	SCPI_CMD_SENSOR_VALUE		= 0x17,
	SCPI_CMD_SENSOR_CFG_PERIODIC	= 0x18,
	SCPI_CMD_SENSOR_CFG_BOUNDS	= 0x19,
	SCPI_CMD_SENSOR_ASYNC_VALUE	= 0x1a,
	SCPI_CMD_SET_DEVICE_PWR_STATE	= 0x1b,
	SCPI_CMD_GET_DEVICE_PWR_STATE	= 0x1c,
	SCPI_CMD_COUNT
};

struct scpi_xfer {
	u32 slot; /* has to be first element */
	u32 cmd;
	u32 status;
	const void *tx_buf;
	void *rx_buf;
	unsigned int tx_len;
	unsigned int rx_len;
	struct list_head node;
	struct completion done;
};

struct scpi_chan {
	struct mbox_client cl;
	struct mbox_chan *chan;
	void __iomem *tx_payload;
	void __iomem *rx_payload;
	struct list_head rx_pending;
	struct list_head xfers_list;
	struct scpi_xfer *xfers;
	spinlock_t rx_lock; /* locking for the rx pending list */
	struct mutex xfers_lock;
	u8 token;
};

struct scpi_drvinfo {
	u32 protocol_version;
	u32 firmware_version;
	int num_chans;
	atomic_t next_chan;
	struct scpi_ops *scpi_ops;
	struct scpi_chan *channels;
	struct scpi_dvfs_info *dvfs[MAX_DVFS_DOMAINS];
};

/*
 * The SCP firmware only executes in little-endian mode, so any buffers
 * shared through SCPI should have their contents converted to little-endian
 */
struct scpi_shared_mem {
	__le32 command;
	__le32 status;
	u8 payload[0];
} __packed;

struct scp_capabilities {
	__le32 protocol_version;
	__le32 event_version;
	__le32 platform_version;
	__le32 commands[4];
} __packed;

struct clk_get_info {
	__le16 id;
	__le16 flags;
	__le32 min_rate;
	__le32 max_rate;
	u8 name[20];
} __packed;

struct clk_get_value {
	__le32 rate;
} __packed;

struct clk_set_value {
	__le16 id;
	__le16 reserved;
	__le32 rate;
} __packed;

struct dvfs_info {
	__le32 header;
	struct {
		__le32 freq;
		__le32 m_volt;
	} opps[MAX_DVFS_OPPS];
} __packed;

struct dvfs_get {
	u8 index;
} __packed;

struct dvfs_set {
	u8 domain;
	u8 index;
} __packed;

struct sensor_capabilities {
	__le16 sensors;
} __packed;

struct _scpi_sensor_info {
	__le16 sensor_id;
	u8 class;
	u8 trigger_type;
	char name[20];
};

struct sensor_value {
	__le32 val;
} __packed;

static struct scpi_drvinfo *scpi_info;

static int scpi_linux_errmap[SCPI_ERR_MAX] = {
	/* better than switch case as long as return value is continuous */
	0, /* SCPI_SUCCESS */
	-EINVAL, /* SCPI_ERR_PARAM */
	-ENOEXEC, /* SCPI_ERR_ALIGN */
	-EMSGSIZE, /* SCPI_ERR_SIZE */
	-EINVAL, /* SCPI_ERR_HANDLER */
	-EACCES, /* SCPI_ERR_ACCESS */
	-ERANGE, /* SCPI_ERR_RANGE */
	-ETIMEDOUT, /* SCPI_ERR_TIMEOUT */
	-ENOMEM, /* SCPI_ERR_NOMEM */
	-EINVAL, /* SCPI_ERR_PWRSTATE */
	-EOPNOTSUPP, /* SCPI_ERR_SUPPORT */
	-EIO, /* SCPI_ERR_DEVICE */
	-EBUSY, /* SCPI_ERR_BUSY */
};

static inline int scpi_to_linux_errno(int errno)
{
	if (errno >= SCPI_SUCCESS && errno < SCPI_ERR_MAX)
		return scpi_linux_errmap[errno];
	return -EIO;
}

static void scpi_process_cmd(struct scpi_chan *ch, u32 cmd)
{
	unsigned long flags;
	struct scpi_xfer *t, *match = NULL;

	spin_lock_irqsave(&ch->rx_lock, flags);
	if (list_empty(&ch->rx_pending)) {
		spin_unlock_irqrestore(&ch->rx_lock, flags);
		return;
	}

	list_for_each_entry(t, &ch->rx_pending, node)
		if (CMD_XTRACT_UNIQ(t->cmd) == CMD_XTRACT_UNIQ(cmd)) {
			list_del(&t->node);
			match = t;
			break;
		}
	/* check if wait_for_completion is in progress or timed-out */
	if (match && !completion_done(&match->done)) {
		struct scpi_shared_mem *mem = ch->rx_payload;
		unsigned int len = min(match->rx_len, CMD_SIZE(cmd));

		match->status = le32_to_cpu(mem->status);
		memcpy_fromio(match->rx_buf, mem->payload, len);
		if (match->rx_len > len)
			memset(match->rx_buf + len, 0, match->rx_len - len);
		complete(&match->done);
	}
	spin_unlock_irqrestore(&ch->rx_lock, flags);
}

static void scpi_handle_remote_msg(struct mbox_client *c, void *msg)
{
	struct scpi_chan *ch = container_of(c, struct scpi_chan, cl);
	struct scpi_shared_mem *mem = ch->rx_payload;
	u32 cmd = le32_to_cpu(mem->command);

	scpi_process_cmd(ch, cmd);
}

static void scpi_tx_prepare(struct mbox_client *c, void *msg)
{
	unsigned long flags;
	struct scpi_xfer *t = msg;
	struct scpi_chan *ch = container_of(c, struct scpi_chan, cl);
	struct scpi_shared_mem *mem = (struct scpi_shared_mem *)ch->tx_payload;

	if (t->tx_buf)
		memcpy_toio(mem->payload, t->tx_buf, t->tx_len);
	if (t->rx_buf) {
		if (!(++ch->token))
			++ch->token;
		ADD_SCPI_TOKEN(t->cmd, ch->token);
		spin_lock_irqsave(&ch->rx_lock, flags);
		list_add_tail(&t->node, &ch->rx_pending);
		spin_unlock_irqrestore(&ch->rx_lock, flags);
	}
	mem->command = cpu_to_le32(t->cmd);
}

static struct scpi_xfer *get_scpi_xfer(struct scpi_chan *ch)
{
	struct scpi_xfer *t;

	mutex_lock(&ch->xfers_lock);
	if (list_empty(&ch->xfers_list)) {
		mutex_unlock(&ch->xfers_lock);
		return NULL;
	}
	t = list_first_entry(&ch->xfers_list, struct scpi_xfer, node);
	list_del(&t->node);
	mutex_unlock(&ch->xfers_lock);
	return t;
}

static void put_scpi_xfer(struct scpi_xfer *t, struct scpi_chan *ch)
{
	mutex_lock(&ch->xfers_lock);
	list_add_tail(&t->node, &ch->xfers_list);
	mutex_unlock(&ch->xfers_lock);
}

static int scpi_send_message(u8 cmd, void *tx_buf, unsigned int tx_len,
			     void *rx_buf, unsigned int rx_len)
{
	int ret;
	u8 chan;
	struct scpi_xfer *msg;
	struct scpi_chan *scpi_chan;

	chan = atomic_inc_return(&scpi_info->next_chan) % scpi_info->num_chans;
	scpi_chan = scpi_info->channels + chan;

	msg = get_scpi_xfer(scpi_chan);
	if (!msg)
		return -ENOMEM;

	msg->slot = BIT(SCPI_SLOT);
	msg->cmd = PACK_SCPI_CMD(cmd, tx_len);
	msg->tx_buf = tx_buf;
	msg->tx_len = tx_len;
	msg->rx_buf = rx_buf;
	msg->rx_len = rx_len;
	init_completion(&msg->done);

	ret = mbox_send_message(scpi_chan->chan, msg);
	if (ret < 0 || !rx_buf)
		goto out;

	if (!wait_for_completion_timeout(&msg->done, MAX_RX_TIMEOUT))
		ret = -ETIMEDOUT;
	else
		/* first status word */
		ret = le32_to_cpu(msg->status);
out:
	if (ret < 0 && rx_buf) /* remove entry from the list if timed-out */
		scpi_process_cmd(scpi_chan, msg->cmd);

	put_scpi_xfer(msg, scpi_chan);
	/* SCPI error codes > 0, translate them to Linux scale */
	return ret > 0 ? scpi_to_linux_errno(ret) : ret;
}

static u32 scpi_get_version(void)
{
	return scpi_info->protocol_version;
}

static int
scpi_clk_get_range(u16 clk_id, unsigned long *min, unsigned long *max)
{
	int ret;
	struct clk_get_info clk;
	__le16 le_clk_id = cpu_to_le16(clk_id);

	ret = scpi_send_message(SCPI_CMD_GET_CLOCK_INFO, &le_clk_id,
				sizeof(le_clk_id), &clk, sizeof(clk));
	if (!ret) {
		*min = le32_to_cpu(clk.min_rate);
		*max = le32_to_cpu(clk.max_rate);
	}
	return ret;
}

static unsigned long scpi_clk_get_val(u16 clk_id)
{
	int ret;
	struct clk_get_value clk;
	__le16 le_clk_id = cpu_to_le16(clk_id);

	ret = scpi_send_message(SCPI_CMD_GET_CLOCK_VALUE, &le_clk_id,
				sizeof(le_clk_id), &clk, sizeof(clk));
	return ret ? ret : le32_to_cpu(clk.rate);
}

static int scpi_clk_set_val(u16 clk_id, unsigned long rate)
{
	int stat;
	struct clk_set_value clk = {
		.id = cpu_to_le16(clk_id),
		.rate = cpu_to_le32(rate)
	};

	return scpi_send_message(SCPI_CMD_SET_CLOCK_VALUE, &clk, sizeof(clk),
				 &stat, sizeof(stat));
}

static int scpi_dvfs_get_idx(u8 domain)
{
	int ret;
	struct dvfs_get dvfs;

	ret = scpi_send_message(SCPI_CMD_GET_DVFS, &domain, sizeof(domain),
				&dvfs, sizeof(dvfs));
	return ret ? ret : dvfs.index;
}

static int scpi_dvfs_set_idx(u8 domain, u8 index)
{
	int stat;
	struct dvfs_set dvfs = {domain, index};

	return scpi_send_message(SCPI_CMD_SET_DVFS, &dvfs, sizeof(dvfs),
				 &stat, sizeof(stat));
}

static int opp_cmp_func(const void *opp1, const void *opp2)
{
	const struct scpi_opp *t1 = opp1, *t2 = opp2;

	return t1->freq - t2->freq;
}

static struct scpi_dvfs_info *scpi_dvfs_get_info(u8 domain)
{
	struct scpi_dvfs_info *info;
	struct scpi_opp *opp;
	struct dvfs_info buf;
	int ret, i;

	if (domain >= MAX_DVFS_DOMAINS)
		return ERR_PTR(-EINVAL);

	if (scpi_info->dvfs[domain])	/* data already populated */
		return scpi_info->dvfs[domain];

	ret = scpi_send_message(SCPI_CMD_GET_DVFS_INFO, &domain, sizeof(domain),
				&buf, sizeof(buf));

	if (ret)
		return ERR_PTR(ret);

	info = kmalloc(sizeof(*info), GFP_KERNEL);
	if (!info)
		return ERR_PTR(-ENOMEM);

	info->count = DVFS_OPP_COUNT(buf.header);
	info->latency = DVFS_LATENCY(buf.header) * 1000; /* us to ns */

	info->opps = kcalloc(info->count, sizeof(*opp), GFP_KERNEL);
	if (!info->opps) {
		kfree(info);
		return ERR_PTR(-ENOMEM);
	}

	for (i = 0, opp = info->opps; i < info->count; i++, opp++) {
		opp->freq = le32_to_cpu(buf.opps[i].freq);
		opp->m_volt = le32_to_cpu(buf.opps[i].m_volt);
	}

	sort(info->opps, info->count, sizeof(*opp), opp_cmp_func, NULL);

	scpi_info->dvfs[domain] = info;
	return info;
}

static int scpi_sensor_get_capability(u16 *sensors)
{
	struct sensor_capabilities cap_buf;
	int ret;

	ret = scpi_send_message(SCPI_CMD_SENSOR_CAPABILITIES, NULL, 0, &cap_buf,
				sizeof(cap_buf));
	if (!ret)
		*sensors = le16_to_cpu(cap_buf.sensors);

	return ret;
}

static int scpi_sensor_get_info(u16 sensor_id, struct scpi_sensor_info *info)
{
	__le16 id = cpu_to_le16(sensor_id);
	struct _scpi_sensor_info _info;
	int ret;

	ret = scpi_send_message(SCPI_CMD_SENSOR_INFO, &id, sizeof(id),
				&_info, sizeof(_info));
	if (!ret) {
		memcpy(info, &_info, sizeof(*info));
		info->sensor_id = le16_to_cpu(_info.sensor_id);
	}

	return ret;
}

int scpi_sensor_get_value(u16 sensor, u32 *val)
{
	struct sensor_value buf;
	int ret;

	ret = scpi_send_message(SCPI_CMD_SENSOR_VALUE, &sensor, sizeof(sensor),
				&buf, sizeof(buf));
	if (!ret)
		*val = le32_to_cpu(buf.val);

	return ret;
}

static struct scpi_ops scpi_ops = {
	.get_version = scpi_get_version,
	.clk_get_range = scpi_clk_get_range,
	.clk_get_val = scpi_clk_get_val,
	.clk_set_val = scpi_clk_set_val,
	.dvfs_get_idx = scpi_dvfs_get_idx,
	.dvfs_set_idx = scpi_dvfs_set_idx,
	.dvfs_get_info = scpi_dvfs_get_info,
	.sensor_get_capability = scpi_sensor_get_capability,
	.sensor_get_info = scpi_sensor_get_info,
	.sensor_get_value = scpi_sensor_get_value,
};

struct scpi_ops *get_scpi_ops(void)
{
	return scpi_info ? scpi_info->scpi_ops : NULL;
}
EXPORT_SYMBOL_GPL(get_scpi_ops);

static int scpi_init_versions(struct scpi_drvinfo *info)
{
	int ret;
	struct scp_capabilities caps;

	ret = scpi_send_message(SCPI_CMD_SCPI_CAPABILITIES, NULL, 0,
				&caps, sizeof(caps));
	if (!ret) {
		info->protocol_version = le32_to_cpu(caps.protocol_version);
		info->firmware_version = le32_to_cpu(caps.platform_version);
	}
	return ret;
}

static ssize_t protocol_version_show(struct device *dev,
				     struct device_attribute *attr, char *buf)
{
	struct scpi_drvinfo *scpi_info = dev_get_drvdata(dev);

	return sprintf(buf, "%d.%d\n",
		       PROTOCOL_REV_MAJOR(scpi_info->protocol_version),
		       PROTOCOL_REV_MINOR(scpi_info->protocol_version));
}
static DEVICE_ATTR_RO(protocol_version);

static ssize_t firmware_version_show(struct device *dev,
				     struct device_attribute *attr, char *buf)
{
	struct scpi_drvinfo *scpi_info = dev_get_drvdata(dev);

	return sprintf(buf, "%d.%d.%d\n",
		       FW_REV_MAJOR(scpi_info->firmware_version),
		       FW_REV_MINOR(scpi_info->firmware_version),
		       FW_REV_PATCH(scpi_info->firmware_version));
}
static DEVICE_ATTR_RO(firmware_version);

static struct attribute *versions_attrs[] = {
	&dev_attr_firmware_version.attr,
	&dev_attr_protocol_version.attr,
	NULL,
};
ATTRIBUTE_GROUPS(versions);

static void
scpi_free_channels(struct device *dev, struct scpi_chan *pchan, int count)
{
	int i;

	for (i = 0; i < count && pchan->chan; i++, pchan++) {
		mbox_free_channel(pchan->chan);
		devm_kfree(dev, pchan->xfers);
		devm_iounmap(dev, pchan->rx_payload);
	}
}

static int scpi_remove(struct platform_device *pdev)
{
	int i;
	struct device *dev = &pdev->dev;
	struct scpi_drvinfo *info = platform_get_drvdata(pdev);

	scpi_info = NULL; /* stop exporting SCPI ops through get_scpi_ops */

	of_platform_depopulate(dev);
	sysfs_remove_groups(&dev->kobj, versions_groups);
	scpi_free_channels(dev, info->channels, info->num_chans);
	platform_set_drvdata(pdev, NULL);

	for (i = 0; i < MAX_DVFS_DOMAINS && info->dvfs[i]; i++) {
		kfree(info->dvfs[i]->opps);
		kfree(info->dvfs[i]);
	}
	devm_kfree(dev, info->channels);
	devm_kfree(dev, info);

	return 0;
}

#define MAX_SCPI_XFERS		10
static int scpi_alloc_xfer_list(struct device *dev, struct scpi_chan *ch)
{
	int i;
	struct scpi_xfer *xfers;

	xfers = devm_kzalloc(dev, MAX_SCPI_XFERS * sizeof(*xfers), GFP_KERNEL);
	if (!xfers)
		return -ENOMEM;

	ch->xfers = xfers;
	for (i = 0; i < MAX_SCPI_XFERS; i++, xfers++)
		list_add_tail(&xfers->node, &ch->xfers_list);
	return 0;
}

static int scpi_probe(struct platform_device *pdev)
{
	int count, idx, ret;
	struct resource res;
	struct scpi_chan *scpi_chan;
	struct device *dev = &pdev->dev;
	struct device_node *np = dev->of_node;

	scpi_info = devm_kzalloc(dev, sizeof(*scpi_info), GFP_KERNEL);
	if (!scpi_info)
		return -ENOMEM;

	count = of_count_phandle_with_args(np, "mboxes", "#mbox-cells");
	if (count < 0) {
		dev_err(dev, "no mboxes property in '%s'\n", np->full_name);
		return -ENODEV;
	}

	scpi_chan = devm_kcalloc(dev, count, sizeof(*scpi_chan), GFP_KERNEL);
	if (!scpi_chan)
		return -ENOMEM;

	for (idx = 0; idx < count; idx++) {
		resource_size_t size;
		struct scpi_chan *pchan = scpi_chan + idx;
		struct mbox_client *cl = &pchan->cl;
		struct device_node *shmem = of_parse_phandle(np, "shmem", idx);

		if (of_address_to_resource(shmem, 0, &res)) {
			dev_err(dev, "failed to get SCPI payload mem resource\n");
			ret = -EINVAL;
			goto err;
		}

		size = resource_size(&res);
		pchan->rx_payload = devm_ioremap(dev, res.start, size);
		if (!pchan->rx_payload) {
			dev_err(dev, "failed to ioremap SCPI payload\n");
			ret = -EADDRNOTAVAIL;
			goto err;
		}
		pchan->tx_payload = pchan->rx_payload + (size >> 1);

		cl->dev = dev;
		cl->rx_callback = scpi_handle_remote_msg;
		cl->tx_prepare = scpi_tx_prepare;
		cl->tx_block = true;
		cl->tx_tout = 50;
		cl->knows_txdone = false; /* controller can't ack */

		INIT_LIST_HEAD(&pchan->rx_pending);
		INIT_LIST_HEAD(&pchan->xfers_list);
		spin_lock_init(&pchan->rx_lock);
		mutex_init(&pchan->xfers_lock);

		ret = scpi_alloc_xfer_list(dev, pchan);
		if (!ret) {
			pchan->chan = mbox_request_channel(cl, idx);
			if (!IS_ERR(pchan->chan))
				continue;
			ret = PTR_ERR(pchan->chan);
			if (ret != -EPROBE_DEFER)
				dev_err(dev, "failed to get channel%d err %d\n",
					idx, ret);
		}
err:
		scpi_free_channels(dev, scpi_chan, idx);
		scpi_info = NULL;
		return ret;
	}

	scpi_info->channels = scpi_chan;
	scpi_info->num_chans = count;
	platform_set_drvdata(pdev, scpi_info);

	ret = scpi_init_versions(scpi_info);
	if (ret) {
		dev_err(dev, "incorrect or no SCP firmware found\n");
		scpi_remove(pdev);
		return ret;
	}

	_dev_info(dev, "SCP Protocol %d.%d Firmware %d.%d.%d version\n",
		  PROTOCOL_REV_MAJOR(scpi_info->protocol_version),
		  PROTOCOL_REV_MINOR(scpi_info->protocol_version),
		  FW_REV_MAJOR(scpi_info->firmware_version),
		  FW_REV_MINOR(scpi_info->firmware_version),
		  FW_REV_PATCH(scpi_info->firmware_version));
	scpi_info->scpi_ops = &scpi_ops;

	ret = sysfs_create_groups(&dev->kobj, versions_groups);
	if (ret)
		dev_err(dev, "unable to create sysfs version group\n");

	return of_platform_populate(dev->of_node, NULL, NULL, dev);
}

static const struct of_device_id scpi_of_match[] = {
	{.compatible = "arm,scpi"},
	{},
};

MODULE_DEVICE_TABLE(of, scpi_of_match);

static struct platform_driver scpi_driver = {
	.driver = {
		.name = "scpi_protocol",
		.of_match_table = scpi_of_match,
	},
	.probe = scpi_probe,
	.remove = scpi_remove,
};
module_platform_driver(scpi_driver);

MODULE_AUTHOR("Sudeep Holla <sudeep.holla@arm.com>");
MODULE_DESCRIPTION("ARM SCPI mailbox protocol driver");
MODULE_LICENSE("GPL v2");
+98 -10
drivers/firmware/psci.c
···
  #include <linux/printk.h>
  #include <linux/psci.h>
  #include <linux/reboot.h>
+ #include <linux/suspend.h>

  #include <uapi/linux/psci.h>

  #include <asm/cputype.h>
  #include <asm/system_misc.h>
  #include <asm/smp_plat.h>
+ #include <asm/suspend.h>

  /*
   * While a 64-bit OS can make calls with SMC32 calling conventions, for some
-  * calls it is necessary to use SMC64 to pass or return 64-bit values. For such
-  * calls PSCI_0_2_FN_NATIVE(x) will choose the appropriate (native-width)
-  * function ID.
+  * calls it is necessary to use SMC64 to pass or return 64-bit values.
+  * For such calls PSCI_FN_NATIVE(version, name) will choose the appropriate
+  * (native-width) function ID.
   */
  #ifdef CONFIG_64BIT
- #define PSCI_0_2_FN_NATIVE(name)	PSCI_0_2_FN64_##name
+ #define PSCI_FN_NATIVE(version, name)	PSCI_##version##_FN64_##name
  #else
- #define PSCI_0_2_FN_NATIVE(name)	PSCI_0_2_FN_##name
+ #define PSCI_FN_NATIVE(version, name)	PSCI_##version##_FN_##name
  #endif
···
  static u32 psci_function_id[PSCI_FN_MAX];

+ #define PSCI_0_2_POWER_STATE_MASK		\
+ 		(PSCI_0_2_POWER_STATE_ID_MASK |	\
+ 		PSCI_0_2_POWER_STATE_TYPE_MASK |	\
+ 		PSCI_0_2_POWER_STATE_AFFL_MASK)
+
+ #define PSCI_1_0_EXT_POWER_STATE_MASK		\
+ 		(PSCI_1_0_EXT_POWER_STATE_ID_MASK |	\
+ 		PSCI_1_0_EXT_POWER_STATE_TYPE_MASK)
+
+ static u32 psci_cpu_suspend_feature;
+
+ static inline bool psci_has_ext_power_state(void)
+ {
+ 	return psci_cpu_suspend_feature &
+ 				PSCI_1_0_FEATURES_CPU_SUSPEND_PF_MASK;
+ }
+
+ bool psci_power_state_loses_context(u32 state)
+ {
+ 	const u32 mask = psci_has_ext_power_state() ?
+ 					PSCI_1_0_EXT_POWER_STATE_TYPE_MASK :
+ 					PSCI_0_2_POWER_STATE_TYPE_MASK;
+
+ 	return state & mask;
+ }
+
+ bool psci_power_state_is_valid(u32 state)
+ {
+ 	const u32 valid_mask = psci_has_ext_power_state() ?
+ 			PSCI_1_0_EXT_POWER_STATE_MASK :
+ 			PSCI_0_2_POWER_STATE_MASK;
+
+ 	return !(state & ~valid_mask);
+ }
+
  static int psci_to_linux_errno(int errno)
  {
  	switch (errno) {
···
  	case PSCI_RET_NOT_SUPPORTED:
  		return -EOPNOTSUPP;
  	case PSCI_RET_INVALID_PARAMS:
+ 	case PSCI_RET_INVALID_ADDRESS:
  		return -EINVAL;
  	case PSCI_RET_DENIED:
  		return -EPERM;
···
  static int psci_affinity_info(unsigned long target_affinity,
  		unsigned long lowest_affinity_level)
  {
- 	return invoke_psci_fn(PSCI_0_2_FN_NATIVE(AFFINITY_INFO),
+ 	return invoke_psci_fn(PSCI_FN_NATIVE(0_2, AFFINITY_INFO),
  			      target_affinity, lowest_affinity_level, 0);
  }
···
  static unsigned long psci_migrate_info_up_cpu(void)
  {
- 	return invoke_psci_fn(PSCI_0_2_FN_NATIVE(MIGRATE_INFO_UP_CPU),
+ 	return invoke_psci_fn(PSCI_FN_NATIVE(0_2, MIGRATE_INFO_UP_CPU),
  			      0, 0, 0);
  }
···
  static void psci_sys_poweroff(void)
  {
  	invoke_psci_fn(PSCI_0_2_FN_SYSTEM_OFF, 0, 0, 0);
+ }
+
+ static int __init psci_features(u32 psci_func_id)
+ {
+ 	return invoke_psci_fn(PSCI_1_0_FN_PSCI_FEATURES,
+ 			      psci_func_id, 0, 0);
+ }
+
+ static int psci_system_suspend(unsigned long unused)
+ {
+ 	return invoke_psci_fn(PSCI_FN_NATIVE(1_0, SYSTEM_SUSPEND),
+ 			      virt_to_phys(cpu_resume), 0, 0);
+ }
+
+ static int psci_system_suspend_enter(suspend_state_t state)
+ {
+ 	return cpu_suspend(0, psci_system_suspend);
+ }
+
+ static const struct platform_suspend_ops psci_suspend_ops = {
+ 	.valid          = suspend_valid_only_mem,
+ 	.enter          = psci_system_suspend_enter,
+ };
+
+ static void __init psci_init_system_suspend(void)
+ {
+ 	int ret;
+
+ 	if (!IS_ENABLED(CONFIG_SUSPEND))
+ 		return;
+
+ 	ret = psci_features(PSCI_FN_NATIVE(1_0, SYSTEM_SUSPEND));
+
+ 	if (ret != PSCI_RET_NOT_SUPPORTED)
+ 		suspend_set_ops(&psci_suspend_ops);
+ }
+
+ static void __init psci_init_cpu_suspend(void)
+ {
+ 	int feature = psci_features(psci_function_id[PSCI_FN_CPU_SUSPEND]);
+
+ 	if (feature != PSCI_RET_NOT_SUPPORTED)
+ 		psci_cpu_suspend_feature = feature;
  }

  /*
···
  static void __init psci_0_2_set_functions(void)
  {
  	pr_info("Using standard PSCI v0.2 function IDs\n");
- 	psci_function_id[PSCI_FN_CPU_SUSPEND] = PSCI_0_2_FN_NATIVE(CPU_SUSPEND);
+ 	psci_function_id[PSCI_FN_CPU_SUSPEND] =
+ 					PSCI_FN_NATIVE(0_2, CPU_SUSPEND);
  	psci_ops.cpu_suspend = psci_cpu_suspend;

  	psci_function_id[PSCI_FN_CPU_OFF] = PSCI_0_2_FN_CPU_OFF;
  	psci_ops.cpu_off = psci_cpu_off;

- 	psci_function_id[PSCI_FN_CPU_ON] = PSCI_0_2_FN_NATIVE(CPU_ON);
+ 	psci_function_id[PSCI_FN_CPU_ON] = PSCI_FN_NATIVE(0_2, CPU_ON);
  	psci_ops.cpu_on = psci_cpu_on;

- 	psci_function_id[PSCI_FN_MIGRATE] = PSCI_0_2_FN_NATIVE(MIGRATE);
+ 	psci_function_id[PSCI_FN_MIGRATE] = PSCI_FN_NATIVE(0_2, MIGRATE);
  	psci_ops.migrate = psci_migrate;

  	psci_ops.affinity_info = psci_affinity_info;
···
  		psci_0_2_set_functions();

  	psci_init_migrate();
+
+ 	if (PSCI_VERSION_MAJOR(ver) >= 1) {
+ 		psci_init_cpu_suspend();
+ 		psci_init_system_suspend();
+ 	}

  	return 0;
  }
···
  static const struct of_device_id const psci_of_match[] __initconst = {
  	{ .compatible = "arm,psci",	.data = psci_0_1_init},
  	{ .compatible = "arm,psci-0.2",	.data = psci_0_2_init},
+ 	{ .compatible = "arm,psci-1.0",	.data = psci_0_2_init},
  	{},
  };
+3 -3
drivers/firmware/qcom_scm-32.c
···
  int __qcom_scm_is_call_available(u32 svc_id, u32 cmd_id)
  {
  	int ret;
- 	u32 svc_cmd = (svc_id << 10) | cmd_id;
- 	u32 ret_val = 0;
+ 	__le32 svc_cmd = cpu_to_le32((svc_id << 10) | cmd_id);
+ 	__le32 ret_val = 0;

  	ret = qcom_scm_call(QCOM_SCM_SVC_INFO, QCOM_IS_CALL_AVAIL_CMD, &svc_cmd,
  			    sizeof(svc_cmd), &ret_val, sizeof(ret_val));
  	if (ret)
  		return ret;

- 	return ret_val;
+ 	return le32_to_cpu(ret_val);
  }

  int __qcom_scm_hdcp_req(struct qcom_scm_hdcp_req *req, u32 req_cnt, u32 *resp)
+260
drivers/firmware/raspberrypi.c
/*
 * Defines interfaces for interacting with the Raspberry Pi firmware's
 * property channel.
 *
 * Copyright © 2015 Broadcom
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 */

#include <linux/dma-mapping.h>
#include <linux/mailbox_client.h>
#include <linux/module.h>
#include <linux/of_platform.h>
#include <linux/platform_device.h>
#include <soc/bcm2835/raspberrypi-firmware.h>

#define MBOX_MSG(chan, data28)		(((data28) & ~0xf) | ((chan) & 0xf))
#define MBOX_CHAN(msg)			((msg) & 0xf)
#define MBOX_DATA28(msg)		((msg) & ~0xf)
#define MBOX_CHAN_PROPERTY		8

struct rpi_firmware {
	struct mbox_client cl;
	struct mbox_chan *chan; /* The property channel. */
	struct completion c;
	u32 enabled;
};

static DEFINE_MUTEX(transaction_lock);

static void response_callback(struct mbox_client *cl, void *msg)
{
	struct rpi_firmware *fw = container_of(cl, struct rpi_firmware, cl);
	complete(&fw->c);
}

/*
 * Sends a request to the firmware through the BCM2835 mailbox driver,
 * and synchronously waits for the reply.
 */
static int
rpi_firmware_transaction(struct rpi_firmware *fw, u32 chan, u32 data)
{
	u32 message = MBOX_MSG(chan, data);
	int ret;

	WARN_ON(data & 0xf);

	mutex_lock(&transaction_lock);
	reinit_completion(&fw->c);
	ret = mbox_send_message(fw->chan, &message);
	if (ret >= 0) {
		wait_for_completion(&fw->c);
		ret = 0;
	} else {
		dev_err(fw->cl.dev, "mbox_send_message returned %d\n", ret);
	}
	mutex_unlock(&transaction_lock);

	return ret;
}

/**
 * rpi_firmware_property_list - Submit firmware property list
 * @fw:		Pointer to firmware structure from rpi_firmware_get().
 * @data:	Buffer holding tags.
 * @tag_size:	Size of tags buffer.
 *
 * Submits a set of concatenated tags to the VPU firmware through the
 * mailbox property interface.
 *
 * The buffer header and the ending tag are added by this function and
 * don't need to be supplied, just the actual tags for your operation.
 * See struct rpi_firmware_property_tag_header for the per-tag
 * structure.
 */
int rpi_firmware_property_list(struct rpi_firmware *fw,
			       void *data, size_t tag_size)
{
	size_t size = tag_size + 12;
	u32 *buf;
	dma_addr_t bus_addr;
	int ret;

	/* Packets are processed a dword at a time. */
	if (size & 3)
		return -EINVAL;

	buf = dma_alloc_coherent(fw->cl.dev, PAGE_ALIGN(size), &bus_addr,
				 GFP_ATOMIC);
	if (!buf)
		return -ENOMEM;

	/* The firmware will error out without parsing in this case. */
	WARN_ON(size >= 1024 * 1024);

	buf[0] = size;
	buf[1] = RPI_FIRMWARE_STATUS_REQUEST;
	memcpy(&buf[2], data, tag_size);
	buf[size / 4 - 1] = RPI_FIRMWARE_PROPERTY_END;
	wmb();

	ret = rpi_firmware_transaction(fw, MBOX_CHAN_PROPERTY, bus_addr);

	rmb();
	memcpy(data, &buf[2], tag_size);
	if (ret == 0 && buf[1] != RPI_FIRMWARE_STATUS_SUCCESS) {
		/*
		 * The tag name here might not be the one causing the
		 * error, if there were multiple tags in the request.
		 * But single-tag is the most common, so go with it.
		 */
		dev_err(fw->cl.dev, "Request 0x%08x returned status 0x%08x\n",
			buf[2], buf[1]);
		ret = -EINVAL;
	}

	dma_free_coherent(fw->cl.dev, PAGE_ALIGN(size), buf, bus_addr);

	return ret;
}
EXPORT_SYMBOL_GPL(rpi_firmware_property_list);

/**
 * rpi_firmware_property - Submit single firmware property
 * @fw:		Pointer to firmware structure from rpi_firmware_get().
 * @tag:	One of enum_mbox_property_tag.
 * @tag_data:	Tag data buffer.
 * @buf_size:	Buffer size.
 *
 * Submits a single tag to the VPU firmware through the mailbox
 * property interface.
 *
 * This is a convenience wrapper around
 * rpi_firmware_property_list() to avoid some of the
 * boilerplate in property calls.
 */
int rpi_firmware_property(struct rpi_firmware *fw,
			  u32 tag, void *tag_data, size_t buf_size)
{
	/* Single tags are very small (generally 8 bytes), so the
	 * stack should be safe.
	 */
	u8 data[buf_size + sizeof(struct rpi_firmware_property_tag_header)];
	struct rpi_firmware_property_tag_header *header =
		(struct rpi_firmware_property_tag_header *)data;
	int ret;

	header->tag = tag;
	header->buf_size = buf_size;
	header->req_resp_size = 0;
	memcpy(data + sizeof(struct rpi_firmware_property_tag_header),
	       tag_data, buf_size);

	ret = rpi_firmware_property_list(fw, &data, sizeof(data));
	memcpy(tag_data,
	       data + sizeof(struct rpi_firmware_property_tag_header),
	       buf_size);

	return ret;
}
EXPORT_SYMBOL_GPL(rpi_firmware_property);

static void
rpi_firmware_print_firmware_revision(struct rpi_firmware *fw)
{
	u32 packet;
	int ret = rpi_firmware_property(fw,
					RPI_FIRMWARE_GET_FIRMWARE_REVISION,
					&packet, sizeof(packet));

	if (ret == 0) {
		struct tm tm;

		time_to_tm(packet, 0, &tm);

		dev_info(fw->cl.dev,
			 "Attached to firmware from %04ld-%02d-%02d %02d:%02d\n",
			 tm.tm_year + 1900, tm.tm_mon + 1, tm.tm_mday,
			 tm.tm_hour, tm.tm_min);
	}
}

static int rpi_firmware_probe(struct platform_device *pdev)
{
	struct device *dev = &pdev->dev;
	struct rpi_firmware *fw;

	fw = devm_kzalloc(dev, sizeof(*fw), GFP_KERNEL);
	if (!fw)
		return -ENOMEM;

	fw->cl.dev = dev;
	fw->cl.rx_callback = response_callback;
	fw->cl.tx_block = true;

	fw->chan = mbox_request_channel(&fw->cl, 0);
	if (IS_ERR(fw->chan)) {
		int ret = PTR_ERR(fw->chan);

		if (ret != -EPROBE_DEFER)
			dev_err(dev, "Failed to get mbox channel: %d\n", ret);
		return ret;
	}

	init_completion(&fw->c);

	platform_set_drvdata(pdev, fw);

	rpi_firmware_print_firmware_revision(fw);

	return 0;
}

static int rpi_firmware_remove(struct platform_device
*pdev) 217 + { 218 + struct rpi_firmware *fw = platform_get_drvdata(pdev); 219 + 220 + mbox_free_channel(fw->chan); 221 + 222 + return 0; 223 + } 224 + 225 + /** 226 + * rpi_firmware_get - Get pointer to rpi_firmware structure. 227 + * @firmware_node: Pointer to the firmware Device Tree node. 228 + * 229 + * Returns NULL if the firmware device is not ready. 230 + */ 231 + struct rpi_firmware *rpi_firmware_get(struct device_node *firmware_node) 232 + { 233 + struct platform_device *pdev = of_find_device_by_node(firmware_node); 234 + 235 + if (!pdev) 236 + return NULL; 237 + 238 + return platform_get_drvdata(pdev); 239 + } 240 + EXPORT_SYMBOL_GPL(rpi_firmware_get); 241 + 242 + static const struct of_device_id rpi_firmware_of_match[] = { 243 + { .compatible = "raspberrypi,bcm2835-firmware", }, 244 + {}, 245 + }; 246 + MODULE_DEVICE_TABLE(of, rpi_firmware_of_match); 247 + 248 + static struct platform_driver rpi_firmware_driver = { 249 + .driver = { 250 + .name = "raspberrypi-firmware", 251 + .of_match_table = rpi_firmware_of_match, 252 + }, 253 + .probe = rpi_firmware_probe, 254 + .remove = rpi_firmware_remove, 255 + }; 256 + module_platform_driver(rpi_firmware_driver); 257 + 258 + MODULE_AUTHOR("Eric Anholt <eric@anholt.net>"); 259 + MODULE_DESCRIPTION("Raspberry Pi firmware driver"); 260 + MODULE_LICENSE("GPL v2");
+8
drivers/hwmon/Kconfig
··· 321 321 Say Y here if you have an applicable laptop and want to experience 322 322 the awesome power of applesmc. 323 323 324 + config SENSORS_ARM_SCPI 325 + tristate "ARM SCPI Sensors" 326 + depends on ARM_SCPI_PROTOCOL 327 + help 328 + This driver provides support for temperature, voltage, current 329 + and power sensors available on ARM Ltd's SCP based platforms. The 330 + actual number and type of sensors exported depend on the platform. 331 + 324 332 config SENSORS_ASB100 325 333 tristate "Asus ASB100 Bach" 326 334 depends on X86 && I2C
+1
drivers/hwmon/Makefile
··· 44 44 obj-$(CONFIG_SENSORS_ADT7470) += adt7470.o 45 45 obj-$(CONFIG_SENSORS_ADT7475) += adt7475.o 46 46 obj-$(CONFIG_SENSORS_APPLESMC) += applesmc.o 47 + obj-$(CONFIG_SENSORS_ARM_SCPI) += scpi-hwmon.o 47 48 obj-$(CONFIG_SENSORS_ASC7621) += asc7621.o 48 49 obj-$(CONFIG_SENSORS_ATXP1) += atxp1.o 49 50 obj-$(CONFIG_SENSORS_CORETEMP) += coretemp.o
+288
drivers/hwmon/scpi-hwmon.c
··· 1 + /* 2 + * System Control and Power Interface(SCPI) based hwmon sensor driver 3 + * 4 + * Copyright (C) 2015 ARM Ltd. 5 + * Punit Agrawal <punit.agrawal@arm.com> 6 + * 7 + * This program is free software; you can redistribute it and/or modify 8 + * it under the terms of the GNU General Public License version 2 as 9 + * published by the Free Software Foundation. 10 + * 11 + * This program is distributed "as is" WITHOUT ANY WARRANTY of any 12 + * kind, whether express or implied; without even the implied warranty 13 + * of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 14 + * GNU General Public License for more details. 15 + */ 16 + 17 + #include <linux/hwmon.h> 18 + #include <linux/module.h> 19 + #include <linux/platform_device.h> 20 + #include <linux/scpi_protocol.h> 21 + #include <linux/slab.h> 22 + #include <linux/sysfs.h> 23 + #include <linux/thermal.h> 24 + 25 + struct sensor_data { 26 + struct scpi_sensor_info info; 27 + struct device_attribute dev_attr_input; 28 + struct device_attribute dev_attr_label; 29 + char input[20]; 30 + char label[20]; 31 + }; 32 + 33 + struct scpi_thermal_zone { 34 + struct list_head list; 35 + int sensor_id; 36 + struct scpi_sensors *scpi_sensors; 37 + struct thermal_zone_device *tzd; 38 + }; 39 + 40 + struct scpi_sensors { 41 + struct scpi_ops *scpi_ops; 42 + struct sensor_data *data; 43 + struct list_head thermal_zones; 44 + struct attribute **attrs; 45 + struct attribute_group group; 46 + const struct attribute_group *groups[2]; 47 + }; 48 + 49 + static int scpi_read_temp(void *dev, int *temp) 50 + { 51 + struct scpi_thermal_zone *zone = dev; 52 + struct scpi_sensors *scpi_sensors = zone->scpi_sensors; 53 + struct scpi_ops *scpi_ops = scpi_sensors->scpi_ops; 54 + struct sensor_data *sensor = &scpi_sensors->data[zone->sensor_id]; 55 + u32 value; 56 + int ret; 57 + 58 + ret = scpi_ops->sensor_get_value(sensor->info.sensor_id, &value); 59 + if (ret) 60 + return ret; 61 + 62 + *temp = value; 63 + return 0; 64 + } 
65 + 66 + /* hwmon callback functions */ 67 + static ssize_t 68 + scpi_show_sensor(struct device *dev, struct device_attribute *attr, char *buf) 69 + { 70 + struct scpi_sensors *scpi_sensors = dev_get_drvdata(dev); 71 + struct scpi_ops *scpi_ops = scpi_sensors->scpi_ops; 72 + struct sensor_data *sensor; 73 + u32 value; 74 + int ret; 75 + 76 + sensor = container_of(attr, struct sensor_data, dev_attr_input); 77 + 78 + ret = scpi_ops->sensor_get_value(sensor->info.sensor_id, &value); 79 + if (ret) 80 + return ret; 81 + 82 + return sprintf(buf, "%u\n", value); 83 + } 84 + 85 + static ssize_t 86 + scpi_show_label(struct device *dev, struct device_attribute *attr, char *buf) 87 + { 88 + struct sensor_data *sensor; 89 + 90 + sensor = container_of(attr, struct sensor_data, dev_attr_label); 91 + 92 + return sprintf(buf, "%s\n", sensor->info.name); 93 + } 94 + 95 + static void 96 + unregister_thermal_zones(struct platform_device *pdev, 97 + struct scpi_sensors *scpi_sensors) 98 + { 99 + struct list_head *pos; 100 + 101 + list_for_each(pos, &scpi_sensors->thermal_zones) { 102 + struct scpi_thermal_zone *zone; 103 + 104 + zone = list_entry(pos, struct scpi_thermal_zone, list); 105 + thermal_zone_of_sensor_unregister(&pdev->dev, zone->tzd); 106 + } 107 + } 108 + 109 + static struct thermal_zone_of_device_ops scpi_sensor_ops = { 110 + .get_temp = scpi_read_temp, 111 + }; 112 + 113 + static int scpi_hwmon_probe(struct platform_device *pdev) 114 + { 115 + u16 nr_sensors, i; 116 + int num_temp = 0, num_volt = 0, num_current = 0, num_power = 0; 117 + struct scpi_ops *scpi_ops; 118 + struct device *hwdev, *dev = &pdev->dev; 119 + struct scpi_sensors *scpi_sensors; 120 + int ret; 121 + 122 + scpi_ops = get_scpi_ops(); 123 + if (!scpi_ops) 124 + return -EPROBE_DEFER; 125 + 126 + ret = scpi_ops->sensor_get_capability(&nr_sensors); 127 + if (ret) 128 + return ret; 129 + 130 + if (!nr_sensors) 131 + return -ENODEV; 132 + 133 + scpi_sensors = devm_kzalloc(dev, sizeof(*scpi_sensors), 
GFP_KERNEL); 134 + if (!scpi_sensors) 135 + return -ENOMEM; 136 + 137 + scpi_sensors->data = devm_kcalloc(dev, nr_sensors, 138 + sizeof(*scpi_sensors->data), GFP_KERNEL); 139 + if (!scpi_sensors->data) 140 + return -ENOMEM; 141 + 142 + scpi_sensors->attrs = devm_kcalloc(dev, (nr_sensors * 2) + 1, 143 + sizeof(*scpi_sensors->attrs), GFP_KERNEL); 144 + if (!scpi_sensors->attrs) 145 + return -ENOMEM; 146 + 147 + scpi_sensors->scpi_ops = scpi_ops; 148 + 149 + for (i = 0; i < nr_sensors; i++) { 150 + struct sensor_data *sensor = &scpi_sensors->data[i]; 151 + 152 + ret = scpi_ops->sensor_get_info(i, &sensor->info); 153 + if (ret) 154 + return ret; 155 + 156 + switch (sensor->info.class) { 157 + case TEMPERATURE: 158 + snprintf(sensor->input, sizeof(sensor->input), 159 + "temp%d_input", num_temp + 1); 160 + snprintf(sensor->label, sizeof(sensor->label), 161 + "temp%d_label", num_temp + 1); 162 + num_temp++; 163 + break; 164 + case VOLTAGE: 165 + snprintf(sensor->input, sizeof(sensor->input), 166 + "in%d_input", num_volt); 167 + snprintf(sensor->label, sizeof(sensor->label), 168 + "in%d_label", num_volt); 169 + num_volt++; 170 + break; 171 + case CURRENT: 172 + snprintf(sensor->input, sizeof(sensor->input), 173 + "curr%d_input", num_current + 1); 174 + snprintf(sensor->label, sizeof(sensor->label), 175 + "curr%d_label", num_current + 1); 176 + num_current++; 177 + break; 178 + case POWER: 179 + snprintf(sensor->input, sizeof(sensor->input), 180 + "power%d_input", num_power + 1); 181 + snprintf(sensor->label, sizeof(sensor->label), 182 + "power%d_label", num_power + 1); 183 + num_power++; 184 + break; 185 + default: 186 + break; 187 + } 188 + 189 + sensor->dev_attr_input.attr.mode = S_IRUGO; 190 + sensor->dev_attr_input.show = scpi_show_sensor; 191 + sensor->dev_attr_input.attr.name = sensor->input; 192 + 193 + sensor->dev_attr_label.attr.mode = S_IRUGO; 194 + sensor->dev_attr_label.show = scpi_show_label; 195 + sensor->dev_attr_label.attr.name = sensor->label; 196 + 197 + 
scpi_sensors->attrs[i << 1] = &sensor->dev_attr_input.attr; 198 + scpi_sensors->attrs[(i << 1) + 1] = &sensor->dev_attr_label.attr; 199 + 200 + sysfs_attr_init(scpi_sensors->attrs[i << 1]); 201 + sysfs_attr_init(scpi_sensors->attrs[(i << 1) + 1]); 202 + } 203 + 204 + scpi_sensors->group.attrs = scpi_sensors->attrs; 205 + scpi_sensors->groups[0] = &scpi_sensors->group; 206 + 207 + platform_set_drvdata(pdev, scpi_sensors); 208 + 209 + hwdev = devm_hwmon_device_register_with_groups(dev, 210 + "scpi_sensors", scpi_sensors, scpi_sensors->groups); 211 + 212 + if (IS_ERR(hwdev)) 213 + return PTR_ERR(hwdev); 214 + 215 + /* 216 + * Register the temperature sensors with the thermal framework 217 + * to allow their usage in setting up the thermal zones from 218 + * device tree. 219 + * 220 + * NOTE: Not all temperature sensors may be used for thermal 221 + * control. 222 + */ 223 + INIT_LIST_HEAD(&scpi_sensors->thermal_zones); 224 + for (i = 0; i < nr_sensors; i++) { 225 + struct sensor_data *sensor = &scpi_sensors->data[i]; 226 + struct scpi_thermal_zone *zone; 227 + 228 + if (sensor->info.class != TEMPERATURE) 229 + continue; 230 + 231 + zone = devm_kzalloc(dev, sizeof(*zone), GFP_KERNEL); 232 + if (!zone) { 233 + ret = -ENOMEM; 234 + goto unregister_tzd; 235 + } 236 + 237 + zone->sensor_id = i; 238 + zone->scpi_sensors = scpi_sensors; 239 + zone->tzd = thermal_zone_of_sensor_register(dev, i, zone, 240 + &scpi_sensor_ops); 241 + /* 242 + * The call to thermal_zone_of_sensor_register returns 243 + * an error for sensors that are not associated with 244 + * any thermal zones or if the thermal subsystem is 245 + * not configured. 
246 + */ 247 + if (IS_ERR(zone->tzd)) { 248 + devm_kfree(dev, zone); 249 + continue; 250 + } 251 + list_add(&zone->list, &scpi_sensors->thermal_zones); 252 + } 253 + 254 + return 0; 255 + 256 + unregister_tzd: 257 + unregister_thermal_zones(pdev, scpi_sensors); 258 + return ret; 259 + } 260 + 261 + static int scpi_hwmon_remove(struct platform_device *pdev) 262 + { 263 + struct scpi_sensors *scpi_sensors = platform_get_drvdata(pdev); 264 + 265 + unregister_thermal_zones(pdev, scpi_sensors); 266 + 267 + return 0; 268 + } 269 + 270 + static const struct of_device_id scpi_of_match[] = { 271 + {.compatible = "arm,scpi-sensors"}, 272 + {}, 273 + }; 274 + 275 + static struct platform_driver scpi_hwmon_platdrv = { 276 + .driver = { 277 + .name = "scpi-hwmon", 278 + .owner = THIS_MODULE, 279 + .of_match_table = scpi_of_match, 280 + }, 281 + .probe = scpi_hwmon_probe, 282 + .remove = scpi_hwmon_remove, 283 + }; 284 + module_platform_driver(scpi_hwmon_platdrv); 285 + 286 + MODULE_AUTHOR("Punit Agrawal <punit.agrawal@arm.com>"); 287 + MODULE_DESCRIPTION("ARM SCPI HWMON interface driver"); 288 + MODULE_LICENSE("GPL v2");
+23 -3
drivers/memory/pl172.c
··· 118 118 if (of_property_read_bool(np, "mpmc,extended-wait")) 119 119 cfg |= MPMC_STATIC_CFG_EW; 120 120 121 - if (of_property_read_bool(np, "mpmc,buffer-enable")) 121 + if (amba_part(adev) == 0x172 && 122 + of_property_read_bool(np, "mpmc,buffer-enable")) 122 123 cfg |= MPMC_STATIC_CFG_B; 123 124 124 125 if (of_property_read_bool(np, "mpmc,write-protect")) ··· 191 190 } 192 191 193 192 static const char * const pl172_revisions[] = {"r1", "r2", "r2p3", "r2p4"}; 193 + static const char * const pl175_revisions[] = {"r1"}; 194 + static const char * const pl176_revisions[] = {"r0"}; 194 195 195 196 static int pl172_probe(struct amba_device *adev, const struct amba_id *id) 196 197 { ··· 205 202 if (amba_part(adev) == 0x172) { 206 203 if (amba_rev(adev) < ARRAY_SIZE(pl172_revisions)) 207 204 rev = pl172_revisions[amba_rev(adev)]; 205 + } else if (amba_part(adev) == 0x175) { 206 + if (amba_rev(adev) < ARRAY_SIZE(pl175_revisions)) 207 + rev = pl175_revisions[amba_rev(adev)]; 208 + } else if (amba_part(adev) == 0x176) { 209 + if (amba_rev(adev) < ARRAY_SIZE(pl176_revisions)) 210 + rev = pl176_revisions[amba_rev(adev)]; 208 211 } 209 212 210 213 dev_info(dev, "ARM PL%x revision %s\n", amba_part(adev), rev); ··· 287 278 } 288 279 289 280 static const struct amba_id pl172_ids[] = { 281 + /* PrimeCell MPMC PL172, EMC found on NXP LPC18xx and LPC43xx */ 290 282 { 291 - .id = 0x07341172, 292 - .mask = 0xffffffff, 283 + .id = 0x07041172, 284 + .mask = 0x3f0fffff, 285 + }, 286 + /* PrimeCell MPMC PL175, EMC found on NXP LPC32xx */ 287 + { 288 + .id = 0x07041175, 289 + .mask = 0x3f0fffff, 290 + }, 291 + /* PrimeCell MPMC PL176 */ 292 + { 293 + .id = 0x89041176, 294 + .mask = 0xff0fffff, 293 295 }, 294 296 { 0, 0 }, 295 297 };
+4
drivers/misc/atmel_tclib.c
··· 125 125 if (IS_ERR(clk)) 126 126 return PTR_ERR(clk); 127 127 128 + tc->slow_clk = devm_clk_get(&pdev->dev, "slow_clk"); 129 + if (IS_ERR(tc->slow_clk)) 130 + return PTR_ERR(tc->slow_clk); 131 + 128 132 r = platform_get_resource(pdev, IORESOURCE_MEM, 0); 129 133 tc->regs = devm_ioremap_resource(&pdev->dev, r); 130 134 if (IS_ERR(tc->regs))
+19 -7
drivers/pwm/pwm-atmel-tcb.c
··· 305 305 */ 306 306 if (i == 5) { 307 307 i = slowclk; 308 - rate = 32768; 308 + rate = clk_get_rate(tc->slow_clk); 309 309 min = div_u64(NSEC_PER_SEC, rate); 310 310 max = min << tc->tcb_config->counter_width; 311 311 ··· 387 387 388 388 tcbpwm = devm_kzalloc(&pdev->dev, sizeof(*tcbpwm), GFP_KERNEL); 389 389 if (tcbpwm == NULL) { 390 - atmel_tc_free(tc); 390 + err = -ENOMEM; 391 391 dev_err(&pdev->dev, "failed to allocate memory\n"); 392 - return -ENOMEM; 392 + goto err_free_tc; 393 393 } 394 394 395 395 tcbpwm->chip.dev = &pdev->dev; ··· 400 400 tcbpwm->chip.npwm = NPWM; 401 401 tcbpwm->tc = tc; 402 402 403 + err = clk_prepare_enable(tc->slow_clk); 404 + if (err) 405 + goto err_free_tc; 406 + 403 407 spin_lock_init(&tcbpwm->lock); 404 408 405 409 err = pwmchip_add(&tcbpwm->chip); 406 - if (err < 0) { 407 - atmel_tc_free(tc); 408 - return err; 409 - } 410 + if (err < 0) 411 + goto err_disable_clk; 410 412 411 413 platform_set_drvdata(pdev, tcbpwm); 412 414 413 415 return 0; 416 + 417 + err_disable_clk: 418 + clk_disable_unprepare(tcbpwm->tc->slow_clk); 419 + 420 + err_free_tc: 421 + atmel_tc_free(tc); 422 + 423 + return err; 414 424 } 415 425 416 426 static int atmel_tcb_pwm_remove(struct platform_device *pdev) 417 427 { 418 428 struct atmel_tcb_pwm_chip *tcbpwm = platform_get_drvdata(pdev); 419 429 int err; 430 + 431 + clk_disable_unprepare(tcbpwm->tc->slow_clk); 420 432 421 433 err = pwmchip_remove(&tcbpwm->chip); 422 434 if (err < 0)
+1
drivers/soc/Kconfig
··· 3 3 source "drivers/soc/brcmstb/Kconfig" 4 4 source "drivers/soc/mediatek/Kconfig" 5 5 source "drivers/soc/qcom/Kconfig" 6 + source "drivers/soc/rockchip/Kconfig" 6 7 source "drivers/soc/sunxi/Kconfig" 7 8 source "drivers/soc/ti/Kconfig" 8 9 source "drivers/soc/versatile/Kconfig"
+1
drivers/soc/Makefile
··· 6 6 obj-$(CONFIG_MACH_DOVE) += dove/ 7 7 obj-$(CONFIG_ARCH_MEDIATEK) += mediatek/ 8 8 obj-$(CONFIG_ARCH_QCOM) += qcom/ 9 + obj-$(CONFIG_ARCH_ROCKCHIP) += rockchip/ 9 10 obj-$(CONFIG_ARCH_SUNXI) += sunxi/ 10 11 obj-$(CONFIG_ARCH_TEGRA) += tegra/ 11 12 obj-$(CONFIG_SOC_TI) += ti/
+9 -8
drivers/soc/qcom/Kconfig
··· 19 19 modes. It interface with various system drivers to put the cores in 20 20 low power modes. 21 21 22 + config QCOM_SMEM 23 + tristate "Qualcomm Shared Memory Manager (SMEM)" 24 + depends on ARCH_QCOM 25 + depends on HWSPINLOCK 26 + help 27 + Say y here to enable support for the Qualcomm Shared Memory Manager. 28 + The driver provides an interface to items in a heap shared among all 29 + processors in a Qualcomm platform. 30 + 22 31 config QCOM_SMD 23 32 tristate "Qualcomm Shared Memory Driver (SMD)" 24 33 depends on QCOM_SMEM ··· 49 40 50 41 Say M here if you want to include support for the Qualcomm RPM as a 51 42 module. This will build a module called "qcom-smd-rpm". 52 - 53 - config QCOM_SMEM 54 - tristate "Qualcomm Shared Memory Manager (SMEM)" 55 - depends on ARCH_QCOM 56 - help 57 - Say y here to enable support for the Qualcomm Shared Memory Manager. 58 - The driver provides an interface to items in a heap shared among all 59 - processors in a Qualcomm platform.
+38 -30
drivers/soc/qcom/smd-rpm.c
··· 17 17 #include <linux/of_platform.h> 18 18 #include <linux/io.h> 19 19 #include <linux/interrupt.h> 20 + #include <linux/slab.h> 20 21 21 22 #include <linux/soc/qcom/smd.h> 22 23 #include <linux/soc/qcom/smd-rpm.h> ··· 45 44 * @length: length of the payload 46 45 */ 47 46 struct qcom_rpm_header { 48 - u32 service_type; 49 - u32 length; 47 + __le32 service_type; 48 + __le32 length; 50 49 }; 51 50 52 51 /** ··· 58 57 * @data_len: length of the payload following this header 59 58 */ 60 59 struct qcom_rpm_request { 61 - u32 msg_id; 62 - u32 flags; 63 - u32 type; 64 - u32 id; 65 - u32 data_len; 60 + __le32 msg_id; 61 + __le32 flags; 62 + __le32 type; 63 + __le32 id; 64 + __le32 data_len; 66 65 }; 67 66 68 67 /** ··· 75 74 * Multiple of these messages can be stacked in an rpm message. 76 75 */ 77 76 struct qcom_rpm_message { 78 - u32 msg_type; 79 - u32 length; 77 + __le32 msg_type; 78 + __le32 length; 80 79 union { 81 - u32 msg_id; 80 + __le32 msg_id; 82 81 u8 message[0]; 83 82 }; 84 83 }; ··· 105 104 static unsigned msg_id = 1; 106 105 int left; 107 106 int ret; 108 - 109 107 struct { 110 108 struct qcom_rpm_header hdr; 111 109 struct qcom_rpm_request req; 112 - u8 payload[count]; 113 - } pkt; 110 + u8 payload[]; 111 + } *pkt; 112 + size_t size = sizeof(*pkt) + count; 114 113 115 114 /* SMD packets to the RPM may not exceed 256 bytes */ 116 - if (WARN_ON(sizeof(pkt) >= 256)) 115 + if (WARN_ON(size >= 256)) 117 116 return -EINVAL; 117 + 118 + pkt = kmalloc(size, GFP_KERNEL); 119 + if (!pkt) 120 + return -ENOMEM; 118 121 119 122 mutex_lock(&rpm->lock); 120 123 121 - pkt.hdr.service_type = RPM_SERVICE_TYPE_REQUEST; 122 - pkt.hdr.length = sizeof(struct qcom_rpm_request) + count; 124 + pkt->hdr.service_type = cpu_to_le32(RPM_SERVICE_TYPE_REQUEST); 125 + pkt->hdr.length = cpu_to_le32(sizeof(struct qcom_rpm_request) + count); 123 126 124 - pkt.req.msg_id = msg_id++; 125 - pkt.req.flags = BIT(state); 126 - pkt.req.type = type; 127 - pkt.req.id = id; 128 - pkt.req.data_len = 
count; 129 - memcpy(pkt.payload, buf, count); 127 + pkt->req.msg_id = cpu_to_le32(msg_id++); 128 + pkt->req.flags = cpu_to_le32(state); 129 + pkt->req.type = cpu_to_le32(type); 130 + pkt->req.id = cpu_to_le32(id); 131 + pkt->req.data_len = cpu_to_le32(count); 132 + memcpy(pkt->payload, buf, count); 130 133 131 - ret = qcom_smd_send(rpm->rpm_channel, &pkt, sizeof(pkt)); 134 + ret = qcom_smd_send(rpm->rpm_channel, pkt, size); 132 135 if (ret) 133 136 goto out; 134 137 ··· 143 138 ret = rpm->ack_status; 144 139 145 140 out: 141 + kfree(pkt); 146 142 mutex_unlock(&rpm->lock); 147 143 return ret; 148 144 } ··· 154 148 size_t count) 155 149 { 156 150 const struct qcom_rpm_header *hdr = data; 151 + size_t hdr_length = le32_to_cpu(hdr->length); 157 152 const struct qcom_rpm_message *msg; 158 153 struct qcom_smd_rpm *rpm = dev_get_drvdata(&qsdev->dev); 159 154 const u8 *buf = data + sizeof(struct qcom_rpm_header); 160 - const u8 *end = buf + hdr->length; 155 + const u8 *end = buf + hdr_length; 161 156 char msgbuf[32]; 162 157 int status = 0; 163 - u32 len; 158 + u32 len, msg_length; 164 159 165 - if (hdr->service_type != RPM_SERVICE_TYPE_REQUEST || 166 - hdr->length < sizeof(struct qcom_rpm_message)) { 160 + if (le32_to_cpu(hdr->service_type) != RPM_SERVICE_TYPE_REQUEST || 161 + hdr_length < sizeof(struct qcom_rpm_message)) { 167 162 dev_err(&qsdev->dev, "invalid request\n"); 168 163 return 0; 169 164 } 170 165 171 166 while (buf < end) { 172 167 msg = (struct qcom_rpm_message *)buf; 173 - switch (msg->msg_type) { 168 + msg_length = le32_to_cpu(msg->length); 169 + switch (le32_to_cpu(msg->msg_type)) { 174 170 case RPM_MSG_TYPE_MSG_ID: 175 171 break; 176 172 case RPM_MSG_TYPE_ERR: 177 - len = min_t(u32, ALIGN(msg->length, 4), sizeof(msgbuf)); 173 + len = min_t(u32, ALIGN(msg_length, 4), sizeof(msgbuf)); 178 174 memcpy_fromio(msgbuf, msg->message, len); 179 175 msgbuf[len - 1] = 0; 180 176 ··· 187 179 break; 188 180 } 189 181 190 - buf = PTR_ALIGN(buf + 2 * sizeof(u32) + 
msg->length, 4); 182 + buf = PTR_ALIGN(buf + 2 * sizeof(u32) + msg_length, 4); 191 183 } 192 184 193 185 rpm->ack_status = status;
+178 -114
drivers/soc/qcom/smd.c
··· 65 65 */ 66 66 67 67 struct smd_channel_info; 68 + struct smd_channel_info_pair; 68 69 struct smd_channel_info_word; 70 + struct smd_channel_info_word_pair; 69 71 70 72 #define SMD_ALLOC_TBL_COUNT 2 71 73 #define SMD_ALLOC_TBL_SIZE 64 ··· 87 85 .fifo_base_id = 338 88 86 }, 89 87 { 90 - .alloc_tbl_id = 14, 91 - .info_base_id = 266, 88 + .alloc_tbl_id = 266, 89 + .info_base_id = 138, 92 90 .fifo_base_id = 202, 93 91 }, 94 92 }; ··· 153 151 * @name: name of the channel 154 152 * @state: local state of the channel 155 153 * @remote_state: remote state of the channel 156 - * @tx_info: byte aligned outgoing channel info 157 - * @rx_info: byte aligned incoming channel info 158 - * @tx_info_word: word aligned outgoing channel info 159 - * @rx_info_word: word aligned incoming channel info 154 + * @info: byte aligned outgoing/incoming channel info 155 + * @info_word: word aligned outgoing/incoming channel info 160 156 * @tx_lock: lock to make writes to the channel mutually exclusive 161 157 * @fblockread_event: wakeup event tied to tx fBLOCKREADINTR 162 158 * @tx_fifo: pointer to the outgoing ring buffer ··· 175 175 enum smd_channel_state state; 176 176 enum smd_channel_state remote_state; 177 177 178 - struct smd_channel_info *tx_info; 179 - struct smd_channel_info *rx_info; 180 - 181 - struct smd_channel_info_word *tx_info_word; 182 - struct smd_channel_info_word *rx_info_word; 178 + struct smd_channel_info_pair *info; 179 + struct smd_channel_info_word_pair *info_word; 183 180 184 181 struct mutex tx_lock; 185 182 wait_queue_head_t fblockread_event; ··· 212 215 * Format of the smd_info smem items, for byte aligned channels. 
213 216 */ 214 217 struct smd_channel_info { 215 - u32 state; 218 + __le32 state; 216 219 u8 fDSR; 217 220 u8 fCTS; 218 221 u8 fCD; ··· 221 224 u8 fTAIL; 222 225 u8 fSTATE; 223 226 u8 fBLOCKREADINTR; 224 - u32 tail; 225 - u32 head; 227 + __le32 tail; 228 + __le32 head; 229 + }; 230 + 231 + struct smd_channel_info_pair { 232 + struct smd_channel_info tx; 233 + struct smd_channel_info rx; 226 234 }; 227 235 228 236 /* 229 237 * Format of the smd_info smem items, for word aligned channels. 230 238 */ 231 239 struct smd_channel_info_word { 232 - u32 state; 233 - u32 fDSR; 234 - u32 fCTS; 235 - u32 fCD; 236 - u32 fRI; 237 - u32 fHEAD; 238 - u32 fTAIL; 239 - u32 fSTATE; 240 - u32 fBLOCKREADINTR; 241 - u32 tail; 242 - u32 head; 240 + __le32 state; 241 + __le32 fDSR; 242 + __le32 fCTS; 243 + __le32 fCD; 244 + __le32 fRI; 245 + __le32 fHEAD; 246 + __le32 fTAIL; 247 + __le32 fSTATE; 248 + __le32 fBLOCKREADINTR; 249 + __le32 tail; 250 + __le32 head; 243 251 }; 244 252 245 - #define GET_RX_CHANNEL_INFO(channel, param) \ 246 - (channel->rx_info_word ? \ 247 - channel->rx_info_word->param : \ 248 - channel->rx_info->param) 253 + struct smd_channel_info_word_pair { 254 + struct smd_channel_info_word tx; 255 + struct smd_channel_info_word rx; 256 + }; 249 257 250 - #define SET_RX_CHANNEL_INFO(channel, param, value) \ 251 - (channel->rx_info_word ? \ 252 - (channel->rx_info_word->param = value) : \ 253 - (channel->rx_info->param = value)) 258 + #define GET_RX_CHANNEL_FLAG(channel, param) \ 259 + ({ \ 260 + BUILD_BUG_ON(sizeof(channel->info->rx.param) != sizeof(u8)); \ 261 + channel->info_word ? \ 262 + le32_to_cpu(channel->info_word->rx.param) : \ 263 + channel->info->rx.param; \ 264 + }) 254 265 255 - #define GET_TX_CHANNEL_INFO(channel, param) \ 256 - (channel->tx_info_word ? 
\ 257 - channel->tx_info_word->param : \ 258 - channel->tx_info->param) 266 + #define GET_RX_CHANNEL_INFO(channel, param) \ 267 + ({ \ 268 + BUILD_BUG_ON(sizeof(channel->info->rx.param) != sizeof(u32)); \ 269 + le32_to_cpu(channel->info_word ? \ 270 + channel->info_word->rx.param : \ 271 + channel->info->rx.param); \ 272 + }) 259 273 260 - #define SET_TX_CHANNEL_INFO(channel, param, value) \ 261 - (channel->tx_info_word ? \ 262 - (channel->tx_info_word->param = value) : \ 263 - (channel->tx_info->param = value)) 274 + #define SET_RX_CHANNEL_FLAG(channel, param, value) \ 275 + ({ \ 276 + BUILD_BUG_ON(sizeof(channel->info->rx.param) != sizeof(u8)); \ 277 + if (channel->info_word) \ 278 + channel->info_word->rx.param = cpu_to_le32(value); \ 279 + else \ 280 + channel->info->rx.param = value; \ 281 + }) 282 + 283 + #define SET_RX_CHANNEL_INFO(channel, param, value) \ 284 + ({ \ 285 + BUILD_BUG_ON(sizeof(channel->info->rx.param) != sizeof(u32)); \ 286 + if (channel->info_word) \ 287 + channel->info_word->rx.param = cpu_to_le32(value); \ 288 + else \ 289 + channel->info->rx.param = cpu_to_le32(value); \ 290 + }) 291 + 292 + #define GET_TX_CHANNEL_FLAG(channel, param) \ 293 + ({ \ 294 + BUILD_BUG_ON(sizeof(channel->info->tx.param) != sizeof(u8)); \ 295 + channel->info_word ? \ 296 + le32_to_cpu(channel->info_word->tx.param) : \ 297 + channel->info->tx.param; \ 298 + }) 299 + 300 + #define GET_TX_CHANNEL_INFO(channel, param) \ 301 + ({ \ 302 + BUILD_BUG_ON(sizeof(channel->info->tx.param) != sizeof(u32)); \ 303 + le32_to_cpu(channel->info_word ? 
\ 304 + channel->info_word->tx.param : \ 305 + channel->info->tx.param); \ 306 + }) 307 + 308 + #define SET_TX_CHANNEL_FLAG(channel, param, value) \ 309 + ({ \ 310 + BUILD_BUG_ON(sizeof(channel->info->tx.param) != sizeof(u8)); \ 311 + if (channel->info_word) \ 312 + channel->info_word->tx.param = cpu_to_le32(value); \ 313 + else \ 314 + channel->info->tx.param = value; \ 315 + }) 316 + 317 + #define SET_TX_CHANNEL_INFO(channel, param, value) \ 318 + ({ \ 319 + BUILD_BUG_ON(sizeof(channel->info->tx.param) != sizeof(u32)); \ 320 + if (channel->info_word) \ 321 + channel->info_word->tx.param = cpu_to_le32(value); \ 322 + else \ 323 + channel->info->tx.param = cpu_to_le32(value); \ 324 + }) 264 325 265 326 /** 266 327 * struct qcom_smd_alloc_entry - channel allocation entry ··· 329 274 */ 330 275 struct qcom_smd_alloc_entry { 331 276 u8 name[20]; 332 - u32 cid; 333 - u32 flags; 334 - u32 ref_count; 277 + __le32 cid; 278 + __le32 flags; 279 + __le32 ref_count; 335 280 } __packed; 336 281 337 282 #define SMD_CHANNEL_FLAGS_EDGE_MASK 0xff ··· 360 305 static void qcom_smd_channel_reset(struct qcom_smd_channel *channel) 361 306 { 362 307 SET_TX_CHANNEL_INFO(channel, state, SMD_CHANNEL_CLOSED); 363 - SET_TX_CHANNEL_INFO(channel, fDSR, 0); 364 - SET_TX_CHANNEL_INFO(channel, fCTS, 0); 365 - SET_TX_CHANNEL_INFO(channel, fCD, 0); 366 - SET_TX_CHANNEL_INFO(channel, fRI, 0); 367 - SET_TX_CHANNEL_INFO(channel, fHEAD, 0); 368 - SET_TX_CHANNEL_INFO(channel, fTAIL, 0); 369 - SET_TX_CHANNEL_INFO(channel, fSTATE, 1); 370 - SET_TX_CHANNEL_INFO(channel, fBLOCKREADINTR, 1); 308 + SET_TX_CHANNEL_FLAG(channel, fDSR, 0); 309 + SET_TX_CHANNEL_FLAG(channel, fCTS, 0); 310 + SET_TX_CHANNEL_FLAG(channel, fCD, 0); 311 + SET_TX_CHANNEL_FLAG(channel, fRI, 0); 312 + SET_TX_CHANNEL_FLAG(channel, fHEAD, 0); 313 + SET_TX_CHANNEL_FLAG(channel, fTAIL, 0); 314 + SET_TX_CHANNEL_FLAG(channel, fSTATE, 1); 315 + SET_TX_CHANNEL_FLAG(channel, fBLOCKREADINTR, 1); 371 316 SET_TX_CHANNEL_INFO(channel, head, 0); 372 
 	SET_TX_CHANNEL_INFO(channel, tail, 0);
···
 
 	dev_dbg(edge->smd->dev, "set_state(%s, %d)\n", channel->name, state);
 
-	SET_TX_CHANNEL_INFO(channel, fDSR, is_open);
-	SET_TX_CHANNEL_INFO(channel, fCTS, is_open);
-	SET_TX_CHANNEL_INFO(channel, fCD, is_open);
+	SET_TX_CHANNEL_FLAG(channel, fDSR, is_open);
+	SET_TX_CHANNEL_FLAG(channel, fCTS, is_open);
+	SET_TX_CHANNEL_FLAG(channel, fCD, is_open);
 
 	SET_TX_CHANNEL_INFO(channel, state, state);
-	SET_TX_CHANNEL_INFO(channel, fSTATE, 1);
+	SET_TX_CHANNEL_FLAG(channel, fSTATE, 1);
 
 	channel->state = state;
 	qcom_smd_signal_channel(channel);
···
 /*
  * Copy count bytes of data using 32bit accesses, if that's required.
  */
-static void smd_copy_to_fifo(void __iomem *_dst,
-			     const void *_src,
+static void smd_copy_to_fifo(void __iomem *dst,
+			     const void *src,
 			     size_t count,
 			     bool word_aligned)
 {
-	u32 *dst = (u32 *)_dst;
-	u32 *src = (u32 *)_src;
-
 	if (word_aligned) {
-		count /= sizeof(u32);
-		while (count--)
-			writel_relaxed(*src++, dst++);
+		__iowrite32_copy(dst, src, count / sizeof(u32));
 	} else {
-		memcpy_toio(_dst, _src, count);
+		memcpy_toio(dst, src, count);
 	}
 }
···
 	if (word_aligned) {
 		count /= sizeof(u32);
 		while (count--)
-			*dst++ = readl_relaxed(src++);
+			*dst++ = __raw_readl(src++);
 	} else {
 		memcpy_fromio(_dst, _src, count);
 	}
···
 	unsigned tail;
 	size_t len;
 
-	word_aligned = channel->rx_info_word != NULL;
+	word_aligned = channel->info_word;
 	tail = GET_RX_CHANNEL_INFO(channel, tail);
 
 	len = min_t(size_t, count, channel->fifo_size - tail);
···
 {
 	bool need_state_scan = false;
 	int remote_state;
-	u32 pktlen;
+	__le32 pktlen;
 	int avail;
 	int ret;
···
 		need_state_scan = true;
 	}
 	/* Indicate that we have seen any state change */
-	SET_RX_CHANNEL_INFO(channel, fSTATE, 0);
+	SET_RX_CHANNEL_FLAG(channel, fSTATE, 0);
 
 	/* Signal waiting qcom_smd_send() about the interrupt */
-	if (!GET_TX_CHANNEL_INFO(channel, fBLOCKREADINTR))
+	if (!GET_TX_CHANNEL_FLAG(channel, fBLOCKREADINTR))
 		wake_up_interruptible(&channel->fblockread_event);
 
 	/* Don't consume any data until we've opened the channel */
···
 		goto out;
 
 	/* Indicate that we've seen the new data */
-	SET_RX_CHANNEL_INFO(channel, fHEAD, 0);
+	SET_RX_CHANNEL_FLAG(channel, fHEAD, 0);
 
 	/* Consume data */
 	for (;;) {
···
 		if (!channel->pkt_size && avail >= SMD_PACKET_HEADER_LEN) {
 			qcom_smd_channel_peek(channel, &pktlen, sizeof(pktlen));
 			qcom_smd_channel_advance(channel, SMD_PACKET_HEADER_LEN);
-			channel->pkt_size = pktlen;
+			channel->pkt_size = le32_to_cpu(pktlen);
 		} else if (channel->pkt_size && avail >= channel->pkt_size) {
 			ret = qcom_smd_channel_recv_single(channel);
 			if (ret)
···
 	}
 
 	/* Indicate that we have seen and updated tail */
-	SET_RX_CHANNEL_INFO(channel, fTAIL, 1);
+	SET_RX_CHANNEL_FLAG(channel, fTAIL, 1);
 
 	/* Signal the remote that we've consumed the data (if requested) */
-	if (!GET_RX_CHANNEL_INFO(channel, fBLOCKREADINTR)) {
+	if (!GET_RX_CHANNEL_FLAG(channel, fBLOCKREADINTR)) {
 		/* Ensure ordering of channel info updates */
 		wmb();
···
 	unsigned head;
 	size_t len;
 
-	word_aligned = channel->tx_info_word != NULL;
+	word_aligned = channel->info_word;
 	head = GET_TX_CHANNEL_INFO(channel, head);
 
 	len = min_t(size_t, count, channel->fifo_size - head);
···
 */
 int qcom_smd_send(struct qcom_smd_channel *channel, const void *data, int len)
 {
-	u32 hdr[5] = {len,};
+	__le32 hdr[5] = { cpu_to_le32(len), };
 	int tlen = sizeof(hdr) + len;
 	int ret;
 
 	/* Word aligned channels only accept word size aligned data */
-	if (channel->rx_info_word != NULL && len % 4)
+	if (channel->info_word && len % 4)
+		return -EINVAL;
+
+	/* Reject packets that are too big */
+	if (tlen >= channel->fifo_size)
 		return -EINVAL;
 
 	ret = mutex_lock_interruptible(&channel->tx_lock);
···
 		goto out;
 	}
 
-	SET_TX_CHANNEL_INFO(channel, fBLOCKREADINTR, 0);
+	SET_TX_CHANNEL_FLAG(channel, fBLOCKREADINTR, 0);
 
 	ret = wait_event_interruptible(channel->fblockread_event,
 				       qcom_smd_get_tx_avail(channel) >= tlen ||
···
 	if (ret)
 		goto out;
 
-	SET_TX_CHANNEL_INFO(channel, fBLOCKREADINTR, 1);
+	SET_TX_CHANNEL_FLAG(channel, fBLOCKREADINTR, 1);
 	}
 
-	SET_TX_CHANNEL_INFO(channel, fTAIL, 0);
+	SET_TX_CHANNEL_FLAG(channel, fTAIL, 0);
 
 	qcom_smd_write_fifo(channel, hdr, sizeof(hdr));
 	qcom_smd_write_fifo(channel, data, len);
 
-	SET_TX_CHANNEL_INFO(channel, fHEAD, 1);
+	SET_TX_CHANNEL_FLAG(channel, fHEAD, 1);
 
 	/* Ensure ordering of channel info updates */
 	wmb();
···
 
 static int qcom_smd_dev_match(struct device *dev, struct device_driver *drv)
 {
+	struct qcom_smd_device *qsdev = to_smd_device(dev);
+	struct qcom_smd_driver *qsdrv = container_of(drv, struct qcom_smd_driver, driver);
+	const struct qcom_smd_id *match = qsdrv->smd_match_table;
+	const char *name = qsdev->channel->name;
+
+	if (match) {
+		while (match->name[0]) {
+			if (!strcmp(match->name, name))
+				return 1;
+			match++;
+		}
+	}
+
 	return of_driver_match_device(dev, drv);
 }
···
 	for_each_available_child_of_node(edge_node, child) {
 		key = "qcom,smd-channels";
 		ret = of_property_read_string(child, key, &name);
-		if (ret) {
-			of_node_put(child);
+		if (ret)
 			continue;
-		}
 
 		if (strcmp(name, channel) == 0)
 			return child;
···
 	if (channel->qsdev)
 		return -EEXIST;
 
-	node = qcom_smd_match_channel(edge->of_node, channel->name);
-	if (!node) {
-		dev_dbg(smd->dev, "no match for '%s'\n", channel->name);
-		return -ENXIO;
-	}
-
 	dev_dbg(smd->dev, "registering '%s'\n", channel->name);
 
 	qsdev = kzalloc(sizeof(*qsdev), GFP_KERNEL);
 	if (!qsdev)
 		return -ENOMEM;
 
-	dev_set_name(&qsdev->dev, "%s.%s", edge->of_node->name, node->name);
+	node = qcom_smd_match_channel(edge->of_node, channel->name);
+	dev_set_name(&qsdev->dev, "%s.%s",
+		     edge->of_node->name,
+		     node ? node->name : channel->name);
+
 	qsdev->dev.parent = smd->dev;
 	qsdev->dev.bus = &qcom_smd_bus;
 	qsdev->dev.release = qcom_smd_release_device;
···
 	spin_lock_init(&channel->recv_lock);
 	init_waitqueue_head(&channel->fblockread_event);
 
-	ret = qcom_smem_get(edge->remote_pid, smem_info_item, (void **)&info,
-			    &info_size);
-	if (ret)
+	info = qcom_smem_get(edge->remote_pid, smem_info_item, &info_size);
+	if (IS_ERR(info)) {
+		ret = PTR_ERR(info);
 		goto free_name_and_channel;
+	}
 
 	/*
 	 * Use the size of the item to figure out which channel info struct to
 	 * use.
 	 */
 	if (info_size == 2 * sizeof(struct smd_channel_info_word)) {
-		channel->tx_info_word = info;
-		channel->rx_info_word = info + sizeof(struct smd_channel_info_word);
+		channel->info_word = info;
 	} else if (info_size == 2 * sizeof(struct smd_channel_info)) {
-		channel->tx_info = info;
-		channel->rx_info = info + sizeof(struct smd_channel_info);
+		channel->info = info;
 	} else {
 		dev_err(smd->dev,
 			"channel info of size %zu not supported\n", info_size);
···
 		goto free_name_and_channel;
 	}
 
-	ret = qcom_smem_get(edge->remote_pid, smem_fifo_item, &fifo_base,
-			    &fifo_size);
-	if (ret)
+	fifo_base = qcom_smem_get(edge->remote_pid, smem_fifo_item, &fifo_size);
+	if (IS_ERR(fifo_base)) {
+		ret = PTR_ERR(fifo_base);
 		goto free_name_and_channel;
+	}
 
 	/* The channel consist of a rx and tx fifo of equal size */
 	fifo_size /= 2;
···
 	unsigned long flags;
 	unsigned fifo_id;
 	unsigned info_id;
-	int ret;
 	int tbl;
 	int i;
+	u32 eflags, cid;
 
 	for (tbl = 0; tbl < SMD_ALLOC_TBL_COUNT; tbl++) {
-		ret = qcom_smem_get(edge->remote_pid,
-				    smem_items[tbl].alloc_tbl_id,
-				    (void **)&alloc_tbl,
-				    NULL);
-		if (ret < 0)
+		alloc_tbl = qcom_smem_get(edge->remote_pid,
+					  smem_items[tbl].alloc_tbl_id, NULL);
+		if (IS_ERR(alloc_tbl))
 			continue;
 
 		for (i = 0; i < SMD_ALLOC_TBL_SIZE; i++) {
 			entry = &alloc_tbl[i];
+			eflags = le32_to_cpu(entry->flags);
 			if (test_bit(i, edge->allocated[tbl]))
 				continue;
···
 			if (!entry->name[0])
 				continue;
 
-			if (!(entry->flags & SMD_CHANNEL_FLAGS_PACKET))
+			if (!(eflags & SMD_CHANNEL_FLAGS_PACKET))
 				continue;
 
-			if ((entry->flags & SMD_CHANNEL_FLAGS_EDGE_MASK) != edge->edge_id)
+			if ((eflags & SMD_CHANNEL_FLAGS_EDGE_MASK) != edge->edge_id)
 				continue;
 
-			info_id = smem_items[tbl].info_base_id + entry->cid;
-			fifo_id = smem_items[tbl].fifo_base_id + entry->cid;
+			cid = le32_to_cpu(entry->cid);
+			info_id = smem_items[tbl].info_base_id + cid;
+			fifo_id = smem_items[tbl].fifo_base_id + cid;
 
 			channel = qcom_smd_create_channel(edge, info_id, fifo_id, entry->name);
 			if (IS_ERR(channel))
···
 	int num_edges;
 	int ret;
 	int i = 0;
+	void *p;
 
 	/* Wait for smem */
-	ret = qcom_smem_get(QCOM_SMEM_HOST_ANY, smem_items[0].alloc_tbl_id, NULL, NULL);
-	if (ret == -EPROBE_DEFER)
-		return ret;
+	p = qcom_smem_get(QCOM_SMEM_HOST_ANY, smem_items[0].alloc_tbl_id, NULL);
+	if (PTR_ERR(p) == -EPROBE_DEFER)
+		return PTR_ERR(p);
 
 	num_edges = of_get_available_child_count(pdev->dev.of_node);
 	array_size = sizeof(*smd) + num_edges * sizeof(struct qcom_smd_edge);
+197 -171
drivers/soc/qcom/smem.c
···
  * @params:	parameters to the command
  */
 struct smem_proc_comm {
-	u32 command;
-	u32 status;
-	u32 params[2];
+	__le32 command;
+	__le32 status;
+	__le32 params[2];
 };
 
 /**
···
  *		the default region. bits 0,1 are reserved
  */
 struct smem_global_entry {
-	u32 allocated;
-	u32 offset;
-	u32 size;
-	u32 aux_base; /* bits 1:0 reserved */
+	__le32 allocated;
+	__le32 offset;
+	__le32 size;
+	__le32 aux_base; /* bits 1:0 reserved */
 };
 #define AUX_BASE_MASK		0xfffffffc
···
  */
 struct smem_header {
 	struct smem_proc_comm proc_comm[4];
-	u32 version[32];
-	u32 initialized;
-	u32 free_offset;
-	u32 available;
-	u32 reserved;
+	__le32 version[32];
+	__le32 initialized;
+	__le32 free_offset;
+	__le32 available;
+	__le32 reserved;
 	struct smem_global_entry toc[SMEM_ITEM_COUNT];
 };
···
  * @reserved:	reserved entries for later use
  */
 struct smem_ptable_entry {
-	u32 offset;
-	u32 size;
-	u32 flags;
-	u16 host0;
-	u16 host1;
-	u32 reserved[8];
+	__le32 offset;
+	__le32 size;
+	__le32 flags;
+	__le16 host0;
+	__le16 host1;
+	__le32 reserved[8];
 };
 
 /**
···
  * @entry:	list of @smem_ptable_entry for the @num_entries partitions
  */
 struct smem_ptable {
-	u32 magic;
-	u32 version;
-	u32 num_entries;
-	u32 reserved[5];
+	u8 magic[4];
+	__le32 version;
+	__le32 num_entries;
+	__le32 reserved[5];
 	struct smem_ptable_entry entry[];
 };
-#define SMEM_PTABLE_MAGIC	0x434f5424 /* "$TOC" */
+
+static const u8 SMEM_PTABLE_MAGIC[] = { 0x24, 0x54, 0x4f, 0x43 }; /* "$TOC" */
 
 /**
  * struct smem_partition_header - header of the partitions
···
  * @reserved:	for now reserved entries
  */
 struct smem_partition_header {
-	u32 magic;
-	u16 host0;
-	u16 host1;
-	u32 size;
-	u32 offset_free_uncached;
-	u32 offset_free_cached;
-	u32 reserved[3];
+	u8 magic[4];
+	__le16 host0;
+	__le16 host1;
+	__le32 size;
+	__le32 offset_free_uncached;
+	__le32 offset_free_cached;
+	__le32 reserved[3];
 };
-#define SMEM_PART_MAGIC		0x54525024 /* "$PRT" */
+
+static const u8 SMEM_PART_MAGIC[] = { 0x24, 0x50, 0x52, 0x54 };
 
 /**
  * struct smem_private_entry - header of each item in the private partition
···
  * @reserved:	for now reserved entry
  */
 struct smem_private_entry {
-	u16 canary;
-	u16 item;
-	u32 size; /* includes padding bytes */
-	u16 padding_data;
-	u16 padding_hdr;
-	u32 reserved;
+	u16 canary; /* bytes are the same so no swapping needed */
+	__le16 item;
+	__le32 size; /* includes padding bytes */
+	__le16 padding_data;
+	__le16 padding_hdr;
+	__le32 reserved;
 };
 #define SMEM_PRIVATE_CANARY	0xa5a5
···
 	struct smem_region regions[0];
 };
 
+static struct smem_private_entry *
+phdr_to_last_private_entry(struct smem_partition_header *phdr)
+{
+	void *p = phdr;
+
+	return p + le32_to_cpu(phdr->offset_free_uncached);
+}
+
+static void *phdr_to_first_cached_entry(struct smem_partition_header *phdr)
+{
+	void *p = phdr;
+
+	return p + le32_to_cpu(phdr->offset_free_cached);
+}
+
+static struct smem_private_entry *
+phdr_to_first_private_entry(struct smem_partition_header *phdr)
+{
+	void *p = phdr;
+
+	return p + sizeof(*phdr);
+}
+
+static struct smem_private_entry *
+private_entry_next(struct smem_private_entry *e)
+{
+	void *p = e;
+
+	return p + sizeof(*e) + le16_to_cpu(e->padding_hdr) +
+		le32_to_cpu(e->size);
+}
+
+static void *entry_to_item(struct smem_private_entry *e)
+{
+	void *p = e;
+
+	return p + sizeof(*e) + le16_to_cpu(e->padding_hdr);
+}
+
 /* Pointer to the one and only smem handle */
 static struct qcom_smem *__smem;
···
 			       size_t size)
 {
 	struct smem_partition_header *phdr;
-	struct smem_private_entry *hdr;
+	struct smem_private_entry *hdr, *end;
 	size_t alloc_size;
-	void *p;
+	void *cached;
 
 	phdr = smem->partitions[host];
+	hdr = phdr_to_first_private_entry(phdr);
+	end = phdr_to_last_private_entry(phdr);
+	cached = phdr_to_first_cached_entry(phdr);
 
-	p = (void *)phdr + sizeof(*phdr);
-	while (p < (void *)phdr + phdr->offset_free_uncached) {
-		hdr = p;
-
+	while (hdr < end) {
 		if (hdr->canary != SMEM_PRIVATE_CANARY) {
 			dev_err(smem->dev,
 				"Found invalid canary in host %d partition\n",
···
 			return -EINVAL;
 		}
 
-		if (hdr->item == item)
+		if (le16_to_cpu(hdr->item) == item)
 			return -EEXIST;
 
-		p += sizeof(*hdr) + hdr->padding_hdr + hdr->size;
+		hdr = private_entry_next(hdr);
 	}
 
 	/* Check that we don't grow into the cached region */
 	alloc_size = sizeof(*hdr) + ALIGN(size, 8);
-	if (p + alloc_size >= (void *)phdr + phdr->offset_free_cached) {
+	if ((void *)hdr + alloc_size >= cached) {
 		dev_err(smem->dev, "Out of memory\n");
 		return -ENOSPC;
 	}
 
-	hdr = p;
 	hdr->canary = SMEM_PRIVATE_CANARY;
-	hdr->item = item;
-	hdr->size = ALIGN(size, 8);
-	hdr->padding_data = hdr->size - size;
+	hdr->item = cpu_to_le16(item);
+	hdr->size = cpu_to_le32(ALIGN(size, 8));
+	hdr->padding_data = cpu_to_le16(le32_to_cpu(hdr->size) - size);
 	hdr->padding_hdr = 0;
 
 	/*
···
 	 * gets a consistent view of the linked list.
 	 */
 	wmb();
-	phdr->offset_free_uncached += alloc_size;
+	le32_add_cpu(&phdr->offset_free_uncached, alloc_size);
 
 	return 0;
 }
···
 		return -EEXIST;
 
 	size = ALIGN(size, 8);
-	if (WARN_ON(size > header->available))
+	if (WARN_ON(size > le32_to_cpu(header->available)))
 		return -ENOMEM;
 
 	entry->offset = header->free_offset;
-	entry->size = size;
+	entry->size = cpu_to_le32(size);
 
 	/*
 	 * Ensure the header is consistent before we mark the item allocated,
···
 	 * even though they do not take the spinlock on read.
 	 */
 	wmb();
-	entry->allocated = 1;
+	entry->allocated = cpu_to_le32(1);
 
-	header->free_offset += size;
-	header->available -= size;
+	le32_add_cpu(&header->free_offset, size);
+	le32_add_cpu(&header->available, -size);
 
 	return 0;
 }
···
 }
 EXPORT_SYMBOL(qcom_smem_alloc);
 
-static int qcom_smem_get_global(struct qcom_smem *smem,
-				unsigned item,
-				void **ptr,
-				size_t *size)
+static void *qcom_smem_get_global(struct qcom_smem *smem,
+				  unsigned item,
+				  size_t *size)
 {
 	struct smem_header *header;
 	struct smem_region *area;
···
 	unsigned i;
 
 	if (WARN_ON(item >= SMEM_ITEM_COUNT))
-		return -EINVAL;
+		return ERR_PTR(-EINVAL);
 
 	header = smem->regions[0].virt_base;
 	entry = &header->toc[item];
 	if (!entry->allocated)
-		return -ENXIO;
+		return ERR_PTR(-ENXIO);
 
-	if (ptr != NULL) {
-		aux_base = entry->aux_base & AUX_BASE_MASK;
+	aux_base = le32_to_cpu(entry->aux_base) & AUX_BASE_MASK;
 
-		for (i = 0; i < smem->num_regions; i++) {
-			area = &smem->regions[i];
+	for (i = 0; i < smem->num_regions; i++) {
+		area = &smem->regions[i];
 
-			if (area->aux_base == aux_base || !aux_base) {
-				*ptr = area->virt_base + entry->offset;
-				break;
-			}
+		if (area->aux_base == aux_base || !aux_base) {
+			if (size != NULL)
+				*size = le32_to_cpu(entry->size);
+			return area->virt_base + le32_to_cpu(entry->offset);
 		}
 	}
-	if (size != NULL)
-		*size = entry->size;
 
-	return 0;
+	return ERR_PTR(-ENOENT);
 }
 
-static int qcom_smem_get_private(struct qcom_smem *smem,
-				 unsigned host,
-				 unsigned item,
-				 void **ptr,
-				 size_t *size)
+static void *qcom_smem_get_private(struct qcom_smem *smem,
+				   unsigned host,
+				   unsigned item,
+				   size_t *size)
 {
 	struct smem_partition_header *phdr;
-	struct smem_private_entry *hdr;
-	void *p;
+	struct smem_private_entry *e, *end;
 
 	phdr = smem->partitions[host];
+	e = phdr_to_first_private_entry(phdr);
+	end = phdr_to_last_private_entry(phdr);
 
-	p = (void *)phdr + sizeof(*phdr);
-	while (p < (void *)phdr + phdr->offset_free_uncached) {
-		hdr = p;
-
-		if (hdr->canary != SMEM_PRIVATE_CANARY) {
+	while (e < end) {
+		if (e->canary != SMEM_PRIVATE_CANARY) {
 			dev_err(smem->dev,
 				"Found invalid canary in host %d partition\n",
 				host);
-			return -EINVAL;
+			return ERR_PTR(-EINVAL);
 		}
 
-		if (hdr->item == item) {
-			if (ptr != NULL)
-				*ptr = p + sizeof(*hdr) + hdr->padding_hdr;
-
+		if (le16_to_cpu(e->item) == item) {
 			if (size != NULL)
-				*size = hdr->size - hdr->padding_data;
+				*size = le32_to_cpu(e->size) -
+					le16_to_cpu(e->padding_data);
 
-			return 0;
+			return entry_to_item(e);
 		}
 
-		p += sizeof(*hdr) + hdr->padding_hdr + hdr->size;
+		e = private_entry_next(e);
 	}
 
-	return -ENOENT;
+	return ERR_PTR(-ENOENT);
 }
 
 /**
  * qcom_smem_get() - resolve ptr of size of a smem item
  * @host:	the remote processor, or -1
  * @item:	smem item handle
- * @ptr:	pointer to be filled out with address of the item
  * @size:	pointer to be filled out with size of the item
  *
- * Looks up pointer and size of a smem item.
+ * Looks up smem item and returns pointer to it. Size of smem
+ * item is returned in @size.
  */
-int qcom_smem_get(unsigned host, unsigned item, void **ptr, size_t *size)
+void *qcom_smem_get(unsigned host, unsigned item, size_t *size)
 {
 	unsigned long flags;
 	int ret;
+	void *ptr = ERR_PTR(-EPROBE_DEFER);
 
 	if (!__smem)
-		return -EPROBE_DEFER;
+		return ptr;
 
 	ret = hwspin_lock_timeout_irqsave(__smem->hwlock,
 					  HWSPINLOCK_TIMEOUT,
 					  &flags);
 	if (ret)
-		return ret;
+		return ERR_PTR(ret);
 
 	if (host < SMEM_HOST_COUNT && __smem->partitions[host])
-		ret = qcom_smem_get_private(__smem, host, item, ptr, size);
+		ptr = qcom_smem_get_private(__smem, host, item, size);
 	else
-		ret = qcom_smem_get_global(__smem, item, ptr, size);
+		ptr = qcom_smem_get_global(__smem, item, size);
 
 	hwspin_unlock_irqrestore(__smem->hwlock, &flags);
-	return ret;
+
+	return ptr;
 }
 EXPORT_SYMBOL(qcom_smem_get);
···
 
 	if (host < SMEM_HOST_COUNT && __smem->partitions[host]) {
 		phdr = __smem->partitions[host];
-		ret = phdr->offset_free_cached - phdr->offset_free_uncached;
+		ret = le32_to_cpu(phdr->offset_free_cached) -
+		      le32_to_cpu(phdr->offset_free_uncached);
 	} else {
 		header = __smem->regions[0].virt_base;
-		ret = header->available;
+		ret = le32_to_cpu(header->available);
 	}
 
 	return ret;
···
 
 static int qcom_smem_get_sbl_version(struct qcom_smem *smem)
 {
-	unsigned *versions;
+	__le32 *versions;
 	size_t size;
-	int ret;
 
-	ret = qcom_smem_get_global(smem, SMEM_ITEM_VERSION,
-				   (void **)&versions, &size);
-	if (ret < 0) {
+	versions = qcom_smem_get_global(smem, SMEM_ITEM_VERSION, &size);
+	if (IS_ERR(versions)) {
 		dev_err(smem->dev, "Unable to read the version item\n");
 		return -ENOENT;
 	}
···
 		return -EINVAL;
 	}
 
-	return versions[SMEM_MASTER_SBL_VERSION_INDEX];
+	return le32_to_cpu(versions[SMEM_MASTER_SBL_VERSION_INDEX]);
 }
 
 static int qcom_smem_enumerate_partitions(struct qcom_smem *smem,
···
 	struct smem_ptable_entry *entry;
 	struct smem_ptable *ptable;
 	unsigned remote_host;
+	u32 version, host0, host1;
 	int i;
 
 	ptable = smem->regions[0].virt_base + smem->regions[0].size - SZ_4K;
-	if (ptable->magic != SMEM_PTABLE_MAGIC)
+	if (memcmp(ptable->magic, SMEM_PTABLE_MAGIC, sizeof(ptable->magic)))
 		return 0;
 
-	if (ptable->version != 1) {
+	version = le32_to_cpu(ptable->version);
+	if (version != 1) {
 		dev_err(smem->dev,
-			"Unsupported partition header version %d\n",
-			ptable->version);
+			"Unsupported partition header version %d\n", version);
 		return -EINVAL;
 	}
 
-	for (i = 0; i < ptable->num_entries; i++) {
+	for (i = 0; i < le32_to_cpu(ptable->num_entries); i++) {
 		entry = &ptable->entry[i];
+		host0 = le16_to_cpu(entry->host0);
+		host1 = le16_to_cpu(entry->host1);
 
-		if (entry->host0 != local_host && entry->host1 != local_host)
+		if (host0 != local_host && host1 != local_host)
 			continue;
 
-		if (!entry->offset)
+		if (!le32_to_cpu(entry->offset))
 			continue;
 
-		if (!entry->size)
+		if (!le32_to_cpu(entry->size))
 			continue;
 
-		if (entry->host0 == local_host)
-			remote_host = entry->host1;
+		if (host0 == local_host)
+			remote_host = host1;
 		else
-			remote_host = entry->host0;
+			remote_host = host0;
 
 		if (remote_host >= SMEM_HOST_COUNT) {
 			dev_err(smem->dev,
···
 			return -EINVAL;
 		}
 
-		header = smem->regions[0].virt_base + entry->offset;
+		header = smem->regions[0].virt_base + le32_to_cpu(entry->offset);
+		host0 = le16_to_cpu(header->host0);
+		host1 = le16_to_cpu(header->host1);
 
-		if (header->magic != SMEM_PART_MAGIC) {
+		if (memcmp(header->magic, SMEM_PART_MAGIC,
+			   sizeof(header->magic))) {
 			dev_err(smem->dev,
 				"Partition %d has invalid magic\n", i);
 			return -EINVAL;
 		}
 
-		if (header->host0 != local_host && header->host1 != local_host) {
+		if (host0 != local_host && host1 != local_host) {
 			dev_err(smem->dev,
 				"Partition %d hosts are invalid\n", i);
 			return -EINVAL;
 		}
 
-		if (header->host0 != remote_host && header->host1 != remote_host) {
+		if (host0 != remote_host && host1 != remote_host) {
 			dev_err(smem->dev,
 				"Partition %d hosts are invalid\n", i);
 			return -EINVAL;
···
 			return -EINVAL;
 		}
 
-		if (header->offset_free_uncached > header->size) {
+		if (le32_to_cpu(header->offset_free_uncached) > le32_to_cpu(header->size)) {
 			dev_err(smem->dev,
 				"Partition %d has invalid free pointer\n", i);
 			return -EINVAL;
···
 	return 0;
 }
 
-static int qcom_smem_count_mem_regions(struct platform_device *pdev)
+static int qcom_smem_map_memory(struct qcom_smem *smem, struct device *dev,
+				const char *name, int i)
 {
-	struct resource *res;
-	int num_regions = 0;
-	int i;
+	struct device_node *np;
+	struct resource r;
+	int ret;
 
-	for (i = 0; i < pdev->num_resources; i++) {
-		res = &pdev->resource[i];
-
-		if (resource_type(res) == IORESOURCE_MEM)
-			num_regions++;
+	np = of_parse_phandle(dev->of_node, name, 0);
+	if (!np) {
+		dev_err(dev, "No %s specified\n", name);
+		return -EINVAL;
 	}
 
-	return num_regions;
+	ret = of_address_to_resource(np, 0, &r);
+	of_node_put(np);
+	if (ret)
+		return ret;
+
+	smem->regions[i].aux_base = (u32)r.start;
+	smem->regions[i].size = resource_size(&r);
+	smem->regions[i].virt_base = devm_ioremap_nocache(dev, r.start,
+							  resource_size(&r));
+	if (!smem->regions[i].virt_base)
+		return -ENOMEM;
+
+	return 0;
 }
 
 static int qcom_smem_probe(struct platform_device *pdev)
 {
 	struct smem_header *header;
-	struct device_node *np;
 	struct qcom_smem *smem;
-	struct resource *res;
-	struct resource r;
 	size_t array_size;
-	int num_regions = 0;
+	int num_regions;
 	int hwlock_id;
 	u32 version;
 	int ret;
-	int i;
 
-	num_regions = qcom_smem_count_mem_regions(pdev) + 1;
+	num_regions = 1;
+	if (of_find_property(pdev->dev.of_node, "qcom,rpm-msg-ram", NULL))
+		num_regions++;
 
 	array_size = num_regions * sizeof(struct smem_region);
 	smem = devm_kzalloc(&pdev->dev, sizeof(*smem) + array_size, GFP_KERNEL);
···
 	smem->dev = &pdev->dev;
 	smem->num_regions = num_regions;
 
-	np = of_parse_phandle(pdev->dev.of_node, "memory-region", 0);
-	if (!np) {
-		dev_err(&pdev->dev, "No memory-region specified\n");
-		return -EINVAL;
-	}
-
-	ret = of_address_to_resource(np, 0, &r);
-	of_node_put(np);
+	ret = qcom_smem_map_memory(smem, &pdev->dev, "memory-region", 0);
 	if (ret)
 		return ret;
 
-	smem->regions[0].aux_base = (u32)r.start;
-	smem->regions[0].size = resource_size(&r);
-	smem->regions[0].virt_base = devm_ioremap_nocache(&pdev->dev,
-							  r.start,
-							  resource_size(&r));
-	if (!smem->regions[0].virt_base)
-		return -ENOMEM;
-
-	for (i = 1; i < num_regions; i++) {
-		res = platform_get_resource(pdev, IORESOURCE_MEM, i - 1);
-
-		smem->regions[i].aux_base = (u32)res->start;
-		smem->regions[i].size = resource_size(res);
-		smem->regions[i].virt_base = devm_ioremap_nocache(&pdev->dev,
-								  res->start,
-								  resource_size(res));
-		if (!smem->regions[i].virt_base)
-			return -ENOMEM;
-	}
+	if (num_regions > 1 && (ret = qcom_smem_map_memory(smem, &pdev->dev,
+					"qcom,rpm-msg-ram", 1)))
+		return ret;
 
 	header = smem->regions[0].virt_base;
-	if (header->initialized != 1 || header->reserved) {
+	if (le32_to_cpu(header->initialized) != 1 ||
+	    le32_to_cpu(header->reserved)) {
 		dev_err(&pdev->dev, "SMEM is not initialized by SBL\n");
 		return -EINVAL;
 	}
···
 
 static int qcom_smem_remove(struct platform_device *pdev)
 {
-	__smem = NULL;
 	hwspin_lock_free(__smem->hwlock);
+	__smem = NULL;
 
 	return 0;
 }
+18
drivers/soc/rockchip/Kconfig
···
+if ARCH_ROCKCHIP || COMPILE_TEST
+
+#
+# Rockchip Soc drivers
+#
+config ROCKCHIP_PM_DOMAINS
+	bool "Rockchip generic power domain"
+	depends on PM
+	select PM_GENERIC_DOMAINS
+	help
+	  Say y here to enable power domain support.
+	  In order to meet high performance and low power requirements, a power
+	  management unit is designed for saving power when the RK3288 is in
+	  low power mode. The RK3288 PMU is dedicated to managing the power of
+	  the whole chip.
+
+	  If unsure, say N.
+
+endif
+4
drivers/soc/rockchip/Makefile
···
+#
+# Rockchip Soc drivers
+#
+obj-$(CONFIG_ROCKCHIP_PM_DOMAINS) += pm_domains.o
+490
drivers/soc/rockchip/pm_domains.c
···
+/*
+ * Rockchip Generic power domain support.
+ *
+ * Copyright (c) 2015 ROCKCHIP, Co. Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/io.h>
+#include <linux/err.h>
+#include <linux/pm_clock.h>
+#include <linux/pm_domain.h>
+#include <linux/of_address.h>
+#include <linux/of_platform.h>
+#include <linux/clk.h>
+#include <linux/regmap.h>
+#include <linux/mfd/syscon.h>
+#include <dt-bindings/power/rk3288-power.h>
+
+struct rockchip_domain_info {
+	int pwr_mask;
+	int status_mask;
+	int req_mask;
+	int idle_mask;
+	int ack_mask;
+};
+
+struct rockchip_pmu_info {
+	u32 pwr_offset;
+	u32 status_offset;
+	u32 req_offset;
+	u32 idle_offset;
+	u32 ack_offset;
+
+	u32 core_pwrcnt_offset;
+	u32 gpu_pwrcnt_offset;
+
+	unsigned int core_power_transition_time;
+	unsigned int gpu_power_transition_time;
+
+	int num_domains;
+	const struct rockchip_domain_info *domain_info;
+};
+
+struct rockchip_pm_domain {
+	struct generic_pm_domain genpd;
+	const struct rockchip_domain_info *info;
+	struct rockchip_pmu *pmu;
+	int num_clks;
+	struct clk *clks[];
+};
+
+struct rockchip_pmu {
+	struct device *dev;
+	struct regmap *regmap;
+	const struct rockchip_pmu_info *info;
+	struct mutex mutex; /* mutex lock for pmu */
+	struct genpd_onecell_data genpd_data;
+	struct generic_pm_domain *domains[];
+};
+
+#define to_rockchip_pd(gpd) container_of(gpd, struct rockchip_pm_domain, genpd)
+
+#define DOMAIN(pwr, status, req, idle, ack)	\
+{						\
+	.pwr_mask = BIT(pwr),			\
+	.status_mask = BIT(status),		\
+	.req_mask = BIT(req),			\
+	.idle_mask = BIT(idle),			\
+	.ack_mask = BIT(ack),			\
+}
+
+#define DOMAIN_RK3288(pwr, status, req)		\
+	DOMAIN(pwr, status, req, req, (req) + 16)
+
+static bool rockchip_pmu_domain_is_idle(struct rockchip_pm_domain *pd)
+{
+	struct rockchip_pmu *pmu = pd->pmu;
+	const struct rockchip_domain_info *pd_info = pd->info;
+	unsigned int val;
+
+	regmap_read(pmu->regmap, pmu->info->idle_offset, &val);
+	return (val & pd_info->idle_mask) == pd_info->idle_mask;
+}
+
+static int rockchip_pmu_set_idle_request(struct rockchip_pm_domain *pd,
+					 bool idle)
+{
+	const struct rockchip_domain_info *pd_info = pd->info;
+	struct rockchip_pmu *pmu = pd->pmu;
+	unsigned int val;
+
+	regmap_update_bits(pmu->regmap, pmu->info->req_offset,
+			   pd_info->req_mask, idle ? -1U : 0);
+
+	dsb(sy);
+
+	do {
+		regmap_read(pmu->regmap, pmu->info->ack_offset, &val);
+	} while ((val & pd_info->ack_mask) != (idle ? pd_info->ack_mask : 0));
+
+	while (rockchip_pmu_domain_is_idle(pd) != idle)
+		cpu_relax();
+
+	return 0;
+}
+
+static bool rockchip_pmu_domain_is_on(struct rockchip_pm_domain *pd)
+{
+	struct rockchip_pmu *pmu = pd->pmu;
+	unsigned int val;
+
+	regmap_read(pmu->regmap, pmu->info->status_offset, &val);
+
+	/* 1'b0: power on, 1'b1: power off */
+	return !(val & pd->info->status_mask);
+}
+
+static void rockchip_do_pmu_set_power_domain(struct rockchip_pm_domain *pd,
+					     bool on)
+{
+	struct rockchip_pmu *pmu = pd->pmu;
+
+	regmap_update_bits(pmu->regmap, pmu->info->pwr_offset,
+			   pd->info->pwr_mask, on ? 0 : -1U);
+
+	dsb(sy);
+
+	while (rockchip_pmu_domain_is_on(pd) != on)
+		cpu_relax();
+}
+
+static int rockchip_pd_power(struct rockchip_pm_domain *pd, bool power_on)
+{
+	int i;
+
+	mutex_lock(&pd->pmu->mutex);
+
+	if (rockchip_pmu_domain_is_on(pd) != power_on) {
+		for (i = 0; i < pd->num_clks; i++)
+			clk_enable(pd->clks[i]);
+
+		if (!power_on) {
+			/* FIXME: add code to save AXI_QOS */
+
+			/* if powering down, idle request to NIU first */
+			rockchip_pmu_set_idle_request(pd, true);
+		}
+
+		rockchip_do_pmu_set_power_domain(pd, power_on);
+
+		if (power_on) {
+			/* if powering up, leave idle mode */
+			rockchip_pmu_set_idle_request(pd, false);
+
+			/* FIXME: add code to restore AXI_QOS */
+		}
+
+		for (i = pd->num_clks - 1; i >= 0; i--)
+			clk_disable(pd->clks[i]);
+	}
+
+	mutex_unlock(&pd->pmu->mutex);
+	return 0;
+}
+
+static int rockchip_pd_power_on(struct generic_pm_domain *domain)
+{
+	struct rockchip_pm_domain *pd = to_rockchip_pd(domain);
+
+	return rockchip_pd_power(pd, true);
+}
+
+static int rockchip_pd_power_off(struct generic_pm_domain *domain)
+{
+	struct rockchip_pm_domain *pd = to_rockchip_pd(domain);
+
+	return rockchip_pd_power(pd, false);
+}
+
+static int rockchip_pd_attach_dev(struct generic_pm_domain *genpd,
+				  struct device *dev)
+{
+	struct clk *clk;
+	int i;
+	int error;
+
+	dev_dbg(dev, "attaching to power domain '%s'\n", genpd->name);
+
+	error = pm_clk_create(dev);
+	if (error) {
+		dev_err(dev, "pm_clk_create failed %d\n", error);
+		return error;
+	}
+
+	i = 0;
+	while ((clk = of_clk_get(dev->of_node, i++)) && !IS_ERR(clk)) {
+		dev_dbg(dev, "adding clock '%pC' to list of PM clocks\n", clk);
+		error = pm_clk_add_clk(dev, clk);
+		if (error) {
+			dev_err(dev, "pm_clk_add_clk failed %d\n", error);
+			clk_put(clk);
+			pm_clk_destroy(dev);
+			return error;
+		}
+	}
+
+	return 0;
+}
+
+static void rockchip_pd_detach_dev(struct generic_pm_domain *genpd,
+				   struct device *dev)
+{
+	dev_dbg(dev, "detaching from power domain '%s'\n", genpd->name);
+
+	pm_clk_destroy(dev);
+}
+
+static int rockchip_pm_add_one_domain(struct rockchip_pmu *pmu,
+				      struct device_node *node)
+{
+	const struct rockchip_domain_info *pd_info;
+	struct rockchip_pm_domain *pd;
+	struct clk *clk;
+	int clk_cnt;
+	int i;
+	u32 id;
+	int error;
+
+	error = of_property_read_u32(node, "reg", &id);
+	if (error) {
+		dev_err(pmu->dev,
+			"%s: failed to retrieve domain id (reg): %d\n",
+			node->name, error);
+		return -EINVAL;
+	}
+
+	if (id >= pmu->info->num_domains) {
+		dev_err(pmu->dev, "%s: invalid domain id %d\n",
+			node->name, id);
+		return -EINVAL;
+	}
+
+	pd_info = &pmu->info->domain_info[id];
+	if (!pd_info) {
+		dev_err(pmu->dev, "%s: undefined domain id %d\n",
+			node->name, id);
+		return -EINVAL;
+	}
+
+	clk_cnt = of_count_phandle_with_args(node, "clocks", "#clock-cells");
+	pd = devm_kzalloc(pmu->dev,
+			  sizeof(*pd) + clk_cnt * sizeof(pd->clks[0]),
+			  GFP_KERNEL);
+	if (!pd)
+		return -ENOMEM;
+
+	pd->info = pd_info;
+	pd->pmu = pmu;
+
+	for (i = 0; i < clk_cnt; i++) {
+		clk = of_clk_get(node, i);
+		if (IS_ERR(clk)) {
+			error = PTR_ERR(clk);
+			dev_err(pmu->dev,
+				"%s: failed to get clk at index %d: %d\n",
+				node->name, i, error);
+			goto err_out;
+		}
+
+		error = clk_prepare(clk);
+		if (error) {
+			dev_err(pmu->dev,
+				"%s: failed to prepare clk %pC (index %d): %d\n",
+				node->name, clk, i, error);
+			clk_put(clk);
+			goto err_out;
+		}
+
+ pd->clks[pd->num_clks++] = clk; 283 + 284 + dev_dbg(pmu->dev, "added clock '%pC' to domain '%s'\n", 285 + clk, node->name); 286 + } 287 + 288 + error = rockchip_pd_power(pd, true); 289 + if (error) { 290 + dev_err(pmu->dev, 291 + "failed to power on domain '%s': %d\n", 292 + node->name, error); 293 + goto err_out; 294 + } 295 + 296 + pd->genpd.name = node->name; 297 + pd->genpd.power_off = rockchip_pd_power_off; 298 + pd->genpd.power_on = rockchip_pd_power_on; 299 + pd->genpd.attach_dev = rockchip_pd_attach_dev; 300 + pd->genpd.detach_dev = rockchip_pd_detach_dev; 301 + pd->genpd.flags = GENPD_FLAG_PM_CLK; 302 + pm_genpd_init(&pd->genpd, NULL, false); 303 + 304 + pmu->genpd_data.domains[id] = &pd->genpd; 305 + return 0; 306 + 307 + err_out: 308 + while (--i >= 0) { 309 + clk_unprepare(pd->clks[i]); 310 + clk_put(pd->clks[i]); 311 + } 312 + return error; 313 + } 314 + 315 + static void rockchip_pm_remove_one_domain(struct rockchip_pm_domain *pd) 316 + { 317 + int i; 318 + 319 + for (i = 0; i < pd->num_clks; i++) { 320 + clk_unprepare(pd->clks[i]); 321 + clk_put(pd->clks[i]); 322 + } 323 + 324 + /* protect the zeroing of pd->num_clks */ 325 + mutex_lock(&pd->pmu->mutex); 326 + pd->num_clks = 0; 327 + mutex_unlock(&pd->pmu->mutex); 328 + 329 + /* devm will free our memory */ 330 + } 331 + 332 + static void rockchip_pm_domain_cleanup(struct rockchip_pmu *pmu) 333 + { 334 + struct generic_pm_domain *genpd; 335 + struct rockchip_pm_domain *pd; 336 + int i; 337 + 338 + for (i = 0; i < pmu->genpd_data.num_domains; i++) { 339 + genpd = pmu->genpd_data.domains[i]; 340 + if (genpd) { 341 + pd = to_rockchip_pd(genpd); 342 + rockchip_pm_remove_one_domain(pd); 343 + } 344 + } 345 + 346 + /* devm will free our memory */ 347 + } 348 + 349 + static void rockchip_configure_pd_cnt(struct rockchip_pmu *pmu, 350 + u32 domain_reg_offset, 351 + unsigned int count) 352 + { 353 + /* First configure domain power down transition count ...
*/ 354 + regmap_write(pmu->regmap, domain_reg_offset, count); 355 + /* ... and then power up count. */ 356 + regmap_write(pmu->regmap, domain_reg_offset + 4, count); 357 + } 358 + 359 + static int rockchip_pm_domain_probe(struct platform_device *pdev) 360 + { 361 + struct device *dev = &pdev->dev; 362 + struct device_node *np = dev->of_node; 363 + struct device_node *node; 364 + struct device *parent; 365 + struct rockchip_pmu *pmu; 366 + const struct of_device_id *match; 367 + const struct rockchip_pmu_info *pmu_info; 368 + int error; 369 + 370 + if (!np) { 371 + dev_err(dev, "device tree node not found\n"); 372 + return -ENODEV; 373 + } 374 + 375 + match = of_match_device(dev->driver->of_match_table, dev); 376 + if (!match || !match->data) { 377 + dev_err(dev, "missing pmu data\n"); 378 + return -EINVAL; 379 + } 380 + 381 + pmu_info = match->data; 382 + 383 + pmu = devm_kzalloc(dev, 384 + sizeof(*pmu) + 385 + pmu_info->num_domains * sizeof(pmu->domains[0]), 386 + GFP_KERNEL); 387 + if (!pmu) 388 + return -ENOMEM; 389 + 390 + pmu->dev = &pdev->dev; 391 + mutex_init(&pmu->mutex); 392 + 393 + pmu->info = pmu_info; 394 + 395 + pmu->genpd_data.domains = pmu->domains; 396 + pmu->genpd_data.num_domains = pmu_info->num_domains; 397 + 398 + parent = dev->parent; 399 + if (!parent) { 400 + dev_err(dev, "no parent for syscon devices\n"); 401 + return -ENODEV; 402 + } 403 + 404 + pmu->regmap = syscon_node_to_regmap(parent->of_node); 405 + 406 + /* 407 + * Configure power up and down transition delays for CORE 408 + * and GPU domains. 
409 + */ 410 + rockchip_configure_pd_cnt(pmu, pmu_info->core_pwrcnt_offset, 411 + pmu_info->core_power_transition_time); 412 + rockchip_configure_pd_cnt(pmu, pmu_info->gpu_pwrcnt_offset, 413 + pmu_info->gpu_power_transition_time); 414 + 415 + error = -ENODEV; 416 + 417 + for_each_available_child_of_node(np, node) { 418 + error = rockchip_pm_add_one_domain(pmu, node); 419 + if (error) { 420 + dev_err(dev, "failed to handle node %s: %d\n", 421 + node->name, error); 422 + goto err_out; 423 + } 424 + } 425 + 426 + if (error) { 427 + dev_dbg(dev, "no power domains defined\n"); 428 + goto err_out; 429 + } 430 + 431 + of_genpd_add_provider_onecell(np, &pmu->genpd_data); 432 + 433 + return 0; 434 + 435 + err_out: 436 + rockchip_pm_domain_cleanup(pmu); 437 + return error; 438 + } 439 + 440 + static const struct rockchip_domain_info rk3288_pm_domains[] = { 441 + [RK3288_PD_VIO] = DOMAIN_RK3288(7, 7, 4), 442 + [RK3288_PD_HEVC] = DOMAIN_RK3288(14, 10, 9), 443 + [RK3288_PD_VIDEO] = DOMAIN_RK3288(8, 8, 3), 444 + [RK3288_PD_GPU] = DOMAIN_RK3288(9, 9, 2), 445 + }; 446 + 447 + static const struct rockchip_pmu_info rk3288_pmu = { 448 + .pwr_offset = 0x08, 449 + .status_offset = 0x0c, 450 + .req_offset = 0x10, 451 + .idle_offset = 0x14, 452 + .ack_offset = 0x14, 453 + 454 + .core_pwrcnt_offset = 0x34, 455 + .gpu_pwrcnt_offset = 0x3c, 456 + 457 + .core_power_transition_time = 24, /* 1us */ 458 + .gpu_power_transition_time = 24, /* 1us */ 459 + 460 + .num_domains = ARRAY_SIZE(rk3288_pm_domains), 461 + .domain_info = rk3288_pm_domains, 462 + }; 463 + 464 + static const struct of_device_id rockchip_pm_domain_dt_match[] = { 465 + { 466 + .compatible = "rockchip,rk3288-power-controller", 467 + .data = (void *)&rk3288_pmu, 468 + }, 469 + { /* sentinel */ }, 470 + }; 471 + 472 + static struct platform_driver rockchip_pm_domain_driver = { 473 + .probe = rockchip_pm_domain_probe, 474 + .driver = { 475 + .name = "rockchip-pm-domain", 476 + .of_match_table = rockchip_pm_domain_dt_match, 477 + /* 
478 + * We can't forcibly eject devices from a power domain, 479 + * so we can't really remove power domains once they 480 + * were added. 481 + */ 482 + .suppress_bind_attrs = true, 483 + }, 484 + }; 485 + 486 + static int __init rockchip_pm_domain_drv_register(void) 487 + { 488 + return platform_driver_register(&rockchip_pm_domain_driver); 489 + } 490 + postcore_initcall(rockchip_pm_domain_drv_register);
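The DOMAIN_RK3288() wrapper in the driver above encodes a convention of the rk3288 PMU register layout: the idle-request and idle-status fields share a bit index, and the acknowledge bit for a request sits 16 bits above it (idle_offset and ack_offset are both 0x14). A minimal userspace sketch of that bit math, using a local stand-in for the kernel's BIT() macro:

```c
#include <stdint.h>

/* Local stand-in for the kernel's BIT() macro. */
#define BIT(n) (1u << (n))

struct rk3288_masks {
	uint32_t pwr_mask;
	uint32_t status_mask;
	uint32_t req_mask;
	uint32_t idle_mask;
	uint32_t ack_mask;
};

/* Mirrors DOMAIN_RK3288(pwr, status, req): the idle mask reuses the
 * req bit, and the ack bit is the req bit shifted up by 16. */
static struct rk3288_masks rk3288_domain(int pwr, int status, int req)
{
	struct rk3288_masks m = {
		.pwr_mask = BIT(pwr),
		.status_mask = BIT(status),
		.req_mask = BIT(req),
		.idle_mask = BIT(req),
		.ack_mask = BIT(req + 16),
	};
	return m;
}
```

For RK3288_PD_GPU, declared as DOMAIN_RK3288(9, 9, 2) in the table above, this yields a power mask of bit 9 and an ack mask of bit 18.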
+1
include/dt-bindings/clock/berlin2q.h
··· 29 29 #define CLKID_SMEMC 24 30 30 #define CLKID_PCIE 25 31 31 #define CLKID_TWD 26 32 + #define CLKID_CPU 27
+31
include/dt-bindings/power/rk3288-power.h
··· 1 + #ifndef __DT_BINDINGS_POWER_RK3288_POWER_H__ 2 + #define __DT_BINDINGS_POWER_RK3288_POWER_H__ 3 + 4 + /** 5 + * RK3288 Power Domain and Voltage Domain Summary. 6 + */ 7 + 8 + /* VD_CORE */ 9 + #define RK3288_PD_A17_0 0 10 + #define RK3288_PD_A17_1 1 11 + #define RK3288_PD_A17_2 2 12 + #define RK3288_PD_A17_3 3 13 + #define RK3288_PD_SCU 4 14 + #define RK3288_PD_DEBUG 5 15 + #define RK3288_PD_MEM 6 16 + 17 + /* VD_LOGIC */ 18 + #define RK3288_PD_BUS 7 19 + #define RK3288_PD_PERI 8 20 + #define RK3288_PD_VIO 9 21 + #define RK3288_PD_ALIVE 10 22 + #define RK3288_PD_HEVC 11 23 + #define RK3288_PD_VIDEO 12 24 + 25 + /* VD_GPU */ 26 + #define RK3288_PD_GPU 13 27 + 28 + /* VD_PMU */ 29 + #define RK3288_PD_PMU 14 30 + 31 + #endif
+1
include/linux/atmel_tc.h
··· 67 67 const struct atmel_tcb_config *tcb_config; 68 68 int irq[3]; 69 69 struct clk *clk[3]; 70 + struct clk *slow_clk; 70 71 struct list_head node; 71 72 bool allocated; 72 73 };
+2
include/linux/psci.h
··· 21 21 #define PSCI_POWER_STATE_TYPE_POWER_DOWN 1 22 22 23 23 bool psci_tos_resident_on(int cpu); 24 + bool psci_power_state_loses_context(u32 state); 25 + bool psci_power_state_is_valid(u32 state); 24 26 25 27 struct psci_operations { 26 28 int (*cpu_suspend)(u32 state, unsigned long entry_point);
+2
include/linux/qcom_scm.h
··· 23 23 u32 val; 24 24 }; 25 25 26 + extern bool qcom_scm_is_available(void); 27 + 26 28 extern bool qcom_scm_hdcp_available(void); 27 29 extern int qcom_scm_hdcp_req(struct qcom_scm_hdcp_req *req, u32 req_cnt, 28 30 u32 *resp);
+78
include/linux/scpi_protocol.h
··· 1 + /* 2 + * SCPI Message Protocol driver header 3 + * 4 + * Copyright (C) 2014 ARM Ltd. 5 + * 6 + * This program is free software; you can redistribute it and/or modify it 7 + * under the terms and conditions of the GNU General Public License, 8 + * version 2, as published by the Free Software Foundation. 9 + * 10 + * This program is distributed in the hope it will be useful, but WITHOUT 11 + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 12 + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 13 + * more details. 14 + * 15 + * You should have received a copy of the GNU General Public License along with 16 + * this program. If not, see <http://www.gnu.org/licenses/>. 17 + */ 18 + #include <linux/types.h> 19 + 20 + struct scpi_opp { 21 + u32 freq; 22 + u32 m_volt; 23 + } __packed; 24 + 25 + struct scpi_dvfs_info { 26 + unsigned int count; 27 + unsigned int latency; /* in nanoseconds */ 28 + struct scpi_opp *opps; 29 + }; 30 + 31 + enum scpi_sensor_class { 32 + TEMPERATURE, 33 + VOLTAGE, 34 + CURRENT, 35 + POWER, 36 + }; 37 + 38 + struct scpi_sensor_info { 39 + u16 sensor_id; 40 + u8 class; 41 + u8 trigger_type; 42 + char name[20]; 43 + } __packed; 44 + 45 + /** 46 + * struct scpi_ops - represents the various operations provided 47 + * by SCP through SCPI message protocol 48 + * @get_version: returns the major and minor revision of the SCPI 49 + * message protocol 50 + * @clk_get_range: gets the clock range limits (min - max, in Hz) 51 + * @clk_get_val: gets the clock value (in Hz) 52 + * @clk_set_val: sets the clock value, setting to 0 will disable the 53 + * clock (if supported) 54 + * @dvfs_get_idx: gets the Operating Point of the given power domain. 55 + * OPP is an index to the list returned by @dvfs_get_info 56 + * @dvfs_set_idx: sets the Operating Point of the given power domain.
57 + * OPP is an index to the list returned by @dvfs_get_info 58 + * @dvfs_get_info: returns the DVFS capabilities of the given power 59 + * domain. It includes the OPP list and the latency information 60 + */ 61 + struct scpi_ops { 62 + u32 (*get_version)(void); 63 + int (*clk_get_range)(u16, unsigned long *, unsigned long *); 64 + unsigned long (*clk_get_val)(u16); 65 + int (*clk_set_val)(u16, unsigned long); 66 + int (*dvfs_get_idx)(u8); 67 + int (*dvfs_set_idx)(u8, u8); 68 + struct scpi_dvfs_info *(*dvfs_get_info)(u8); 69 + int (*sensor_get_capability)(u16 *sensors); 70 + int (*sensor_get_info)(u16 sensor_id, struct scpi_sensor_info *); 71 + int (*sensor_get_value)(u16, u32 *); 72 + }; 73 + 74 + #if IS_ENABLED(CONFIG_ARM_SCPI_PROTOCOL) 75 + struct scpi_ops *get_scpi_ops(void); 76 + #else 77 + static inline struct scpi_ops *get_scpi_ops(void) { return NULL; } 78 + #endif
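struct scpi_dvfs_info above pairs an OPP count with an array of frequency/voltage pairs, and dvfs_set_idx takes an index into that list. A userspace sketch of one plausible consumer-side selection policy, picking the fastest OPP within a voltage budget; the helper name and the OPP values in the usage below are invented for illustration, not part of the SCPI interface:

```c
#include <stdint.h>
#include <stddef.h>

/* Userspace mirrors of the header's types (u32 -> uint32_t). */
struct scpi_opp {
	uint32_t freq;    /* Hz */
	uint32_t m_volt;  /* mV */
};

struct scpi_dvfs_info {
	unsigned int count;
	unsigned int latency; /* in nanoseconds */
	struct scpi_opp *opps;
};

/* Return the index of the fastest OPP at or below max_mvolt,
 * or -1 if none qualifies. */
static int best_opp_index(const struct scpi_dvfs_info *info, uint32_t max_mvolt)
{
	int best = -1;

	for (unsigned int i = 0; i < info->count; i++) {
		const struct scpi_opp *opp = &info->opps[i];

		if (opp->m_volt > max_mvolt)
			continue;
		if (best < 0 || opp->freq > info->opps[best].freq)
			best = (int)i;
	}
	return best;
}
```

In the kernel, the list itself would come from dvfs_get_info() and the chosen index would be handed to dvfs_set_idx().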
+11
include/linux/soc/qcom/smd.h
··· 9 9 struct qcom_smd_lookup; 10 10 11 11 /** 12 + * struct qcom_smd_id - struct used for matching a smd device 13 + * @name: name of the channel 14 + */ 15 + struct qcom_smd_id { 16 + char name[20]; 17 + }; 18 + 19 + /** 12 20 * struct qcom_smd_device - smd device struct 13 21 * @dev: the device struct 14 22 * @channel: handle to the smd channel for this device ··· 29 21 /** 30 22 * struct qcom_smd_driver - smd driver struct 31 23 * @driver: underlying device driver 24 + * @smd_match_table: static channel match table 32 25 * @probe: invoked when the smd channel is found 33 26 * @remove: invoked when the smd channel is closed 34 27 * @callback: invoked when an inbound message is received on the channel, ··· 38 29 */ 39 30 struct qcom_smd_driver { 40 31 struct device_driver driver; 32 + const struct qcom_smd_id *smd_match_table; 33 + 41 34 int (*probe)(struct qcom_smd_device *dev); 42 35 void (*remove)(struct qcom_smd_device *dev); 43 36 int (*callback)(struct qcom_smd_device *, const void *, size_t);
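The new smd_match_table follows the usual bus match-table shape: a sentinel-terminated array of qcom_smd_id entries compared against the channel name. A plain-C sketch of such a matching loop; the helper name is ours, and the exact comparison the SMD core performs is an assumption based on the fixed-size name field:

```c
#include <string.h>

struct qcom_smd_id {
	char name[20];
};

/* Walk a table terminated by an entry with an empty name;
 * return nonzero if the channel name matches any entry. */
static int smd_channel_matches(const struct qcom_smd_id *table,
			       const char *channel)
{
	if (!table)
		return 0;
	for (; table->name[0]; table++)
		if (strncmp(channel, table->name, sizeof(table->name)) == 0)
			return 1;
	return 0;
}
```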
+1 -1
include/linux/soc/qcom/smem.h
··· 4 4 #define QCOM_SMEM_HOST_ANY -1 5 5 6 6 int qcom_smem_alloc(unsigned host, unsigned item, size_t size); 7 - int qcom_smem_get(unsigned host, unsigned item, void **ptr, size_t *size); 7 + void *qcom_smem_get(unsigned host, unsigned item, size_t *size); 8 8 9 9 int qcom_smem_get_free_space(unsigned host); 10 10
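The qcom_smem_get() change above replaces the out-parameter with a directly returned item pointer, which in kernel style means failures come back as ERR_PTR() values instead of a separate int. A userspace sketch of that calling convention, with minimal local stand-ins for the kernel's ERR_PTR/IS_ERR/PTR_ERR helpers and a toy item store in place of SMEM:

```c
#include <stddef.h>
#include <stdint.h>
#include <errno.h>

/* Minimal stand-ins for the kernel's ERR_PTR()/IS_ERR()/PTR_ERR(). */
#define MAX_ERRNO 4095
static void *err_ptr(long err) { return (void *)err; }
static int is_err(const void *p)
{
	return (uintptr_t)p >= (uintptr_t)-MAX_ERRNO;
}
static long ptr_err(const void *p) { return (long)(intptr_t)p; }

/* Toy backing store playing the role of an SMEM item. */
static int item_42 = 0x1234;

/* Sketch of the new convention: return the item pointer (or an
 * encoded error) and report the size through an optional pointer. */
static void *toy_smem_get(unsigned item, size_t *size)
{
	if (item != 42)
		return err_ptr(-ENOENT);
	if (size)
		*size = sizeof(item_42);
	return &item_42;
}
```

Callers check IS_ERR() on the result instead of a returned status code, which removes the double-pointer dance of the old signature.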
+105
include/linux/sunxi-rsb.h
··· 1 + /* 2 + * Allwinner Reduced Serial Bus Driver 3 + * 4 + * Copyright (c) 2015 Chen-Yu Tsai 5 + * 6 + * Author: Chen-Yu Tsai <wens@csie.org> 7 + * 8 + * This file is licensed under the terms of the GNU General Public 9 + * License version 2. This program is licensed "as is" without any 10 + * warranty of any kind, whether express or implied. 11 + */ 12 + #ifndef _SUNXI_RSB_H 13 + #define _SUNXI_RSB_H 14 + 15 + #include <linux/device.h> 16 + #include <linux/regmap.h> 17 + #include <linux/types.h> 18 + 19 + struct sunxi_rsb; 20 + 21 + /** 22 + * struct sunxi_rsb_device - Basic representation of an RSB device 23 + * @dev: Driver model representation of the device. 24 + * @rsb: RSB controller managing the bus hosting this device. 25 + * @rtaddr: This device's runtime address 26 + * @hwaddr: This device's hardware address 27 + */ 28 + struct sunxi_rsb_device { 29 + struct device dev; 30 + struct sunxi_rsb *rsb; 31 + int irq; 32 + u8 rtaddr; 33 + u16 hwaddr; 34 + }; 35 + 36 + static inline struct sunxi_rsb_device *to_sunxi_rsb_device(struct device *d) 37 + { 38 + return container_of(d, struct sunxi_rsb_device, dev); 39 + } 40 + 41 + static inline void *sunxi_rsb_device_get_drvdata(const struct sunxi_rsb_device *rdev) 42 + { 43 + return dev_get_drvdata(&rdev->dev); 44 + } 45 + 46 + static inline void sunxi_rsb_device_set_drvdata(struct sunxi_rsb_device *rdev, 47 + void *data) 48 + { 49 + dev_set_drvdata(&rdev->dev, data); 50 + } 51 + 52 + /** 53 + * struct sunxi_rsb_driver - RSB slave device driver 54 + * @driver: RSB device drivers should initialize the name and owner fields of 55 + * this structure. 56 + * @probe: binds this driver to an RSB device. 57 + * @remove: unbinds this driver from the RSB device. 
58 + */ 59 + struct sunxi_rsb_driver { 60 + struct device_driver driver; 61 + int (*probe)(struct sunxi_rsb_device *rdev); 62 + int (*remove)(struct sunxi_rsb_device *rdev); 63 + }; 64 + 65 + static inline struct sunxi_rsb_driver *to_sunxi_rsb_driver(struct device_driver *d) 66 + { 67 + return container_of(d, struct sunxi_rsb_driver, driver); 68 + } 69 + 70 + int sunxi_rsb_driver_register(struct sunxi_rsb_driver *rdrv); 71 + 72 + /** 73 + * sunxi_rsb_driver_unregister() - unregister an RSB client driver 74 + * @rdrv: the driver to unregister 75 + */ 76 + static inline void sunxi_rsb_driver_unregister(struct sunxi_rsb_driver *rdrv) 77 + { 78 + if (rdrv) 79 + driver_unregister(&rdrv->driver); 80 + } 81 + 82 + #define module_sunxi_rsb_driver(__sunxi_rsb_driver) \ 83 + module_driver(__sunxi_rsb_driver, sunxi_rsb_driver_register, \ 84 + sunxi_rsb_driver_unregister) 85 + 86 + struct regmap *__devm_regmap_init_sunxi_rsb(struct sunxi_rsb_device *rdev, 87 + const struct regmap_config *config, 88 + struct lock_class_key *lock_key, 89 + const char *lock_name); 90 + 91 + /** 92 + * devm_regmap_init_sunxi_rsb(): Initialise managed register map 93 + * 94 + * @rdev: Device that will be interacted with 95 + * @config: Configuration for register map 96 + * 97 + * The return value will be an ERR_PTR() on error or a valid pointer 98 + * to a struct regmap. The regmap will be automatically freed by the 99 + * device management code. 100 + */ 101 + #define devm_regmap_init_sunxi_rsb(rdev, config) \ 102 + __regmap_lockdep_wrapper(__devm_regmap_init_sunxi_rsb, #config, \ 103 + rdev, config) 104 + 105 + #endif /* _SUNXI_RSB_H */
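to_sunxi_rsb_device() in this header is the standard container_of() pattern: recover the wrapping sunxi_rsb_device from a pointer to its embedded struct device. A standalone sketch with a local container_of built on offsetof and trimmed stand-in types (the real struct device is far larger, of course):

```c
#include <stddef.h>

/* Local equivalent of the kernel's container_of(). */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct device { int id; };	/* stand-in for the driver-model struct */

struct sunxi_rsb_device {
	struct device dev;
	int irq;
	unsigned char rtaddr;	/* runtime address */
	unsigned short hwaddr;	/* hardware address */
};

static struct sunxi_rsb_device *to_sunxi_rsb_device(struct device *d)
{
	return container_of(d, struct sunxi_rsb_device, dev);
}
```

This only works because dev is embedded by value, which is why bus device structs wrap struct device rather than pointing at it.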
+120
include/soc/bcm2835/raspberrypi-firmware.h
··· 1 + /* 2 + * Copyright © 2015 Broadcom 3 + * 4 + * This program is free software; you can redistribute it and/or modify 5 + * it under the terms of the GNU General Public License version 2 as 6 + * published by the Free Software Foundation. 7 + */ 8 + 9 + #ifndef __SOC_RASPBERRY_FIRMWARE_H__ 10 + #define __SOC_RASPBERRY_FIRMWARE_H__ 11 + 12 + #include <linux/types.h> 13 + #include <linux/of_device.h> 14 + 15 + struct rpi_firmware; 16 + 17 + enum rpi_firmware_property_status { 18 + RPI_FIRMWARE_STATUS_REQUEST = 0, 19 + RPI_FIRMWARE_STATUS_SUCCESS = 0x80000000, 20 + RPI_FIRMWARE_STATUS_ERROR = 0x80000001, 21 + }; 22 + 23 + /** 24 + * struct rpi_firmware_property_tag_header - Firmware property tag header 25 + * @tag: One of enum rpi_firmware_property_tag. 26 + * @buf_size: The number of bytes in the value buffer following this 27 + * struct. 28 + * @req_resp_size: On submit, the length of the request (though it doesn't 29 + * appear to be currently used by the firmware). On return, 30 + * the length of the response (always 4 byte aligned), with 31 + * the low bit set. 
32 + */ 33 + struct rpi_firmware_property_tag_header { 34 + u32 tag; 35 + u32 buf_size; 36 + u32 req_resp_size; 37 + }; 38 + 39 + enum rpi_firmware_property_tag { 40 + RPI_FIRMWARE_PROPERTY_END = 0, 41 + RPI_FIRMWARE_GET_FIRMWARE_REVISION = 0x00000001, 42 + 43 + RPI_FIRMWARE_SET_CURSOR_INFO = 0x00008010, 44 + RPI_FIRMWARE_SET_CURSOR_STATE = 0x00008011, 45 + 46 + RPI_FIRMWARE_GET_BOARD_MODEL = 0x00010001, 47 + RPI_FIRMWARE_GET_BOARD_REVISION = 0x00010002, 48 + RPI_FIRMWARE_GET_BOARD_MAC_ADDRESS = 0x00010003, 49 + RPI_FIRMWARE_GET_BOARD_SERIAL = 0x00010004, 50 + RPI_FIRMWARE_GET_ARM_MEMORY = 0x00010005, 51 + RPI_FIRMWARE_GET_VC_MEMORY = 0x00010006, 52 + RPI_FIRMWARE_GET_CLOCKS = 0x00010007, 53 + RPI_FIRMWARE_GET_POWER_STATE = 0x00020001, 54 + RPI_FIRMWARE_GET_TIMING = 0x00020002, 55 + RPI_FIRMWARE_SET_POWER_STATE = 0x00028001, 56 + RPI_FIRMWARE_GET_CLOCK_STATE = 0x00030001, 57 + RPI_FIRMWARE_GET_CLOCK_RATE = 0x00030002, 58 + RPI_FIRMWARE_GET_VOLTAGE = 0x00030003, 59 + RPI_FIRMWARE_GET_MAX_CLOCK_RATE = 0x00030004, 60 + RPI_FIRMWARE_GET_MAX_VOLTAGE = 0x00030005, 61 + RPI_FIRMWARE_GET_TEMPERATURE = 0x00030006, 62 + RPI_FIRMWARE_GET_MIN_CLOCK_RATE = 0x00030007, 63 + RPI_FIRMWARE_GET_MIN_VOLTAGE = 0x00030008, 64 + RPI_FIRMWARE_GET_TURBO = 0x00030009, 65 + RPI_FIRMWARE_GET_MAX_TEMPERATURE = 0x0003000a, 66 + RPI_FIRMWARE_ALLOCATE_MEMORY = 0x0003000c, 67 + RPI_FIRMWARE_LOCK_MEMORY = 0x0003000d, 68 + RPI_FIRMWARE_UNLOCK_MEMORY = 0x0003000e, 69 + RPI_FIRMWARE_RELEASE_MEMORY = 0x0003000f, 70 + RPI_FIRMWARE_EXECUTE_CODE = 0x00030010, 71 + RPI_FIRMWARE_EXECUTE_QPU = 0x00030011, 72 + RPI_FIRMWARE_SET_ENABLE_QPU = 0x00030012, 73 + RPI_FIRMWARE_GET_DISPMANX_RESOURCE_MEM_HANDLE = 0x00030014, 74 + RPI_FIRMWARE_GET_EDID_BLOCK = 0x00030020, 75 + RPI_FIRMWARE_SET_CLOCK_STATE = 0x00038001, 76 + RPI_FIRMWARE_SET_CLOCK_RATE = 0x00038002, 77 + RPI_FIRMWARE_SET_VOLTAGE = 0x00038003, 78 + RPI_FIRMWARE_SET_TURBO = 0x00038009, 79 + 80 + /* Dispmanx TAGS */ 81 + RPI_FIRMWARE_FRAMEBUFFER_ALLOCATE 
= 0x00040001, 82 + RPI_FIRMWARE_FRAMEBUFFER_BLANK = 0x00040002, 83 + RPI_FIRMWARE_FRAMEBUFFER_GET_PHYSICAL_WIDTH_HEIGHT = 0x00040003, 84 + RPI_FIRMWARE_FRAMEBUFFER_GET_VIRTUAL_WIDTH_HEIGHT = 0x00040004, 85 + RPI_FIRMWARE_FRAMEBUFFER_GET_DEPTH = 0x00040005, 86 + RPI_FIRMWARE_FRAMEBUFFER_GET_PIXEL_ORDER = 0x00040006, 87 + RPI_FIRMWARE_FRAMEBUFFER_GET_ALPHA_MODE = 0x00040007, 88 + RPI_FIRMWARE_FRAMEBUFFER_GET_PITCH = 0x00040008, 89 + RPI_FIRMWARE_FRAMEBUFFER_GET_VIRTUAL_OFFSET = 0x00040009, 90 + RPI_FIRMWARE_FRAMEBUFFER_GET_OVERSCAN = 0x0004000a, 91 + RPI_FIRMWARE_FRAMEBUFFER_GET_PALETTE = 0x0004000b, 92 + RPI_FIRMWARE_FRAMEBUFFER_RELEASE = 0x00048001, 93 + RPI_FIRMWARE_FRAMEBUFFER_TEST_PHYSICAL_WIDTH_HEIGHT = 0x00044003, 94 + RPI_FIRMWARE_FRAMEBUFFER_TEST_VIRTUAL_WIDTH_HEIGHT = 0x00044004, 95 + RPI_FIRMWARE_FRAMEBUFFER_TEST_DEPTH = 0x00044005, 96 + RPI_FIRMWARE_FRAMEBUFFER_TEST_PIXEL_ORDER = 0x00044006, 97 + RPI_FIRMWARE_FRAMEBUFFER_TEST_ALPHA_MODE = 0x00044007, 98 + RPI_FIRMWARE_FRAMEBUFFER_TEST_VIRTUAL_OFFSET = 0x00044009, 99 + RPI_FIRMWARE_FRAMEBUFFER_TEST_OVERSCAN = 0x0004400a, 100 + RPI_FIRMWARE_FRAMEBUFFER_TEST_PALETTE = 0x0004400b, 101 + RPI_FIRMWARE_FRAMEBUFFER_SET_PHYSICAL_WIDTH_HEIGHT = 0x00048003, 102 + RPI_FIRMWARE_FRAMEBUFFER_SET_VIRTUAL_WIDTH_HEIGHT = 0x00048004, 103 + RPI_FIRMWARE_FRAMEBUFFER_SET_DEPTH = 0x00048005, 104 + RPI_FIRMWARE_FRAMEBUFFER_SET_PIXEL_ORDER = 0x00048006, 105 + RPI_FIRMWARE_FRAMEBUFFER_SET_ALPHA_MODE = 0x00048007, 106 + RPI_FIRMWARE_FRAMEBUFFER_SET_VIRTUAL_OFFSET = 0x00048009, 107 + RPI_FIRMWARE_FRAMEBUFFER_SET_OVERSCAN = 0x0004800a, 108 + RPI_FIRMWARE_FRAMEBUFFER_SET_PALETTE = 0x0004800b, 109 + 110 + RPI_FIRMWARE_GET_COMMAND_LINE = 0x00050001, 111 + RPI_FIRMWARE_GET_DMA_CHANNELS = 0x00060001, 112 + }; 113 + 114 + int rpi_firmware_property(struct rpi_firmware *fw, 115 + u32 tag, void *data, size_t len); 116 + int rpi_firmware_property_list(struct rpi_firmware *fw, 117 + void *data, size_t tag_size); 118 + struct rpi_firmware 
*rpi_firmware_get(struct device_node *firmware_node); 119 + 120 + #endif /* __SOC_RASPBERRY_FIRMWARE_H__ */
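In the mailbox property interface these definitions describe, each tag in a property-list message is an rpi_firmware_property_tag_header followed by its value buffer, and per the header comment the sizes stay 4-byte aligned. A sketch of the resulting size arithmetic; the helper names are ours, and treating buf_size as padded up to 4 bytes is an assumption drawn from that comment:

```c
#include <stdint.h>
#include <stddef.h>

/* Userspace mirror of the tag header (u32 -> uint32_t). */
struct rpi_firmware_property_tag_header {
	uint32_t tag;
	uint32_t buf_size;
	uint32_t req_resp_size;
};

/* Round n up to a multiple of 4 (assumed alignment of value buffers). */
static size_t round4(size_t n)
{
	return (n + 3u) & ~(size_t)3u;
}

/* Bytes one tag occupies in a property-list message: the 12-byte
 * tag header plus the padded value buffer. */
static size_t tag_span(size_t buf_size)
{
	return sizeof(struct rpi_firmware_property_tag_header) +
	       round4(buf_size);
}
```

A caller assembling a multi-tag message sums tag_span() over its tags, plus the overall message header and the RPI_FIRMWARE_PROPERTY_END terminator.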
+18
include/uapi/linux/psci.h
··· 46 46 #define PSCI_0_2_FN64_MIGRATE PSCI_0_2_FN64(5) 47 47 #define PSCI_0_2_FN64_MIGRATE_INFO_UP_CPU PSCI_0_2_FN64(7) 48 48 49 + #define PSCI_1_0_FN_PSCI_FEATURES PSCI_0_2_FN(10) 50 + #define PSCI_1_0_FN_SYSTEM_SUSPEND PSCI_0_2_FN(14) 51 + 52 + #define PSCI_1_0_FN64_SYSTEM_SUSPEND PSCI_0_2_FN64(14) 53 + 49 54 /* PSCI v0.2 power state encoding for CPU_SUSPEND function */ 50 55 #define PSCI_0_2_POWER_STATE_ID_MASK 0xffff 51 56 #define PSCI_0_2_POWER_STATE_ID_SHIFT 0 ··· 60 55 #define PSCI_0_2_POWER_STATE_AFFL_SHIFT 24 61 56 #define PSCI_0_2_POWER_STATE_AFFL_MASK \ 62 57 (0x3 << PSCI_0_2_POWER_STATE_AFFL_SHIFT) 58 + 59 + /* PSCI extended power state encoding for CPU_SUSPEND function */ 60 + #define PSCI_1_0_EXT_POWER_STATE_ID_MASK 0xfffffff 61 + #define PSCI_1_0_EXT_POWER_STATE_ID_SHIFT 0 62 + #define PSCI_1_0_EXT_POWER_STATE_TYPE_SHIFT 30 63 + #define PSCI_1_0_EXT_POWER_STATE_TYPE_MASK \ 64 + (0x1 << PSCI_1_0_EXT_POWER_STATE_TYPE_SHIFT) 63 65 64 66 /* PSCI v0.2 affinity level state returned by AFFINITY_INFO */ 65 67 #define PSCI_0_2_AFFINITY_LEVEL_ON 0 ··· 88 76 #define PSCI_VERSION_MINOR(ver) \ 89 77 ((ver) & PSCI_VERSION_MINOR_MASK) 90 78 79 + /* PSCI features decoding (>=1.0) */ 80 + #define PSCI_1_0_FEATURES_CPU_SUSPEND_PF_SHIFT 1 81 + #define PSCI_1_0_FEATURES_CPU_SUSPEND_PF_MASK \ 82 + (0x1 << PSCI_1_0_FEATURES_CPU_SUSPEND_PF_SHIFT) 83 + 91 84 /* PSCI return values (inclusive of all PSCI versions) */ 92 85 #define PSCI_RET_SUCCESS 0 93 86 #define PSCI_RET_NOT_SUPPORTED -1 ··· 103 86 #define PSCI_RET_INTERNAL_FAILURE -6 104 87 #define PSCI_RET_NOT_PRESENT -7 105 88 #define PSCI_RET_DISABLED -8 89 + #define PSCI_RET_INVALID_ADDRESS -9 106 90 107 91 #endif /* _UAPI_LINUX_PSCI_H */
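The PSCI 1.0 additions above extend both the function table and the CPU_SUSPEND power-state encoding: a 28-bit extended state ID plus a type bit at position 30. A self-contained sketch of decoding and validating an extended state with these masks; the mask macros are reproduced from the header, while the helper functions are ours (the kernel's psci_power_state_is_valid() performs an equivalent out-of-mask check):

```c
#include <stdint.h>

/* Reproduced from include/uapi/linux/psci.h (PSCI >= 1.0). */
#define PSCI_1_0_EXT_POWER_STATE_ID_MASK	0xfffffffu
#define PSCI_1_0_EXT_POWER_STATE_TYPE_SHIFT	30
#define PSCI_1_0_EXT_POWER_STATE_TYPE_MASK \
	(0x1u << PSCI_1_0_EXT_POWER_STATE_TYPE_SHIFT)

/* Nonzero if the state encodes a power-down (context-losing) state
 * rather than a standby state. */
static int ext_state_is_powerdown(uint32_t state)
{
	return (state & PSCI_1_0_EXT_POWER_STATE_TYPE_MASK) != 0;
}

static uint32_t ext_state_id(uint32_t state)
{
	return state & PSCI_1_0_EXT_POWER_STATE_ID_MASK;
}

/* A state is malformed if it sets bits outside the ID and type fields
 * (bits 28, 29 and 31 are reserved in this format). */
static int ext_state_is_valid(uint32_t state)
{
	const uint32_t valid = PSCI_1_0_EXT_POWER_STATE_ID_MASK |
			       PSCI_1_0_EXT_POWER_STATE_TYPE_MASK;

	return (state & ~valid) == 0;
}
```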