Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'armsoc-drivers' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc

Pull ARM SoC driver updates from Olof Johansson:
"Driver updates for v4.1. Some of these are for drivers/soc, where we
find more and more SoC-specific drivers these days. Some are for
other driver subsystems where we have received acks from the
appropriate maintainers.

The larger parts of this branch are:

- MediaTek support for their PMIC wrapper interface, a high-level
interface for talking to the system PMIC over a dedicated I2C
interface.

- Qualcomm SCM driver has been moved to drivers/firmware. It's used
for CPU up/down and needs to be in a shared location for arm/arm64
common code.

- cleanup of ARM-CCI PMU code.

  - another set of cleanups to the OMAP GPMC code"

* tag 'armsoc-drivers' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc: (43 commits)
soc/mediatek: Remove unused variables
clocksource: atmel-st: select MFD_SYSCON
soc: mediatek: Add PMIC wrapper for MT8135 and MT8173 SoCs
arm-cci: Fix CCI PMU event validation
arm-cci: Split the code for PMU vs driver support
arm-cci: Get rid of secure transactions for PMU driver
arm-cci: Abstract the CCI400 PMU specific definitions
arm-cci: Rearrange code for splitting PMU vs driver code
drivers: cci: reject groups spanning multiple HW PMUs
ARM: at91: remove useless include
clocksource: atmel-st: remove mach/hardware dependency
clocksource: atmel-st: use syscon/regmap
ARM: at91: time: move the system timer driver to drivers/clocksource
ARM: at91: properly initialize timer
ARM: at91: at91rm9200: remove deprecated arm_pm_restart
watchdog: at91rm9200: implement restart handler
watchdog: at91rm9200: use the system timer syscon
mfd: syscon: Add atmel system timer registers definition
ARM: at91/dt: declare atmel,at91rm9200-st as a syscon
soc: qcom: gsbi: Add support for ADM CRCI muxing
...

+2756 -963
+3 -1
Documentation/devicetree/bindings/arm/atmel-at91.txt
··· 46 46 shared across all System Controller members. 47 47 48 48 System Timer (ST) required properties: 49 - - compatible: Should be "atmel,at91rm9200-st" 49 + - compatible: Should be "atmel,at91rm9200-st", "syscon", "simple-mfd" 50 50 - reg: Should contain registers location and length 51 51 - interrupts: Should contain interrupt for the ST which is the IRQ line 52 52 shared across all System Controller members. 53 + Its subnodes can be: 54 + - watchdog: compatible should be "atmel,at91rm9200-wdt" 53 55 54 56 TC/TCLIB Timer required properties: 55 57 - compatible: Should be "atmel,<chip>-tcb".
+5 -2
Documentation/devicetree/bindings/arm/cci.txt
··· 94 94 - compatible 95 95 Usage: required 96 96 Value type: <string> 97 - Definition: must be "arm,cci-400-pmu" 98 - 97 + Definition: Must contain one of: 98 + "arm,cci-400-pmu,r0" 99 + "arm,cci-400-pmu,r1" 100 + "arm,cci-400-pmu" - DEPRECATED, permitted only where OS has 101 + secure access to CCI registers 99 102 - reg: 100 103 Usage: required 101 104 Value type: Integer cells. A register entry, expressed
+46
Documentation/devicetree/bindings/bus/renesas,bsc.txt
··· 1 + Renesas Bus State Controller (BSC) 2 + ================================== 3 + 4 + The Renesas Bus State Controller (BSC, sometimes called "LBSC within Bus 5 + Bridge", or "External Bus Interface") can be found in several Renesas ARM SoCs. 6 + It provides an external bus for connecting multiple external devices to the 7 + SoC, driving several chip select lines, for e.g. NOR FLASH, Ethernet and USB. 8 + 9 + While the BSC is a fairly simple memory-mapped bus, it may be part of a PM 10 + domain, and may have a gateable functional clock. 11 + Before a device connected to the BSC can be accessed, the PM domain 12 + containing the BSC must be powered on, and the functional clock 13 + driving the BSC must be enabled. 14 + 15 + The bindings for the BSC extend the bindings for "simple-pm-bus". 16 + 17 + 18 + Required properties 19 + - compatible: Must contain an SoC-specific value, and "renesas,bsc" and 20 + "simple-pm-bus" as fallbacks. 21 + SoC-specific values can be: 22 + "renesas,bsc-r8a73a4" for R-Mobile APE6 (r8a73a4) 23 + "renesas,bsc-sh73a0" for SH-Mobile AG5 (sh73a0) 24 + - #address-cells, #size-cells, ranges: Must describe the mapping between 25 + parent address and child address spaces. 26 + - reg: Must contain the base address and length to access the bus controller. 27 + 28 + Optional properties: 29 + - interrupts: Must contain a reference to the BSC interrupt, if available. 30 + - clocks: Must contain a reference to the functional clock, if available. 31 + - power-domains: Must contain a reference to the PM domain, if available. 32 + 33 + 34 + Example: 35 + 36 + bsc: bus@fec10000 { 37 + compatible = "renesas,bsc-sh73a0", "renesas,bsc", 38 + "simple-pm-bus"; 39 + #address-cells = <1>; 40 + #size-cells = <1>; 41 + ranges = <0 0 0x20000000>; 42 + reg = <0xfec10000 0x400>; 43 + interrupts = <0 39 IRQ_TYPE_LEVEL_HIGH>; 44 + clocks = <&zb_clk>; 45 + power-domains = <&pd_a4s>; 46 + };
+44
Documentation/devicetree/bindings/bus/simple-pm-bus.txt
··· 1 + Simple Power-Managed Bus 2 + ======================== 3 + 4 + A Simple Power-Managed Bus is a transparent bus that doesn't need a real 5 + driver, as it's typically initialized by the boot loader. 6 + 7 + However, its bus controller is part of a PM domain, or under the control of a 8 + functional clock. Hence, the bus controller's PM domain and/or clock must be 9 + enabled for child devices connected to the bus (either on-SoC or externally) 10 + to function. 11 + 12 + While "simple-pm-bus" follows the "simple-bus" set of properties, as specified 13 + in ePAPR, it is not an extension of "simple-bus". 14 + 15 + 16 + Required properties: 17 + - compatible: Must contain at least "simple-pm-bus". 18 + Must not contain "simple-bus". 19 + It's recommended to let this be preceded by one or more 20 + vendor-specific compatible values. 21 + - #address-cells, #size-cells, ranges: Must describe the mapping between 22 + parent address and child address spaces. 23 + 24 + Optional platform-specific properties for clock or PM domain control (at least 25 + one of them is required): 26 + - clocks: Must contain a reference to the functional clock(s), 27 + - power-domains: Must contain a reference to the PM domain. 28 + Please refer to the binding documentation for the clock and/or PM domain 29 + providers for more details. 30 + 31 + 32 + Example: 33 + 34 + bsc: bus@fec10000 { 35 + compatible = "renesas,bsc-sh73a0", "renesas,bsc", 36 + "simple-pm-bus"; 37 + #address-cells = <1>; 38 + #size-cells = <1>; 39 + ranges = <0 0 0x20000000>; 40 + reg = <0xfec10000 0x400>; 41 + interrupts = <0 39 IRQ_TYPE_LEVEL_HIGH>; 42 + clocks = <&zb_clk>; 43 + power-domains = <&pd_a4s>; 44 + };
+20 -10
Documentation/devicetree/bindings/soc/qcom/qcom,gsbi.txt
··· 6 6 the 4 GSBI IOs. 7 7 8 8 Required properties: 9 - - compatible: must contain "qcom,gsbi-v1.0.0" for APQ8064/IPQ8064 9 + - compatible: Should contain "qcom,gsbi-v1.0.0" 10 + - cell-index: Should contain the GSBI index 10 11 - reg: Address range for GSBI registers 11 12 - clocks: required clock 12 13 - clock-names: must contain "iface" entry ··· 17 16 Optional properties: 18 17 - qcom,crci : indicates CRCI MUX value for QUP CRCI ports. Please reference 19 18 dt-bindings/soc/qcom,gsbi.h for valid CRCI mux values. 19 + - syscon-tcsr: indicates phandle of TCSR syscon node. Required if child uses 20 + dma. 20 21 21 22 Required properties if child node exists: 22 23 - #address-cells: Must be 1 ··· 42 39 43 40 gsbi4@16300000 { 44 41 compatible = "qcom,gsbi-v1.0.0"; 42 + cell-index = <4>; 45 43 reg = <0x16300000 0x100>; 46 44 clocks = <&gcc GSBI4_H_CLK>; 47 45 clock-names = "iface"; ··· 52 48 qcom,mode = <GSBI_PROT_I2C_UART>; 53 49 qcom,crci = <GSBI_CRCI_QUP>; 54 50 51 + syscon-tcsr = <&tcsr>; 52 + 55 53 /* child nodes go under here */ 56 54 57 55 i2c_qup4: i2c@16380000 { 58 - compatible = "qcom,i2c-qup-v1.1.1"; 59 - reg = <0x16380000 0x1000>; 60 - interrupts = <0 153 0>; 56 + compatible = "qcom,i2c-qup-v1.1.1"; 57 + reg = <0x16380000 0x1000>; 58 + interrupts = <0 153 0>; 61 59 62 - clocks = <&gcc GSBI4_QUP_CLK>, <&gcc GSBI4_H_CLK>; 63 - clock-names = "core", "iface"; 60 + clocks = <&gcc GSBI4_QUP_CLK>, <&gcc GSBI4_H_CLK>; 61 + clock-names = "core", "iface"; 64 62 65 - clock-frequency = <200000>; 63 + clock-frequency = <200000>; 66 64 67 - #address-cells = <1>; 68 - #size-cells = <0>; 65 + #address-cells = <1>; 66 + #size-cells = <0>; 69 67 70 - }; 68 + }; 71 69 72 70 uart4: serial@16340000 { 73 71 compatible = "qcom,msm-uartdm-v1.3", "qcom,msm-uartdm"; ··· 82 76 }; 83 77 }; 84 78 79 + tcsr: syscon@1a400000 { 80 + compatible = "qcom,apq8064-tcsr", "syscon"; 81 + reg = <0x1a400000 0x100>; 82 + };
+1
MAINTAINERS
··· 1327 1327 F: drivers/tty/serial/msm_serial.c 1328 1328 F: drivers/*/pm8???-* 1329 1329 F: drivers/mfd/ssbi.c 1330 + F: drivers/firmware/qcom_scm.c 1330 1331 T: git git://git.kernel.org/pub/scm/linux/kernel/git/galak/linux-qcom.git 1331 1332 1332 1333 ARM/RADISYS ENP2611 MACHINE SUPPORT
+2
arch/arm/Kconfig
··· 2146 2146 2147 2147 source "drivers/Kconfig" 2148 2148 2149 + source "drivers/firmware/Kconfig" 2150 + 2149 2151 source "fs/Kconfig" 2150 2152 2151 2153 source "arch/arm/Kconfig.debug"
+5 -1
arch/arm/boot/dts/at91rm9200.dtsi
··· 356 356 }; 357 357 358 358 st: timer@fffffd00 { 359 - compatible = "atmel,at91rm9200-st"; 359 + compatible = "atmel,at91rm9200-st", "syscon", "simple-mfd"; 360 360 reg = <0xfffffd00 0x100>; 361 361 interrupts = <1 IRQ_TYPE_LEVEL_HIGH 7>; 362 + 363 + watchdog { 364 + compatible = "atmel,at91rm9200-wdt"; 365 + }; 362 366 }; 363 367 364 368 rtc: rtc@fffffe00 {
+42
arch/arm/include/asm/arm-cci.h
··· 1 + /* 2 + * arch/arm/include/asm/arm-cci.h 3 + * 4 + * Copyright (C) 2015 ARM Ltd. 5 + * 6 + * This program is free software; you can redistribute it and/or modify 7 + * it under the terms of the GNU General Public License version 2 as 8 + * published by the Free Software Foundation. 9 + * 10 + * This program is distributed in the hope that it will be useful, 11 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 12 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 13 + * GNU General Public License for more details. 14 + * 15 + * You should have received a copy of the GNU General Public License 16 + * along with this program. If not, see <http://www.gnu.org/licenses/>. 17 + */ 18 + 19 + #ifndef __ASM_ARM_CCI_H 20 + #define __ASM_ARM_CCI_H 21 + 22 + #ifdef CONFIG_MCPM 23 + #include <asm/mcpm.h> 24 + 25 + /* 26 + * We don't have a reliable way of detecting whether we 27 + * have access to secure-only registers, unless 28 + * mcpm is registered. 29 + */ 30 + static inline bool platform_has_secure_cci_access(void) 31 + { 32 + return mcpm_is_available(); 33 + } 34 + 35 + #else 36 + static inline bool platform_has_secure_cci_access(void) 37 + { 38 + return false; 39 + } 40 + #endif 41 + 42 + #endif
+1
arch/arm/mach-at91/Kconfig
··· 77 77 config SOC_AT91RM9200 78 78 bool "AT91RM9200" 79 79 select ATMEL_AIC_IRQ 80 + select ATMEL_ST 80 81 select COMMON_CLK_AT91 81 82 select CPU_ARM920T 82 83 select GENERIC_CLOCKEVENTS
+1 -1
arch/arm/mach-at91/Makefile
··· 7 7 obj-$(CONFIG_SOC_AT91SAM9) += sam9_smc.o 8 8 9 9 # CPU-specific support 10 - obj-$(CONFIG_SOC_AT91RM9200) += at91rm9200.o at91rm9200_time.o 10 + obj-$(CONFIG_SOC_AT91RM9200) += at91rm9200.o 11 11 obj-$(CONFIG_SOC_AT91SAM9) += at91sam9.o 12 12 obj-$(CONFIG_SOC_SAMA5) += sama5.o 13 13
-19
arch/arm/mach-at91/at91rm9200.c
··· 15 15 #include <asm/mach/arch.h> 16 16 #include <asm/system_misc.h> 17 17 18 - #include <mach/at91_st.h> 19 - 20 18 #include "generic.h" 21 19 #include "soc.h" 22 20 ··· 22 24 AT91_SOC(AT91RM9200_CIDR_MATCH, 0, "at91rm9200 BGA", "at91rm9200"), 23 25 { /* sentinel */ }, 24 26 }; 25 - 26 - static void at91rm9200_restart(enum reboot_mode reboot_mode, const char *cmd) 27 - { 28 - /* 29 - * Perform a hardware reset with the use of the Watchdog timer. 30 - */ 31 - at91_st_write(AT91_ST_WDMR, AT91_ST_RSTEN | AT91_ST_EXTEN | 1); 32 - at91_st_write(AT91_ST_CR, AT91_ST_WDRST); 33 - } 34 - 35 - static void __init at91rm9200_dt_timer_init(void) 36 - { 37 - of_clk_init(NULL); 38 - at91rm9200_timer_init(); 39 - } 40 27 41 28 static void __init at91rm9200_dt_device_init(void) 42 29 { ··· 35 52 of_platform_populate(NULL, of_default_bus_match_table, NULL, soc_dev); 36 53 37 54 arm_pm_idle = at91rm9200_idle; 38 - arm_pm_restart = at91rm9200_restart; 39 55 at91rm9200_pm_init(); 40 56 } 41 57 ··· 44 62 }; 45 63 46 64 DT_MACHINE_START(at91rm9200_dt, "Atmel AT91RM9200") 47 - .init_time = at91rm9200_dt_timer_init, 48 65 .init_machine = at91rm9200_dt_device_init, 49 66 .dt_compat = at91rm9200_dt_board_compat, 50 67 MACHINE_END
+45 -72
arch/arm/mach-at91/at91rm9200_time.c drivers/clocksource/timer-atmel-st.c
··· 24 24 #include <linux/irq.h> 25 25 #include <linux/clockchips.h> 26 26 #include <linux/export.h> 27 - #include <linux/of.h> 28 - #include <linux/of_address.h> 27 + #include <linux/mfd/syscon.h> 28 + #include <linux/mfd/syscon/atmel-st.h> 29 29 #include <linux/of_irq.h> 30 - 31 - #include <asm/mach/time.h> 32 - 33 - #include <mach/at91_st.h> 34 - #include <mach/hardware.h> 30 + #include <linux/regmap.h> 35 31 36 32 static unsigned long last_crtr; 37 33 static u32 irqmask; 38 34 static struct clock_event_device clkevt; 35 + static struct regmap *regmap_st; 39 36 37 + #define AT91_SLOW_CLOCK 32768 40 38 #define RM9200_TIMER_LATCH ((AT91_SLOW_CLOCK + HZ/2) / HZ) 41 39 42 40 /* ··· 44 46 */ 45 47 static inline unsigned long read_CRTR(void) 46 48 { 47 - unsigned long x1, x2; 49 - unsigned int x1, x2; 48 50 49 - x1 = at91_st_read(AT91_ST_CRTR); 51 - regmap_read(regmap_st, AT91_ST_CRTR, &x1); 50 52 do { 51 - x2 = at91_st_read(AT91_ST_CRTR); 53 - regmap_read(regmap_st, AT91_ST_CRTR, &x2); 52 54 if (x1 == x2) 53 55 break; 54 56 x1 = x2; ··· 61 63 */ 62 64 static irqreturn_t at91rm9200_timer_interrupt(int irq, void *dev_id) 63 65 { 64 - u32 sr = at91_st_read(AT91_ST_SR) & irqmask; 66 + u32 sr; 67 + 68 + regmap_read(regmap_st, AT91_ST_SR, &sr); 69 + sr &= irqmask; 65 70 66 71 /* 67 72 * irqs should be disabled here, but as the irq is shared they are only ··· 93 92 return IRQ_NONE; 94 93 } 95 94 96 - static struct irqaction at91rm9200_timer_irq = { 97 - .name = "at91_tick", 98 - .flags = IRQF_SHARED | IRQF_TIMER | IRQF_IRQPOLL, 99 - .handler = at91rm9200_timer_interrupt, 100 - .irq = NR_IRQS_LEGACY + AT91_ID_SYS, 101 - }; 102 - 103 95 static cycle_t read_clk32k(struct clocksource *cs) 104 96 { 105 97 return read_CRTR(); ··· 109 115 static void 110 116 clkevt32k_mode(enum clock_event_mode mode, struct clock_event_device *dev) 111 117 { 118 + unsigned int val; 119 + 112 120 /* Disable and flush pending timer interrupts */ 113 - at91_st_write(AT91_ST_IDR, AT91_ST_PITS | AT91_ST_ALMS); 114 - at91_st_read(AT91_ST_SR); 121 + regmap_write(regmap_st, AT91_ST_IDR, AT91_ST_PITS | AT91_ST_ALMS); 122 + regmap_read(regmap_st, AT91_ST_SR, &val); 115 123 116 124 last_crtr = read_CRTR(); 117 125 switch (mode) { 118 126 case CLOCK_EVT_MODE_PERIODIC: 119 127 /* PIT for periodic irqs; fixed rate of 1/HZ */ 120 128 irqmask = AT91_ST_PITS; 121 - at91_st_write(AT91_ST_PIMR, RM9200_TIMER_LATCH); 129 + regmap_write(regmap_st, AT91_ST_PIMR, RM9200_TIMER_LATCH); 122 130 break; 123 131 case CLOCK_EVT_MODE_ONESHOT: 124 132 /* ALM for oneshot irqs, set by next_event() 125 133 * before 32 seconds have passed 126 134 */ 127 135 irqmask = AT91_ST_ALMS; 128 - at91_st_write(AT91_ST_RTAR, last_crtr); 136 + regmap_write(regmap_st, AT91_ST_RTAR, last_crtr); 129 137 break; 130 138 case CLOCK_EVT_MODE_SHUTDOWN: 131 139 case CLOCK_EVT_MODE_UNUSED: ··· 135 139 irqmask = 0; 136 140 break; 137 141 } 138 - at91_st_write(AT91_ST_IER, irqmask); 142 + regmap_write(regmap_st, AT91_ST_IER, irqmask); 139 143 } 140 144 141 145 static int ··· 143 147 { 144 148 u32 alm; 145 149 int status = 0; 150 + unsigned int val; 146 151 147 152 BUG_ON(delta < 2); 148 153 ··· 159 162 alm = read_CRTR(); 160 163 161 164 /* Cancel any pending alarm; flush any pending IRQ */ 162 - at91_st_write(AT91_ST_RTAR, alm); 163 - at91_st_read(AT91_ST_SR); 165 + regmap_write(regmap_st, AT91_ST_RTAR, alm); 166 + regmap_read(regmap_st, AT91_ST_SR, &val); 164 167 165 168 /* Schedule alarm by writing RTAR. */ 166 169 alm += delta; 167 - at91_st_write(AT91_ST_RTAR, alm); 170 + regmap_write(regmap_st, AT91_ST_RTAR, alm); 168 171 169 172 return status; 170 173 } ··· 177 180 .set_mode = clkevt32k_mode, 178 181 }; 179 182 180 - void __iomem *at91_st_base; 181 - EXPORT_SYMBOL_GPL(at91_st_base); 182 - 183 - static const struct of_device_id at91rm9200_st_timer_ids[] = { 184 - { .compatible = "atmel,at91rm9200-st" }, 185 - { /* sentinel */ } 186 - }; 187 - 188 - static int __init of_at91rm9200_st_init(void) 189 - { 190 - struct device_node *np; 191 - int ret; 192 - 193 - np = of_find_matching_node(NULL, at91rm9200_st_timer_ids); 194 - if (!np) 195 - goto err; 196 - 197 - at91_st_base = of_iomap(np, 0); 198 - if (!at91_st_base) 199 - goto node_err; 200 - 201 - /* Get the interrupts property */ 202 - ret = irq_of_parse_and_map(np, 0); 203 - if (!ret) 204 - goto ioremap_err; 205 - at91rm9200_timer_irq.irq = ret; 206 - 207 - of_node_put(np); 208 - 209 - return 0; 210 - 211 - ioremap_err: 212 - iounmap(at91_st_base); 213 - node_err: 214 - of_node_put(np); 215 - err: 216 - return -EINVAL; 217 - } 218 - 219 183 /* 220 184 * ST (system timer) module supports both clockevents and clocksource. 221 185 */ 222 - void __init at91rm9200_timer_init(void) 186 + static void __init atmel_st_timer_init(struct device_node *node) 223 187 { 224 - /* For device tree enabled device: initialize here */ 225 - of_at91rm9200_st_init(); 188 + unsigned int val; 189 + int irq, ret; 190 + 191 + regmap_st = syscon_node_to_regmap(node); 192 + if (IS_ERR(regmap_st)) 193 + panic(pr_fmt("Unable to get regmap\n")); 226 194 227 195 /* Disable all timer interrupts, and clear any pending ones */ 228 - at91_st_write(AT91_ST_IDR, 196 + regmap_write(regmap_st, AT91_ST_IDR, 229 197 AT91_ST_PITS | AT91_ST_WDOVF | AT91_ST_RTTINC | AT91_ST_ALMS); 230 - at91_st_read(AT91_ST_SR); 198 + regmap_read(regmap_st, AT91_ST_SR, &val); 199 + 200 + /* Get the interrupts property */ 201 + irq = irq_of_parse_and_map(node, 0); 202 + if (!irq) 203 + panic(pr_fmt("Unable to get IRQ from DT\n")); 231 204 232 205 /* Make IRQs happen for the system timer */ 233 - setup_irq(at91rm9200_timer_irq.irq, &at91rm9200_timer_irq); 206 + ret = request_irq(irq, at91rm9200_timer_interrupt, 207 + IRQF_SHARED | IRQF_TIMER | IRQF_IRQPOLL, 208 + "at91_tick", regmap_st); 209 + if (ret) 210 + panic(pr_fmt("Unable to setup IRQ\n")); 234 211 235 212 /* The 32KiHz "Slow Clock" (tick every 30517.58 nanoseconds) is used 236 213 * directly for the clocksource and all clockevents, after adjusting 237 214 * its prescaler from the 1 Hz default. 238 215 */ 239 - at91_st_write(AT91_ST_RTMR, 1); 216 + regmap_write(regmap_st, AT91_ST_RTMR, 1); 240 217 241 218 /* Setup timer clockevent, with minimum of two ticks (important!!) */ 242 219 clkevt.cpumask = cpumask_of(0); ··· 220 249 /* register clocksource */ 221 250 clocksource_register_hz(&clk32k, AT91_SLOW_CLOCK); 222 251 } 252 + CLOCKSOURCE_OF_DECLARE(atmel_st_timer, "atmel,at91rm9200-st", 253 + atmel_st_timer_init);
-3
arch/arm/mach-at91/generic.h
··· 18 18 extern void __init at91_map_io(void); 19 19 extern void __init at91_alt_map_io(void); 20 20 21 - /* Timer */ 22 - extern void at91rm9200_timer_init(void); 23 - 24 21 /* idle */ 25 22 extern void at91rm9200_idle(void); 26 23 extern void at91sam9_idle(void);
-61
arch/arm/mach-at91/include/mach/at91_st.h
··· 1 - /* 2 - * arch/arm/mach-at91/include/mach/at91_st.h 3 - * 4 - * Copyright (C) 2005 Ivan Kokshaysky 5 - * Copyright (C) SAN People 6 - * 7 - * System Timer (ST) - System peripherals registers. 8 - * Based on AT91RM9200 datasheet revision E. 9 - * 10 - * This program is free software; you can redistribute it and/or modify 11 - * it under the terms of the GNU General Public License as published by 12 - * the Free Software Foundation; either version 2 of the License, or 13 - * (at your option) any later version. 14 - */ 15 - 16 - #ifndef AT91_ST_H 17 - #define AT91_ST_H 18 - 19 - #ifndef __ASSEMBLY__ 20 - extern void __iomem *at91_st_base; 21 - 22 - #define at91_st_read(field) \ 23 - __raw_readl(at91_st_base + field) 24 - 25 - #define at91_st_write(field, value) \ 26 - __raw_writel(value, at91_st_base + field) 27 - #else 28 - .extern at91_st_base 29 - #endif 30 - 31 - #define AT91_ST_CR 0x00 /* Control Register */ 32 - #define AT91_ST_WDRST (1 << 0) /* Watchdog Timer Restart */ 33 - 34 - #define AT91_ST_PIMR 0x04 /* Period Interval Mode Register */ 35 - #define AT91_ST_PIV (0xffff << 0) /* Period Interval Value */ 36 - 37 - #define AT91_ST_WDMR 0x08 /* Watchdog Mode Register */ 38 - #define AT91_ST_WDV (0xffff << 0) /* Watchdog Counter Value */ 39 - #define AT91_ST_RSTEN (1 << 16) /* Reset Enable */ 40 - #define AT91_ST_EXTEN (1 << 17) /* External Signal Assertion Enable */ 41 - 42 - #define AT91_ST_RTMR 0x0c /* Real-time Mode Register */ 43 - #define AT91_ST_RTPRES (0xffff << 0) /* Real-time Prescalar Value */ 44 - 45 - #define AT91_ST_SR 0x10 /* Status Register */ 46 - #define AT91_ST_PITS (1 << 0) /* Period Interval Timer Status */ 47 - #define AT91_ST_WDOVF (1 << 1) /* Watchdog Overflow */ 48 - #define AT91_ST_RTTINC (1 << 2) /* Real-time Timer Increment */ 49 - #define AT91_ST_ALMS (1 << 3) /* Alarm Status */ 50 - 51 - #define AT91_ST_IER 0x14 /* Interrupt Enable Register */ 52 - #define AT91_ST_IDR 0x18 /* Interrupt Disable Register */ 53 - #define AT91_ST_IMR 0x1c /* Interrupt Mask Register */ 54 - 55 - #define AT91_ST_RTAR 0x20 /* Real-time Alarm Register */ 56 - #define AT91_ST_ALMV (0xfffff << 0) /* Alarm Value */ 57 - 58 - #define AT91_ST_CRTR 0x24 /* Current Real-time Register */ 59 - #define AT91_ST_CRTV (0xfffff << 0) /* Current Real-Time Value */ 60 - 61 - #endif
+1 -1
arch/arm/mach-exynos/Kconfig
··· 123 123 config EXYNOS5420_MCPM 124 124 bool "Exynos5420 Multi-Cluster PM support" 125 125 depends on MCPM && SOC_EXYNOS5420 126 - select ARM_CCI 126 + select ARM_CCI400_PORT_CTRL 127 127 select ARM_CPU_SUSPEND 128 128 help 129 129 This is needed to provide CPU and cluster power management
+1
arch/arm/mach-mediatek/Kconfig
··· 1 1 menuconfig ARCH_MEDIATEK 2 2 bool "Mediatek MT65xx & MT81xx SoC" if ARCH_MULTI_V7 3 3 select ARM_GIC 4 + select PINCTRL 4 5 select MTK_TIMER 5 6 help 6 7 Support for Mediatek MT65xx & MT81xx SoCs
+10 -8
arch/arm/mach-omap2/gpmc-nand.c
··· 96 96 gpmc_nand_res[1].start = gpmc_get_client_irq(GPMC_IRQ_FIFOEVENTENABLE); 97 97 gpmc_nand_res[2].start = gpmc_get_client_irq(GPMC_IRQ_COUNT_EVENT); 98 98 99 - if (gpmc_t) { 100 - err = gpmc_cs_set_timings(gpmc_nand_data->cs, gpmc_t); 101 - if (err < 0) { 102 - pr_err("omap2-gpmc: Unable to set gpmc timings: %d\n", err); 103 - return err; 104 - } 105 - } 106 - 107 99 memset(&s, 0, sizeof(struct gpmc_settings)); 108 100 if (gpmc_nand_data->of_node) 109 101 gpmc_read_settings_dt(gpmc_nand_data->of_node, &s); ··· 103 111 gpmc_set_legacy(gpmc_nand_data, &s); 104 112 105 113 s.device_nand = true; 114 + 115 + if (gpmc_t) { 116 + err = gpmc_cs_set_timings(gpmc_nand_data->cs, gpmc_t, &s); 117 + if (err < 0) { 118 + pr_err("omap2-gpmc: Unable to set gpmc timings: %d\n", 119 + err); 120 + return err; 121 + } 122 + } 123 + 106 124 err = gpmc_cs_program_settings(gpmc_nand_data->cs, &s); 107 125 if (err < 0) 108 126 goto out_free_cs;
+2 -2
arch/arm/mach-omap2/gpmc-onenand.c
··· 293 293 if (ret < 0) 294 294 return ret; 295 295 296 - ret = gpmc_cs_set_timings(gpmc_onenand_data->cs, &t); 296 + ret = gpmc_cs_set_timings(gpmc_onenand_data->cs, &t, &onenand_async); 297 297 if (ret < 0) 298 298 return ret; 299 299 ··· 331 331 if (ret < 0) 332 332 return ret; 333 333 334 - ret = gpmc_cs_set_timings(gpmc_onenand_data->cs, &t); 334 + ret = gpmc_cs_set_timings(gpmc_onenand_data->cs, &t, &onenand_sync); 335 335 if (ret < 0) 336 336 return ret; 337 337
+2 -2
arch/arm/mach-omap2/usb-tusb6010.c
··· 71 71 72 72 gpmc_calc_timings(&t, &tusb_async, &dev_t); 73 73 74 - return gpmc_cs_set_timings(async_cs, &t); 74 + return gpmc_cs_set_timings(async_cs, &t, &tusb_async); 75 75 } 76 76 77 77 static int tusb_set_sync_mode(unsigned sysclk_ps) ··· 98 98 99 99 gpmc_calc_timings(&t, &tusb_sync, &dev_t); 100 100 101 - return gpmc_cs_set_timings(sync_cs, &t); 101 + return gpmc_cs_set_timings(sync_cs, &t, &tusb_sync); 102 102 } 103 103 104 104 /* tusb driver calls this when it changes the chip's clocking */
-3
arch/arm/mach-qcom/Kconfig
··· 22 22 bool "Enable support for MSM8974" 23 23 select HAVE_ARM_ARCH_TIMER 24 24 25 - config QCOM_SCM 26 - bool 27 - 28 25 endif
-3
arch/arm/mach-qcom/Makefile
··· 1 1 obj-y := board.o 2 2 obj-$(CONFIG_SMP) += platsmp.o 3 - obj-$(CONFIG_QCOM_SCM) += scm.o scm-boot.o 4 - 5 - CFLAGS_scm.o :=$(call as-instr,.arch_extension sec,-DREQUIRES_SEC=1)
+4 -19
arch/arm/mach-qcom/platsmp.c
··· 17 17 #include <linux/of_address.h> 18 18 #include <linux/smp.h> 19 19 #include <linux/io.h> 20 + #include <linux/qcom_scm.h> 20 21 21 22 #include <asm/smp_plat.h> 22 23 23 - #include "scm-boot.h" 24 24 25 25 #define VDD_SC1_ARRAY_CLAMP_GFS_CTL 0x35a0 26 26 #define SCSS_CPU1CORE_RESET 0x2d80 ··· 319 319 320 320 static void __init qcom_smp_prepare_cpus(unsigned int max_cpus) 321 321 { 322 - int cpu, map; 323 - unsigned int flags = 0; 324 - static const int cold_boot_flags[] = { 325 - 0, 326 - SCM_FLAG_COLDBOOT_CPU1, 327 - SCM_FLAG_COLDBOOT_CPU2, 328 - SCM_FLAG_COLDBOOT_CPU3, 329 - }; 322 + int cpu; 330 323 331 - for_each_present_cpu(cpu) { 332 - map = cpu_logical_map(cpu); 333 - if (WARN_ON(map >= ARRAY_SIZE(cold_boot_flags))) { 334 - set_cpu_present(cpu, false); 335 - continue; 336 - } 337 - flags |= cold_boot_flags[map]; 338 - } 339 - 340 - if (scm_set_boot_addr(virt_to_phys(secondary_startup_arm), flags)) { 324 + if (qcom_scm_set_cold_boot_addr(secondary_startup_arm, 325 + cpu_present_mask)) { 341 326 for_each_present_cpu(cpu) { 342 327 if (cpu == smp_processor_id()) 343 328 continue;
-39
arch/arm/mach-qcom/scm-boot.c
··· 1 - /* Copyright (c) 2010, Code Aurora Forum. All rights reserved. 2 - * 3 - * This program is free software; you can redistribute it and/or modify 4 - * it under the terms of the GNU General Public License version 2 and 5 - * only version 2 as published by the Free Software Foundation. 6 - * 7 - * This program is distributed in the hope that it will be useful, 8 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 9 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 10 - * GNU General Public License for more details. 11 - * 12 - * You should have received a copy of the GNU General Public License 13 - * along with this program; if not, write to the Free Software 14 - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 15 - * 02110-1301, USA. 16 - */ 17 - 18 - #include <linux/module.h> 19 - #include <linux/slab.h> 20 - 21 - #include "scm.h" 22 - #include "scm-boot.h" 23 - 24 - /* 25 - * Set the cold/warm boot address for one of the CPU cores. 26 - */ 27 - int scm_set_boot_addr(u32 addr, int flags) 28 - { 29 - struct { 30 - __le32 flags; 31 - __le32 addr; 32 - } cmd; 33 - 34 - cmd.addr = cpu_to_le32(addr); 35 - cmd.flags = cpu_to_le32(flags); 36 - return scm_call(SCM_SVC_BOOT, SCM_BOOT_ADDR, 37 - &cmd, sizeof(cmd), NULL, 0); 38 - } 39 - EXPORT_SYMBOL(scm_set_boot_addr);
-26
arch/arm/mach-qcom/scm-boot.h
··· 1 - /* Copyright (c) 2010, Code Aurora Forum. All rights reserved. 2 - * 3 - * This program is free software; you can redistribute it and/or modify 4 - * it under the terms of the GNU General Public License version 2 and 5 - * only version 2 as published by the Free Software Foundation. 6 - * 7 - * This program is distributed in the hope that it will be useful, 8 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 9 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 10 - * GNU General Public License for more details. 11 - */ 12 - #ifndef __MACH_SCM_BOOT_H 13 - #define __MACH_SCM_BOOT_H 14 - 15 - #define SCM_BOOT_ADDR 0x1 16 - #define SCM_FLAG_COLDBOOT_CPU1 0x01 17 - #define SCM_FLAG_COLDBOOT_CPU2 0x08 18 - #define SCM_FLAG_COLDBOOT_CPU3 0x20 19 - #define SCM_FLAG_WARMBOOT_CPU0 0x04 20 - #define SCM_FLAG_WARMBOOT_CPU1 0x02 21 - #define SCM_FLAG_WARMBOOT_CPU2 0x10 22 - #define SCM_FLAG_WARMBOOT_CPU3 0x40 23 - 24 - int scm_set_boot_addr(u32 addr, int flags); 25 - 26 - #endif
-326
arch/arm/mach-qcom/scm.c
··· 1 - /* Copyright (c) 2010, Code Aurora Forum. All rights reserved. 2 - * 3 - * This program is free software; you can redistribute it and/or modify 4 - * it under the terms of the GNU General Public License version 2 and 5 - * only version 2 as published by the Free Software Foundation. 6 - * 7 - * This program is distributed in the hope that it will be useful, 8 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 9 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 10 - * GNU General Public License for more details. 11 - * 12 - * You should have received a copy of the GNU General Public License 13 - * along with this program; if not, write to the Free Software 14 - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 15 - * 02110-1301, USA. 16 - */ 17 - 18 - #include <linux/slab.h> 19 - #include <linux/io.h> 20 - #include <linux/module.h> 21 - #include <linux/mutex.h> 22 - #include <linux/errno.h> 23 - #include <linux/err.h> 24 - 25 - #include <asm/outercache.h> 26 - #include <asm/cacheflush.h> 27 - 28 - #include "scm.h" 29 - 30 - #define SCM_ENOMEM -5 31 - #define SCM_EOPNOTSUPP -4 32 - #define SCM_EINVAL_ADDR -3 33 - #define SCM_EINVAL_ARG -2 34 - #define SCM_ERROR -1 35 - #define SCM_INTERRUPTED 1 36 - 37 - static DEFINE_MUTEX(scm_lock); 38 - 39 - /** 40 - * struct scm_command - one SCM command buffer 41 - * @len: total available memory for command and response 42 - * @buf_offset: start of command buffer 43 - * @resp_hdr_offset: start of response buffer 44 - * @id: command to be executed 45 - * @buf: buffer returned from scm_get_command_buffer() 46 - * 47 - * An SCM command is laid out in memory as follows: 48 - * 49 - * ------------------- <--- struct scm_command 50 - * | command header | 51 - * ------------------- <--- scm_get_command_buffer() 52 - * | command buffer | 53 - * ------------------- <--- struct scm_response and 54 - * | response header | scm_command_to_response() 55 - * ------------------- <--- scm_get_response_buffer() 56 - * | response buffer | 57 - * ------------------- 58 - * 59 - * There can be arbitrary padding between the headers and buffers so 60 - * you should always use the appropriate scm_get_*_buffer() routines 61 - * to access the buffers in a safe manner. 62 - */ 63 - struct scm_command { 64 - __le32 len; 65 - __le32 buf_offset; 66 - __le32 resp_hdr_offset; 67 - __le32 id; 68 - __le32 buf[0]; 69 - }; 70 - 71 - /** 72 - * struct scm_response - one SCM response buffer 73 - * @len: total available memory for response 74 - * @buf_offset: start of response data relative to start of scm_response 75 - * @is_complete: indicates if the command has finished processing 76 - */ 77 - struct scm_response { 78 - __le32 len; 79 - __le32 buf_offset; 80 - __le32 is_complete; 81 - }; 82 - 83 - /** 84 - * alloc_scm_command() - Allocate an SCM command 85 - * @cmd_size: size of the command buffer 86 - * @resp_size: size of the response buffer 87 - * 88 - * Allocate an SCM command, including enough room for the command 89 - * and response headers as well as the command and response buffers. 90 - * 91 - * Returns a valid &scm_command on success or %NULL if the allocation fails. 92 - */ 93 - static struct scm_command *alloc_scm_command(size_t cmd_size, size_t resp_size) 94 - { 95 - struct scm_command *cmd; 96 - size_t len = sizeof(*cmd) + sizeof(struct scm_response) + cmd_size + 97 - resp_size; 98 - u32 offset; 99 - 100 - cmd = kzalloc(PAGE_ALIGN(len), GFP_KERNEL); 101 - if (cmd) { 102 - cmd->len = cpu_to_le32(len); 103 - offset = offsetof(struct scm_command, buf); 104 - cmd->buf_offset = cpu_to_le32(offset); 105 - cmd->resp_hdr_offset = cpu_to_le32(offset + cmd_size); 106 - } 107 - return cmd; 108 - } 109 - 110 - /** 111 - * free_scm_command() - Free an SCM command 112 - * @cmd: command to free 113 - * 114 - * Free an SCM command. 115 - */ 116 - static inline void free_scm_command(struct scm_command *cmd) 117 - { 118 - kfree(cmd); 119 - } 120 - 121 - /** 122 - * scm_command_to_response() - Get a pointer to a scm_response 123 - * @cmd: command 124 - * 125 - * Returns a pointer to a response for a command. 126 - */ 127 - static inline struct scm_response *scm_command_to_response( 128 - const struct scm_command *cmd) 129 - { 130 - return (void *)cmd + le32_to_cpu(cmd->resp_hdr_offset); 131 - } 132 - 133 - /** 134 - * scm_get_command_buffer() - Get a pointer to a command buffer 135 - * @cmd: command 136 - * 137 - * Returns a pointer to the command buffer of a command. 138 - */ 139 - static inline void *scm_get_command_buffer(const struct scm_command *cmd) 140 - { 141 - return (void *)cmd->buf; 142 - } 143 - 144 - /** 145 - * scm_get_response_buffer() - Get a pointer to a response buffer 146 - * @rsp: response 147 - * 148 - * Returns a pointer to a response buffer of a response. 149 - */ 150 - static inline void *scm_get_response_buffer(const struct scm_response *rsp) 151 - { 152 - return (void *)rsp + le32_to_cpu(rsp->buf_offset); 153 - } 154 - 155 - static int scm_remap_error(int err) 156 - { 157 - pr_err("scm_call failed with error code %d\n", err); 158 - switch (err) { 159 - case SCM_ERROR: 160 - return -EIO; 161 - case SCM_EINVAL_ADDR: 162 - case SCM_EINVAL_ARG: 163 - return -EINVAL; 164 - case SCM_EOPNOTSUPP: 165 - return -EOPNOTSUPP; 166 - case SCM_ENOMEM: 167 - return -ENOMEM; 168 - } 169 - return -EINVAL; 170 - } 171 - 172 - static u32 smc(u32 cmd_addr) 173 - { 174 - int context_id; 175 - register u32 r0 asm("r0") = 1; 176 - register u32 r1 asm("r1") = (u32)&context_id; 177 - register u32 r2 asm("r2") = cmd_addr; 178 - do { 179 - asm volatile( 180 - __asmeq("%0", "r0") 181 - __asmeq("%1", "r0") 182 - __asmeq("%2", "r1") 183 - __asmeq("%3", "r2") 184 - #ifdef REQUIRES_SEC 185 - ".arch_extension sec\n" 186 - #endif 187 - "smc #0 @ switch to secure world\n" 188 - : "=r" (r0) 189 - : "r" (r0), "r" (r1), "r" (r2) 190 - : "r3"); 191 - } while (r0 == SCM_INTERRUPTED); 192 - 193 - return r0; 194 - } 195 - 196 - static int __scm_call(const struct scm_command *cmd) 197 - { 198 - int ret; 199 - u32 cmd_addr = virt_to_phys(cmd); 200 - 201 - /* 202 - * Flush the command buffer so that the secure world sees 203 - * the correct data. 204 - */ 205 - __cpuc_flush_dcache_area((void *)cmd, cmd->len); 206 - outer_flush_range(cmd_addr, cmd_addr + cmd->len); 207 - 208 - ret = smc(cmd_addr); 209 - if (ret < 0) 210 - ret = scm_remap_error(ret); 211 - 212 - return ret; 213 - } 214 - 215 - static void scm_inv_range(unsigned long start, unsigned long end) 216 - { 217 - u32 cacheline_size, ctr; 218 - 219 - asm volatile("mrc p15, 0, %0, c0, c0, 1" : "=r" (ctr)); 220 - cacheline_size = 4 << ((ctr >> 16) & 0xf); 221 - 222 - start = round_down(start, cacheline_size); 223 - end = round_up(end, cacheline_size); 224 - outer_inv_range(start, end); 225 - while (start < end) { 226 - asm ("mcr p15, 0, %0, c7, c6, 1" : : "r" (start) 227 - : "memory"); 228 - start += cacheline_size; 229 - } 230 - dsb(); 231 - isb(); 232 - } 233 - 234 - /** 235 - * scm_call() - Send an SCM command 236 - * @svc_id: service identifier 237 - * @cmd_id: command identifier 238 - * @cmd_buf: command buffer 239 - * @cmd_len: length of the command buffer 240 - * @resp_buf: response buffer 241 - * @resp_len: length of the response buffer 242 - * 243 - * Sends a command to the SCM and waits for the command to finish processing. 244 - * 245 - * A note on cache maintenance: 246 - * Note that any buffers that are expected to be accessed by the secure world 247 - * must be flushed before invoking scm_call and invalidated in the cache 248 - * immediately after scm_call returns. Cache maintenance on the command and 249 - * response buffers is taken care of by scm_call; however, callers are 250 - * responsible for any other cached buffers passed over to the secure world.
251 - */ 252 - int scm_call(u32 svc_id, u32 cmd_id, const void *cmd_buf, size_t cmd_len, 253 - void *resp_buf, size_t resp_len) 254 - { 255 - int ret; 256 - struct scm_command *cmd; 257 - struct scm_response *rsp; 258 - unsigned long start, end; 259 - 260 - cmd = alloc_scm_command(cmd_len, resp_len); 261 - if (!cmd) 262 - return -ENOMEM; 263 - 264 - cmd->id = cpu_to_le32((svc_id << 10) | cmd_id); 265 - if (cmd_buf) 266 - memcpy(scm_get_command_buffer(cmd), cmd_buf, cmd_len); 267 - 268 - mutex_lock(&scm_lock); 269 - ret = __scm_call(cmd); 270 - mutex_unlock(&scm_lock); 271 - if (ret) 272 - goto out; 273 - 274 - rsp = scm_command_to_response(cmd); 275 - start = (unsigned long)rsp; 276 - 277 - do { 278 - scm_inv_range(start, start + sizeof(*rsp)); 279 - } while (!rsp->is_complete); 280 - 281 - end = (unsigned long)scm_get_response_buffer(rsp) + resp_len; 282 - scm_inv_range(start, end); 283 - 284 - if (resp_buf) 285 - memcpy(resp_buf, scm_get_response_buffer(rsp), resp_len); 286 - out: 287 - free_scm_command(cmd); 288 - return ret; 289 - } 290 - EXPORT_SYMBOL(scm_call); 291 - 292 - u32 scm_get_version(void) 293 - { 294 - int context_id; 295 - static u32 version = -1; 296 - register u32 r0 asm("r0"); 297 - register u32 r1 asm("r1"); 298 - 299 - if (version != -1) 300 - return version; 301 - 302 - mutex_lock(&scm_lock); 303 - 304 - r0 = 0x1 << 8; 305 - r1 = (u32)&context_id; 306 - do { 307 - asm volatile( 308 - __asmeq("%0", "r0") 309 - __asmeq("%1", "r1") 310 - __asmeq("%2", "r0") 311 - __asmeq("%3", "r1") 312 - #ifdef REQUIRES_SEC 313 - ".arch_extension sec\n" 314 - #endif 315 - "smc #0 @ switch to secure world\n" 316 - : "=r" (r0), "=r" (r1) 317 - : "r" (r0), "r" (r1) 318 - : "r2", "r3"); 319 - } while (r0 == SCM_INTERRUPTED); 320 - 321 - version = r1; 322 - mutex_unlock(&scm_lock); 323 - 324 - return version; 325 - } 326 - EXPORT_SYMBOL(scm_get_version);
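The deleted scm.c above computes the command/response buffer offsets in alloc_scm_command() and packs the command id in scm_call() as (svc_id << 10) | cmd_id. A minimal sketch of that arithmetic in Python, not kernel code; the 16- and 12-byte header sizes stand in for sizeof(struct scm_command) (four __le32 fields, buf[0] contributing nothing) and sizeof(struct scm_response) (three __le32 fields):

```python
# Model of the offset arithmetic in alloc_scm_command() and the
# command-id packing in scm_call(), for illustration only.
SCM_COMMAND_HDR = 16   # assumed: 4 * sizeof(__le32), buf[0] is zero-sized
SCM_RESPONSE_HDR = 12  # assumed: 3 * sizeof(__le32)

def scm_command_layout(cmd_size, resp_size):
    """Return (len, buf_offset, resp_hdr_offset) as alloc_scm_command() would."""
    total = SCM_COMMAND_HDR + SCM_RESPONSE_HDR + cmd_size + resp_size
    buf_offset = SCM_COMMAND_HDR             # offsetof(struct scm_command, buf)
    resp_hdr_offset = buf_offset + cmd_size  # response header follows command buffer
    return total, buf_offset, resp_hdr_offset

def scm_command_id(svc_id, cmd_id):
    """Pack service and command ids as scm_call() does: (svc_id << 10) | cmd_id."""
    return (svc_id << 10) | cmd_id
```

For example, an 8-byte command with a 4-byte response yields a 40-byte allocation with the response header at offset 24, and SCM_SVC_BOOT (0x1) with command 0x2 packs to 0x402.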
-25
arch/arm/mach-qcom/scm.h
-/* Copyright (c) 2010, Code Aurora Forum. All rights reserved.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 and
- * only version 2 as published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- */
-#ifndef __MACH_SCM_H
-#define __MACH_SCM_H
-
-#define SCM_SVC_BOOT		0x1
-#define SCM_SVC_PIL		0x2
-
-extern int scm_call(u32 svc_id, u32 cmd_id, const void *cmd_buf, size_t cmd_len,
-		void *resp_buf, size_t resp_len);
-
-#define SCM_VERSION(major, minor) (((major) << 16) | ((minor) & 0xFF))
-
-extern u32 scm_get_version(void);
-
-#endif
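The SCM_VERSION() macro in the removed scm.h puts the major number in the top 16 bits but masks the minor to 8 bits. A one-line Python model of the macro, shown only to make the masking behavior explicit:

```python
# Model of SCM_VERSION() from the deleted scm.h:
# (((major) << 16) | ((minor) & 0xFF)). Note minor is truncated to
# 8 bits even though the low half-word is 16 bits wide.
def scm_version(major, minor):
    return (major << 16) | (minor & 0xFF)
```

So scm_version(2, 0x1FF) silently drops the minor's high bits and yields 0x200FF.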
+2 -2
arch/arm/mach-vexpress/Kconfig
···
 config ARCH_VEXPRESS_DCSCB
 	bool "Dual Cluster System Control Block (DCSCB) support"
 	depends on MCPM
-	select ARM_CCI
+	select ARM_CCI400_PORT_CTRL
 	help
 	  Support for the Dual Cluster System Configuration Block (DCSCB).
 	  This is needed to provide CPU and cluster power management
···
 config ARCH_VEXPRESS_TC2_PM
 	bool "Versatile Express TC2 power management"
 	depends on MCPM
-	select ARM_CCI
+	select ARM_CCI400_PORT_CTRL
 	select ARCH_VEXPRESS_SPC
 	select ARM_CPU_SUSPEND
 	help
+27
arch/arm64/include/asm/arm-cci.h
+/*
+ * arch/arm64/include/asm/arm-cci.h
+ *
+ * Copyright (C) 2015 ARM Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef __ASM_ARM_CCI_H
+#define __ASM_ARM_CCI_H
+
+static inline bool platform_has_secure_cci_access(void)
+{
+	return false;
+}
+
+#endif
+53 -20
drivers/bus/Kconfig
···
 
 menu "Bus devices"
 
+config ARM_CCI
+	bool
+
+config ARM_CCI400_COMMON
+	bool
+	select ARM_CCI
+
+config ARM_CCI400_PMU
+	bool "ARM CCI400 PMU support"
+	default y
+	depends on ARM || ARM64
+	depends on HW_PERF_EVENTS
+	select ARM_CCI400_COMMON
+	help
+	  Support for PMU events monitoring on the ARM CCI cache coherent
+	  interconnect.
+
+	  If unsure, say Y
+
+config ARM_CCI400_PORT_CTRL
+	bool
+	depends on ARM && OF && CPU_V7
+	select ARM_CCI400_COMMON
+	help
+	  Low level power management driver for CCI400 cache coherent
+	  interconnect for ARM platforms.
+
+config ARM_CCN
+	bool "ARM CCN driver support"
+	depends on ARM || ARM64
+	depends on PERF_EVENTS
+	help
+	  PMU (perf) driver supporting the ARM CCN (Cache Coherent Network)
+	  interconnect.
+
 config BRCMSTB_GISB_ARB
 	bool "Broadcom STB GISB bus arbiter"
 	depends on ARM || MIPS
···
 	  Driver needed for the MBus configuration on Marvell EBU SoCs
 	  (Kirkwood, Dove, Orion5x, MV78XX0 and Armada 370/XP).
 
+config OMAP_INTERCONNECT
+	tristate "OMAP INTERCONNECT DRIVER"
+	depends on ARCH_OMAP2PLUS
+
+	help
+	  Driver to enable OMAP interconnect error handling driver.
+
 config OMAP_OCP2SCP
 	tristate "OMAP OCP2SCP DRIVER"
 	depends on ARCH_OMAP2PLUS
···
 	  OCP2SCP and in OMAP5, both USB PHY and SATA PHY is connected via
 	  OCP2SCP.
 
-config OMAP_INTERCONNECT
-	tristate "OMAP INTERCONNECT DRIVER"
-	depends on ARCH_OMAP2PLUS
-
+config SIMPLE_PM_BUS
+	bool "Simple Power-Managed Bus Driver"
+	depends on OF && PM
+	depends on ARCH_SHMOBILE || COMPILE_TEST
 	help
-	  Driver to enable OMAP interconnect error handling driver.
-
-config ARM_CCI
-	bool "ARM CCI driver support"
-	depends on ARM && OF && CPU_V7
-	help
-	  Driver supporting the CCI cache coherent interconnect for ARM
-	  platforms.
-
-config ARM_CCN
-	bool "ARM CCN driver support"
-	depends on ARM || ARM64
-	depends on PERF_EVENTS
-	help
-	  PMU (perf) driver supporting the ARM CCN (Cache Coherent Network)
-	  interconnect.
+	  Driver for transparent busses that don't need a real driver, but
+	  where the bus controller is part of a PM domain, or under the control
+	  of a functional clock, and thus relies on runtime PM for managing
+	  this PM domain and/or clock.
+	  An example of such a bus controller is the Renesas Bus State
+	  Controller (BSC, sometimes called "LBSC within Bus Bridge", or
+	  "External Bus Interface") as found on several Renesas ARM SoCs.
 
 config VEXPRESS_CONFIG
 	bool "Versatile Express configuration bus"
+10 -9
drivers/bus/Makefile
···
 # Makefile for the bus drivers.
 #
 
-obj-$(CONFIG_BRCMSTB_GISB_ARB)	+= brcmstb_gisb.o
-obj-$(CONFIG_IMX_WEIM)		+= imx-weim.o
-obj-$(CONFIG_MIPS_CDMM)		+= mips_cdmm.o
-obj-$(CONFIG_MVEBU_MBUS)	+= mvebu-mbus.o
-obj-$(CONFIG_OMAP_OCP2SCP)	+= omap-ocp2scp.o
-
-# Interconnect bus driver for OMAP SoCs.
-obj-$(CONFIG_OMAP_INTERCONNECT)	+= omap_l3_smx.o omap_l3_noc.o
-
 # Interconnect bus drivers for ARM platforms
 obj-$(CONFIG_ARM_CCI)		+= arm-cci.o
 obj-$(CONFIG_ARM_CCN)		+= arm-ccn.o
 
+obj-$(CONFIG_BRCMSTB_GISB_ARB)	+= brcmstb_gisb.o
+obj-$(CONFIG_IMX_WEIM)		+= imx-weim.o
+obj-$(CONFIG_MIPS_CDMM)		+= mips_cdmm.o
+obj-$(CONFIG_MVEBU_MBUS)	+= mvebu-mbus.o
+
+# Interconnect bus driver for OMAP SoCs.
+obj-$(CONFIG_OMAP_INTERCONNECT)	+= omap_l3_smx.o omap_l3_noc.o
+
+obj-$(CONFIG_OMAP_OCP2SCP)	+= omap-ocp2scp.o
+obj-$(CONFIG_SIMPLE_PM_BUS)	+= simple-pm-bus.o
 obj-$(CONFIG_VEXPRESS_CONFIG)	+= vexpress-config.o
+293 -230
drivers/bus/arm-cci.c
···
 #include <asm/cacheflush.h>
 #include <asm/smp_plat.h>
 
-#define DRIVER_NAME		"CCI-400"
-#define DRIVER_NAME_PMU		DRIVER_NAME " PMU"
+static void __iomem *cci_ctrl_base;
+static unsigned long cci_ctrl_phys;
 
-#define CCI_PORT_CTRL		0x0
-#define CCI_CTRL_STATUS		0xc
-
-#define CCI_ENABLE_SNOOP_REQ	0x1
-#define CCI_ENABLE_DVM_REQ	0x2
-#define CCI_ENABLE_REQ		(CCI_ENABLE_SNOOP_REQ | CCI_ENABLE_DVM_REQ)
-
+#ifdef CONFIG_ARM_CCI400_PORT_CTRL
 struct cci_nb_ports {
 	unsigned int nb_ace;
 	unsigned int nb_ace_lite;
 };
 
-enum cci_ace_port_type {
-	ACE_INVALID_PORT = 0x0,
-	ACE_PORT,
-	ACE_LITE_PORT,
+static const struct cci_nb_ports cci400_ports = {
+	.nb_ace = 2,
+	.nb_ace_lite = 3
 };
 
-struct cci_ace_port {
-	void __iomem *base;
-	unsigned long phys;
-	enum cci_ace_port_type type;
-	struct device_node *dn;
+#define CCI400_PORTS_DATA	(&cci400_ports)
+#else
+#define CCI400_PORTS_DATA	(NULL)
+#endif
+
+static const struct of_device_id arm_cci_matches[] = {
+#ifdef CONFIG_ARM_CCI400_COMMON
+	{.compatible = "arm,cci-400", .data = CCI400_PORTS_DATA },
+#endif
+	{},
 };
 
-static struct cci_ace_port *ports;
-static unsigned int nb_cci_ports;
+#ifdef CONFIG_ARM_CCI400_PMU
 
-static void __iomem *cci_ctrl_base;
-static unsigned long cci_ctrl_phys;
-
-#ifdef CONFIG_HW_PERF_EVENTS
+#define DRIVER_NAME		"CCI-400"
+#define DRIVER_NAME_PMU		DRIVER_NAME " PMU"
 
 #define CCI_PMCR		0x0100
 #define CCI_PID2		0x0fe8
···
 
 #define CCI_PID2_REV_MASK	0xf0
 #define CCI_PID2_REV_SHIFT	4
+
+#define CCI_PMU_EVT_SEL		0x000
+#define CCI_PMU_CNTR		0x004
+#define CCI_PMU_CNTR_CTRL	0x008
+#define CCI_PMU_OVRFLW		0x00c
+
+#define CCI_PMU_OVRFLW_FLAG	1
+
+#define CCI_PMU_CNTR_BASE(idx)	((idx) * SZ_4K)
+
+#define CCI_PMU_CNTR_MASK	((1ULL << 32) -1)
+
+#define CCI_PMU_EVENT_MASK		0xffUL
+#define CCI_PMU_EVENT_SOURCE(event)	((event >> 5) & 0x7)
+#define CCI_PMU_EVENT_CODE(event)	(event & 0x1f)
+
+#define CCI_PMU_MAX_HW_EVENTS 5   /* CCI PMU has 4 counters + 1 cycle counter */
+
+/* Types of interfaces that can generate events */
+enum {
+	CCI_IF_SLAVE,
+	CCI_IF_MASTER,
+	CCI_IF_MAX,
+};
+
+struct event_range {
+	u32 min;
+	u32 max;
+};
+
+struct cci_pmu_hw_events {
+	struct perf_event *events[CCI_PMU_MAX_HW_EVENTS];
+	unsigned long used_mask[BITS_TO_LONGS(CCI_PMU_MAX_HW_EVENTS)];
+	raw_spinlock_t pmu_lock;
+};
+
+struct cci_pmu_model {
+	char *name;
+	struct event_range event_ranges[CCI_IF_MAX];
+};
+
+static struct cci_pmu_model cci_pmu_models[];
+
+struct cci_pmu {
+	void __iomem *base;
+	struct pmu pmu;
+	int nr_irqs;
+	int irqs[CCI_PMU_MAX_HW_EVENTS];
+	unsigned long active_irqs;
+	const struct cci_pmu_model *model;
+	struct cci_pmu_hw_events hw_events;
+	struct platform_device *plat_device;
+	int num_events;
+	atomic_t active_events;
+	struct mutex reserve_mutex;
+	cpumask_t cpus;
+};
+static struct cci_pmu *pmu;
+
+#define to_cci_pmu(c)	(container_of(c, struct cci_pmu, pmu))
 
 /* Port ids */
 #define CCI_PORT_S0	0
···
 #define CCI_REV_R1		1
 #define CCI_REV_R1_PX		5
 
-#define CCI_PMU_EVT_SEL		0x000
-#define CCI_PMU_CNTR		0x004
-#define CCI_PMU_CNTR_CTRL	0x008
-#define CCI_PMU_OVRFLW		0x00c
-
-#define CCI_PMU_OVRFLW_FLAG	1
-
-#define CCI_PMU_CNTR_BASE(idx)	((idx) * SZ_4K)
-
-#define CCI_PMU_CNTR_MASK	((1ULL << 32) -1)
-
 /*
  * Instead of an event id to monitor CCI cycles, a dedicated counter is
  * provided. Use 0xff to represent CCI cycles and hope that no future revisions
···
 enum cci400_perf_events {
 	CCI_PMU_CYCLES = 0xff
 };
-
-#define CCI_PMU_EVENT_MASK		0xff
-#define CCI_PMU_EVENT_SOURCE(event)	((event >> 5) & 0x7)
-#define CCI_PMU_EVENT_CODE(event)	(event & 0x1f)
-
-#define CCI_PMU_MAX_HW_EVENTS 5   /* CCI PMU has 4 counters + 1 cycle counter */
 
 #define CCI_PMU_CYCLE_CNTR_IDX	0
 #define CCI_PMU_CNTR0_IDX	1
···
 #define CCI_REV_R1_MASTER_PORT_MIN_EV	0x00
 #define CCI_REV_R1_MASTER_PORT_MAX_EV	0x11
 
-struct pmu_port_event_ranges {
-	u8 slave_min;
-	u8 slave_max;
-	u8 master_min;
-	u8 master_max;
-};
-
-static struct pmu_port_event_ranges port_event_range[] = {
-	[CCI_REV_R0] = {
-		.slave_min = CCI_REV_R0_SLAVE_PORT_MIN_EV,
-		.slave_max = CCI_REV_R0_SLAVE_PORT_MAX_EV,
-		.master_min = CCI_REV_R0_MASTER_PORT_MIN_EV,
-		.master_max = CCI_REV_R0_MASTER_PORT_MAX_EV,
-	},
-	[CCI_REV_R1] = {
-		.slave_min = CCI_REV_R1_SLAVE_PORT_MIN_EV,
-		.slave_max = CCI_REV_R1_SLAVE_PORT_MAX_EV,
-		.master_min = CCI_REV_R1_MASTER_PORT_MIN_EV,
-		.master_max = CCI_REV_R1_MASTER_PORT_MAX_EV,
-	},
-};
-
-/*
- * Export different PMU names for the different revisions so userspace knows
- * because the event ids are different
- */
-static char *const pmu_names[] = {
-	[CCI_REV_R0] = "CCI_400",
-	[CCI_REV_R1] = "CCI_400_r1",
-};
-
-struct cci_pmu_hw_events {
-	struct perf_event *events[CCI_PMU_MAX_HW_EVENTS];
-	unsigned long used_mask[BITS_TO_LONGS(CCI_PMU_MAX_HW_EVENTS)];
-	raw_spinlock_t pmu_lock;
-};
-
-struct cci_pmu {
-	void __iomem *base;
-	struct pmu pmu;
-	int nr_irqs;
-	int irqs[CCI_PMU_MAX_HW_EVENTS];
-	unsigned long active_irqs;
-	struct pmu_port_event_ranges *port_ranges;
-	struct cci_pmu_hw_events hw_events;
-	struct platform_device *plat_device;
-	int num_events;
-	atomic_t active_events;
-	struct mutex reserve_mutex;
-	cpumask_t cpus;
-};
-static struct cci_pmu *pmu;
-
-#define to_cci_pmu(c)	(container_of(c, struct cci_pmu, pmu))
-
-static bool is_duplicate_irq(int irq, int *irqs, int nr_irqs)
+static int pmu_validate_hw_event(unsigned long hw_event)
 {
-	int i;
+	u8 ev_source = CCI_PMU_EVENT_SOURCE(hw_event);
+	u8 ev_code = CCI_PMU_EVENT_CODE(hw_event);
+	int if_type;
 
-	for (i = 0; i < nr_irqs; i++)
-		if (irq == irqs[i])
-			return true;
+	if (hw_event & ~CCI_PMU_EVENT_MASK)
+		return -ENOENT;
 
-	return false;
+	switch (ev_source) {
+	case CCI_PORT_S0:
+	case CCI_PORT_S1:
+	case CCI_PORT_S2:
+	case CCI_PORT_S3:
+	case CCI_PORT_S4:
+		/* Slave Interface */
+		if_type = CCI_IF_SLAVE;
+		break;
+	case CCI_PORT_M0:
+	case CCI_PORT_M1:
+	case CCI_PORT_M2:
+		/* Master Interface */
+		if_type = CCI_IF_MASTER;
+		break;
+	default:
+		return -ENOENT;
+	}
+
+	if (ev_code >= pmu->model->event_ranges[if_type].min &&
+		ev_code <= pmu->model->event_ranges[if_type].max)
+		return hw_event;
+
+	return -ENOENT;
 }
 
 static int probe_cci_revision(void)
···
 	return CCI_REV_R1;
 }
 
-static struct pmu_port_event_ranges *port_range_by_rev(void)
+static const struct cci_pmu_model *probe_cci_model(struct platform_device *pdev)
 {
-	int rev = probe_cci_revision();
-
-	return &port_event_range[rev];
-}
-
-static int pmu_is_valid_slave_event(u8 ev_code)
-{
-	return pmu->port_ranges->slave_min <= ev_code &&
-		ev_code <= pmu->port_ranges->slave_max;
-}
-
-static int pmu_is_valid_master_event(u8 ev_code)
-{
-	return pmu->port_ranges->master_min <= ev_code &&
-		ev_code <= pmu->port_ranges->master_max;
-}
-
-static int pmu_validate_hw_event(u8 hw_event)
-{
-	u8 ev_source = CCI_PMU_EVENT_SOURCE(hw_event);
-	u8 ev_code = CCI_PMU_EVENT_CODE(hw_event);
-
-	switch (ev_source) {
-	case CCI_PORT_S0:
-	case CCI_PORT_S1:
-	case CCI_PORT_S2:
-	case CCI_PORT_S3:
-	case CCI_PORT_S4:
-		/* Slave Interface */
-		if (pmu_is_valid_slave_event(ev_code))
-			return hw_event;
-		break;
-	case CCI_PORT_M0:
-	case CCI_PORT_M1:
-	case CCI_PORT_M2:
-		/* Master Interface */
-		if (pmu_is_valid_master_event(ev_code))
-			return hw_event;
-		break;
-	}
-
-	return -ENOENT;
+	if (platform_has_secure_cci_access())
+		return &cci_pmu_models[probe_cci_revision()];
+	return NULL;
 }
 
 static int pmu_is_valid_counter(struct cci_pmu *cci_pmu, int idx)
···
 
 static void pmu_set_event(int idx, unsigned long event)
 {
-	event &= CCI_PMU_EVENT_MASK;
 	pmu_write_register(event, idx, CCI_PMU_EVT_SEL);
 }
···
 {
 	struct cci_pmu *cci_pmu = to_cci_pmu(event->pmu);
 	struct hw_perf_event *hw_event = &event->hw;
-	unsigned long cci_event = hw_event->config_base & CCI_PMU_EVENT_MASK;
+	unsigned long cci_event = hw_event->config_base;
 	int idx;
 
 	if (cci_event == CCI_PMU_CYCLES) {
···
 static int pmu_map_event(struct perf_event *event)
 {
 	int mapping;
-	u8 config = event->attr.config & CCI_PMU_EVENT_MASK;
+	unsigned long config = event->attr.config;
 
 	if (event->attr.type < PERF_TYPE_MAX)
 		return -ENOENT;
···
 }
 
 static int
-validate_event(struct cci_pmu_hw_events *hw_events,
-	       struct perf_event *event)
+validate_event(struct pmu *cci_pmu,
+	       struct cci_pmu_hw_events *hw_events,
+	       struct perf_event *event)
 {
 	if (is_software_event(event))
 		return 1;
+
+	/*
+	 * Reject groups spanning multiple HW PMUs (e.g. CPU + CCI). The
+	 * core perf code won't check that the pmu->ctx == leader->ctx
+	 * until after pmu->event_init(event).
+	 */
+	if (event->pmu != cci_pmu)
+		return 0;
 
 	if (event->state < PERF_EVENT_STATE_OFF)
 		return 1;
···
 		.used_mask = CPU_BITS_NONE,
 	};
 
-	if (!validate_event(&fake_pmu, leader))
+	if (!validate_event(event->pmu, &fake_pmu, leader))
 		return -EINVAL;
 
 	list_for_each_entry(sibling, &leader->sibling_list, group_entry) {
-		if (!validate_event(&fake_pmu, sibling))
+		if (!validate_event(event->pmu, &fake_pmu, sibling))
 			return -EINVAL;
 	}
 
-	if (!validate_event(&fake_pmu, event))
+	if (!validate_event(event->pmu, &fake_pmu, event))
 		return -EINVAL;
 
 	return 0;
···
 
 static int cci_pmu_init(struct cci_pmu *cci_pmu, struct platform_device *pdev)
 {
-	char *name = pmu_names[probe_cci_revision()];
+	char *name = cci_pmu->model->name;
 	cci_pmu->pmu = (struct pmu) {
-		.name		= pmu_names[probe_cci_revision()],
+		.name		= cci_pmu->model->name,
 		.task_ctx_nr	= perf_invalid_context,
 		.pmu_enable	= cci_pmu_enable,
 		.pmu_disable	= cci_pmu_disable,
···
 	.priority	= CPU_PRI_PERF + 1,
 };
 
+static struct cci_pmu_model cci_pmu_models[] = {
+	[CCI_REV_R0] = {
+		.name = "CCI_400",
+		.event_ranges = {
+			[CCI_IF_SLAVE] = {
+				CCI_REV_R0_SLAVE_PORT_MIN_EV,
+				CCI_REV_R0_SLAVE_PORT_MAX_EV,
+			},
+			[CCI_IF_MASTER] = {
+				CCI_REV_R0_MASTER_PORT_MIN_EV,
+				CCI_REV_R0_MASTER_PORT_MAX_EV,
+			},
+		},
+	},
+	[CCI_REV_R1] = {
+		.name = "CCI_400_r1",
+		.event_ranges = {
+			[CCI_IF_SLAVE] = {
+				CCI_REV_R1_SLAVE_PORT_MIN_EV,
+				CCI_REV_R1_SLAVE_PORT_MAX_EV,
+			},
+			[CCI_IF_MASTER] = {
+				CCI_REV_R1_MASTER_PORT_MIN_EV,
+				CCI_REV_R1_MASTER_PORT_MAX_EV,
+			},
+		},
+	},
+};
+
 static const struct of_device_id arm_cci_pmu_matches[] = {
 	{
 		.compatible = "arm,cci-400-pmu",
+		.data	= NULL,
+	},
+	{
+		.compatible = "arm,cci-400-pmu,r0",
+		.data	= &cci_pmu_models[CCI_REV_R0],
+	},
+	{
+		.compatible = "arm,cci-400-pmu,r1",
+		.data	= &cci_pmu_models[CCI_REV_R1],
 	},
 	{},
 };
+
+static inline const struct cci_pmu_model *get_cci_model(struct platform_device *pdev)
+{
+	const struct of_device_id *match = of_match_node(arm_cci_pmu_matches,
+							pdev->dev.of_node);
+	if (!match)
+		return NULL;
+	if (match->data)
+		return match->data;
+
+	dev_warn(&pdev->dev, "DEPRECATED compatible property,"
+			 "requires secure access to CCI registers");
+	return probe_cci_model(pdev);
+}
+
+static bool is_duplicate_irq(int irq, int *irqs, int nr_irqs)
+{
+	int i;
+
+	for (i = 0; i < nr_irqs; i++)
+		if (irq == irqs[i])
+			return true;
+
+	return false;
+}
 
 static int cci_pmu_probe(struct platform_device *pdev)
 {
 	struct resource *res;
 	int i, ret, irq;
+	const struct cci_pmu_model *model;
+
+	model = get_cci_model(pdev);
+	if (!model) {
+		dev_warn(&pdev->dev, "CCI PMU version not supported\n");
+		return -ENODEV;
+	}
 
 	pmu = devm_kzalloc(&pdev->dev, sizeof(*pmu), GFP_KERNEL);
 	if (!pmu)
 		return -ENOMEM;
 
+	pmu->model = model;
 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
 	pmu->base = devm_ioremap_resource(&pdev->dev, res);
 	if (IS_ERR(pmu->base))
···
 		return -EINVAL;
 	}
 
-	pmu->port_ranges = port_range_by_rev();
-	if (!pmu->port_ranges) {
-		dev_warn(&pdev->dev, "CCI PMU version not supported\n");
-		return -EINVAL;
-	}
-
 	raw_spin_lock_init(&pmu->hw_events.pmu_lock);
 	mutex_init(&pmu->reserve_mutex);
 	atomic_set(&pmu->active_events, 0);
···
 	if (ret)
 		return ret;
 
+	pr_info("ARM %s PMU driver probed", pmu->model->name);
 	return 0;
 }
···
 	return of_platform_populate(pdev->dev.of_node, NULL, NULL, &pdev->dev);
 }
 
-#endif /* CONFIG_HW_PERF_EVENTS */
+static struct platform_driver cci_pmu_driver = {
+	.driver = {
+		   .name = DRIVER_NAME_PMU,
+		   .of_match_table = arm_cci_pmu_matches,
+		  },
+	.probe = cci_pmu_probe,
+};
+
+static struct platform_driver cci_platform_driver = {
+	.driver = {
+		   .name = DRIVER_NAME,
+		   .of_match_table = arm_cci_matches,
+		  },
+	.probe = cci_platform_probe,
+};
+
+static int __init cci_platform_init(void)
+{
+	int ret;
+
+	ret = platform_driver_register(&cci_pmu_driver);
+	if (ret)
+		return ret;
+
+	return platform_driver_register(&cci_platform_driver);
+}
+
+#else /* !CONFIG_ARM_CCI400_PMU */
+
+static int __init cci_platform_init(void)
+{
+	return 0;
+}
+
+#endif /* CONFIG_ARM_CCI400_PMU */
+
+#ifdef CONFIG_ARM_CCI400_PORT_CTRL
+
+#define CCI_PORT_CTRL		0x0
+#define CCI_CTRL_STATUS		0xc
+
+#define CCI_ENABLE_SNOOP_REQ	0x1
+#define CCI_ENABLE_DVM_REQ	0x2
+#define CCI_ENABLE_REQ		(CCI_ENABLE_SNOOP_REQ | CCI_ENABLE_DVM_REQ)
+
+enum cci_ace_port_type {
+	ACE_INVALID_PORT = 0x0,
+	ACE_PORT,
+	ACE_LITE_PORT,
+};
+
+struct cci_ace_port {
+	void __iomem *base;
+	unsigned long phys;
+	enum cci_ace_port_type type;
+	struct device_node *dn;
+};
+
+static struct cci_ace_port *ports;
+static unsigned int nb_cci_ports;
 
 struct cpu_port {
 	u64 mpidr;
···
 }
 EXPORT_SYMBOL_GPL(__cci_control_port_by_index);
 
-static const struct cci_nb_ports cci400_ports = {
-	.nb_ace = 2,
-	.nb_ace_lite = 3
-};
-
-static const struct of_device_id arm_cci_matches[] = {
-	{.compatible = "arm,cci-400", .data = &cci400_ports },
-	{},
-};
-
 static const struct of_device_id arm_cci_ctrl_if_matches[] = {
 	{.compatible = "arm,cci-400-ctrl-if", },
 	{},
 };
 
-static int cci_probe(void)
+static int cci_probe_ports(struct device_node *np)
 {
 	struct cci_nb_ports const *cci_config;
 	int ret, i, nb_ace = 0, nb_ace_lite = 0;
-	struct device_node *np, *cp;
+	struct device_node *cp;
 	struct resource res;
 	const char *match_str;
 	bool is_ace;
 
-	np = of_find_matching_node(NULL, arm_cci_matches);
-	if (!np)
-		return -ENODEV;
-
-	if (!of_device_is_available(np))
-		return -ENODEV;
 
 	cci_config = of_match_node(arm_cci_matches, np)->data;
 	if (!cci_config)
···
 	ports = kcalloc(nb_cci_ports, sizeof(*ports), GFP_KERNEL);
 	if (!ports)
 		return -ENOMEM;
-
-	ret = of_address_to_resource(np, 0, &res);
-	if (!ret) {
-		cci_ctrl_base = ioremap(res.start, resource_size(&res));
-		cci_ctrl_phys = res.start;
-	}
-	if (ret || !cci_ctrl_base) {
-		WARN(1, "unable to ioremap CCI ctrl\n");
-		ret = -ENXIO;
-		goto memalloc_err;
-	}
 
 	for_each_child_of_node(np, cp) {
 		if (!of_match_node(arm_cci_ctrl_if_matches, cp))
···
 	sync_cache_w(&cpu_port);
 	__sync_cache_range_w(ports, sizeof(*ports) * nb_cci_ports);
 	pr_info("ARM CCI driver probed\n");
+
 	return 0;
+}
+#else /* !CONFIG_ARM_CCI400_PORT_CTRL */
+static inline int cci_probe_ports(struct device_node *np)
+{
+	return 0;
+}
+#endif /* CONFIG_ARM_CCI400_PORT_CTRL */
 
-memalloc_err:
+static int cci_probe(void)
+{
+	int ret;
+	struct device_node *np;
+	struct resource res;
 
-	kfree(ports);
-	return ret;
+	np = of_find_matching_node(NULL, arm_cci_matches);
+	if(!np || !of_device_is_available(np))
+		return -ENODEV;
+
+	ret = of_address_to_resource(np, 0, &res);
+	if (!ret) {
+		cci_ctrl_base = ioremap(res.start, resource_size(&res));
+		cci_ctrl_phys = res.start;
+	}
+	if (ret || !cci_ctrl_base) {
+		WARN(1, "unable to ioremap CCI ctrl\n");
+		return -ENXIO;
+	}
+
+	return cci_probe_ports(np);
 }
 
 static int cci_init_status = -EAGAIN;
···
 	return cci_init_status;
 }
 
-#ifdef CONFIG_HW_PERF_EVENTS
-static struct platform_driver cci_pmu_driver = {
-	.driver = {
-		   .name = DRIVER_NAME_PMU,
-		   .of_match_table = arm_cci_pmu_matches,
-		  },
-	.probe = cci_pmu_probe,
-};
-
-static struct platform_driver cci_platform_driver = {
-	.driver = {
-		   .name = DRIVER_NAME,
-		   .of_match_table = arm_cci_matches,
-		  },
-	.probe = cci_platform_probe,
-};
-
-static int __init cci_platform_init(void)
-{
-	int ret;
-
-	ret = platform_driver_register(&cci_pmu_driver);
-	if (ret)
-		return ret;
-
-	return platform_driver_register(&cci_platform_driver);
-}
-
-#else
-
-static int __init cci_platform_init(void)
-{
-	return 0;
-}
-
-#endif
 /*
  * To sort out early init calls ordering a helper function is provided to
  * check if the CCI driver has beed initialized. Function check if the driver
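The reworked pmu_validate_hw_event() in the arm-cci.c diff above decodes an 8-bit event id into a 3-bit interface source (bits [7:5]) and a 5-bit event code (bits [4:0]), then checks the code against the per-revision range table. A hedged Python sketch of just the decoding step; the mapping of source values 0-4 to slave ports S0-S4 follows the CCI_PORT_S0 define visible in the diff, while the assumption that values 5-7 are the master ports M0-M2 is illustrative:

```python
# Model of the CCI-400 PMU event encoding used by pmu_validate_hw_event():
# bits [7:5] select the interface (port), bits [4:0] the event code.
CCI_PMU_EVENT_MASK = 0xFF

def decode_event(event):
    if event & ~CCI_PMU_EVENT_MASK:
        raise ValueError("event id wider than 8 bits")
    source = (event >> 5) & 0x7   # CCI_PMU_EVENT_SOURCE()
    code = event & 0x1F           # CCI_PMU_EVENT_CODE()
    # Sources 0-4 are slave interfaces (S0-S4); 5-7 are assumed here to be
    # the master interfaces (M0-M2), matching the switch in the driver.
    if_type = "slave" if source <= 4 else "master"
    return source, code, if_type
```

For instance, event id 0xE5 decodes to source 7 with code 0x05, which the driver would then validate against the master-interface range (0x00-0x11 on r1, per the defines in this diff).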
+58
drivers/bus/simple-pm-bus.c
···
1 + /*
2 +  * Simple Power-Managed Bus Driver
3 +  *
4 +  * Copyright (C) 2014-2015 Glider bvba
5 +  *
6 +  * This file is subject to the terms and conditions of the GNU General Public
7 +  * License. See the file "COPYING" in the main directory of this archive
8 +  * for more details.
9 +  */
10 + 
11 + #include <linux/module.h>
12 + #include <linux/of_platform.h>
13 + #include <linux/platform_device.h>
14 + #include <linux/pm_runtime.h>
15 + 
16 + 
17 + static int simple_pm_bus_probe(struct platform_device *pdev)
18 + {
19 + 	struct device_node *np = pdev->dev.of_node;
20 + 
21 + 	dev_dbg(&pdev->dev, "%s\n", __func__);
22 + 
23 + 	pm_runtime_enable(&pdev->dev);
24 + 
25 + 	if (np)
26 + 		of_platform_populate(np, NULL, NULL, &pdev->dev);
27 + 
28 + 	return 0;
29 + }
30 + 
31 + static int simple_pm_bus_remove(struct platform_device *pdev)
32 + {
33 + 	dev_dbg(&pdev->dev, "%s\n", __func__);
34 + 
35 + 	pm_runtime_disable(&pdev->dev);
36 + 	return 0;
37 + }
38 + 
39 + static const struct of_device_id simple_pm_bus_of_match[] = {
40 + 	{ .compatible = "simple-pm-bus", },
41 + 	{ /* sentinel */ }
42 + };
43 + MODULE_DEVICE_TABLE(of, simple_pm_bus_of_match);
44 + 
45 + static struct platform_driver simple_pm_bus_driver = {
46 + 	.probe = simple_pm_bus_probe,
47 + 	.remove = simple_pm_bus_remove,
48 + 	.driver = {
49 + 		.name = "simple-pm-bus",
50 + 		.of_match_table = simple_pm_bus_of_match,
51 + 	},
52 + };
53 + 
54 + module_platform_driver(simple_pm_bus_driver);
55 + 
56 + MODULE_DESCRIPTION("Simple Power-Managed Bus Driver");
57 + MODULE_AUTHOR("Geert Uytterhoeven <geert+renesas@glider.be>");
58 + MODULE_LICENSE("GPL v2");
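The driver above binds by compatible string alone, enables runtime PM on the bus device, and populates its children. A device-tree sketch of a node this driver would bind to (the node label, address range, and clock/power-domain phandles below are illustrative assumptions, not taken from this merge):

```dts
/* Hypothetical example node -- addresses and phandles are made up. */
bsc: bus@fec10000 {
	compatible = "simple-pm-bus";
	#address-cells = <1>;
	#size-cells = <1>;
	ranges = <0 0xfec10000 0x400>;
	clocks = <&zb_clk>;		/* assumed clock provider */
	power-domains = <&pd_a4s>;	/* assumed PM domain */
};
```

With runtime PM enabled on the bus device, the bus clock and power domain are kept up whenever any populated child device is active.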
+5
drivers/clocksource/Kconfig
···
143 143 	select CLKSRC_OF if OF
144 144 	def_bool SOC_AT91SAM9 || SOC_SAMA5
145 145 
146 + config ATMEL_ST
147 + 	bool
148 + 	select CLKSRC_OF
149 + 	select MFD_SYSCON
150 + 
146 151 config CLKSRC_METAG_GENERIC
147 152 	def_bool y if METAG
148 153 	help
+1
drivers/clocksource/Makefile
···
1 1 obj-$(CONFIG_CLKSRC_OF) += clksrc-of.o
2 2 obj-$(CONFIG_ATMEL_PIT) += timer-atmel-pit.o
3 + obj-$(CONFIG_ATMEL_ST) += timer-atmel-st.o
3 4 obj-$(CONFIG_ATMEL_TCB_CLKSRC) += tcb_clksrc.o
4 5 obj-$(CONFIG_X86_PM_TIMER) += acpi_pm.o
5 6 obj-$(CONFIG_SCx200HR_TIMER) += scx200_hrt.o
+4
drivers/firmware/Kconfig
···
132 132 	  detect iSCSI boot parameters dynamically during system boot, say Y.
133 133 	  Otherwise, say N.
134 134 
135 + config QCOM_SCM
136 + 	bool
137 + 	depends on ARM || ARM64
138 + 
135 139 source "drivers/firmware/google/Kconfig"
136 140 source "drivers/firmware/efi/Kconfig"
137 141 
+2
drivers/firmware/Makefile
···
11 11 obj-$(CONFIG_ISCSI_IBFT_FIND) += iscsi_ibft_find.o
12 12 obj-$(CONFIG_ISCSI_IBFT) += iscsi_ibft.o
13 13 obj-$(CONFIG_FIRMWARE_MEMMAP) += memmap.o
14 + obj-$(CONFIG_QCOM_SCM) += qcom_scm.o
15 + CFLAGS_qcom_scm.o :=$(call as-instr,.arch_extension sec,-DREQUIRES_SEC=1)
14 16 
15 17 obj-$(CONFIG_GOOGLE_FIRMWARE) += google/
16 18 obj-$(CONFIG_EFI) += efi/
+494
drivers/firmware/qcom_scm.c
··· 1 + /* Copyright (c) 2010, Code Aurora Forum. All rights reserved. 2 + * Copyright (C) 2015 Linaro Ltd. 3 + * 4 + * This program is free software; you can redistribute it and/or modify 5 + * it under the terms of the GNU General Public License version 2 and 6 + * only version 2 as published by the Free Software Foundation. 7 + * 8 + * This program is distributed in the hope that it will be useful, 9 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 10 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 11 + * GNU General Public License for more details. 12 + * 13 + * You should have received a copy of the GNU General Public License 14 + * along with this program; if not, write to the Free Software 15 + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 16 + * 02110-1301, USA. 17 + */ 18 + 19 + #include <linux/slab.h> 20 + #include <linux/io.h> 21 + #include <linux/module.h> 22 + #include <linux/mutex.h> 23 + #include <linux/errno.h> 24 + #include <linux/err.h> 25 + #include <linux/qcom_scm.h> 26 + 27 + #include <asm/outercache.h> 28 + #include <asm/cacheflush.h> 29 + 30 + 31 + #define QCOM_SCM_ENOMEM -5 32 + #define QCOM_SCM_EOPNOTSUPP -4 33 + #define QCOM_SCM_EINVAL_ADDR -3 34 + #define QCOM_SCM_EINVAL_ARG -2 35 + #define QCOM_SCM_ERROR -1 36 + #define QCOM_SCM_INTERRUPTED 1 37 + 38 + #define QCOM_SCM_FLAG_COLDBOOT_CPU0 0x00 39 + #define QCOM_SCM_FLAG_COLDBOOT_CPU1 0x01 40 + #define QCOM_SCM_FLAG_COLDBOOT_CPU2 0x08 41 + #define QCOM_SCM_FLAG_COLDBOOT_CPU3 0x20 42 + 43 + #define QCOM_SCM_FLAG_WARMBOOT_CPU0 0x04 44 + #define QCOM_SCM_FLAG_WARMBOOT_CPU1 0x02 45 + #define QCOM_SCM_FLAG_WARMBOOT_CPU2 0x10 46 + #define QCOM_SCM_FLAG_WARMBOOT_CPU3 0x40 47 + 48 + struct qcom_scm_entry { 49 + int flag; 50 + void *entry; 51 + }; 52 + 53 + static struct qcom_scm_entry qcom_scm_wb[] = { 54 + { .flag = QCOM_SCM_FLAG_WARMBOOT_CPU0 }, 55 + { .flag = QCOM_SCM_FLAG_WARMBOOT_CPU1 }, 56 + { .flag = QCOM_SCM_FLAG_WARMBOOT_CPU2 }, 57 + { 
.flag = QCOM_SCM_FLAG_WARMBOOT_CPU3 }, 58 + }; 59 + 60 + static DEFINE_MUTEX(qcom_scm_lock); 61 + 62 + /** 63 + * struct qcom_scm_command - one SCM command buffer 64 + * @len: total available memory for command and response 65 + * @buf_offset: start of command buffer 66 + * @resp_hdr_offset: start of response buffer 67 + * @id: command to be executed 68 + * @buf: buffer returned from qcom_scm_get_command_buffer() 69 + * 70 + * An SCM command is laid out in memory as follows: 71 + * 72 + * ------------------- <--- struct qcom_scm_command 73 + * | command header | 74 + * ------------------- <--- qcom_scm_get_command_buffer() 75 + * | command buffer | 76 + * ------------------- <--- struct qcom_scm_response and 77 + * | response header | qcom_scm_command_to_response() 78 + * ------------------- <--- qcom_scm_get_response_buffer() 79 + * | response buffer | 80 + * ------------------- 81 + * 82 + * There can be arbitrary padding between the headers and buffers so 83 + * you should always use the appropriate qcom_scm_get_*_buffer() routines 84 + * to access the buffers in a safe manner. 85 + */ 86 + struct qcom_scm_command { 87 + __le32 len; 88 + __le32 buf_offset; 89 + __le32 resp_hdr_offset; 90 + __le32 id; 91 + __le32 buf[0]; 92 + }; 93 + 94 + /** 95 + * struct qcom_scm_response - one SCM response buffer 96 + * @len: total available memory for response 97 + * @buf_offset: start of response data relative to start of qcom_scm_response 98 + * @is_complete: indicates if the command has finished processing 99 + */ 100 + struct qcom_scm_response { 101 + __le32 len; 102 + __le32 buf_offset; 103 + __le32 is_complete; 104 + }; 105 + 106 + /** 107 + * alloc_qcom_scm_command() - Allocate an SCM command 108 + * @cmd_size: size of the command buffer 109 + * @resp_size: size of the response buffer 110 + * 111 + * Allocate an SCM command, including enough room for the command 112 + * and response headers as well as the command and response buffers. 
113 + * 114 + * Returns a valid &qcom_scm_command on success or %NULL if the allocation fails. 115 + */ 116 + static struct qcom_scm_command *alloc_qcom_scm_command(size_t cmd_size, size_t resp_size) 117 + { 118 + struct qcom_scm_command *cmd; 119 + size_t len = sizeof(*cmd) + sizeof(struct qcom_scm_response) + cmd_size + 120 + resp_size; 121 + u32 offset; 122 + 123 + cmd = kzalloc(PAGE_ALIGN(len), GFP_KERNEL); 124 + if (cmd) { 125 + cmd->len = cpu_to_le32(len); 126 + offset = offsetof(struct qcom_scm_command, buf); 127 + cmd->buf_offset = cpu_to_le32(offset); 128 + cmd->resp_hdr_offset = cpu_to_le32(offset + cmd_size); 129 + } 130 + return cmd; 131 + } 132 + 133 + /** 134 + * free_qcom_scm_command() - Free an SCM command 135 + * @cmd: command to free 136 + * 137 + * Free an SCM command. 138 + */ 139 + static inline void free_qcom_scm_command(struct qcom_scm_command *cmd) 140 + { 141 + kfree(cmd); 142 + } 143 + 144 + /** 145 + * qcom_scm_command_to_response() - Get a pointer to a qcom_scm_response 146 + * @cmd: command 147 + * 148 + * Returns a pointer to a response for a command. 149 + */ 150 + static inline struct qcom_scm_response *qcom_scm_command_to_response( 151 + const struct qcom_scm_command *cmd) 152 + { 153 + return (void *)cmd + le32_to_cpu(cmd->resp_hdr_offset); 154 + } 155 + 156 + /** 157 + * qcom_scm_get_command_buffer() - Get a pointer to a command buffer 158 + * @cmd: command 159 + * 160 + * Returns a pointer to the command buffer of a command. 161 + */ 162 + static inline void *qcom_scm_get_command_buffer(const struct qcom_scm_command *cmd) 163 + { 164 + return (void *)cmd->buf; 165 + } 166 + 167 + /** 168 + * qcom_scm_get_response_buffer() - Get a pointer to a response buffer 169 + * @rsp: response 170 + * 171 + * Returns a pointer to a response buffer of a response. 
172 + */ 173 + static inline void *qcom_scm_get_response_buffer(const struct qcom_scm_response *rsp) 174 + { 175 + return (void *)rsp + le32_to_cpu(rsp->buf_offset); 176 + } 177 + 178 + static int qcom_scm_remap_error(int err) 179 + { 180 + pr_err("qcom_scm_call failed with error code %d\n", err); 181 + switch (err) { 182 + case QCOM_SCM_ERROR: 183 + return -EIO; 184 + case QCOM_SCM_EINVAL_ADDR: 185 + case QCOM_SCM_EINVAL_ARG: 186 + return -EINVAL; 187 + case QCOM_SCM_EOPNOTSUPP: 188 + return -EOPNOTSUPP; 189 + case QCOM_SCM_ENOMEM: 190 + return -ENOMEM; 191 + } 192 + return -EINVAL; 193 + } 194 + 195 + static u32 smc(u32 cmd_addr) 196 + { 197 + int context_id; 198 + register u32 r0 asm("r0") = 1; 199 + register u32 r1 asm("r1") = (u32)&context_id; 200 + register u32 r2 asm("r2") = cmd_addr; 201 + do { 202 + asm volatile( 203 + __asmeq("%0", "r0") 204 + __asmeq("%1", "r0") 205 + __asmeq("%2", "r1") 206 + __asmeq("%3", "r2") 207 + #ifdef REQUIRES_SEC 208 + ".arch_extension sec\n" 209 + #endif 210 + "smc #0 @ switch to secure world\n" 211 + : "=r" (r0) 212 + : "r" (r0), "r" (r1), "r" (r2) 213 + : "r3"); 214 + } while (r0 == QCOM_SCM_INTERRUPTED); 215 + 216 + return r0; 217 + } 218 + 219 + static int __qcom_scm_call(const struct qcom_scm_command *cmd) 220 + { 221 + int ret; 222 + u32 cmd_addr = virt_to_phys(cmd); 223 + 224 + /* 225 + * Flush the command buffer so that the secure world sees 226 + * the correct data. 
227 + */ 228 + __cpuc_flush_dcache_area((void *)cmd, cmd->len); 229 + outer_flush_range(cmd_addr, cmd_addr + cmd->len); 230 + 231 + ret = smc(cmd_addr); 232 + if (ret < 0) 233 + ret = qcom_scm_remap_error(ret); 234 + 235 + return ret; 236 + } 237 + 238 + static void qcom_scm_inv_range(unsigned long start, unsigned long end) 239 + { 240 + u32 cacheline_size, ctr; 241 + 242 + asm volatile("mrc p15, 0, %0, c0, c0, 1" : "=r" (ctr)); 243 + cacheline_size = 4 << ((ctr >> 16) & 0xf); 244 + 245 + start = round_down(start, cacheline_size); 246 + end = round_up(end, cacheline_size); 247 + outer_inv_range(start, end); 248 + while (start < end) { 249 + asm ("mcr p15, 0, %0, c7, c6, 1" : : "r" (start) 250 + : "memory"); 251 + start += cacheline_size; 252 + } 253 + dsb(); 254 + isb(); 255 + } 256 + 257 + /** 258 + * qcom_scm_call() - Send an SCM command 259 + * @svc_id: service identifier 260 + * @cmd_id: command identifier 261 + * @cmd_buf: command buffer 262 + * @cmd_len: length of the command buffer 263 + * @resp_buf: response buffer 264 + * @resp_len: length of the response buffer 265 + * 266 + * Sends a command to the SCM and waits for the command to finish processing. 267 + * 268 + * A note on cache maintenance: 269 + * Note that any buffers that are expected to be accessed by the secure world 270 + * must be flushed before invoking qcom_scm_call and invalidated in the cache 271 + * immediately after qcom_scm_call returns. Cache maintenance on the command 272 + * and response buffers is taken care of by qcom_scm_call; however, callers are 273 + * responsible for any other cached buffers passed over to the secure world. 
274 + */ 275 + static int qcom_scm_call(u32 svc_id, u32 cmd_id, const void *cmd_buf, 276 + size_t cmd_len, void *resp_buf, size_t resp_len) 277 + { 278 + int ret; 279 + struct qcom_scm_command *cmd; 280 + struct qcom_scm_response *rsp; 281 + unsigned long start, end; 282 + 283 + cmd = alloc_qcom_scm_command(cmd_len, resp_len); 284 + if (!cmd) 285 + return -ENOMEM; 286 + 287 + cmd->id = cpu_to_le32((svc_id << 10) | cmd_id); 288 + if (cmd_buf) 289 + memcpy(qcom_scm_get_command_buffer(cmd), cmd_buf, cmd_len); 290 + 291 + mutex_lock(&qcom_scm_lock); 292 + ret = __qcom_scm_call(cmd); 293 + mutex_unlock(&qcom_scm_lock); 294 + if (ret) 295 + goto out; 296 + 297 + rsp = qcom_scm_command_to_response(cmd); 298 + start = (unsigned long)rsp; 299 + 300 + do { 301 + qcom_scm_inv_range(start, start + sizeof(*rsp)); 302 + } while (!rsp->is_complete); 303 + 304 + end = (unsigned long)qcom_scm_get_response_buffer(rsp) + resp_len; 305 + qcom_scm_inv_range(start, end); 306 + 307 + if (resp_buf) 308 + memcpy(resp_buf, qcom_scm_get_response_buffer(rsp), resp_len); 309 + out: 310 + free_qcom_scm_command(cmd); 311 + return ret; 312 + } 313 + 314 + #define SCM_CLASS_REGISTER (0x2 << 8) 315 + #define SCM_MASK_IRQS BIT(5) 316 + #define SCM_ATOMIC(svc, cmd, n) (((((svc) << 10)|((cmd) & 0x3ff)) << 12) | \ 317 + SCM_CLASS_REGISTER | \ 318 + SCM_MASK_IRQS | \ 319 + (n & 0xf)) 320 + 321 + /** 322 + * qcom_scm_call_atomic1() - Send an atomic SCM command with one argument 323 + * @svc_id: service identifier 324 + * @cmd_id: command identifier 325 + * @arg1: first argument 326 + * 327 + * This shall only be used with commands that are guaranteed to be 328 + * uninterruptable, atomic and SMP safe. 
329 + */ 330 + static s32 qcom_scm_call_atomic1(u32 svc, u32 cmd, u32 arg1) 331 + { 332 + int context_id; 333 + 334 + register u32 r0 asm("r0") = SCM_ATOMIC(svc, cmd, 1); 335 + register u32 r1 asm("r1") = (u32)&context_id; 336 + register u32 r2 asm("r2") = arg1; 337 + 338 + asm volatile( 339 + __asmeq("%0", "r0") 340 + __asmeq("%1", "r0") 341 + __asmeq("%2", "r1") 342 + __asmeq("%3", "r2") 343 + #ifdef REQUIRES_SEC 344 + ".arch_extension sec\n" 345 + #endif 346 + "smc #0 @ switch to secure world\n" 347 + : "=r" (r0) 348 + : "r" (r0), "r" (r1), "r" (r2) 349 + : "r3"); 350 + return r0; 351 + } 352 + 353 + u32 qcom_scm_get_version(void) 354 + { 355 + int context_id; 356 + static u32 version = -1; 357 + register u32 r0 asm("r0"); 358 + register u32 r1 asm("r1"); 359 + 360 + if (version != -1) 361 + return version; 362 + 363 + mutex_lock(&qcom_scm_lock); 364 + 365 + r0 = 0x1 << 8; 366 + r1 = (u32)&context_id; 367 + do { 368 + asm volatile( 369 + __asmeq("%0", "r0") 370 + __asmeq("%1", "r1") 371 + __asmeq("%2", "r0") 372 + __asmeq("%3", "r1") 373 + #ifdef REQUIRES_SEC 374 + ".arch_extension sec\n" 375 + #endif 376 + "smc #0 @ switch to secure world\n" 377 + : "=r" (r0), "=r" (r1) 378 + : "r" (r0), "r" (r1) 379 + : "r2", "r3"); 380 + } while (r0 == QCOM_SCM_INTERRUPTED); 381 + 382 + version = r1; 383 + mutex_unlock(&qcom_scm_lock); 384 + 385 + return version; 386 + } 387 + EXPORT_SYMBOL(qcom_scm_get_version); 388 + 389 + #define QCOM_SCM_SVC_BOOT 0x1 390 + #define QCOM_SCM_BOOT_ADDR 0x1 391 + /* 392 + * Set the cold/warm boot address for one of the CPU cores. 
393 + */ 394 + static int qcom_scm_set_boot_addr(u32 addr, int flags) 395 + { 396 + struct { 397 + __le32 flags; 398 + __le32 addr; 399 + } cmd; 400 + 401 + cmd.addr = cpu_to_le32(addr); 402 + cmd.flags = cpu_to_le32(flags); 403 + return qcom_scm_call(QCOM_SCM_SVC_BOOT, QCOM_SCM_BOOT_ADDR, 404 + &cmd, sizeof(cmd), NULL, 0); 405 + } 406 + 407 + /** 408 + * qcom_scm_set_cold_boot_addr() - Set the cold boot address for cpus 409 + * @entry: Entry point function for the cpus 410 + * @cpus: The cpumask of cpus that will use the entry point 411 + * 412 + * Set the cold boot address of the cpus. Any cpu outside the supported 413 + * range would be removed from the cpu present mask. 414 + */ 415 + int qcom_scm_set_cold_boot_addr(void *entry, const cpumask_t *cpus) 416 + { 417 + int flags = 0; 418 + int cpu; 419 + int scm_cb_flags[] = { 420 + QCOM_SCM_FLAG_COLDBOOT_CPU0, 421 + QCOM_SCM_FLAG_COLDBOOT_CPU1, 422 + QCOM_SCM_FLAG_COLDBOOT_CPU2, 423 + QCOM_SCM_FLAG_COLDBOOT_CPU3, 424 + }; 425 + 426 + if (!cpus || (cpus && cpumask_empty(cpus))) 427 + return -EINVAL; 428 + 429 + for_each_cpu(cpu, cpus) { 430 + if (cpu < ARRAY_SIZE(scm_cb_flags)) 431 + flags |= scm_cb_flags[cpu]; 432 + else 433 + set_cpu_present(cpu, false); 434 + } 435 + 436 + return qcom_scm_set_boot_addr(virt_to_phys(entry), flags); 437 + } 438 + EXPORT_SYMBOL(qcom_scm_set_cold_boot_addr); 439 + 440 + /** 441 + * qcom_scm_set_warm_boot_addr() - Set the warm boot address for cpus 442 + * @entry: Entry point function for the cpus 443 + * @cpus: The cpumask of cpus that will use the entry point 444 + * 445 + * Set the Linux entry point for the SCM to transfer control to when coming 446 + * out of a power down. CPU power down may be executed on cpuidle or hotplug. 
447 + */ 448 + int qcom_scm_set_warm_boot_addr(void *entry, const cpumask_t *cpus) 449 + { 450 + int ret; 451 + int flags = 0; 452 + int cpu; 453 + 454 + /* 455 + * Reassign only if we are switching from hotplug entry point 456 + * to cpuidle entry point or vice versa. 457 + */ 458 + for_each_cpu(cpu, cpus) { 459 + if (entry == qcom_scm_wb[cpu].entry) 460 + continue; 461 + flags |= qcom_scm_wb[cpu].flag; 462 + } 463 + 464 + /* No change in entry function */ 465 + if (!flags) 466 + return 0; 467 + 468 + ret = qcom_scm_set_boot_addr(virt_to_phys(entry), flags); 469 + if (!ret) { 470 + for_each_cpu(cpu, cpus) 471 + qcom_scm_wb[cpu].entry = entry; 472 + } 473 + 474 + return ret; 475 + } 476 + EXPORT_SYMBOL(qcom_scm_set_warm_boot_addr); 477 + 478 + #define QCOM_SCM_CMD_TERMINATE_PC 0x2 479 + #define QCOM_SCM_FLUSH_FLAG_MASK 0x3 480 + 481 + /** 482 + * qcom_scm_cpu_power_down() - Power down the cpu 483 + * @flags - Flags to flush cache 484 + * 485 + * This is an end point to power down cpu. If there was a pending interrupt, 486 + * the control would return from this function, otherwise, the cpu jumps to the 487 + * warm boot entry point set for this cpu upon reset. 488 + */ 489 + void qcom_scm_cpu_power_down(u32 flags) 490 + { 491 + qcom_scm_call_atomic1(QCOM_SCM_SVC_BOOT, QCOM_SCM_CMD_TERMINATE_PC, 492 + flags & QCOM_SCM_FLUSH_FLAG_MASK); 493 + } 494 + EXPORT_SYMBOL(qcom_scm_cpu_power_down);
+288 -68
drivers/memory/omap-gpmc.c
··· 12 12 * it under the terms of the GNU General Public License version 2 as 13 13 * published by the Free Software Foundation. 14 14 */ 15 - #undef DEBUG 16 - 17 15 #include <linux/irq.h> 18 16 #include <linux/kernel.h> 19 17 #include <linux/init.h> ··· 27 29 #include <linux/of_address.h> 28 30 #include <linux/of_mtd.h> 29 31 #include <linux/of_device.h> 32 + #include <linux/of_platform.h> 30 33 #include <linux/omap-gpmc.h> 31 34 #include <linux/mtd/nand.h> 32 35 #include <linux/pm_runtime.h> ··· 135 136 #define GPMC_CONFIG1_WRITETYPE_ASYNC (0 << 27) 136 137 #define GPMC_CONFIG1_WRITETYPE_SYNC (1 << 27) 137 138 #define GPMC_CONFIG1_CLKACTIVATIONTIME(val) ((val & 3) << 25) 139 + /** CLKACTIVATIONTIME Max Ticks */ 140 + #define GPMC_CONFIG1_CLKACTIVATIONTIME_MAX 2 138 141 #define GPMC_CONFIG1_PAGE_LEN(val) ((val & 3) << 23) 142 + /** ATTACHEDDEVICEPAGELENGTH Max Value */ 143 + #define GPMC_CONFIG1_ATTACHEDDEVICEPAGELENGTH_MAX 2 139 144 #define GPMC_CONFIG1_WAIT_READ_MON (1 << 22) 140 145 #define GPMC_CONFIG1_WAIT_WRITE_MON (1 << 21) 141 - #define GPMC_CONFIG1_WAIT_MON_IIME(val) ((val & 3) << 18) 146 + #define GPMC_CONFIG1_WAIT_MON_TIME(val) ((val & 3) << 18) 147 + /** WAITMONITORINGTIME Max Ticks */ 148 + #define GPMC_CONFIG1_WAITMONITORINGTIME_MAX 2 142 149 #define GPMC_CONFIG1_WAIT_PIN_SEL(val) ((val & 3) << 16) 143 150 #define GPMC_CONFIG1_DEVICESIZE(val) ((val & 3) << 12) 144 151 #define GPMC_CONFIG1_DEVICESIZE_16 GPMC_CONFIG1_DEVICESIZE(1) 152 + /** DEVICESIZE Max Value */ 153 + #define GPMC_CONFIG1_DEVICESIZE_MAX 1 145 154 #define GPMC_CONFIG1_DEVICETYPE(val) ((val & 3) << 10) 146 155 #define GPMC_CONFIG1_DEVICETYPE_NOR GPMC_CONFIG1_DEVICETYPE(0) 147 156 #define GPMC_CONFIG1_MUXTYPE(val) ((val & 3) << 8) ··· 159 152 #define GPMC_CONFIG1_FCLK_DIV3 (GPMC_CONFIG1_FCLK_DIV(2)) 160 153 #define GPMC_CONFIG1_FCLK_DIV4 (GPMC_CONFIG1_FCLK_DIV(3)) 161 154 #define GPMC_CONFIG7_CSVALID (1 << 6) 155 + 156 + #define GPMC_CONFIG7_BASEADDRESS_MASK 0x3f 157 + #define 
GPMC_CONFIG7_CSVALID_MASK BIT(6) 158 + #define GPMC_CONFIG7_MASKADDRESS_OFFSET 8 159 + #define GPMC_CONFIG7_MASKADDRESS_MASK (0xf << GPMC_CONFIG7_MASKADDRESS_OFFSET) 160 + /* All CONFIG7 bits except reserved bits */ 161 + #define GPMC_CONFIG7_MASK (GPMC_CONFIG7_BASEADDRESS_MASK | \ 162 + GPMC_CONFIG7_CSVALID_MASK | \ 163 + GPMC_CONFIG7_MASKADDRESS_MASK) 162 164 163 165 #define GPMC_DEVICETYPE_NOR 0 164 166 #define GPMC_DEVICETYPE_NAND 2 ··· 184 168 /* XXX: Only NAND irq has been considered,currently these are the only ones used 185 169 */ 186 170 #define GPMC_NR_IRQ 2 171 + 172 + enum gpmc_clk_domain { 173 + GPMC_CD_FCLK, 174 + GPMC_CD_CLK 175 + }; 187 176 188 177 struct gpmc_cs_data { 189 178 const char *name; ··· 288 267 return rate; 289 268 } 290 269 291 - static unsigned int gpmc_ns_to_ticks(unsigned int time_ns) 270 + /** 271 + * gpmc_get_clk_period - get period of selected clock domain in ps 272 + * @cs Chip Select Region. 273 + * @cd Clock Domain. 274 + * 275 + * GPMC_CS_CONFIG1 GPMCFCLKDIVIDER for cs has to be setup 276 + * prior to calling this function with GPMC_CD_CLK. 
277 + */ 278 + static unsigned long gpmc_get_clk_period(int cs, enum gpmc_clk_domain cd) 279 + { 280 + 281 + unsigned long tick_ps = gpmc_get_fclk_period(); 282 + u32 l; 283 + int div; 284 + 285 + switch (cd) { 286 + case GPMC_CD_CLK: 287 + /* get current clk divider */ 288 + l = gpmc_cs_read_reg(cs, GPMC_CS_CONFIG1); 289 + div = (l & 0x03) + 1; 290 + /* get GPMC_CLK period */ 291 + tick_ps *= div; 292 + break; 293 + case GPMC_CD_FCLK: 294 + /* FALL-THROUGH */ 295 + default: 296 + break; 297 + } 298 + 299 + return tick_ps; 300 + 301 + } 302 + 303 + static unsigned int gpmc_ns_to_clk_ticks(unsigned int time_ns, int cs, 304 + enum gpmc_clk_domain cd) 292 305 { 293 306 unsigned long tick_ps; 294 307 295 308 /* Calculate in picosecs to yield more exact results */ 296 - tick_ps = gpmc_get_fclk_period(); 309 + tick_ps = gpmc_get_clk_period(cs, cd); 297 310 298 311 return (time_ns * 1000 + tick_ps - 1) / tick_ps; 312 + } 313 + 314 + static unsigned int gpmc_ns_to_ticks(unsigned int time_ns) 315 + { 316 + return gpmc_ns_to_clk_ticks(time_ns, /* any CS */ 0, GPMC_CD_FCLK); 299 317 } 300 318 301 319 static unsigned int gpmc_ps_to_ticks(unsigned int time_ps) ··· 347 287 return (time_ps + tick_ps - 1) / tick_ps; 348 288 } 349 289 290 + unsigned int gpmc_clk_ticks_to_ns(unsigned ticks, int cs, 291 + enum gpmc_clk_domain cd) 292 + { 293 + return ticks * gpmc_get_clk_period(cs, cd) / 1000; 294 + } 295 + 350 296 unsigned int gpmc_ticks_to_ns(unsigned int ticks) 351 297 { 352 - return ticks * gpmc_get_fclk_period() / 1000; 298 + return gpmc_clk_ticks_to_ns(ticks, /* any CS */ 0, GPMC_CD_FCLK); 353 299 } 354 300 355 301 static unsigned int gpmc_ticks_to_ps(unsigned int ticks) ··· 404 338 } 405 339 406 340 #ifdef DEBUG 407 - static int get_gpmc_timing_reg(int cs, int reg, int st_bit, int end_bit, 408 - bool raw, bool noval, int shift, 409 - const char *name) 341 + /** 342 + * get_gpmc_timing_reg - read a timing parameter and print DTS settings for it. 
343 + * @cs: Chip Select Region 344 + * @reg: GPMC_CS_CONFIGn register offset. 345 + * @st_bit: Start Bit 346 + * @end_bit: End Bit. Must be >= @st_bit. 347 + * @max: Maximum parameter value (before optional @shift). 348 + * If 0, maximum is as high as @st_bit and @end_bit allow. 349 + * @name: DTS node name, w/o "gpmc," 350 + * @cd: Clock Domain of timing parameter. 351 + * @shift: Parameter value left shifts @shift, which is then printed instead of value. 352 + * @raw: Raw Format Option. 353 + * raw format: gpmc,name = <value> 354 + * tick format: gpmc,name = <value> /&zwj;* x ns -- y ns; x ticks *&zwj;/ 355 + * Where x ns -- y ns result in the same tick value. 356 + * When @max is exceeded, "invalid" is printed inside comment. 357 + * @noval: Parameter values equal to 0 are not printed. 358 + * @return: Specified timing parameter (after optional @shift). 359 + * 360 + */ 361 + static int get_gpmc_timing_reg( 362 + /* timing specifiers */ 363 + int cs, int reg, int st_bit, int end_bit, int max, 364 + const char *name, const enum gpmc_clk_domain cd, 365 + /* value transform */ 366 + int shift, 367 + /* format specifiers */ 368 + bool raw, bool noval) 410 369 { 411 370 u32 l; 412 - int nr_bits, max_value, mask; 371 + int nr_bits; 372 + int mask; 373 + bool invalid; 413 374 414 375 l = gpmc_cs_read_reg(cs, reg); 415 376 nr_bits = end_bit - st_bit + 1; 416 - max_value = (1 << nr_bits) - 1; 417 - mask = max_value << st_bit; 418 - l = (l & mask) >> st_bit; 377 + mask = (1 << nr_bits) - 1; 378 + l = (l >> st_bit) & mask; 379 + if (!max) 380 + max = mask; 381 + invalid = l > max; 419 382 if (shift) 420 383 l = (shift << l); 421 384 if (noval && (l == 0)) 422 385 return 0; 423 386 if (!raw) { 424 - unsigned int time_ns_min, time_ns, time_ns_max; 387 + /* DTS tick format for timings in ns */ 388 + unsigned int time_ns; 389 + unsigned int time_ns_min = 0; 425 390 426 - time_ns_min = gpmc_ticks_to_ns(l ? 
l - 1 : 0); 427 - time_ns = gpmc_ticks_to_ns(l); 428 - time_ns_max = gpmc_ticks_to_ns(l + 1 > max_value ? 429 - max_value : l + 1); 430 - pr_info("gpmc,%s = <%u> (%u - %u ns, %i ticks)\n", 431 - name, time_ns, time_ns_min, time_ns_max, l); 391 + if (l) 392 + time_ns_min = gpmc_clk_ticks_to_ns(l - 1, cs, cd) + 1; 393 + time_ns = gpmc_clk_ticks_to_ns(l, cs, cd); 394 + pr_info("gpmc,%s = <%u> /* %u ns - %u ns; %i ticks%s*/\n", 395 + name, time_ns, time_ns_min, time_ns, l, 396 + invalid ? "; invalid " : " "); 432 397 } else { 433 - pr_info("gpmc,%s = <%u>\n", name, l); 398 + /* raw format */ 399 + pr_info("gpmc,%s = <%u>%s\n", name, l, 400 + invalid ? " /* invalid */" : ""); 434 401 } 435 402 436 403 return l; ··· 473 374 pr_info("cs%i %s: 0x%08x\n", cs, #config, \ 474 375 gpmc_cs_read_reg(cs, config)) 475 376 #define GPMC_GET_RAW(reg, st, end, field) \ 476 - get_gpmc_timing_reg(cs, (reg), (st), (end), 1, 0, 0, field) 377 + get_gpmc_timing_reg(cs, (reg), (st), (end), 0, field, GPMC_CD_FCLK, 0, 1, 0) 378 + #define GPMC_GET_RAW_MAX(reg, st, end, max, field) \ 379 + get_gpmc_timing_reg(cs, (reg), (st), (end), (max), field, GPMC_CD_FCLK, 0, 1, 0) 477 380 #define GPMC_GET_RAW_BOOL(reg, st, end, field) \ 478 - get_gpmc_timing_reg(cs, (reg), (st), (end), 1, 1, 0, field) 479 - #define GPMC_GET_RAW_SHIFT(reg, st, end, shift, field) \ 480 - get_gpmc_timing_reg(cs, (reg), (st), (end), 1, 1, (shift), field) 381 + get_gpmc_timing_reg(cs, (reg), (st), (end), 0, field, GPMC_CD_FCLK, 0, 1, 1) 382 + #define GPMC_GET_RAW_SHIFT_MAX(reg, st, end, shift, max, field) \ 383 + get_gpmc_timing_reg(cs, (reg), (st), (end), (max), field, GPMC_CD_FCLK, (shift), 1, 1) 481 384 #define GPMC_GET_TICKS(reg, st, end, field) \ 482 - get_gpmc_timing_reg(cs, (reg), (st), (end), 0, 0, 0, field) 385 + get_gpmc_timing_reg(cs, (reg), (st), (end), 0, field, GPMC_CD_FCLK, 0, 0, 0) 386 + #define GPMC_GET_TICKS_CD(reg, st, end, field, cd) \ 387 + get_gpmc_timing_reg(cs, (reg), (st), (end), 0, field, (cd), 0, 0, 0) 
388 + #define GPMC_GET_TICKS_CD_MAX(reg, st, end, max, field, cd) \ 389 + get_gpmc_timing_reg(cs, (reg), (st), (end), (max), field, (cd), 0, 0, 0) 483 390 484 391 static void gpmc_show_regs(int cs, const char *desc) 485 392 { ··· 509 404 pr_info("gpmc cs%i access configuration:\n", cs); 510 405 GPMC_GET_RAW_BOOL(GPMC_CS_CONFIG1, 4, 4, "time-para-granularity"); 511 406 GPMC_GET_RAW(GPMC_CS_CONFIG1, 8, 9, "mux-add-data"); 512 - GPMC_GET_RAW(GPMC_CS_CONFIG1, 12, 13, "device-width"); 407 + GPMC_GET_RAW_MAX(GPMC_CS_CONFIG1, 12, 13, 408 + GPMC_CONFIG1_DEVICESIZE_MAX, "device-width"); 513 409 GPMC_GET_RAW(GPMC_CS_CONFIG1, 16, 17, "wait-pin"); 514 410 GPMC_GET_RAW_BOOL(GPMC_CS_CONFIG1, 21, 21, "wait-on-write"); 515 411 GPMC_GET_RAW_BOOL(GPMC_CS_CONFIG1, 22, 22, "wait-on-read"); 516 - GPMC_GET_RAW_SHIFT(GPMC_CS_CONFIG1, 23, 24, 4, "burst-length"); 412 + GPMC_GET_RAW_SHIFT_MAX(GPMC_CS_CONFIG1, 23, 24, 4, 413 + GPMC_CONFIG1_ATTACHEDDEVICEPAGELENGTH_MAX, 414 + "burst-length"); 517 415 GPMC_GET_RAW_BOOL(GPMC_CS_CONFIG1, 27, 27, "sync-write"); 518 416 GPMC_GET_RAW_BOOL(GPMC_CS_CONFIG1, 28, 28, "burst-write"); 519 417 GPMC_GET_RAW_BOOL(GPMC_CS_CONFIG1, 29, 29, "gpmc,sync-read"); ··· 556 448 GPMC_GET_TICKS(GPMC_CS_CONFIG6, 0, 3, "bus-turnaround-ns"); 557 449 GPMC_GET_TICKS(GPMC_CS_CONFIG6, 8, 11, "cycle2cycle-delay-ns"); 558 450 559 - GPMC_GET_TICKS(GPMC_CS_CONFIG1, 18, 19, "wait-monitoring-ns"); 560 - GPMC_GET_TICKS(GPMC_CS_CONFIG1, 25, 26, "clk-activation-ns"); 451 + GPMC_GET_TICKS_CD_MAX(GPMC_CS_CONFIG1, 18, 19, 452 + GPMC_CONFIG1_WAITMONITORINGTIME_MAX, 453 + "wait-monitoring-ns", GPMC_CD_CLK); 454 + GPMC_GET_TICKS_CD_MAX(GPMC_CS_CONFIG1, 25, 26, 455 + GPMC_CONFIG1_CLKACTIVATIONTIME_MAX, 456 + "clk-activation-ns", GPMC_CD_FCLK); 561 457 562 458 GPMC_GET_TICKS(GPMC_CS_CONFIG6, 16, 19, "wr-data-mux-bus-ns"); 563 459 GPMC_GET_TICKS(GPMC_CS_CONFIG6, 24, 28, "wr-access-ns"); ··· 572 460 } 573 461 #endif 574 462 575 - static int set_gpmc_timing_reg(int cs, int reg, int st_bit, int 
end_bit, 576 - int time, const char *name) 463 + /** 464 + * set_gpmc_timing_reg - set a single timing parameter for Chip Select Region. 465 + * Caller is expected to have initialized CONFIG1 GPMCFCLKDIVIDER 466 + * prior to calling this function with @cd equal to GPMC_CD_CLK. 467 + * 468 + * @cs: Chip Select Region. 469 + * @reg: GPMC_CS_CONFIGn register offset. 470 + * @st_bit: Start Bit 471 + * @end_bit: End Bit. Must be >= @st_bit. 472 + * @max: Maximum parameter value. 473 + * If 0, maximum is as high as @st_bit and @end_bit allow. 474 + * @time: Timing parameter in ns. 475 + * @cd: Timing parameter clock domain. 476 + * @name: Timing parameter name. 477 + * @return: 0 on success, -1 on error. 478 + */ 479 + static int set_gpmc_timing_reg(int cs, int reg, int st_bit, int end_bit, int max, 480 + int time, enum gpmc_clk_domain cd, const char *name) 577 481 { 578 482 u32 l; 579 483 int ticks, mask, nr_bits; ··· 597 469 if (time == 0) 598 470 ticks = 0; 599 471 else 600 - ticks = gpmc_ns_to_ticks(time); 472 + ticks = gpmc_ns_to_clk_ticks(time, cs, cd); 601 473 nr_bits = end_bit - st_bit + 1; 602 474 mask = (1 << nr_bits) - 1; 603 475 604 - if (ticks > mask) { 605 - pr_err("%s: GPMC error! 
CS%d: %s: %d ns, %d ticks > %d\n", 606 - __func__, cs, name, time, ticks, mask); 476 + if (!max) 477 + max = mask; 478 + 479 + if (ticks > max) { 480 + pr_err("%s: GPMC CS%d: %s %d ns, %d ticks > %d ticks\n", 481 + __func__, cs, name, time, ticks, max); 607 482 608 483 return -1; 609 484 } 610 485 611 486 l = gpmc_cs_read_reg(cs, reg); 612 487 #ifdef DEBUG 613 - printk(KERN_INFO 614 - "GPMC CS%d: %-10s: %3d ticks, %3lu ns (was %3i ticks) %3d ns\n", 615 - cs, name, ticks, gpmc_get_fclk_period() * ticks / 1000, 488 + pr_info( 489 + "GPMC CS%d: %-17s: %3d ticks, %3lu ns (was %3i ticks) %3d ns\n", 490 + cs, name, ticks, gpmc_get_clk_period(cs, cd) * ticks / 1000, 616 491 (l >> st_bit) & mask, time); 617 492 #endif 618 493 l &= ~(mask << st_bit); ··· 625 494 return 0; 626 495 } 627 496 628 - #define GPMC_SET_ONE(reg, st, end, field) \ 629 - if (set_gpmc_timing_reg(cs, (reg), (st), (end), \ 630 - t->field, #field) < 0) \ 497 + #define GPMC_SET_ONE_CD_MAX(reg, st, end, max, field, cd) \ 498 + if (set_gpmc_timing_reg(cs, (reg), (st), (end), (max), \ 499 + t->field, (cd), #field) < 0) \ 631 500 return -1 632 501 502 + #define GPMC_SET_ONE(reg, st, end, field) \ 503 + GPMC_SET_ONE_CD_MAX(reg, st, end, 0, field, GPMC_CD_FCLK) 504 + 505 + /** 506 + * gpmc_calc_waitmonitoring_divider - calculate proper GPMCFCLKDIVIDER based on WAITMONITORINGTIME 507 + * WAITMONITORINGTIME will be _at least_ as long as desired, i.e. 508 + * read --> don't sample bus too early 509 + * write --> data is longer on bus 510 + * 511 + * Formula: 512 + * gpmc_clk_div + 1 = ceil(ceil(waitmonitoringtime_ns / gpmc_fclk_ns) 513 + * / waitmonitoring_ticks) 514 + * WAITMONITORINGTIME resulting in 0 or 1 tick with div = 1 are caught by 515 + * div <= 0 check. 516 + * 517 + * @wait_monitoring: WAITMONITORINGTIME in ns. 518 + * @return: -1 on failure to scale, else proper divider > 0. 
519 + */
520 + static int gpmc_calc_waitmonitoring_divider(unsigned int wait_monitoring)
521 + {
522 +
523 + int div = gpmc_ns_to_ticks(wait_monitoring);
524 +
525 + div += GPMC_CONFIG1_WAITMONITORINGTIME_MAX - 1;
526 + div /= GPMC_CONFIG1_WAITMONITORINGTIME_MAX;
527 +
528 + if (div > 4)
529 + return -1;
530 + if (div <= 0)
531 + div = 1;
532 +
533 + return div;
534 +
535 + }
536 +
537 + /**
538 + * gpmc_calc_divider - calculate GPMC_FCLK divider for sync_clk GPMC_CLK period.
539 + * @sync_clk: GPMC_CLK period in ps.
540 + * @return: Returns at least 1 if GPMC_FCLK can be divided to GPMC_CLK.
541 + * Else, returns -1.
542 + */
633 543 int gpmc_calc_divider(unsigned int sync_clk)
634 544 {
635 - int div;
636 - u32 l;
545 + int div = gpmc_ps_to_ticks(sync_clk);
637 546
638 - l = sync_clk + (gpmc_get_fclk_period() - 1);
639 - div = l / gpmc_get_fclk_period();
640 547 if (div > 4)
641 548 return -1;
642 549 if (div <= 0)
···
683 514 return div;
684 515 }
685 516
686 - int gpmc_cs_set_timings(int cs, const struct gpmc_timings *t)
517 + /**
518 + * gpmc_cs_set_timings - program timing parameters for Chip Select Region.
519 + * @cs: Chip Select Region.
520 + * @t: GPMC timing parameters.
521 + * @s: GPMC timing settings.
522 + * @return: 0 on success, -1 on error.
523 + */
524 + int gpmc_cs_set_timings(int cs, const struct gpmc_timings *t,
525 + const struct gpmc_settings *s)
687 526 {
688 527 int div;
689 528 u32 l;
···
700 523 div = gpmc_calc_divider(t->sync_clk);
701 524 if (div < 0)
702 525 return div;
526 +
527 + /*
528 + * See if we need to change the divider for waitmonitoringtime.
529 + *
530 + * Calculate GPMCFCLKDIVIDER independent of gpmc,sync-clk-ps in DT for
531 + * pure asynchronous accesses, i.e. both read and write asynchronous.
532 + * However, only do so if WAITMONITORINGTIME is actually used, i.e.
533 + * either WAITREADMONITORING or WAITWRITEMONITORING is set.
534 + *
535 + * This statement must not change div to scale async WAITMONITORINGTIME
536 + * to protect mixed synchronous and asynchronous accesses.
537 + *
538 + * We raise an error later if WAITMONITORINGTIME does not fit.
539 + */
540 + if (!s->sync_read && !s->sync_write &&
541 + (s->wait_on_read || s->wait_on_write)
542 + ) {
543 +
544 + div = gpmc_calc_waitmonitoring_divider(t->wait_monitoring);
545 + if (div < 0) {
546 + pr_err("%s: waitmonitoringtime %3d ns too large for greatest gpmcfclkdivider.\n",
547 + __func__,
548 + t->wait_monitoring
549 + );
550 + return -1;
551 + }
552 + }
703 553
704 554 GPMC_SET_ONE(GPMC_CS_CONFIG2, 0, 3, cs_on);
705 555 GPMC_SET_ONE(GPMC_CS_CONFIG2, 8, 12, cs_rd_off);
···
750 546 GPMC_SET_ONE(GPMC_CS_CONFIG6, 0, 3, bus_turnaround);
751 547 GPMC_SET_ONE(GPMC_CS_CONFIG6, 8, 11, cycle2cycle_delay);
752 548
753 - GPMC_SET_ONE(GPMC_CS_CONFIG1, 18, 19, wait_monitoring);
754 - GPMC_SET_ONE(GPMC_CS_CONFIG1, 25, 26, clk_activation);
755 -
756 549 if (gpmc_capability & GPMC_HAS_WR_DATA_MUX_BUS)
757 550 GPMC_SET_ONE(GPMC_CS_CONFIG6, 16, 19, wr_data_mux_bus);
758 551 if (gpmc_capability & GPMC_HAS_WR_ACCESS)
759 552 GPMC_SET_ONE(GPMC_CS_CONFIG6, 24, 28, wr_access);
760 553
761 - /* caller is expected to have initialized CONFIG1 to cover
762 - * at least sync vs async
763 - */
764 554 l = gpmc_cs_read_reg(cs, GPMC_CS_CONFIG1);
765 - if (l & (GPMC_CONFIG1_READTYPE_SYNC | GPMC_CONFIG1_WRITETYPE_SYNC)) {
555 + l &= ~0x03;
556 + l |= (div - 1);
557 + gpmc_cs_write_reg(cs, GPMC_CS_CONFIG1, l);
558 +
559 + GPMC_SET_ONE_CD_MAX(GPMC_CS_CONFIG1, 18, 19,
560 + GPMC_CONFIG1_WAITMONITORINGTIME_MAX,
561 + wait_monitoring, GPMC_CD_CLK);
562 + GPMC_SET_ONE_CD_MAX(GPMC_CS_CONFIG1, 25, 26,
563 + GPMC_CONFIG1_CLKACTIVATIONTIME_MAX,
564 + clk_activation, GPMC_CD_FCLK);
565 +
766 566 #ifdef DEBUG
767 - printk(KERN_INFO "GPMC CS%d CLK period is %lu ns (div %d)\n",
768 - cs, (div * gpmc_get_fclk_period()) / 1000, div);
567 + pr_info("GPMC CS%d CLK period is %lu ns (div %d)\n",
568 + cs, (div * gpmc_get_fclk_period()) / 1000, div);
769 569 #endif
770 - l &= ~0x03;
771 - l |= (div - 1);
772 - gpmc_cs_write_reg(cs, GPMC_CS_CONFIG1, l);
773 - }
774 570
775 571 gpmc_cs_bool_timings(cs, &t->bool_timings);
776 572 gpmc_cs_show_timings(cs, "after gpmc_cs_set_timings");
···
790 586 if (base & (size - 1))
791 587 return -EINVAL;
792 588
589 + base >>= GPMC_CHUNK_SHIFT;
793 590 mask = (1 << GPMC_SECTION_SHIFT) - size;
591 + mask >>= GPMC_CHUNK_SHIFT;
592 + mask <<= GPMC_CONFIG7_MASKADDRESS_OFFSET;
593 +
794 594 l = gpmc_cs_read_reg(cs, GPMC_CS_CONFIG7);
795 - l &= ~0x3f;
796 - l = (base >> GPMC_CHUNK_SHIFT) & 0x3f;
797 - l &= ~(0x0f << 8);
798 - l |= ((mask >> GPMC_CHUNK_SHIFT) & 0x0f) << 8;
595 + l &= ~GPMC_CONFIG7_MASK;
596 + l |= base & GPMC_CONFIG7_BASEADDRESS_MASK;
597 + l |= mask & GPMC_CONFIG7_MASKADDRESS_MASK;
799 598 l |= GPMC_CONFIG7_CSVALID;
800 599 gpmc_cs_write_reg(cs, GPMC_CS_CONFIG7, l);
801 600
···
863 656 gpmc->name = name;
864 657 }
865 658
866 - const char *gpmc_cs_get_name(int cs)
659 + static const char *gpmc_cs_get_name(int cs)
867 660 {
868 661 struct gpmc_cs_data *gpmc = &gpmc_cs[cs];
869 662
···
1993 1786 if (ret < 0)
1994 1787 goto err;
1995 1788
1996 - ret = gpmc_cs_set_timings(cs, &gpmc_t);
1789 + ret = gpmc_cs_set_timings(cs, &gpmc_t, &gpmc_s);
1997 1790 if (ret) {
1998 1791 dev_err(&pdev->dev, "failed to set gpmc timings for: %s\n",
1999 1792 child->name);
···
2009 1802 gpmc_cs_enable_mem(cs);
2010 1803
2011 1804 no_timings:
2012 - if (of_platform_device_create(child, NULL, &pdev->dev))
2013 - return 0;
1805 +
1806 + /* create platform device, NULL on error or when disabled */
1807 + if (!of_platform_device_create(child, NULL, &pdev->dev))
1808 + goto err_child_fail;
1809 +
1810 + /* is child a common bus? */
1811 + if (of_match_node(of_default_bus_match_table, child))
1812 + /* create children and other common bus children */
1813 + if (of_platform_populate(child, of_default_bus_match_table,
1814 + NULL, &pdev->dev))
1815 + goto err_child_fail;
1816 +
1817 + return 0;
1818 +
1819 + err_child_fail:
2014 1820
2015 1821 dev_err(&pdev->dev, "failed to create gpmc child %s\n", child->name);
2016 1822 ret = -ENODEV;
+1
drivers/soc/Kconfig
···
1 1 menu "SOC (System On Chip) specific Drivers"
2 2
3 + source "drivers/soc/mediatek/Kconfig"
3 4 source "drivers/soc/qcom/Kconfig"
4 5 source "drivers/soc/ti/Kconfig"
5 6 source "drivers/soc/versatile/Kconfig"
+1
drivers/soc/Makefile
···
2 2 # Makefile for the Linux Kernel SOC specific device drivers.
3 3 #
4 4
5 + obj-$(CONFIG_ARCH_MEDIATEK) += mediatek/
5 6 obj-$(CONFIG_ARCH_QCOM) += qcom/
6 7 obj-$(CONFIG_ARCH_TEGRA) += tegra/
7 8 obj-$(CONFIG_SOC_TI) += ti/
+11
drivers/soc/mediatek/Kconfig
···
1 + #
2 + # MediaTek SoC drivers
3 + #
4 + config MTK_PMIC_WRAP
5 + 	tristate "MediaTek PMIC Wrapper Support"
6 + 	depends on ARCH_MEDIATEK
7 + 	select REGMAP
8 + 	help
9 + 	  Say yes here to add support for MediaTek PMIC Wrapper found
10 + 	  on different MediaTek SoCs. The PMIC wrapper is a proprietary
11 + 	  hardware to connect the PMIC.
+1
drivers/soc/mediatek/Makefile
··· 1 + obj-$(CONFIG_MTK_PMIC_WRAP) += mtk-pmic-wrap.o
+975
drivers/soc/mediatek/mtk-pmic-wrap.c
··· 1 + /* 2 + * Copyright (c) 2014 MediaTek Inc. 3 + * Author: Flora Fu, MediaTek 4 + * 5 + * This program is free software; you can redistribute it and/or modify 6 + * it under the terms of the GNU General Public License version 2 as 7 + * published by the Free Software Foundation. 8 + * 9 + * This program is distributed in the hope that it will be useful, 10 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 11 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 12 + * GNU General Public License for more details. 13 + */ 14 + #include <linux/clk.h> 15 + #include <linux/interrupt.h> 16 + #include <linux/io.h> 17 + #include <linux/kernel.h> 18 + #include <linux/module.h> 19 + #include <linux/of_device.h> 20 + #include <linux/platform_device.h> 21 + #include <linux/regmap.h> 22 + #include <linux/reset.h> 23 + 24 + #define PWRAP_MT8135_BRIDGE_IORD_ARB_EN 0x4 25 + #define PWRAP_MT8135_BRIDGE_WACS3_EN 0x10 26 + #define PWRAP_MT8135_BRIDGE_INIT_DONE3 0x14 27 + #define PWRAP_MT8135_BRIDGE_WACS4_EN 0x24 28 + #define PWRAP_MT8135_BRIDGE_INIT_DONE4 0x28 29 + #define PWRAP_MT8135_BRIDGE_INT_EN 0x38 30 + #define PWRAP_MT8135_BRIDGE_TIMER_EN 0x48 31 + #define PWRAP_MT8135_BRIDGE_WDT_UNIT 0x50 32 + #define PWRAP_MT8135_BRIDGE_WDT_SRC_EN 0x54 33 + 34 + /* macro for wrapper status */ 35 + #define PWRAP_GET_WACS_RDATA(x) (((x) >> 0) & 0x0000ffff) 36 + #define PWRAP_GET_WACS_FSM(x) (((x) >> 16) & 0x00000007) 37 + #define PWRAP_GET_WACS_REQ(x) (((x) >> 19) & 0x00000001) 38 + #define PWRAP_STATE_SYNC_IDLE0 (1 << 20) 39 + #define PWRAP_STATE_INIT_DONE0 (1 << 21) 40 + 41 + /* macro for WACS FSM */ 42 + #define PWRAP_WACS_FSM_IDLE 0x00 43 + #define PWRAP_WACS_FSM_REQ 0x02 44 + #define PWRAP_WACS_FSM_WFDLE 0x04 45 + #define PWRAP_WACS_FSM_WFVLDCLR 0x06 46 + #define PWRAP_WACS_INIT_DONE 0x01 47 + #define PWRAP_WACS_WACS_SYNC_IDLE 0x01 48 + #define PWRAP_WACS_SYNC_BUSY 0x00 49 + 50 + /* macro for device wrapper default value */ 51 + #define 
PWRAP_DEW_READ_TEST_VAL 0x5aa5 52 + #define PWRAP_DEW_WRITE_TEST_VAL 0xa55a 53 + 54 + /* macro for manual command */ 55 + #define PWRAP_MAN_CMD_SPI_WRITE (1 << 13) 56 + #define PWRAP_MAN_CMD_OP_CSH (0x0 << 8) 57 + #define PWRAP_MAN_CMD_OP_CSL (0x1 << 8) 58 + #define PWRAP_MAN_CMD_OP_CK (0x2 << 8) 59 + #define PWRAP_MAN_CMD_OP_OUTS (0x8 << 8) 60 + #define PWRAP_MAN_CMD_OP_OUTD (0x9 << 8) 61 + #define PWRAP_MAN_CMD_OP_OUTQ (0xa << 8) 62 + 63 + /* macro for slave device wrapper registers */ 64 + #define PWRAP_DEW_BASE 0xbc00 65 + #define PWRAP_DEW_EVENT_OUT_EN (PWRAP_DEW_BASE + 0x0) 66 + #define PWRAP_DEW_DIO_EN (PWRAP_DEW_BASE + 0x2) 67 + #define PWRAP_DEW_EVENT_SRC_EN (PWRAP_DEW_BASE + 0x4) 68 + #define PWRAP_DEW_EVENT_SRC (PWRAP_DEW_BASE + 0x6) 69 + #define PWRAP_DEW_EVENT_FLAG (PWRAP_DEW_BASE + 0x8) 70 + #define PWRAP_DEW_READ_TEST (PWRAP_DEW_BASE + 0xa) 71 + #define PWRAP_DEW_WRITE_TEST (PWRAP_DEW_BASE + 0xc) 72 + #define PWRAP_DEW_CRC_EN (PWRAP_DEW_BASE + 0xe) 73 + #define PWRAP_DEW_CRC_VAL (PWRAP_DEW_BASE + 0x10) 74 + #define PWRAP_DEW_MON_GRP_SEL (PWRAP_DEW_BASE + 0x12) 75 + #define PWRAP_DEW_MON_FLAG_SEL (PWRAP_DEW_BASE + 0x14) 76 + #define PWRAP_DEW_EVENT_TEST (PWRAP_DEW_BASE + 0x16) 77 + #define PWRAP_DEW_CIPHER_KEY_SEL (PWRAP_DEW_BASE + 0x18) 78 + #define PWRAP_DEW_CIPHER_IV_SEL (PWRAP_DEW_BASE + 0x1a) 79 + #define PWRAP_DEW_CIPHER_LOAD (PWRAP_DEW_BASE + 0x1c) 80 + #define PWRAP_DEW_CIPHER_START (PWRAP_DEW_BASE + 0x1e) 81 + #define PWRAP_DEW_CIPHER_RDY (PWRAP_DEW_BASE + 0x20) 82 + #define PWRAP_DEW_CIPHER_MODE (PWRAP_DEW_BASE + 0x22) 83 + #define PWRAP_DEW_CIPHER_SWRST (PWRAP_DEW_BASE + 0x24) 84 + #define PWRAP_MT8173_DEW_CIPHER_IV0 (PWRAP_DEW_BASE + 0x26) 85 + #define PWRAP_MT8173_DEW_CIPHER_IV1 (PWRAP_DEW_BASE + 0x28) 86 + #define PWRAP_MT8173_DEW_CIPHER_IV2 (PWRAP_DEW_BASE + 0x2a) 87 + #define PWRAP_MT8173_DEW_CIPHER_IV3 (PWRAP_DEW_BASE + 0x2c) 88 + #define PWRAP_MT8173_DEW_CIPHER_IV4 (PWRAP_DEW_BASE + 0x2e) 89 + #define PWRAP_MT8173_DEW_CIPHER_IV5 
(PWRAP_DEW_BASE + 0x30) 90 + 91 + enum pwrap_regs { 92 + PWRAP_MUX_SEL, 93 + PWRAP_WRAP_EN, 94 + PWRAP_DIO_EN, 95 + PWRAP_SIDLY, 96 + PWRAP_CSHEXT_WRITE, 97 + PWRAP_CSHEXT_READ, 98 + PWRAP_CSLEXT_START, 99 + PWRAP_CSLEXT_END, 100 + PWRAP_STAUPD_PRD, 101 + PWRAP_STAUPD_GRPEN, 102 + PWRAP_STAUPD_MAN_TRIG, 103 + PWRAP_STAUPD_STA, 104 + PWRAP_WRAP_STA, 105 + PWRAP_HARB_INIT, 106 + PWRAP_HARB_HPRIO, 107 + PWRAP_HIPRIO_ARB_EN, 108 + PWRAP_HARB_STA0, 109 + PWRAP_HARB_STA1, 110 + PWRAP_MAN_EN, 111 + PWRAP_MAN_CMD, 112 + PWRAP_MAN_RDATA, 113 + PWRAP_MAN_VLDCLR, 114 + PWRAP_WACS0_EN, 115 + PWRAP_INIT_DONE0, 116 + PWRAP_WACS0_CMD, 117 + PWRAP_WACS0_RDATA, 118 + PWRAP_WACS0_VLDCLR, 119 + PWRAP_WACS1_EN, 120 + PWRAP_INIT_DONE1, 121 + PWRAP_WACS1_CMD, 122 + PWRAP_WACS1_RDATA, 123 + PWRAP_WACS1_VLDCLR, 124 + PWRAP_WACS2_EN, 125 + PWRAP_INIT_DONE2, 126 + PWRAP_WACS2_CMD, 127 + PWRAP_WACS2_RDATA, 128 + PWRAP_WACS2_VLDCLR, 129 + PWRAP_INT_EN, 130 + PWRAP_INT_FLG_RAW, 131 + PWRAP_INT_FLG, 132 + PWRAP_INT_CLR, 133 + PWRAP_SIG_ADR, 134 + PWRAP_SIG_MODE, 135 + PWRAP_SIG_VALUE, 136 + PWRAP_SIG_ERRVAL, 137 + PWRAP_CRC_EN, 138 + PWRAP_TIMER_EN, 139 + PWRAP_TIMER_STA, 140 + PWRAP_WDT_UNIT, 141 + PWRAP_WDT_SRC_EN, 142 + PWRAP_WDT_FLG, 143 + PWRAP_DEBUG_INT_SEL, 144 + PWRAP_CIPHER_KEY_SEL, 145 + PWRAP_CIPHER_IV_SEL, 146 + PWRAP_CIPHER_RDY, 147 + PWRAP_CIPHER_MODE, 148 + PWRAP_CIPHER_SWRST, 149 + PWRAP_DCM_EN, 150 + PWRAP_DCM_DBC_PRD, 151 + 152 + /* MT8135 only regs */ 153 + PWRAP_CSHEXT, 154 + PWRAP_EVENT_IN_EN, 155 + PWRAP_EVENT_DST_EN, 156 + PWRAP_RRARB_INIT, 157 + PWRAP_RRARB_EN, 158 + PWRAP_RRARB_STA0, 159 + PWRAP_RRARB_STA1, 160 + PWRAP_EVENT_STA, 161 + PWRAP_EVENT_STACLR, 162 + PWRAP_CIPHER_LOAD, 163 + PWRAP_CIPHER_START, 164 + 165 + /* MT8173 only regs */ 166 + PWRAP_RDDMY, 167 + PWRAP_SI_CK_CON, 168 + PWRAP_DVFS_ADR0, 169 + PWRAP_DVFS_WDATA0, 170 + PWRAP_DVFS_ADR1, 171 + PWRAP_DVFS_WDATA1, 172 + PWRAP_DVFS_ADR2, 173 + PWRAP_DVFS_WDATA2, 174 + PWRAP_DVFS_ADR3, 175 + PWRAP_DVFS_WDATA3, 
176 + PWRAP_DVFS_ADR4, 177 + PWRAP_DVFS_WDATA4, 178 + PWRAP_DVFS_ADR5, 179 + PWRAP_DVFS_WDATA5, 180 + PWRAP_DVFS_ADR6, 181 + PWRAP_DVFS_WDATA6, 182 + PWRAP_DVFS_ADR7, 183 + PWRAP_DVFS_WDATA7, 184 + PWRAP_SPMINF_STA, 185 + PWRAP_CIPHER_EN, 186 + }; 187 + 188 + static int mt8173_regs[] = { 189 + [PWRAP_MUX_SEL] = 0x0, 190 + [PWRAP_WRAP_EN] = 0x4, 191 + [PWRAP_DIO_EN] = 0x8, 192 + [PWRAP_SIDLY] = 0xc, 193 + [PWRAP_RDDMY] = 0x10, 194 + [PWRAP_SI_CK_CON] = 0x14, 195 + [PWRAP_CSHEXT_WRITE] = 0x18, 196 + [PWRAP_CSHEXT_READ] = 0x1c, 197 + [PWRAP_CSLEXT_START] = 0x20, 198 + [PWRAP_CSLEXT_END] = 0x24, 199 + [PWRAP_STAUPD_PRD] = 0x28, 200 + [PWRAP_STAUPD_GRPEN] = 0x2c, 201 + [PWRAP_STAUPD_MAN_TRIG] = 0x40, 202 + [PWRAP_STAUPD_STA] = 0x44, 203 + [PWRAP_WRAP_STA] = 0x48, 204 + [PWRAP_HARB_INIT] = 0x4c, 205 + [PWRAP_HARB_HPRIO] = 0x50, 206 + [PWRAP_HIPRIO_ARB_EN] = 0x54, 207 + [PWRAP_HARB_STA0] = 0x58, 208 + [PWRAP_HARB_STA1] = 0x5c, 209 + [PWRAP_MAN_EN] = 0x60, 210 + [PWRAP_MAN_CMD] = 0x64, 211 + [PWRAP_MAN_RDATA] = 0x68, 212 + [PWRAP_MAN_VLDCLR] = 0x6c, 213 + [PWRAP_WACS0_EN] = 0x70, 214 + [PWRAP_INIT_DONE0] = 0x74, 215 + [PWRAP_WACS0_CMD] = 0x78, 216 + [PWRAP_WACS0_RDATA] = 0x7c, 217 + [PWRAP_WACS0_VLDCLR] = 0x80, 218 + [PWRAP_WACS1_EN] = 0x84, 219 + [PWRAP_INIT_DONE1] = 0x88, 220 + [PWRAP_WACS1_CMD] = 0x8c, 221 + [PWRAP_WACS1_RDATA] = 0x90, 222 + [PWRAP_WACS1_VLDCLR] = 0x94, 223 + [PWRAP_WACS2_EN] = 0x98, 224 + [PWRAP_INIT_DONE2] = 0x9c, 225 + [PWRAP_WACS2_CMD] = 0xa0, 226 + [PWRAP_WACS2_RDATA] = 0xa4, 227 + [PWRAP_WACS2_VLDCLR] = 0xa8, 228 + [PWRAP_INT_EN] = 0xac, 229 + [PWRAP_INT_FLG_RAW] = 0xb0, 230 + [PWRAP_INT_FLG] = 0xb4, 231 + [PWRAP_INT_CLR] = 0xb8, 232 + [PWRAP_SIG_ADR] = 0xbc, 233 + [PWRAP_SIG_MODE] = 0xc0, 234 + [PWRAP_SIG_VALUE] = 0xc4, 235 + [PWRAP_SIG_ERRVAL] = 0xc8, 236 + [PWRAP_CRC_EN] = 0xcc, 237 + [PWRAP_TIMER_EN] = 0xd0, 238 + [PWRAP_TIMER_STA] = 0xd4, 239 + [PWRAP_WDT_UNIT] = 0xd8, 240 + [PWRAP_WDT_SRC_EN] = 0xdc, 241 + [PWRAP_WDT_FLG] = 0xe0, 242 + 
[PWRAP_DEBUG_INT_SEL] = 0xe4, 243 + [PWRAP_DVFS_ADR0] = 0xe8, 244 + [PWRAP_DVFS_WDATA0] = 0xec, 245 + [PWRAP_DVFS_ADR1] = 0xf0, 246 + [PWRAP_DVFS_WDATA1] = 0xf4, 247 + [PWRAP_DVFS_ADR2] = 0xf8, 248 + [PWRAP_DVFS_WDATA2] = 0xfc, 249 + [PWRAP_DVFS_ADR3] = 0x100, 250 + [PWRAP_DVFS_WDATA3] = 0x104, 251 + [PWRAP_DVFS_ADR4] = 0x108, 252 + [PWRAP_DVFS_WDATA4] = 0x10c, 253 + [PWRAP_DVFS_ADR5] = 0x110, 254 + [PWRAP_DVFS_WDATA5] = 0x114, 255 + [PWRAP_DVFS_ADR6] = 0x118, 256 + [PWRAP_DVFS_WDATA6] = 0x11c, 257 + [PWRAP_DVFS_ADR7] = 0x120, 258 + [PWRAP_DVFS_WDATA7] = 0x124, 259 + [PWRAP_SPMINF_STA] = 0x128, 260 + [PWRAP_CIPHER_KEY_SEL] = 0x12c, 261 + [PWRAP_CIPHER_IV_SEL] = 0x130, 262 + [PWRAP_CIPHER_EN] = 0x134, 263 + [PWRAP_CIPHER_RDY] = 0x138, 264 + [PWRAP_CIPHER_MODE] = 0x13c, 265 + [PWRAP_CIPHER_SWRST] = 0x140, 266 + [PWRAP_DCM_EN] = 0x144, 267 + [PWRAP_DCM_DBC_PRD] = 0x148, 268 + }; 269 + 270 + static int mt8135_regs[] = { 271 + [PWRAP_MUX_SEL] = 0x0, 272 + [PWRAP_WRAP_EN] = 0x4, 273 + [PWRAP_DIO_EN] = 0x8, 274 + [PWRAP_SIDLY] = 0xc, 275 + [PWRAP_CSHEXT] = 0x10, 276 + [PWRAP_CSHEXT_WRITE] = 0x14, 277 + [PWRAP_CSHEXT_READ] = 0x18, 278 + [PWRAP_CSLEXT_START] = 0x1c, 279 + [PWRAP_CSLEXT_END] = 0x20, 280 + [PWRAP_STAUPD_PRD] = 0x24, 281 + [PWRAP_STAUPD_GRPEN] = 0x28, 282 + [PWRAP_STAUPD_MAN_TRIG] = 0x2c, 283 + [PWRAP_STAUPD_STA] = 0x30, 284 + [PWRAP_EVENT_IN_EN] = 0x34, 285 + [PWRAP_EVENT_DST_EN] = 0x38, 286 + [PWRAP_WRAP_STA] = 0x3c, 287 + [PWRAP_RRARB_INIT] = 0x40, 288 + [PWRAP_RRARB_EN] = 0x44, 289 + [PWRAP_RRARB_STA0] = 0x48, 290 + [PWRAP_RRARB_STA1] = 0x4c, 291 + [PWRAP_HARB_INIT] = 0x50, 292 + [PWRAP_HARB_HPRIO] = 0x54, 293 + [PWRAP_HIPRIO_ARB_EN] = 0x58, 294 + [PWRAP_HARB_STA0] = 0x5c, 295 + [PWRAP_HARB_STA1] = 0x60, 296 + [PWRAP_MAN_EN] = 0x64, 297 + [PWRAP_MAN_CMD] = 0x68, 298 + [PWRAP_MAN_RDATA] = 0x6c, 299 + [PWRAP_MAN_VLDCLR] = 0x70, 300 + [PWRAP_WACS0_EN] = 0x74, 301 + [PWRAP_INIT_DONE0] = 0x78, 302 + [PWRAP_WACS0_CMD] = 0x7c, 303 + [PWRAP_WACS0_RDATA] = 0x80, 
304 + [PWRAP_WACS0_VLDCLR] = 0x84, 305 + [PWRAP_WACS1_EN] = 0x88, 306 + [PWRAP_INIT_DONE1] = 0x8c, 307 + [PWRAP_WACS1_CMD] = 0x90, 308 + [PWRAP_WACS1_RDATA] = 0x94, 309 + [PWRAP_WACS1_VLDCLR] = 0x98, 310 + [PWRAP_WACS2_EN] = 0x9c, 311 + [PWRAP_INIT_DONE2] = 0xa0, 312 + [PWRAP_WACS2_CMD] = 0xa4, 313 + [PWRAP_WACS2_RDATA] = 0xa8, 314 + [PWRAP_WACS2_VLDCLR] = 0xac, 315 + [PWRAP_INT_EN] = 0xb0, 316 + [PWRAP_INT_FLG_RAW] = 0xb4, 317 + [PWRAP_INT_FLG] = 0xb8, 318 + [PWRAP_INT_CLR] = 0xbc, 319 + [PWRAP_SIG_ADR] = 0xc0, 320 + [PWRAP_SIG_MODE] = 0xc4, 321 + [PWRAP_SIG_VALUE] = 0xc8, 322 + [PWRAP_SIG_ERRVAL] = 0xcc, 323 + [PWRAP_CRC_EN] = 0xd0, 324 + [PWRAP_EVENT_STA] = 0xd4, 325 + [PWRAP_EVENT_STACLR] = 0xd8, 326 + [PWRAP_TIMER_EN] = 0xdc, 327 + [PWRAP_TIMER_STA] = 0xe0, 328 + [PWRAP_WDT_UNIT] = 0xe4, 329 + [PWRAP_WDT_SRC_EN] = 0xe8, 330 + [PWRAP_WDT_FLG] = 0xec, 331 + [PWRAP_DEBUG_INT_SEL] = 0xf0, 332 + [PWRAP_CIPHER_KEY_SEL] = 0x134, 333 + [PWRAP_CIPHER_IV_SEL] = 0x138, 334 + [PWRAP_CIPHER_LOAD] = 0x13c, 335 + [PWRAP_CIPHER_START] = 0x140, 336 + [PWRAP_CIPHER_RDY] = 0x144, 337 + [PWRAP_CIPHER_MODE] = 0x148, 338 + [PWRAP_CIPHER_SWRST] = 0x14c, 339 + [PWRAP_DCM_EN] = 0x15c, 340 + [PWRAP_DCM_DBC_PRD] = 0x160, 341 + }; 342 + 343 + enum pwrap_type { 344 + PWRAP_MT8135, 345 + PWRAP_MT8173, 346 + }; 347 + 348 + struct pmic_wrapper_type { 349 + int *regs; 350 + enum pwrap_type type; 351 + u32 arb_en_all; 352 + }; 353 + 354 + static struct pmic_wrapper_type pwrap_mt8135 = { 355 + .regs = mt8135_regs, 356 + .type = PWRAP_MT8135, 357 + .arb_en_all = 0x1ff, 358 + }; 359 + 360 + static struct pmic_wrapper_type pwrap_mt8173 = { 361 + .regs = mt8173_regs, 362 + .type = PWRAP_MT8173, 363 + .arb_en_all = 0x3f, 364 + }; 365 + 366 + struct pmic_wrapper { 367 + struct device *dev; 368 + void __iomem *base; 369 + struct regmap *regmap; 370 + int *regs; 371 + enum pwrap_type type; 372 + u32 arb_en_all; 373 + struct clk *clk_spi; 374 + struct clk *clk_wrap; 375 + struct reset_control *rstc; 376 
+ 377 + struct reset_control *rstc_bridge; 378 + void __iomem *bridge_base; 379 + }; 380 + 381 + static inline int pwrap_is_mt8135(struct pmic_wrapper *wrp) 382 + { 383 + return wrp->type == PWRAP_MT8135; 384 + } 385 + 386 + static inline int pwrap_is_mt8173(struct pmic_wrapper *wrp) 387 + { 388 + return wrp->type == PWRAP_MT8173; 389 + } 390 + 391 + static u32 pwrap_readl(struct pmic_wrapper *wrp, enum pwrap_regs reg) 392 + { 393 + return readl(wrp->base + wrp->regs[reg]); 394 + } 395 + 396 + static void pwrap_writel(struct pmic_wrapper *wrp, u32 val, enum pwrap_regs reg) 397 + { 398 + writel(val, wrp->base + wrp->regs[reg]); 399 + } 400 + 401 + static bool pwrap_is_fsm_idle(struct pmic_wrapper *wrp) 402 + { 403 + u32 val = pwrap_readl(wrp, PWRAP_WACS2_RDATA); 404 + 405 + return PWRAP_GET_WACS_FSM(val) == PWRAP_WACS_FSM_IDLE; 406 + } 407 + 408 + static bool pwrap_is_fsm_vldclr(struct pmic_wrapper *wrp) 409 + { 410 + u32 val = pwrap_readl(wrp, PWRAP_WACS2_RDATA); 411 + 412 + return PWRAP_GET_WACS_FSM(val) == PWRAP_WACS_FSM_WFVLDCLR; 413 + } 414 + 415 + static bool pwrap_is_sync_idle(struct pmic_wrapper *wrp) 416 + { 417 + return pwrap_readl(wrp, PWRAP_WACS2_RDATA) & PWRAP_STATE_SYNC_IDLE0; 418 + } 419 + 420 + static bool pwrap_is_fsm_idle_and_sync_idle(struct pmic_wrapper *wrp) 421 + { 422 + u32 val = pwrap_readl(wrp, PWRAP_WACS2_RDATA); 423 + 424 + return (PWRAP_GET_WACS_FSM(val) == PWRAP_WACS_FSM_IDLE) && 425 + (val & PWRAP_STATE_SYNC_IDLE0); 426 + } 427 + 428 + static int pwrap_wait_for_state(struct pmic_wrapper *wrp, 429 + bool (*fp)(struct pmic_wrapper *)) 430 + { 431 + unsigned long timeout; 432 + 433 + timeout = jiffies + usecs_to_jiffies(255); 434 + 435 + do { 436 + if (time_after(jiffies, timeout)) 437 + return fp(wrp) ? 
0 : -ETIMEDOUT; 438 + if (fp(wrp)) 439 + return 0; 440 + } while (1); 441 + } 442 + 443 + static int pwrap_write(struct pmic_wrapper *wrp, u32 adr, u32 wdata) 444 + { 445 + int ret; 446 + u32 val; 447 + 448 + val = pwrap_readl(wrp, PWRAP_WACS2_RDATA); 449 + if (PWRAP_GET_WACS_FSM(val) == PWRAP_WACS_FSM_WFVLDCLR) 450 + pwrap_writel(wrp, 1, PWRAP_WACS2_VLDCLR); 451 + 452 + ret = pwrap_wait_for_state(wrp, pwrap_is_fsm_idle); 453 + if (ret) 454 + return ret; 455 + 456 + pwrap_writel(wrp, (1 << 31) | ((adr >> 1) << 16) | wdata, 457 + PWRAP_WACS2_CMD); 458 + 459 + return 0; 460 + } 461 + 462 + static int pwrap_read(struct pmic_wrapper *wrp, u32 adr, u32 *rdata) 463 + { 464 + int ret; 465 + u32 val; 466 + 467 + val = pwrap_readl(wrp, PWRAP_WACS2_RDATA); 468 + if (PWRAP_GET_WACS_FSM(val) == PWRAP_WACS_FSM_WFVLDCLR) 469 + pwrap_writel(wrp, 1, PWRAP_WACS2_VLDCLR); 470 + 471 + ret = pwrap_wait_for_state(wrp, pwrap_is_fsm_idle); 472 + if (ret) 473 + return ret; 474 + 475 + pwrap_writel(wrp, (adr >> 1) << 16, PWRAP_WACS2_CMD); 476 + 477 + ret = pwrap_wait_for_state(wrp, pwrap_is_fsm_vldclr); 478 + if (ret) 479 + return ret; 480 + 481 + *rdata = PWRAP_GET_WACS_RDATA(pwrap_readl(wrp, PWRAP_WACS2_RDATA)); 482 + 483 + return 0; 484 + } 485 + 486 + static int pwrap_regmap_read(void *context, u32 adr, u32 *rdata) 487 + { 488 + return pwrap_read(context, adr, rdata); 489 + } 490 + 491 + static int pwrap_regmap_write(void *context, u32 adr, u32 wdata) 492 + { 493 + return pwrap_write(context, adr, wdata); 494 + } 495 + 496 + static int pwrap_reset_spislave(struct pmic_wrapper *wrp) 497 + { 498 + int ret, i; 499 + 500 + pwrap_writel(wrp, 0, PWRAP_HIPRIO_ARB_EN); 501 + pwrap_writel(wrp, 0, PWRAP_WRAP_EN); 502 + pwrap_writel(wrp, 1, PWRAP_MUX_SEL); 503 + pwrap_writel(wrp, 1, PWRAP_MAN_EN); 504 + pwrap_writel(wrp, 0, PWRAP_DIO_EN); 505 + 506 + pwrap_writel(wrp, PWRAP_MAN_CMD_SPI_WRITE | PWRAP_MAN_CMD_OP_CSL, 507 + PWRAP_MAN_CMD); 508 + pwrap_writel(wrp, PWRAP_MAN_CMD_SPI_WRITE | 
PWRAP_MAN_CMD_OP_OUTS, 509 + PWRAP_MAN_CMD); 510 + pwrap_writel(wrp, PWRAP_MAN_CMD_SPI_WRITE | PWRAP_MAN_CMD_OP_CSH, 511 + PWRAP_MAN_CMD); 512 + 513 + for (i = 0; i < 4; i++) 514 + pwrap_writel(wrp, PWRAP_MAN_CMD_SPI_WRITE | PWRAP_MAN_CMD_OP_OUTS, 515 + PWRAP_MAN_CMD); 516 + 517 + ret = pwrap_wait_for_state(wrp, pwrap_is_sync_idle); 518 + if (ret) { 519 + dev_err(wrp->dev, "%s fail, ret=%d\n", __func__, ret); 520 + return ret; 521 + } 522 + 523 + pwrap_writel(wrp, 0, PWRAP_MAN_EN); 524 + pwrap_writel(wrp, 0, PWRAP_MUX_SEL); 525 + 526 + return 0; 527 + } 528 + 529 + /* 530 + * pwrap_init_sidly - configure serial input delay 531 + * 532 + * This configures the serial input delay. We can configure 0, 2, 4 or 6ns 533 + * delay. Do a read test with all possible values and chose the best delay. 534 + */ 535 + static int pwrap_init_sidly(struct pmic_wrapper *wrp) 536 + { 537 + u32 rdata; 538 + u32 i; 539 + u32 pass = 0; 540 + signed char dly[16] = { 541 + -1, 0, 1, 0, 2, -1, 1, 1, 3, -1, -1, -1, 3, -1, 2, 1 542 + }; 543 + 544 + for (i = 0; i < 4; i++) { 545 + pwrap_writel(wrp, i, PWRAP_SIDLY); 546 + pwrap_read(wrp, PWRAP_DEW_READ_TEST, &rdata); 547 + if (rdata == PWRAP_DEW_READ_TEST_VAL) { 548 + dev_dbg(wrp->dev, "[Read Test] pass, SIDLY=%x\n", i); 549 + pass |= 1 << i; 550 + } 551 + } 552 + 553 + if (dly[pass] < 0) { 554 + dev_err(wrp->dev, "sidly pass range 0x%x not continuous\n", 555 + pass); 556 + return -EIO; 557 + } 558 + 559 + pwrap_writel(wrp, dly[pass], PWRAP_SIDLY); 560 + 561 + return 0; 562 + } 563 + 564 + static int pwrap_init_reg_clock(struct pmic_wrapper *wrp) 565 + { 566 + unsigned long rate_spi; 567 + int ck_mhz; 568 + 569 + rate_spi = clk_get_rate(wrp->clk_spi); 570 + 571 + if (rate_spi > 26000000) 572 + ck_mhz = 26; 573 + else if (rate_spi > 18000000) 574 + ck_mhz = 18; 575 + else 576 + ck_mhz = 0; 577 + 578 + switch (ck_mhz) { 579 + case 18: 580 + if (pwrap_is_mt8135(wrp)) 581 + pwrap_writel(wrp, 0xc, PWRAP_CSHEXT); 582 + pwrap_writel(wrp, 0x4, 
PWRAP_CSHEXT_WRITE); 583 + pwrap_writel(wrp, 0xc, PWRAP_CSHEXT_READ); 584 + pwrap_writel(wrp, 0x0, PWRAP_CSLEXT_START); 585 + pwrap_writel(wrp, 0x0, PWRAP_CSLEXT_END); 586 + break; 587 + case 26: 588 + if (pwrap_is_mt8135(wrp)) 589 + pwrap_writel(wrp, 0x4, PWRAP_CSHEXT); 590 + pwrap_writel(wrp, 0x0, PWRAP_CSHEXT_WRITE); 591 + pwrap_writel(wrp, 0x4, PWRAP_CSHEXT_READ); 592 + pwrap_writel(wrp, 0x0, PWRAP_CSLEXT_START); 593 + pwrap_writel(wrp, 0x0, PWRAP_CSLEXT_END); 594 + break; 595 + case 0: 596 + if (pwrap_is_mt8135(wrp)) 597 + pwrap_writel(wrp, 0xf, PWRAP_CSHEXT); 598 + pwrap_writel(wrp, 0xf, PWRAP_CSHEXT_WRITE); 599 + pwrap_writel(wrp, 0xf, PWRAP_CSHEXT_READ); 600 + pwrap_writel(wrp, 0xf, PWRAP_CSLEXT_START); 601 + pwrap_writel(wrp, 0xf, PWRAP_CSLEXT_END); 602 + break; 603 + default: 604 + return -EINVAL; 605 + } 606 + 607 + return 0; 608 + } 609 + 610 + static bool pwrap_is_cipher_ready(struct pmic_wrapper *wrp) 611 + { 612 + return pwrap_readl(wrp, PWRAP_CIPHER_RDY) & 1; 613 + } 614 + 615 + static bool pwrap_is_pmic_cipher_ready(struct pmic_wrapper *wrp) 616 + { 617 + u32 rdata; 618 + int ret; 619 + 620 + ret = pwrap_read(wrp, PWRAP_DEW_CIPHER_RDY, &rdata); 621 + if (ret) 622 + return 0; 623 + 624 + return rdata == 1; 625 + } 626 + 627 + static int pwrap_init_cipher(struct pmic_wrapper *wrp) 628 + { 629 + int ret; 630 + u32 rdata; 631 + 632 + pwrap_writel(wrp, 0x1, PWRAP_CIPHER_SWRST); 633 + pwrap_writel(wrp, 0x0, PWRAP_CIPHER_SWRST); 634 + pwrap_writel(wrp, 0x1, PWRAP_CIPHER_KEY_SEL); 635 + pwrap_writel(wrp, 0x2, PWRAP_CIPHER_IV_SEL); 636 + 637 + if (pwrap_is_mt8135(wrp)) { 638 + pwrap_writel(wrp, 1, PWRAP_CIPHER_LOAD); 639 + pwrap_writel(wrp, 1, PWRAP_CIPHER_START); 640 + } else { 641 + pwrap_writel(wrp, 1, PWRAP_CIPHER_EN); 642 + } 643 + 644 + /* Config cipher mode @PMIC */ 645 + pwrap_write(wrp, PWRAP_DEW_CIPHER_SWRST, 0x1); 646 + pwrap_write(wrp, PWRAP_DEW_CIPHER_SWRST, 0x0); 647 + pwrap_write(wrp, PWRAP_DEW_CIPHER_KEY_SEL, 0x1); 648 + pwrap_write(wrp, 
PWRAP_DEW_CIPHER_IV_SEL, 0x2); 649 + pwrap_write(wrp, PWRAP_DEW_CIPHER_LOAD, 0x1); 650 + pwrap_write(wrp, PWRAP_DEW_CIPHER_START, 0x1); 651 + 652 + /* wait for cipher data ready@AP */ 653 + ret = pwrap_wait_for_state(wrp, pwrap_is_cipher_ready); 654 + if (ret) { 655 + dev_err(wrp->dev, "cipher data ready@AP fail, ret=%d\n", ret); 656 + return ret; 657 + } 658 + 659 + /* wait for cipher data ready@PMIC */ 660 + ret = pwrap_wait_for_state(wrp, pwrap_is_pmic_cipher_ready); 661 + if (ret) { 662 + dev_err(wrp->dev, "timeout waiting for cipher data ready@PMIC\n"); 663 + return ret; 664 + } 665 + 666 + /* wait for cipher mode idle */ 667 + pwrap_write(wrp, PWRAP_DEW_CIPHER_MODE, 0x1); 668 + ret = pwrap_wait_for_state(wrp, pwrap_is_fsm_idle_and_sync_idle); 669 + if (ret) { 670 + dev_err(wrp->dev, "cipher mode idle fail, ret=%d\n", ret); 671 + return ret; 672 + } 673 + 674 + pwrap_writel(wrp, 1, PWRAP_CIPHER_MODE); 675 + 676 + /* Write Test */ 677 + if (pwrap_write(wrp, PWRAP_DEW_WRITE_TEST, PWRAP_DEW_WRITE_TEST_VAL) || 678 + pwrap_read(wrp, PWRAP_DEW_WRITE_TEST, &rdata) || 679 + (rdata != PWRAP_DEW_WRITE_TEST_VAL)) { 680 + dev_err(wrp->dev, "rdata=0x%04X\n", rdata); 681 + return -EFAULT; 682 + } 683 + 684 + return 0; 685 + } 686 + 687 + static int pwrap_init(struct pmic_wrapper *wrp) 688 + { 689 + int ret; 690 + u32 rdata; 691 + 692 + reset_control_reset(wrp->rstc); 693 + if (wrp->rstc_bridge) 694 + reset_control_reset(wrp->rstc_bridge); 695 + 696 + if (pwrap_is_mt8173(wrp)) { 697 + /* Enable DCM */ 698 + pwrap_writel(wrp, 3, PWRAP_DCM_EN); 699 + pwrap_writel(wrp, 0, PWRAP_DCM_DBC_PRD); 700 + } 701 + 702 + /* Reset SPI slave */ 703 + ret = pwrap_reset_spislave(wrp); 704 + if (ret) 705 + return ret; 706 + 707 + pwrap_writel(wrp, 1, PWRAP_WRAP_EN); 708 + 709 + pwrap_writel(wrp, wrp->arb_en_all, PWRAP_HIPRIO_ARB_EN); 710 + 711 + pwrap_writel(wrp, 1, PWRAP_WACS2_EN); 712 + 713 + ret = pwrap_init_reg_clock(wrp); 714 + if (ret) 715 + return ret; 716 + 717 + /* Setup serial input 
delay */ 718 + ret = pwrap_init_sidly(wrp); 719 + if (ret) 720 + return ret; 721 + 722 + /* Enable dual IO mode */ 723 + pwrap_write(wrp, PWRAP_DEW_DIO_EN, 1); 724 + 725 + /* Check IDLE & INIT_DONE in advance */ 726 + ret = pwrap_wait_for_state(wrp, pwrap_is_fsm_idle_and_sync_idle); 727 + if (ret) { 728 + dev_err(wrp->dev, "%s fail, ret=%d\n", __func__, ret); 729 + return ret; 730 + } 731 + 732 + pwrap_writel(wrp, 1, PWRAP_DIO_EN); 733 + 734 + /* Read Test */ 735 + pwrap_read(wrp, PWRAP_DEW_READ_TEST, &rdata); 736 + if (rdata != PWRAP_DEW_READ_TEST_VAL) { 737 + dev_err(wrp->dev, "Read test failed after switch to DIO mode: 0x%04x != 0x%04x\n", 738 + PWRAP_DEW_READ_TEST_VAL, rdata); 739 + return -EFAULT; 740 + } 741 + 742 + /* Enable encryption */ 743 + ret = pwrap_init_cipher(wrp); 744 + if (ret) 745 + return ret; 746 + 747 + /* Signature checking - using CRC */ 748 + if (pwrap_write(wrp, PWRAP_DEW_CRC_EN, 0x1)) 749 + return -EFAULT; 750 + 751 + pwrap_writel(wrp, 0x1, PWRAP_CRC_EN); 752 + pwrap_writel(wrp, 0x0, PWRAP_SIG_MODE); 753 + pwrap_writel(wrp, PWRAP_DEW_CRC_VAL, PWRAP_SIG_ADR); 754 + pwrap_writel(wrp, wrp->arb_en_all, PWRAP_HIPRIO_ARB_EN); 755 + 756 + if (pwrap_is_mt8135(wrp)) 757 + pwrap_writel(wrp, 0x7, PWRAP_RRARB_EN); 758 + 759 + pwrap_writel(wrp, 0x1, PWRAP_WACS0_EN); 760 + pwrap_writel(wrp, 0x1, PWRAP_WACS1_EN); 761 + pwrap_writel(wrp, 0x1, PWRAP_WACS2_EN); 762 + pwrap_writel(wrp, 0x5, PWRAP_STAUPD_PRD); 763 + pwrap_writel(wrp, 0xff, PWRAP_STAUPD_GRPEN); 764 + pwrap_writel(wrp, 0xf, PWRAP_WDT_UNIT); 765 + pwrap_writel(wrp, 0xffffffff, PWRAP_WDT_SRC_EN); 766 + pwrap_writel(wrp, 0x1, PWRAP_TIMER_EN); 767 + pwrap_writel(wrp, ~((1 << 31) | (1 << 1)), PWRAP_INT_EN); 768 + 769 + if (pwrap_is_mt8135(wrp)) { 770 + /* enable pwrap events and pwrap bridge in AP side */ 771 + pwrap_writel(wrp, 0x1, PWRAP_EVENT_IN_EN); 772 + pwrap_writel(wrp, 0xffff, PWRAP_EVENT_DST_EN); 773 + writel(0x7f, wrp->bridge_base + PWRAP_MT8135_BRIDGE_IORD_ARB_EN); 774 + writel(0x1, 
wrp->bridge_base + PWRAP_MT8135_BRIDGE_WACS3_EN); 775 + writel(0x1, wrp->bridge_base + PWRAP_MT8135_BRIDGE_WACS4_EN); 776 + writel(0x1, wrp->bridge_base + PWRAP_MT8135_BRIDGE_WDT_UNIT); 777 + writel(0xffff, wrp->bridge_base + PWRAP_MT8135_BRIDGE_WDT_SRC_EN); 778 + writel(0x1, wrp->bridge_base + PWRAP_MT8135_BRIDGE_TIMER_EN); 779 + writel(0x7ff, wrp->bridge_base + PWRAP_MT8135_BRIDGE_INT_EN); 780 + 781 + /* enable PMIC event out and sources */ 782 + if (pwrap_write(wrp, PWRAP_DEW_EVENT_OUT_EN, 0x1) || 783 + pwrap_write(wrp, PWRAP_DEW_EVENT_SRC_EN, 0xffff)) { 784 + dev_err(wrp->dev, "enable dewrap fail\n"); 785 + return -EFAULT; 786 + } 787 + } else { 788 + /* PMIC_DEWRAP enables */ 789 + if (pwrap_write(wrp, PWRAP_DEW_EVENT_OUT_EN, 0x1) || 790 + pwrap_write(wrp, PWRAP_DEW_EVENT_SRC_EN, 0xffff)) { 791 + dev_err(wrp->dev, "enable dewrap fail\n"); 792 + return -EFAULT; 793 + } 794 + } 795 + 796 + /* Setup the init done registers */ 797 + pwrap_writel(wrp, 1, PWRAP_INIT_DONE2); 798 + pwrap_writel(wrp, 1, PWRAP_INIT_DONE0); 799 + pwrap_writel(wrp, 1, PWRAP_INIT_DONE1); 800 + 801 + if (pwrap_is_mt8135(wrp)) { 802 + writel(1, wrp->bridge_base + PWRAP_MT8135_BRIDGE_INIT_DONE3); 803 + writel(1, wrp->bridge_base + PWRAP_MT8135_BRIDGE_INIT_DONE4); 804 + } 805 + 806 + return 0; 807 + } 808 + 809 + static irqreturn_t pwrap_interrupt(int irqno, void *dev_id) 810 + { 811 + u32 rdata; 812 + struct pmic_wrapper *wrp = dev_id; 813 + 814 + rdata = pwrap_readl(wrp, PWRAP_INT_FLG); 815 + 816 + dev_err(wrp->dev, "unexpected interrupt int=0x%x\n", rdata); 817 + 818 + pwrap_writel(wrp, 0xffffffff, PWRAP_INT_CLR); 819 + 820 + return IRQ_HANDLED; 821 + } 822 + 823 + static const struct regmap_config pwrap_regmap_config = { 824 + .reg_bits = 16, 825 + .val_bits = 16, 826 + .reg_stride = 2, 827 + .reg_read = pwrap_regmap_read, 828 + .reg_write = pwrap_regmap_write, 829 + .max_register = 0xffff, 830 + }; 831 + 832 + static struct of_device_id of_pwrap_match_tbl[] = { 833 + { 834 + .compatible = 
"mediatek,mt8135-pwrap", 835 + .data = &pwrap_mt8135, 836 + }, { 837 + .compatible = "mediatek,mt8173-pwrap", 838 + .data = &pwrap_mt8173, 839 + }, { 840 + /* sentinel */ 841 + } 842 + }; 843 + MODULE_DEVICE_TABLE(of, of_pwrap_match_tbl); 844 + 845 + static int pwrap_probe(struct platform_device *pdev) 846 + { 847 + int ret, irq; 848 + struct pmic_wrapper *wrp; 849 + struct device_node *np = pdev->dev.of_node; 850 + const struct of_device_id *of_id = 851 + of_match_device(of_pwrap_match_tbl, &pdev->dev); 852 + const struct pmic_wrapper_type *type; 853 + struct resource *res; 854 + 855 + wrp = devm_kzalloc(&pdev->dev, sizeof(*wrp), GFP_KERNEL); 856 + if (!wrp) 857 + return -ENOMEM; 858 + 859 + platform_set_drvdata(pdev, wrp); 860 + 861 + type = of_id->data; 862 + wrp->regs = type->regs; 863 + wrp->type = type->type; 864 + wrp->arb_en_all = type->arb_en_all; 865 + wrp->dev = &pdev->dev; 866 + 867 + res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "pwrap"); 868 + wrp->base = devm_ioremap_resource(wrp->dev, res); 869 + if (IS_ERR(wrp->base)) 870 + return PTR_ERR(wrp->base); 871 + 872 + wrp->rstc = devm_reset_control_get(wrp->dev, "pwrap"); 873 + if (IS_ERR(wrp->rstc)) { 874 + ret = PTR_ERR(wrp->rstc); 875 + dev_dbg(wrp->dev, "cannot get pwrap reset: %d\n", ret); 876 + return ret; 877 + } 878 + 879 + if (pwrap_is_mt8135(wrp)) { 880 + res = platform_get_resource_byname(pdev, IORESOURCE_MEM, 881 + "pwrap-bridge"); 882 + wrp->bridge_base = devm_ioremap_resource(wrp->dev, res); 883 + if (IS_ERR(wrp->bridge_base)) 884 + return PTR_ERR(wrp->bridge_base); 885 + 886 + wrp->rstc_bridge = devm_reset_control_get(wrp->dev, "pwrap-bridge"); 887 + if (IS_ERR(wrp->rstc_bridge)) { 888 + ret = PTR_ERR(wrp->rstc_bridge); 889 + dev_dbg(wrp->dev, "cannot get pwrap-bridge reset: %d\n", ret); 890 + return ret; 891 + } 892 + } 893 + 894 + wrp->clk_spi = devm_clk_get(wrp->dev, "spi"); 895 + if (IS_ERR(wrp->clk_spi)) { 896 + dev_dbg(wrp->dev, "failed to get clock: %ld\n", 
PTR_ERR(wrp->clk_spi)); 897 + return PTR_ERR(wrp->clk_spi); 898 + } 899 + 900 + wrp->clk_wrap = devm_clk_get(wrp->dev, "wrap"); 901 + if (IS_ERR(wrp->clk_wrap)) { 902 + dev_dbg(wrp->dev, "failed to get clock: %ld\n", PTR_ERR(wrp->clk_wrap)); 903 + return PTR_ERR(wrp->clk_wrap); 904 + } 905 + 906 + ret = clk_prepare_enable(wrp->clk_spi); 907 + if (ret) 908 + return ret; 909 + 910 + ret = clk_prepare_enable(wrp->clk_wrap); 911 + if (ret) 912 + goto err_out1; 913 + 914 + /* Enable internal dynamic clock */ 915 + pwrap_writel(wrp, 1, PWRAP_DCM_EN); 916 + pwrap_writel(wrp, 0, PWRAP_DCM_DBC_PRD); 917 + 918 + /* 919 + * The PMIC could already be initialized by the bootloader. 920 + * Skip initialization here in this case. 921 + */ 922 + if (!pwrap_readl(wrp, PWRAP_INIT_DONE2)) { 923 + ret = pwrap_init(wrp); 924 + if (ret) { 925 + dev_dbg(wrp->dev, "init failed with %d\n", ret); 926 + goto err_out2; 927 + } 928 + } 929 + 930 + if (!(pwrap_readl(wrp, PWRAP_WACS2_RDATA) & PWRAP_STATE_INIT_DONE0)) { 931 + dev_dbg(wrp->dev, "initialization isn't finished\n"); 932 + return -ENODEV; 933 + } 934 + 935 + irq = platform_get_irq(pdev, 0); 936 + ret = devm_request_irq(wrp->dev, irq, pwrap_interrupt, IRQF_TRIGGER_HIGH, 937 + "mt-pmic-pwrap", wrp); 938 + if (ret) 939 + goto err_out2; 940 + 941 + wrp->regmap = devm_regmap_init(wrp->dev, NULL, wrp, &pwrap_regmap_config); 942 + if (IS_ERR(wrp->regmap)) 943 + return PTR_ERR(wrp->regmap); 944 + 945 + ret = of_platform_populate(np, NULL, NULL, wrp->dev); 946 + if (ret) { 947 + dev_dbg(wrp->dev, "failed to create child devices at %s\n", 948 + np->full_name); 949 + goto err_out2; 950 + } 951 + 952 + return 0; 953 + 954 + err_out2: 955 + clk_disable_unprepare(wrp->clk_wrap); 956 + err_out1: 957 + clk_disable_unprepare(wrp->clk_spi); 958 + 959 + return ret; 960 + } 961 + 962 + static struct platform_driver pwrap_drv = { 963 + .driver = { 964 + .name = "mt-pmic-pwrap", 965 + .owner = THIS_MODULE, 966 + .of_match_table = 
of_match_ptr(of_pwrap_match_tbl), 967 + }, 968 + .probe = pwrap_probe, 969 + }; 970 + 971 + module_platform_driver(pwrap_drv); 972 + 973 + MODULE_AUTHOR("Flora Fu, MediaTek"); 974 + MODULE_DESCRIPTION("MediaTek MT8135 PMIC Wrapper Driver"); 975 + MODULE_LICENSE("GPL v2");
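The pwrap driver registers a bus-less regmap: `reg_bits` and `val_bits` are both 16, `reg_stride` is 2, and all accesses funnel through the wrapper's own `.reg_read`/`.reg_write` callbacks rather than a memory-mapped bus. A userspace sketch of the addressing rules that config implies, with a hypothetical array standing in for the PMIC register file behind the wrapper (names and the backing store are illustrative, not the driver's):

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

/* Stand-in for the PMIC register file reached through the wrapper FSM:
 * 16-bit registers at even offsets, matching reg_bits/val_bits = 16 and
 * reg_stride = 2 from pwrap_regmap_config. */
#define PWRAP_MAX_REGISTER 0xffff
static uint16_t pmic_regs[(PWRAP_MAX_REGISTER + 2) / 2];

static int pwrap_check_reg(uint32_t reg)
{
	if (reg > PWRAP_MAX_REGISTER)	/* .max_register bound */
		return -EINVAL;
	if (reg % 2)			/* .reg_stride = 2: odd offsets invalid */
		return -EINVAL;
	return 0;
}

/* Shaped like the driver's .reg_write / .reg_read callbacks. */
static int pwrap_regmap_write(void *ctx, uint32_t reg, uint32_t val)
{
	int ret = pwrap_check_reg(reg);

	if (ret)
		return ret;
	pmic_regs[reg / 2] = (uint16_t)val;	/* val_bits = 16 */
	return 0;
}

static int pwrap_regmap_read(void *ctx, uint32_t reg, uint32_t *val)
{
	int ret = pwrap_check_reg(reg);

	if (ret)
		return ret;
	*val = pmic_regs[reg / 2];
	return 0;
}
```

In the real driver the callbacks run the wrapper state machine over the dedicated PMIC interface; the stride and bounds checks are what regmap enforces from the config.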
+1
drivers/soc/qcom/Kconfig
··· 4 4 config QCOM_GSBI 5 5 tristate "QCOM General Serial Bus Interface" 6 6 depends on ARCH_QCOM 7 + select MFD_SYSCON 7 8 help 8 9 Say y here to enable GSBI support. The GSBI provides control 9 10 functions for connecting the underlying serial UART, SPI, and I2C
+152
drivers/soc/qcom/qcom_gsbi.c
··· 18 18 #include <linux/of.h> 19 19 #include <linux/of_platform.h> 20 20 #include <linux/platform_device.h> 21 + #include <linux/regmap.h> 22 + #include <linux/mfd/syscon.h> 23 + #include <dt-bindings/soc/qcom,gsbi.h> 21 24 22 25 #define GSBI_CTRL_REG 0x0000 23 26 #define GSBI_PROTOCOL_SHIFT 4 27 + #define MAX_GSBI 12 28 + 29 + #define TCSR_ADM_CRCI_BASE 0x70 30 + 31 + struct crci_config { 32 + u32 num_rows; 33 + const u32 (*array)[MAX_GSBI]; 34 + }; 35 + 36 + static const u32 crci_ipq8064[][MAX_GSBI] = { 37 + { 38 + 0x000003, 0x00000c, 0x000030, 0x0000c0, 39 + 0x000300, 0x000c00, 0x003000, 0x00c000, 40 + 0x030000, 0x0c0000, 0x300000, 0xc00000 41 + }, 42 + { 43 + 0x000003, 0x00000c, 0x000030, 0x0000c0, 44 + 0x000300, 0x000c00, 0x003000, 0x00c000, 45 + 0x030000, 0x0c0000, 0x300000, 0xc00000 46 + }, 47 + }; 48 + 49 + static const struct crci_config config_ipq8064 = { 50 + .num_rows = ARRAY_SIZE(crci_ipq8064), 51 + .array = crci_ipq8064, 52 + }; 53 + 54 + static const unsigned int crci_apq8064[][MAX_GSBI] = { 55 + { 56 + 0x001800, 0x006000, 0x000030, 0x0000c0, 57 + 0x000300, 0x000400, 0x000000, 0x000000, 58 + 0x000000, 0x000000, 0x000000, 0x000000 59 + }, 60 + { 61 + 0x000000, 0x000000, 0x000000, 0x000000, 62 + 0x000000, 0x000020, 0x0000c0, 0x000000, 63 + 0x000000, 0x000000, 0x000000, 0x000000 64 + }, 65 + }; 66 + 67 + static const struct crci_config config_apq8064 = { 68 + .num_rows = ARRAY_SIZE(crci_apq8064), 69 + .array = crci_apq8064, 70 + }; 71 + 72 + static const unsigned int crci_msm8960[][MAX_GSBI] = { 73 + { 74 + 0x000003, 0x00000c, 0x000030, 0x0000c0, 75 + 0x000300, 0x000400, 0x000000, 0x000000, 76 + 0x000000, 0x000000, 0x000000, 0x000000 77 + }, 78 + { 79 + 0x000000, 0x000000, 0x000000, 0x000000, 80 + 0x000000, 0x000020, 0x0000c0, 0x000300, 81 + 0x001800, 0x006000, 0x000000, 0x000000 82 + }, 83 + }; 84 + 85 + static const struct crci_config config_msm8960 = { 86 + .num_rows = ARRAY_SIZE(crci_msm8960), 87 + .array = crci_msm8960, 88 + }; 89 + 90 + static 
const unsigned int crci_msm8660[][MAX_GSBI] = { 91 + { /* ADM 0 - B */ 92 + 0x000003, 0x00000c, 0x000030, 0x0000c0, 93 + 0x000300, 0x000c00, 0x003000, 0x00c000, 94 + 0x030000, 0x0c0000, 0x300000, 0xc00000 95 + }, 96 + { /* ADM 0 - B */ 97 + 0x000003, 0x00000c, 0x000030, 0x0000c0, 98 + 0x000300, 0x000c00, 0x003000, 0x00c000, 99 + 0x030000, 0x0c0000, 0x300000, 0xc00000 100 + }, 101 + { /* ADM 1 - A */ 102 + 0x000003, 0x00000c, 0x000030, 0x0000c0, 103 + 0x000300, 0x000c00, 0x003000, 0x00c000, 104 + 0x030000, 0x0c0000, 0x300000, 0xc00000 105 + }, 106 + { /* ADM 1 - B */ 107 + 0x000003, 0x00000c, 0x000030, 0x0000c0, 108 + 0x000300, 0x000c00, 0x003000, 0x00c000, 109 + 0x030000, 0x0c0000, 0x300000, 0xc00000 110 + }, 111 + }; 112 + 113 + static const struct crci_config config_msm8660 = { 114 + .num_rows = ARRAY_SIZE(crci_msm8660), 115 + .array = crci_msm8660, 116 + }; 24 117 25 118 struct gsbi_info { 26 119 struct clk *hclk; 27 120 u32 mode; 28 121 u32 crci; 122 + struct regmap *tcsr; 123 + }; 124 + 125 + static const struct of_device_id tcsr_dt_match[] = { 126 + { .compatible = "qcom,tcsr-ipq8064", .data = &config_ipq8064}, 127 + { .compatible = "qcom,tcsr-apq8064", .data = &config_apq8064}, 128 + { .compatible = "qcom,tcsr-msm8960", .data = &config_msm8960}, 129 + { .compatible = "qcom,tcsr-msm8660", .data = &config_msm8660}, 130 + { }, 29 131 }; 30 132 31 133 static int gsbi_probe(struct platform_device *pdev) 32 134 { 33 135 struct device_node *node = pdev->dev.of_node; 136 + struct device_node *tcsr_node; 137 + const struct of_device_id *match; 34 138 struct resource *res; 35 139 void __iomem *base; 36 140 struct gsbi_info *gsbi; 141 + int i; 142 + u32 mask, gsbi_num; 143 + const struct crci_config *config = NULL; 37 144 38 145 gsbi = devm_kzalloc(&pdev->dev, sizeof(*gsbi), GFP_KERNEL); 39 146 ··· 151 44 base = devm_ioremap_resource(&pdev->dev, res); 152 45 if (IS_ERR(base)) 153 46 return PTR_ERR(base); 47 + 48 + /* get the tcsr node and setup the config and regmap */ 
49 + gsbi->tcsr = syscon_regmap_lookup_by_phandle(node, "syscon-tcsr"); 50 + 51 + if (!IS_ERR(gsbi->tcsr)) { 52 + tcsr_node = of_parse_phandle(node, "syscon-tcsr", 0); 53 + if (tcsr_node) { 54 + match = of_match_node(tcsr_dt_match, tcsr_node); 55 + if (match) 56 + config = match->data; 57 + else 58 + dev_warn(&pdev->dev, "no matching TCSR\n"); 59 + 60 + of_node_put(tcsr_node); 61 + } 62 + } 63 + 64 + if (of_property_read_u32(node, "cell-index", &gsbi_num)) { 65 + dev_err(&pdev->dev, "missing cell-index\n"); 66 + return -EINVAL; 67 + } 68 + 69 + if (gsbi_num < 1 || gsbi_num > MAX_GSBI) { 70 + dev_err(&pdev->dev, "invalid cell-index\n"); 71 + return -EINVAL; 72 + } 154 73 155 74 if (of_property_read_u32(node, "qcom,mode", &gsbi->mode)) { 156 75 dev_err(&pdev->dev, "missing mode configuration\n"); ··· 196 63 197 64 writel_relaxed((gsbi->mode << GSBI_PROTOCOL_SHIFT) | gsbi->crci, 198 65 base + GSBI_CTRL_REG); 66 + 67 + /* 68 + * modify tcsr to reflect mode and ADM CRCI mux 69 + * Each gsbi contains a pair of bits, one for RX and one for TX 70 + * SPI mode requires both bits cleared, otherwise they are set 71 + */ 72 + if (config) { 73 + for (i = 0; i < config->num_rows; i++) { 74 + mask = config->array[i][gsbi_num - 1]; 75 + 76 + if (gsbi->mode == GSBI_PROT_SPI) 77 + regmap_update_bits(gsbi->tcsr, 78 + TCSR_ADM_CRCI_BASE + 4 * i, mask, 0); 79 + else 80 + regmap_update_bits(gsbi->tcsr, 81 + TCSR_ADM_CRCI_BASE + 4 * i, mask, mask); 82 + 83 + } 84 + } 199 85 200 86 /* make sure the gsbi control write is not reordered */ 201 87 wmb();
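The TCSR loop above is one `regmap_update_bits()` per ADM row: each GSBI owns a pair of bits (RX and TX) in the row's mask, cleared for SPI mode and set for everything else. A sketch of that read-modify-write, assuming `regmap_update_bits()` reduces to masked RMW and with the `GSBI_PROT_SPI` value taken as 3 (the real constant comes from `dt-bindings/soc/qcom,gsbi.h`):

```c
#include <assert.h>
#include <stdint.h>

#define GSBI_PROT_SPI 3	/* assumed value; defined in dt-bindings/soc/qcom,gsbi.h */

/* Masked read-modify-write, the core of regmap_update_bits(). */
static uint32_t update_bits(uint32_t reg, uint32_t mask, uint32_t val)
{
	return (reg & ~mask) | (val & mask);
}

/* One TCSR ADM CRCI row: clear the GSBI's bit pair for SPI, set it for
 * any other protocol, leaving the other GSBIs' bits untouched. */
static uint32_t tcsr_adm_crci(uint32_t reg, uint32_t mask, uint32_t mode)
{
	if (mode == GSBI_PROT_SPI)
		return update_bits(reg, mask, 0);
	return update_bits(reg, mask, mask);
}
```

With the ipq8064 table, GSBI1's mask in row 0 is 0x000003, so an SPI GSBI1 clears only the low two bits of `TCSR_ADM_CRCI_BASE + 0`.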
+1 -1
drivers/watchdog/Kconfig
··· 154 154 155 155 config AT91RM9200_WATCHDOG 156 156 tristate "AT91RM9200 watchdog" 157 - depends on SOC_AT91RM9200 157 + depends on SOC_AT91RM9200 && MFD_SYSCON 158 158 help 159 159 Watchdog timer embedded into AT91RM9200 chips. This will reboot your 160 160 system when the timeout is reached.
+54 -7
drivers/watchdog/at91rm9200_wdt.c
··· 12 12 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 13 13 14 14 #include <linux/bitops.h> 15 + #include <linux/delay.h> 15 16 #include <linux/errno.h> 16 17 #include <linux/fs.h> 17 18 #include <linux/init.h> 18 19 #include <linux/io.h> 19 20 #include <linux/kernel.h> 21 + #include <linux/mfd/syscon.h> 22 + #include <linux/mfd/syscon/atmel-st.h> 20 23 #include <linux/miscdevice.h> 21 24 #include <linux/module.h> 22 25 #include <linux/moduleparam.h> 23 26 #include <linux/platform_device.h> 27 + #include <linux/reboot.h> 28 + #include <linux/regmap.h> 24 29 #include <linux/types.h> 25 30 #include <linux/watchdog.h> 26 31 #include <linux/uaccess.h> 27 32 #include <linux/of.h> 28 33 #include <linux/of_device.h> 29 - #include <mach/at91_st.h> 30 34 31 35 #define WDT_DEFAULT_TIME 5 /* seconds */ 32 36 #define WDT_MAX_TIME 256 /* seconds */ 33 37 34 38 static int wdt_time = WDT_DEFAULT_TIME; 35 39 static bool nowayout = WATCHDOG_NOWAYOUT; 40 + static struct regmap *regmap_st; 36 41 37 42 module_param(wdt_time, int, 0); 38 43 MODULE_PARM_DESC(wdt_time, "Watchdog time in seconds. (default=" ··· 55 50 56 51 /* ......................................................................... */ 57 52 53 + static int at91rm9200_restart(struct notifier_block *this, 54 + unsigned long mode, void *cmd) 55 + { 56 + /* 57 + * Perform a hardware reset with the use of the Watchdog timer. 58 + */ 59 + regmap_write(regmap_st, AT91_ST_WDMR, 60 + AT91_ST_RSTEN | AT91_ST_EXTEN | 1); 61 + regmap_write(regmap_st, AT91_ST_CR, AT91_ST_WDRST); 62 + 63 + mdelay(2000); 64 + 65 + pr_emerg("Unable to restart system\n"); 66 + return NOTIFY_DONE; 67 + } 68 + 69 + static struct notifier_block at91rm9200_restart_nb = { 70 + .notifier_call = at91rm9200_restart, 71 + .priority = 192, 72 + }; 73 + 58 74 /* 59 75 * Disable the watchdog. 
60 76 */ 61 77 static inline void at91_wdt_stop(void) 62 78 { 63 - at91_st_write(AT91_ST_WDMR, AT91_ST_EXTEN); 79 + regmap_write(regmap_st, AT91_ST_WDMR, AT91_ST_EXTEN); 64 80 } 65 81 66 82 /* ··· 89 63 */ 90 64 static inline void at91_wdt_start(void) 91 65 { 92 - at91_st_write(AT91_ST_WDMR, AT91_ST_EXTEN | AT91_ST_RSTEN | 66 + regmap_write(regmap_st, AT91_ST_WDMR, AT91_ST_EXTEN | AT91_ST_RSTEN | 93 67 (((65536 * wdt_time) >> 8) & AT91_ST_WDV)); 94 - at91_st_write(AT91_ST_CR, AT91_ST_WDRST); 68 + regmap_write(regmap_st, AT91_ST_CR, AT91_ST_WDRST); 95 69 } 96 70 97 71 /* ··· 99 73 */ 100 74 static inline void at91_wdt_reload(void) 101 75 { 102 - at91_st_write(AT91_ST_CR, AT91_ST_WDRST); 76 + regmap_write(regmap_st, AT91_ST_CR, AT91_ST_WDRST); 103 77 } 104 78 105 79 /* ......................................................................... */ ··· 229 203 230 204 static int at91wdt_probe(struct platform_device *pdev) 231 205 { 206 + struct device *dev = &pdev->dev; 207 + struct device *parent; 232 208 int res; 233 209 234 210 if (at91wdt_miscdev.parent) 235 211 return -EBUSY; 236 212 at91wdt_miscdev.parent = &pdev->dev; 237 213 214 + parent = dev->parent; 215 + if (!parent) { 216 + dev_err(dev, "no parent\n"); 217 + return -ENODEV; 218 + } 219 + 220 + regmap_st = syscon_node_to_regmap(parent->of_node); 221 + if (!regmap_st) 222 + return -ENODEV; 223 + 238 224 res = misc_register(&at91wdt_miscdev); 239 225 if (res) 240 226 return res; 227 + 228 + res = register_restart_handler(&at91rm9200_restart_nb); 229 + if (res) 230 + dev_warn(dev, "failed to register restart handler\n"); 241 231 242 232 pr_info("AT91 Watchdog Timer enabled (%d seconds%s)\n", 243 233 wdt_time, nowayout ? 
", nowayout" : ""); ··· 262 220 263 221 static int at91wdt_remove(struct platform_device *pdev) 264 222 { 223 + struct device *dev = &pdev->dev; 265 224 int res; 225 + 226 + res = unregister_restart_handler(&at91rm9200_restart_nb); 227 + if (res) 228 + dev_warn(dev, "failed to unregister restart handler\n"); 266 229 267 230 res = misc_deregister(&at91wdt_miscdev); 268 231 if (!res) ··· 314 267 .suspend = at91wdt_suspend, 315 268 .resume = at91wdt_resume, 316 269 .driver = { 317 - .name = "at91_wdt", 270 + .name = "atmel_st_watchdog", 318 271 .of_match_table = at91_wdt_dt_ids, 319 272 }, 320 273 }; ··· 343 296 MODULE_AUTHOR("Andrew Victor"); 344 297 MODULE_DESCRIPTION("Watchdog driver for Atmel AT91RM9200"); 345 298 MODULE_LICENSE("GPL"); 346 - MODULE_ALIAS("platform:at91_wdt"); 299 + MODULE_ALIAS("platform:atmel_st_watchdog");
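`at91_wdt_start()` encodes the timeout as `(65536 * wdt_time) >> 8`, i.e. 256 counts per second, which matches the ST watchdog counter ticking at slow clock / 128 (32.768 kHz / 128 = 256 Hz). A sketch of that WDMR computation using the bit definitions from the new syscon header:

```c
#include <assert.h>
#include <stdint.h>

#define AT91_ST_WDV	0xffff		/* Watchdog Counter Value */
#define AT91_ST_RSTEN	(1u << 16)	/* Reset Enable */
#define AT91_ST_EXTEN	(1u << 17)	/* External Signal Assertion Enable */

/* WDMR value as at91_wdt_start() computes it: (65536 * seconds) >> 8
 * is 256 * seconds, one count per 1/256 s watchdog tick. */
static uint32_t at91_wdmr(unsigned int wdt_time)
{
	return AT91_ST_EXTEN | AT91_ST_RSTEN |
	       (((65536u * wdt_time) >> 8) & AT91_ST_WDV);
}
```

The restart handler instead writes WDMR with a counter value of 1 so the watchdog fires almost immediately, then pokes `AT91_ST_CR` with `AT91_ST_WDRST` to restart the (now nearly expired) counter.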
+8 -1
include/linux/arm-cci.h
··· 24 24 #include <linux/errno.h> 25 25 #include <linux/types.h> 26 26 27 + #include <asm/arm-cci.h> 28 + 27 29 struct device_node; 28 30 29 31 #ifdef CONFIG_ARM_CCI 30 32 extern bool cci_probed(void); 33 + #else 34 + static inline bool cci_probed(void) { return false; } 35 + #endif 36 + 37 + #ifdef CONFIG_ARM_CCI400_PORT_CTRL 31 38 extern int cci_ace_get_port(struct device_node *dn); 32 39 extern int cci_disable_port_by_cpu(u64 mpidr); 33 40 extern int __cci_control_port_by_device(struct device_node *dn, bool enable); 34 41 extern int __cci_control_port_by_index(u32 port, bool enable); 35 42 #else 36 - static inline bool cci_probed(void) { return false; } 37 43 static inline int cci_ace_get_port(struct device_node *dn) 38 44 { 39 45 return -ENODEV; ··· 55 49 return -ENODEV; 56 50 } 57 51 #endif 52 + 58 53 #define cci_disable_port_by_device(dev) \ 59 54 __cci_control_port_by_device(dev, false) 60 55 #define cci_enable_port_by_device(dev) \
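After the split, `cci_probed()` stays gated on `CONFIG_ARM_CCI` while the port-control calls move under `CONFIG_ARM_CCI400_PORT_CTRL`, each side with inline stubs so callers compile either way. A compilable sketch of that pattern, with the kbuild `CONFIG_*` symbols stood in by plain preprocessor defines (left undefined here, so the stub branches are what compiles):

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>
#include <stdint.h>

/* Define these to emulate kbuild enabling the options. */
/* #define CONFIG_ARM_CCI */
/* #define CONFIG_ARM_CCI400_PORT_CTRL */

#ifdef CONFIG_ARM_CCI
extern bool cci_probed(void);
#else
/* Stub: with the driver compiled out, probing trivially reports false. */
static inline bool cci_probed(void) { return false; }
#endif

#ifdef CONFIG_ARM_CCI400_PORT_CTRL
extern int __cci_control_port_by_index(uint32_t port, bool enable);
#else
/* Stub: port control is unavailable without the CCI-400 port driver. */
static inline int __cci_control_port_by_index(uint32_t port, bool enable)
{
	return -ENODEV;
}
#endif

/* Convenience wrappers, mirroring the header's macros. */
#define cci_enable_port_by_index(port) \
	__cci_control_port_by_index(port, true)
#define cci_disable_port_by_index(port) \
	__cci_control_port_by_index(port, false)
```

The point of the split is that a platform can build the CCI PMU support without pulling in the secure port-control paths; callers of the port API see `-ENODEV` rather than a link failure.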
+49
include/linux/mfd/syscon/atmel-st.h
··· 1 + /* 2 + * Copyright (C) 2005 Ivan Kokshaysky 3 + * Copyright (C) SAN People 4 + * 5 + * System Timer (ST) - System peripherals registers. 6 + * Based on AT91RM9200 datasheet revision E. 7 + * 8 + * This program is free software; you can redistribute it and/or modify 9 + * it under the terms of the GNU General Public License as published by 10 + * the Free Software Foundation; either version 2 of the License, or 11 + * (at your option) any later version. 12 + */ 13 + 14 + #ifndef _LINUX_MFD_SYSCON_ATMEL_ST_H 15 + #define _LINUX_MFD_SYSCON_ATMEL_ST_H 16 + 17 + #include <linux/bitops.h> 18 + 19 + #define AT91_ST_CR 0x00 /* Control Register */ 20 + #define AT91_ST_WDRST BIT(0) /* Watchdog Timer Restart */ 21 + 22 + #define AT91_ST_PIMR 0x04 /* Period Interval Mode Register */ 23 + #define AT91_ST_PIV 0xffff /* Period Interval Value */ 24 + 25 + #define AT91_ST_WDMR 0x08 /* Watchdog Mode Register */ 26 + #define AT91_ST_WDV 0xffff /* Watchdog Counter Value */ 27 + #define AT91_ST_RSTEN BIT(16) /* Reset Enable */ 28 + #define AT91_ST_EXTEN BIT(17) /* External Signal Assertion Enable */ 29 + 30 + #define AT91_ST_RTMR 0x0c /* Real-time Mode Register */ 31 + #define AT91_ST_RTPRES 0xffff /* Real-time Prescalar Value */ 32 + 33 + #define AT91_ST_SR 0x10 /* Status Register */ 34 + #define AT91_ST_PITS BIT(0) /* Period Interval Timer Status */ 35 + #define AT91_ST_WDOVF BIT(1) /* Watchdog Overflow */ 36 + #define AT91_ST_RTTINC BIT(2) /* Real-time Timer Increment */ 37 + #define AT91_ST_ALMS BIT(3) /* Alarm Status */ 38 + 39 + #define AT91_ST_IER 0x14 /* Interrupt Enable Register */ 40 + #define AT91_ST_IDR 0x18 /* Interrupt Disable Register */ 41 + #define AT91_ST_IMR 0x1c /* Interrupt Mask Register */ 42 + 43 + #define AT91_ST_RTAR 0x20 /* Real-time Alarm Register */ 44 + #define AT91_ST_ALMV 0xfffff /* Alarm Value */ 45 + 46 + #define AT91_ST_CRTR 0x24 /* Current Real-time Register */ 47 + #define AT91_ST_CRTV 0xfffff /* Current Real-Time Value */ 48 + 49 + #endif /* 
_LINUX_MFD_SYSCON_ATMEL_ST_H */
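The WDMR layout the header defines is a 16-bit counter value plus two enable bits above it. A small decode sketch (the struct and helper are illustrative, not part of the header) showing how the fields unpack from a raw register value:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define AT91_ST_WDV	0xffff		/* Watchdog Counter Value */
#define AT91_ST_RSTEN	(1u << 16)	/* Reset Enable */
#define AT91_ST_EXTEN	(1u << 17)	/* External Signal Assertion Enable */

/* Decoded view of the Watchdog Mode Register. */
struct st_wdmr {
	uint16_t wdv;	/* counter value, bits [15:0] */
	bool rsten;	/* internal reset on overflow, bit 16 */
	bool exten;	/* assert external reset signal, bit 17 */
};

static struct st_wdmr st_wdmr_decode(uint32_t reg)
{
	struct st_wdmr m = {
		.wdv = reg & AT91_ST_WDV,
		.rsten = !!(reg & AT91_ST_RSTEN),
		.exten = !!(reg & AT91_ST_EXTEN),
	};
	return m;
}
```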
+2 -1
include/linux/omap-gpmc.h
··· 163 163 164 164 extern void gpmc_cs_write_reg(int cs, int idx, u32 val); 165 165 extern int gpmc_calc_divider(unsigned int sync_clk); 166 - extern int gpmc_cs_set_timings(int cs, const struct gpmc_timings *t); 166 + extern int gpmc_cs_set_timings(int cs, const struct gpmc_timings *t, 167 + const struct gpmc_settings *s); 167 168 extern int gpmc_cs_program_settings(int cs, struct gpmc_settings *p); 168 169 extern int gpmc_cs_request(int cs, unsigned long size, unsigned long *base); 169 170 extern void gpmc_cs_free(int cs);
+28
include/linux/qcom_scm.h
··· 1 + /* Copyright (c) 2010-2014, The Linux Foundation. All rights reserved. 2 + * Copyright (C) 2015 Linaro Ltd. 3 + * 4 + * This program is free software; you can redistribute it and/or modify 5 + * it under the terms of the GNU General Public License version 2 and 6 + * only version 2 as published by the Free Software Foundation. 7 + * 8 + * This program is distributed in the hope that it will be useful, 9 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 10 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 11 + * GNU General Public License for more details. 12 + */ 13 + #ifndef __QCOM_SCM_H 14 + #define __QCOM_SCM_H 15 + 16 + extern int qcom_scm_set_cold_boot_addr(void *entry, const cpumask_t *cpus); 17 + extern int qcom_scm_set_warm_boot_addr(void *entry, const cpumask_t *cpus); 18 + 19 + #define QCOM_SCM_CPU_PWR_DOWN_L2_ON 0x0 20 + #define QCOM_SCM_CPU_PWR_DOWN_L2_OFF 0x1 21 + 22 + extern void qcom_scm_cpu_power_down(u32 flags); 23 + 24 + #define QCOM_SCM_VERSION(major, minor) (((major) << 16) | ((minor) & 0xFF)) 25 + 26 + extern u32 qcom_scm_get_version(void); 27 + 28 + #endif
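`QCOM_SCM_VERSION` packs the major number into the upper half-word and, as written, masks the minor to 8 bits even though the major sits at bit 16. Reproducing the macro with a couple of illustrative unpacking helpers (the helpers are not in the header):

```c
#include <assert.h>
#include <stdint.h>

/* Copied from the header: major in bits [31:16], minor masked to 8 bits. */
#define QCOM_SCM_VERSION(major, minor) (((major) << 16) | ((minor) & 0xFF))

/* Hypothetical unpack helpers for the packed version word. */
static uint32_t scm_major(uint32_t version) { return version >> 16; }
static uint32_t scm_minor(uint32_t version) { return version & 0xffff; }
```

Note the asymmetry: any minor above 0xFF is silently truncated by the `& 0xFF`, so `QCOM_SCM_VERSION(1, 0x1ff)` yields the same word as `QCOM_SCM_VERSION(1, 0xff)`.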